Use Containerlab to emulate open-source routers

Containerlab is a new open-source network emulator that quickly builds network test environments using a devops-style workflow. It provides a command-line interface for orchestrating and managing container-based networking labs and supports containerized router images from the major networking vendors.

More interestingly, Containerlab supports any open-source network operating system that is published as a container image, such as the Free Range Routing (FRR) router. This post will review how Containerlab works with the FRR open-source router.

While working through this example, you will learn about most of Containerlab’s container-based features. Containerlab also supports VM-based network devices so users may run commercial router disk images in network emulation scenarios. I’ll write about building and running VM-based labs in a future post.

While it was initially developed by Nokia engineers, Containerlab is intended to be a vendor-neutral network emulator and, since its first release, the project has accepted contributions from other individuals and companies.

The Containerlab project provides excellent documentation so I don’t need to write a tutorial. But, Containerlab does not yet document all the steps required to build an open-source router lab that starts in a pre-defined state. This post will cover that scenario so I hope it adds something of value.

Install Containerlab

You may install Containerlab using your distribution’s package manager, or you may download and run an install script. You may also install Containerlab manually: because it is a Go application, you only need to copy the application binary to a directory in your system’s path and copy some configuration files to /etc/containerlab.

Prerequisites:

Containerlab runs best on Linux. It works on both Debian and RHEL-based distributions, and can even run in Windows Subsystem for Linux (WSL2). Its main dependency is Docker, so you must install Docker first. I am running an Ubuntu 20.04 system.

$ sudo apt install apt-transport-https ca-certificates
$ sudo apt install -y curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
$ sudo apt update
$ apt-cache policy docker-ce
$ sudo apt install -y docker-ce

Install Containerlab

To install Containerlab from its repository, run the Containerlab install script:

$ bash -c "$(curl -sL https://get-clab.srlinux.dev)"

See the Containerlab installation documentation for other ways to install Containerlab, including manual installation for distributions that do not use Debian or RHEL-based packaging tools.

Containerlab files

The Containerlab installation script copies the Containerlab executable file to /usr/bin and copies lab examples and template files to /etc/containerlab. The latter directory is the most interesting because it contains the lab examples that users can use as models for lab development.

Build a lab using FRR

Containerlab supports commercial containerized router appliances such as Nokia’s SR Linux and Arista’s cEOS, and it takes the specific requirements of each supported device into account. If you wish to use a commercial containerized network operating system that is not listed among the supported device types, you may need to ask the Containerlab developers to add support for your device or, better yet, offer to contribute it to the project.

However, Containerlab should be able to use any open-source network operating system, such as Free Range Routing (FRR), that runs in a Linux container. In this example, I will use the network-multitool container and the FRR container from Docker Hub to create the nodes in my network emulation scenario.

To build a lab, first create a new directory. In that directory, create a Containerlab topology file. You may optionally add any configuration files you wish to mount in the container and, as you will see below, you may need to write some simple shell scripts to ensure all the nodes in the lab start in a predefined state.
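For orientation, here is the complete set of files this post creates; by the end, the lab directory will look like this:

frrlab/
├── frrlab.yml          # Containerlab topology file
├── lab.sh              # starts the lab and configures the PCs
├── PC-interfaces.sh    # configures the PC nodes' interfaces and routes
├── router1/
│   ├── daemons         # FRR daemons file (zebra, ospfd, ldpd enabled)
│   └── frr.conf        # persistent FRR configuration
├── router2/
│   ├── daemons
│   └── frr.conf
└── router3/
    ├── daemons
    └── frr.conf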

Create a Topology file

Containerlab defines lab topologies in topology definition files that use a simple YAML syntax. Look at the topology file examples in the /etc/containerlab/lab-examples directory for inspiration.

Create a directory for the network emulation scenario’s files:

$ mkdir -p ~/Documents/frrlab 
$ cd ~/Documents/frrlab

The lab in this example will consist of three routers connected in a ring topology and each router will have one PC connected to it. You must plan the topology and determine which ports will connect to each other.
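For reference, this is the addressing plan used throughout this post; the subnets match the router and PC configurations applied later:

Link                              Subnet
router1:eth1 <--> router2:eth1    192.168.1.0/24
router1:eth2 <--> router3:eth1    192.168.2.0/24
router2:eth2 <--> router3:eth2    192.168.3.0/24
PC1:eth1     <--> router1:eth3    192.168.11.0/24
PC2:eth1     <--> router2:eth3    192.168.12.0/24
PC3:eth1     <--> router3:eth3    192.168.13.0/24

Loopbacks: router1 10.10.10.1/32, router2 10.10.10.2/32, router3 10.10.10.3/32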

Use your favorite text editor to create a file named frrlab.yml and add the following text to it:

name: frrlab

topology:
  nodes:
    router1:
      kind: linux
      image: frrouting/frr:v7.5.1
      binds:
        - router1/daemons:/etc/frr/daemons
    router2:
      kind: linux
      image: frrouting/frr:v7.5.1
      binds:
        - router2/daemons:/etc/frr/daemons
    router3:
      kind: linux
      image: frrouting/frr:v7.5.1
      binds:
        - router3/daemons:/etc/frr/daemons
    PC1:
      kind: linux
      image: praqma/network-multitool:latest
    PC2:
      kind: linux
      image: praqma/network-multitool:latest
    PC3:
      kind: linux
      image: praqma/network-multitool:latest

  links:
    - endpoints: ["router1:eth1", "router2:eth1"]
    - endpoints: ["router1:eth2", "router3:eth1"]
    - endpoints: ["router2:eth2", "router3:eth2"]
    - endpoints: ["PC1:eth1", "router1:eth3"]
    - endpoints: ["PC2:eth1", "router2:eth3"]
    - endpoints: ["PC3:eth1", "router3:eth3"]

The Containerlab topology file format is mostly self-explanatory. The file starts with the name of the lab, then defines each device and, finally, the links between devices. If you wish to run more than one lab at the same time, you must give each lab a different name in its topology file. You can also see that the file mounts a daemons configuration file into each router. We will create those files next.

Add configuration files

The FRR network operating system must have a copy of the daemons file in its /etc/frr directory or FRR will not start. As you saw above, Containerlab makes it easy to specify which files to mount into each container.

Each router needs its own copies of the configuration files. Make separate directories for each router:

$ mkdir router1
$ mkdir router2
$ mkdir router3

Copy the standard FRR daemons config file from the FRR documentation to the frrlab/router1 directory. Edit the file:

$ vi router1/daemons

Change zebra, ospfd, and ldpd to “yes”. The new frrlab/router1/daemons file will look like the listing below:

zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=yes
nhrpd=no
eigrpd=no
babeld=no
sharpd=no
staticd=no
pbrd=no
bfdd=no
fabricd=no

vtysh_enable=yes
zebra_options=" -s 90000000 --daemon -A 127.0.0.1"
bgpd_options="   --daemon -A 127.0.0.1"
ospfd_options="  --daemon -A 127.0.0.1"
ospf6d_options=" --daemon -A ::1"
ripd_options="   --daemon -A 127.0.0.1"
ripngd_options=" --daemon -A ::1"
isisd_options="  --daemon -A 127.0.0.1"
pimd_options="  --daemon -A 127.0.0.1"
ldpd_options="  --daemon -A 127.0.0.1"
nhrpd_options="  --daemon -A 127.0.0.1"
eigrpd_options="  --daemon -A 127.0.0.1"
babeld_options="  --daemon -A 127.0.0.1"
sharpd_options="  --daemon -A 127.0.0.1"
staticd_options="  --daemon -A 127.0.0.1"
pbrd_options="  --daemon -A 127.0.0.1"
bfdd_options="  --daemon -A 127.0.0.1"
fabricd_options="  --daemon -A 127.0.0.1"

Save the file and copy it to the other router folders so each router has its own copy.

$ cp router1/daemons router2/daemons
$ cp router1/daemons router3/daemons

Start the lab

To start a Containerlab network emulation, run the clab deploy command with the new frrlab topology file. Containerlab will download the Docker images used to create the PCs and routers, start containers based on those images, and connect them together.

Since we are using containers from Docker Hub, we first need to log in to Docker Hub.

$ sudo docker login

Enter your Docker Hub user ID and password.

Now, run the Containerlab command:

$ sudo clab deploy --topo frrlab.yml

Containerlab outputs logs to the terminal while it sets up the lab. If you have any errors in your configuration file, Containerlab outputs descriptive error messages. The listing below shows a normal lab setup, based on the frrlab topology.

INFO[0000] Parsing & checking topology file: frrlab.yml
INFO[0000] Pulling docker.io/praqma/network-multitool:latest Docker image
INFO[0009] Done pulling docker.io/praqma/network-multitool:latest
INFO[0009] Pulling docker.io/frrouting/frr:v7.5.1 Docker image
INFO[0032] Done pulling docker.io/frrouting/frr:v7.5.1
INFO[0032] Creating lab directory: /home/brian/Documents/frrlab/clab-frrlab
INFO[0032] Creating docker network: Name='clab', IPv4Subnet='172.20.20.0/24', IPv6Subnet='2001:172:20:20::/64', MTU='1500'
INFO[0000] Creating container: router2
INFO[0000] Creating container: router1
INFO[0000] Creating container: router3
INFO[0000] Creating container: PC1
INFO[0000] Creating container: PC2
INFO[0000] Creating container: PC3
INFO[0006] Creating virtual wire: router1:eth2 <--> router3:eth1
INFO[0006] Creating virtual wire: router2:eth2 <--> router3:eth2
INFO[0006] Creating virtual wire: PC1:eth1 <--> router1:eth3
INFO[0006] Creating virtual wire: router1:eth1 <--> router2:eth1
INFO[0006] Creating virtual wire: PC2:eth1 <--> router2:eth3
INFO[0006] Creating virtual wire: PC3:eth1 <--> router3:eth3
+---+---------------------+--------------+---------------------------------+-------+-------+---------+----------------+----------------------+
| # |        Name         | Container ID |              Image              | Kind  | Group |  State  |  IPv4 Address  |     IPv6 Address     |
+---+---------------------+--------------+---------------------------------+-------+-------+---------+----------------+----------------------+
| 1 | clab-frrlab-PC1     | 3be7d5136a58 | praqma/network-multitool:latest | linux |       | running | 172.20.20.4/24 | 2001:172:20:20::4/64 |
| 2 | clab-frrlab-PC2     | 447d4a3fd09d | praqma/network-multitool:latest | linux |       | running | 172.20.20.5/24 | 2001:172:20:20::5/64 |
| 3 | clab-frrlab-PC3     | 146915d85bfe | praqma/network-multitool:latest | linux |       | running | 172.20.20.6/24 | 2001:172:20:20::6/64 |
| 4 | clab-frrlab-router1 | fa4beabef9e4 | frrouting/frr:v7.5.1            | linux |       | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
| 5 | clab-frrlab-router2 | c65b32cc2b46 | frrouting/frr:v7.5.1            | linux |       | running | 172.20.20.7/24 | 2001:172:20:20::7/64 |
| 6 | clab-frrlab-router3 | c992143448f7 | frrouting/frr:v7.5.1            | linux |       | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
+---+---------------------+--------------+---------------------------------+-------+-------+---------+----------------+----------------------+

Containerlab outputs a table containing information about the running lab. You can get the same information table later by running the sudo clab inspect --name frrlab command.

In the table, you see each node has an IPv4 address on the management network. If your network nodes ran an SSH server, you would be able to connect to them via SSH. However, the containers used in this example are both based on Alpine Linux and do not have openssh-server installed, so we will connect to each node using Docker. If you want lab users to have a more realistic experience, you could build new containers, based on the frrouting and network-multitool containers, that also include openssh-server.
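For example, a derived FRR image that adds an SSH server might start from a Dockerfile along these lines. This is an untested sketch: the apk package name assumes the image is Alpine-based, and you would still need to arrange for sshd to start alongside FRR's own startup process:

FROM frrouting/frr:v7.5.1
# Add an SSH server and generate host keys (illustrative only)
RUN apk add --no-cache openssh-server \
    && ssh-keygen -A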

Configure network nodes

Currently, the nodes are running but the network is not configured. To configure the network, log into each node and run its native configuration commands, either in the shell (the ash shell in Alpine Linux), or in its router CLI (vtysh in FRR).

To configure PC1, run Docker to execute a new shell on the container, clab-frrlab-PC1.

$ sudo docker exec -it clab-frrlab-PC1 /bin/ash

Based on the network plan we created when we designed this network, configure PC1’s eth1 interface with an IP address and static routes to the external data networks.

/ # ip addr add 192.168.11.2/24 dev eth1
/ # ip route add 192.168.0.0/16 via 192.168.11.1 dev eth1
/ # ip route add 10.10.10.0/24 via 192.168.11.1 dev eth1
/ # exit

Configure PC2 in a similar way:

$ sudo docker exec -it clab-frrlab-PC2 /bin/ash
/ # ip addr add 192.168.12.2/24 dev eth1
/ # ip route add 192.168.0.0/16 via 192.168.12.1 dev eth1
/ # ip route add 10.10.10.0/24 via 192.168.12.1 dev eth1
/ # exit

Configure PC3:

$ sudo docker exec -it clab-frrlab-PC3 /bin/ash
/ # ip addr add 192.168.13.2/24 dev eth1
/ # ip route add 192.168.0.0/16 via 192.168.13.1 dev eth1
/ # ip route add 10.10.10.0/24 via 192.168.13.1 dev eth1
/ # exit

Configure Router1 by running vtysh in the Docker container clab-frrlab-router1.

$ sudo docker exec -it clab-frrlab-router1 vtysh

Enter the following FRR CLI commands to configure interfaces eth1, eth2, and eth3 with IP addresses that match the network design.

configure terminal 
service integrated-vtysh-config
interface eth1
 ip address 192.168.1.1/24
 exit
interface eth2
 ip address 192.168.2.1/24
 exit
interface eth3
 ip address 192.168.11.1/24
 exit
interface lo
 ip address 10.10.10.1/32
 exit
exit
write
exit

Configure Router2 in a similar way:

$ sudo docker exec -it clab-frrlab-router2 vtysh
configure terminal 
service integrated-vtysh-config
interface eth1
 ip address 192.168.1.2/24
 exit
interface eth2
 ip address 192.168.3.1/24
 exit
interface eth3
 ip address 192.168.12.1/24
 exit
interface lo
 ip address 10.10.10.2/32
 exit
exit
write
exit

Configure Router3:

$ sudo docker exec -it clab-frrlab-router3 vtysh
configure terminal 
service integrated-vtysh-config
interface eth1
 ip address 192.168.2.2/24
 exit
interface eth2
 ip address 192.168.3.2/24
 exit
interface eth3
 ip address 192.168.13.1/24
 exit
interface lo
 ip address 10.10.10.3/32
 exit
exit
write
exit

Some quick tests

After configuring the interfaces on each node, you should be able to ping from PC1 to any IP address configured on Router1, but not to interfaces on other nodes.

$ sudo docker exec -it clab-frrlab-PC1 /bin/ash
/ # ping -c1 192.168.11.1
PING 192.168.11.1 (192.168.11.1) 56(84) bytes of data.
64 bytes from 192.168.11.1: icmp_seq=1 ttl=64 time=0.066 ms

--- 192.168.11.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
/ #
/ # ping -c1 192.168.13.2
PING 192.168.13.2 (192.168.13.2) 56(84) bytes of data.

--- 192.168.13.2 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

/ # 
/ # exit

Add OSPF

So that we can reach all networks in this example, set up a dynamic routing protocol on the FRR routers. In this example, we will set up a simple OSPF area for all networks connected to the routers.

Connect to vtysh on Router1:

$ sudo docker exec -it clab-frrlab-router1 vtysh

Add a simple OSPF configuration to Router1:

configure terminal 
router ospf
 passive-interface eth3
 passive-interface lo
 network 192.168.1.0/24 area 0.0.0.0
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.11.0/24 area 0.0.0.0
 exit
exit
write
exit

Configure Router2 in a similar way.

Connect to vtysh on Router2:

$ sudo docker exec -it clab-frrlab-router2 vtysh

Configure OSPF:

configure terminal 
router ospf
 passive-interface eth3
 network 192.168.1.0/24 area 0.0.0.0
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.12.0/24 area 0.0.0.0
 exit
exit
write
exit

Connect to vtysh on Router3:

$ sudo docker exec -it clab-frrlab-router3 vtysh

Configure OSPF:

configure terminal 
router ospf
 passive-interface eth3
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.13.0/24 area 0.0.0.0
 exit
exit
write
exit
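Before testing connectivity, you can check that the OSPF adjacencies formed. From any router’s vtysh prompt, run the standard FRR show commands:

show ip ospf neighbor
show ip route ospf

Each of the other two routers should appear in the neighbor table, and the routes learned from OSPF, marked with an O, should cover the remote LAN and loopback prefixes.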

OSPF testing

Now, PC1 should be able to ping any interface on any network node. Run the ping command on PC1 to try to reach PC3 over the network.

$ sudo docker exec clab-frrlab-PC1 ping -c1 192.168.13.2
PING 192.168.13.2 (192.168.13.2) 56(84) bytes of data.
64 bytes from 192.168.13.2: icmp_seq=1 ttl=62 time=0.127 ms

--- 192.168.13.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms

A traceroute shows that the packets pass from PC1 to Router1, then to Router3, then to PC3:

$ sudo docker exec clab-frrlab-PC1 traceroute 192.168.13.2
traceroute to 192.168.13.2 (192.168.13.2), 30 hops max, 46 byte packets
 1  192.168.11.1 (192.168.11.1)  0.004 ms  0.005 ms  0.004 ms
 2  192.168.2.2 (192.168.2.2)  0.004 ms  0.005 ms  0.005 ms
 3  192.168.13.2 (192.168.13.2)  0.004 ms  0.007 ms  0.004 ms

This shows that OSPF successfully set up the routing tables on the routers so that all nodes on this network can reach each other.

Network defect introduction

To further demonstrate that the network configuration is correct, see what happens when the link between Router1 and Router3 goes down. If everything works correctly, OSPF will detect that the link has failed and reroute traffic between PC1 and PC3 via Router2.

However, Containerlab has no function that lets the user control the network connections between nodes, so you cannot disable a link or introduce link errors using Containerlab commands. In addition, Docker does not manage the Containerlab links between nodes, so we cannot use Docker network commands to disable a link.

Containerlab links are composed of pairs of veth interfaces, which are managed in each node’s network namespace. We must either use Docker to run networking commands inside each container, or use native Linux networking commands to manage the links in each node’s network namespace.

One simple way to interrupt a network link is to run the ip command inside a node to shut down one of its interfaces. For example, to shut down eth2 on Router1:

$ sudo docker exec -d clab-frrlab-router1 ip link set dev eth2 down

Then, run the traceroute command on PC1 and see how the path to PC3 changes:

$ sudo docker exec clab-frrlab-PC1 traceroute 192.168.13.2
traceroute to 192.168.13.2 (192.168.13.2), 30 hops max, 46 byte packets
 1  192.168.11.1 (192.168.11.1)  0.005 ms  0.004 ms  0.004 ms
 2  192.168.1.2 (192.168.1.2)  0.005 ms  0.004 ms  0.002 ms
 3  192.168.3.2 (192.168.3.2)  0.002 ms  0.005 ms  0.002 ms
 4  192.168.13.2 (192.168.13.2)  0.002 ms  0.007 ms  0.011 ms

We see that the packets now travel from PC1 to PC3 via Router1, Router2, and Router3.

Restore the link on Router1:

$ sudo docker exec clab-frrlab-router1 ip link set dev eth2 up

And see that the traceroute between PC1 and PC3 goes back to its original path.

$ sudo docker exec clab-frrlab-PC1 traceroute 192.168.13.2
traceroute to 192.168.13.2 (192.168.13.2), 30 hops max, 46 byte packets
 1  192.168.11.1 (192.168.11.1)  0.004 ms  0.005 ms  0.003 ms
 2  192.168.2.2 (192.168.2.2)  0.004 ms  0.004 ms  0.002 ms
 3  192.168.13.2 (192.168.13.2)  0.002 ms  0.005 ms  0.003 ms

Links can also be managed with ip commands executed on the host system. Each node runs in its own network namespace, which has the same name as its container. To bring down a link on Router1, we first list all the links in the namespace clab-frrlab-router1:

$ sudo ip netns exec clab-frrlab-router1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
91: eth0@if92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:14:14:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
106: eth2@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 16:36:c6:ca:4e:77 brd ff:ff:ff:ff:ff:ff link-netns clab-frrlab-router3
107: eth3@if108: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue state UP mode DEFAULT group default 
    link/ether f2:4e:6d:f5:e9:01 brd ff:ff:ff:ff:ff:ff link-netns clab-frrlab-PC1
114: eth1@if113: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 42:ca:0d:5c:15:3c brd ff:ff:ff:ff:ff:ff link-netns clab-frrlab-router2
$

We see that device eth2 is attached to the network namespace clab-frrlab-router1. To bring down device eth2 in clab-frrlab-router1, run the following command:

$ sudo ip netns exec clab-frrlab-router1 ip link set dev eth2 down

We see that the traceroute from PC1 to PC3 again passes through Router1, Router2, and Router3, just as it did when we disabled Router1’s eth2 link from inside the container.

$ sudo docker exec clab-frrlab-PC1 traceroute 192.168.13.2
traceroute to 192.168.13.2 (192.168.13.2), 30 hops max, 46 byte packets
 1  192.168.11.1 (192.168.11.1)  0.007 ms  0.006 ms  0.005 ms
 2  192.168.1.2 (192.168.1.2)  0.006 ms  0.009 ms  0.006 ms
 3  192.168.3.2 (192.168.3.2)  0.005 ms  0.008 ms  0.004 ms
 4  192.168.13.2 (192.168.13.2)  0.004 ms  0.007 ms  0.004 ms

Then, bring the device back up:

$ sudo ip netns exec clab-frrlab-router1 ip link set dev eth2 up

Then, see that the traceroute from PC1 to PC3 goes back to the normal route, passing through Router1 and Router3.

$ sudo docker exec clab-frrlab-PC1 traceroute 192.168.13.2
traceroute to 192.168.13.2 (192.168.13.2), 30 hops max, 46 byte packets
 1  192.168.11.1 (192.168.11.1)  0.008 ms  0.006 ms  0.003 ms
 2  192.168.3.2 (192.168.3.2)  0.005 ms  0.008 ms  0.005 ms
 3  192.168.13.2 (192.168.13.2)  0.005 ms  0.006 ms  0.005 ms

So, we see that we can affect network behavior using ip commands on the host system.

Stop the network emulation

To stop a Containerlab network, run the clab destroy command using the same topology file you used to deploy the network:

$ sudo clab destroy --topo frrlab.yml

Persistent configuration

Containerlab will import and save configuration files for some kinds of nodes, such as the Nokia SR Linux kind. For generic Linux containers, however, persistence relies on standard Docker mechanisms like volume mounting; Containerlab facilitates this by letting users specify bind mounts in the lab topology file.

Persistent configuration for FRR routers

The routers in this example are based on FRR, which uses the configuration files /etc/frr/daemons and /etc/frr/frr.conf.

Create an frr.conf file for each router and save each file in that router’s directory in the lab folder.

Router1:

Create the configuration file for Router1 and save it in router1/frr.conf.

frr version 7.5.1_git
frr defaults traditional
hostname router1
no ipv6 forwarding
!
interface eth1
 ip address 192.168.1.1/24
!
interface eth2
 ip address 192.168.2.1/24
!
interface eth3
 ip address 192.168.11.1/24
!
interface lo
 ip address 10.10.10.1/32
!
router ospf
 passive-interface eth3
 network 192.168.1.0/24 area 0.0.0.0
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.11.0/24 area 0.0.0.0
!
line vty
!

Router2:

Create the configuration file for Router2 and save it in router2/frr.conf.

frr version 7.5.1_git
frr defaults traditional
hostname router2
no ipv6 forwarding
!
interface eth1
 ip address 192.168.1.2/24
!
interface eth2
 ip address 192.168.3.1/24
!
interface eth3
 ip address 192.168.12.1/24
!
interface lo
 ip address 10.10.10.2/32
!
router ospf
 passive-interface eth3
 network 192.168.1.0/24 area 0.0.0.0
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.12.0/24 area 0.0.0.0
!
line vty
!

Router3:

Create the configuration file for Router3 and save it in router3/frr.conf.

frr version 7.5.1_git
frr defaults traditional
hostname router3
no ipv6 forwarding
!
interface eth1
 ip address 192.168.2.2/24
!
interface eth2
 ip address 192.168.3.2/24
!
interface eth3
 ip address 192.168.13.1/24
!
interface lo
 ip address 10.10.10.3/32
!
router ospf
 passive-interface eth3
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.13.0/24 area 0.0.0.0
!
line vty
!

Modify the topology file

Edit the frrlab.yml file and add the mounts for the frr.conf files for each router:

name: frrlab

topology:
  nodes:
    router1:
      kind: linux
      image: frrouting/frr:v7.5.1
      binds:
        - router1/daemons:/etc/frr/daemons
        - router1/frr.conf:/etc/frr/frr.conf
    router2:
      kind: linux
      image: frrouting/frr:v7.5.1
      binds:
        - router2/daemons:/etc/frr/daemons
        - router2/frr.conf:/etc/frr/frr.conf
    router3:
      kind: linux
      image: frrouting/frr:v7.5.1
      binds:
        - router3/daemons:/etc/frr/daemons
        - router3/frr.conf:/etc/frr/frr.conf
    PC1:
      kind: linux
      image: praqma/network-multitool:latest
    PC2:
      kind: linux
      image: praqma/network-multitool:latest
    PC3:
      kind: linux
      image: praqma/network-multitool:latest

  links:
    - endpoints: ["router1:eth1", "router2:eth1"]
    - endpoints: ["router1:eth2", "router3:eth1"]
    - endpoints: ["router2:eth2", "router3:eth2"]
    - endpoints: ["PC1:eth1", "router1:eth3"]
    - endpoints: ["PC2:eth1", "router2:eth3"]
    - endpoints: ["PC3:eth1", "router3:eth3"]

Persistent configuration for PC network interfaces

To permanently configure network settings on an Alpine Linux system, one would normally save an interfaces configuration file in the /etc/network directory on each PC, or save a startup script in one of the network hook directories such as /etc/network/if-up.d.
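For reference, such an interfaces stanza for PC1 would look like the following. This is hypothetical; as explained next, it would not take effect in these containers:

auto eth1
iface eth1 inet static
    address 192.168.11.2
    netmask 255.255.255.0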

However, Docker containers do not have permission to manage their own networking with initialization scripts. The user must connect to the container’s shell and run ip commands, or configure the container’s network namespace from the host. I think it is easier to work with each container using Docker commands.

To create a consistent initial network state for each PC container, create a script that runs on the host that will configure the PCs’ eth1 interface and set up some static routes.

Create a file named PC-interfaces.sh in the lab directory. The file contents are shown below:

#!/bin/sh
sudo docker exec clab-frrlab-PC1 ip link set eth1 up
sudo docker exec clab-frrlab-PC1 ip addr add 192.168.11.2/24 dev eth1
sudo docker exec clab-frrlab-PC1 ip route add 192.168.0.0/16 via 192.168.11.1 dev eth1
sudo docker exec clab-frrlab-PC1 ip route add 10.10.10.0/24 via 192.168.11.1 dev eth1

sudo docker exec clab-frrlab-PC2 ip link set eth1 up
sudo docker exec clab-frrlab-PC2 ip addr add 192.168.12.2/24 dev eth1
sudo docker exec clab-frrlab-PC2 ip route add 192.168.0.0/16 via 192.168.12.1 dev eth1
sudo docker exec clab-frrlab-PC2 ip route add 10.10.10.0/24 via 192.168.12.1 dev eth1

sudo docker exec clab-frrlab-PC3 ip link set eth1 up
sudo docker exec clab-frrlab-PC3 ip addr add 192.168.13.2/24 dev eth1
sudo docker exec clab-frrlab-PC3 ip route add 192.168.0.0/16 via 192.168.13.1 dev eth1
sudo docker exec clab-frrlab-PC3 ip route add 10.10.10.0/24 via 192.168.13.1 dev eth1

Make the file executable:

$ chmod +x PC-interfaces.sh

After you start this lab using the Containerlab topology file, run the PC-interfaces.sh script to configure the PCs. The routers will get their initial configuration from each one’s mounted frr.conf file.

Create a small script that starts everything. For example, I created an executable script named lab.sh and saved it in the lab directory. The script is shown below:

#!/bin/sh
clab deploy --topo frrlab.yml
./PC-interfaces.sh

Now, when I want to start the FRR lab in a known state, I run the command:

$ sudo ./lab.sh

Get lab information

You can get some information about the lab using the inspect and graph functions.

$ sudo containerlab inspect --name frrlab
+---+-----------------+----------+---------------------+--------------+---------------------------------+-------+-------+---------+----------------+----------------------+
| # |    Topo Path    | Lab Name |        Name         | Container ID |              Image              | Kind  | Group |  State  |  IPv4 Address  |     IPv6 Address     |
+---+-----------------+----------+---------------------+--------------+---------------------------------+-------+-------+---------+----------------+----------------------+
| 1 | frrlab.clab.yml | frrlab   | clab-frrlab-PC1     | 02eea96ab0f0 | praqma/network-multitool:latest | linux |       | running | 172.20.20.4/24 | 2001:172:20:20::4/64 |
| 2 |                 |          | clab-frrlab-PC2     | 9987d5ac6bd9 | praqma/network-multitool:latest | linux |       | running | 172.20.20.6/24 | 2001:172:20:20::6/64 |
| 3 |                 |          | clab-frrlab-PC3     | 66c24d270c1a | praqma/network-multitool:latest | linux |       | running | 172.20.20.7/24 | 2001:172:20:20::7/64 |
| 4 |                 |          | clab-frrlab-router1 | 4936f56d28b2 | frrouting/frr:v7.5.1            | linux |       | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
| 5 |                 |          | clab-frrlab-router2 | 610563b7052a | frrouting/frr:v7.5.1            | linux |       | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
| 6 |                 |          | clab-frrlab-router3 | 9f501e040a65 | frrouting/frr:v7.5.1            | linux |       | running | 172.20.20.5/24 | 2001:172:20:20::5/64 |
+---+-----------------+----------+---------------------+--------------+---------------------------------+-------+-------+---------+----------------+----------------------+

The graph function starts a web server that displays the lab topology.

Run:

$ sudo containerlab graph --topo frrlab.yml

Open a web browser to the URL `http://localhost:50080`. You will see a web page with a network diagram and a table with management information.

For small networks, this is not very useful because it does not show the port names on each node. I think it would be more useful in large network emulation scenarios with dozens or hundreds of nodes.

Packet capture on lab interfaces

To capture network traffic on one of the Containerlab network connections, one must again access the interfaces in each container’s network namespace.

For example, we know that traffic from PC1 to PC3 will, when all links are up, pass via the link between Router1 and Router3. Let’s monitor the traffic on one of the interfaces that make up that connection.

We know, from our topology file, that interface eth2 on Router1 is connected to eth1 on Router3. So, let’s look at the traffic on Router3 eth1.

Router3’s network namespace has the same name as the container that runs Router3: clab-frrlab-router3. Follow the directions from the Containerlab documentation and run the following command to execute tcpdump and forward the tcpdump output to Wireshark:

$ sudo ip netns exec clab-frrlab-router3 tcpdump -U -n -i eth1 -w - | wireshark -k -i -

In the above command, tcpdump sends an unbuffered stream (the -U option) of packets read on interface eth1 (the -i eth1 option), without converting addresses to names (the -n option), to standard output (the -w - option). That output is piped to Wireshark, which reads from standard input (the -i - option) and starts displaying packets immediately (the -k option).

You should see a Wireshark window open on your desktop, displaying packets captured from Router3’s eth1 interface.

Stop the capture and Wireshark with the Ctrl-C key combination in the terminal window.

Stopping a network emulation

To stop a Containerlab network, run the clab destroy command using the same topology file you used to deploy the network:

$ sudo clab destroy --topo frrlab.yml

Contributing to Containerlab

If you create an interesting network emulation scenario, you may wish to contribute it to the lab examples in the Containerlab project.

In my case, I opened pull request #417 on Containerlab’s GitHub project page to offer them the files that create this example and hope it will be accepted.

Conclusion

Containerlab is a new network emulation tool that can create large, complex network emulation scenarios using a simple topology file. It leverages the strengths of Docker and Linux networking to build a lightweight infrastructure in which the emulated nodes run. The Containerlab developers include strong integrations for the SR Linux network operating system and also built in basic support for other commercial network operating systems.

Containerlab would be most interesting to network engineers who need to automate the setup of test networks as part of a development pipeline for network changes. The topology file for the test network can be included with the network configurations that need to be tested.

Containerlab does not abstract away all the complexity, however. Users may still need to have intermediate-level knowledge of Linux networking commands and Docker to emulate network failures and to capture network traffic for analysis and verification.

Some users will notice similarities between Containerlab and vrnetlab or docker-topo. The Containerlab developers documented how they re-used some of vrnetlab’s features and how they also were inspired by the topology file format used by the docker-topo network emulator.
