├── LICENSE ├── README.md ├── build-containers ├── bw.py ├── clab-clab └── Dockerfile ├── clab-frr ├── Dockerfile ├── daemons ├── frr.conf ├── start.sh └── vtysh.conf ├── clab-iperf3 └── Dockerfile ├── clab-sflow-rt └── Dockerfile ├── clos3.clab.gotmpl ├── clos3.clab_vars.yml ├── clos3.png ├── clos3.yml ├── clos5.png ├── clos5.yml ├── ddos.png ├── ddos.yml ├── evpn3.png ├── evpn3.yml ├── rocev2.tcl ├── rocev2.yml ├── run-clab ├── srlinux.png ├── srlinux.yml └── topo.py /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 InMon Corporation 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # containerlab 2 | Experiment with real-time network telemetry using [containerlab](https://containerlab.dev/) to simulate Clos (leaf/spine) fabrics. 
3 | 4 | * [Real-time telemetry from a 5 stage Clos fabric](https://blog.sflow.com/2022/02/real-time-telemetry-from-5-stage-clos.html) 5 | * [Topology aware fabric analytics](https://blog.sflow.com/2022/02/topology-aware-fabric-analytics.html) 6 | * [Real-time EVPN fabric visibility](https://blog.sflow.com/2022/03/real-time-evpn-fabric-visibility.html) 7 | * [Containerlab DDoS testbed](https://blog.sflow.com/2022/03/containerlab-ddos-testbed.html) 8 | * [DDoS attacks and BGP Flowspec responses](https://blog.sflow.com/2022/03/ddos-attacks-and-bgp-flowspec-responses.html) 9 | * [BGP Remotely Triggered Blackhole (RTBH)](https://blog.sflow.com/2022/04/bgp-remotely-triggered-blackhole-rtbh.html) 10 | * [SR Linux in Containerlab](https://blog.sflow.com/2022/07/sr-linux-in-containerlab.html) 11 | * [Real-time flow analytics with Containerlab templates](https://blog.sflow.com/2023/02/real-time-flow-analytics-with.html) 12 | * [Leaf and spine network emulation on Mac OS M1/M2 systems](https://blog.sflow.com/2023/05/leaf-and-spine-network-emulation-on-mac.html) 13 | * [Containerlab dashboard](https://blog.sflow.com/2023/08/containerlab-dashboard.html) 14 | * [Emulating congestion with Containerlab](https://blog.sflow.com/2024/09/emulating-congestion-with-containerlab.html) 15 | * [AI network performance monitoring using containerlab](https://blog.sflow.com/2025/06/ai-network-performance-monitoring-using.html) 16 | 17 | Get started (on a system running Docker): 18 | ``` 19 | git clone https://github.com/sflow-rt/containerlab.git 20 | cd containerlab 21 | ./run-clab 22 | ``` 23 | 24 | ## 5 Stage Clos Topology 25 | ![](clos5.png) 26 | 27 | Deploy 5 stage Clos topology: 28 | 29 | `containerlab deploy -t clos5.yml` 30 | 31 | Upload topology to sFlow-RT: 32 | 33 | `./topo.py clab-clos5` 34 | 35 | Generate traffic between `h1` and `h4`: 36 | 37 | `docker exec -it clab-clos5-h1 iperf3 -c 172.16.4.2` 38 | 39 | `docker exec -it clab-clos5-h1 iperf3 -c 2001:172:16:4::2` 40 | 41 | Connect to http://localhost:8008/ for analytics; see [Quickstart](https://sflow-rt.com/intro.php) for more information. 42 | 43 | ## 3 Stage Clos Topology 44 | ![](clos3.png) 45 | 46 | Deploy 3 stage Clos topology: 47 | 48 | `containerlab deploy -t clos3.yml` 49 | 50 | Upload topology to sFlow-RT: 51 | 52 | `./topo.py clab-clos3` 53 | 54 | Generate traffic between `h1` and `h2`: 55 | 56 | `docker exec -it clab-clos3-h1 iperf3 -c 172.16.2.2` 57 | 58 | `docker exec -it clab-clos3-h1 iperf3 -c 2001:172:16:2::2` 59 | 60 | Connect to http://localhost:8008/ for analytics; see [Quickstart](https://sflow-rt.com/intro.php) for more information. 61 | 62 | ## EVPN Topology 63 | ![](evpn3.png) 64 | 65 | Deploy EVPN topology: 66 | 67 | `containerlab deploy -t evpn3.yml` 68 | 69 | Upload topology to sFlow-RT: 70 | 71 | `./topo.py clab-evpn3` 72 | 73 | Generate traffic between `h1` and `h2`: 74 | 75 | `docker exec -it clab-evpn3-h1 iperf3 -c 172.16.10.2` 76 | 77 | `docker exec -it clab-evpn3-h1 iperf3 -c 2001:172:16:10::2` 78 | 79 | Connect to http://localhost:8008/ for analytics; see [Quickstart](https://sflow-rt.com/intro.php) for more information. 
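To confirm that the leaves have exchanged EVPN routes, you can query FRR on one of them (a quick spot check; the container name assumes the default `clab-evpn3` lab prefix):

`docker exec -it clab-evpn3-leaf1 vtysh -c "show bgp l2vpn evpn"`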
80 | 81 | ## RoCEv2 Topology 82 | 83 | ![](clos3.png) 84 | 85 | Deploy RoCEv2 topology: 86 | 87 | `containerlab deploy -t rocev2.yml` 88 | 89 | Upload topology to sFlow-RT: 90 | 91 | `./topo.py clab-rocev2` 92 | 93 | Generate traffic between `h1` and `h2`: 94 | 95 | `docker exec -it clab-rocev2-h1 hping3 exec rocev2.tcl 172.16.2.2 10000 500 100` 96 | 97 | Connect to http://localhost:8008/ for analytics; see [Quickstart](https://sflow-rt.com/intro.php) for more information. 98 | 99 | ## DDoS Topology 100 | ![](ddos.png) 101 | 102 | Deploy DDoS topology: 103 | 104 | `containerlab deploy -t ddos.yml` 105 | 106 | Simulate DDoS attack against `victim`: 107 | 108 | `docker exec -it clab-ddos-attacker hping3 --flood --udp -k -a 198.51.100.1 -s 53 192.0.2.129` 109 | 110 | Connect to http://localhost:8008/ for analytics; see [Quickstart](https://sflow-rt.com/intro.php) for more information. 111 | 112 | ## Nokia SR Linux 113 | ![](srlinux.png) 114 | 115 | Deploy SR Linux topology: 116 | 117 | `containerlab deploy -t srlinux.yml` 118 | 119 | Simulate traffic: 120 | 121 | `docker exec -it clab-srlinux-h1 iperf3 -c 172.16.2.2` 122 | 123 | Connect to http://localhost:8008/ for analytics; see [Quickstart](https://sflow-rt.com/intro.php) for more information. 124 |
-------------------------------------------------------------------------------- /build-containers: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | docker build --pull -t sflow/clab-frr clab-frr 3 | docker build --pull -t sflow/clab-iperf3 clab-iperf3 4 | docker build --pull -t sflow/clab-sflow-rt clab-sflow-rt 5 |
-------------------------------------------------------------------------------- /bw.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | # Restrict inter-switch link bandwidth 4 | # ./bw.py clab-clos3 5 | 6 | import sys 7 | from json import load 8 | from subprocess import run 9 | 10 | kbps = '10000' 11 | with open(sys.argv[1] + '/topology-data.json') as f: 12 | contents = load(f) 13 | router_image = 'sflow/clab-frr' 14 | nodes = contents['nodes'] 15 | def is_fabric_link(link): 16 | if nodes[link['a']['node']]['image'] != router_image: 17 | return False 18 | if nodes[link['z']['node']]['image'] != router_image: 19 | return False 20 | return True 21 | def limit(port): 22 | cmd = ['containerlab','tools','netem','set','-n',nodes[port['node']]['longname'],'-i',port['interface'],'--rate',kbps] 23 | run(cmd) 24 | for link in contents['links']: 25 | if is_fabric_link(link): 26 | limit(link['a']) 27 | limit(link['z']) 28 |
-------------------------------------------------------------------------------- /clab-clab/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ghcr.io/srl-labs/clab 2 | LABEL maintainer="InMon Corp. 
https://inmon.com" 3 | LABEL description="CONTAINERlab with Python3" 4 | LABEL url=https://hub.docker.com/r/sflow/clab 5 | RUN apk add --no-cache python3 6 | -------------------------------------------------------------------------------- /clab-frr/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:3.20 AS build 2 | RUN apk --update add \ 3 | build-base \ 4 | gcc \ 5 | git \ 6 | libpcap-dev \ 7 | linux-headers \ 8 | && git clone https://github.com/sflow/host-sflow.git \ 9 | && cd host-sflow \ 10 | && sed -i 's/#define SFL_INTERNAL_INTERFACE 0x3FFFFFFF/#define SFL_INTERNAL_INTERFACE 0/g' src/sflow/sflow.h \ 11 | && make FEATURES="PCAP" \ 12 | && make install 13 | 14 | FROM quay.io/frrouting/frr:10.2.3 15 | LABEL maintainer="InMon Corp. https://inmon.com" 16 | LABEL description="FRR and Host sFlow agent for CONTAINERlab" 17 | LABEL url=https://hub.docker.com/r/sflow/clab-frr 18 | COPY --from=build /usr/sbin/hsflowd /usr/sbin/hsflowd 19 | COPY --from=build /etc/hsflowd/ /etc/hsflowd/ 20 | RUN apk add --no-cache dmidecode libpcap 21 | ADD frr.conf vtysh.conf daemons /etc/frr/ 22 | ADD start.sh / 23 | ENTRYPOINT [ "/sbin/tini", "--", "/start.sh" ] 24 | -------------------------------------------------------------------------------- /clab-frr/daemons: -------------------------------------------------------------------------------- 1 | # This file tells the frr package which daemons to start. 2 | # 3 | # Sample configurations for these daemons can be found in 4 | # /usr/share/doc/frr/examples/. 5 | # 6 | # ATTENTION: 7 | # 8 | # When activating a daemon for the first time, a config file, even if it is 9 | # empty, has to be present *and* be owned by the user and group "frr", else 10 | # the daemon will not be started by /etc/init.d/frr. The permissions should 11 | # be u=rw,g=r,o=. 12 | # When using "vtysh" such a config file is also needed. It should be owned by 13 | # group "frrvty" and set to ug=rw,o= though. Check /etc/pam.d/frr, too. 14 | # 15 | # The watchfrr, zebra and staticd daemons are always started. 16 | # 17 | bgpd=yes 18 | ospfd=no 19 | ospf6d=no 20 | ripd=no 21 | ripngd=no 22 | isisd=no 23 | pimd=no 24 | ldpd=no 25 | nhrpd=no 26 | eigrpd=no 27 | babeld=no 28 | sharpd=no 29 | pbrd=no 30 | bfdd=no 31 | fabricd=no 32 | vrrpd=no 33 | pathd=no 34 | 35 | # 36 | # If this option is set the /etc/init.d/frr script automatically loads 37 | # the config via "vtysh -b" when the servers are started. 38 | # Check /etc/pam.d/frr if you intend to use "vtysh"! 39 | # 40 | vtysh_enable=yes 41 | zebra_options=" -A 127.0.0.1 -s 90000000" 42 | bgpd_options=" -A 127.0.0.1" 43 | ospfd_options=" -A 127.0.0.1" 44 | ospf6d_options=" -A ::1" 45 | ripd_options=" -A 127.0.0.1" 46 | ripngd_options=" -A ::1" 47 | isisd_options=" -A 127.0.0.1" 48 | pimd_options=" -A 127.0.0.1" 49 | ldpd_options=" -A 127.0.0.1" 50 | nhrpd_options=" -A 127.0.0.1" 51 | eigrpd_options=" -A 127.0.0.1" 52 | babeld_options=" -A 127.0.0.1" 53 | sharpd_options=" -A 127.0.0.1" 54 | pbrd_options=" -A 127.0.0.1" 55 | staticd_options="-A 127.0.0.1" 56 | bfdd_options=" -A 127.0.0.1" 57 | fabricd_options="-A 127.0.0.1" 58 | vrrpd_options=" -A 127.0.0.1" 59 | pathd_options=" -A 127.0.0.1" 60 | 61 | # configuration profile 62 | # 63 | #frr_profile="traditional" 64 | frr_profile="datacenter" 65 | 66 | # 67 | # This is the maximum number of FD's that will be available. 68 | # Upon startup this is read by the control files and ulimit 69 | # is called. 
Uncomment and use a reasonable value for your 70 | # setup if you are expecting a large number of peers in 71 | # say BGP. 72 | MAX_FDS=1024 73 | 74 | # The list of daemons to watch is automatically generated by the init script. 75 | #watchfrr_options="" 76 | 77 | # To make watchfrr create/join the specified netns, use the following option: 78 | #watchfrr_options="--netns" 79 | # This only has an effect in /etc/frr/<namespace>/daemons, and you need to 80 | # start FRR with "/usr/lib/frr/frrinit.sh start <namespace>". 81 | 82 | # for debugging purposes, you can specify a "wrap" command to start instead 83 | # of starting the daemon directly, e.g. to use valgrind on ospfd: 84 | # ospfd_wrap="/usr/bin/valgrind" 85 | # or you can use "all_wrap" for all daemons, e.g. to use perf record: 86 | # all_wrap="/usr/bin/perf record --call-graph -" 87 | # the normal daemon command is added to this at the end. 88 |
-------------------------------------------------------------------------------- /clab-frr/frr.conf: -------------------------------------------------------------------------------- 1 | frr defaults datacenter 2 | log stdout 3 | 4 | ip nht resolve-via-default 5 | 6 | router bgp LOCAL_AS 7 | bgp bestpath as-path multipath-relax 8 | bgp bestpath compare-routerid 9 | neighbor fabric peer-group 10 | neighbor fabric remote-as external 11 | neighbor fabric description Internal Fabric Network 12 | neighbor fabric capability extended-nexthop 13 |
-------------------------------------------------------------------------------- /clab-frr/start.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | COLLECTOR="${COLLECTOR:-none}" 4 | POLLING="${POLLING:-30}" 5 | SAMPLING="${SAMPLING:-1000}" 6 | NEIGHBORS="${NEIGHBORS:-eth1 eth2}" 7 | HOSTPORT="${HOSTPORT:-eth3}" 8 | HOSTNET="${HOSTNET:-none}" 9 | EVPNSRC="${EVPNSRC:-none}" 10 | CTLASN="${CTLASN:-none}" 11 | FLOWSPEC="${FLOWSPEC:-no}" 12 | RTBH="${RTBH:-none}" 13 | 14 | CONF='/etc/hsflowd.conf' 15 | 16 | printf "sflow {\n" > $CONF 17 | printf " sampling=$SAMPLING\n" >> $CONF 18 | printf " sampling.bps_ratio=0\n" >> $CONF 19 | printf " polling=$POLLING\n" >> $CONF 20 | if [ "$COLLECTOR" != "none" ]; then 21 | printf " collector { ip=$COLLECTOR }\n" >> $CONF 22 | fi 23 | for dev in ${NEIGHBORS} 24 | do 25 | printf " pcap { dev=$dev }\n" >> $CONF 26 | done 27 | if [ "$HOSTNET" != "none" ]; then 28 | printf " pcap { dev=$HOSTPORT }\n" >> $CONF 29 | fi 30 | printf "}\n" >> $CONF 31 | 32 | BGP=/etc/frr/frr.conf 33 | LOCAL_ADDR=`hostname -i` 34 | LOCAL_AS="${LOCAL_AS:-65000}" 35 | sed -i "s/LOCAL_AS/$LOCAL_AS/g" $BGP 36 | if [ "$RTBH" != "none" ]; then 37 | printf "neighbor fabric ebgp-multihop 255\n" >> $BGP 38 | fi 39 | for dev in ${NEIGHBORS} 40 | do 41 | printf "neighbor $dev interface peer-group fabric\n" >> $BGP 42 | done 43 | if [ "$CTLASN" != "none" ]; then 44 | printf "neighbor $COLLECTOR remote-as $CTLASN\n" >> $BGP 45 | printf "neighbor $COLLECTOR port 1179\n" >> $BGP 46 | printf "neighbor $COLLECTOR timers connect 10\n" >> $BGP 47 | printf "\n" >> $BGP 48 | fi 49 | if [ "$FLOWSPEC" == "yes" ]; then 50 | printf "address-family ipv4 flowspec\n" >> $BGP 51 | printf "neighbor fabric activate\n" >> $BGP 52 | if [ "$CTLASN" != "none" ]; then 53 | printf "neighbor $COLLECTOR activate\n" >> $BGP 54 | fi 55 | printf "exit-address-family\n" >> $BGP 56 | fi 57 | printf "\n" >> $BGP 58 | if [ "$HOSTNET" == "evpn" ]; then 59 | printf "address-family l2vpn evpn\n" >> $BGP 60 | printf "neighbor fabric activate\n" >> $BGP 61 | if [ "$EVPNSRC" != "none" ]; then 62 | printf "advertise-all-vni\n" >> $BGP 63 | fi 64 | printf "exit-address-family\n" >> $BGP 65 | printf "\n" >> $BGP 66 | if [ "$EVPNSRC" != "none" ]; then 67 | printf "address-family ipv4 unicast\n" >> $BGP 68 | printf "network $EVPNSRC/32\n" >> $BGP 69 | printf "exit-address-family\n" >> $BGP 70 | printf "\n" >> $BGP 71 | printf "exit\n" >> $BGP 72 | printf "\n" >> $BGP 73 | fi 74 | else 75 | if [ "$HOSTNET" != "none" ]; then 76 | printf "address-family ipv4 unicast\n" >> $BGP 77 | printf "redistribute connected route-map HOST_ROUTES\n" >> $BGP 78 | if [ "$RTBH" != "none" ]; then 79 | printf "neighbor fabric route-map RTBH in\n" >> $BGP 80 | fi 81 | printf "exit-address-family\n" >> $BGP 82 | printf "\n" >> $BGP 83 | printf "address-family ipv6 unicast\n" >> $BGP 84 | printf "redistribute connected route-map HOST_ROUTES\n" >> $BGP 85 | printf "neighbor fabric activate\n" >> $BGP 86 | printf "exit-address-family\n" >> $BGP 87 | printf "\n" >> $BGP 88 | printf "exit\n" >> $BGP 89 | printf "\n" >> $BGP 90 | printf "route-map HOST_ROUTES permit 10\n" >> $BGP 91 | printf "match interface $HOSTPORT\n" >> $BGP 92 | printf "exit\n" >> $BGP 93 | printf "\n" >> $BGP 94 | printf "interface $HOSTPORT\n" >> $BGP 95 | printf "ip address $HOSTNET\n" >> $BGP 96 | if [ "${HOSTNET6:-none}" != "none" ]; then 97 | printf "ipv6 address $HOSTNET6\n" >> $BGP 98 | fi 99 | printf "exit\n" >> $BGP 100 | printf "\n" >> $BGP 101 | else 102 | printf "address-family ipv6 unicast\n" >> $BGP 103 | printf "neighbor fabric activate\n" >> $BGP 104 | printf "exit-address-family\n" >> $BGP 105 | printf "\n" >> $BGP 106 | printf "exit\n" >> $BGP 107 | printf "\n" >> $BGP 108 | fi 109 | fi 110 | if [ "$RTBH" != "none" ]; then 111 | printf "bgp community-list standard BLACKHOLE seq 5 permit blackhole\n" >> $BGP 112 | printf "\n" >> $BGP 113 | printf "route-map RTBH permit 10\n" >> $BGP 114 | printf "match community BLACKHOLE\n" >> $BGP 115 | printf "set ip next-hop $RTBH\n" >> $BGP 116 | printf "exit\n" >> $BGP 117 | printf "\n" >> $BGP 118 | printf "route-map RTBH permit 20\n" >> $BGP 119 | printf "exit\n" >> $BGP 120 | printf "\n" >> $BGP 121 | printf "ip route $RTBH/32 Null0\n" >> $BGP 122 | printf "\n" >> $BGP 123 | fi 124 | 125 | chown -R frr:frr /etc/frr 126 | 127 | sysctl -w net.ipv4.fib_multipath_hash_policy=1 128 | sysctl -w net.ipv6.conf.all.forwarding=1 129 | 130 | while [ ! -f /tmp/initialized ]; do sleep 1; done 131 | 132 | if [ "$HOSTNET" == "evpn" ] && [ "$EVPNSRC" != "none" ]; then 133 | ip addr add $EVPNSRC/32 dev lo 134 | ip link add vxlan10 type vxlan id 10 dstport 0 local $EVPNSRC 135 | brctl addbr br10 136 | brctl addif br10 vxlan10 137 | brctl stp br10 off 138 | ip link set up dev br10 139 | ip link set up dev vxlan10 140 | brctl addif br10 $HOSTPORT 141 | fi 142 | 143 | if [ "$COLLECTOR" != "none" ]; then 144 | /usr/sbin/hsflowd 145 | fi 146 | exec /usr/lib/frr/docker-start 147 |
-------------------------------------------------------------------------------- /clab-frr/vtysh.conf: -------------------------------------------------------------------------------- 1 | 2 |
-------------------------------------------------------------------------------- /clab-iperf3/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:3.22 2 | LABEL maintainer="InMon Corp. 
https://inmon.com" 3 | LABEL description="iperf3 for CONTAINERlab" 4 | LABEL url=https://hub.docker.com/r/sflow/clab-iperf3 5 | RUN apk add --no-cache iperf3 tini \ 6 | && apk add --no-cache hping3 --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing 7 | ENTRYPOINT ["/sbin/tini","--","iperf3","-s"] 8 | -------------------------------------------------------------------------------- /clab-sflow-rt/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM sflow/sflow-rt:latest 2 | LABEL url=https://hub.docker.com/r/sflow/clab-sflow-rt 3 | RUN /sflow-rt/get-app.sh sflow-rt prometheus && /sflow-rt/get-app.sh sflow-rt browse-metrics && /sflow-rt/get-app.sh sflow-rt browse-flows && /sflow-rt/get-app.sh sflow-rt containerlab-dashboard 4 | -------------------------------------------------------------------------------- /clos3.clab.gotmpl: -------------------------------------------------------------------------------- 1 | name: clos3 2 | mgmt: 3 | network: fixedips 4 | ipv4-subnet: 172.100.100.0/24 5 | ipv6-subnet: 2001:172:100:100::/80 6 | mtu: 1500 7 | topology: 8 | defaults: 9 | kind: linux 10 | env: 11 | COLLECTOR: sflow-rt 12 | SAMPLING: {{ $.sflow.sampling }} 13 | POLLING: {{ $.sflow.polling }} 14 | nodes: 15 | {{- range $leafIndex := seq 1 $.leaves.num }} 16 | leaf{{ $leafIndex }}: 17 | image: sflow/clab-frr 18 | group: leaf 19 | env: 20 | LOCAL_AS: {{ printf "65%03d" $leafIndex }} 21 | NEIGHBORS:{{- range $spineIndex := seq 1 $.spines.num }} eth{{ $spineIndex}}{{- end }} 22 | HOSTPORT: eth{{ add $.spines.num 1 }} 23 | HOSTNET: 172.16.{{ $leafIndex }}.1/24 24 | HOSTNET6: 2001:172:16:{{ $leafIndex }}::1/64 25 | exec: 26 | - touch /tmp/initialized 27 | {{- end }} 28 | {{- range $spineIndex := seq 1 $.spines.num }} 29 | spine{{ $spineIndex }}: 30 | image: sflow/clab-frr 31 | group: spine 32 | env: 33 | LOCAL_AS: {{ printf "65%03d" (add 1 $.leaves.num) }} 34 | NEIGHBORS:{{- range $leafIndex := seq 1 $.leaves.num }} eth{{ $leafIndex }}{{- end }} 35 | exec: 36 | - touch /tmp/initialized 37 | {{- end }} 38 | {{- range $leafIndex := seq 1 $.leaves.num }} 39 | h{{ $leafIndex }}: 40 | image: sflow/clab-iperf3 41 | group: server 42 | exec: 43 | - ip addr add 172.16.{{ $leafIndex }}.2/24 dev eth1 44 | - ip route add 172.16.0.0/16 via 172.16.{{ $leafIndex }}.1 45 | - ip addr add 2001:172:16:{{ $leafIndex }}::2/64 dev eth1 46 | - ip route add 2001:172:16::/48 via 2001:172:16:{{ $leafIndex }}::1 47 | {{- end }} 48 | sflow-rt: 49 | image: sflow/clab-sflow-rt 50 | ports: 51 | - 8008:8008 52 | links: 53 | {{- range $spineIndex := seq 1 $.spines.num }} 54 | {{- range $leafIndex := seq 1 $.leaves.num }} 55 | - endpoints: ["spine{{ $spineIndex }}:eth{{ $leafIndex }}", "leaf{{ $leafIndex }}:eth{{ $spineIndex }}"] 56 | {{- end }} 57 | {{- end }} 58 | {{- range $leafIndex := seq 1 $.leaves.num }} 59 | - endpoints: ["leaf{{ $leafIndex }}:eth{{ add $.spines.num 1 }}", "h{{ $leafIndex }}:eth1"] 60 | mtu: 1500 61 | {{- end }} 62 | -------------------------------------------------------------------------------- /clos3.clab_vars.yml: -------------------------------------------------------------------------------- 1 | sflow: 2 | sampling: 1000 3 | polling: 30 4 | spines: 5 | num: 2 6 | leaves: 7 | num: 2 8 | -------------------------------------------------------------------------------- /clos3.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/sflow-rt/containerlab/b02f8846f38d023ad0cac82316030df0be8ad909/clos3.png -------------------------------------------------------------------------------- /clos3.yml: -------------------------------------------------------------------------------- 1 | name: clos3 2 | mgmt: 3 | network: fixedips 4 | ipv4-subnet: 172.100.100.0/24 5 | ipv6-subnet: 2001:172:100:100::/80 6 | mtu: 1500 7 | 8 | topology: 9 | defaults: 10 | kind: linux 11 | env: 12 | COLLECTOR: sflow-rt 13 | SAMPLING: ${SAMPLING:=1000} 14 | POLLING: ${POLLING:=30} 15 | nodes: 16 | leaf1: 17 | image: sflow/clab-frr 18 | group: leaf 19 | env: 20 | LOCAL_AS: 65001 21 | NEIGHBORS: eth1 eth2 22 | HOSTPORT: eth3 23 | HOSTNET: "172.16.1.1/24" 24 | HOSTNET6: "2001:172:16:1::1/64" 25 | exec: 26 | - touch /tmp/initialized 27 | leaf2: 28 | image: sflow/clab-frr 29 | group: leaf 30 | env: 31 | LOCAL_AS: 65002 32 | NEIGHBORS: eth1 eth2 33 | HOSTPORT: eth3 34 | HOSTNET: "172.16.2.1/24" 35 | HOSTNET6: "2001:172:16:2::1/64" 36 | exec: 37 | - touch /tmp/initialized 38 | spine1: 39 | image: sflow/clab-frr 40 | group: spine 41 | env: 42 | LOCAL_AS: 65003 43 | NEIGHBORS: eth1 eth2 44 | exec: 45 | - touch /tmp/initialized 46 | spine2: 47 | image: sflow/clab-frr 48 | group: spine 49 | env: 50 | LOCAL_AS: 65003 51 | NEIGHBORS: eth1 eth2 52 | exec: 53 | - touch /tmp/initialized 54 | h1: 55 | image: sflow/clab-iperf3 56 | group: server 57 | exec: 58 | - ip addr add 172.16.1.2/24 dev eth1 59 | - ip route add 172.16.2.0/24 via 172.16.1.1 60 | - ip addr add 2001:172:16:1::2/64 dev eth1 61 | - ip route add 2001:172:16:2::/64 via 2001:172:16:1::1 62 | h2: 63 | image: sflow/clab-iperf3 64 | group: server 65 | exec: 66 | - ip addr add 172.16.2.2/24 dev eth1 67 | - ip route add 172.16.1.0/24 via 172.16.2.1 68 | - ip addr add 2001:172:16:2::2/64 dev eth1 69 | - ip route add 2001:172:16:1::/64 via 2001:172:16:2::1 70 | sflow-rt: 71 | image: sflow/clab-sflow-rt 72 | ports: 73 | - 8008:8008 74 | links: 75 | - endpoints: ["leaf1:eth1","spine1:eth1"] 76 | - endpoints: ["leaf1:eth2","spine2:eth1"] 77 | - endpoints: ["leaf2:eth1","spine1:eth2"] 78 | - endpoints: ["leaf2:eth2","spine2:eth2"] 79 | - endpoints: ["h1:eth1","leaf1:eth3"] 80 | mtu: 1500 81 | - endpoints: ["h2:eth1","leaf2:eth3"] 82 | mtu: 1500 83 | -------------------------------------------------------------------------------- /clos5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sflow-rt/containerlab/b02f8846f38d023ad0cac82316030df0be8ad909/clos5.png -------------------------------------------------------------------------------- /clos5.yml: -------------------------------------------------------------------------------- 1 | name: clos5 2 | mgmt: 3 | network: fixedips 4 | ipv4-subnet: 172.100.100.0/24 5 | ipv6-subnet: 2001:172:100:100::/80 6 | mtu: 1500 7 | 8 | topology: 9 | defaults: 10 | kind: linux 11 | env: 12 | COLLECTOR: sflow-rt 13 | SAMPLING: ${SAMPLING:=1000} 14 | POLLING: ${POLLING:=30} 15 | nodes: 16 | leaf1: 17 | image: sflow/clab-frr 18 | group: leaf 19 | env: 20 | LOCAL_AS: 65001 21 | NEIGHBORS: eth1 eth2 22 | HOSTPORT: eth3 23 | HOSTNET: "172.16.1.1/24" 24 | HOSTNET6: "2001:172:16:1::1/64" 25 | exec: 26 | - touch /tmp/initialized 27 | leaf2: 28 | image: sflow/clab-frr 29 | group: leaf 30 | env: 31 | LOCAL_AS: 65002 32 | NEIGHBORS: eth1 eth2 33 | HOSTPORT: eth3 34 | HOSTNET: "172.16.2.1/24" 35 | HOSTNET6: "2001:172:16:2::1/64" 36 | exec: 37 | - touch /tmp/initialized 38 | leaf3: 39 | image: 
sflow/clab-frr 40 | group: leaf 41 | env: 42 | LOCAL_AS: 65003 43 | NEIGHBORS: eth1 eth2 44 | HOSTPORT: eth3 45 | HOSTNET: "172.16.3.1/24" 46 | HOSTNET6: "2001:172:16:3::1/64" 47 | exec: 48 | - touch /tmp/initialized 49 | leaf4: 50 | image: sflow/clab-frr 51 | group: leaf 52 | env: 53 | LOCAL_AS: 65004 54 | NEIGHBORS: eth1 eth2 55 | HOSTPORT: eth3 56 | HOSTNET: "172.16.4.1/24" 57 | HOSTNET6: "2001:172:16:4::1/64" 58 | exec: 59 | - touch /tmp/initialized 60 | spine1: 61 | image: sflow/clab-frr 62 | group: spine 63 | env: 64 | LOCAL_AS: 65005 65 | NEIGHBORS: eth1 eth2 eth3 66 | exec: 67 | - touch /tmp/initialized 68 | spine2: 69 | image: sflow/clab-frr 70 | group: spine 71 | env: 72 | LOCAL_AS: 65005 73 | NEIGHBORS: eth1 eth2 eth3 74 | exec: 75 | - touch /tmp/initialized 76 | spine3: 77 | image: sflow/clab-frr 78 | group: spine 79 | env: 80 | LOCAL_AS: 65006 81 | NEIGHBORS: eth1 eth2 eth3 82 | exec: 83 | - touch /tmp/initialized 84 | spine4: 85 | image: sflow/clab-frr 86 | group: spine 87 | env: 88 | LOCAL_AS: 65006 89 | NEIGHBORS: eth1 eth2 eth3 90 | exec: 91 | - touch /tmp/initialized 92 | superspine1: 93 | image: sflow/clab-frr 94 | group: superspine 95 | env: 96 | LOCAL_AS: 65007 97 | NEIGHBORS: eth1 eth2 98 | exec: 99 | - touch /tmp/initialized 100 | superspine2: 101 | image: sflow/clab-frr 102 | group: superspine 103 | env: 104 | LOCAL_AS: 65007 105 | NEIGHBORS: eth1 eth2 106 | exec: 107 | - touch /tmp/initialized 108 | h1: 109 | image: sflow/clab-iperf3 110 | group: server 111 | exec: 112 | - ip addr add 172.16.1.2/24 dev eth1 113 | - ip route add 172.16.0.0/16 via 172.16.1.1 114 | - ip addr add 2001:172:16:1::2/64 dev eth1 115 | - ip route add 2001:172:16::/48 via 2001:172:16:1::1 116 | h2: 117 | image: sflow/clab-iperf3 118 | group: server 119 | exec: 120 | - ip addr add 172.16.2.2/24 dev eth1 121 | - ip route add 172.16.0.0/16 via 172.16.2.1 122 | - ip addr add 2001:172:16:2::2/64 dev eth1 123 | - ip route add 2001:172:16::/48 via 2001:172:16:2::1 124 | h3: 125 | image: sflow/clab-iperf3 126 | group: server 127 | exec: 128 | - ip addr add 172.16.3.2/24 dev eth1 129 | - ip route add 172.16.0.0/16 via 172.16.3.1 130 | - ip addr add 2001:172:16:3::2/64 dev eth1 131 | - ip route add 2001:172:16::/48 via 2001:172:16:3::1 132 | h4: 133 | image: sflow/clab-iperf3 134 | group: server 135 | exec: 136 | - ip addr add 172.16.4.2/24 dev eth1 137 | - ip route add 172.16.0.0/16 via 172.16.4.1 138 | - ip addr add 2001:172:16:4::2/64 dev eth1 139 | - ip route add 2001:172:16::/48 via 2001:172:16:4::1 140 | sflow-rt: 141 | image: sflow/clab-sflow-rt 142 | ports: 143 | - 8008:8008 144 | links: 145 | - endpoints: ["leaf1:eth1","spine1:eth1"] 146 | - endpoints: ["leaf1:eth2","spine2:eth1"] 147 | - endpoints: ["leaf2:eth1","spine1:eth2"] 148 | - endpoints: ["leaf2:eth2","spine2:eth2"] 149 | - endpoints: ["leaf3:eth1","spine3:eth1"] 150 | - endpoints: ["leaf3:eth2","spine4:eth1"] 151 | - endpoints: ["leaf4:eth1","spine3:eth2"] 152 | - endpoints: ["leaf4:eth2","spine4:eth2"] 153 | - endpoints: ["spine1:eth3","superspine1:eth1"] 154 | - endpoints: ["spine2:eth3","superspine2:eth1"] 155 | - endpoints: ["spine3:eth3","superspine1:eth2"] 156 | - endpoints: ["spine4:eth3","superspine2:eth2"] 157 | - endpoints: ["h1:eth1","leaf1:eth3"] 158 | mtu: 1500 159 | - endpoints: ["h2:eth1","leaf2:eth3"] 160 | mtu: 1500 161 | - endpoints: ["h3:eth1","leaf3:eth3"] 162 | mtu: 1500 163 | - endpoints: ["h4:eth1","leaf4:eth3"] 164 | mtu: 1500 165 | 
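A quick sketch of how the `bw.py` helper shown earlier ties into this topology: once the 5 stage lab is up, the same script can rate-limit every inter-switch link to emulate congestion (assuming the default lab directory name created by containerlab):

`./bw.py clab-clos5`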
-------------------------------------------------------------------------------- /ddos.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sflow-rt/containerlab/b02f8846f38d023ad0cac82316030df0be8ad909/ddos.png -------------------------------------------------------------------------------- /ddos.yml: -------------------------------------------------------------------------------- 1 | name: ddos 2 | mgmt: 3 | network: fixedips 4 | ipv4-subnet: 172.100.100.0/24 5 | ipv6-subnet: 2001:172:100:100::/80 6 | mtu: 1500 7 | topology: 8 | defaults: 9 | kind: linux 10 | nodes: 11 | sp-router: 12 | image: sflow/clab-frr 13 | mgmt-ipv4: 172.100.100.2 14 | mgmt-ipv6: 2001:172:100:100::2 15 | env: 16 | LOCAL_AS: 64496 17 | NEIGHBORS: eth1 18 | FLOWSPEC: yes 19 | RTBH: 203.0.113.2 20 | HOSTPORT: eth2 21 | HOSTNET: "198.51.100.1/24" 22 | exec: 23 | - touch /tmp/initialized 24 | ce-router: 25 | image: sflow/clab-frr 26 | mgmt-ipv4: 172.100.100.3 27 | mgmt-ipv6: 2001:172:100:100::3 28 | env: 29 | LOCAL_AS: 64497 30 | NEIGHBORS: eth1 31 | FLOWSPEC: yes 32 | HOSTPORT: eth2 33 | HOSTNET: "192.0.2.1/24" 34 | CTLASN: 64497 35 | COLLECTOR: 172.100.100.6 36 | exec: 37 | - touch /tmp/initialized 38 | attacker: 39 | image: sflow/clab-iperf3 40 | mgmt-ipv4: 172.100.100.4 41 | mgmt-ipv6: 2001:172:100:100::4 42 | exec: 43 | - ip addr add 198.51.100.2/24 dev eth1 44 | - ip route add 192.0.2.0/24 via 198.51.100.1 45 | victim: 46 | image: sflow/clab-iperf3 47 | mgmt-ipv4: 172.100.100.5 48 | mgmt-ipv6: 2001:172:100:100::5 49 | exec: 50 | - ip addr add 192.0.2.129/24 dev eth1 51 | - ip route add 198.51.100.0/24 via 192.0.2.1 52 | controller: 53 | image: sflow/ddos-protect 54 | mgmt-ipv4: 172.100.100.6 55 | mgmt-ipv6: 2001:172:100:100::6 56 | env: 57 | RTPROP: > 58 | -Dddos_protect.as=64497 59 | -Dddos_protect.nexthop=203.0.113.2 60 | -Dddos_protect.enable.flowspec=yes 61 | -Dddos_protect.router=172.100.100.3 62 | -Dddos_protect.group.local=192.0.2.0/24 63 | -Dddos_protect.mode=automatic 64 | -Dddos_protect.icmp_flood.action=filter 65 | -Dddos_protect.icmp_flood.threshold=10000 66 | -Dddos_protect.icmp_flood.timeout=2 67 | -Dddos_protect.ip_flood.action=filter 68 | -Dddos_protect.ip_flood.threshold=10000 69 | -Dddos_protect.ip_flood.timeout=2 70 | -Dddos_protect.ip_fragmentation.action=filter 71 | -Dddos_protect.ip_fragmentation.threshold=10000 72 | -Dddos_protect.ip_fragmentation.timeout=2 73 | -Dddos_protect.tcp_amplification.action=filter 74 | -Dddos_protect.tcp_amplification.threshold=10000 75 | -Dddos_protect.tcp_amplification.timeout=2 76 | -Dddos_protect.tcp_flood.action=filter 77 | -Dddos_protect.tcp_flood.threshold=10000 78 | -Dddos_protect.tcp_flood.timeout=2 79 | -Dddos_protect.udp_amplification.action=filter 80 | -Dddos_protect.udp_amplification.threshold=10000 81 | -Dddos_protect.udp_amplification.timeout=2 82 | -Dddos_protect.udp_flood.action=filter 83 | -Dddos_protect.udp_flood.threshold=10000 84 | -Dddos_protect.udp_flood.timeout=2 85 | ports: 86 | - 8008:8008 87 | links: 88 | - endpoints: ["sp-router:eth1","ce-router:eth1"] 89 | - endpoints: ["sp-router:eth2","attacker:eth1"] 90 | mtu: 1500 91 | - endpoints: ["ce-router:eth2","victim:eth1"] 92 | mtu: 1500 93 | -------------------------------------------------------------------------------- /evpn3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sflow-rt/containerlab/b02f8846f38d023ad0cac82316030df0be8ad909/evpn3.png 
-------------------------------------------------------------------------------- /evpn3.yml: -------------------------------------------------------------------------------- 1 | name: evpn3 2 | mgmt: 3 | network: fixedips 4 | ipv4-subnet: 172.100.100.0/24 5 | ipv6-subnet: 2001:172:100:100::/80 6 | mtu: 1500 7 | 8 | topology: 9 | defaults: 10 | kind: linux 11 | env: 12 | COLLECTOR: sflow-rt 13 | SAMPLING: ${SAMPLING:=1000} 14 | POLLING: ${POLLING:=30} 15 | nodes: 16 | leaf1: 17 | image: sflow/clab-frr 18 | group: leaf 19 | env: 20 | LOCAL_AS: 65001 21 | NEIGHBORS: eth1 eth2 22 | HOSTPORT: eth3 23 | HOSTNET: evpn 24 | EVPNSRC: "192.168.1.1" 25 | exec: 26 | - touch /tmp/initialized 27 | leaf2: 28 | image: sflow/clab-frr 29 | group: leaf 30 | env: 31 | LOCAL_AS: 65002 32 | NEIGHBORS: eth1 eth2 33 | HOSTPORT: eth3 34 | HOSTNET: evpn 35 | EVPNSRC: "192.168.1.2" 36 | exec: 37 | - touch /tmp/initialized 38 | spine1: 39 | image: sflow/clab-frr 40 | group: spine 41 | env: 42 | LOCAL_AS: 65003 43 | NEIGHBORS: eth1 eth2 44 | HOSTNET: evpn 45 | exec: 46 | - touch /tmp/initialized 47 | spine2: 48 | image: sflow/clab-frr 49 | group: spine 50 | env: 51 | LOCAL_AS: 65003 52 | NEIGHBORS: eth1 eth2 53 | HOSTNET: evpn 54 | exec: 55 | - touch /tmp/initialized 56 | h1: 57 | image: sflow/clab-iperf3 58 | group: server 59 | exec: 60 | - ip addr add 172.16.10.1/24 dev eth1 61 | - ip addr add 2001:172:16:10::1/64 dev eth1 62 | h2: 63 | image: sflow/clab-iperf3 64 | group: server 65 | exec: 66 | - ip addr add 172.16.10.2/24 dev eth1 67 | - ip addr add 2001:172:16:10::2/64 dev eth1 68 | sflow-rt: 69 | image: sflow/clab-sflow-rt 70 | mgmt-ipv4: 172.100.100.8 71 | mgmt-ipv6: 2001:172:100:100::8 72 | ports: 73 | - 8008:8008 74 | links: 75 | - endpoints: ["leaf1:eth1","spine1:eth1"] 76 | - endpoints: ["leaf1:eth2","spine2:eth1"] 77 | - endpoints: ["leaf2:eth1","spine1:eth2"] 78 | - endpoints: ["leaf2:eth2","spine2:eth2"] 79 | - endpoints: ["h1:eth1","leaf1:eth3"] 80 | mtu: 1500 81 | - endpoints: ["h2:eth1","leaf2:eth3"] 82 | mtu: 1500 83 | -------------------------------------------------------------------------------- /rocev2.tcl: -------------------------------------------------------------------------------- 1 | if { $::argc ne 4} { 2 | puts "USAGE $::argv0 target packets delay cycles" 3 | return 1 4 | } 5 | set dst [lindex $argv 0] 6 | set n [lindex $argv 1] 7 | set delay [lindex $argv 2] 8 | set cycles [lindex $argv 3] 9 | set packet "ip(daddr=$dst,tos=0,ttl=64)+udp(sport=49152,dport=4791)+data(hex=00)" 10 | set messages { 11 | "0640000000000123000000000000000000000000000000000000800000000000000000000000" 12 | "0740000000000123000000000000000000000000000000000000800000000000000000000000" 13 | "0740000000000123000000000000000000000000000000000000800000000000000000000000" 14 | "0740000000000123000000000000000000000000000000000000800000000000000000000000" 15 | "0740000000000123000000000000000000000000000000000000800000000000000000000000" 16 | "1140000000000123000000000700000000000000000000000000000000000000000000000000" 17 | "8100000000000123000000000000000000000000000000000000000000000000000000000000" 18 | } 19 | for {set c 0} {$c < $cycles} {incr c} { 20 | set p 0 21 | while {$p < $n} { 22 | foreach message $messages { 23 | if {rand()<0.1} {set tos 3} else {set tos 0} 24 | set packet [hping setfield data hex $message $packet] 25 | set packet [hping setfield ip tos $tos $packet] 26 | hping send $packet 27 | incr p 28 | } 29 | } 30 | after $delay 31 | } 32 | 33 | 
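For reference, the positional arguments in the script's USAGE line map onto the README example as follows (an illustrative sketch using the same values shown above; the delay is the millisecond pause passed to Tcl's `after`):
```
# rocev2.tcl <target> <packets> <delay> <cycles>
#   target  = 172.16.2.2   destination IP (h2)
#   packets = 10000        packets sent per cycle
#   delay   = 500          pause between cycles, in milliseconds
#   cycles  = 100          number of cycles
docker exec -it clab-rocev2-h1 hping3 exec rocev2.tcl 172.16.2.2 10000 500 100
```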
-------------------------------------------------------------------------------- /rocev2.yml: -------------------------------------------------------------------------------- 1 | name: rocev2 2 | mgmt: 3 | network: fixedips 4 | ipv4-subnet: 172.100.100.0/24 5 | ipv6-subnet: 2001:172:100:100::/80 6 | mtu: 1500 7 | 8 | topology: 9 | defaults: 10 | kind: linux 11 | env: 12 | COLLECTOR: sflow-rt 13 | SAMPLING: ${SAMPLING:=100} 14 | POLLING: ${POLLING:=10} 15 | nodes: 16 | leaf1: 17 | image: sflow/clab-frr 18 | group: leaf 19 | env: 20 | LOCAL_AS: 65001 21 | NEIGHBORS: eth1 eth2 22 | HOSTPORT: eth3 23 | HOSTNET: "172.16.1.1/24" 24 | HOSTNET6: "2001:172:16:1::1/64" 25 | exec: 26 | - touch /tmp/initialized 27 | leaf2: 28 | image: sflow/clab-frr 29 | group: leaf 30 | env: 31 | LOCAL_AS: 65002 32 | NEIGHBORS: eth1 eth2 33 | HOSTPORT: eth3 34 | HOSTNET: "172.16.2.1/24" 35 | HOSTNET6: "2001:172:16:2::1/64" 36 | exec: 37 | - touch /tmp/initialized 38 | spine1: 39 | image: sflow/clab-frr 40 | group: spine 41 | env: 42 | LOCAL_AS: 65003 43 | NEIGHBORS: eth1 eth2 44 | exec: 45 | - touch /tmp/initialized 46 | spine2: 47 | image: sflow/clab-frr 48 | group: spine 49 | env: 50 | LOCAL_AS: 65003 51 | NEIGHBORS: eth1 eth2 52 | exec: 53 | - touch /tmp/initialized 54 | h1: 55 | image: sflow/clab-iperf3 56 | group: server 57 | binds: 58 | - ./rocev2.tcl:/rocev2.tcl 59 | exec: 60 | - ip addr add 172.16.1.2/24 dev eth1 61 | - ip route add 172.16.2.0/24 via 172.16.1.1 62 | - ip addr add 2001:172:16:1::2/64 dev eth1 63 | - ip route add 2001:172:16:2::/64 via 2001:172:16:1::1 64 | h2: 65 | image: sflow/clab-iperf3 66 | group: server 67 | binds: 68 | - ./rocev2.tcl:/rocev2.tcl 69 | exec: 70 | - ip addr add 172.16.2.2/24 dev eth1 71 | - ip route add 172.16.1.0/24 via 172.16.2.1 72 | - ip addr add 2001:172:16:2::2/64 dev eth1 73 | - ip route add 2001:172:16:1::/64 via 2001:172:16:2::1 74 | sflow-rt: 75 | image: sflow/ai-metrics 76 | ports: 77 | - 8008:8008 78 | links: 79 | - endpoints: ["leaf1:eth1","spine1:eth1"] 80 | - endpoints: ["leaf1:eth2","spine2:eth1"] 81 | - endpoints: ["leaf2:eth1","spine1:eth2"] 82 | - endpoints: ["leaf2:eth2","spine2:eth2"] 83 | - endpoints: ["h1:eth1","leaf1:eth3"] 84 | mtu: 4200 85 | - endpoints: ["h2:eth1","leaf2:eth3"] 86 | mtu: 4200 87 | -------------------------------------------------------------------------------- /run-clab: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | docker run --rm -it --privileged \ 3 | --network host --pid="host" \ 4 | -v /var/run/docker.sock:/var/run/docker.sock \ 5 | -v /var/run/netns:/var/run/netns \ 6 | -v /var/lib/docker/containers:/var/lib/docker/containers \ 7 | -v $(pwd):$(pwd) -w $(pwd) \ 8 | --name containerlab \ 9 | sflow/clab bash 10 | -------------------------------------------------------------------------------- /srlinux.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sflow-rt/containerlab/b02f8846f38d023ad0cac82316030df0be8ad909/srlinux.png -------------------------------------------------------------------------------- /srlinux.yml: -------------------------------------------------------------------------------- 1 | name: srlinux 2 | mgmt: 3 | network: fixedips 4 | ipv4-subnet: 172.100.100.0/24 5 | ipv6-subnet: 2001:172:100:100::/80 6 | topology: 7 | nodes: 8 | switch: 9 | kind: srl 10 | image: ghcr.io/nokia/srlinux 11 | group: leaf 12 | mgmt-ipv4: 172.100.100.2 13 | mgmt-ipv6: 2001:172:100:100::2 14 | exec: 15 | - 
sr_cli -e -- set / interface ethernet-1/1 admin-state enable 16 | - sr_cli -e -- set / interface ethernet-1/1 subinterface 0 ipv4 admin-state enable 17 | - sr_cli -e -- set / interface ethernet-1/1 subinterface 0 ipv4 address 172.16.1.1/24 18 | - sr_cli -e -- set / interface ethernet-1/2 admin-state enable 19 | - sr_cli -e -- set / interface ethernet-1/2 subinterface 0 ipv4 admin-state enable 20 | - sr_cli -e -- set / interface ethernet-1/2 subinterface 0 ipv4 address 172.16.2.1/24 21 | - sr_cli -e -- set / interface ethernet-1/3 admin-state enable 22 | - sr_cli -e -- set / interface ethernet-1/3 subinterface 0 ipv4 admin-state enable 23 | - sr_cli -e -- set / interface ethernet-1/3 subinterface 0 ipv4 address 172.100.100.6/24 24 | - sr_cli -e -- set / network-instance default type ip-vrf 25 | - sr_cli -e -- set / network-instance default admin-state enable 26 | - sr_cli -e -- set / network-instance default interface ethernet-1/1.0 27 | - sr_cli -e -- set / network-instance default interface ethernet-1/2.0 28 | - sr_cli -e -- set / network-instance default interface ethernet-1/3.0 29 | - sr_cli -e -- set / system sflow admin-state enable 30 | - sr_cli -e -- set / system sflow sample-rate 10 31 | - sr_cli -e -- set / system sflow collector 1 collector-address 172.100.100.5 32 | - sr_cli -e -- set / system sflow collector 1 source-address 172.100.100.6 33 | - sr_cli -e -- set / system sflow collector 1 network-instance default 34 | - sr_cli -e -- set / interface ethernet-1/1 sflow admin-state enable 35 | - sr_cli -ec -- set / interface ethernet-1/2 sflow admin-state enable 36 | h1: 37 | kind: linux 38 | image: sflow/clab-iperf3 39 | group: server 40 | mgmt-ipv4: 172.100.100.3 41 | mgmt-ipv6: 2001:172:100:100::3 42 | exec: 43 | - ip link set dev eth1 mtu 1500 44 | - ip addr add 172.16.1.2/24 dev eth1 45 | - ip route add 172.16.2.0/24 via 172.16.1.1 46 | h2: 47 | kind: linux 48 | image: sflow/clab-iperf3 49 | group: server 50 | mgmt-ipv4: 172.100.100.4 51 | mgmt-ipv6: 2001:172:100:100::4 52 | exec: 53 | - ip link set dev eth1 mtu 1500 54 | - ip addr add 172.16.2.2/24 dev eth1 55 | - ip route add 172.16.1.0/24 via 172.16.2.1 56 | sflow-rt: 57 | kind: linux 58 | image: sflow/clab-sflow-rt 59 | mgmt-ipv4: 172.100.100.5 60 | mgmt-ipv6: 2001:172:100:100::5 61 | ports: 62 | - 8008:8008 63 | links: 64 | - endpoints: ["switch:e1-1", "h1:eth1"] 65 | - endpoints: ["switch:e1-2", "h2:eth1"] 66 | - endpoints: ["switch:e1-3", "mgmt-net:switch-e1-3"] 67 | -------------------------------------------------------------------------------- /topo.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | # Post Containerlab topology to sflow-rt 4 | # ./topo.py clab-clos3 5 | 6 | import sys 7 | from json import load, dumps 8 | from urllib.request import build_opener, HTTPHandler, Request 9 | 10 | with open(sys.argv[1] + '/topology-data.json') as f: 11 | contents = load(f) 12 | collector = '127.0.0.1' 13 | router_image = 'sflow/clab-frr' 14 | rt_links = {} 15 | link_no = 1 16 | nodes = contents['nodes'] 17 | def is_fabric_link(link): 18 | if nodes[link['node1']]['image'] != router_image: 19 | return False 20 | if nodes[link['node2']]['image'] != router_image: 21 | return False 22 | return True 23 | for link in contents['links']: 24 | rt_link = { 25 | 'node1': link['a']['node'], 26 | 'port1': link['a']['interface'], 27 | 'node2': link['z']['node'], 28 | 'port2': link['z']['interface'] 29 | } 30 | if is_fabric_link(rt_link): 31 | rt_links['link%i' % 
link_no] = rt_link 32 | link_no = link_no + 1 33 | rt_topo = {'links':rt_links} 34 | 35 | opener = build_opener(HTTPHandler) 36 | request = Request('http://%s:8008/topology/json' % collector, data=dumps(rt_topo).encode('utf-8')) 37 | request.add_header('Content-Type','application/json') 38 | request.get_method = lambda: 'PUT' 39 | url = opener.open(request) 40 | --------------------------------------------------------------------------------
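Once `topo.py` has run, a quick sanity check is to read the topology back from sFlow-RT over the same REST endpoint the script writes to (assuming the default collector address and port used above):

`curl http://127.0.0.1:8008/topology/json`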