├── .gitignore ├── .gitmodules ├── CHANGELOG.md ├── README.md ├── ansible.cfg ├── check_group.yml ├── example.png ├── filter_plugins ├── indent.py └── report.py ├── hosts ├── ntc-templates ├── arista_eos_show_ip_igmp_snooping_groups.template ├── arista_eos_show_ip_mroute.template ├── arista_eos_show_ip_pim_rp-hash.template ├── arista_eos_show_ip_pim_rp.template ├── arista_eos_show_port-channel_summary.template ├── cisco_ios_show_etherchannel_summary.template ├── cisco_nxos_show_interface.template ├── cisco_nxos_show_ip_igmp_snooping_groups.template ├── cisco_nxos_show_ip_mroute.template ├── cisco_nxos_show_ip_pim_rp-hash.template ├── cisco_nxos_show_ip_pim_rp.template ├── cisco_nxos_show_lldp_neighbors.template └── index ├── requirements.txt ├── secrets └── .gitignore ├── ssh_config ├── tasks ├── eos │ └── rp.yml ├── interfaces.yml ├── ios │ ├── lags.yml │ └── snooping.yml ├── lags.yml ├── mroutes.yml ├── neighbours.yml ├── nxos │ ├── interfaces.yml │ ├── lags.yml │ └── neighbours.yml ├── rp.yml └── snooping.yml ├── templates ├── graph.j2 └── report.j2 ├── tests ├── arista_eos │ ├── arista_eos_show_ip_igmp_snooping_groups.raw │ ├── arista_eos_show_ip_mroute.raw │ ├── arista_eos_show_ip_pim_rp-hash.raw │ ├── arista_eos_show_ip_pim_rp.raw │ └── arista_eos_show_port-channel_summary.raw ├── cisco_ios │ └── cisco_ios_show_etherchannel_summary.raw └── cisco_nxos │ ├── cisco_nxos_show_interface.raw │ ├── cisco_nxos_show_ip_igmp_snooping_groups.raw │ ├── cisco_nxos_show_ip_mroute.raw │ ├── cisco_nxos_show_ip_pim_rp-hash.raw │ ├── cisco_nxos_show_ip_pim_rp.raw │ └── cisco_nxos_show_lldp_neighbors.raw └── vars └── ntc.yml /.gitignore: -------------------------------------------------------------------------------- 1 | # python virtualenv 2 | env 3 | 4 | # Generated files 5 | *.pyc 6 | 7 | # Local inventory file 8 | inventory 9 | -------------------------------------------------------------------------------- /.gitmodules: 
-------------------------------------------------------------------------------- 1 | [submodule "lib/ntc-ansible"] 2 | path = lib/ntc-ansible 3 | url = https://github.com/networktocode/ntc-ansible.git 4 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | = v0.1 - 2017-12-14 2 | 3 | * Initial release 4 | 5 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ansible-multicast-graph 2 | 3 | Visualises the multicast trees for a single IPv4 group address using information 4 | scraped from NX-OS, EOS and IOS devices with ansible, ntc-ansible and TextFSM. 5 | 6 | It is capable of showing the following information (provided all relevant 7 | devices are queried): 8 | 9 | * The Rendezvous Point device 10 | * What every other device believes the RP to be (to help detect config mismatches) 11 | * The direction of traffic flow on the shared multicast tree 12 | * The direction of traffic flow on any source-specific multicast trees 13 | * The publishers (IP addresses and names as discovered through LLDP) 14 | * The subscribers (as discovered through IGMP Snooping and LLDP) 15 | 16 | ![An example report](example.png) 17 | 18 | This tool is very young and could be significantly improved upon. 19 | It was a first attempt at doing something useful while studying for Ivan Pepelnjak's 20 | [Building Network Automation Solutions](http://www.ipspace.net/Building_Network_Automation_Solutions) 21 | course.
22 | 23 | ## Quick start 24 | 25 | * Clone this repository 26 | * Initialise the ntc-ansible submodule: 27 | 28 | ```bash 29 | git submodule update --init --recursive 30 | ``` 31 | 32 | * Create a python virtualenv and install the dependencies: 33 | 34 | ```bash 35 | virtualenv env 36 | source env/bin/activate 37 | pip install -r requirements.txt 38 | ``` 39 | 40 | * Update the `hosts` inventory file with your own devices 41 | * Create a vault file at `secrets/vault.yml` if you need an ssh username and password 42 | to access your devices, otherwise just create an empty file there: 43 | 44 | ```bash 45 | ansible-vault create secrets/vault.yml 46 | # or 47 | touch secrets/vault.yml 48 | ``` 49 | 50 | * Run the tool for your chosen multicast group with: 51 | 52 | ```bash 53 | ansible-playbook check_group.yml --extra-vars "mcast_group=224.0.0.123" 54 | ``` 55 | 56 | The rendered graph will be written to the `outputs` directory with the group 57 | address as the filename and a `.png` extension, e.g. `outputs/224.0.0.123.png`. 58 | 59 | If you need to use a bastion host to connect to your network devices, uncomment the 60 | line in the `hosts` file, and change the hostname of the bastion host there. Also edit 61 | `ssh_config` and set the right hostname there. 62 | 63 | ## Dependencies 64 | 65 | * python (tested with 2.7 only) 66 | * ansible (tested with 2.3) 67 | * graphviz 68 | 69 | The python libraries this tool depends on are listed in the 70 | `requirements.txt` file, which is intended to be used with a python virtualenv. 71 | 72 | ## How it works 73 | 74 | The playbook has several stages (each of which is tagged and can be run separately): 75 | 76 | * `setup` - Creates the output directories 77 | * `fetch` - Fetches information from each device and writes it to separate output 78 | files. This stage is split into multiple sub-stages, each of which is written to 79 | a separate file.
80 |   * `fetch-interfaces` - Retrieves the list of interfaces 81 |   * `fetch-lags` - Retrieves the list of link-aggregation groups 82 |   * `fetch-neighbours` - Retrieves the list of LLDP neighbours 83 |   * `fetch-rp` - Retrieves the Rendezvous Point 84 |   * `fetch-mroutes` - Retrieves the list of mroutes 85 |   * `fetch-snooping` - Retrieves the list of IGMP subscribers 86 | * `compile` - Merges the separate output files into a single report for easier 87 | processing 88 | * `generate` - Generates the dot graph from the compiled report 89 | * `render` - Renders the output `png` image from the dot graph 90 | 91 | The data is scraped from the devices using `ntc-ansible` and `ntc-templates`. 92 | This tool ships its own modified copies of several of the TextFSM templates shipped 93 | by ntc-templates, because the upstream versions did not retrieve all of the necessary data. I intend to 94 | upstream those modifications when time permits. 95 | 96 | All of the business logic for calculating the list of devices and relationships 97 | in the dot graph is done (ab)using jinja2 filters. 98 | 99 | ## Contributions 100 | 101 | Pull requests are very much welcome!
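Since all of the graph's business logic lives in jinja2 filters, the heavy lifting is plain Python. The key trick those filters share is normalising vendor interface names so NX-OS, EOS and IOS output can be compared. As a minimal sketch (the regex is taken verbatim from `filter_plugins/report.py`; the standalone wrapper function is simplified for illustration):

```python
import re

# Keep the two-letter interface prefix and the numeric suffix, dropping the
# middle of the name: "Ethernet33" -> "Et33", "port-channel141" -> "Po141".
PATTERN = r'^(Et|Gi|Lo|Po|po|Te|Vl)(?:[a-zA-Z-]*)(\d.*)$'

def normalise_iface(iface):
    """Shorten a vendor interface name to its comparable short form."""
    return re.sub(PATTERN, r'\1\2', iface, flags=re.IGNORECASE).capitalize()

print(normalise_iface('Ethernet33'))       # Et33
print(normalise_iface('port-channel141'))  # Po141
print(normalise_iface('Vlan3407'))         # Vl3407
```

This is what lets an LLDP neighbour reported as `Port-Channel141` on EOS match the `port-channel141` seen in an NX-OS mroute when the graph edges are built.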
102 | 103 | -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | transport=paramiko 3 | hostfile=./hosts 4 | host_key_checking=False 5 | timeout=5 6 | retry_files_enabled = False 7 | 8 | library = env/lib/python2.7/site-packages/napalm_ansible/:lib/ntc-ansible/library 9 | #stdout_callback = selective 10 | callback_whitelist = profile_tasks 11 | -------------------------------------------------------------------------------- /check_group.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Collate multicast details 3 | hosts: all 4 | connection: local 5 | gather_facts: false 6 | vars_files: 7 | - "{{vault_file|default('./secrets/vault.yml')}}" 8 | - "{{inventory_dir}}/vars/ntc.yml" 9 | vars: 10 | mcast_group: "UNSET" 11 | mcast_net: "224.0.0.0/4" 12 | results: "{{inventory_dir}}/outputs" 13 | host_results: "{{ results }}/{{ inventory_hostname }}" 14 | task_list: 15 | - "tasks/{{os}}/{{task_path|basename}}" 16 | - "{{task_path}}" 17 | 18 | tasks: 19 | - name: Validate multicast group address 20 | fail: 21 | msg: "Valid multicast group must be passed to playbook with -e/--extra-vars, e.g. 
`-e mcast_group=224.0.0.1`" 22 | when: not mcast_group|ipaddr or not mcast_group|ipaddr(mcast_net) 23 | 24 | - name: Create output directories 25 | local_action: file path={{ results }} state=directory 26 | run_once: true 27 | tags: ['setup'] 28 | 29 | - name: Create host output directories 30 | local_action: file path={{ host_results }} state=directory 31 | tags: ['setup'] 32 | 33 | - name: "Include tasks" 34 | include: "{{ lookup('first_found', task_list) }}" 35 | with_fileglob: [ "tasks/*.yml" ] 36 | loop_control: 37 | loop_var: task_path 38 | 39 | - name: "Compile Report" 40 | template: 41 | src: "templates/report.j2" 42 | dest: "{{results}}/{{mcast_group}}.yaml" 43 | run_once: true 44 | tags: ['compile'] 45 | 46 | - name: "Generate Graph" 47 | template: 48 | src: "templates/graph.j2" 49 | dest: "{{results}}/{{mcast_group}}.dot" 50 | vars: 51 | report: "{{ lookup('file', results + '/' + mcast_group + '.yaml')|from_yaml }}" 52 | edges: "{{ report|get_mroute_edges(mcast_group, play_hosts) }}" 53 | run_once: true 54 | tags: ['generate'] 55 | 56 | - name: "Render Graph" 57 | command: dot -Tpng "{{results}}/{{mcast_group}}.dot" -o "{{results}}/{{mcast_group}}.png" 58 | run_once: true 59 | tags: ['render'] 60 | 61 | 62 | 63 | # vim: set ts=2 shiftwidth=2 expandtab : 64 | -------------------------------------------------------------------------------- /example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/optiz0r/ansible-multicast-graph/7a69044be25a7ea366352fcc715f86e3742c8a0f/example.png -------------------------------------------------------------------------------- /filter_plugins/indent.py: -------------------------------------------------------------------------------- 1 | class FilterModule(object): 2 | 3 | def indent_block(self, value, indent=2): 4 | return "\n".join([' '*indent + l for l in value.splitlines()]) 5 | 6 | def filters(self): 7 | return { 8 | 'indent_block': 
self.indent_block, 9 | } 10 | -------------------------------------------------------------------------------- /filter_plugins/report.py: -------------------------------------------------------------------------------- 1 | import re 2 | import traceback 3 | 4 | from netaddr import IPAddress, IPNetwork 5 | from socket import gethostbyaddr 6 | 7 | 8 | class FilterModule(object): 9 | 10 | def hostname(self, hostname): 11 | """ Returns short hostname from given hostname or FQDN 12 | 13 | :param hostname: str The hostname or FQDN to shorten 14 | :return: str 15 | """ 16 | return hostname.split('.', 1)[0] 17 | 18 | def normalise_iface(self, iface): 19 | """ Normalise interface names to the short form 20 | 21 | :param iface: str|list Name(s) of the interface to normalise 22 | :return: str|list Normalised interface name(s) 23 | """ 24 | pattern = r'^(Et|Gi|Lo|Po|po|Te|Vl)(?:[a-zA-Z-]*)(\d.*)$' 25 | if type(iface) == list: 26 | return [re.sub(pattern, r'\1\2', i, flags=re.IGNORECASE).capitalize() for i in iface] 27 | else: 28 | return re.sub(pattern, r'\1\2', iface, flags=re.IGNORECASE).capitalize() 29 | 30 | def normalise_address(self, address): 31 | """ Normalise IP addresses by dropping the CIDR prefix for /32 32 | 33 | :param address: str|list IP address(es) to normalise 34 | :return: str|list Normalised IP address(es) 35 | """ 36 | pattern = r'^(.*?)(\/32)?$' 37 | if type(address) == list: 38 | return [re.sub(pattern, r'\1', i, flags=re.IGNORECASE) for i in address] 39 | else: 40 | return re.sub(pattern, r'\1', address, flags=re.IGNORECASE) 41 | 42 | def has_mroutes(self, value): 43 | """ Checks whether the given mroutes are valid or not 44 | 45 | :param value: The mroute object to check 46 | :return: bool 47 | """ 48 | if not value: 49 | return False 50 | 51 | if len(value) == 1 and type(value[0]) != dict: 52 | return False 53 | 54 | return True 55 | 56 | def has_rp(self, value): 57 | """ Checks whether the given RP is valid or not 58 | 59 | :param value: The RP object to check 60 | :return: bool 61 | """ 62 | if not 
value: 63 | return False 64 | 65 | if len(value) == 1 and type(value[0]) != dict: 66 | return False 67 | 68 | return True 69 | 70 | def has_snooping(self, value): 71 | """ Checks whether the given snooping data is valid or not 72 | 73 | :param value: The snooping data to check 74 | :return: bool 75 | """ 76 | if not value: 77 | return False 78 | 79 | if len(value) == 1 and type(value[0]) != dict: 80 | return False 81 | 82 | return True 83 | 84 | def get_rp_address(self, value): 85 | """ Retrieves the RP IP address from a host's 'rp' report section 86 | 87 | :param value: A list of RP or RP hash entries 88 | :return: The IP address of the RP 89 | """ 90 | try: 91 | if len(value) < 1: 92 | return None 93 | 94 | # Deal with the Arista EOS rp-hash case 95 | if 'selected_rp' in value[0]: 96 | return value[0]['selected_rp'] 97 | 98 | # Standard case, list of potential RPs 99 | rps = sorted(value, key=lambda k: k['priority']) 100 | if 'rp' in rps[0]: 101 | return rps[0]['rp'] 102 | else: 103 | return None 104 | except Exception: 105 | traceback.print_exc() 106 | 107 | def is_rp(self, rp, interfaces): 108 | """ Returns true if the given RP is listed in any of the interfaces 109 | 110 | :param rp: str|None The RP to check for 111 | :param interfaces: list List of interfaces to check against 112 | :return: bool 113 | """ 114 | 115 | if not rp or not interfaces: 116 | return False 117 | 118 | for interface in interfaces: 119 | if interface['ip_address'] and interface['ip_address'].split('/')[0] == rp: 120 | return True 121 | 122 | return False 123 | 124 | def get_mroute_neighbours(self, mroute, neighbours, direction='in'): 125 | """ Retrieves the LLDP neighbour information for an mroute incoming interface from the given neighbour list 126 | 127 | :param mroute: The mroute object to find the neighbour for 128 | :param neighbours: The neighbour list for the host to search in 129 | :param direction: Whether the neighbour being sought is incoming or outgoing 130 | :return: The 
found neighbor, or None 131 | """ 132 | results = [] 133 | 134 | # Not parsed correctly, or not matched 135 | if type(mroute) == str: 136 | return [] 137 | 138 | if not mroute or not neighbours: 139 | return [] 140 | 141 | field = 'incoming_iface' if direction == 'in' else 'outgoing_iface' 142 | 143 | candidate_interface_names = set(self.normalise_iface([mroute[field]] if direction == 'in' else mroute[field])) 144 | 145 | # Find neighbours based on LLDP links between physical L3 interfaces 146 | for neighbour in neighbours: 147 | if self.normalise_iface(neighbour['local_interface']) in candidate_interface_names: 148 | results.append(neighbour) 149 | 150 | return results 151 | 152 | def get_l2_neighbours(self, host, mroute, direction, report): 153 | """ Retrieves the list of L2 neighbours based on matching Vlans between switches 154 | 155 | This isn't a perfect model of the real network, but it's as close as we can get with the data available 156 | 157 | :param host: str Hostname for the current host 158 | :param mroute: Mroute to look for matches on 159 | :param direction: Whether to look for matches on inbound or outbound interfaces for this mroute 160 | :param report: Full report object 161 | :return: list List of neighbours for this L2 interface 162 | """ 163 | results = [] 164 | 165 | # Not parsed correctly, or not matched 166 | if type(mroute) == str: 167 | return [] 168 | 169 | field = 'incoming_iface' if direction == 'in' else 'outgoing_iface' 170 | candidate_interface_names = set( 171 | filter( 172 | lambda x: re.match(r'^Vl', x, re.IGNORECASE), 173 | self.normalise_iface([mroute[field]] if direction == 'in' else mroute[field]))) 174 | 175 | for neighbour_host in report: 176 | if host == neighbour_host or neighbour_host == '_meta': 177 | continue 178 | 179 | try: 180 | for remote_interface in report[neighbour_host]['interfaces']: 181 | if self.normalise_iface(remote_interface['interface']) in candidate_interface_names: 182 | # Found a potential L2 relationship 
183 | # Could potentially filter this out based on matching L3 info, if we see false positives 184 | results.append({ 185 | 'local_interface': remote_interface['interface'], # matches the local interface name by definition 186 | 'neighbor': neighbour_host, 187 | 'neighbor_interface': remote_interface['interface'], 188 | }) 189 | except Exception: 190 | print("Failed matching up l2 interfaces between {} and {}".format(host, neighbour_host)) 191 | traceback.print_exc() 192 | 193 | return results 194 | 195 | def get_portchannel_neighbours(self, host, mroute, neighbours, port_channels, direction, report): 196 | """ Retrieves the list of PortChannel neighbours between switches 197 | 198 | :param host: str Hostname for the current host 199 | :param mroute: Mroute to look for matches on 200 | :param port_channels: Portchannel list to look for matches in 201 | :param direction: Whether to look for matches on inbound or outbound interfaces for this mroute 202 | :param report: Full report object 203 | :return: list List of neighbours for this L2 interface 204 | """ 205 | results = [] 206 | 207 | # Not parsed correctly, or not matched 208 | if type(mroute) == str: 209 | return [] 210 | 211 | field = 'incoming_iface' if direction == 'in' else 'outgoing_iface' 212 | 213 | port_channel_names = filter( 214 | lambda x: re.match(r'^Po', x, re.IGNORECASE), 215 | self.normalise_iface([mroute[field]] if direction == 'in' else mroute[field])) 216 | 217 | for port_channel_name in port_channel_names: 218 | # Find the physical interfaces for this portchannel 219 | for port_channel in port_channels: 220 | if port_channel_name == self.normalise_iface(port_channel['bundle_iface']): 221 | normalised_phys_ifaces = self.normalise_iface(port_channel['phys_iface']) 222 | for neighbour in neighbours: 223 | if self.normalise_iface(neighbour['local_interface']) in normalised_phys_ifaces: 224 | normalised_neighbour = self.hostname(neighbour['neighbor']) 225 | if normalised_neighbour not in report: 
226 | # Discovered device not polled, ignoring silently 227 | continue 228 | # Find the port channel which contains this interface on the neighbour side 229 | for npc in report[normalised_neighbour]['lags']: 230 | normalised_neighbour_interface = self.normalise_iface(neighbour['neighbor_interface']) 231 | if normalised_neighbour_interface in self.normalise_iface(npc['phys_iface']): 232 | npc_name = self.normalise_iface(npc['bundle_iface']) 233 | 234 | results.append({ 235 | 'local_interface': port_channel_name, 236 | 'neighbor': neighbour['neighbor'], 237 | 'neighbor_interface': npc_name, 238 | }) 239 | 240 | return results 241 | 242 | def get_mroute_edges(self, report, mcast_group, play_hosts=[]): 243 | """ Calculates the set of mroute edges between play_hosts in the report 244 | 245 | :param report: The full report object 246 | :param play_hosts: list Optional list of play hosts to filter the report to 247 | :return: Dictionary of edges with {left|right}_{host|interface}, indexed by label 248 | """ 249 | edges = {} 250 | 251 | if not report: 252 | return edges 253 | 254 | if not play_hosts: 255 | play_hosts = report.keys() 256 | 257 | def add_edge(neighbour, direction): 258 | try: 259 | if direction == 'in': 260 | edge = { 261 | 'left_host': self.hostname(neighbour['neighbor']), 262 | 'left_interface': self.normalise_iface(neighbour['neighbor_interface']), 263 | 'right_host': host, 264 | 'right_interface': self.normalise_iface(neighbour['local_interface']), 265 | } 266 | else: 267 | edge = { 268 | 'left_host': host, 269 | 'left_interface': self.normalise_iface(neighbour['local_interface']), 270 | 'right_host': self.hostname(neighbour['neighbor']), 271 | 'right_interface': self.normalise_iface(neighbour['neighbor_interface']), 272 | } 273 | 274 | if re.match(r'^Vl', neighbour['local_interface'], re.IGNORECASE): 275 | edge['key'] = neighbour['local_interface'] 276 | else: 277 | edge['key'] = "{}:{}\\n-\\n{}:{}".format( 278 | edge['left_host'], 
edge['left_interface'], 279 | edge['right_host'], edge['right_interface']) 280 | 281 | edge['publisher'] = mroute['publisher'].replace('0.0.0.0', '*') 282 | edge['key'] += "\\n({}, {})".format(self.normalise_address(edge['publisher']), self.normalise_address(mroute['group'])) 283 | 284 | if edge['key'] not in edges: 285 | edges[edge['key']] = edge 286 | 287 | except Exception: 288 | traceback.print_exc() 289 | 290 | for host in play_hosts: 291 | if host not in report: 292 | # Failed host 293 | continue 294 | 295 | for mroute in report[host]['mroutes']: 296 | try: 297 | for in_neighbour in self.get_mroute_neighbours(mroute, report[host]['neighbours'], 'in'): 298 | add_edge(in_neighbour, 'in') 299 | 300 | for in_l2_neighbour in self.get_l2_neighbours(host, mroute, 'in', report): 301 | add_edge(in_l2_neighbour, 'in') 302 | 303 | for in_pc_neighbour in self.get_portchannel_neighbours(host, mroute, report[host]['neighbours'], report[host]['lags'], 'in', report): 304 | add_edge(in_pc_neighbour, 'in') 305 | 306 | for out_neighbour in self.get_mroute_neighbours(mroute, report[host]['neighbours'], 'out'): 307 | add_edge(out_neighbour, 'out') 308 | 309 | for out_l2_neighbour in self.get_l2_neighbours(host, mroute, 'out', report): 310 | add_edge(out_l2_neighbour, 'out') 311 | 312 | for out_pc_neighbour in self.get_portchannel_neighbours(host, mroute, report[host]['neighbours'], report[host]['lags'], 'out', report): 313 | add_edge(out_pc_neighbour, 'out') 314 | 315 | except Exception: 316 | traceback.print_exc() 317 | return edges 318 | 319 | def get_interface_neighbour(self, report, host, port): 320 | """ Retrieves the host attached to an interface 321 | 322 | Attempts to locate the host via LLDP information first, and by description second 323 | 324 | :param report: Full report object 325 | :param host: str Hostname of the device to look up the interface on 326 | :param port: str Name of the interface to look up the neighbour of 327 | """ 328 | #return "Unknown" 
329 | if host not in report or not host or not port: 330 | return "Unknown" 331 | 332 | normalised_port = self.normalise_iface(port) 333 | 334 | # Build an index into this host's neighbours to ease repeat lookups 335 | if 'neighbour_map' not in report[host]: 336 | report[host]['neighbour_map'] = {self.normalise_iface(n['local_interface']): n for n in report[host]['neighbours']} 337 | 338 | if normalised_port in report[host]['neighbour_map']: 339 | return self.hostname(report[host]['neighbour_map'][normalised_port]['neighbor']) 340 | 341 | # Build an index into this host's interfaces to ease repeat lookups 342 | if 'interface_map' not in report[host]: 343 | report[host]['interface_map'] = {i['interface']: i for i in report[host]['interfaces']} 344 | 345 | if port in report[host]['interface_map']: 346 | pass 347 | 348 | return "{}_{}".format(host, re.sub(r'[^a-zA-Z0-9-]', r'_', normalised_port)) 349 | 350 | def get_publishers(self, mroutes, interfaces): 351 | """ Returns a list of Publisher nodes attached to this switch by 352 | matching mroute publisher addresses to interface subnets 353 | 354 | :param mroutes: list List of mroutes on this switch 355 | :param interfaces: list List of interfaces on this switch 356 | :return: list List of hostnames that publish to this switch 357 | """ 358 | results = [] 359 | 360 | for mroute in mroutes: 361 | for interface in interfaces: 362 | if not mroute['publisher'] or mroute['publisher'] == '*' or mroute['publisher'] == '0.0.0.0': 363 | continue 364 | 365 | if not interface['ip_address']: 366 | continue 367 | 368 | ip_address = IPAddress(mroute['publisher'].split('/')[0]) 369 | ip_network = IPNetwork(interface['ip_address']) 370 | if ip_address in ip_network: 371 | publisher_hostname = self.hostname(gethostbyaddr(str(ip_address))[0]) 372 | results.append({ 373 | 'hostname': publisher_hostname, 374 | 'ip': mroute['publisher'], 375 | }) 376 | 377 | return results 378 | 379 | def filters(self): 380 | return { 381 | 'hostname': 
self.hostname, 382 | 'normalise_iface': self.normalise_iface, 383 | 'normalise_group': self.normalise_address, 384 | 'has_mroutes': self.has_mroutes, 385 | 'has_rp': self.has_rp, 386 | 'has_snooping': self.has_snooping, 387 | 'get_rp_address': self.get_rp_address, 388 | 'is_rp': self.is_rp, 389 | 'get_mroute_neighbours': self.get_mroute_neighbours, 390 | 'get_mroute_edges': self.get_mroute_edges, 391 | 'get_interface_neighbour': self.get_interface_neighbour, 392 | 'get_publishers': self.get_publishers, 393 | } 394 | -------------------------------------------------------------------------------- /hosts: -------------------------------------------------------------------------------- 1 | [core] 2 | core1 os=nxos port=8182 3 | core2 os=eos 4 | core3 os=ios 5 | 6 | [all:vars] 7 | ansible_python_interpreter="/usr/bin/env python" 8 | 9 | # Used by built-in modules to reach devices via jump host 10 | # Uncomment if needed 11 | #ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastionhost"' 12 | 13 | -------------------------------------------------------------------------------- /ntc-templates/arista_eos_show_ip_igmp_snooping_groups.template: -------------------------------------------------------------------------------- 1 | Value VLAN (\d+) 2 | Value GROUP (\d+\.\d+\.\d+\.\d+) 3 | Value VERSION (\d+|-) 4 | Value TYPE ([a-zA-Z]+) 5 | Value List PORTS (\S+) 6 | 7 | Start 8 | ^Vlan\s+Group 9 | ^---- -> Entry 10 | 11 | Entry 12 | ^${VLAN}\s+${GROUP}\s+${TYPE}\s+${VERSION}\s+${PORTS}$$ -> Record 13 | ^${VLAN}\s+${GROUP}\s+${TYPE}\s+${VERSION}\s+${PORTS}, -> Continue 14 | ^\S+\s+\S+\s+\S+\s+\S+\s+\S+,\s+${PORTS}$$ -> Record 15 | ^\S+\s+\S+\s+\S+\s+\S+\s+\S+,\s+${PORTS}, -> Continue 16 | ^\S+\s+\S+\s+\S+\s+\S+\s+\S+,\s+\S+,\s+${PORTS}$$ -> Record 17 | ^\S+\s+\S+\s+\S+\s+\S+\s+\S+,\s+\S+,\s+${PORTS}, -> Continue 18 | ^\S+\s+\S+\s+\S+\s+\S+\s+\S+,\s+\S+,\s+\S+,\s+${PORTS}$$ -> Record 19 | ^\S+\s+\S+\s+\S+\s+\S+\s+\S+,\s+\S+,\s+\S+,\s+${PORTS}, -> Continue 20 | 
^\s{52}${PORTS}(,|$$) -> Continue 21 | ^\s{52}\S+\s+${PORTS}(,|$$) -> Continue 22 | ^\s{52}\S+\s+\S+\s+${PORTS}(,|$$) -> Continue 23 | ^\s{52}\S+\s+\S+\s+\S+\s+${PORTS}(,|$$) -> Continue 24 | ^.*[^,]$$ -> Record 25 | 26 | -------------------------------------------------------------------------------- /ntc-templates/arista_eos_show_ip_mroute.template: -------------------------------------------------------------------------------- 1 | #PIM Sparse Mode Multicast Routing Table 2 | #Flags: E - Entry forwarding on the RPT, J - Joining to the SPT 3 | # R - RPT bit is set, S - SPT bit is set, L - Source is attached 4 | # W - Wildcard entry, X - External component interest 5 | # I - SG Include Join alert rcvd, P - Ex-Prune alert rcvd 6 | # H - Joining SPT due to policy, D - Joining SPT due to protocol 7 | # Z - Entry marked for deletion, C - Learned from a DR via a register 8 | # A - Learned via Anycast RP Router, M - Learned via MSDP 9 | # N - May notify MSDP, K - Keepalive timer not running 10 | # T - Switching Incoming Interface, B - Learned via Border Router 11 | #RPF route: U - From unicast routing table 12 | # M - From multicast routing table 13 | #224.0.64.194 14 | # 0.0.0.0, 17:06:55, RP 156.48.99.5, flags: W 15 | # Incoming interface: Ethernet33 16 | # RPF route: [U] 156.48.99.5/32 [200/0] via 172.30.1.173 17 | # Outgoing interface list: 18 | # Vlan3407 19 | # 20 | Value PUBLISHER (\*|\d+\.\d+\.\d+\.\d+) 21 | Value Filldown GROUP (\d+\.\d+\.\d+\.\d+) 22 | Value UPTIME (\S+) 23 | Value RP (\*|\d+\.\d+\.\d+\.\d+) 24 | Value INCOMING_IFACE ([a-zA-Z0-9/-]+) 25 | Value OUTGOING_IFACE_COUNT (\d+) 26 | Value List OUTGOING_IFACE (\S+) 27 | 28 | Start 29 | ^${GROUP}$$ -> List 30 | 31 | List 32 | ^\s+${PUBLISHER}, ${UPTIME}, flags: .* -> Entry 33 | ^\s+${PUBLISHER}, ${UPTIME}, RP ${RP}, flags: .* -> Entry 34 | 35 | Entry 36 | ^\s{2}\S -> Continue.Record 37 | ^\s+Incoming interface: ${INCOMING_IFACE} 38 | ^\s+RPF route:.* 39 | ^\s+Outgoing interface list: -> Outgoing_List 
40 | 41 | Outgoing_List 42 | ^\s{6}${OUTGOING_IFACE}\s*$$ 43 | ^\s{1,3}\S -> Continue.Record 44 | ^\s+${PUBLISHER}, ${UPTIME}, flags: .* -> Entry 45 | ^\s+${PUBLISHER}, ${UPTIME}, RP ${RP}, flags: .* -> Entry 46 | 47 | -------------------------------------------------------------------------------- /ntc-templates/arista_eos_show_ip_pim_rp-hash.template: -------------------------------------------------------------------------------- 1 | #RP 10.210.255.60 2 | # PIM v2 Hash Values: 3 | # RP: 10.210.255.60 4 | # Uptime: 354d12h, Expires: 0:02:21, Priority: 5, HashMaskLen: 1, HashMaskValue: 968088410 5 | # RP: 10.210.255.7 6 | # Uptime: 354d12h, Expires: 0:02:21, Priority: 20, HashMaskLen: 1, HashMaskValue: 1591431583 7 | Value Filldown SELECTED_RP (\d+\.\d+\.\d+\.\d+) 8 | 9 | Start 10 | ^RP ${SELECTED_RP} -> Record 11 | 12 | EOF 13 | 14 | -------------------------------------------------------------------------------- /ntc-templates/arista_eos_show_ip_pim_rp.template: -------------------------------------------------------------------------------- 1 | #Group: 225.0.0.0/22 2 | # RP: 10.210.255.60 3 | # Uptime: 354d17h, Expires: 0:02:22, Priority: 5 4 | # RP: 10.210.255.7 5 | # Uptime: 354d17h, Expires: 0:02:22, Priority: 20 6 | Value Filldown GROUP (\d+\.\d+\.\d+\.\d+\/\d+) 7 | Value RP (\d+\.\d+\.\d+\.\d+) 8 | Value UPTIME (\S+) 9 | Value EXPIRES ([0-9:]+) 10 | Value PRIORITY (\d+) 11 | 12 | Start 13 | ^Group: ${GROUP} -> RP_List 14 | 15 | RP_List 16 | ^\s+RP: ${RP} -> RP_Entry 17 | 18 | RP_Entry 19 | ^\s+Uptime: ${UPTIME}, Expires: ${EXPIRES}, Priority: ${PRIORITY} 20 | ^RP: -> Continue.Record 21 | ^RP: -> RP_List 22 | ^$$ -> Record -> RP_List 23 | 24 | -------------------------------------------------------------------------------- /ntc-templates/arista_eos_show_port-channel_summary.template: -------------------------------------------------------------------------------- 1 | # 2 | # Flags 3 | #------------------------ ---------------------------- 
------------------------- 4 | # a - LACP Active p - LACP Passive * - static fallback 5 | # F - Fallback enabled f - Fallback configured ^ - individual fallback 6 | # U - In Use D - Down 7 | # + - In-Sync - - Out-of-Sync i - incompatible with agg 8 | # P - bundled in Po s - suspended G - Aggregable 9 | # I - Individual S - ShortTimeout w - wait for agg 10 | # 11 | #Number of channels in use: 2 12 | #Number of aggregators: 2 13 | # 14 | # Port-Channel Protocol Ports 15 | #------------------ -------------- ---------------------- 16 | # Po47(U) LACP(a) Et47(G+) Et48(G+) 17 | # Po51(U) LACP(a) Et51/1(G+) Et52/1(G+) 18 | Value Required,Filldown BUNDLE_IFACE (Po\d+) 19 | Value Filldown BUNDLE_STATUS (\(\w+\)) 20 | Value Filldown BUNDLE_PROTO (\w+) 21 | Value Filldown BUNDLE_PROTO_STATE (\(\w+\)) 22 | Value List PHYS_IFACE (Et[0-9\/]+) 23 | Value List PHYS_IFACE_STATUS (\([a-zA-Z^*+-]+\)) 24 | 25 | Start 26 | ^\s+Port-Channel\s+Protocol -> Header 27 | 28 | Header 29 | ^--- -> List 30 | 31 | List 32 | ^\s+Po -> Continue.Record 33 | ^\s+${BUNDLE_IFACE}${BUNDLE_STATUS}\s+${BUNDLE_PROTO}${BUNDLE_PROTO_STATE}\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 34 | ^\s+Po\d+\(\w+\)\s+\w+\(\w+\)\s+Et.+?\(.+?\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 35 | ^\s+Po\d+\(\w+\)\s+\w+\(\w+\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 36 | ^\s+Po\d+\(\w+\)\s+\w+\(\w+\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 37 | ^\s+Po\d+\(\w+\)\s+\w+\(\w+\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 38 | ^\s+Po\d+\(\w+\)\s+\w+\(\w+\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+Et.+?\(.+?\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 39 | ^\s{24}${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 40 | ^\s{24}Et.+?\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 41 | 
^\s{24}Et.+?\s+Et.+?\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 42 | ^\s{24}Et.+?\s+Et.+?\s+Et.+?\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 43 | ^\s{24}Et.+?\s+Et.+?\s+Et.+?\s+Et.+?\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 44 | -------------------------------------------------------------------------------- /ntc-templates/cisco_ios_show_etherchannel_summary.template: -------------------------------------------------------------------------------- 1 | # 2 | #Group Port-channel Protocol Ports 3 | #------+-------------+-----------+----------------------------------------------- 4 | #47 Po47(SU) LACP Gi1/47(P) Gi1/48(P) 5 | # 6 | Value Required,Filldown BUNDLE_IFACE (Po\d+) 7 | Value BUNDLE_STATUS (\(\S+\)) 8 | Value BUNDLE_PROTO (\S+) 9 | Value BUNDLE_PROTO_STATE (\(\S+\)) 10 | Value List PHYS_IFACE (\S+) 11 | Value List PHYS_IFACE_STATUS (\(\S+\)) 12 | 13 | Start 14 | ^Group\s+ -> Header 15 | 16 | Header 17 | ^--- -> List 18 | 19 | List 20 | ^\d+ -> Continue.Record 21 | ^\d+\s+${BUNDLE_IFACE}${BUNDLE_STATUS}\s+${BUNDLE_PROTO}\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 22 | ^\d+\s+\S+\s+\S+\s+\S+\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 23 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 24 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\(\S+\)\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 25 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 26 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 27 | ^\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 28 | ^\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 29 | ^\s+\S+\(\S+\)\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 30 | ^\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 31 | 
^\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 32 | ^\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+\S+\(\S+\)\s+${PHYS_IFACE}${PHYS_IFACE_STATUS}(\s|$$) -> Continue 33 | -------------------------------------------------------------------------------- /ntc-templates/cisco_nxos_show_interface.template: -------------------------------------------------------------------------------- 1 | Value Required INTERFACE (\S+) 2 | Value LINK_STATUS (.+?) 3 | Value ADMIN_STATE (.+?) 4 | Value HARDWARE_TYPE (.*) 5 | Value ADDRESS ([a-zA-Z0-9]+.[a-zA-Z0-9]+.[a-zA-Z0-9]+) 6 | Value BIA ([a-zA-Z0-9]+.[a-zA-Z0-9]+.[a-zA-Z0-9]+) 7 | Value DESCRIPTION (.*) 8 | Value IP_ADDRESS (\d+\.\d+\.\d+\.\d+\/\d+) 9 | Value MTU (\d+) 10 | Value DUPLEX (.+duplex?) 11 | Value SPEED (.+?) 12 | Value BANDWIDTH (\d+\s+\w+) 13 | Value DELAY (\d+\s+\w+) 14 | Value ENCAPSULATION (\w+) 15 | 16 | Start 17 | ^${INTERFACE}\s+is\s+${LINK_STATUS},\sline\sprotocol\sis\s${ADMIN_STATE}$$ 18 | ^${INTERFACE}\s+is\s+${LINK_STATUS}$$ 19 | ^admin\s+state\s+is\s+${ADMIN_STATE}, 20 | ^\s+Hardware(:|\s+is)\s+${HARDWARE_TYPE},\s+address(:|\s+is)\s+${ADDRESS}(.*bia\s+${BIA})* -> Details 21 | ^\s*$$ -> Details 22 | 23 | Details 24 | ^\s+Description:\s+${DESCRIPTION} 25 | ^\s+Internet\s+Address\s+is\s+${IP_ADDRESS} 26 | ^\s+${DUPLEX}, ${SPEED}(,|$$) 27 | ^\s+MTU\s+${MTU}.*BW\s+${BANDWIDTH}.*DLY\s+${DELAY} 28 | ^\s+Encapsulation\s+${ENCAPSULATION} 29 | ^\s*$$ -> Record Start 30 | -------------------------------------------------------------------------------- /ntc-templates/cisco_nxos_show_ip_igmp_snooping_groups.template: -------------------------------------------------------------------------------- 1 | #Type: S - Static, D - Dynamic, R - Router port, F - Fabricpath core port 2 | # 3 | #Vlan Group Address Ver Type Port list 4 | #1041 225.0.0.131 v3 D Eth1/2 Eth1/23 Eth1/1 5 | # Eth1/21 Eth1/25 6 | # 7 | Value VLAN (\d+) 8 | Value GROUP (\d+\.\d+\.\d+\.\d+) 9 | Value VERSION
(v\d+) 10 | Value TYPE ([A-Z]) 11 | Value List PORTS (\S+) 12 | 13 | Start 14 | ^${VLAN}\s+${GROUP}\s+${VERSION}\s+${TYPE}\s+${PORTS}(\s|$$) -> Continue 15 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\s+${PORTS}(\s|$$) -> Continue 16 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+${PORTS}(\s|$$) -> Continue 17 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+${PORTS}(\s|$$) -> Continue 18 | ^\d+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+${PORTS}(\s|$$) -> Continue 19 | ^\s{37}${PORTS}(\s|$$) -> Continue 20 | ^\s{37}\S+\s+${PORTS}(\s|$$) -> Continue 21 | ^\s{37}\S+\s+\S+\s+${PORTS}(\s|$$) -> Continue 22 | ^\s{37}\S+\s+\S+\s+\S+\s+${PORTS}(\s|$$) -> Continue 23 | ^\s{37}\S+\s+\S+\s+\S+\s+\S+\s+${PORTS}(\s|$$) -> Continue 24 | ^($$) -> Record 25 | 26 | 27 | -------------------------------------------------------------------------------- /ntc-templates/cisco_nxos_show_ip_mroute.template: -------------------------------------------------------------------------------- 1 | # IP Multicast Routing Table for VRF "default" 2 | # 3 | # (*, 225.0.0.34/32), uptime: 2d12h, pim ip 4 | # Incoming interface: loopback0, RPF nbr: 10.210.255.60 5 | # Outgoing interface list: (count: 1) 6 | # port-channel141, uptime: 2d12h, pim 7 | # 8 | # (10.210.32.205/32, 225.0.0.34/32), uptime: 2d12h, pim ip 9 | # Incoming interface: port-channel141, RPF nbr: 10.210.44.18, internal 10 | # Outgoing interface list: (count: 0) 11 | Value PUBLISHER (\*|\d+\.\d+\.\d+\.\d+\/\d+) 12 | Value GROUP (\d+\.\d+\.\d+\.\d+\/\d+) 13 | Value UPTIME (\S+) 14 | Value INCOMING_IFACE ([a-zA-Z0-9/-]+) 15 | Value OUTGOING_IFACE_COUNT (\d+) 16 | Value List OUTGOING_IFACE (\S+) 17 | 18 | Start 19 | ^\(${PUBLISHER}, ${GROUP}\), uptime: ${UPTIME}, 20 | ^\s+Incoming interface: ${INCOMING_IFACE}, 21 | ^\s+Outgoing interface list: \(count: ${OUTGOING_IFACE_COUNT}\) 22 | ^\s+${OUTGOING_IFACE}, 23 | ^\s*$$ -> Record 24 | 25 | 26 | -------------------------------------------------------------------------------- /ntc-templates/cisco_nxos_show_ip_pim_rp-hash.template:
-------------------------------------------------------------------------------- 1 | #PIM Hash Information for VRF "default" 2 | #PIM RPs for group 225.0.0.35, using hash-length: 1 from BSR: 10.210.255.61 3 | # RP 10.210.255.60, hash: 968088410 (selected) 4 | # 5 | Value GROUP (\d+\.\d+\.\d+\.\d+) 6 | Value HASH_LENGTH (\d+) 7 | Value BSR (\d+\.\d+\.\d+\.\d+) 8 | Value SELECTED_RP (\d+\.\d+\.\d+\.\d+) 9 | 10 | Start 11 | ^PIM RPs for group ${GROUP}, using hash-length: ${HASH_LENGTH} from BSR: ${BSR} 12 | ^\s+RP ${SELECTED_RP}, hash: \d+ \(selected\) 13 | 14 | EOF 15 | 16 | 17 | -------------------------------------------------------------------------------- /ntc-templates/cisco_nxos_show_ip_pim_rp.template: -------------------------------------------------------------------------------- 1 | #PIM RP Status Information for VRF "default" 2 | #PIM RP Information for group 239.65.0.123 in VRF "default" 3 | # 4 | #RP: 10.210.255.16, (0), uptime: 1w2d, expires: 00:02:06, 5 | # priority: 10, RP-source: 10.210.255.61 (B), group ranges: 6 | # 239.129.0.0/16 239.65.0.0/16 7 | #RP: 10.210.255.17, (0), uptime: 1w2d, expires: 00:02:06, 8 | # priority: 5, RP-source: 10.210.255.61 (B), group ranges: 9 | # 239.129.0.0/16 239.65.0.0/16 10 | # 11 | Value Filldown GROUP (\d+\.\d+\.\d+\.\d+) 12 | Value Filldown VRF ([a-z]+) 13 | Value RP (\d+\.\d+\.\d+\.\d+) 14 | Value UPTIME (\S+) 15 | Value EXPIRES ([0-9:]+) 16 | Value PRIORITY (\d+) 17 | Value RP_SOURCE (\d+\.\d+\.\d+\.\d+) 18 | Value RP_PROTOCOL ([A-Z]) 19 | Value List GROUP_RANGES (\d+\.\d+\.\d+\.\d+\/\d+) 20 | 21 | Start 22 | ^PIM RP Status Information for VRF "${VRF}" 23 | ^PIM RP Information for group ${GROUP} in VRF "${VRF}" 24 | ^$$ -> RP_List 25 | 26 | RP_List 27 | ^RP: ${RP}\*?, \(\d+\), uptime: ${UPTIME}, expires: ${EXPIRES}, -> RP_Entry 28 | 29 | RP_Entry 30 | ^\s+priority: ${PRIORITY}, RP-source: ${RP_SOURCE} \(${RP_PROTOCOL}\), group ranges: 31 | ^\s+${GROUP_RANGES}(\s|$$) -> Continue 32 |
^\s+\S+\s+${GROUP_RANGES}(\s|$$) -> Continue 33 | ^\s+\S+\s+\S+\s+${GROUP_RANGES}(\s|$$) -> Continue 34 | ^\s+\S+\s+\S+\s+\S+\s+${GROUP_RANGES}(\s|$$) 35 | ^RP: -> Continue.Record 36 | ^RP: ${RP}, \(\d+\), uptime: ${UPTIME}, expires: ${EXPIRES}, -> Next 37 | ^$$ -> Record -> RP_List 38 | 39 | -------------------------------------------------------------------------------- /ntc-templates/cisco_nxos_show_lldp_neighbors.template: -------------------------------------------------------------------------------- 1 | Value NEIGHBOR (\S+?) 2 | Value LOCAL_INTERFACE (Eth(ernet)?\S+) 3 | Value NEIGHBOR_INTERFACE (\S+) 4 | 5 | Start 6 | ^Device.*ID -> LLDP 7 | 8 | LLDP 9 | ^${NEIGHBOR}${LOCAL_INTERFACE}\s+\d+\s+[\w+\s]+\S+\s+${NEIGHBOR_INTERFACE}\s*$$ -> Record 10 | ^${NEIGHBOR}\s+${LOCAL_INTERFACE}\s+\d+\s+[\w+\s]+\S+\s+${NEIGHBOR_INTERFACE}\s*$$ -> Record 11 | ^\S+$$ -> Continue.Record 12 | ^${NEIGHBOR}$$ 13 | ^\s+${LOCAL_INTERFACE}\s+\d+\s+[\w+\s]+\S+\s+${NEIGHBOR_INTERFACE}$$ -> Record -------------------------------------------------------------------------------- /ntc-templates/index: -------------------------------------------------------------------------------- 1 | 2 | # First line is the header fields for columns and is mandatory. 3 | # Regular expressions are supported in all fields except the first. 4 | # Last field supports variable length command completion. 
5 | # abc[[xyz]] is expanded to abc(x(y(z)?)?)?, regexp inside [[]] is not supported 6 | # 7 | # Rules of Ordering: 8 | # - OS in alphabetical order 9 | # - Command in length order 10 | # - When Length is the same, use alphabetical order 11 | # - Keep space between OS's 12 | # 13 | Template, Hostname, Platform, Command 14 | 15 | arista_eos_show_ip_igmp_snooping_groups.template, .*, arista_eos, sh[[ow]] ip ig[[mp]] s[[nooping]] g[[roups]] 16 | arista_eos_show_ip_pim_rp-hash.template, .*, arista_eos, sh[[ow]] ip pi[[m]] rp-h[[ash]] 17 | arista_eos_show_lldp_neighbors.template, .*, arista_eos, sh[[ow]] ll[[dp]] nei[[ghbors]] 18 | arista_eos_show_ip_pim_rp.template, .*, arista_eos, sh[[ow]] ip pi[[m]] rp 19 | arista_eos_show_ip_mroute.template, .*, arista_eos, sh[[ow]] ip mr[[oute]] 20 | arista_eos_show_port-channel_summary.template, .*, arista_eos, sh[[ow]] port-c[[hannel]] s[[ummary]] 21 | 22 | cisco_ios_show_etherchannel_summary.template, .*, cisco_ios, sh[[ow]] etherc[[hannel]] s[[ummary]] 23 | 24 | cisco_nxos_show_ip_igmp_snooping_groups.template, .*, cisco_nxos, sh[[ow]] ip ig[[mp]] s[[nooping]] g[[roups]] 25 | cisco_nxos_show_ip_pim_rp-hash.template, .*, cisco_nxos, sh[[ow]] ip pi[[m]] rp-h[[ash]] 26 | cisco_nxos_show_lldp_neighbors.template, .*, cisco_nxos, sh[[ow]] ll[[dp]] nei[[ghbors]] 27 | cisco_nxos_show_ip_pim_rp.template, .*, cisco_nxos, sh[[ow]] ip pi[[m]] rp 28 | cisco_nxos_show_ip_mroute.template, .*, cisco_nxos, sh[[ow]] ip mr[[oute]] 29 | cisco_nxos_show_interface.template, .*, cisco_nxos, sh[[ow]] int[[erface]] 30 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | ansible==2.3.2.0 2 | asn1crypto==0.23.0 3 | bcrypt==3.1.4 4 | cffi==1.11.2 5 | cryptography==2.1.4 6 | enum34==1.1.6 7 | idna==2.6 8 | ipaddress==1.0.18 9 | Jinja2==2.10 10 | MarkupSafe==1.0 11 | netaddr==0.7.19 12 | netmiko==1.4.3 13 | paramiko==2.4.0 14
| pyasn1==0.4.2 15 | pycparser==2.18 16 | pycrypto==2.6.1 17 | PyNaCl==1.2.1 18 | PyYAML==3.12 19 | scp==0.10.2 20 | six==1.11.0 21 | textfsm==0.3.2 22 | -------------------------------------------------------------------------------- /secrets/.gitignore: -------------------------------------------------------------------------------- 1 | vault.yml 2 | -------------------------------------------------------------------------------- /ssh_config: -------------------------------------------------------------------------------- 1 | Host bastionhost.local 2 | Hostname bastionhost.local 3 | ControlMaster auto 4 | ControlPath ~/.ssh/ansible-%r@%h:%p 5 | ControlPersist 5m 6 | 7 | Host * 8 | ProxyCommand ssh -W %h:%p bastionhost.local 9 | 10 | -------------------------------------------------------------------------------- /tasks/eos/rp.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve RP 3 | # 4 | --- 5 | - name: "Fetch RP" 6 | ntc_show_command: 7 | command: "show ip pim rp-hash {{ mcast_group }}" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{platforms[os] }}_show_ip_pim_rp.raw" 10 | template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: rp 18 | tags: ['fetch'] 19 | 20 | - debug: 21 | msg: "{{ rp }}" 22 | tags: ['fetch'] 23 | 24 | - copy: 25 | content: "{{ rp.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/rp-{{mcast_group}}.yaml" 27 | tags: ['fetch'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | -------------------------------------------------------------------------------- /tasks/interfaces.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve interface 
details 3 | # 4 | --- 5 | - name: "Fetch interfaces" 6 | ntc_show_command: 7 | command: "show interface" 8 | template_dir: "{{ inventory_dir }}/lib/ntc-ansible/ntc-templates/templates" 9 | platform: "{{ platforms[os] }}" 10 | host: "{{ inventory_hostname }}" 11 | username: "{{ ansible_ssh_user }}" 12 | password: "{{ ansible_ssh_pass }}" 13 | connection_args: 14 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 15 | register: interfaces 16 | tags: ['fetch', 'fetch-interfaces'] 17 | 18 | - debug: 19 | msg: "{{ interfaces }}" 20 | tags: ['fetch', 'fetch-interfaces'] 21 | 22 | - copy: 23 | content: "{{ interfaces.response|to_nice_yaml(indent=2) }}" 24 | dest: "{{ host_results }}/interfaces-{{mcast_group}}.yaml" 25 | tags: ['fetch', 'fetch-interfaces'] 26 | 27 | # vim: set ts=2 shiftwidth=2 expandtab: 28 | -------------------------------------------------------------------------------- /tasks/ios/lags.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve lag details 3 | # 4 | --- 5 | - name: "Fetch lag" 6 | ntc_show_command: 7 | command: "show etherchannel summary" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{platforms[os] }}_show_etherchannel_summary.raw" 10 | template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: lags 18 | tags: ['fetch', 'fetch-lags'] 19 | 20 | - debug: 21 | msg: "{{ lags }}" 22 | tags: ['fetch', 'fetch-lags'] 23 | 24 | - copy: 25 | content: "{{ lags.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/lags-{{mcast_group}}.yaml" 27 | tags: ['fetch', 'fetch-lags'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | -------------------------------------------------------------------------------- 
/tasks/ios/snooping.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve igmp snooping entries 3 | # 4 | --- 5 | - name: "Fetch IGMP Snooping entries" 6 | ntc_show_command: 7 | command: "show ip igmp snooping groups" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{platforms[os] }}_show_ip_igmp_snooping_groups.raw" 10 | template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: snooping 18 | tags: ['fetch'] 19 | 20 | - debug: 21 | msg: "{{ snooping }}" 22 | tags: ['fetch'] 23 | 24 | - copy: 25 | content: "{{ snooping.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/snooping-{{mcast_group}}.yaml" 27 | tags: ['fetch'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | -------------------------------------------------------------------------------- /tasks/lags.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve lag details 3 | # 4 | --- 5 | - name: "Fetch lag" 6 | ntc_show_command: 7 | command: "show port-channel summary" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{platforms[os] }}_show_port-channel_summary.raw" 10 | template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: lags 18 | tags: ['fetch', 'fetch-lags'] 19 | 20 | - debug: 21 | msg: "{{ lags }}" 22 | tags: ['fetch', 'fetch-lags'] 23 | 24 | - copy: 25 | content: "{{ lags.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results 
}}/lags-{{mcast_group}}.yaml" 27 | tags: ['fetch', 'fetch-lags'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | 31 | -------------------------------------------------------------------------------- /tasks/mroutes.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve multicast routing table entries 3 | # 4 | --- 5 | - name: "Fetch multicast routes" 6 | ntc_show_command: 7 | command: "show ip mroute {{ mcast_group }}" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{platforms[os] }}_show_ip_mroute.raw" 10 | template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: mroute 18 | tags: ['fetch', 'fetch-mroutes'] 19 | 20 | - debug: 21 | msg: "{{ mroute }}" 22 | tags: ['fetch', 'fetch-mroutes'] 23 | 24 | - copy: 25 | content: "{{ mroute.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/mroutes-{{mcast_group}}.yaml" 27 | tags: ['fetch', 'fetch-mroutes'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | -------------------------------------------------------------------------------- /tasks/neighbours.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve LLDP neighbours 3 | # 4 | --- 5 | - name: "Fetch LLDP neighbours" 6 | ntc_show_command: 7 | command: "show lldp neighbors" 8 | template_dir: "{{ inventory_dir }}/lib/ntc-ansible/ntc-templates/templates" 9 | platform: "{{ platforms[os] }}" 10 | host: "{{ inventory_hostname }}" 11 | username: "{{ ansible_ssh_user }}" 12 | password: "{{ ansible_ssh_pass }}" 13 | connection_args: 14 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 15 | register: neighbours 16 | tags: ['fetch', 'fetch-neighbours'] 17 | 18 | - debug: 19 | 
msg: "{{ neighbours }}" 20 | tags: ['fetch', 'fetch-neighbours'] 21 | 22 | - copy: 23 | content: "{{ neighbours.response|to_nice_yaml(indent=2) }}" 24 | dest: "{{ host_results }}/neighbours-{{mcast_group}}.yaml" 25 | tags: ['fetch', 'fetch-neighbours'] 26 | 27 | # vim: set ts=2 shiftwidth=2 expandtab: 28 | -------------------------------------------------------------------------------- /tasks/nxos/interfaces.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve interface details 3 | # 4 | --- 5 | - name: "Fetch interfaces" 6 | ntc_show_command: 7 | command: "show interface" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{ platforms[os] }}_show_interface.raw" 10 | template_dir: "{{ playbook_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: interfaces 18 | tags: ['fetch', 'fetch-interfaces'] 19 | 20 | - debug: 21 | msg: "{{ interfaces }}" 22 | tags: ['fetch', 'fetch-interfaces'] 23 | 24 | - copy: 25 | content: "{{ interfaces.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/interfaces-{{mcast_group}}.yaml" 27 | tags: ['fetch', 'fetch-interfaces'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | -------------------------------------------------------------------------------- /tasks/nxos/lags.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve lag details 3 | # 4 | --- 5 | - name: "Fetch lag" 6 | ntc_show_command: 7 | command: "show port-channel summary" 8 | template_dir: "{{ inventory_dir }}/lib/ntc-ansible/ntc-templates/templates" 9 | platform: "{{ platforms[os] }}" 10 | host: "{{ inventory_hostname }}" 11 | username: "{{ ansible_ssh_user }}" 12 | password: "{{ ansible_ssh_pass }}" 13 | 
connection_args: 14 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 15 | register: lags 16 | tags: ['fetch', 'fetch-lags'] 17 | 18 | - debug: 19 | msg: "{{ lags }}" 20 | tags: ['fetch', 'fetch-lags'] 21 | 22 | - copy: 23 | content: "{{ lags.response|to_nice_yaml(indent=2) }}" 24 | dest: "{{ host_results }}/lags-{{mcast_group}}.yaml" 25 | tags: ['fetch', 'fetch-lags'] 26 | 27 | # vim: set ts=2 shiftwidth=2 expandtab: 28 | -------------------------------------------------------------------------------- /tasks/nxos/neighbours.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve LLDP neighbours 3 | # 4 | --- 5 | - name: "Fetch LLDP neighbours" 6 | ntc_show_command: 7 | command: "show lldp neighbors" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/cisco_nxos/cisco_nxos_show_lldp_neighbors.raw" 10 | template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: neighbours 18 | tags: ['fetch', 'fetch-neighbours'] 19 | 20 | - debug: 21 | msg: "{{ neighbours }}" 22 | tags: ['fetch', 'fetch-neighbours'] 23 | 24 | - copy: 25 | content: "{{ neighbours.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/neighbours-{{mcast_group}}.yaml" 27 | tags: ['fetch', 'fetch-neighbours'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | -------------------------------------------------------------------------------- /tasks/rp.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve RP 3 | # 4 | --- 5 | - name: "Fetch RP" 6 | ntc_show_command: 7 | command: "show ip pim rp {{ mcast_group }}" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{platforms[os] }}_show_ip_pim_rp.raw" 10 | 
template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: rp 18 | tags: ['fetch-rp'] 19 | 20 | - debug: 21 | msg: "{{ rp }}" 22 | tags: ['fetch-rp'] 23 | 24 | - copy: 25 | content: "{{ rp.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/rp-{{mcast_group}}.yaml" 27 | tags: ['fetch-rp'] 28 | 29 | # vim: set ts=2 shiftwidth=2 expandtab: 30 | -------------------------------------------------------------------------------- /tasks/snooping.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Retrieve igmp snooping entries 3 | # 4 | --- 5 | - name: "Fetch IGMP Snooping entries" 6 | ntc_show_command: 7 | command: "show ip igmp snooping groups {{ mcast_group }}" 8 | #connection: "offline" 9 | #file: "{{ playbook_dir }}/tests/{{ platforms[os] }}/{{platforms[os] }}_show_ip_igmp_snooping_groups.raw" 10 | template_dir: "{{ inventory_dir }}/ntc-templates" 11 | platform: "{{ platforms[os] }}" 12 | host: "{{ inventory_hostname }}" 13 | username: "{{ ansible_ssh_user }}" 14 | password: "{{ ansible_ssh_pass }}" 15 | connection_args: 16 | ssh_config_file: "{{ inventory_dir }}/ssh_config" 17 | register: snooping 18 | tags: ['fetch', 'fetch-snooping'] 19 | 20 | - debug: 21 | msg: "{{ snooping }}" 22 | tags: ['fetch', 'fetch-snooping'] 23 | 24 | - copy: 25 | content: "{{ snooping.response|to_nice_yaml(indent=2) }}" 26 | dest: "{{ host_results }}/snooping-{{mcast_group}}.yaml" 27 | tags: ['fetch', 'fetch-snooping'] 28 | 29 | 30 | # vim: set ts=2 shiftwidth=2 expandtab: 31 | -------------------------------------------------------------------------------- /templates/graph.j2: -------------------------------------------------------------------------------- 1 | {% macro link_colour(publisher) 
%}{% if publisher == '*' %}red{% else %}darkgreen{% endif %}{% endmacro %} 2 | digraph network { 3 | /* graph metadata */ 4 | label="Multicast routing for {{ mcast_group }} (compiled {{ report._meta.compiled_at }})"; 5 | labelloc=top; 6 | labeljust=left; 7 | 8 | /* hosts */ 9 | {% for host in play_hosts %} 10 | {% if host in report %} 11 | {% if report[host].mroutes|has_mroutes %} 12 | "{{ host }}" [ 13 | shape=box; 14 | {% if report[host].rp|has_rp %} 15 | {% if report[host].rp|get_rp_address|is_rp(report[host].interfaces) %} 16 | style=filled; 17 | fillcolor=grey80; 18 | {% endif %} 19 | label="{{ host }}\nRP: {{ report[host].rp|get_rp_address }}" 20 | {% endif %} 21 | ]; 22 | {% endif %} 23 | {% endif %} 24 | {% endfor %} 25 | 26 | /* multicast routes */ 27 | {% for label, edge in edges.iteritems() %} 28 | {{ edge.left_host }} -> {{ edge.right_host }} [label="{{ label }}"; fontcolor="{{ link_colour(edge.publisher) }}"; color="{{ link_colour(edge.publisher) }}"] 29 | {% endfor %} 30 | 31 | /* IGMP snoops */ 32 | {% for host in play_hosts %} 33 | {% if host in report and report[host].snooping|has_snooping %} 34 | {% for entry in report[host].snooping %} 35 | {% for port in entry.ports %} 36 | {% if port != "Cpu" %} 37 | {{ host }} -> {{ report|get_interface_neighbour(host, port) }} [color="blue"; fontcolor="blue"; label="{{ port }}";]; 38 | {% endif %} 39 | {% endfor %} 40 | {% endfor %} 41 | {% endif %} 42 | {% endfor %} 43 | 44 | /* Publishers */ 45 | {% for host in play_hosts %} 46 | {% if host in report and report[host].mroutes|has_mroutes %} 47 | {% for publisher in report[host].mroutes|get_publishers(report[host].interfaces) %} 48 | {{ host}}_{{ publisher.hostname }} [ 49 | shape=diamond; 50 | style=filled; 51 | fillcolor=orange; 52 | label="{{ publisher.hostname }}\n{{ publisher.ip|ipaddr('address') }}"; 53 | ]; 54 | {{ host }}_{{ publisher.hostname }} -> {{ host }} 55 | {% endfor %} 56 | {% endif %} 57 | {% endfor %} 58 | 59 | /* Legend */ 60 | subgraph 
cluster_key { 61 | label="Key" 62 | pos="0,0!" 63 | 64 | rp [ 65 | shape=box; 66 | style=filled; 67 | fillcolor=grey80; 68 | label="RP"; 69 | ]; 70 | 71 | publisher [ 72 | shape=diamond; 73 | style=filled; 74 | fillcolor=orange; 75 | label="Publisher"; 76 | ]; 77 | 78 | left [ 79 | shape=box; 80 | label="Queried Device"; 81 | ]; 82 | right [ 83 | label="Discovered Device"; 84 | ]; 85 | 86 | left -> right [ 87 | label="PIM (*,G) join"; 88 | fontcolor="{{ link_colour('*') }}"; 89 | color="{{ link_colour('*') }}"; 90 | ] 91 | 92 | left -> right [ 93 | label="PIM (S,G) join"; 94 | fontcolor="{{ link_colour('1.2.3.4') }}"; 95 | color="{{ link_colour('1.2.3.4') }}"; 96 | ] 97 | 98 | left -> right [ 99 | label="IGMP join"; 100 | fontcolor="blue"; 101 | color="blue"; 102 | ] 103 | 104 | publisher -> left; 105 | 106 | } 107 | } 108 | {# vim: set ts=2 shiftwidth=2 expandtab: #} 109 | -------------------------------------------------------------------------------- /templates/report.j2: -------------------------------------------------------------------------------- 1 | --- 2 | _meta: 3 | compiled_at: "{#% now 'local', '%Y-%m-%d %H:%M:%S %z' %#}" 4 | {% for host in play_hosts %} 5 | {{ host }}: 6 | {% for input in ['rp', 'mroutes', 'snooping', 'neighbours', 'interfaces', 'lags'] %} 7 | {{ input }}: 8 | {{ lookup('file', results + '/' + host + '/' + input + '-' + mcast_group + '.yaml')|indent_block(4) }} 9 | {% endfor %} 10 | {% endfor %} 11 | -------------------------------------------------------------------------------- /tests/arista_eos/arista_eos_show_ip_igmp_snooping_groups.raw: -------------------------------------------------------------------------------- 1 | Vlan Group Type Version Port-List 2 | -------------------------------------------------------------------------------- 3 | 3401 233.74.125.32 Dynamic - Cpu, Et1 4 | 3401 239.49.65.5 Dynamic - Cpu, Et3 5 | 324 225.0.0.124 Dynamic - Et13, Et16, Et19, Et23, 6 | Et28, Cpu 
-------------------------------------------------------------------------------- /tests/arista_eos/arista_eos_show_ip_mroute.raw: -------------------------------------------------------------------------------- 1 | PIM Sparse Mode Multicast Routing Table 2 | Flags: E - Entry forwarding on the RPT, J - Joining to the SPT 3 | R - RPT bit is set, S - SPT bit is set, L - Source is attached 4 | W - Wildcard entry, X - External component interest 5 | I - SG Include Join alert rcvd, P - Ex-Prune alert rcvd 6 | H - Joining SPT due to policy, D - Joining SPT due to protocol 7 | Z - Entry marked for deletion, C - Learned from a DR via a register 8 | A - Learned via Anycast RP Router, M - Learned via MSDP 9 | N - May notify MSDP, K - Keepalive timer not running 10 | T - Switching Incoming Interface, B - Learned via Border Router 11 | RPF route: U - From unicast routing table 12 | M - From multicast routing table 13 | 225.0.0.35 14 | 0.0.0.0, 2d18h, RP 192.168.123.60, flags: W 15 | Incoming interface: Ethernet48 16 | RPF route: [U] 192.168.123.60/32 [110/12] via 192.168.123.57 17 | Outgoing interface list: 18 | Vlan2300 19 | 192.168.123.203, 0:52:41, flags: R 20 | Incoming interface: Ethernet48 21 | RPF route: [U] 192.168.123.60/32 [110/12] via 192.168.123.57 22 | -------------------------------------------------------------------------------- /tests/arista_eos/arista_eos_show_ip_pim_rp-hash.raw: -------------------------------------------------------------------------------- 1 | RP 192.168.123.60 2 | PIM v2 Hash Values: 3 | RP: 192.168.123.60 4 | Uptime: 354d12h, Expires: 0:02:21, Priority: 5, HashMaskLen: 1, HashMaskValue: 968088410 5 | RP: 192.168.123.7 6 | Uptime: 354d12h, Expires: 0:02:21, Priority: 20, HashMaskLen: 1, HashMaskValue: 1591431583 7 | -------------------------------------------------------------------------------- /tests/arista_eos/arista_eos_show_ip_pim_rp.raw: -------------------------------------------------------------------------------- 1 | Group: 
225.0.0.0/22 2 | RP: 192.168.123.60 3 | Uptime: 354d17h, Expires: 0:02:22, Priority: 5 4 | RP: 192.168.123.7 5 | Uptime: 354d17h, Expires: 0:02:22, Priority: 20 6 | 7 | -------------------------------------------------------------------------------- /tests/arista_eos/arista_eos_show_port-channel_summary.raw: -------------------------------------------------------------------------------- 1 | 2 | Flags 3 | ------------------------ ---------------------------- ------------------------- 4 | a - LACP Active p - LACP Passive * - static fallback 5 | F - Fallback enabled f - Fallback configured ^ - individual fallback 6 | U - In Use D - Down 7 | + - In-Sync - - Out-of-Sync i - incompatible with agg 8 | P - bundled in Po s - suspended G - Aggregable 9 | I - Individual S - ShortTimeout w - wait for agg 10 | 11 | Number of channels in use: 2 12 | Number of aggregators: 2 13 | 14 | Port-Channel Protocol Ports 15 | ------------------ -------------- ---------------------- 16 | Po47(U) LACP(a) Et47(G+) Et48(G+) 17 | Po51(U) LACP(a) Et51/1(G+) Et52/1(G+) 18 | 19 | -------------------------------------------------------------------------------- /tests/cisco_ios/cisco_ios_show_etherchannel_summary.raw: -------------------------------------------------------------------------------- 1 | 2 | Group Port-channel Protocol Ports 3 | ------+-------------+-----------+----------------------------------------------- 4 | 47 Po47(SU) LACP Gi1/47(P) Gi1/48(P) 5 | 6 | -------------------------------------------------------------------------------- /tests/cisco_nxos/cisco_nxos_show_interface.raw: -------------------------------------------------------------------------------- 1 | loopback0 is up 2 | 3 | Hardware: Loopback 4 | Internet Address is 192.168.123.60/32 5 | MTU 1500 bytes, BW 8000000 Kbit,, BW 8000000 Kbit, DLY 5000 usec 6 | reliability 255/255, txload 1/255, rxload 1/255 7 | Encapsulation LOOPBACK, medium is broadcast 8 | 273399342 packets input 19752844568 bytes 9 | 0 multicast frames 
0 compressed 10 | 0 input errors 0 frame 0 overrun 0 fifo 11 | 1 packets output 56 bytes 0 underruns 12 | 0 output errors 0 collisions 0 fifo 13 | 14 | -------------------------------------------------------------------------------- /tests/cisco_nxos/cisco_nxos_show_ip_igmp_snooping_groups.raw: -------------------------------------------------------------------------------- 1 | Type: S - Static, D - Dynamic, R - Router port, F - Fabricpath core port 2 | 3 | Vlan Group Address Ver Type Port list 4 | 1041 225.0.0.131 v3 D Eth1/2 Eth1/23 Eth1/1 5 | Eth1/21 Eth1/25 6 | 7 | -------------------------------------------------------------------------------- /tests/cisco_nxos/cisco_nxos_show_ip_mroute.raw: -------------------------------------------------------------------------------- 1 | IP Multicast Routing Table for VRF "default" 2 | 3 | (*, 225.0.0.35/32), uptime: 4d15h, pim ip 4 | Incoming interface: Vlan2300, RPF nbr: 192.168.123.18 5 | Outgoing interface list: (count: 1) 6 | Ethernet1/48, uptime: 4d15h, pim 7 | 8 | (192.168.123.203/32, 225.0.0.35/32), uptime: 4d15h, pim mrib ip 9 | Incoming interface: Ethernet1/48, RPF nbr: 192.168.123.153 10 | Outgoing interface list: (count: 0) 11 | -------------------------------------------------------------------------------- /tests/cisco_nxos/cisco_nxos_show_ip_pim_rp-hash.raw: -------------------------------------------------------------------------------- 1 | PIM Hash Information for VRF "default" 2 | PIM RPs for group 225.0.0.35, using hash-length: 1 from BSR: 192.168.123.61 3 | RP 192.168.123.60, hash: 968088410 (selected) 4 | 5 | -------------------------------------------------------------------------------- /tests/cisco_nxos/cisco_nxos_show_ip_pim_rp.raw: -------------------------------------------------------------------------------- 1 | PIM RP Status Information for VRF "default" 2 | PIM RP Information for group 239.65.0.123 in VRF "default" 3 | 4 | RP: 192.168.123.16, (0), uptime: 1w2d, expires: 00:02:06, 5 | 
priority: 10, RP-source: 192.168.123.61 (B), group ranges: 6 | 239.129.0.0/16 239.65.0.0/16 7 | RP: 192.168.123.17, (0), uptime: 1w2d, expires: 00:02:06, 8 | priority: 5, RP-source: 192.168.123.61 (B), group ranges: 9 | 239.129.0.0/16 239.65.0.0/16 10 | -------------------------------------------------------------------------------- /tests/cisco_nxos/cisco_nxos_show_lldp_neighbors.raw: -------------------------------------------------------------------------------- 1 | Capability codes: 2 | (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device 3 | (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other 4 | Device ID Local Intf Hold-time Capability Port ID 5 | core2.local Eth1/16 120 BR Ethernet48 6 | access9.local Eth1/29 120 BR Ethernet43 7 | access9.local Eth1/30 120 BR Ethernet42 8 | access5.local Eth1/31 120 B Eth1/46 9 | access5.local Eth1/32 120 B Eth1/45 10 | access25.local Eth1/41 120 BR Ethernet45 11 | access25.local Eth1/42 120 BR Ethernet46 12 | access15.local Eth1/43 120 BR Ethernet47 13 | access15.local Eth1/44 120 BR Ethernet48 14 | access24.local Eth1/45 120 BR Ethernet45 15 | access24.local Eth1/46 120 BR Ethernet46 16 | access14.local Eth1/47 120 BR Ethernet45 17 | access14.local Eth1/48 120 BR Ethernet46 18 | access12.local Eth2/3 120 BR Ethernet50/1 19 | access22.local Eth2/4 120 BR Ethernet50/1 20 | core1.local Eth2/5 120 B Eth2/5 21 | core1.local Eth2/6 120 B Eth2/6 22 | Total entries displayed: 16 23 | -------------------------------------------------------------------------------- /vars/ntc.yml: -------------------------------------------------------------------------------- 1 | --- 2 | platforms: 3 | ios: cisco_ios 4 | nxos: cisco_nxos 5 | eos: arista_eos 6 | --------------------------------------------------------------------------------
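The parsing in this repo is all driven by the TextFSM templates above. As a rough illustration of what the row rule in `ntc-templates/cisco_nxos_show_lldp_neighbors.template` does, the same shape can be reproduced with plain Python `re` against a fragment of the sample output in `tests/cisco_nxos/cisco_nxos_show_lldp_neighbors.raw`. The `parse_lldp` helper and its simplified regex below are illustrative only and are not part of the repo; the playbooks themselves parse via TextFSM through `ntc_show_command`.

```python
import re

# Simplified stand-in for the single-line row rule in the LLDP template:
# neighbor, local interface, hold-time, capability codes, neighbor interface.
LLDP_ROW = re.compile(r"^(\S+?)\s+(Eth(?:ernet)?\S*)\s+(\d+)\s+(\S+)\s+(\S+)\s*$")

def parse_lldp(raw):
    """Return (neighbor, local_iface, neighbor_iface) tuples from 'show lldp neighbors' text."""
    rows = []
    for line in raw.splitlines():
        m = LLDP_ROW.match(line)
        if m:
            rows.append((m.group(1), m.group(2), m.group(5)))
    return rows

sample = """\
Device ID            Local Intf      Hold-time  Capability  Port ID
core2.local          Eth1/16         120        BR          Ethernet48
access5.local        Eth1/31         120        B           Eth1/46
Total entries displayed: 2"""

print(parse_lldp(sample))
# -> [('core2.local', 'Eth1/16', 'Ethernet48'), ('access5.local', 'Eth1/31', 'Eth1/46')]
```

The real template goes further than this sketch: TextFSM's Value/state machinery also handles the two-line case where a long device name pushes the local interface onto the following line, which a single regex cannot cover cleanly.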