├── .gitignore ├── README.md ├── config.yaml ├── doc └── screenshot.jpg ├── lib ├── __init__.py ├── args.py ├── config.py ├── loghandler.py ├── pipeline.py ├── procnetdev.py ├── sources.py ├── status_server.py ├── system_health_reporter.py └── watchdog.py ├── main.py ├── requirements.txt ├── ui ├── bootstrap-4.1.3 │ ├── bootstrap.min.css │ └── bootstrap.min.css.map ├── index.html ├── jquery-3.3.1.min.js ├── ui.css └── ui.js └── webgui.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | 5 | # C extensions 6 | *.so 7 | 8 | # Distribution / packaging 9 | .Python 10 | *.egg-info/ 11 | .installed.cfg 12 | *.egg 13 | 14 | # PyInstaller 15 | # Usually these files are written by a python script from a template 16 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 17 | *.manifest 18 | *.spec 19 | 20 | # Installer logs 21 | pip-log.txt 22 | pip-delete-this-directory.txt 23 | 24 | # Unit test / coverage reports 25 | htmlcov/ 26 | .tox/ 27 | .coverage 28 | .cache 29 | nosetests.xml 30 | coverage.xml 31 | 32 | # Translations 33 | *.mo 34 | *.pot 35 | 36 | # Django stuff: 37 | *.log 38 | 39 | # Sphinx documentation 40 | docs/_build/ 41 | 42 | # PyBuilder 43 | target/ 44 | 45 | # Output-Files 46 | *.wav 47 | 48 | # editor backups 49 | *~ 50 | *.swp 51 | 52 | # VirtualEnv 53 | /env 54 | /venv 55 | 56 | # IDE project files 57 | /*.iml 58 | /.idea 59 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # A Linux/GStreamer-Based AES67 Multitrack Audio Recording Solution 2 | If you have an AES67-based Audio-over-IP device or network and want to continuously record a given set of tracks uncompressed, this might be for you. 
3 | 4 |  5 | 6 | On 34C3 we used tascam SSD-recorder to record all audio-tracks from all sub-groups to SSDs. this had some major drawbacks; 7 | * the recorder did record all 128 tracks and did not name them; so finding the ones of interest was quite hard 8 | * the recorder did not have NTP and their clocks were not set correctly 9 | * Someone had to unload the SSDs when they were full, carry them to an unloading-station, unload them and carry them back 10 | * The 120GB+ had do be copyied at-once every hours (whenever the SSDs were full) to the storage, spiking the network load 11 | * The backup files were hours long of multi-GB .wav-Files, so seeking in them (via network) was quite a challenge 12 | 13 | On 35C3 we plan to fix these issues by capturing the Audio via AES67 and gstreamer to chunked, nicely named .wav-files, constantly syncing them to a storage-server. 14 | 15 | ## Configuration 16 | For the time being, just take a look at [config.yaml](config.yaml). 17 | 18 | ## Requirements 19 | ``` 20 | # for the main recording application 21 | apt-get install gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-tools libgstreamer1.0-0 gir1.2-gstreamer-1.0 gir1.2-gst-plugins-base-1.0 22 | apt-get install python3 python3-yaml python3-paho-mqtt python3-gi 23 | 24 | # for the web-gui application 25 | sudo apt-get install python3-socketio-client python3-flask 26 | ``` 27 | 28 | ## Running 29 | The recorder itself is a headless application. It publishes status information via tcp on localhost port 9999. 30 | ``` 31 | ./main.py -i my-config.yaml 32 | 33 | nc 127.0.0.1 9999 34 | ``` 35 | 36 | A second application connects to this status-stream and serves a Web-UI on port 9998. 37 | ``` 38 | ./web-ui.py -i my-config.yaml 39 | ``` 40 | 41 | The reason fo this split is, that a Web-UI with Websockets for live level-graphs does not fit nicely with the main recorder, 42 | built around GObject/GStreamer. 
43 | 44 | 45 | ## Troubleshooting 46 | ### Error-Message from gst_element_request_pad 47 | ``` 48 | gst_element_request_pad: assertion 'templ != NULL' failed 49 | WARNING: erroneous pipeline: could not link audiotestsrc0 to mux_0, mux_0 can't handle caps audio/x-raw, format=(string)S24LE, rate=(int)48000, channels=(int)1 50 | ``` 51 | 52 | This is a [known bug in GStreamer](https://bugzilla.gnome.org/show_bug.cgi?id=797241): the `splitmuxsink` is not able to 53 | handle simple audio-only encoders like `wavenc`. A patch has been merged to master which fixes this problem. 54 | Until it lands in your distribution, you probably need to build your own version of GStreamer. 55 | -------------------------------------------------------------------------------- /config.yaml: -------------------------------------------------------------------------------- 1 | source: 2 | # expected sample-format 3 | format: S24BE 4 | 5 | # expected sample-rate 6 | rate: 48000 7 | 8 | sources: 9 | # for RAVENNA VS 10 | #- type: rtsp 11 | # location: rtsp://192.168.178.24:8080/by-id/245599667748866 12 | # channels: 2 13 | 14 | # for Mediornet 15 | #- type: udp 16 | # channels: 8 17 | # address: 239.1.42.1 18 | # iface: enp4s1.3 19 | # port: 5004 20 | 21 | #- type: udp 22 | # channels: 8 23 | # address: 239.1.42.2 24 | # iface: enp4s1.3 25 | # port: 5004 26 | 27 | # for Demo 28 | - type: demo 29 | channels: 8 30 | 31 | - type: demo 32 | channels: 8 33 | 34 | clocking: 35 | source: system 36 | 37 | #source: ptp 38 | #ptp_domain: 0 39 | #ptp_interfaces: ['enp4s1.3'] 40 | 41 | # in milliseconds (passed to the rtpjitterbuffer `latency` property) 42 | #jitterbuffer: 20 # for RAVENNA VS 43 | 44 | # for Mediornet Nexus 45 | jitterbuffer: false 46 | 47 | # name of each channel. unnamed channels will issue a warning and will be named "unknown/XX" 48 | # channel-names can contain slashes, denoting folders. a wall-clock-timestamp and the filetype suffix will be added to the name 49 | # channel-name "s1/mics/head1" might result in a filename like "s1/mics/head1/2018-01-01_10-00-00.wav" 50 | # to *not* record a channel, set its name to the special value "!discard". 51 | channelmap: 52 | 0: "s1/mics/head1" 53 | 1: "s1/mics/head2" 54 | 55 | 2: "s1/mics/atmo1" 56 | 3: "s1/mics/atmo2" 57 | 58 | 4: "!discard" 59 | 5: "!discard" 60 | 6: "!discard" 61 | 7: "!discard" 62 | 63 | 8: "s2/mics/head1" 64 | 9: "s2/mics/head2" 65 | 66 | 10: "s2/mics/atmo1" 67 | 11: "s2/mics/atmo2" 68 | 69 | 12: "!discard" 70 | 13: "!discard" 71 | 14: "!discard" 72 | 15: "!discard" 73 | 74 | capture: 75 | # can also be specified via command-line arg 76 | folder: /video/audio-backup/ 77 | 78 | # length of a segment in seconds 79 | segment-length: 5 80 | 81 | # desired capture-format, check supported with `gst-inspect-1.0 wavenc` 82 | format: S24LE 83 | 84 | status_server: 85 | port: 9999 86 | bind: '::' 87 | level_interval_ms: 250 88 | system_health_report_interval_ms: 1000 89 | 90 | watchdog: 91 | enabled: true 92 | check_interval_ms: 1000 93 | warn_after_missing_signal_for_ms: 2000 94 | restart_after_missing_signal_for_ms: 5000 95 | mqtt: 96 | enabled: true 97 | host: mng.c3voc.de 98 | port: 1883 99 | username: 'aes67-backup' 100 | password: 'xxx' 101 | -------------------------------------------------------------------------------- /doc/screenshot.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/voc/aes67-recorder/734c65999ce78edb0ececceeef21a54b224173a1/doc/screenshot.jpg -------------------------------------------------------------------------------- /lib/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/voc/aes67-recorder/734c65999ce78edb0ececceeef21a54b224173a1/lib/__init__.py 
-------------------------------------------------------------------------------- /lib/args.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | 3 | parser = argparse.ArgumentParser('AES67 Backup') 4 | 5 | parser.add_argument('-v', '--verbose', action='count', default=0, 6 | help="Also print INFO and DEBUG messages.") 7 | 8 | parser.add_argument('-c', '--color', 9 | action='store', 10 | choices=['auto', 'always', 'never'], 11 | default='auto', 12 | help="Control the use of colors in the Log-Output") 13 | 14 | parser.add_argument('-t', '--timestamp', action='store_true', 15 | help="Enable timestamps in the Log-Output") 16 | 17 | parser.add_argument('-i', '--config-file', action='store', required=True, 18 | help="Path to a specific Config-Yaml-File to load") 19 | 20 | parser.add_argument('-s', '--source-url', action='store', 21 | help="RTSP Source-Url") 22 | 23 | parser.add_argument('-f', '--capture-folder', action='store', 24 | help="Destination Folder") 25 | 26 | parser.add_argument('--demo', action='store_true', 27 | help="Enable Demo-Mode, where instead of a real RTSP-Source an internally generated test-source is used") 28 | 29 | 30 | def parse(): 31 | global parser 32 | 33 | return parser.parse_args() 34 | -------------------------------------------------------------------------------- /lib/config.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import os.path 3 | from pprint import pformat 4 | 5 | import yaml 6 | 7 | log = logging.getLogger('Config') 8 | 9 | 10 | def load(args): 11 | config = _load(args) 12 | 13 | if args.demo: 14 | config['sources'] = [{ 15 | "type": "demo", 16 | "channels": 8 17 | }] 18 | 19 | log.debug('Loaded config: \n%s', pformat(config)) 20 | 21 | return config 22 | 23 | 24 | def _load(args): 25 | if args.config_file is not None: 26 | log.info("Loading specified Config-File %s", args.config_file) 27 | with 
open(args.config_file, 'r') as f: 28 | return yaml.safe_load(f) 29 | 30 | else: 31 | files = [ 32 | '/etc/aes67-backup.yaml', 33 | os.path.expanduser('~/.aes67-backup.yaml'), 34 | ] 35 | for file in files: 36 | try: 37 | log.info("Trying to load Config-File %s", file) 38 | with open(file, 'r') as f: 39 | return yaml.safe_load(f) 40 | except (OSError, yaml.YAMLError): 41 | pass 42 | 43 | log.info("No Config-File found") 44 | raise RuntimeError('no config-file found or specified') 45 | -------------------------------------------------------------------------------- /lib/loghandler.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import time 3 | 4 | 5 | class LogFormatter(logging.Formatter): 6 | 7 | def __init__(self, docolor, timestamps=False): 8 | super().__init__() 9 | self.docolor = docolor 10 | self.timestamps = timestamps 11 | 12 | def formatMessage(self, record): 13 | if self.docolor: 14 | c_lvl = 33 15 | c_mod = 32 16 | c_msg = 0 17 | 18 | if record.levelno == logging.WARNING: 19 | c_lvl = 31 20 | # c_mod = 33 21 | c_msg = 33 22 | 23 | elif record.levelno > logging.WARNING: 24 | c_lvl = 31 25 | c_mod = 31 26 | c_msg = 31 27 | 28 | fmt = ''.join([ 29 | '\x1b[%dm' % c_lvl, # set levelname color 30 | '%(levelname)8s', # print levelname 31 | '\x1b[0m', # reset formatting 32 | '\x1b[%dm' % c_mod, # set name color 33 | ' %(name)s', # print name 34 | '\x1b[%dm' % c_msg, # set message color 35 | ': %(message)s', # print message 36 | '\x1b[0m' # reset formatting 37 | ]) 38 | else: 39 | fmt = '%(levelname)8s %(name)s: %(message)s' 40 | 41 | if self.timestamps: 42 | fmt = '%(asctime)s ' + fmt 43 | 44 | if 'asctime' not in record.__dict__: 45 | record.__dict__['asctime'] = time.strftime( 46 | "%Y-%m-%d %H:%M:%S", 47 | time.localtime(record.__dict__['created']) 48 | ) 49 | 50 | return fmt % record.__dict__ 51 | 52 | 53 | class LogHandler(logging.StreamHandler): 54 | 55 | def __init__(self, docolor, timestamps): 56 | super().__init__() 57 | 
self.setFormatter(LogFormatter(docolor, timestamps)) 58 | -------------------------------------------------------------------------------- /lib/pipeline.py: -------------------------------------------------------------------------------- 1 | import json 2 | import logging 3 | import os 4 | import sys 5 | import time 6 | 7 | from gi.repository import Gst 8 | 9 | from lib.sources import Source 10 | from lib.watchdog import Watchdog 11 | 12 | DISCARD_CHANNEL_KEYWORD = "!discard" 13 | 14 | 15 | class Pipeline(object): 16 | 17 | def __init__(self, config, statusServer): 18 | """ 19 | :param statusServer: lib.status_server.StatusServer 20 | :type config: dict 21 | """ 22 | self.config = config 23 | self.statusServer = statusServer 24 | 25 | self.log = logging.getLogger('Pipeline') 26 | pipeline = "" 27 | 28 | sources = config['sources'] 29 | 30 | dirnames = list(self.config['channelmap'].values()) 31 | non_unique = set(x for x in dirnames if dirnames.count(x) > 1 and x != DISCARD_CHANNEL_KEYWORD) 32 | if len(non_unique) > 0: 33 | self.log.error("Multiple channels use the same Recording-Name: " + ', '.join(non_unique)) 34 | self.log.error("Invalid config - Quitting") 35 | sys.exit(42) 36 | 37 | self.log.debug('Constructing Pipeline-Description') 38 | for idx, source in enumerate(sources): 39 | pipeline += self.build_source_pipeline(idx, Source.from_config(config, source)) + "\n" 40 | 41 | # parse pipeline 42 | self.log.debug('Creating Pipeline:\n%s', pipeline) 43 | self.pipeline = Gst.parse_launch(pipeline) 44 | 45 | if config['clocking']['source'] == 'ptp': 46 | from gi.repository import GstNet 47 | ptp_domain = config['clocking']['ptp_domain'] 48 | ptp_interfaces = config['clocking']['ptp_interfaces'] 49 | 50 | self.log.info('initializing PTP-Subsystem for Network-Interface(s) %s', ptp_interfaces) 51 | success = GstNet.ptp_init(GstNet.PTP_CLOCK_ID_NONE, ptp_interfaces) 52 | if success: 53 | self.log.debug('successfully initialized PTP-Subsystem') 
54 | else: 55 | self.log.error('failed to initialize PTP-Subsystem') 56 | sys.exit(42) 57 | 58 | self.log.debug('obtaining PTP-Clock for domain %u', ptp_domain) 59 | ptp_clock = GstNet.PtpClock.new('PTP-Master', ptp_domain) 60 | if ptp_clock is not None: 61 | self.log.debug('obtained PTP-Clock for domain %u', ptp_domain) 62 | else: 63 | self.log.error('failed to obtain PTP-Clock') 64 | sys.exit(42) 65 | 66 | self.log.debug('waiting for PTP-Clock to sync') 67 | ptp_clock.wait_for_sync(Gst.CLOCK_TIME_NONE) 68 | self.log.info('successfully synced PTP-Clock') 69 | 70 | self.pipeline.use_clock(ptp_clock) 71 | 72 | self.log.debug('Calculating Channel-Offsets') 73 | channel_offset = 0 74 | self.channel_offsets = {} 75 | for idx, source in enumerate(sources): 76 | last_channel = channel_offset + source['channels'] - 1 77 | self.log.debug('Source %d provides channels %d to %d', idx, channel_offset, last_channel) 78 | self.channel_offsets[idx] = (channel_offset, last_channel) 79 | channel_offset += source['channels'] 80 | 81 | self.log.debug('Configuring Pipelines') 82 | for idx, source in enumerate(sources): 83 | self.configure_source_pipeline(idx, Source.from_config(config, source)) 84 | 85 | # configure bus 86 | self.log.debug('Binding Bus-Signals') 87 | bus = self.pipeline.get_bus() 88 | bus.add_signal_watch() 89 | bus.enable_sync_message_emission() 90 | 91 | # connect bus-message-handler for error-messages 92 | bus.connect("message::eos", self.on_eos) 93 | bus.connect("message::error", self.on_error) 94 | 95 | # connect bus-message-handler for level-messages 96 | bus.connect("message::element", self.on_message) 97 | 98 | self.watchdog = None 99 | if config['watchdog']['enabled']: 100 | self.log.info('Starting Watchdog') 101 | self.watchdog = Watchdog(config) 102 | 103 | def build_source_pipeline(self, idx, source): 104 | channels = source.source_config['channels'] 105 | pipeline = source.build_pipeline().rstrip() + """ ! 106 | audioconvert ! 
107 | audio/x-raw, channels={channels}, format={capture_format}, rate={rate} ! 108 | tee name=tee_src_{idx} 109 | 110 | tee_src_{idx}. ! audioconvert ! audio/x-raw, format=S16LE ! level interval={level_interval} name=lvl_src_{idx} 111 | tee_src_{idx}. ! deinterleave name=d_src_{idx} 112 | """.format( 113 | idx=idx, 114 | channels=channels, 115 | rate=self.config['source']['rate'], 116 | capture_format=self.config['capture']['format'], 117 | level_interval=self.config['status_server']['level_interval_ms'] * 1000000 118 | ) 119 | 120 | segment_length = self.config['capture']['segment-length'] * 1000000000 121 | for channel in range(0, channels): 122 | dirname = self.config['channelmap'].get(str(channel)) 123 | 124 | if dirname == DISCARD_CHANNEL_KEYWORD: 125 | continue 126 | 127 | pipeline += """ 128 | d_src_{idx}.src_{channel} ! splitmuxsink name=mux_src_{idx}_ch_{channel} muxer=wavenc max-size-time={segment_length} location=/dev/null 129 | """.rstrip().format( 130 | idx=idx, 131 | channel=channel, 132 | segment_length=segment_length 133 | ) 134 | 135 | return pipeline 136 | 137 | def configure_source_pipeline(self, source_idx, source): 138 | channels = source.source_config['channels'] 139 | channel_offset, last_channel = self.channel_offsets[source_idx] 140 | 141 | self.log.debug('configuring channels %d to %d', channel_offset, last_channel) 142 | for source_channel in range(0, channels): 143 | channel = channel_offset + source_channel 144 | dirname = self.config['channelmap'].get(channel) 145 | 146 | if dirname == DISCARD_CHANNEL_KEYWORD: 147 | continue 148 | 149 | if dirname is None: 150 | dirname = "unknown/{channel}".format(channel=channel) 151 | self.log.warning("Channel {channel} has no mapping in the config and will be recorded as {dirname}" 152 | .format(channel=channel, dirname=dirname)) 153 | 154 | dirpath = os.path.join(self.config['capture']['folder'], dirname) 155 | os.makedirs(dirpath, exist_ok=True) 156 | 157 | el = self.pipeline.get_by_name( 158 | 
"mux_src_{source}_ch_{channel}".format(source=source_idx, channel=source_channel)) 159 | el.connect('format-location', self.on_format_location, channel, dirpath) 160 | 161 | def start(self): 162 | # start process 163 | self.log.debug('Launching Mixing-Pipeline') 164 | self.pipeline.set_state(Gst.State.PLAYING) 165 | 166 | def on_format_location(self, mux, fragment, channel, dirpath): 167 | filename = time.strftime('%Y-%m-%d_%H-%M-%S', time.localtime()) + ".wav" 168 | filepath = os.path.join(dirpath, filename) 169 | self.log.info("constructing filepath for channel {channel}: {filepath}".format( 170 | channel=channel, filepath=filepath)) 171 | self.send_filepath_message(channel, filepath) 172 | return filepath 173 | 174 | def on_message(self, bus, msg): 175 | if not msg.src.name.startswith('lvl_'): 176 | return 177 | 178 | if msg.type != Gst.MessageType.ELEMENT: 179 | return 180 | 181 | src_idx = int(msg.src.name[len("lvl_src_"):]) 182 | rms = msg.get_structure().get_value('rms') 183 | peak = msg.get_structure().get_value('peak') 184 | decay = msg.get_structure().get_value('decay') 185 | self.log.debug('level_callback src #%u\n rms=%s\n peak=%s\n decay=%s', src_idx, rms, peak, decay) 186 | self.send_level_message(src_idx, rms, peak, decay) 187 | if getattr(self, 'watchdog', None): self.watchdog.ping(src_idx) 188 | 189 | def on_eos(self, bus, message): 190 | self.log.debug('Received End-of-Stream-Signal on Mixing-Pipeline') 191 | 192 | def on_error(self, bus, message): 193 | self.log.debug('Received Error-Signal on Mixing-Pipeline') 194 | (error, debug) = message.parse_error() 195 | self.log.debug('Error-Details: #%u: %s', error.code, debug) 196 | 197 | def send_level_message(self, src_idx, rms, peak, decay): 198 | from_channel, to_channel = self.channel_offsets[src_idx] 199 | message = json.dumps({ 200 | "type": "audio_level", 201 | "source_index": src_idx, 202 | "from_channel": from_channel, 203 | "to_channel": to_channel, 204 | "rms": rms, 205 | "peak": peak, 206 | "decay": decay, 207 | }) 208 | 
self.statusServer.transmit(message) 209 | 210 | def send_filepath_message(self, channel, filepath): 211 | message = json.dumps({ 212 | "type": "new_filepath", 213 | "channel_index": channel, 214 | "filepath": filepath, 215 | }) 216 | self.statusServer.transmit(message) 217 | -------------------------------------------------------------------------------- /lib/procnetdev.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime 2 | 3 | 4 | # https://gist.github.com/cnelson/1658752 5 | class ProcNetDev(object): 6 | """Parses /proc/net/dev into a usable Python data structure. 7 | 8 | By default each time you access the structure, /proc/net/dev is re-read 9 | and parsed so data is always current. 10 | 11 | If you want to disable this feature, pass auto_update=False to the constructor. 12 | 13 | >>> pnd = ProcNetDev() 14 | >>> pnd['eth0']['receive']['bytes'] 15 | 976329938704 16 | 17 | """ 18 | 19 | def __init__(self, auto_update=True): 20 | """Opens a handle to /proc/net/dev and sets up the initial object.""" 21 | 22 | # we don't wrap this in a try as we want to raise an IOError if it's not there 23 | self.proc = open('/proc/net/dev', 'r') 24 | 25 | # we store our data here, this is populated in update() 26 | self.data = None 27 | self.updated = None 28 | self.auto_update = auto_update 29 | 30 | self.update() 31 | 32 | def __iter__(self): 33 | return self.data.__iter__() 34 | 35 | def __getitem__(self, key): 36 | """Allows accessing the interfaces as self['eth0']""" 37 | if self.auto_update: 38 | self.update() 39 | 40 | return self.data[key] 41 | 42 | def __len__(self): 43 | """Returns the number of interfaces available.""" 44 | return len(self.data.keys()) 45 | 46 | def __contains__(self, key): 47 | """Implements contains by testing for a KeyError.""" 48 | try: 49 | self[key] 50 | return True 51 | except KeyError: 52 | return False 53 | 54 | def __bool__(self):  # named __nonzero__ in the original Python 2 gist 55 | """Evaluates to true if we've gotten data""" 
56 | if self.updated: 57 | return True 58 | else: 59 | return False 60 | 61 | def __del__(self): 62 | """Ensure our filehandle is closed when we shut down.""" 63 | try: 64 | self.proc.close() 65 | except AttributeError: 66 | pass 67 | 68 | def update(self): 69 | """Updates the instance's internal data structures.""" 70 | 71 | # reset our location 72 | self.proc.seek(0) 73 | 74 | # read our first line, and note the character positions, it's important for later 75 | headerline = self.proc.readline() 76 | if not headerline.count('|'): 77 | raise ValueError("Header was not in the expected format") 78 | 79 | # we need to find out where all the pipes are 80 | sections = [] 81 | 82 | position = -1 83 | while position: 84 | last_position = position + 1 85 | position = headerline.find('|', last_position) 86 | 87 | if position < 0: 88 | position = None 89 | 90 | sections.append((last_position, position, headerline[last_position:position].strip().lower())) 91 | 92 | # first section is junk "Inter- 93 | sections.pop(0) 94 | 95 | # now get the labels 96 | labelline = self.proc.readline().strip("\n") 97 | labels = [] 98 | for section in sections: 99 | labels.append(labelline[section[0]:section[1]].split()) 100 | 101 | interfaces = {} 102 | # now get the good stuff 103 | for info in self.proc.readlines(): 104 | info = info.strip("\n") 105 | 106 | # split the data into interface name and counters 107 | (name, data) = info.split(":", 1) 108 | 109 | # clean them up 110 | name = name.strip() 111 | data = data.split() 112 | 113 | interfaces[name] = {} 114 | absolute_position = 0 115 | 116 | # loop through each section, receive, transmit, etc 117 | for section_number in range(len(sections)): 118 | tmp = {} 119 | 120 | # now loop through each label in that section 121 | # they aren't always the same! 
transmit doesn't have multicast for example 122 | for label_number in range(len(labels[section_number])): 123 | # for each label, we need to associate it with its data 124 | # we use absolute position since the label_number resets for each section 125 | tmp[labels[section_number][label_number]] = int(data[absolute_position]) 126 | absolute_position += 1 127 | 128 | # push our data into the final location 129 | # name=eth0, section[i][2] = receive (for example) 130 | interfaces[name][sections[section_number][2]] = tmp 131 | 132 | # update the instance level variables. 133 | self.data = interfaces 134 | self.updated = datetime.utcnow() 135 | -------------------------------------------------------------------------------- /lib/sources.py: -------------------------------------------------------------------------------- 1 | from abc import abstractmethod 2 | 3 | 4 | class Source(object): 5 | @staticmethod 6 | def from_config(config, source_config): 7 | type = source_config['type'].lower() 8 | if type == 'demo': 9 | return DemoSource(config, source_config) 10 | elif type == 'rtsp': 11 | return RtspSource(config, source_config) 12 | elif type == 'udp': 13 | return UdpSource(config, source_config) 14 | else: 15 | raise RuntimeError("Unknown type of source: " + type) 16 | 17 | def __init__(self, config, source_config): 18 | self.config = config 19 | self.source_config = source_config 20 | 21 | @abstractmethod 22 | def build_pipeline(self): 23 | pass 24 | 25 | def _build_jitterbuffer(self): 26 | if self.config['clocking']['jitterbuffer']: 27 | return "rtpjitterbuffer latency={latency} !".format(latency=self.config['clocking']['jitterbuffer']) 28 | else: 29 | return "" 30 | 31 | def _build_sourcecaps(self): 32 | return "audio/x-raw, channels={channels}, format={source_format}, rate={rate}".format( 33 | channels=self.source_config['channels'], 34 | rate=self.config['source']['rate'], 35 | source_format=self.config['source']['format'], 36 | ) 37 | 38 | 39 | class 
DemoSource(Source): 40 | def build_pipeline(self): 41 | return """ 42 | audiotestsrc is-live=true 43 | """ 44 | 45 | 46 | class RtspSource(Source): 47 | def build_pipeline(self): 48 | return """ 49 | rtspsrc protocols=udp-mcast location={location} ! 50 | {jitterbuffer} 51 | rtpL24depay ! 52 | {sourcecaps} 53 | """.format( 54 | location=self.source_config['location'], 55 | jitterbuffer=self._build_jitterbuffer(), 56 | sourcecaps=self._build_sourcecaps(), 57 | ) 58 | 59 | 60 | class UdpSource(Source): 61 | def build_pipeline(self): 62 | return """ 63 | udpsrc address={address} port={port} multicast-iface={iface} ! 64 | application/x-rtp, clock-rate={rate}, channels={channels} ! 65 | {jitterbuffer} 66 | rtpL24depay ! 67 | {sourcecaps} 68 | """.format( 69 | address=self.source_config['address'], 70 | port=self.source_config['port'], 71 | iface=self.source_config.get('iface'), 72 | rate=self.config['source']['rate'], 73 | channels=self.source_config['channels'], 74 | jitterbuffer=self._build_jitterbuffer(), 75 | sourcecaps=self._build_sourcecaps(), 76 | ) 77 | -------------------------------------------------------------------------------- /lib/status_server.py: -------------------------------------------------------------------------------- 1 | import json 2 | import logging 3 | import socket 4 | 5 | from gi.repository import GObject 6 | 7 | 8 | class StatusServer(object): 9 | def __init__(self, config): 10 | self.log = logging.getLogger('StatusServer') 11 | self.config = config 12 | 13 | self.boundSocket = None 14 | self.currentConnections = dict() 15 | 16 | port = config['status_server']['port'] 17 | bind = config['status_server']['bind'] 18 | self.log.debug('Binding to Source-Socket on [%s]:%u', bind, port) 19 | self.boundSocket = socket.socket(socket.AF_INET6) 20 | self.boundSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 21 | self.boundSocket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 22 | False) 23 | 24 | self.boundSocket.bind((bind, port)) 
25 | self.boundSocket.listen(1) 26 | 27 | self.log.debug('Setting GObject io-watch on Socket') 28 | GObject.io_add_watch(self.boundSocket, GObject.IO_IN, self.on_connect) 29 | 30 | def on_connect(self, sock, *args): 31 | conn, addr = sock.accept() 32 | conn.setblocking(False) 33 | 34 | self.log.info("Incoming Connection from [%s]:%u (fd=%u)", 35 | addr[0], addr[1], conn.fileno()) 36 | 37 | self.currentConnections[conn] = conn 38 | self.log.info('Now %u Receiver(s) connected', len(self.currentConnections)) 39 | 40 | self.log.debug('setting gobject io-watch on connection') 41 | GObject.io_add_watch(conn, GObject.IO_ERR | GObject.IO_HUP, self.on_error) 42 | 43 | self.send_config(conn) 44 | return True 45 | 46 | def on_error(self, conn, condition): 47 | self.close_connection(conn) 48 | 49 | def close_connection(self, conn): 50 | if conn in self.currentConnections: 51 | conn.close() 52 | del self.currentConnections[conn] 53 | 54 | self.log.info('Now %u Receiver(s) connected', len(self.currentConnections)) 55 | 56 | def transmit(self, line): 57 | connections = list(self.currentConnections) 58 | for conn in connections: 59 | try: 60 | conn.sendall(bytes(line + "\n", "utf-8")) 61 | except Exception: 62 | self.log.debug('Exception during transmit, closing connection') 63 | self.close_connection(conn) 64 | 65 | def send_config(self, conn): 66 | message = {"type": "system_config"} 67 | # deep-copy via json so masking the password does not modify the live config 68 | message.update(json.loads(json.dumps(self.config))) 69 | message['watchdog']['mqtt']['password'] = '***' 70 | 71 | line = json.dumps(message) 72 | conn.sendall(bytes(line + "\n", "utf-8")) 73 | -------------------------------------------------------------------------------- /lib/system_health_reporter.py: -------------------------------------------------------------------------------- 1 | import json 2 | import logging 3 | import os 4 | 5 | from gi.repository import GObject 6 | 7 | from lib.procnetdev import ProcNetDev 8 | 9 | 10 | class SystemHealthReporter(object): 11 | def __init__(self, config, statusServer): 12 | 
self.log = logging.getLogger('SystemHealthReporter') 13 | self.config = config 14 | self.statusServer = statusServer 15 | 16 | self.log.info('Fetching initial network status') 17 | self.last_net_stats = ProcNetDev(auto_update=False) 18 | 19 | self.log.debug('Setting Timer for System-Health-Reports') 20 | GObject.timeout_add(config['status_server']['system_health_report_interval_ms'], self.send_system_health) 21 | 22 | def send_system_health(self): 23 | self.log.debug('Sending System-Health-Report') 24 | f_bsize, f_frsize, f_blocks, f_bfree, f_bavail, f_files, f_ffree, f_favail = \ 25 | os.statvfs(self.config['capture']['folder'])[0:8] 26 | 27 | updated_net_stats = ProcNetDev(auto_update=False) 28 | last_net_stats = self.last_net_stats 29 | seconds = (updated_net_stats.updated - last_net_stats.updated).seconds 30 | if seconds < 1: 31 | self.log.warning("System-Health-Report triggered again within a second, skipping") 32 | return True 33 | 34 | message = json.dumps({ 35 | "type": "system_health_report", 36 | 37 | "bytes_total": f_blocks * f_frsize, 38 | "bytes_free": f_bfree * f_frsize, 39 | "bytes_available": f_bavail * f_frsize, 40 | "bytes_available_percent": f_bavail / f_blocks, 41 | "bytes_used": (f_blocks - f_bfree) * f_frsize, 42 | 43 | "inodes_total": f_files, 44 | "inodes_free": f_ffree, 45 | "inodes_available": f_favail, 46 | "inodes_available_percent": f_favail / f_files, 47 | "inodes_used": (f_files - f_ffree), 48 | 49 | "interfaces": dict(map( 50 | lambda ifname: ( 51 | ifname, 52 | self.extract_interface_data(self.last_net_stats[ifname], updated_net_stats[ifname], seconds)), 53 | updated_net_stats 54 | )) 55 | }) 56 | self.last_net_stats = updated_net_stats 57 | self.statusServer.transmit(message) 58 | return True 59 | 60 | def extract_interface_data(self, last_net_stats, updated_net_stats, seconds): 61 | return { 62 | "rx": { 63 | "bytes": updated_net_stats['receive']['bytes'], 64 | "packets": updated_net_stats['receive']['packets'], 65 | "bytes_per_second": 
(updated_net_stats['receive']['bytes'] - 66 | last_net_stats['receive']['bytes']) / seconds, 67 | "packets_per_second": (updated_net_stats['receive']['packets'] - 68 | last_net_stats['receive']['packets']) / seconds 69 | }, 70 | "tx": { 71 | "bytes": updated_net_stats['transmit']['bytes'], 72 | "packets": updated_net_stats['transmit']['packets'], 73 | "bytes_per_second": (updated_net_stats['transmit']['bytes'] - 74 | last_net_stats['transmit']['bytes']) / seconds, 75 | "packets_per_second": (updated_net_stats['transmit']['packets'] - 76 | last_net_stats['transmit']['packets']) / seconds 77 | }, 78 | } 79 | -------------------------------------------------------------------------------- /lib/watchdog.py: -------------------------------------------------------------------------------- 1 | import json 2 | import logging 3 | import socket 4 | import sys 5 | from datetime import datetime 6 | 7 | import paho.mqtt.client as mqtt 8 | from gi.repository import GObject 9 | 10 | 11 | class Watchdog(object): 12 | def __init__(self, config): 13 | self.config = config 14 | self.log = logging.getLogger("Watchdog") 15 | 16 | num_sources = len(config['sources']) 17 | self.last_ping = datetime.utcnow() 18 | GObject.timeout_add(config['watchdog']['check_interval_ms'], self.check_status) 19 | 20 | self.mqtt = None 21 | if config['watchdog']['mqtt']['enabled']: 22 | self.init_mqtt() 23 | 24 | def init_mqtt(self): 25 | self.mqtt = mqtt.Client() 26 | if self.config['watchdog']['mqtt']['username'] and self.config['watchdog']['mqtt']['password']: 27 | # paho requires credentials to be set before connect() 28 | self.mqtt.username_pw_set( 29 | self.config['watchdog']['mqtt']['username'], 30 | self.config['watchdog']['mqtt']['password']) 31 | 32 | self.mqtt.connect( 33 | self.config['watchdog']['mqtt']['host'], 34 | self.config['watchdog']['mqtt']['port'], 35 | keepalive=60, bind_address="") 36 | self.mqtt.loop_start() 37 | 38 | def ping(self, src_idx): 39 | self.log.debug("Got Ping") 40 | self.last_ping = datetime.utcnow() 41 | 42 | def 
check_status(self): 43 | now = datetime.utcnow() 44 | d = (now - self.last_ping) 45 | milliseconds = int(d.total_seconds() * 1000) 46 | if milliseconds > self.config['watchdog']['warn_after_missing_signal_for_ms']: 47 | self.report( 48 | "warn", 49 | "No Data received within {} ms".format(milliseconds)) 50 | 51 | if milliseconds > self.config['watchdog']['restart_after_missing_signal_for_ms']: 52 | self.report( 53 | "error", 54 | "No Data received within {} ms, restarting recorder".format(milliseconds)) 55 | sys.exit(42) 56 | 57 | return True 58 | 59 | def report(self, level, message): 60 | if level == 'warn': 61 | self.log.warning(message) 62 | else: 63 | self.log.error(message) 64 | 65 | if self.mqtt: 66 | self.mqtt.publish("/voc/alert", json.dumps({ 67 | "level": level, 68 | "msg": message, 69 | "component": socket.getfqdn() + ':aes67-recorder' 70 | })) 71 | -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import logging 3 | import signal 4 | import sys 5 | 6 | import gi 7 | 8 | from lib.system_health_reporter import SystemHealthReporter 9 | 10 | gi.require_version('Gst', '1.0') 11 | gi.require_version('GstNet', '1.0') 12 | from gi.repository import Gst, GObject 13 | 14 | # import local classes 15 | from lib.loghandler import LogHandler 16 | from lib.pipeline import Pipeline 17 | from lib.status_server import StatusServer 18 | import lib.args 19 | import lib.config 20 | 21 | # check min-version 22 | minGst = (1, 5) 23 | minPy = (3, 0) 24 | 25 | Gst.init([]) 26 | if Gst.version() < minGst: 27 | raise Exception('GStreamer version', Gst.version(), 28 | 'is too old, at least', minGst, 'is required') 29 | 30 | if sys.version_info < minPy: 31 | raise Exception('Python version', sys.version_info, 32 | 'is too old, at least', minPy, 'is required') 33 | 34 | # init GObject threading 35 | GObject.threads_init() 36 | 37 | 38 | class Backuptool(object): 39 | def __init__(self, config): 40 | """ 41 | :type config: lib.config.VocConfigParser 42 | """ 43 | self.config = config 44 | 45 | # initialize mainloop 46 | self.log = logging.getLogger('Main') 47 | self.log.debug('creating GObject-MainLoop') 48 | self.mainloop = GObject.MainLoop() 49 | 50 | # initialize subsystem 51 | self.log.debug('creating Status-Server') 52 | self.statusServer = StatusServer(config) 53 | 54 | self.log.debug('creating System-Health Reporter') 55 | self.systemHealthReporter = SystemHealthReporter(config, self.statusServer) 56 | 57 | 58 | self.log.debug('creating Audio-Pipeline') 59 | self.pipeline = Pipeline(config, self.statusServer) 60 | 61 | def run(self): 62 | self.log.info('starting Pipeline') 63 | self.pipeline.start() 64 | 65 | try: 66 | self.log.info('running GObject-MainLoop') 67 | self.mainloop.run() 68 | except KeyboardInterrupt: 69 | self.log.info('Terminated via Ctrl-C') 70 | 71 | def quit(self): 72 | self.log.info('quitting GObject-MainLoop') 73 | self.mainloop.quit() 74 | 75 | 76 | # run mainclass 77 | def main(): 78 | # parse command-line args 79 | args = lib.args.parse() 80 | 81 | docolor = (args.color == 'always') \ 82 | or (args.color == 'auto' and sys.stderr.isatty()) 83 | 84 | handler = LogHandler(docolor, args.timestamp) 85 | logging.root.handlers = [handler] 86 | 87 | if args.verbose >= 2: 88 | level = logging.DEBUG 89 | elif args.verbose == 1: 90 | level = logging.INFO 91 | else: 92 | level = logging.WARNING 93 | 94 | logging.root.setLevel(level) 95 | 96 | # make killable by ctrl-c 97 | logging.debug('setting SIGINT handler') 98 | signal.signal(signal.SIGINT, signal.SIG_DFL) 99 | 100 | logging.info('Python Version: %s', sys.version_info) 101 | logging.info('GStreamer Version: %s', Gst.version()) 102 | 103 | logging.debug('loading Config') 104 | config = lib.config.load(args) 105 | 106 | # init main-class and main-loop 107 
| logging.debug('initializing AES67-Backup') 108 | backup_tool = Backuptool(config) 109 | 110 | logging.debug('running AES67-Backup') 111 | backup_tool.run() 112 | 113 | 114 | if __name__ == '__main__': 115 | try: 116 | main() 117 | except RuntimeError as e: 118 | logging.error(str(e)) 119 | sys.exit(1) 120 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | aiohttp==3.5.1 2 | async-timeout==3.0.1 3 | attrs==18.2.0 4 | chardet==3.0.4 5 | idna==2.8 6 | idna-ssl==1.1.0 7 | multidict==4.5.2 8 | paho-mqtt==1.4.0 9 | pycairo==1.18.0 10 | PyGObject==3.30.4 11 | PyYAML==3.13 12 | typing-extensions==3.6.6 13 | websockets==7.0 14 | yarl==1.3.0 15 | -------------------------------------------------------------------------------- /ui/index.html: --------------------------------------------------------------------------------
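The disk figures in the `system_health_report` message come straight from `os.statvfs()`. A minimal, hypothetical sketch of the same mapping (the `disk_metrics` helper below is not part of the project), runnable against any mounted path. `f_bfree` counts blocks reserved for root while `f_bavail` does not, which is why both figures are reported:

```python
import os


def disk_metrics(path):
    """Disk usage figures as derived in SystemHealthReporter.send_system_health.

    Illustration only; not part of the recorder's code base.
    """
    st = os.statvfs(path)
    return {
        'bytes_total': st.f_blocks * st.f_frsize,
        'bytes_free': st.f_bfree * st.f_frsize,        # incl. root-reserved blocks
        'bytes_available': st.f_bavail * st.f_frsize,  # usable by ordinary processes
        'bytes_available_percent': st.f_bavail / st.f_blocks,
        'bytes_used': (st.f_blocks - st.f_bfree) * st.f_frsize,
        'inodes_total': st.f_files,
        'inodes_free': st.f_ffree,
        'inodes_available': st.f_favail,
    }


metrics = disk_metrics('/')
print(metrics['bytes_available'] <= metrics['bytes_free'])  # → True
```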
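The per-second rate computation in `SystemHealthReporter.extract_interface_data` boils down to a counter delta divided by the elapsed time. The same calculation in isolation — the snapshot dicts below are hand-made illustrations of the per-interface entry shape read from `procnetdev`, not real `/proc/net/dev` data:

```python
def interface_rates(last, current, seconds):
    """Per-second rx/tx rates from two counter snapshots.

    `last`/`current` follow the entry shape used by SystemHealthReporter:
    {'receive': {'bytes': ..., 'packets': ...},
     'transmit': {'bytes': ..., 'packets': ...}}
    """
    def direction(key):
        return {
            'bytes': current[key]['bytes'],
            'packets': current[key]['packets'],
            'bytes_per_second':
                (current[key]['bytes'] - last[key]['bytes']) / seconds,
            'packets_per_second':
                (current[key]['packets'] - last[key]['packets']) / seconds,
        }
    return {'rx': direction('receive'), 'tx': direction('transmit')}


# hand-made snapshots, taken two seconds apart
last = {'receive': {'bytes': 1000, 'packets': 10},
        'transmit': {'bytes': 0, 'packets': 0}}
current = {'receive': {'bytes': 3000, 'packets': 30},
           'transmit': {'bytes': 500, 'packets': 5}}

rates = interface_rates(last, current, 2)
print(rates['rx']['bytes_per_second'])    # → 1000.0
print(rates['tx']['packets_per_second'])  # → 2.5
```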
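The staleness check in `Watchdog.check_status` needs the age of the last ping in milliseconds. `timedelta.total_seconds()` yields this in one step, whereas summing `d.seconds` and `d.microseconds` by hand silently drops the `days` component. A minimal sketch (the `to_milliseconds` helper is illustrative, not part of the project):

```python
from datetime import timedelta


def to_milliseconds(d):
    """Whole milliseconds in a timedelta, days included."""
    return int(d.total_seconds() * 1000)


print(to_milliseconds(timedelta(seconds=2, microseconds=500000)))  # → 2500
print(to_milliseconds(timedelta(days=1)))                          # → 86400000
```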