├── .gitignore
├── .gitmodules
├── README.md
├── can_utils
│   ├── can_utils.py
│   ├── candump_converter.py
│   ├── dbc_file_from_can_log.py
│   ├── list_can_ids.py
│   └── list_can_messages.py
├── data_log.py
├── examples
│   ├── README.md
│   ├── accessport_sample.csv
│   ├── can_sample.log
│   ├── csv_sample.csv
│   └── sample_can_spec.dbc
├── license.fuck
├── motec_log.py
└── motec_log_generator.py

/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | __pycache__
3 | 
4 | # MoTeC logs
5 | *.ld
6 | *.ldx
7 | 
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "ldparser"]
2 |     path = ldparser
3 |     url = https://github.com/gotzl/ldparser
4 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # MotecLogGenerator
2 | 
3 | Utility for generating MoTeC .ld files from external log sources, which can then be analyzed with [i2 Pro](https://www.motec.com.au/i2/i2overview/). Generated log files are "Pro Enabled", so they can be opened in either *i2 Standard* or *i2 Pro*.
4 | 
5 | Currently the following input types are supported:
6 | * Raw CAN bus logs (see logging instructions below)
7 | * CSV files
8 | * [COBB Accessport](https://www.cobbtuning.com/products/accessport) logs
9 | 
10 | CAN bus logs must be paired with a [DBC](https://docs.openvehicles.com/en/latest/components/vehicle_dbc/docs/dbc-primer.html) file describing the structure of the frames.
11 | 
12 | This will resample all signals from the input log to a fixed frequency. This is done because MoTeC expects channels to have messages at a constant frequency, which may not always be the case for input log files. This is especially true for CAN logs from a vehicle, where some messages are only triggered by certain actions. The frequency at which the data is resampled is configurable (see usage below).
13 | 
14 | *Tip:* To clone with the submodule included run:
15 | ```bash
16 | git clone --recursive git@github.com:stevendaniluk/MotecLogGenerator.git
17 | ```
18 | 
19 | ## Usage
20 | Check out the examples directory for some sample log files to test the tool with.
21 | 
22 | ### CAN Bus Logs
23 | ```bash
24 | python3 motec_log_generator.py /path/to/my/data/can_data.log CAN --dbc /path/to/my/data/car.dbc
25 | ```
26 | 
27 | This will generate a MoTeC .ld file `/path/to/my/data/can_data.ld`.
28 | 
29 | ### CSV Logs
30 | ```bash
31 | python3 motec_log_generator.py /path/to/my/data/csv_data.csv CSV
32 | ```
33 | 
34 | This will generate a MoTeC .ld file `/path/to/my/data/csv_data.ld`.
35 | 
36 | **The first column of the CSV file must be time.** Channels will not be assigned any units.
37 | 
38 | ### Accessport Logs
39 | 
40 | ```bash
41 | python3 motec_log_generator.py /path/to/my/data/accessport_data.csv ACCESSPORT
42 | ```
43 | 
44 | This will generate a MoTeC .ld file `/path/to/my/data/accessport_data.ld`.
45 | 
46 | ### Additional Options
47 | A different destination and filename for the generated .ld file can also be specified by adding the following to the command:
48 | ```bash
49 | --output /path/to/different/location/new_filename.ld
50 | ```
51 | 
52 | It is also possible to provide additional arguments to populate the metadata in the MoTeC log file for driver, venue, vehicle, etc. See the usage below for full details.
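For example, a CAN log could be converted at 50 Hz with a custom output location and a few metadata fields filled in (the paths and values here are purely illustrative):
```bash
python3 motec_log_generator.py /path/to/my/data/can_data.log CAN \
    --dbc /path/to/my/data/car.dbc \
    --output /path/to/different/location/new_filename.ld \
    --frequency 50 \
    --driver "A. Driver" \
    --venue_name "Local Track" \
    --vehicle_id "Car 1"
```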
53 | 
54 | ```
55 | usage: motec_log_generator.py [-h] [--output OUTPUT] [--frequency FREQUENCY]
56 |                               [--dbc DBC] [--driver DRIVER]
57 |                               [--vehicle_id VEHICLE_ID]
58 |                               [--vehicle_weight VEHICLE_WEIGHT]
59 |                               [--vehicle_type VEHICLE_TYPE]
60 |                               [--vehicle_comment VEHICLE_COMMENT]
61 |                               [--venue_name VENUE_NAME]
62 |                               [--event_name EVENT_NAME]
63 |                               [--event_session EVENT_SESSION]
64 |                               [--long_comment LONG_COMMENT]
65 |                               [--short_comment SHORT_COMMENT]
66 |                               log {CAN,CSV,ACCESSPORT}
67 | 
68 | Generates MoTeC .ld files from external log files generated by: CAN bus dumps,
69 | CSV files, or COBB Accessport CSV files
70 | 
71 | positional arguments:
72 |   log                   Path to logfile
73 |   {CAN,CSV,ACCESSPORT}  Type of log to process
74 | 
75 | options:
76 |   -h, --help            show this help message and exit
77 |   --output OUTPUT       Name of output file, defaults to the same filename as 'log'
78 |   --frequency FREQUENCY
79 |                         Fixed frequency to resample all channels at
80 |   --dbc DBC             Path to DBC file, required if log type CAN
81 |   --driver DRIVER       Motec log metadata field
82 |   --vehicle_id VEHICLE_ID
83 |                         Motec log metadata field
84 |   --vehicle_weight VEHICLE_WEIGHT
85 |                         Motec log metadata field
86 |   --vehicle_type VEHICLE_TYPE
87 |                         Motec log metadata field
88 |   --vehicle_comment VEHICLE_COMMENT
89 |                         Motec log metadata field
90 |   --venue_name VENUE_NAME
91 |                         Motec log metadata field
92 |   --event_name EVENT_NAME
93 |                         Motec log metadata field
94 |   --event_session EVENT_SESSION
95 |                         Motec log metadata field
96 |   --long_comment LONG_COMMENT
97 |                         Motec log metadata field
98 |   --short_comment SHORT_COMMENT
99 |                         Motec log metadata field
100 | 
101 | The CAN bus log must be the same format as what is generated by 'candump' with
102 | the '-l' option from the linux package can-utils. A MoTeC channel will be
103 | created for every signal in the DBC file that has messages in the CAN log. The
104 | signal name and units will be directly copied from the DBC file. CSV files
105 | must have time as their first column. A MoTeC channel will be generated for
106 | all remaining columns. All channels will not have any units assigned. COBB
107 | Accessport CSV logs are simply generated by starting a logging session on the
108 | accessport. A MoTeC channel will be created for every channel logged, the name
109 | and units will be directly copied over.
110 | ```
111 | 
112 | ## Generating CAN Logs
113 | 
114 | On a Linux machine connected to the CAN bus you can run:
115 | ```bash
116 | candump can0 -l > my_candump.log
117 | ```
118 | 
119 | This will generate a log file formatted like the example below:
120 | ```
121 | (1630268615.800257) can0 0D4#0000000000000000
122 | (1630268615.801277) can0 152#E9BC00000000008C
123 | (1630268615.802316) can0 380#150A00000000001F
124 | (1630268615.807716) can0 140#0082C34300000981
125 | (1630268615.808638) can0 141#6A263B27C4C32103
126 | ...
127 | ```
128 | 
129 | ## CAN Utilities
130 | Under the `can_utils` directory there are some tools for:
131 | * Inspecting the CAN IDs contained in a log file
132 | * Inspecting the messages from a particular ID in a CAN log
133 | * Generating a DBC file with signals for individual bytes from every ID present
134 | 
135 | ## Dependencies
136 | * Python 3
137 | * [cantools](https://cantools.readthedocs.io)
138 | * [numpy](https://numpy.org/)
139 | 
140 | ```bash
141 | pip install cantools numpy
142 | ```
143 | 
144 | ## Disclaimer
145 | This work was produced for research purposes. It should in no way be used to circumvent MoTeC's licensing requirements for their data loggers or i2 analysis software.
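As a small companion to the candump format shown in "Generating CAN Logs" above, the sketch below (standalone, not part of the repo) shows how one of those lines splits into a timestamp, bus, ID, and payload. It mirrors the parsing performed by `can_utils.parse_can_line` in the file that follows:
```python
# Minimal sketch of parsing one 'candump -l' style line.
line = "(1630268615.800257) can0 0D4#0000000000000000"

stamp, bus, msg = line.split()
stamp = float(stamp[1:-1])       # strip the parentheses -> 1630268615.800257
can_id, data = msg.split("#")    # "0D4" and "0000000000000000"
payload = bytes.fromhex(data)    # 8 data bytes

print(stamp, bus, can_id, payload.hex())
```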
146 | 
--------------------------------------------------------------------------------
/can_utils/can_utils.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | 
3 | class CanByteStats():
4 |     def __init__(self, initial_val: int = 0):
5 |         self.min: int = initial_val
6 |         self.max: int = initial_val
7 |         self.range: int = 0
8 | 
9 |     def update(self, val: int):
10 |         self.min = min(self.min, val)
11 |         self.max = max(self.max, val)
12 |         self.range = self.max - self.min
13 | 
14 | 
15 | class CanFrameStats():
16 |     def __init__(self, id: str, start_time: float, data: str):
17 |         n_bytes = int(len(data) / 2)
18 | 
19 |         self.id = id
20 |         self.msgs: int = 0
21 |         self.bytes_min: int = n_bytes
22 |         self.bytes_max: int = n_bytes
23 |         self.start_time: float = start_time
24 |         self.end_time: float = start_time
25 |         self.byte_stats: list[CanByteStats] = []
26 | 
27 |         self._update_byte_stats(data)
28 | 
29 |     def update(self, stamp: float, data: str):
30 |         n_bytes = int(len(data) / 2)
31 | 
32 |         self.msgs += 1
33 |         self.end_time = stamp
34 |         self.bytes_min = min(n_bytes, self.bytes_min)
35 |         self.bytes_max = max(n_bytes, self.bytes_max)
36 | 
37 |         self._update_byte_stats(data)
38 | 
39 |     def avg_frequency(self):
40 |         if self.msgs > 1:
41 |             return self.msgs / (self.end_time - self.start_time)
42 |         else:
43 |             return 0.0
44 | 
45 |     def _update_byte_stats(self, data: str):
46 |         n_bytes = int(len(data) / 2)
47 | 
48 |         for i in range(n_bytes):
49 |             name = f"byte_{str(i)}"
50 |             val = int(f"0x{data[2*i:2*i + 2]}", 16)
51 | 
52 |             if i + 1 > len(self.byte_stats):
53 |                 self.byte_stats.append(CanByteStats(val))
54 |             else:
55 |                 self.byte_stats[i].update(val)
56 | 
57 |     def __str__(self):
58 |         return "{:10} | {:9} | {:6.2f}".format(self.id, self.msgs, self.avg_frequency())
59 | 
60 | 
61 | def parse_can_line(line):
62 |     stamp, bus, msg = line.split()
63 |     stamp = float(stamp[1:-1])
64 |     id, data = msg.split("#")
65 | 
66 |     return stamp, id, data
67 | 
68 | 
69 | def get_id_stats_from_lines(lines):
70 |     id_stats = {}
71 |     for line in lines:
72 |         stamp, id, data = parse_can_line(line)
73 |         bytes = int(len(data) / 2)
74 | 
75 |         if id not in id_stats:
76 |             id_stats[id] = CanFrameStats(id, float(stamp), data)
77 |         else:
78 |             id_stats[id].update(float(stamp), data)
79 | 
80 |     return id_stats
81 | 
--------------------------------------------------------------------------------
/can_utils/candump_converter.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | 
3 | import argparse
4 | import os
5 | 
6 | DESCRIPTION = """Converts a candump log recorded with the '-ta' option (human readable
7 | format) into the '-l' format, which can be replayed with canplayer"""
8 | 
9 | if __name__ == '__main__':
10 |     parser = argparse.ArgumentParser(description=DESCRIPTION)
11 |     parser.add_argument("log", type=str, help="Path to logfile")
12 |     parser.add_argument("--output", type=str, help="New file to write to, defaults to 'log'.converted")
13 | 
14 |     args = parser.parse_args()
15 | 
16 |     if args.log:
17 |         args.log = os.path.expanduser(args.log)
18 | 
19 |     # Make sure our input file is valid
20 |     if not os.path.isfile(args.log):
21 |         print("ERROR: log file %s does not exist" % args.log)
22 |         exit(1)
23 | 
24 |     if not args.output:
25 |         args.output = args.log + ".converted"
26 | 
27 |     with open(args.log, "r") as file:
28 |         lines = file.readlines()
29 | 
30 |     with open(args.output, "w") as file:
31 |         for line in lines:
32 |             line_split = line.split()
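            # (added illustration, example values) A human readable input line such as:
            #   (1630268615.800257)  can0  0D4   [8]  00 00 00 00 00 00 00 00
            # is rewritten as the compact '-l' style line:
            #   (1630268615.800257) can0 0D4#0000000000000000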
33 |             stamp = line_split[0]
34 |             bus = line_split[1]
35 |             id = line_split[2]
36 |             msg = "".join(line_split[4:])
37 | 
38 |             new_line = "{} {} {}#{}\n".format(stamp, bus, id, msg)
39 |             file.write(new_line)
40 | 
--------------------------------------------------------------------------------
/can_utils/dbc_file_from_can_log.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | 
3 | import argparse
4 | import os
5 | import can_utils
6 | 
7 | DESCRIPTION = """Generates a DBC file with individual signals for every byte from every CAN id
8 | present in a log file."""
9 | 
10 | DBC_HEADER = """VERSION "TODO"
11 | 
12 | NS_ :
13 |     BA_
14 |     BA_DEF_
15 |     BA_DEF_DEF_
16 |     BA_DEF_DEF_REL_
17 |     BA_DEF_REL_
18 |     BA_DEF_SGTYPE_
19 |     BA_REL_
20 |     BA_SGTYPE_
21 |     BO_TX_BU_
22 |     BU_BO_REL_
23 |     BU_EV_REL_
24 |     BU_SG_REL_
25 |     CAT_
26 |     CAT_DEF_
27 |     CM_
28 |     ENVVAR_DATA_
29 |     EV_DATA_
30 |     FILTER
31 |     NS_DESC_
32 |     SGTYPE_
33 |     SGTYPE_VAL_
34 |     SG_MUL_VAL_
35 |     SIGTYPE_VALTYPE_
36 |     SIG_GROUP_
37 |     SIG_TYPE_REF_
38 |     SIG_VALTYPE_
39 |     VAL_
40 |     VAL_TABLE_
41 | 
42 | BS_:
43 | 
44 | BU_: TODO
45 | """
46 | 
47 | def get_dbc_message_def(id, bytes):
48 |     """ Generates a DBC file message definition for a particular CAN id with one signal for each
49 |     byte present.
50 | 
51 |     Example: for the CAN id 002 with byte indices [0, 1, 2] the following would be produced:
52 |         BO_ 2 ID_2: 3 TODO
53 |          SG_ ID_2_B1: 0|8@1+ (1, 0) [0|254] "" TODO
54 |          SG_ ID_2_B2: 8|8@1+ (1, 0) [0|254] "" TODO
55 |          SG_ ID_2_B3: 16|8@1+ (1, 0) [0|254] "" TODO
56 | 
57 |     :id: CAN id in hex
58 |     :bytes: List of byte indices (0 based) to generate signals for
59 |     """
60 |     id_hex = id.lstrip("0")
61 |     id_field = int(id_hex, 16)
62 |     if id_field > 2047:
63 |         # This is an extended frame. The DBC file spec does not provide a flag to indicate
64 |         # this; instead a single bit in the id field is used as the marker, so we have to
65 |         # set that manually.
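        # (added note, illustrative value) e.g. a 29-bit id such as 0x18DB33F1 would be
        # written into the DBC as 0x18DB33F1 | 0x80000000 = 0x98DB33F1.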
66 |         id_field += 0x80000000
67 | 
68 |     msg_def = "BO_ " + str(id_field) + " ID_" + id_hex + ": " + str(max(bytes) + 1) + " TODO\n"
69 |     for i in bytes:
70 |         msg_def += " SG_ ID_" + id_hex + "_B" + str(i + 1) + ": " + str(i * 8) + """|8@1+ (1, 0) [0|254] "" TODO\n"""
71 | 
72 |     return msg_def
73 | 
74 | if __name__ == '__main__':
75 |     parser = argparse.ArgumentParser(description=DESCRIPTION)
76 |     parser.add_argument("log", type=str, help="Path to CAN log")
77 |     parser.add_argument("--output", type=str, help="Path of DBC file to generate, defaults to the log path with a .dbc extension")
78 |     parser.add_argument("--use_min_bytes", action="store_true", \
79 |         help="Use the minimum number of bytes observed for the DBC file")
80 |     parser.add_argument("--ignore_constant", action="store_true", \
81 |         help="Ignore any channels that have constant data across the entire log")
82 |     parser.add_argument("--min_frequency", type=float, default=None, \
83 |         help="Minimum frequency, below which an ID is ignored")
84 |     parser.add_argument("--max_frequency", type=float, default=None, \
85 |         help="Maximum frequency, above which an ID is ignored")
86 | 
87 |     args = parser.parse_args()
88 | 
89 |     if args.log:
90 |         args.log = os.path.expanduser(args.log)
91 | 
92 |     # Make sure our input files are valid
93 |     if not os.path.isfile(args.log):
94 |         print("ERROR: CAN log '%s' does not exist" % args.log)
95 |         exit(1)
96 | 
97 |     if not args.output:
98 |         args.output = os.path.splitext(args.log)[0] + ".dbc"
99 | 
100 |     with open(args.log, "r") as file:
101 |         lines = file.readlines()
102 | 
103 |     id_stats = can_utils.get_id_stats_from_lines(lines)
104 | 
105 |     if not id_stats:
106 |         print("ERROR: No CAN data found in log!")
107 |         exit(1)
108 | 
109 |     with open(args.output, "w") as file:
110 |         file.write(DBC_HEADER)
111 | 
112 |         for id, stats in sorted(id_stats.items()):
113 |             # Prune based on frequency
114 |             avg_hz = stats.avg_frequency()
115 |             if args.min_frequency and avg_hz < args.min_frequency:
116 |                 continue
117 |             if args.max_frequency and avg_hz > args.max_frequency:
118 |                 continue
119 | 
120 |             # Filter which bytes to select
121 |             max_byte_num = stats.bytes_min if args.use_min_bytes else stats.bytes_max
122 |             if args.ignore_constant:
123 |                 bytes = []
124 |                 for i in range(max_byte_num):
125 |                     if stats.byte_stats[i].range > 0:
126 |                         bytes.append(i)
127 |             else:
128 |                 bytes = list(range(max_byte_num))
129 | 
130 |             if not bytes:
131 |                 continue
132 | 
133 |             msg_def = get_dbc_message_def(id, bytes)
134 |             file.write("\n")
135 |             file.write(msg_def)
136 | 
137 |     print("Done!")
138 | 
--------------------------------------------------------------------------------
/can_utils/list_can_ids.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | 
3 | import argparse
4 | import os
5 | import can_utils
6 | 
7 | if __name__ == '__main__':
8 |     parser = argparse.ArgumentParser()
9 |     parser.add_argument("log", type=str, help="Path to logfile")
10 | 
11 |     args = parser.parse_args()
12 | 
13 |     if args.log:
14 |         args.log = os.path.expanduser(args.log)
15 | 
16 |     # Make sure our input files are valid
17 |     if not os.path.isfile(args.log):
18 |         print("ERROR: log file %s does not exist" % args.log)
19 |         exit(1)
20 | 
21 |     with open(args.log, "r") as file:
22 |         lines = file.readlines()
23 | 
24 |     id_stats = can_utils.get_id_stats_from_lines(lines)
25 | 
26 |     print(" ID | Msg Count | Avg.
Frequency") 27 | print("---------------------------------------") 28 | for id, stats in sorted(id_stats.items()): 29 | print(stats) 30 | -------------------------------------------------------------------------------- /can_utils/list_can_messages.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import argparse 4 | import os 5 | import textwrap 6 | import can_utils 7 | 8 | if __name__ == '__main__': 9 | parser = argparse.ArgumentParser() 10 | parser.add_argument("log", type=str, help="Path to logfile") 11 | parser.add_argument("id", type=str, help="CAN id") 12 | 13 | args = parser.parse_args() 14 | 15 | if args.log: 16 | args.log = os.path.expanduser(args.log) 17 | 18 | # Make sure our input files are valid 19 | if not os.path.isfile(args.log): 20 | print("ERROR: log file %s does not exist" % args.log) 21 | exit(1) 22 | 23 | with open(args.log, "r") as file: 24 | lines = file.readlines() 25 | 26 | for line in lines: 27 | stamp, id, data = can_utils.parse_can_line(line) 28 | 29 | if id == args.id: 30 | data_bytes = textwrap.wrap(data, 2) 31 | data_bytes = ' '.join(data_bytes) 32 | print("%f - %s" % (stamp, data_bytes)) 33 | -------------------------------------------------------------------------------- /data_log.py: -------------------------------------------------------------------------------- 1 | import cantools 2 | import math 3 | 4 | class DataLog(object): 5 | """ Container for storing log data which contains a set of channels with time series data.""" 6 | def __init__(self, name=""): 7 | self.name = name 8 | self.channels = {} 9 | 10 | def clear(self): 11 | self.channels = {} 12 | 13 | def add_channel(self, name, units, data_type, decimals, initial_message=None): 14 | msg = [] if not initial_message else [initial_message] 15 | self.channels[name] = Channel(name, units, data_type, decimals, msg) 16 | 17 | def start(self): 18 | """ Returns the earliest timestamp from all existing channels [s]. """ 19 | t = math.inf 20 | for name, channel in self.channels.items(): 21 | t = min(t, channel.start()) 22 | 23 | if t != math.inf: 24 | return t 25 | else: 26 | return 0.0 27 | 28 | def end(self): 29 | """ Returns the latest timestamp from all existing channels [s]. """ 30 | end = 0 31 | for name, channel in self.channels.items(): 32 | end = max(end, channel.end()) 33 | 34 | return end 35 | 36 | def duration(self): 37 | """ Returns the duration of the log [s]. """ 38 | return self.end() - self.start() 39 | 40 | def resample(self, frequency): 41 | """ Resamples all channels such that all messages occur at a fixed frequency. 42 | 43 | See the resample method of the Channel class for more details. 44 | """ 45 | start = self.start() 46 | end = self.end() 47 | for channel_name in self.channels: 48 | self.channels[channel_name].resample(start, end, frequency) 49 | 50 | def from_can_log(self, log_lines, can_db): 51 | """ Creates channels populated with messages from a candump file and can database. 52 | 53 | This will create a channel for each entry in the database that has messages present in the 54 | log. 
55 | 56 | log_lines: List, containing candump log lines (recorded with 'candump' with '-l') 57 | can_db: cantools.database 58 | """ 59 | self.clear() 60 | 61 | # Cache all the frame ids in the database for quick lookups 62 | known_ids = set() 63 | for msg in can_db.messages: 64 | known_ids.add(msg.frame_id) 65 | 66 | for line in log_lines: 67 | stamp, bus, id, data = self.__parse_can_log_line(line) 68 | 69 | if id not in known_ids: 70 | continue 71 | 72 | db_msg = can_db.get_message_by_frame_id(id) 73 | msg_decoded = can_db.decode_message(id, data) 74 | 75 | for msg, signal in zip(msg_decoded.items(), db_msg.signals): 76 | name = msg[0] 77 | value = msg[1] 78 | 79 | if name in self.channels: 80 | self.channels[name].messages.append(Message(stamp, value)) 81 | else: 82 | self.add_channel(name, signal.unit, float, 3, Message(stamp, value)) 83 | 84 | def from_csv_log(self, log_lines): 85 | """ Creates channels populated with messages from a CSV log file. 86 | 87 | This will create a channel for each column in the CSV file, with the name of that channel 88 | taken from the CSV header. All channels will be created without any units. Any non numeric data 89 | will be ignored, and that channel will be removed. The first column of data must be time 90 | 91 | log_lines: List, containing CSV log lines 92 | """ 93 | self.clear() 94 | 95 | if not log_lines: 96 | return 97 | 98 | # Get the channel names, ignore the first column as it is assumed to be time 99 | header = log_lines[0] 100 | channel_names = header.split(",")[1:] 101 | 102 | # We'll keep a map of names and column numbers for easy channel lookups when parsing rows 103 | i = 0 104 | channel_dict = {} 105 | for name in channel_names: 106 | self.add_channel(name, "", float, 0) 107 | 108 | channel_dict[name] = i 109 | i += 1 110 | 111 | # Go through each line grabbing all the channel values 112 | for line in log_lines[1:]: 113 | line = line.strip("\n") 114 | values = line.split(",") 115 | 116 | # Timestamp is the first element 117 | t = float(values[0]) 118 | 119 | # Grab each remaining channel value. We keep a map of all the channel names and column 120 | # numbers we are retrieving, so we will look at that to determine which columns to read. 121 | # If we fail to read an entry in any column, we will delete that channel entirely. 122 | invalid_channels = [] 123 | for name, i in channel_dict.items(): 124 | # We'll only parse numeric data 125 | try: 126 | val = float(values[i + 1]) 127 | message = Message(t, val) 128 | self.channels[name].messages.append(message) 129 | 130 | val_text_split = values[i + 1].split(".") 131 | decimals_present = 0 if len(val_text_split) == 1 else len(val_text_split[1]) 132 | self.channels[name].decimals = max(decimals_present, self.channels[name].decimals) 133 | except ValueError: 134 | print("WARNING: Found non numeric values for channel %s, removing channel" % \ 135 | name) 136 | invalid_channels.append(name) 137 | 138 | for name in invalid_channels: 139 | del channel_dict[name] 140 | del self.channels[name] 141 | 142 | def from_accessport_log(self, log_lines): 143 | """ Creates channels populated with messages from a COBB Accessport CSV log file. 144 | 145 | This will create a channel for each column in the CSV file, with the name and units of that 146 | channel taken from the CSV header. Any non numeric data will be ignored, and that channel 147 | will be removed. 
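        (Added, illustrative) Header entries are expected to look like "Boost (psi)" or
        "RPM (RPM)"; the text before the parentheses becomes the channel name and the text
        inside them becomes the units.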
148 | 
149 |         log_lines: List, containing CSV log lines
150 |         """
151 | 
152 |         self.from_csv_log(log_lines)
153 | 
154 |         # Accessport logs have a column for AP info which is not of any value so we'll delete it
155 |         for key in self.channels.keys():
156 |             if "AP Info" in key:
157 |                 del self.channels[key]
158 |                 break
159 | 
160 |         # Update all the channel names and units
161 |         for channel_name, channel in self.channels.items():
162 |             # Channels have the format "Name (Units)"
163 |             print(channel_name)
164 |             name, units = channel_name.split(" (")
165 |             units = units[:-1]
166 | 
167 |             channel.name = name
168 |             channel.units = units
169 | 
170 |     @staticmethod
171 |     def __parse_can_log_line(line):
172 |         """ Extracts the timestamp, bus, arbitration id, and data from a single line in a can log file
173 |         recorded with candump -l.
174 |         """
175 |         stamp, bus, msg = line.split()
176 |         stamp = float(stamp[1:-1])
177 |         id, data = msg.split("#")
178 |         id = int(id, 16)
179 |         data = bytearray.fromhex(data)
180 | 
181 |         return stamp, bus, id, data
182 | 
183 |     def __str__(self):
184 |         output = "Log: %s, Duration: %f s" % (self.name, (self.end() - self.start()))
185 |         for channel_name, channel_data in self.channels.items():
186 |             output += "\n\t%s" % channel_data
187 |         return output
188 | 
189 | class Channel(object):
190 |     """ Represents a single channel of data containing a time series of values."""
191 |     def __init__(self, name, units, data_type, decimals, messages=None):
192 |         self.name = str(name)
193 |         self.units = str(units)
194 |         self.data_type = data_type
195 |         self.decimals = decimals
196 |         if messages:
197 |             self.messages = messages
198 |         else:
199 |             self.messages = []
200 | 
201 |     def start(self):
202 |         if self.messages:
203 |             return self.messages[0].timestamp
204 |         else:
205 |             return 0
206 | 
207 |     def end(self):
208 |         if self.messages:
209 |             return self.messages[-1].timestamp
210 |         else:
211 |             return 0
212 | 
213 |     def avg_frequency(self):
214 |         """ Computes the average frequency from the samples based on the duration of the channel
215 |         and the number of messages."""
216 |         if len(self.messages) >= 2:
217 |             dt = self.end() - self.start()
218 |             return len(self.messages) / dt
219 |         else:
220 |             return 0
221 | 
222 |     def resample(self, start_time, end_time, frequency):
223 |         """ Resamples the data such that all messages occur at a fixed frequency.
224 | 
225 |         If multiple messages fall within one time interval at the new frequency, the latest
226 |         message will be used. When no existing messages fall within the time interval, the
227 |         most recent value will be retained. If no existing message is present within the first
228 |         new time interval, then the first message will be initialized at 0.
229 |         """
230 |         if not self.messages:
231 |             return
232 | 
233 |         # Determine how many messages this channel should have
234 |         num_msgs = math.floor(frequency * (end_time - start_time))
235 |         dt_step = 1.0 / frequency
236 | 
237 |         # Create a new message at each new time point based on the frequency. As we step
238 |         # through the new sample points we'll find the latest pre-existing message to insert there,
239 |         # and will hold that value until we find another message.
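        # (added illustration, made-up values) e.g. with frequency=2 Hz over a log spanning
        # [0.0, 2.0) and original messages (t=0.1, v=1.0) and (t=1.3, v=5.0), the resampled
        # series becomes (0.0, 1.0), (0.5, 1.0), (1.0, 1.0), (1.5, 5.0).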
240 |         value = 0
241 |         t = start_time
242 |         current_msgs_index = 0
243 |         new_msgs = []
244 |         for i in range(num_msgs):
245 |             # Grab the latest message that falls in this time window, if there is one, and update
246 |             # the current channel value
247 |             while current_msgs_index < len(self.messages):
248 |                 msg_stamp = self.messages[current_msgs_index].timestamp
249 | 
250 |                 if msg_stamp < t + 0.5 * dt_step:
251 |                     # This message falls in the time window
252 |                     value = self.messages[current_msgs_index].value
253 |                     current_msgs_index += 1
254 |                 else:
255 |                     # This message belongs in a future window
256 |                     break
257 | 
258 |             new_msgs.append(Message(t, value))
259 |             t += dt_step
260 | 
261 |         self.messages = new_msgs
262 | 
263 |     def __str__(self):
264 |         return "Channel: %s, Units: %s, Decimals: %d, Messages: %d, Frequency: %.2f Hz" % \
265 |             (self.name, self.units, self.decimals, len(self.messages), self.avg_frequency())
266 | 
267 | class Message(object):
268 |     """ A single message in a time series of data. """
269 |     def __init__(self, timestamp=0, value=0):
270 |         self.timestamp = float(timestamp)
271 |         self.value = float(value)
272 | 
273 |     def __str__(self):
274 |         return "t=%f, value=%f" % (self.timestamp, self.value)
275 | 
--------------------------------------------------------------------------------
/examples/README.md:
--------------------------------------------------------------------------------
1 | # MotecLogGenerator Examples
2 | 
3 | This directory contains some sample files to use with the MoTeC log generator tool.
4 | 
5 | ## CAN
6 | Files:
7 | * `can_sample.log`
8 | * `sample_can_spec.dbc`
9 | 
10 | Usage:
11 | ```bash
12 | python3 ../motec_log_generator.py can_sample.log CAN --dbc sample_can_spec.dbc
13 | ```
14 | 
15 | ## CSV
16 | Files:
17 | * `csv_sample.csv`
18 | 
19 | Usage:
20 | ```bash
21 | python3 ../motec_log_generator.py csv_sample.csv CSV
22 | ```
23 | 
24 | ## Accessport
25 | Files:
26 | * `accessport_sample.csv`
27 | 
28 | Usage:
29 | ```bash
30 | python3 ../motec_log_generator.py accessport_sample.csv ACCESSPORT
31 | ```
32 | 
--------------------------------------------------------------------------------
/examples/sample_can_spec.dbc:
--------------------------------------------------------------------------------
1 | VERSION "Sample"
2 | 
3 | NS_ :
4 |     BA_
5 |     BA_DEF_
6 |     BA_DEF_DEF_
7 |     BA_DEF_DEF_REL_
8 |     BA_DEF_REL_
9 |     BA_DEF_SGTYPE_
10 |     BA_REL_
11 |     BA_SGTYPE_
12 |     BO_TX_BU_
13 |     BU_BO_REL_
14 |     BU_EV_REL_
15 |     BU_SG_REL_
16 |     CAT_
17 |     CAT_DEF_
18 |     CM_
19 |     ENVVAR_DATA_
20 |     EV_DATA_
21 |     FILTER
22 |     NS_DESC_
23 |     SGTYPE_
24 |     SGTYPE_VAL_
25 |     SG_MUL_VAL_
26 |     SIGTYPE_VALTYPE_
27 |     SIG_GROUP_
28 |     SIG_TYPE_REF_
29 |     SIG_VALTYPE_
30 |     VAL_
31 |     VAL_TABLE_
32 | 
33 | BS_:
34 | 
35 | BU_: Vector__XXX
36 | 
37 | BO_ 208 ID_D0: 8 Vector__XXX
38 |  SG_ STEER_ANGLE : 0|16@1- (0.1, 0) [-450|450] "deg" Vector__XXX
39 |  SG_ GYRO_YAW: 16|9@1- (0.005, 0) [-1.2775|1.2775] "rad/s" Vector__XXX
40 |  SG_ ACCEL_LAT: 48|8@1- (0.2, 0) [-25.4|25.4] "m/s/s" Vector__XXX
41 |  SG_ ACCEL_LONG: 56|8@1- (0.15, 0) [-19.05|19.05] "m/s/s" Vector__XXX
42 | 
43 | BO_ 209 ID_D1: 4 Vector__XXX
44 |  SG_ SPEED : 0|16@1+ (0.0155168902, 0) [0|300] "m/s" Vector__XXX
45 |  SG_ BRAKE : 16|8@1+ (1.0, 0) [0|100] "%" Vector__XXX
46 | 
47 | BO_ 212 ID_D4: 8 Vector__XXX
48 |  SG_ WS_FL : 0|16@1+ (0.0155168902, 0) [0|300] "m/s" Vector__XXX
49 |  SG_ WS_FR : 16|16@1+ (0.0155168902, 0) [0|300] "m/s" Vector__XXX
50 |  SG_ WS_RL : 32|16@1+ (0.0155168902, 0) [0|300] "m/s" Vector__XXX
51 |  SG_ WS_RR : 48|16@1+ (0.0155168902, 0) [0|300] "m/s" Vector__XXX
52 | 
53 | BO_ 320 ID_140: 8 Vector__XXX
54 |  SG_ THROTTLE : 0|8@1+ (0.39215686274509803, 0) [0|100] "%" Vector__XXX
55 |  SG_ CLUTCH : 15|1@1+ (100.0, 0) [0|100] "%" Vector__XXX
56 |  SG_ RPM : 16|14@1+ (1.0, 0) [0|10000] "rpm" Vector__XXX
57 | 
58 | BO_ 321 ID_141: 8 Vector__XXX
59 |  SG_ GEAR : 48|4@1+ (1, 0) [0|6] "" Vector__XXX
60 | 
61 | BO_ 340 ID_154: 7 Vector__XXX
62 |  SG_ REVERSE : 49|1@1+ (1, 0) [0|255] "" Vector__XXX
63 |  SG_ EBRAKE : 51|1@1+ (1, 0) [0|255] "" Vector__XXX
64 | 
65 | BO_ 342 ID_156: 8 Vector__XXX
66 |  SG_ FUEL : 56|8@1+ (2.155, 6.315) [0|61] "litre" Vector__XXX
67 | 
68 | BO_ 864 ID_360: 8 Vector__XXX
69 |  SG_ COOLANT_TEMP: 24|8@1+ (1, -40) [-40|215] "C" Vector__XXX
70 |  SG_ BOOST: 32|8@1+ (1.9885, -98.85) [-99|507] "kPa" Vector__XXX
71 |  SG_ SI_MODE : 40|3@1+ (0.5, -1) [0|2] "" Vector__XXX
72 | 
--------------------------------------------------------------------------------
/license.fuck:
--------------------------------------------------------------------------------
1 | DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
2 | Version 2, December 2004
3 | 
4 | Copyright (C) 2004 Sam Hocevar
5 | 
6 | Everyone is permitted to copy and distribute verbatim or modified
7 | copies of this license document, and changing it is allowed as long
8 | as the name is changed.
9 | 
10 | DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
11 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
12 | 
13 | 0. You just DO WHAT THE FUCK YOU WANT TO.
14 | 
--------------------------------------------------------------------------------
/motec_log.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import numpy as np
3 | import struct
4 | from data_log import DataLog, Message, Channel
5 | from ldparser.ldparser import ldVehicle, ldVenue, ldEvent, ldHead, ldChan, ldData
6 | 
7 | class MotecLog(object):
8 |     """ Handles generating a MoTeC .ld file from log data.
9 | 
10 |     This utilizes the ldparser library for packing all the metadata and channel data into the
11 |     correct binary format. Some functionality and information (e.g. pointer constants below) was
12 |     missing from the ldparser library, so this class serves as a wrapper to fill in the gaps.
13 | 
14 |     This operates on containers from the data_log library.
15 |     """
16 |     # Pointers to locations in the file where data sections should be written. These have been
17 |     # determined from inspecting some MoTeC .ld files, and were consistent across all files.
18 |     VEHICLE_PTR = 1762
19 |     VENUE_PTR = 5078
20 |     EVENT_PTR = 8180
21 |     HEADER_PTR = 11336
22 | 
23 |     CHANNEL_HEADER_SIZE = struct.calcsize(ldChan.fmt)
24 | 
25 |     def __init__(self):
26 |         self.driver = ""
27 |         self.vehicle_id = ""
28 |         self.vehicle_weight = 0
29 |         self.vehicle_type = ""
30 |         self.vehicle_comment = ""
31 |         self.venue_name = ""
32 |         self.event_name = ""
33 |         self.event_session = ""
34 |         self.long_comment = ""
35 |         self.short_comment = ""
36 |         self.datetime = datetime.datetime.now()
37 | 
38 |         # File components from ldparser
39 |         self.ld_header = None
40 |         self.ld_channels = []
41 | 
42 |     def initialize(self):
43 |         """ Initializes all the metadata for the MoTeC log.
44 | 
45 |         This must be called before adding any channel data.
46 | """ 47 | ld_vehicle = ldVehicle(self.vehicle_id, self.vehicle_weight, self.vehicle_type, \ 48 | self.vehicle_comment) 49 | ld_venue = ldVenue(self.venue_name, self.VEHICLE_PTR, ld_vehicle) 50 | ld_event = ldEvent(self.event_name, self.event_session, self.long_comment, \ 51 | self.VENUE_PTR, ld_venue) 52 | 53 | self.ld_header = ldHead(self.HEADER_PTR, self.HEADER_PTR, self.EVENT_PTR, ld_event, \ 54 | self.driver, self.vehicle_id, self.venue_name, self.datetime, self.short_comment, \ 55 | self.event_name, self.event_session) 56 | 57 | def add_channel(self, log_channel): 58 | """ Adds a single channel of data to the motec log. 59 | 60 | log_channel: data_log.Channel 61 | """ 62 | # Advance the header data pointer 63 | self.ld_header.data_ptr += self.CHANNEL_HEADER_SIZE 64 | 65 | # Advance the data pointers of all previous channels 66 | for ld_channel in self.ld_channels: 67 | ld_channel.data_ptr += self.CHANNEL_HEADER_SIZE 68 | 69 | # Determine our file pointers 70 | if self.ld_channels: 71 | meta_ptr = self.ld_channels[-1].next_meta_ptr 72 | prev_meta_ptr = self.ld_channels[-1].meta_ptr 73 | data_ptr = self.ld_channels[-1].data_ptr + self.ld_channels[-1]._data.nbytes 74 | else: 75 | # First channel needs the previous pointer zero'd out 76 | meta_ptr = self.HEADER_PTR 77 | prev_meta_ptr = 0 78 | data_ptr = self.ld_header.data_ptr 79 | next_meta_ptr = meta_ptr + self.CHANNEL_HEADER_SIZE 80 | 81 | # Channel specs 82 | data_len = len(log_channel.messages) 83 | data_type = np.float32 if log_channel.data_type is float else np.int32 84 | freq = int(log_channel.avg_frequency()) 85 | shift = 0 86 | multiplier = 1 87 | scale = 1 88 | 89 | # Decimal places must be hard coded to zero, the ldparser library doesn't properly 90 | # handle non zero values, consequently all channels will have zero decimal places 91 | # decimals = log_channel.decimals 92 | decimals = 0 93 | 94 | ld_channel = ldChan(None, meta_ptr, prev_meta_ptr, next_meta_ptr, data_ptr, data_len, \ 95 | data_type, freq, shift, multiplier, scale, decimals, log_channel.name, "", \ 96 | log_channel.units) 97 | 98 | # Add in the channel data 99 | ld_channel._data = np.array([], data_type) 100 | for msg in log_channel.messages: 101 | ld_channel._data = np.append(ld_channel._data, data_type(msg.value)) 102 | 103 | # Add the ld channel and advance the file pointers 104 | self.ld_channels.append(ld_channel) 105 | 106 | def add_all_channels(self, data_log): 107 | """ Adds all channels from a DataLog to the motec log. 108 | 109 | data_log: data_log.DataLog 110 | """ 111 | for channel_name, channel in data_log.channels.items(): 112 | self.add_channel(channel) 113 | 114 | def write(self, filename): 115 | """ Writes the motec log data to disc. 
""" 116 | # Check for the presence of any channels, since the ldData write() method doesn't 117 | # gracefully handle zero channels 118 | if self.ld_channels: 119 | ld_data = ldData(self.ld_header, self.ld_channels) 120 | 121 | # Need to zero out the final channel pointer 122 | ld_data.channs[-1].next_meta_ptr = 0 123 | 124 | ld_data.write(filename) 125 | else: 126 | with open(filename, "wb") as f: 127 | self.ld_header.write(f, 0) 128 | -------------------------------------------------------------------------------- /motec_log_generator.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import argparse 4 | import cantools 5 | import os 6 | 7 | from data_log import DataLog 8 | from motec_log import MotecLog 9 | 10 | DESCRIPTION = """Generates MoTeC .ld files from external log files generated by: CAN bus dumps, CSV 11 | files, or COBB Accessport CSV files""" 12 | 13 | EPILOG = """The CAN bus log must be the same format as what is generated by 'candump' with the '-l' 14 | option from the linux package can-utils. A MoTeC channel will be created for every signal in the 15 | DBC file that has messages in the CAN log. The signal name and units will be directly copied from 16 | the DBC file. 17 | 18 | CSV files must have time as their first column. A MoTeC channel will be generated for all remaining 19 | columns. All channels will not have any units assigned. 20 | 21 | COBB Accessport CSV logs are simply generated by starting a logging session on the accessport. A 22 | MoTeC channel will be created for every channel logged, the name and units will be directly copied 23 | over. 24 | """ 25 | 26 | if __name__ == '__main__': 27 | parser = argparse.ArgumentParser(description=DESCRIPTION, epilog=EPILOG) 28 | parser.add_argument("log", type=str, help="Path to logfile") 29 | parser.add_argument("log_type", type=str, help="Type of log to process", \ 30 | choices=["CAN", "CSV", "ACCESSPORT"]) 31 | 32 | parser.add_argument("--output", type=str, \ 33 | help="Name of output file, defaults to the same filename as 'log'") 34 | parser.add_argument("--frequency", type=float, default=20.0, \ 35 | help="Fixed frequency to resample all channels at") 36 | parser.add_argument("--dbc", type=str, help="Path to DBC file, required if log type CAN") 37 | 38 | parser.add_argument("--driver", type=str, default="", help="Motec log metadata field") 39 | parser.add_argument("--vehicle_id", type=str, default="", help="Motec log metadata field") 40 | parser.add_argument("--vehicle_weight", type=int, default=0, help="Motec log metadata field") 41 | parser.add_argument("--vehicle_type", type=str, default="", help="Motec log metadata field") 42 | parser.add_argument("--vehicle_comment", type=str, default="", help="Motec log metadata field") 43 | parser.add_argument("--venue_name", type=str, default="", help="Motec log metadata field") 44 | parser.add_argument("--event_name", type=str, default="", help="Motec log metadata field") 45 | parser.add_argument("--event_session", type=str, default="", help="Motec log metadata field") 46 | parser.add_argument("--long_comment", type=str, default="", help="Motec log metadata field") 47 | parser.add_argument("--short_comment", type=str, default="", help="Motec log metadata field") 48 | args = parser.parse_args() 49 | 50 | if args.log: 51 | args.log = os.path.expanduser(args.log) 52 | if args.dbc: 53 | args.dbc = os.path.expanduser(args.dbc) 54 | if args.output: 55 | args.output = os.path.expanduser(args.output) 56 | 57 | # 
Make sure our input files are valid
58 |     if not os.path.isfile(args.log):
59 |         print("ERROR: log file %s does not exist" % args.log)
60 |         exit(1)
61 | 
62 |     if args.log_type == "CAN" and not os.path.isfile(args.dbc):
63 |         print("ERROR: DBC file %s does not exist" % args.dbc)
64 |         exit(1)
65 | 
66 |     print("Loading log...")
67 |     with open(args.log, "r") as file:
68 |         lines = file.readlines()
69 | 
70 |     # Create our data log from the input data
71 |     data_log = DataLog()
72 | 
73 |     if args.log_type == "CAN":
74 |         if not os.path.isfile(args.dbc):
75 |             print("ERROR: DBC file %s does not exist" % args.dbc)
76 |             exit(1)
77 | 
78 |         # Load the DBC database
79 |         print("Loading DBC...")
80 |         can_db = cantools.database.load_file(args.dbc)
81 | 
82 |         print("Extracting data...")
83 |         data_log.from_can_log(lines, can_db)
84 |     elif args.log_type == "CSV":
85 |         print("Extracting data...")
86 |         data_log.from_csv_log(lines)
87 |     elif args.log_type == "ACCESSPORT":
88 |         print("Extracting data...")
89 |         data_log.from_accessport_log(lines)
90 | 
91 |     if not data_log.channels:
92 |         print("ERROR: Failed to find any channels in log data")
93 |         exit(1)
94 | 
95 |     print("Parsed %.1fs log with %s channels:" % (data_log.duration(), len(data_log.channels)))
96 |     for channel_name, channel in data_log.channels.items():
97 |         print("\t%s" % channel)
98 | 
99 |     # Resample all the channels to occur at a fixed frequency. We must do this because a MoTeC
100 |     # log expects a constant sample rate; it does not associate a timestamp with each individual
101 |     # message in a channel.
102 |     data_log.resample(args.frequency)
103 | 
104 |     print("Converting to MoTeC log...")
105 | 
106 |     motec_log = MotecLog()
107 |     motec_log.driver = args.driver
108 |     motec_log.vehicle_id = args.vehicle_id
109 |     motec_log.vehicle_weight = args.vehicle_weight
110 |     motec_log.vehicle_type = args.vehicle_type
111 |     motec_log.vehicle_comment = args.vehicle_comment
112 |     motec_log.venue_name = args.venue_name
113 |     motec_log.event_name = args.event_name
114 |     motec_log.event_session = args.event_session
115 |     motec_log.long_comment = args.long_comment
116 |     motec_log.short_comment = args.short_comment
117 | 
118 |     motec_log.initialize()
119 |     motec_log.add_all_channels(data_log)
120 | 
121 |     print("Saving MoTeC log...")
122 |     if args.output:
123 |         ld_filename = os.path.splitext(args.output)[0] + ".ld"
124 |     else:
125 |         # Copy the path and name from the source file, but change the extension
126 |         candump_dir, candump_filename = os.path.split(args.log)
127 |         candump_filename = os.path.splitext(candump_filename)[0]
128 |         ld_filename = os.path.join(candump_dir, candump_filename + ".ld")
129 | 
130 |     output_dir = os.path.dirname(ld_filename)
131 |     if output_dir and not os.path.isdir(output_dir):
132 |         print("Directory '%s' does not exist, will create it" % output_dir)
133 |         os.makedirs(output_dir)
134 | 
135 |     motec_log.write(ld_filename)
136 |     print("Done!")
137 | 
--------------------------------------------------------------------------------
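For anyone who would rather drive the conversion from Python than through the command line script above, the same pipeline can be reproduced with the repo's own classes. A minimal sketch, with the file path and metadata values purely illustrative:

```python
from data_log import DataLog
from motec_log import MotecLog

# Load a CSV log (first column must be time) and resample it to a fixed rate
with open("examples/csv_sample.csv", "r") as f:
    lines = f.readlines()

data_log = DataLog()
data_log.from_csv_log(lines)
data_log.resample(20.0)  # MoTeC channels expect a constant sample rate

# Fill in whatever metadata is useful, then write out the .ld file
motec_log = MotecLog()
motec_log.driver = "A. Driver"
motec_log.initialize()  # must be called before adding channels
motec_log.add_all_channels(data_log)
motec_log.write("csv_sample.ld")
```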