├── Mayan_aggregate_polars.png
├── Mayan_2019-09_RBBS_4_strip.png
├── Mayan_aggregate_combined_polars.png
├── regatta.json
├── README.md
└── polarize.py

/Mayan_aggregate_polars.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lanceberc/polarize/HEAD/Mayan_aggregate_polars.png
--------------------------------------------------------------------------------
/Mayan_2019-09_RBBS_4_strip.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lanceberc/polarize/HEAD/Mayan_2019-09_RBBS_4_strip.png
--------------------------------------------------------------------------------
/Mayan_aggregate_combined_polars.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/lanceberc/polarize/HEAD/Mayan_aggregate_combined_polars.png
--------------------------------------------------------------------------------
/regatta.json:
--------------------------------------------------------------------------------
1 | [
2 |     {
3 |         "boat": "Mayan",
4 |         "regatta": "2019 Leukemia Cup",
5 |         "basefn": "2019-10-20_LeukemiaCup",
6 |         "rudderCorrection": 2,
7 |         "tz": -7,
8 |         "cogsogSource": 1,
9 |         "latlonSource": 1
10 |     },
11 |     {
12 |         "race": "1",
13 |         "course": "18",
14 |         "data": "2019-10-20_LeukemiaCup.log",
15 |         "start": "2019-10-20T12:04:00",
16 |         "end": "2019-10-20T14:14:00",
17 |         "legs": [
18 |             { "start": "2019-10-20T12:05:56", "end": "2019-10-20T12:24:35" },
19 |             { "start": "2019-10-20T12:24:55", "end": "2019-10-20T13:06:46" },
20 |             { "start": "2019-10-20T13:07:16", "end": "2019-10-20T13:27:04" },
21 |             { "start": "2019-10-20T13:27:25", "end": "2019-10-20T13:43:34" },
22 |             { "start": "2019-10-20T13:44:34", "end": "2019-10-20T14:09:56" },
23 |             { "start": "2019-10-20T14:10:56", "end": "2019-10-20T14:13:45" }
24 |         ]
25 |     },
26 |     {
27 |         "course": "18",
28 |         "length": 11.76,
29 |         "legs": [
30 |             { "label": "Start to 6s (Mason)", "bearing": 149, "distance": 2.38 },
31 |             { "label": "6s (Mason) to 16s (Blackaller)", "bearing": 253, "distance": 1.65 },
32 |             { "label": "16s (Blackaller) to 17s (Harding)", "bearing": 17, "distance": 1.98 },
33 |             { "label": "17s (Harding) to 18p (Blossom)", "bearing": 107, "distance": 2.35 },
34 |             { "label": "18p (Blossom) to 12s (Little Harding)", "bearing": 290, "distance": 2.98 },
35 |             { "label": "12s (Little Harding) to Finish", "bearing": 26, "distance": 0.42 }
36 |         ]
37 |     }
38 | ]
39 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # polarize
2 |
3 | Analyze sailboat performance data gathered from NMEA-0183 and NMEA-2000 (N2K) data sources
4 |
5 | ## Synopsis
6 |
7 | - Analyze sailboat race performance using data collected from the NMEA bus
8 | - Compute polar plots, per-leg strip charts, minute-by-minute summaries, and Expedition-ready text
9 | - Generate reports in various formats - text files, graphics, and spreadsheets
10 |
11 | It has become very easy and relatively inexpensive to record raw NMEA data, for instance:
12 | - Directly via a ~$250 Yacht Devices Voyage Data Recorder
13 | - Indirectly via N2K to WiFi gateways (such as any B&G Zeus or Vulcan MFD, Vesper AIS/WiFi gateways, and Yacht Devices YDWG NMEA-2000 dongles) and apps like SEAiq ($5 for iPhone/USA, $50 for Android/World)
14 |
15 | polarize takes this data and converts it to more familiar forms - polar
charts, strip charts, track files, spreadsheets, etc.
16 |
17 | ## Inputs/Configuration
18 |
19 | polarize requires two inputs:
20 | 1. A list of .json files describing the races and courses in a regatta (typically regatta.json)
21 | 2. A per-race NMEA data file pointed to from inside the regatta file
22 |    - .log files are assumed to hold raw N2K data
23 |    - .nmea files are assumed to hold NMEA-0183 text sentences
24 |
25 | The regatta file contains additional information including boat name, timezone offset, rudder correction, and COG/SOG sources.
26 | Typically one creates a directory per regatta, since the races in a regatta share course definitions.
27 |
28 | ## Output
29 |
30 | polarize can generate several kinds of output:
31 |
32 | Option | Effect
33 | ------ | ------
34 | \-strip | Create per-race strip chart files
35 | \-legs | Create per-leg analysis
36 | \-minute | Create minute-by-minute reports in the per-leg analysis
37 | \-spreadsheet | Create an .xlsx spreadsheet file
38 | \-polars | Create an aggregate polar graph file
39 | \-exp | Create an aggregate Expedition polar text file
40 | \-gpx | Create a gpx track
41 |
42 | Polars can be generated per-regatta or aggregated from multiple regattas (polarize -polars \*/regatta.json)
43 |
44 | ## Notes
45 |
46 | ### Input data format
47 | NMEA-0183 .nmea files (e.g. recorded with SEAiq) are parsed directly.
48 |
49 | NMEA-2000 .log files from the Yacht Devices VDR Voyage Data Recorder are converted to
50 | JSON by the canboat analyzer (https://www.github.com/canboat/canboat).
51 |
52 | ### Conflicting NMEA sources
53 | It's becoming common for boats to have multiple sources of similar data. For instance, COG/SOG data
54 | on a boat with both a B&G MFD and a ZG100 GPS/heading sensor will disagree, sometimes by quite a bit,
55 | because the devices use different damping heuristics. For N2K data, the N2K Source ID can be specified
56 | in the regatta file to choose which device to use.
57 | If the data is naively converted to
58 | NMEA-0183 by a WiFi gateway, the source ID field is lost, making it hard or impossible to filter for a
59 | single source. This may confuse polarize.
60 |
61 | ### Software Environment
62 | polarize requires Python 3.x and these additional modules:
63 | - matplotlib for plotting (https://matplotlib.org)
64 | - numpy for some statistics (https://numpy.org)
65 | - scipy for Savitzky-Golay filtering (https://scipy.org)
66 | - xlsxwriter to generate Excel-compatible .xlsx files (https://pypi.org/project/XlsxWriter)
67 |
68 | ### Localization
69 | polarize requires some kludgey internal configuration:
70 | - The ANALYZER variable must point to the local copy of the canboat analyzer to enable N2K data conversion
71 | - Polar wind ranges are set manually in a table. The default is good for a particular schooner in typical San Francisco Bay wind ranges; see the sketch below.
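
The wind-range table is the `polarData` list near the bottom of polarize.py; each entry is the TWS band (in knots) that one polar subplot covers. A sketch of the table to edit - these bounds are the defaults that ship in this repository, so substitute bands that match your own boat and venue:

```python
# Wind ranges (knots) for the polar plots - samples are binned into the
# first range whose 'max' exceeds their TWS. Six ranges fit well on a page.
polarData = [
    { 'min':  0.0, 'max': 11.0, 'ax': None, 'data': [] },
    { 'min': 11.0, 'max': 13.0, 'ax': None, 'data': [] },
    { 'min': 13.0, 'max': 15.0, 'ax': None, 'data': [] },
    { 'min': 15.0, 'max': 18.0, 'ax': None, 'data': [] },
    { 'min': 18.0, 'max': 22.0, 'ax': None, 'data': [] },
    { 'min': 22.0, 'max': 28.0, 'ax': None, 'data': [] },
]
```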
72 | 73 | ## Example Output 74 | 75 | ![Strip Chart](Mayan_2019-09_RBBS_4_strip.png) 76 | ![Polar Chart](Mayan_aggregate_polars.png) 77 | ![Combined Polar Chart](Mayan_aggregate_combined_polars.png) 78 | 79 | ## License 80 | Copyright 2019 Lance Berc 81 | 82 | Permission is hereby granted, free of charge, to any person obtaining 83 | a copy of this software and associated documentation files (the 84 | "Software"), to deal in the Software without restriction, including 85 | without limitation the rights to use, copy, modify, merge, publish, 86 | distribute, sublicense, and/or sell copies of the Software, and to 87 | permit persons to whom the Software is furnished to do so, subject to 88 | the following conditions: 89 | 90 | The above copyright notice and this permission notice shall be 91 | included in all copies or substantial portions of the Software. 92 | 93 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 94 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 95 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 96 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 97 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 98 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 99 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 100 | -------------------------------------------------------------------------------- /polarize.py: -------------------------------------------------------------------------------- 1 | #!/usr/local/bin/python3 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | Copyright 2019 Lance Berc 6 | 7 | Permission is hereby granted, free of charge, to any person obtaining 8 | a copy of this software and associated documentation files (the 9 | "Software"), to deal in the Software without restriction, including 10 | without limitation the rights to use, copy, modify, merge, publish, 11 | distribute, sublicense, and/or sell copies of the Software, and to 12 | permit persons to whom the Software is furnished to do so, subject to 13 | the following conditions: 14 | 15 | The above copyright notice and this permission notice shall be 16 | included in all copies or substantial portions of the Software. 17 | 18 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 19 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 20 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 21 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 22 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 23 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 24 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
25 | """
26 |
27 | # Analyze sailboat race performance using data collected from the NMEA bus
28 | # Compute polar plots, per-leg strip charts, minute-by-minute summaries, and Expedition-ready text
29 | # Generate reports in various formats - text files, graphics, and spreadsheets
30 |
31 | # NMEA-2000 .log files from the Yacht Devices VDR Voyage Data Recorder are converted to
32 | # JSON by the canboat analyzer (https://www.github.com/canboat/canboat)
33 |
34 | # NMEA-0183 .nmea files from SEAiq are parsed directly
35 |
36 | # Configuration is in a per-regatta regatta.json file which defines the courses,
37 | # leg start/end times, and boat-specific parameters
38 |
39 | # Polars can be generated per-regatta or aggregated from multiple regattas
40 |
41 | # Requires Python 3.x and additional modules:
42 | # matplotlib for plotting https://matplotlib.org/
43 | # numpy for some statistics https://numpy.org
44 | # scipy for Savitzky-Golay filtering https://scipy.org
45 | # xlsxwriter to generate Excel-compatible .xlsx files https://pypi.org/project/XlsxWriter/
46 |
47 | deg = u'\N{DEGREE SIGN}'
48 |
49 | from dataclasses import dataclass
50 | import platform
51 | import sys
52 | import os
53 | import os.path
54 | import datetime
55 | import subprocess
56 | from math import sqrt, sin, cos, pi, tau, radians, degrees, atan2, fmod
57 | from statistics import mean
58 | import numpy as np
59 | import matplotlib.pyplot as plt
60 | import matplotlib.dates as mdates
61 | import scipy.signal
62 | import json
63 | import argparse
64 | import xlsxwriter
65 |
66 | regattalist = []
67 | regattas = {}
68 | racecount = 0
69 | legcount = 0
70 |
71 | rudderCorrection = 0.0
72 | STWCorrection = 1.0
73 |
74 | # Build the N2K analyzer from www.github.com/canboat/canboat
75 | # It's written in C. Pretty straightforward.
76 | # 'analyzer' converts raw binary N2K to somewhat verbose JSON, but it's a small price to pay
77 | if platform.system() == "Windows":
78 |     ANALYZER = 'C:/Users/mayan/src/canboat/rel/cygwin_nt-10.0-x86_64/analyzer.exe'
79 | else:
80 |     # Should figure out which arch we're running on
81 |     ANALYZER = "/Users/lance/src/canboat/canboat/rel/darwin-x86_64/analyzer"
82 |     ANALYZER = "/Users/lance/src/canboat/rel/darwin-arm64/analyzer"
83 |
84 | tzoffset = None
85 | sampleSeconds = 10
86 | expedition_sample_seconds = 1
87 | reportSeconds = 60
88 |
89 | # YYYY-MM-DD HH:MM:SS.sss
90 | boards = { 'Port': { }, 'Stbd': { } }
91 |
92 | raceRawFields = ['AWA', 'AWS', 'STW', 'RUD', 'COG', 'SOG', 'HDG', 'LATLON', 'TWA', 'TWD', 'TWS', 'Yaw', 'Pitch', 'Roll', 'ROT', 'Heave']
93 | legRawFields = ['AWA', 'AWS', 'STW', 'RUD', 'COG', 'SOG', 'HDG', 'LATLON', 'TWA', 'TWD', 'TWS', 'Pitch', 'Roll', 'ROT', 'Heave']
94 | legFields = ['Time', 'TWA', 'TWD', 'TWS', 'AWA', 'AWS', 'STW', 'RUD', 'COG', 'SOG', 'HDG', 'Pitch', 'Roll', 'ROT', 'Heave']
95 |
96 | # Min and max true wind angles for polars - assume if it's outside the range we're tacking or gybing
97 | minTWA = 25.0
98 | maxTWA = 165.0
99 |
100 | # Min and max apparent wind angles for a leg - assume if it's outside the range we're tacking or gybing
101 | minAWA = 25.0
102 | maxAWA = 165.0
103 |
104 | # Set latlonSource and cogsogSource to 1 for B&G or 21 for the Vesper XB8000 - may vary per-boat
105 | LATLONSOURCE = None
106 | COGSOGSOURCE = None
107 |
108 | # Allow defining lat/lon bounds for Expedition logs since they're known to have bad data.
109 | BOUNDS = None
110 |
111 | # Color map for most lines in strip and polar plots.
Chosen to look 'nice'
112 | #cmap = plt.cm.Dark2.colors
113 | # Dark Pastels from https://www.schemecolor.com/ with yellow moved to end
114 | cmap = [ '#C23B23', '#F39A27', '#03C03C', '#579ABE', '#976ED7', '#EADA52']
115 |
116 | # Convert meters per second to knots
117 | def ms2kts(ms):
118 |     return(ms * 1.94384)
119 |
120 | def addtz(ts):
121 |     return("%s" % (ts + tzoffset))
122 |
123 | def parse_regatta(fn):
124 |     # Parse a regatta description, which is a JSON file composed of a list of dictionary elements.
125 |     # Each element is a "regatta", "race", or "course"
126 |
127 |     with open(fn, "r") as f:
128 |         try:
129 |             j = json.load(f)
130 |         except:
131 |             print("Couldn't load %s as JSON" % (fn))
132 |             raise
133 |
134 |     r = None
135 |     for e in j:
136 |         # Look at each element; so far "regatta", "race", and "course" are defined
137 |         if "regatta" in e:
138 |             name = e["regatta"]
139 |             regattalist.append(name)
140 |             regattas[name] = e # copy all params into regatta
141 |             regatta = regattas[name]
142 |             p, n = os.path.split(fn)
143 |             regatta["path"] = p + '/' if p != "" else "./"
144 |             regatta["races"] = []
145 |             regatta["courses"] = {}
146 |             regatta["marks"] = {}
147 |             global rudderCorrection
148 |             rudderCorrection = 0.0 if not 'rudderCorrection' in e else float(e['rudderCorrection'])
149 |             print("## Setting rudder correction to %4.1f%s" % (rudderCorrection, deg))
150 |             global STWCorrection
151 |             STWCorrection = 1.0 if not 'STWCorrection' in e else float(e['STWCorrection'])
152 |             print("## Setting STW correction to %5.2f%s" % (STWCorrection * 100, "%"))
153 |             global LATLONSOURCE, COGSOGSOURCE, tzoffset # tzoffset must be global - addtz() and legInit() read it
154 |             LATLONSOURCE = None if not 'latlonSource' in e else int(e['latlonSource'])
155 |             COGSOGSOURCE = None if not 'cogsogSource' in e else int(e['cogsogSource'])
156 |             tzoffset = datetime.timedelta(0) if not 'tz' in e else datetime.timedelta(hours=int(e['tz']))
157 |             print("## Parse Regatta %s" % (name))
158 |             global BOUNDS
159 |             BOUNDS = None if not 'bounds' in e else e['bounds']
160 |             if not BOUNDS is None:
161 |                 print("## Bounds: %r" % (BOUNDS))
162 |         elif "race" in e:
163 |             ## print("## Parse Race %r" % (e))
164 |             e['startts'] = datetime.datetime.strptime(e['start'], '%Y-%m-%dT%H:%M:%S') - tzoffset
165 |             e['endts'] = datetime.datetime.strptime(e['end'], '%Y-%m-%dT%H:%M:%S') - tzoffset
166 |             for f in raceRawFields:
167 |                 e[f] = []
168 |             regatta["races"].append(e) # Add this to the list of races
169 |             print("## Parse Race %s - Course %s %s - %s" % (e['race'], e['course'], e['startts'], e['endts']))
170 |         elif "course" in e:
171 |             regatta['courses'][e["course"]] = e # Add this to the dictionary of courses
172 |             print("## Parse Course %s" % (e['course']))
173 |         elif "mark" in e:
174 |             regatta['marks'][e['mark']] = e
175 |         else:
176 |             print("Unknown regatta element '%s'" % (e))
177 |
178 | # https://www.eye4software.com/hydromagic/documentation/nmea0183/
179 | # https://www.trimble.com/OEM_ReceiverHelp/V4.44/en/NMEA-0183messages_MessageOverview.html
180 | # https://www.gpsinformation.org/dale/nmea.htm#intro
181 | """
182 | Sentences seen at Doc Brown's Lab - B&G Zeus MFD, ZG100, SDT transducer, H5000 masthead, Off-brand heading sensor
183 | Recorded with SEAiq via a Yacht Devices YDG wireless gateway
184 | Count  Sntnce    * Description
185 | 57854  $SDHDG    * HDG Heading w/ magnetic variation
186 | 17667  $GPGSV      GSV Satellites in view
187 | 11774  $WIMWV    * MWV Wind speed and angle - Two sentences, relative and true?
188 |  9197  $PSIQREC  * SEAiq recording time stamps
189 |  5889  $SDDPT      DPT Depth
190 |  5889  $IIXDR    * XDR Transducer measurement - includes rudder angle?
191 |  5889  $GPXTE      XTE Cross-track error
192 |  5889  $GPRMB      RMB Navigation info
193 |  5889  $GPGSA      GSA GPS Info
194 |  5889  $GPGLL    * GLL GPS-based lat/lon
195 |  5889  $GPGLC      GLC Loran-based lat/lon
196 |  5889  $GPGGA      GGA GPS fix data
197 |  5889  $GPBWR      BWR Bearing and distance to waypoint - Rhumb line
198 |  5889  $GPBWC      BWC Bearing and distance to waypoint - Great circle
199 |  5888  $WIMWD    * MWD Wind Direction and Speed, with respect to north
200 |  5888  $SDVLW      VLW Distance through water
201 |  5888  $SDVHW    * VHW Water speed and heading
202 |  5888  $SDMTW      MTW Water temp
203 |  5888  $GPZDA    * ZDA Date & Time (UTC)
204 |  5888  $GPVTG    * VTG Track made good & ground speed (cog/sog?)
205 |  5888  $GPBOD      BOD Bearing - waypoint to waypoint
206 |  5888  $GPAPB      APB Autopilot 'B'
207 |  5888  $GPAAM      AAM Waypoint arrival alarm
208 |  5887  $GPRMC      RMC Navigation info
209 | """
210 |
211 | # NMEA 0183 sentences we use
212 | interesting_sentences = ["$SDHDG", # Heading
213 |                          "$WIMWV", # Wind data
214 |                          "$PSIQREC", # SEAiq Time stamps
215 |                          "$IIXDR", # Transducer measurement w/ rudder angle
216 |                          "$GPGLL", # LAT/LON
217 |                          "$SDVHW", # Water speed - and heading?
218 |                          "$GPZDA", # Time stamp
219 |                          "$GPVTG"] # COG/SOG
220 |
221 | def legInit(l):
222 |     for f in legRawFields:
223 |         l[f] = []
224 |     l['startts'] = datetime.datetime.strptime(l['start'], '%Y-%m-%dT%H:%M:%S') - tzoffset
225 |     l['endts'] = datetime.datetime.strptime(l['end'], '%Y-%m-%dT%H:%M:%S') - tzoffset
226 |     l['duration'] = l['endts'] - l['startts']
227 |     l['sindex'] = {}
228 |     l['eindex'] = {}
229 |     print("## Init leg %s - %s" % (l['startts'], l['endts']))
230 |
238 |
239 | def parse_race_0183(regatta, r):
240 |     with open(regatta['path'] + r['data'], 'r') as f:
241 |         r['variation'] = 0
242 |         r['LATLON'] = []
243 |         variation = 0
244 |         ts = datetime.datetime(year=1970, day=1, month=1, hour=0, minute=0, second=0)
245 |         sampleCount = 0
246 |         sentenceCount = 0
247 |
248 |         while True:
249 |             line = f.readline()
250 |             if not line:
251 |                 break
252 |             fields = line.split(',')
253 |
254 |             # I'm tempted to select on the entire field, but I don't know if some brands emit
255 |             # some sentences with different talker IDs
256 |             talker = fields[0][:-3]
257 |             sentence = fields[0][-3:]
258 |
259 |             # Look for timestamps first.
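            # ('ts' starts at the Unix epoch, so any sentence that arrives before
            # the first timestamp sentence fails the 'ts < startts' check below and
            # is skipped. The GPS's ZDA sentences are trusted over SEAiq's own REC
            # stamps, which are parsed but ignored.)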
260 | if sentence == "REC": 261 | # SeaIQ recording timestamp 262 | # $PSIQREC,0,1,1572722175.566,20191102,121615 263 | # print("## REC %s: %s %s %s" % (line, fields[3], fields[4], fields[5])) 264 | y = int(fields[4][0:4]) 265 | m = int(fields[4][4:6]) 266 | d = int(fields[4][6:8]) 267 | h = int(fields[5][0:2]) 268 | minute = int(fields[5][2:4]) 269 | s = int(fields[5][4:6]) 270 | ms = int(fields[3][-3:]) 271 | tslocal = datetime.datetime(year=y, month=m, day=d, hour=h, minute=minute, second=s, microsecond=ms*1000) 272 | newts = datetime.datetime.utcfromtimestamp(float(fields[3])) 273 | delta = newts - ts 274 | # Ignore the SeaIQ timestamps - use the ZDA sentence from the B&G 275 | #print("## New time REC %s delta %s" % (newts.strftime('%Y-%m-%dT%H:%M:%S'), delta)) 276 | #ts = newts 277 | 278 | elif sentence == "ZDA": 279 | # GPS timestamp 280 | # $GPZDA,191614,02,11,2019,07,00*4D 281 | # $GPZDA,UTC,Day,Month,Year,TZ-hours,TZ-minutes 282 | # UTC is in HHMMSS.xxxx 283 | y = int(fields[4]) 284 | m = int(fields[3]) 285 | d = int(fields[2]) 286 | h = int(fields[1][0:2]) 287 | minute = int(fields[1][2:4]) 288 | s = int(fields[1][4:6]) 289 | newts = datetime.datetime(year=y, month=m, day=d, hour=h, minute=minute, second=s) 290 | delta = newts - ts 291 | #print("## New time ZDA %s delta %s" % (newts.strftime('%Y-%m-%dT%H:%M:%S'), delta)) 292 | ts = newts 293 | 294 | if (ts < r['startts']): 295 | # Not yet in the race 296 | # print("## Not leg %s vs %s" % (ts.strftime('%Y-%m-%dT%H:%M:%S'), r['startts'].strftime('%Y-%m-%dT%H:%M:%S'))) 297 | continue 298 | 299 | if (ts >= r['endts']): 300 | print("## Done %s - %s %d of %d samples" % (r['startts'], r['endts'], sampleCount, sentenceCount)) 301 | break 302 | 303 | sentenceCount += 1 304 | if sentence == "GLL": 305 | # LAT/LON 306 | # $GPGLL,3748.8071,N,12227.9801,W,191614,A,A*5D 307 | # Remember to make South and West negative 308 | lat1, lat2 = fields[1].split('.') 309 | n = fields[2] == 'N' 310 | lat = (float(lat1[0:-2]) + (float(lat1[-2:] + '.' + lat2)/60)) * (1 if n else -1) 311 | 312 | lon1, lon2 = fields[3].split('.') 313 | e = fields[4] == 'E' 314 | lon = (float(lon1[0:-2]) + (float(lon1[-2:] + '.' 
+ lon2)/60)) * (1 if e else -1)
315 |
316 |                 r['LATLON'].append((ts, lat, lon))
317 |                 sampleCount += 1
318 |
319 |             elif sentence == "HDG":
320 |                 # Heading
321 |                 # $SDHDG,336.4,,,13.3,E*06
322 |                 if fields[4] != '':
323 |                     east = fields[5] == 'E'
324 |                     variation = float(fields[4]) * (1 if east else -1)
325 |                     if r['variation'] != variation:
326 |                         r['variation'] = variation
327 |                         variation = variation
328 |                         # print("Setting compass variation to %.1f" % (variation))
329 |                 hdg = float(fields[1])
330 |                 r['HDG'].append((ts, hdg))
331 |                 sampleCount += 1
332 |
333 |             elif sentence == "MWV":
334 |                 # Wind data
335 |                 # $WIMWV,336.8,R,18.7,N,A*13
336 |                 # $WIMWV,324.6,T,13.0,N,A*14
337 |                 if fields[4] == 'N':
338 |                     speed = float(fields[3])
339 |                 elif fields[4] == 'M':
340 |                     speed = ms2kts(float(fields[3])) # 'M' units are meters per second
341 |                 if fields[2] == 'R':
342 |                     # Apparent wind - should make sure it's in knots
343 |                     awa = float(fields[1])
344 |                     awa = awa if awa < 180 else awa - 360
345 |                     r['AWA'].append((ts, awa))
346 |                     r['AWS'].append((ts, speed))
347 |                 if fields[2] == 'T':
348 |                     # True wind - should make sure it's in knots
349 |                     #l['TWA'].append((ts, float(fields[1])))
350 |                     #l['TWS'].append((ts, speed))
351 |                     #sampleCount += 1
352 |                     # We compute this from smoothed data rather than trust the instrument's calc
353 |                     pass
354 |
355 |             elif sentence == "MWD":
356 |                 # Wind Direction and Speed, with respect to north
357 |                 # $WIMWD,70.4,T,57.1,M,4.7,N,2.4,M*5F
358 |                 #twd = float(fields[3])
359 |                 #tws = float(fields[5])
360 |                 # Not saving these for now - not sure I believe them
361 |                 pass
362 |
363 |             elif sentence == "VHW":
364 |                 # Speed through Water (with heading?)
365 |                 # $SDVHW,348.7,T,335.4,M,5.7,N,10.6,K*7E
366 |                 #hdg = float(fields[3]) # Do not believe heading from boat speed sensor
367 |                 stw = float(fields[5]) * STWCorrection
368 |                 r['STW'].append((ts, stw))
369 |                 sampleCount += 1
370 |             elif sentence == "VTG":
371 |                 # COG/SOG
372 |                 # $GPVTG,342.9,T,329.5,M,5.4,N,10.1,K,A*13
373 |                 cog = float(fields[3]) # use magnetic COG
374 |                 sog = float(fields[5])
375 |                 r['COG'].append((ts, cog))
376 |                 r['SOG'].append((ts, sog))
377 |                 sampleCount += 1
378 |         print("## Done Race %s %s - %s kept %d of %d sentences" % (r['race'], r['startts'], r['endts'], sampleCount, sentenceCount))
379 |
380 |
381 | # The N2k PGNs we need for sailing performance. Ignores things like routes/waypoints, AIS, etc.
382 | interesting_pgns = [127245, # Rudder angle
383 |                     127250, # Vessel heading
384 |                     127251, # Rate of turn
385 |                     127252, # Heave
386 |                     127257, # Attitude (yaw, pitch, roll)
387 |                     127258, # Magnetic variation
388 |                     128259, # Speed through water (boatspeed)
389 |                     129025, # Position (lat/lon)
390 |                     129026, # COG / SOG
391 |                     130306] # Wind data
392 |
393 | def parse_race_n2k(regatta, r):
394 |     # Parse the NMEA data for a single race.
395 | # Data starts in a .log file from the Yacht Devices YDVRConv app 396 | # Use the analyzer to convert it to JSON, filtering for the interesting PGNs 397 | 398 | c = regatta["courses"][r['course']] 399 | print("## Parse Race %s N2K" % (r['race'])) 400 | jsonfile = regatta['path'] + regatta["basefn"] + '_' + r['race'] + ".json" 401 | 402 | # If we don't have a JSON file, run the raw log through the analyzer filtering for the PGNs we care about 403 | if not os.path.isfile(jsonfile): 404 | pgnpat = '(' + str(interesting_pgns[0]) 405 | for pgn in interesting_pgns[1:]: 406 | pgnpat += '|' + str(pgn) 407 | pgnpat += ')' 408 | 409 | logfile = regatta["path"] + r['data'] 410 | print("Converting N2K %s to %s" % (logfile, jsonfile)) 411 | 412 | #with open(logfile, 'r') as infile, open(jsonfile, 'wb') as outfile: 413 | with open(jsonfile, 'wb') as outfile: 414 | if platform.system() == "Windows": 415 | # Analyzer has to be run under Cygwin; use bash to start both the analyzer and grep 416 | cwd=os.getcwd().replace('\\', '/') 417 | p1cmd = [ 'C:/cygwin64/bin/bash', '--login', '-c', '%s -json -file %s/%s' % (ANALYZER, cwd, logfile) ] 418 | p2cmd = [ 'C:/cygwin64/bin/bash', '--login', '-c', "grep -E '%s'" % (pgnpat)] 419 | else: 420 | # Assume Linux / MacOS 421 | p1cmd = [ ANALYZER, '-json', '-d', '-file', logfile ] 422 | p2cmd = [ '/usr/bin/grep', '-E', pgnpat] 423 | #p1 = subprocess.Popen(p1cmd, stdin=infile, stdout=subprocess.PIPE) 424 | # Using the -file argument instead of stdin 425 | p1 = subprocess.Popen(p1cmd, stdout=subprocess.PIPE) 426 | p2 = subprocess.Popen(p2cmd, stdin=p1.stdout, stdout=outfile) 427 | 428 | #out1, err1 = p1.communicate() 429 | ret1 = p1.wait() 430 | if ret1 == None: 431 | print("Analyzer wait() returned None") 432 | elif ret1 != 0: 433 | print("Analyzer exited with code %d" % (ret1)) 434 | print("infile %r" % (logfile)) 435 | print("p1cmd: %r" % (p1cmd)) 436 | print("p2cmd: %r" % (p2cmd)) 437 | print("outfile: %r" % (jsonfile)) 438 | # Not sure how to get stderr - if communicate is called it intercepts stdout being piped to grep 439 | #print("stderr: %r" % (err1)) 440 | else: 441 | print("Analyzer exited OK") 442 | ret2 = p2.wait() 443 | 444 | if ret2 == None: 445 | print("N2K analyze pipe not terminated?") 446 | p2.kill() 447 | os.remove(jsonfile) 448 | elif ret2 != 0: 449 | print("N2K analyze pipe exited with code %d" % (ret2)) 450 | print("p1cmd: %r" % (p1cmd)) 451 | print("p2cmd: %r" % (p2cmd)) 452 | print("Removing %s" % (jsonfile)) 453 | os.remove(jsonfile) 454 | return # maybe should quit program 455 | 456 | with open(jsonfile, "r") as f: 457 | r['variation'] = 0 458 | r['LATLON'] = [] 459 | variation = 0 460 | sampleCount = 0 461 | pgnCount = 0 462 | 463 | while True: 464 | line = f.readline() 465 | if not line: 466 | print("## End of data at %s (%s)" % (ts, j['timestamp'])) 467 | break 468 | try: 469 | j = json.loads(line) 470 | except: 471 | print("Parse exception %s" % (line)) 472 | continue 473 | 474 | #print("Line %s" % (line)) 475 | #print("JSON %r" % (j)) 476 | ts = datetime.datetime.strptime(j['timestamp'][:-4], '%Y-%m-%d-%H:%M:%S') 477 | pgn = j['pgn'] 478 | 479 | if pgn == 127258: 480 | #{"timestamp":"2019-10-20-19:04:56.535","prio":7,"src":1,"dst":255,"pgn":127258,"description":"Magnetic Variation","fields":{"SID":53,"Variation":13.3}} 481 | variation = j['fields']['Variation'] 482 | if variation != r['variation']: 483 | r['variation'] = variation 484 | # print("## Setting compass variation to %f" % (variation)) 485 | 486 | if (ts < r['startts']): 487 
| # Not yet on the new leg 488 | continue 489 | 490 | # print("%s vs %s" % (time, r["legs"][leg]["start"])) 491 | if (ts >= r['endts']): 492 | print("## Done Race %s %s - %s kept %d of %d pgns" % (r['race'], r['startts'], r['endts'], sampleCount, pgnCount)) 493 | break 494 | 495 | pgnCount += 1 496 | if pgn == 129025: 497 | # Record LAT/LON even when not on a leg 498 | #{"timestamp":"2019-10-20-19:04:57.588","prio":2,"src":1,"dst":255,"pgn":129025,"description":"Position, Rapid Update","fields":{"Latitude":37.8489280,"Longitude":-122.4480448}} 499 | #{"timestamp":"2019-10-20-19:04:57.598","prio":2,"src":21,"dst":255,"pgn":129025,"description":"Position, Rapid Update","fields":{"Latitude":37.8489088,"Longitude":-122.4480384}} 500 | if (LATLONSOURCE == None) or (j['src'] == LATLONSOURCE): 501 | if not 'fields' in j: 502 | print("LAT/LON missing fields %s" % (j)) 503 | else: 504 | r['LATLON'].append((ts, j['fields']['Latitude'], j['fields']['Longitude'])) 505 | sampleCount += 1 506 | 507 | elif pgn == 127245: 508 | #{"timestamp":"2019-10-20-19:04:56.206","prio":2,"src":204,"dst":255,"pgn":127245,"description":"Rudder","fields":{"Instance":252,"Direction Order":0}} 509 | #{"timestamp":"2019-10-20-19:04:56.206","prio":2,"src":204,"dst":255,"pgn":127245,"description":"Rudder","fields":{"Instance":0,"Position":19.2}} 510 | if (('Instance' in j['fields']) and (j['fields']['Instance'] == 0)): 511 | if not ('Position' in j['fields']): 512 | #print("## Rudder missing Position %s" % (j)) 513 | pass 514 | else: 515 | rudder = j['fields']['Position'] + rudderCorrection 516 | r['RUD'].append((ts, rudder)) 517 | sampleCount += 1 518 | 519 | elif pgn == 127250: 520 | #{"timestamp":"2019-10-20-19:04:56.308","prio":2,"src":204,"dst":255,"pgn":127250,"description":"Vessel Heading","fields":{"Heading":280.1,"Reference":"Magnetic"}} 521 | heading = j['fields']['Heading'] 522 | r['HDG'].append((ts, heading)) 523 | sampleCount += 1 524 | 525 | elif pgn == 128259: 526 | #{"timestamp":"2019-10-20-19:04:56.538","prio":2,"src":35,"dst":255,"pgn":128259,"description":"Speed","fields":{"SID":6,"Speed Water Referenced":1.35,"Speed Water Referenced Type":"Paddle wheel"}} 527 | ms = j['fields']['Speed Water Referenced'] 528 | stw = ms2kts(ms) * STWCorrection 529 | r['STW'].append((ts, stw)) 530 | sampleCount += 1 531 | 532 | elif pgn == 129026: 533 | #{"timestamp":"2019-10-20-19:04:57.891","prio":2,"src":1,"dst":255,"pgn":129026,"description":"COG & SOG, Rapid Update","fields":{"SID":54,"COG Reference":"True","COG":281.5,"SOG":1.38}} 534 | #{"timestamp":"2019-10-20-19:04:58.001","prio":2,"src":21,"dst":255,"pgn":129026,"description":"COG & SOG, Rapid Update","fields":{"COG Reference":"True","COG":187.6,"SOG":1.47}} 535 | if (COGSOGSOURCE == None) or (j['src'] == COGSOGSOURCE): 536 | if not 'COG Reference' in j['fields']: 537 | # print("No COG Reference %s" % (j)) 538 | pass 539 | else: 540 | cog = j['fields']['COG'] if j['fields']['COG Reference'] != "True" else j['fields']['COG'] - variation 541 | sog = ms2kts(j['fields']['SOG']) 542 | r['COG'].append((ts, cog)) 543 | r['SOG'].append((ts, sog)) 544 | sampleCount += 1 545 | 546 | elif pgn == 130306: 547 | #{"timestamp":"2019-10-20-19:04:58.009","prio":2,"src":9,"dst":255,"pgn":130306,"description":"Wind Data","fields":{"SID":0,"Wind Speed":7.18,"Wind Angle":281.8,"Reference":"Apparent"}} 548 | aws = ms2kts(j['fields']['Wind Speed']) 549 | awa = j['fields']['Wind Angle'] 550 | awa = awa if awa <= 180.0 else awa - 360.0 551 | r['AWS'].append((ts, aws)) 552 | 
r['AWA'].append((ts, awa))
553 |                 sampleCount += 1
554 |
555 |             elif pgn == 127257:
556 |                 # Attitude - pgn fields from analyzer are in degrees?
557 |                 r['Yaw'].append((ts, j['fields']['Yaw']))
558 |                 r['Pitch'].append((ts, j['fields']['Pitch']))
559 |                 r['Roll'].append((ts, j['fields']['Roll']))
560 |                 sampleCount += 1
561 |             elif pgn == 127251:
562 |                 # Rate of turn
563 |                 r['ROT'].append((ts, j['fields']['Rate']))
564 |                 sampleCount += 1
565 |             elif pgn == 127252:
566 |                 # Heave
567 |                 r['Heave'].append((ts, j['fields']['Heave']))
568 |                 sampleCount += 1
569 |
570 | def parse_race_csv(regatta, r):
571 |     # For now assume that CSV files are from Expedition
572 |     # Expedition files start with '!Boat' in the A0 cell followed by field names in B0-x0
573 |     # The second line starts with '!boat' in A1 and field numbers in B1-x1
574 |     # Line three is the Expedition version, and lines 4-n are pairs of field number and value
575 |     # The first field is the time (usually [0,Utc-time-as-a-float])
576 |
577 |     # https://stackoverflow.com/questions/46130132/converting-unix-time-into-date-time-via-excel
578 |     # Unix GMT timestamp = (DATE - 25569) * (86400 * 1000)
579 |
580 |     c = regatta["courses"][r['course']]
581 |     fn = regatta['path'] + r['data']
582 |     print("## Parse Race %s csv (Expedition): %s" % (r['race'], fn))
583 |     with open(fn, 'r') as f:
584 |         r['variation'] = 0
585 |         r['LATLON'] = []
586 |
587 |         variation = 0
588 |         sampleCount = 0
589 |         sentenceCount = 0
590 |         lineNumber = 0
591 |         expFields = {}
592 |         while True:
593 |             line = f.readline()
594 |             if not line:
595 |                 break
596 |             # Strip off the trailing newline
597 |             if len(line) > 0:
598 |                 line = line[:-1]
599 |
600 |             fields = line.split(',')
601 |
602 |             if (fields[0] == "!Boat"):
603 |                 #print("Found fieldNames");
604 |                 fieldNames = fields
605 |                 continue;
606 |             if (fields[0] == "!boat"):
607 |                 #print("Found fieldNumbers");
608 |                 fieldNumbers = fields
609 |                 for i in range(len(fieldNames)):
610 |                     expFields[fieldNumbers[i]] = fieldNames[i]
611 |                     #print("Field #%s: %s" % (fieldNumbers[i], fieldNames[i]))
612 |                 lineNumber = lineNumber + 1
613 |                 continue;
614 |
615 |             #print("field[0]: %s fields[0][0:1] '%s'" % (fields[0], fields[0][0:1]))
616 |             #print("field[-1]: %s fields[1][-2:-1] '%s'" % (fields[0], fields[0][-2:-1]))
617 |             if (fields[0][0:1] == "!"):
618 |                 print("Expedition version %s" % (fields[0][1:]))
619 |                 lineNumber = lineNumber + 1
620 |                 continue;
621 |
622 |             field = 0;
623 |             vals = {}
624 |             bad = False
625 |             while (field+1) < len(fields):
626 |                 bad = False
627 |                 fieldNumber, val = fields[field:field+2]
628 |                 if len(fieldNumber) == 0:
629 |                     field = field + 2
630 |                     continue;
631 |                 if not fieldNumber in expFields:
632 |                     print(line)
633 |                     print("line %d no field number %s" % (lineNumber, fieldNumber))
634 |                     field = field + 2
635 |                     continue
636 |                 variable = expFields[fieldNumber]
637 |                 try:
638 |                     vals[variable] = float(val)
639 |                 except ValueError:
640 |                     print("## line %d: vals[%s (#%s)]: %s" % (lineNumber, variable, fieldNumber, val))
641 |                     print("## %s" % (line))
642 |                     bad = True
643 |                     break
644 |
645 |                 field = field + 2
646 |
647 |             if bad:
648 |                 lineNumber = lineNumber + 1
649 |                 continue
650 |
651 |             if not 'Utc' in vals:
652 |                 print("Line %s has no timestamp" % (lineNumber))
653 |                 print("Line %s " % (line))
654 |                 lineNumber = lineNumber + 1
655 |                 continue
656 |
657 |             msDate = float(vals["Utc"])
658 |             seconds = int((msDate - 25569) * (86400)) # seconds, not the milliseconds of the formula above
659 |             ts = datetime.datetime.utcfromtimestamp(seconds)
660 |             #print("timestamp for %s (%d): %r" % (vals["Utc"], seconds, ts))
661 | 662 | if (ts < r['startts']): 663 | continue; 664 | if (ts >= r['endts']): 665 | break; 666 | 667 | # LATLON 668 | if ('Lat' in vals) and ('Lon' in vals): 669 | lat = vals['Lat'] 670 | lon = vals['Lon'] 671 | if not (BOUNDS is None): 672 | if (lat > BOUNDS['north']) or (lat < BOUNDS['south']) or (lon > BOUNDS['east']) or (lon < BOUNDS['west']): 673 | print("## Line %d out of bounds lat %.2f lon %.2f [north %.2f south %.2f west %.2f east %.2f]" % (lineNumber, lat, lon, BOUNDS['north'], BOUNDS['south'], BOUNDS['west'], BOUNDS['east'])) 674 | print(line) 675 | lineNumber = lineNumber + 1 676 | continue 677 | 678 | r['LATLON'].append((ts, vals["Lat"], vals["Lon"])) 679 | sampleCount += 1 680 | 681 | if 'HDG' in vals: 682 | r['HDG'].append((ts, vals['HDG'])) 683 | sampleCount += 1 684 | 685 | if 'AWA' in vals: 686 | r['AWA'].append((ts, vals['AWA'])) 687 | sampleCount += 1 688 | 689 | if 'AWS' in vals: 690 | r['AWS'].append((ts, vals['AWS'])) 691 | sampleCount += 1 692 | 693 | if 'TWA' in vals: 694 | r['TWA'].append((ts, vals['TWA'])) 695 | sampleCount += 1 696 | 697 | if 'TWS' in vals: 698 | r['TWS'].append((ts, vals['TWS'])) 699 | sampleCount += 1 700 | 701 | if 'BSP' in vals: 702 | r['STW'].append((ts, vals['BSP'])) 703 | sampleCount += 1 704 | 705 | if 'COG' in vals: 706 | r['COG'].append((ts, vals['COG'])) 707 | sampleCount += 1 708 | 709 | if 'SOG' in vals: 710 | r['SOG'].append((ts, vals['SOG'])) 711 | sampleCount += 1 712 | 713 | if 'Rudder' in vals: 714 | r['RUD'].append((ts, vals['Rudder'])) 715 | sampleCount += 1 716 | 717 | if 'ROT' in vals: 718 | r['ROT'].append((ts, vals['ROT'])) 719 | sampleCount += 1 720 | 721 | lineNumber = lineNumber + 1 722 | 723 | 724 | def parse_race(regatta, r): 725 | if r['data'][-5:] == '.nmea': 726 | parse_race_0183(regatta, r) 727 | elif r['data'][-4:] == '.log': 728 | parse_race_n2k(regatta, r) 729 | elif r['data'][-4:] == '.csv': 730 | parse_race_csv(regatta, r) 731 | else: 732 | print("Unknown NMEA format file %s" % (r['data'])) 733 | 734 | def analyze_race(regatta, r): 735 | # Find the start and end indices for each parameter on each leg 736 | # This is a few more lines than using a list comprehension, but it's O(n) instead of O(n^2) 737 | for l in r['legs']: 738 | legInit(l) 739 | 740 | for field in legFields: 741 | if field == 'Time': 742 | continue 743 | i = 0 744 | for leg, l in enumerate(r['legs']): 745 | if not field in r: 746 | # Don't have this field in the raw data 747 | l['sindex'][field] = 0 748 | l['eindex'][field] = 0 749 | print("## Leg %d No such field %s" % (leg, field)) 750 | break 751 | while (i < len(r[field])) and (r[field][i][0] < l['startts']): 752 | i += 1 753 | l['sindex'][field] = i 754 | while (i < len(r[field])) and (r[field][i][0] < l['endts']): 755 | i += 1 756 | # Last data item for this leg 757 | l['eindex'][field] = i 758 | #print("## Leg %d Field %s[%d:%d]" % (leg, f, l['sindex'][f], l['eindex'][f])) 759 | 760 | def analyze_leg(regatta, r, leg): 761 | # Coalesce the data from each leg into a list of 10 second samples 762 | # Compute the TWS, TWD, and TWA for each sample 763 | c = regatta['courses'][r['course']] 764 | l = r['legs'][leg] 765 | 766 | print("## Analyze %s Race %s Leg %d %s (%.2fnm @ %03d%s) - (%s - %s) %s" 767 | % (regatta['regatta'], r['race'], leg+1, c["legs"][leg]["label"], c["legs"][leg]["distance"], c["legs"][leg]["bearing"], deg, l['start'], l['end'], l['duration'])) 768 | 769 | # Chop the leg into 10 second buckets 770 | # Look for tacks and gybes 771 | bucketDelta = 
datetime.timedelta(seconds=sampleSeconds)
772 |     bucketStart = l['startts']
773 |
774 |     index = {}
775 |     for field in ['AWA', 'AWS', 'STW', 'SOG', 'RUD', 'TWS', 'COG', 'HDG', 'TWD', 'Pitch', 'Roll', 'ROT', 'Heave']:
776 |         index[field] = l['sindex'][field]
777 |
778 |     l['samples'] = []
779 |     while bucketStart < l['endts']:
780 |         bucket = {}
781 |         bucketEnd = bucketStart + bucketDelta
782 |         bucketEnd = min(bucketEnd, l['endts'])
783 |         bucket['ts'] = bucketStart
784 |
785 |         # Fields that can be a simple per bucket average
786 |         for field in ['AWA', 'AWS', 'STW', 'SOG', 'RUD', 'TWS', 'Pitch', 'Roll', 'ROT', 'Heave']:
787 |             total = 0.0
788 |             count = 0
789 |             # Advance to the beginning of the run of data for this bucket
790 |             while index[field] < l['eindex'][field] and r[field][index[field]][0] < bucketStart:
791 |                 index[field] += 1
792 |             # Run through the valid data summing the value
793 |             while index[field] < l['eindex'][field] and r[field][index[field]][0] < bucketEnd:
794 |                 d = r[field][index[field]]
795 |                 #if len(d) < 2:
796 |                 #    print("Field %s data element too small: %r" % (field, d))
797 |                 total += d[1]
798 |                 count += 1
799 |                 index[field] += 1
800 |             # Take the average for the bucket
801 |             bucket[field] = None if count == 0 else total / count
802 |
803 |         markBearing = c["legs"][leg]["bearing"]
804 |         # If we're going north-ish, convert COG and Heading ranges to [-180, 180] in case some samples straddle due north
805 |         # This is to catch the "averaging samples around due north leads to due south" problem - mean([0, 359.999]) = 180
806 |         for field in ['COG', 'HDG', 'TWD']:
807 |             total = 0.0
808 |             count = 0
809 |             # Advance to the beginning of the run of data for this bucket
810 |             while index[field] < l['eindex'][field] and r[field][index[field]][0] < bucketStart:
811 |                 index[field] += 1
812 |             while index[field] < l['eindex'][field] and r[field][index[field]][0] < bucketEnd:
813 |                 d = r[field][index[field]]
814 |                 # If we're going north-ish, convert COG and Heading ranges to [-180, 180] in case some samples straddle due north
815 |                 total += d[1] if (markBearing > 90 and markBearing < 270) or (d[1] <= 180) else d[1] - 360.0
816 |                 count += 1
817 |                 index[field] += 1
818 |             if count == 0:
819 |                 bucket[field] = None
820 |             else:
821 |                 bucket[field] = total / count
822 |                 bucket[field] += 0.0 if (markBearing > 90 and markBearing < 270) or (bucket[field] > 0) else 360.0 # Convert back to 0 - 360 if needed
823 |
824 |         """
825 |         Compute TWS and TWD for this bucket
826 |
827 |         AWA = + for Starboard, - for Port
828 |         AWD = HDG + AWA (0 <= AWD < 360)
829 |         u (north component) = SOG * cos(COG) - AWS * cos(AWD)
830 |         v (east component)  = SOG * sin(COG) - AWS * sin(AWD)
831 |         TWS = sqrt(u*u + v*v)
832 |         TWD = atan2(v, u) + pi
833 |         """
834 |
835 |         if bucket['HDG'] == None or bucket['COG'] == None or bucket['SOG'] == None or bucket['AWA'] == None or bucket['AWS'] == None:
836 |             # Can't compute True Wind w/o HDG, COG, SOG and AWA, AWS
837 |             bucket['TWD'] = None
838 |             bucket['TWS'] = None
839 |             bucket['TWA'] = None
840 |         else:
841 |             hdg = radians(bucket['HDG'])
842 |             aws = bucket['AWS']
843 |             awa = radians(bucket['AWA'])
844 |             cog = radians(bucket['COG'])
845 |             sog = bucket['SOG']
846 |
847 |             # Compute TWD, TWS if it's not done by the instruments
848 |             if bucket['TWD'] == None:
849 |                 awd = fmod(hdg + awa, tau) # Compensate for boat's heading
850 |                 u = (sog * cos(cog)) - (aws * cos(awd))
851 |                 v = (sog * sin(cog)) - (aws * sin(awd))
852 |                 tws = sqrt((u*u) + (v*v))
853 |                 # Now we want to know where it's from, not where it's going, so add pi
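                # (Worked example, assuming a boat drifting with SOG 0, HDG/COG 0,
                # and a 10 kt apparent wind dead ahead: awd = 0, so u = -10, v = 0,
                # tws = 10, and atan2(0, -10) + pi wraps to 0 - i.e. TWD is 000,
                # a northerly, which is indeed the direction the wind blows from.)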
854 |                 twd = degrees(fmod(atan2(v,u)+pi, tau))
855 |
856 |                 bucket['TWS'] = tws
857 |                 bucket['TWD'] = twd
858 |
859 |             # Compute TWA from TWD and Heading
860 |             twa = fmod((bucket['TWD'] - bucket['HDG']) + 360.0, 360.0)
861 |             bucket['TWA'] = twa
862 |
863 |         bucketStart = bucketEnd
864 |         for key, value in bucket.items():
865 |             # Make sure there's at least one valid data item for this bucket
866 |             if key != 'ts' and value != None:
867 |                 l['samples'].append(bucket) # add this bucket to the list of samples
868 |                 break # one valid field is enough; append the bucket only once
869 |
870 |     print("## analyze_race %s Leg %d samples %d" % (r['race'], leg+1, len(l['samples'])))
871 |
872 | def average_sample_fields(samples, markBearing):
873 |     avg_fields = ['AWA', 'AWS', 'STW', 'RUD', 'SOG', 'TWS', 'COG', 'HDG', 'TWD', 'TWA']
874 |     d = {}
875 |     counts = {}
876 |     for field in avg_fields:
877 |         d[field] = None
878 |         counts[field] = 0
879 |
880 |     for s in samples:
881 |         for field in ['AWA', 'AWS', 'STW', 'RUD', 'SOG', 'TWS']:
882 |             if s[field] != None:
883 |                 d[field] = s[field] if d[field] == None else d[field] + s[field]
884 |                 counts[field] += 1
885 |         for field in ['COG', 'HDG', 'TWD', 'TWA']:
886 |             if s[field] != None:
887 |                 # If course is biased north change range to [-180, 180]
888 |                 angle = s[field] if (markBearing >= 90 and markBearing <= 270) or (s[field] <= 180) else s[field] - 360
889 |                 d[field] = angle if d[field] == None else d[field] + angle
890 |                 counts[field] += 1
891 |     for field in ['AWA', 'AWS', 'STW', 'RUD', 'SOG', 'TWS']:
892 |         if d[field] != None:
893 |             d[field] /= counts[field] # average over the samples that actually had data
894 |     for field in ['COG', 'HDG', 'TWD', 'TWA']:
895 |         if d[field] != None:
896 |             d[field] /= counts[field] # average over the samples that actually had data
897 |             d[field] += 0.0 if d[field] > 0 else 360.0 # Convert back to 0-360
898 |     # Don't return a data bucket unless it has at least one valid data point
899 |     for field in avg_fields:
900 |         if d[field] != None:
901 |             return(d)
902 |     print("## average_sample_fields no data")
903 |     return(None)
904 |
905 | @dataclass
906 | class LegItem:
907 |     t: str = None
908 |     comment: str = None
909 |     start: datetime.datetime = None
910 |     end: datetime.datetime = None
911 |     duration: datetime.timedelta = None
912 |     board: str = None
913 |     hdg: float = None
914 |     awa: float = None
915 |     aws: float = None
916 |     stw: float = None
917 |     cog: float = None
918 |     sog: float = None
919 |     rud: float = None
920 |     tws: float = None
921 |     twd: float = None
922 |     twa: float = None
923 |
924 | # Return a list of analysis items. Each item can be a comment or a line of data.
925 | # Data lines can be a minute, a board summary, or a leg summary
926 | def analyze_by_minute(regatta, r):
927 |     items = []
928 |     c = regatta['courses'][r['course']]
929 |
930 |     items.append(LegItem(t="Regatta", comment="%s - %s - %d race%s" % (regatta['boat'], regatta['regatta'], len(regatta['races']), "" if len(regatta['races']) < 2 else "s")))
931 |     for leg, l in enumerate(r['legs']):
932 |         markBearing = c["legs"][leg]["bearing"]
933 |         items.append(LegItem(t="Leg", comment="Race %s Course %s Leg %d %s (%.2fnm @ %03d%s)" %
934 |                              (r['race'], r['course'], leg+1, c["legs"][leg]["label"], c["legs"][leg]["distance"], markBearing, deg)))
935 |
936 |     items.append(LegItem(t="Blank"))
937 |     for leg, l in enumerate(r['legs']):
938 |         legStart = l['startts']
939 |         legEnd = l['endts']
940 |         markBearing = c["legs"][leg]["bearing"]
941 |
942 |         items.append(LegItem(t="Leg", comment="Race %s Leg %d %s (%.2fnm @ %03d%s) - (%s - %s) %s" %
943 |                              (r['race'], leg+1, c["legs"][leg]["label"], c["legs"][leg]["distance"], markBearing, deg, addtz(legStart)[11:], addtz(legEnd)[11:], l['duration'])))
944 |
945 |         legSamples = l['samples']
946 |         if legSamples[0]['AWA'] == None:
947 |             print("## No AWA at start of leg %s %r" % (addtz(legStart)[11:], legSamples[0]))
948 |         board = 'Port' if legSamples[0]['AWA'] < 0 else 'Stbd'
949 |         # Loop through the AWAs looking for tacks and gybes
950 |         l['boards'] = []
951 |         bstart = l['startts']
952 |         bend = l['startts']
953 |         for i in range(1, len(legSamples)):
954 |             awa = legSamples[i]['AWA']
955 |
956 |             if awa is None:
957 |                 print("AWA is none for leg sample %d of %d legSamples" % (i, len(legSamples)))
958 |                 continue
959 |
960 |             # Use sample if not near a tack or gybe
961 |             if abs(awa) > minAWA and abs(awa) < maxAWA:
962 |                 samplets = legSamples[i]['ts']
963 |                 if (board == 'Stbd' and awa < 0) or (board == 'Port' and awa > 0):
964 |                     l['boards'].append((bstart, bend, board))
965 |                     board = 'Port' if awa < 0 else 'Stbd'
966 |                     #print("New board %s (%3f) @ %s" % (board, awa, samplets))
967 |                     bstart = samplets
968 |                 bend = samplets # extend end timestamp to current sample - but not if tacking or gybing
969 |
970 |         # Last board for this leg
971 |         l['boards'].append((bstart, bend, board))
972 |
973 |         print("## analyze_by_minute Leg %d %s #boards %d" % (leg, l['boards'][0][1], len(l['boards'])))
974 |
975 |         bucketDelta = datetime.timedelta(seconds=reportSeconds)
976 |         for b in range(len(l['boards'])):
977 |             boardStart = l['boards'][b][0]
978 |             boardEnd = l['boards'][b][1]
979 |             items.append(LegItem(t="Board", comment="Race %s Leg %d Board %d @ %s - %s %s" % (r['race'], leg+1, b+1, addtz(boardStart)[11:], addtz(boardEnd)[11:], boardEnd - boardStart)))
980 |
981 |             # Time range is start to end (not > start) since boardStart includes waiting for tack or gybe to finish
982 |             boardSamples = [ s for s in legSamples if s['ts'] >= boardStart and s['ts'] <= boardEnd ]
983 |             minute = {}
984 |             bucketStart = boardStart
985 |             while bucketStart < boardEnd:
986 |                 # Per-minute
987 |                 bucketEnd = bucketStart + bucketDelta
988 |                 bucketEnd = min(bucketEnd, boardEnd)
989 |                 bucketSamples = [ s for s in boardSamples if s['ts'] > bucketStart and s['ts'] <= bucketEnd ]
990 |                 savg = average_sample_fields(bucketSamples, markBearing)
991 |                 if savg != None:
992 |                     items.append(LegItem("Minute", None, bucketStart, bucketEnd, None, l['boards'][b][2], savg['HDG'], savg['AWA'], savg['AWS'], savg['STW'], savg['COG'], savg['SOG'], savg['RUD'], savg['TWS'], savg['TWD'], savg['TWA']))
993 |                 bucketStart = bucketEnd
994 |
995 |             # Per-board
996 |             if len(boardSamples) == 0:
997 |                 items.append(LegItem(t="Board", comment=("Board %s - %s %s - no samples" %
998 |                                                          (addtz(boardStart)[11:], addtz(boardEnd)[11:], boardEnd - boardStart))))
999 |             else:
1000 |                 savg = average_sample_fields(boardSamples, markBearing)
1001 |                 if savg != None:
1002 |                     items.append(LegItem("Board", None, boardStart, boardEnd, boardEnd - boardStart, l['boards'][b][2], savg['HDG'], savg['AWA'], savg['AWS'], savg['STW'], savg['COG'], savg['SOG'], savg['RUD'], savg['TWS'], savg['TWD'], savg['TWA']))
1003 |
1004 |             if (b < len(l['boards'])-1):
1005 |                 items.append(LegItem(t="Blank"))
1006 |
1007 |         # Per-leg
1008 |         if len(legSamples) == 0:
1009 |             items.append(LegItem(t="Leg", comment=("Leg %s - %s %s - no samples" %
1010 |                                                    (addtz(legStart)[11:], addtz(legEnd)[11:], legEnd - legStart))))
1011 |         else:
1012 |             savg = average_sample_fields(legSamples, markBearing)
1013 |             items.append(LegItem("Leg", None, legStart, legEnd, legEnd - legStart, None, savg['HDG'], savg['AWA'], savg['AWS'], savg['STW'], savg['COG'], savg['SOG'], savg['RUD'], savg['TWS'], savg['TWD'], savg['TWA']))
1014 |         items.append(LegItem("Blank"))
1015 |     return(items)
1016 |
1017 | @dataclass
1018 | class XlsxFormats:
1019 |     degree3 = None
1020 |     float0 = None
1021 |     float1 = None
1022 |     float2 = None
1023 |     time = None
1024 |     timedelta = None
1025 |     bold = None
1026 |     board = None
1027 |     leg = None
1028 |
1029 | xlsxF = XlsxFormats()
1030 |
1031 | def per_race_xlsx(xl, regatta, r):
1032 |     items = analyze_by_minute(regatta, r)
1033 |
1034 |     ws = xl.add_worksheet("%s_%s" % (regatta['regatta'], r['race']))
1035 |
1036 |     column_formats = [ None, xlsxF.time, xlsxF.time, xlsxF.timedelta, None, xlsxF.degree3, xlsxF.degree3, xlsxF.float1, xlsxF.float1, xlsxF.degree3, xlsxF.float1, xlsxF.float1, xlsxF.float1, xlsxF.degree3, xlsxF.degree3 ]
1037 |     last_col = len(column_formats) - 1
1038 |     for col, f in enumerate(column_formats):
1039 |         if f != None:
1040 |             ws.set_column(col, col, None, f)
1041 |
1042 |     row = 0
1043 |     column_labels = [ None, "Start", "End", "Duration", "Board", "HDG", "AWA", "AWS", "STW", "COG", "SOG", "RUD", "TWS", "TWD", "TWA" ]
1044 |     ws.set_row(row, None, xlsxF.header)
1045 |     ws.write_row(row, 0, column_labels)
1046 |     ws.freeze_panes(1, 0)
1047 |     row += 1
1048 |
1049 |     for i in items:
1050 |         if i.t == "Leg" or i.t == "Board" or i.t == "Minute" or i.t == "Regatta":
1051 |             if i.comment != None:
1052 |                 ws.write_row(row, 0, [i.t, i.comment])
1053 |             else:
1054 |                 ws.write_row(row, 0, [i.t, addtz(i.start)[11:], addtz(i.end)[11:], i.end - i.start if (i.t == "Board") or (i.t == "Leg") else None, i.board if i.t == "Minute" else None,
1055 |                                       i.hdg, i.awa, i.aws, i.stw, i.cog, i.sog, i.rud, i.tws, i.twd, i.twa])
1056 |                 if i.hdg == None and i.t == "Minute":
1057 |                     print("## %s HDG None" % (addtz(i.start)[11:]))
1058 |             row += 1
1059 |         elif i.t == "Blank":
1060 |             row += 1
1061 |
1062 |     # Create the conditional highlighting for the regatta, leg, and board summary lines
1063 |     # I don't understand why we need the INDIRECT() call
1064 |     ws.conditional_format(0, 0, row, last_col, {'type': 'formula', 'criteria': '=INDIRECT("A"&ROW())="Regatta"', 'format': xlsxF.regatta})
1065 |     ws.conditional_format(0, 0, row, last_col, {'type': 'formula', 'criteria': '=INDIRECT("A"&ROW())="Leg"', 'format': xlsxF.leg})
1066 |     ws.conditional_format(0, 0, row, last_col, {'type': 'formula', 'criteria': '=INDIRECT("A"&ROW())="Board"', 'format': xlsxF.board})
1067 |
1068 | def spreadsheet_report():
1069 |     bn =
regattas[regattalist[0]]['boat'] 1070 | xlfn = "%s_%s.xlsx" % (bn, "aggregate" if len(regattalist) > 1 else regattas[regattalist[0]]['basefn']) 1071 | 1072 | with xlsxwriter.Workbook(xlfn) as xl: 1073 | xlsxF.degree3 = xl.add_format({'num_format': '000'}) 1074 | xlsxF.float0 = xl.add_format({'num_format': '##0'}) 1075 | xlsxF.float1 = xl.add_format({'num_format': '##0.0'}) 1076 | xlsxF.float2 = xl.add_format({'num_format': '##0.00'}) 1077 | xlsxF.time = xl.add_format({'num_format': '[h]:mm:ss'}) 1078 | xlsxF.timedelta = xl.add_format({'num_format': '[h]:mm:ss'}) 1079 | xlsxF.header = xl.add_format({'bold': True, 'align': 'right'}) 1080 | 1081 | # Pastels from https://www.schemecolor.com/rainbow-pastels-color-scheme.php 1082 | xlsxF.regatta = xl.add_format({'bold': True, 'bg_color': '#FFB7B2'}) # Red-ish 1083 | xlsxF.leg = xl.add_format({'bold': True, 'bg_color': '#E2F0CB'}) # Green-ish 1084 | xlsxF.board = xl.add_format({'bg_color': '#C7CEEA'}) # Blue-ish 1085 | 1086 | for rn in regattalist: 1087 | reg = regattas[rn] 1088 | for r in reg['races']: 1089 | per_race_xlsx(xl, reg, r) 1090 | 1091 | def none_sub(v, f): 1092 | # this is a kludgey way of returning a string the length of the format with a - at the end 1093 | tmp = " -" 1094 | if v != None: 1095 | return(f % (v)) 1096 | return(tmp[-int(f[1]):]) 1097 | 1098 | def per_leg_report(regatta, r): 1099 | ofn = "%s_%s_%s_legs.txt" % (regatta['boat'], regatta['basefn'], r['race']) 1100 | print("## Create %s" % (ofn)) 1101 | 1102 | items = analyze_by_minute(regatta, r) 1103 | 1104 | # Create a minute-by-minute report for the leg 1105 | with open(ofn, "w") as f: 1106 | for i in items: 1107 | if i.t == "Leg" or i.t == "Board" or i.t == "Minute" or i.t == "Regatta": 1108 | if i.comment != None: 1109 | f.write("%s\n" % (i.comment)) 1110 | elif i.t == "Leg" or i.t == "Board": 1111 | #f.write("%6s %s - %s %8s HDG %3.0f AWA %4.0f AWS %4.1f STW %4.1f COG %3.0f SOG %4.1f RUD %5.1f TWS %4.1f TWD %4.0f TWA %4.0f\n" % 1112 | f.write("%6s %s - %s %8s HDG %s AWA %s AWS %s STW %s COG %s SOG %s RUD %s TWS %s TWD %s TWA %s\n" % 1113 | (i.t, addtz(i.start)[11:], addtz(i.end)[11:], i.end - i.start, 1114 | none_sub(i.hdg, "%3.0f"), none_sub(i.awa, "%4.0f"), none_sub(i.aws, "%4.1f"), none_sub(i.stw, "%4.1f"), none_sub(i.cog, "%3.0f"), none_sub(i.sog, "%4.1f"), none_sub(i.rud, "%5.1f"), none_sub(i.tws, "%4.1f"), none_sub(i.twd, "%4.0f"), none_sub(i.twa, "%4.0f"))) 1115 | elif i.t == "Minute": 1116 | #f.write("Minute %s - %s %s HDG %3.0f AWA %4.0f AWS %4.1f STW %4.1f COG %3.0f SOG %4.1f RUD %5.1f TWS %4.1f TWD %4.0f TWA %4.0f\n" % 1117 | f.write("Minute %s - %s %s HDG %s AWA %s AWS %s STW %s COG %s SOG %s RUD %s TWS %s TWD %s TWA %s\n" % 1118 | (addtz(i.start)[11:], addtz(i.end)[11:], i.board, 1119 | none_sub(i.hdg, "%3.0f"), 1120 | none_sub(i.awa, "%4.0f"), 1121 | none_sub(i.aws, "%4.1f"), 1122 | none_sub(i.stw, "%4.1f"), 1123 | none_sub(i.cog, "%3.0f"), 1124 | none_sub(i.sog, "%4.1f"), 1125 | none_sub(i.rud, "%5.1f"), 1126 | none_sub(i.tws, "%4.1f"), 1127 | none_sub(i.twd, "%4.0f"), 1128 | none_sub(i.twa, "%4.0f"))) 1129 | elif i.t == "Blank": 1130 | f.write("\n") 1131 | 1132 | # Many possible line styles 1133 | # line_styles = cycle(['-','-','-', '--', '-.', ':', '.', ',', 'o', 'v', '^', '<', '>', '1', '2', '3', '4', 's', 'p', '*', 'h', 'H', '+', 'x', 'D', 'd', '|', '_']) 1134 | 1135 | maxLegDuration = 0 1136 | maxPlotHeight = 3.0 1137 | maxPlotWidth = 10.0 1138 | 1139 | def leg_chart(regatta, r, leg, fig, ax): 1140 | y_scales = {} 1141 | y_scales["compass"] = 
{"in_use": False, "min": 0, "max": 360, "label": ""} 1142 | y_scales["apparent"] = {"in_use": False, "min": -180, "max": 180, "label": ""} 1143 | y_scales["windspeed"] = {"in_use": False, "min": 0, "max": 25, "label": ""} 1144 | y_scales["boatspeed"] = {"in_use": False, "min": 0, "max": 12, "label": ""} 1145 | y_scales["rudder"] = {"in_use": False, "min": -20, "max": 20, "label": ""} 1146 | 1147 | plotItems = {} 1148 | plotItems['COG'] = { 'label': 'COG', 'scale': 'compass', 'color': 'black', 'style': '-' } 1149 | plotItems['SOG'] = { 'label': 'SOG', 'scale': 'boatspeed', 'color': 'black', 'style': '-' } 1150 | plotItems['TWD'] = { 'label': 'TWD', 'scale': 'compass', 'color': cmap[0], 'style': '--' } 1151 | plotItems['TWS'] = { 'label': 'TWS', 'scale': 'windspeed', 'color': cmap[0], 'style': '-' } 1152 | plotItems['AWA'] = { 'label': 'AWA', 'scale': 'apparent', 'color': cmap[1], 'style': '-' } 1153 | plotItems['AWS'] = { 'label': 'AWS', 'scale': 'windspeed', 'color': cmap[1], 'style': '--' } 1154 | plotItems['HDG'] = { 'label': 'HDG', 'scale': 'compass', 'color': cmap[4], 'style': '-' } 1155 | plotItems['STW'] = { 'label': 'STW', 'scale': 'boatspeed', 'color': cmap[2], 'style': '-' } 1156 | plotItems['RUD'] = { 'label': 'Rudder', 'scale': 'rudder', 'color': cmap[3], 'style': ':' } 1157 | plotItems['Roll'] = { 'label': 'Heel', 'scale': 'rudder', 'color': cmap[3], 'style': '-' } 1158 | 1159 | c = regatta['courses'][r['course']] 1160 | l = r['legs'][leg] 1161 | 1162 | legDataFields = ["AWA", 'STW', "TWD", "TWS", "RUD", 'Roll'] 1163 | legData = [] 1164 | for ld in legDataFields: 1165 | if (len(l['samples']) == 0): 1166 | print("## Leg %d No samples for %s" % (leg, ld)) 1167 | else: 1168 | if ld in l['samples'][0] and l['samples'][0][ld] != None: 1169 | legData.append(ld) 1170 | 1171 | firstTime = l['startts'] + tzoffset 1172 | lastTime = firstTime + datetime.timedelta(seconds=maxLegDuration) 1173 | 1174 | print("## leg_chart Race %s Leg %r Samples %d" % (r['race'], leg, len(legData))) 1175 | 1176 | ax[leg].set_title(loc='left', label="Race %s Leg %d %s (%.2fnm @ %03d%s) - (%s - %s) %s" 1177 | % (r['race'], leg+1, c["legs"][leg]["label"], c["legs"][leg]["distance"], c["legs"][leg]["bearing"], deg, l['start'][11:], l['end'][11:], l['duration'])) 1178 | ax[leg].set_xlim(firstTime, lastTime) 1179 | ax[leg].hlines(y=0, color='black', xmin=firstTime, xmax=lastTime, linestyles='--', linewidth=0.5) 1180 | 1181 | hostUsed = False 1182 | 1183 | for d in legData: 1184 | s = y_scales[plotItems[d]["scale"]] 1185 | if not s["in_use"]: 1186 | s["in_use"] = True 1187 | s["label"] = plotItems[d]["label"] 1188 | if not hostUsed: 1189 | s["isHost"] = True 1190 | s["axis"] = ax[leg] 1191 | else: 1192 | s["isHost"] = False 1193 | s['axis'] = ax[leg].twinx() 1194 | hostUsed = True 1195 | else: 1196 | s["label"] = "%s, %s" % (s["label"], plotItems[d]["label"]) 1197 | 1198 | numScales = 0 1199 | for scale in y_scales: 1200 | if s["in_use"]: 1201 | numScales += 1 1202 | scaleWidth = 1.2 # inches for each scale 1203 | 1204 | scalesWidth = numScales * scaleWidth 1205 | maxDataWidth = maxPlotWidth - scalesWidth 1206 | scalesPct = (scalesWidth-1) / maxPlotWidth # one scale is on the left 1207 | 1208 | i = 0 1209 | for scale in y_scales: 1210 | s = y_scales[scale] 1211 | if s["in_use"]: 1212 | #print("Axis R%s L%d Axis %d: %s(%s)" % (race, leg, i, scale, s['label'])) 1213 | a = s["axis"] 1214 | a.set_ylabel(s["label"]) 1215 | a.set_autoscaley_on(False) 1216 | a.set_ylim(s["min"], s["max"]) 1217 | if not s['isHost']: 1218 | 
                a.spines['right'].set_position(('axes', 1.0+(scalesPct * i / numScales)))
            i += 1

    legendLines = []
    portLegend = None
    stbdLegend = None
    for d in legData:
        if (l['samples'][0][d] == None):
            # Don't plot if there's no data
            continue
        a = y_scales[plotItems[d]['scale']]['axis']

        x = [ (s['ts']+tzoffset) for s in l['samples'] ]
        y = [ s[d] for s in l['samples'] ]
        color = plotItems[d]['color']
        a.yaxis.label.set_color(color)
        a.tick_params(axis='y', colors=color)
        a.spines['right'].set_color(color)
        #if d == 'TWS(avg)':
        #    a.hlines(y=[12, 15, 18], color=color, xmin=firstTime, xmax=lastTime, linestyles=':', linewidth=0.5)

        style = plotItems[d]['style']
        #print("Plot R%s L%d B%d %s %s %s" % (race, leg, b, d, color, style))
        # Too few samples for the Savitsky-Golay window? Plot the raw values
        smooth = y if len(y) < 7 else scipy.signal.savgol_filter(y, 7, 3)
        line, = a.plot(x, smooth, color=color, linestyle=style, label=plotItems[d]['label'])
        legendLines.append(line)

    # Setting the xaxis has to come after the plot - maybe because it needs x data?
    xfmt = mdates.DateFormatter("%H:%M")
    ax[leg].xaxis.set_major_formatter(xfmt)

    if portLegend:
        legendLines.insert(0, portLegend)
    if stbdLegend:
        legendLines.insert(0, stbdLegend)
    ax[leg].legend(handles=legendLines, loc='best')

    #plt.subplot(ax[leg])
    #plt.axhline()
    #plt.close(fig)

def strip_charts(regatta, r):
    legs = len(r['legs'])
    #fig, ax = plt.subplots(legs, figsize=(maxPlotWidth, maxPlotHeight * legs), constrained_layout=True)
    #fig, ax = plt.subplots(legs, constrained_layout=True)
    #fig, ax = plt.subplots(legs)

    fig, ax = plt.subplots(legs, figsize=(maxPlotWidth, maxPlotHeight * legs + .5))
    if legs == 1:
        ax = [ax] # with a single leg, subplots() returns a bare Axes - keep indexing uniform
    #fig.autofmt_xdate()
    fig.suptitle("%s - %s Race %s - %s %s - %s\n " % (regatta['boat'], regatta['regatta'], r['race'], r['legs'][0]['start'][:10], r['legs'][0]['start'][11:], r['legs'][-1]['end'][11:]))

    # Find the longest leg to make all the strip charts the same length
    maxDur = datetime.timedelta(seconds=15 * 60)
    for l in r['legs']:
        dur = l['endts'] - l['startts']
        maxDur = dur if maxDur < dur else maxDur
    # Pad to the next minute for short legs or by five minutes for longer ones
    minutes = int(maxDur.total_seconds() / 60)
    minutes += 1 if minutes < 30 else 5
    global maxLegDuration
    maxLegDuration = minutes * 60

    for leg in range(len(r['legs'])):
        leg_chart(regatta, r, leg, fig, ax)

    #fig.set_constrained_layout_pads(hspace=5.0, h_pad=1.0)
    plt.tight_layout(h_pad=0.0, w_pad=0.0)
    plt.subplots_adjust(top=0.94, bottom=0.04, left=0.08, right=0.72, wspace=0.05, hspace=0.25)
    #plt.subplots_adjust(top=0.96, bottom=0.04, left=0.08, right=0.72, wspace=0.05, hspace=0.25)
    #plt.show()
    plt.savefig("%s_%s_%s_strip" % (regatta['boat'], regatta['basefn'], r['race']), bbox_inches="tight")

################### Polar variables

# polarData defines the wind ranges we're interested in graphing. Should probably be defined externally.
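# A sketch of that externalization (editor's addition; unused, and the file name
# and schema are invented): the ranges could live in a JSON file next to
# regatta.json and be loaded with something like
def load_polar_ranges(fn="polar_ranges.json"):
    import json # only needed by this optional helper
    # Expects e.g. [ {"min": 0.0, "max": 11.0}, ... ] ordered by increasing max
    with open(fn) as f:
        return [ { 'min': p['min'], 'max': p['max'], 'ax': None, 'data': [] } for p in json.load(f) ]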

# This range matched an ORRez certificate; it's kept for reference but the
# definition below overrides it
#polarData = [
#    { 'min': 0.0, 'max': 9.3, 'ax': None, 'data': [] },
#    { 'min': 9.3, 'max': 11.5, 'ax': None, 'data': [] },
#    { 'min': 11.5, 'max': 14.6, 'ax': None, 'data': [] },
#    { 'min': 14.6, 'max': 18.7, 'ax': None, 'data': [] },
#    { 'min': 18.7, 'max': 22.0, 'ax': None, 'data': [] },
#    { 'min': 22.0, 'max': 28.0, 'ax': None, 'data': [] }
#]

# This range matches where a particular boat accelerates
# Six plots fit well on a page
polarData = [
    { 'min': 0.0, 'max': 11.0, 'ax': None, 'data': [] },
    { 'min': 11.0, 'max': 13.0, 'ax': None, 'data': [] },
    { 'min': 13.0, 'max': 15.0, 'ax': None, 'data': [] },
    { 'min': 15.0, 'max': 18.0, 'ax': None, 'data': [] },
    { 'min': 18.0, 'max': 22.0, 'ax': None, 'data': [] },
    { 'min': 22.0, 'max': 28.0, 'ax': None, 'data': [] }
]

def gather_polar_data(rl):
    global racecount
    global legcount
    for reg in rl:
        regatta = regattas[reg]
        print("## plot_polar regatta %s" % (regatta['regatta']))
        for r in regatta['races']:
            print("## plot_polar regatta %s race %s" % (regatta['regatta'], r['race']))
            racecount += 1
            legcount += len(r['legs'])

            for leg, l in enumerate(r['legs']):
                for s in l['samples']:
                    # Add this sample to the correct polar wind range
                    twa = s['TWA']
                    tws = s['TWS']
                    stw = s['STW']
                    twd = s['TWD']
                    hdg = s['HDG']
                    ts = s['ts']

                    if tws == None:
                        print("## Skipping sample with no TWS")
                        continue

                    # Ranges are ordered by increasing max, so the first max above tws
                    # is the right bucket; samples at or above the last max are dropped
                    for p, polar in enumerate(polarData):
                        if type(polar['max']) != type(tws):
                            print("## type(polar['max']): %r, type(tws): %r" % (type(polar['max']), type(tws)))
                        if polar['max'] > tws:
                            theta = radians(twa)
                            datum = (theta, stw, 'red' if theta > pi else 'green', r['race'], leg, ts, twd, hdg, twa)
                            #datum = (theta, stw, plt.cm.Set2.colors[p], int(race), leg, l['Time'][d], l['TWD(med)'][d], l['Heading'][d])
                            polar['data'].append(datum)
                            break

lineSpan = 7.5

# Plot performance for various wind ranges and
# Make an aggregate plot with all wind ranges
# Six plots fit well on a standard page
def plot_polars():
    global racecount
    global legcount
    fig, axes = plt.subplots(2, 3, figsize=(10, 8), subplot_kw=dict(polar=True), constrained_layout=True)
    polarData[0]['ax'] = axes[0, 0]
    polarData[1]['ax'] = axes[0, 1]
    polarData[2]['ax'] = axes[0, 2]
    polarData[3]['ax'] = axes[1, 0]
    polarData[4]['ax'] = axes[1, 1]
    polarData[5]['ax'] = axes[1, 2]

    regattacount = len(regattas)
    #racecount = 0
    #legcount = 0
    bn = regattas[regattalist[0]]['boat']
    if regattacount > 1:
        fig.suptitle("%s - %d Regatta%s, %d Race%s, %d Legs\n" % (bn,
                     regattacount,
                     "" if regattacount == 1 else 's',
                     racecount,
                     "" if racecount == 1 else 's',
                     legcount))
    else:
        fig.suptitle("%s - %s, %d Race%s, %d Legs\n" % (bn,
                     regattas[regattalist[0]]['regatta'],
                     racecount,
                     "" if racecount == 1 else 's',
                     legcount))

    for p, pd in enumerate(polarData):
        pd['ax'].set_title("%4.1fkts < Wind < %4.1fkts\n%d samples\n " %
                           (pd['min'], pd['max'], len(pd['data'])), va='bottom')
        print("## %f < Wind < %f - %d samples" % (pd['min'], pd['max'], len(pd['data'])))
        ax = pd['ax']
        ax.set_thetalim(thetamin=-180, thetamax=180)
        ax.set_ylim(0, 12)
        ax.set_theta_offset(pi/2.0)
        ax.set_theta_direction(-1)
        theta_ticks = [(float(t)/360.0) * tau for t in [-135, -90, -45, 0, 45, 90, 135, 180]]
        ax.set_xticks(theta_ticks)

        if len(pd['data']) > 0:
            t, s, c, r, leg, time, twd, hdg, twa = zip(*pd['data']) # unzip the data to theta, speed, color w/ magic * operator
            ax.scatter(t, s, color=c, marker='.', alpha=0.05)

            # Sort by theta
            pd['data'].sort(key=lambda x: x[0])
            points = []

            # Compute a simple per-bucket mean, but keep a list of STWs so we can use stats on it
            bucketSTW = []
            bucketTheta = 0
            bucketCount = 0
            bucket = minTWA
            next_bucket = bucket + lineSpan
            for d in pd['data']:
                (theta, stw, color, raceName, leg, timeStamp, twd, hdg, twa) = d
                while twa > next_bucket:
                    if len(bucketSTW) != 0:
                        btheta = bucketTheta/bucketCount
                        bmean = mean(bucketSTW)
                        p90 = np.percentile(np.array(bucketSTW), 90)
                        #print("## Bucket [%d] %6.1f%s (Next %4.2f%s): %d" % (len(points), bucket, deg, next_bucket, deg, bucketCount))
                        points.append((btheta, bmean, p90, degrees(btheta)))
                    bucket = next_bucket
                    next_bucket += lineSpan
                    bucketSTW = []
                    bucketTheta = 0
                    bucketCount = 0
                bucketSTW.append(stw)
                bucketTheta += theta
                bucketCount += 1

            # Last bucket
            if len(bucketSTW) != 0:
                btheta = bucketTheta/bucketCount
                bmean = mean(bucketSTW)
                p90 = np.percentile(np.array(bucketSTW), 90)
                #print("## Bucket [%d] %6.1f%s (Last): %4.1f %3.0f" % (len(points), bucket, deg, theta, degrees(theta)))
                points.append((btheta, bmean, p90, degrees(btheta)))

            # make the polar lines rounder by interpolating missing points
            i = 0
            while i < len(points)-1:
                (t0, s0, p0, b0) = points[i]
                (t1, s1, p1, b1) = points[i+1]
                bprime = b0 + lineSpan
                if (bprime < b1) and ((b0 >= minTWA and b0 <= maxTWA) or (b0 >= 360-maxTWA and b0 <= 360-minTWA)) and ((b1 >= minTWA and b1 <= maxTWA) or (b1 >= 360-maxTWA and b1 <= 360-minTWA)):
                    # Slope of line * degrees of run
                    bmeanprime = s0 + (((s1 - s0) / (b1 - b0)) * (bprime - b0))
                    p90prime = p0 + (((p1 - p0) / (b1 - b0)) * (bprime - b0))
                    points.insert(i+1, (radians(bprime), bmeanprime, p90prime, bprime))
                    #print("## Insert [%d] %6.2f < %6.2f < %6.2f (%4.1f & %4.1f)" % (i, b0, bprime, b1, bmeanprime, p90prime))
                i += 1

            # Look for errors in data order (this is looking for bugs) - the above code was fussy
            for i in range(len(points)-1):
                theta0 = points[i][0]
                bucket0 = points[i][3]
                theta1 = points[i+1][0]
                bucket1 = points[i+1][3]
                if theta0 >= theta1:
                    print("## Backwards [%d]: %4.2f -> %4.2f (%5.2f -> %5.2f) (%3.0f -> %3.0f)" % (i, theta0, theta1, bucket0, bucket1, degrees(theta0), degrees(theta1)))

            if len(points) > 1:
                # First work the starboard side
                for scloseHauled in range(0, len(points)):
                    twa = degrees(points[scloseHauled][0])
                    #print("## Close Hauled Stbd[%d] %f - %4.1f vs %4.1f" % (scloseHauled, points[scloseHauled][0], twa, minTWA))
                    if twa >= minTWA:
                        break

                for sgybing in range(scloseHauled, len(points)):
                    twa = degrees(points[sgybing][0])
                    #print("## Gybing Stbd[%d] %f - %4.1f vs %4.1f" % (sgybing, points[sgybing][0], twa, maxTWA))
                    if twa > maxTWA:
                        break

                # Then work the port side
                for pgybing in range(sgybing, len(points)):
                    twa = degrees(points[pgybing][0]) - 360
                    #print("## Gybing Port[%d] %f - %4.1f vs %4.1f" % (pgybing, points[pgybing][0], twa, -maxTWA))
                    if twa > -maxTWA:
                        break

                for pcloseHauled in range(pgybing, len(points)):
                    twa = degrees(points[pcloseHauled][0]) - 360
                    #print("## Close Hauled Port[%d] %f - %4.1f vs %4.1f" % (pcloseHauled, points[pcloseHauled][0], twa, -minTWA))
                    if twa > -minTWA:
                        break

                if scloseHauled != sgybing:
                    stheta, speed, p90, bucket = zip(*(points[scloseHauled:sgybing]))
                    print("starboard len speed: %d" % (len(speed)))
                    # Too few points for the Savitsky-Golay window? Plot the raw values
                    smooth = speed if len(speed) < 7 else scipy.signal.savgol_filter(speed, 7, 3)
                    ax.plot(stheta, smooth, color='orange', linestyle='-')
                    ssmooth = p90 if len(p90) < 7 else scipy.signal.savgol_filter(p90, 7, 3)
                    ax.plot(stheta, ssmooth, color='blue', linestyle='-')

                if pcloseHauled != pgybing:
                    ptheta, speed, p90, bucket = zip(*(points[pgybing:pcloseHauled]))
                    print("port len speed: %d" % (len(speed)))
                    smooth = speed if len(speed) < 7 else scipy.signal.savgol_filter(speed, 7, 3)
                    ax.plot(ptheta, smooth, color='orange', linestyle='-')
                    psmooth = p90 if len(p90) < 7 else scipy.signal.savgol_filter(p90, 7, 3)
                    ax.plot(ptheta, psmooth, color='blue', linestyle='-')

                    # Save these lines for drawing the combined polar - needs both tacks
                    if scloseHauled != sgybing:
                        pd['p90'] = (ptheta, psmooth, stheta, ssmooth)

    pn = "%s_%s_polars" % (bn, "aggregate" if len(regattalist) > 1 else regattas[regattalist[0]]['basefn'])
    plt.savefig(pn, bbox_inches="tight")

    # One plot with all wind ranges
    fig, ax = plt.subplots(1, 1, figsize=(8, 10), subplot_kw=dict(polar=True), constrained_layout=True)
    if regattacount > 1:
        fig.suptitle("%s - %d Regatta%s, %d Race%s, %d Legs\n" % (bn,
                     regattacount,
                     "" if regattacount == 1 else 's',
                     racecount,
                     "" if racecount == 1 else 's',
                     legcount))
    else:
        fig.suptitle("%s - %s, %d Race%s, %d Legs\n" % (bn,
                     regattas[regattalist[0]]['regatta'],
                     racecount,
                     "" if racecount == 1 else 's',
                     legcount))
    ax.set_thetalim(thetamin=-180, thetamax=180)
    ax.set_ylim(0, 12)
    ax.set_theta_offset(pi/2.0)
    ax.set_theta_direction(-1)
    theta_ticks = [(float(t)/360.0) * tau for t in [-135, -90, -45, 0, 45, 90, 135, 180]]
    ax.set_xticks(theta_ticks)

    for p, pd in enumerate(polarData):
        if 'p90' in pd:
            (ptheta, psmooth, stheta, ssmooth) = pd['p90']
            ax.plot(ptheta, psmooth, color=cmap[p], linewidth=2, linestyle='-', label="%2.0f kts" % (pd['min'] + ((pd['max'] - pd['min']) / 2)))
            ax.plot(stheta, ssmooth, color=cmap[p], linewidth=2, linestyle='-')
    plt.legend(loc='best')
    pn = "%s_%s_combined_polars" % (bn, "aggregate" if len(regattalist) > 1 else regattas[regattalist[0]]['basefn'])
    plt.savefig(pn, bbox_inches="tight")

# Generate text polars file suitable for Expedition
def expedition_polars():
    bn = regattas[regattalist[0]]['boat']
    en = "%s_%s_polars.txt" % (bn, "aggregate" if len(regattalist) > 1 else regattas[regattalist[0]]['basefn'])
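    # Illustrative output (editor's note - the numbers are invented): each boat line
    # written below is the mid-range wind speed followed by a TWA/boatspeed pair
    # every 15 degrees, e.g.
    #   12.0  45.0  6.10  60.0  6.85 ... 180.0  5.20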
"%s_%s_polars.txt" % (bn, "aggregate" if len(regattalist) > 1 else regattas[regattalist[0]]['basefn']) 1549 | with open(en, "w") as f: 1550 | f.write("!Expedition polar - %s\n" % (bn)) 1551 | for p in range(len(polarData)): 1552 | pd = polarData[p] 1553 | f.write("%-4.1f" % ((pd['min'] + pd['max']) / 2.0)) 1554 | points = [] 1555 | for center in range(45, 181, 15): 1556 | c = float(center) 1557 | high = c - 7.5 1558 | low = c + 7.5 1559 | for d in pd['data']: 1560 | (theta, stw, color, r, leg, time, twd, hdg, twa) = d 1561 | twa = abs(twa) 1562 | if twa > high and twa <= low: 1563 | points.append(stw) 1564 | f.write(" %4.1f %5.2f" % (float(center), np.percentile(np.array(points), 90) if len(points) > 0 else 0)) 1565 | f.write("\n") 1566 | 1567 | 1568 | #Boat,Utc,BSP,AWA,AWS,TWA,TWS,TWD,RudderFwd,Leeway,Set,Drift,HDG,AirTemp,SeaTemp,Baro,Depth,Heel,Trim,Rudder,Tab,Forestay,Downhaul,MastAng,FStayLen,MastButt,Load S,Load P,Rake,Volts,ROT,GpQual,PDOP,GpsNum,GpsAge,Altitude,GeoSep,GpsMode,Lat,Lon,COG,SOG,DiffStn,Error,RunnerS,RunnerP,Vang,Trav,Main,KeelAng,KeelHt,Board,Oil P,RPM 1,RPM 2,Board P,Board S,DistToLn,RchTmToLn,RchDtToLn,GPS time,TWD+90,TWD-90,Downhaul2,Mk Lat,Mk Lon,Port lat,Port lon,Stbd lat,Stbd lon,HPE,RH,Lead P,Lead S,BackStay,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,User,TmToGun,TmToLn,Burn,BelowLn,GunBlwLn,WvSigHt,WvSigPd,WvMaxHt,WvMaxPd,Slam,Heave,MWA,MWS,Boom,Twist,TackLossT,TackLossD,TrimRate,HeelRate,DeflectorP,RudderP,RudderS,RudderToe,BspTr,FStayInner,DeflectorS,Bobstay,Outhaul,D0 P,D0 S,D1 P,D1 S,V0 P,V0 S,V1 P,V1 S,BoomAng,Cunningham,FStayInHal,JibFurl,JibH,MastCant,J1,J2,J3,J4,Foil P,Foil S,Reacher,Blade,Staysail,Solent,Tack,TackP,TackS,DeflectU,DeflectL,WinchP,WinchS,SpinP,SpinS,MainH,Mast2 1569 | 1570 | exp_fields = [ 1571 | "Boat","Utc","BSP","AWA","AWS","TWA","TWS","TWD","RudderFwd","Leeway","Set","Drift","HDG","AirTemp","SeaTemp","Baro","Depth","Heel","Trim","Rudder","Tab","Forestay","Downhaul","MastAng","FStayLen","MastButt","Load S","Load P","Rake","Volts","ROT","GpQual","PDOP","GpsNum","GpsAge","Altitude","GeoSep","GpsMode","Lat","Lon","COG","SOG","DiffStn","Error","RunnerS","RunnerP","Vang","Trav","Main","KeelAng","KeelHt","Board","Oil P","RPM 1","RPM 2","Board P","Board S","DistToLn","RchTmToLn","RchDtToLn","GPS time","TWD+90","TWD-90","Downhaul2","Mk Lat","Mk Lon","Port lat","Port lon","Stbd lat","Stbd lon","HPE","RH","Lead P","Lead S","BackStay","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","User","TmToGun","TmToLn","Burn","BelowLn","GunBlwLn","WvSigHt","WvSigPd","WvMaxHt","WvMaxPd","Slam","Heave","MWA","MWS","Boom","Twist","TackLossT","TackLossD","TrimRate","HeelRate","DeflectorP","RudderP","RudderS","RudderToe","BspTr","FStayInner","DeflectorS","Bobstay","Outhaul","D0 P","D0 S","D1 P","D1 S","V0 P","V0 S","V1 P","V1 S","BoomAng","Cunningham","FStayInHal","JibFurl","JibH","MastCant","J1","J2","J3","J4","Foil P","Foil S","Reacher","Blade","Staysail","Solent","Tack","TackP","TackS","DeflectU","DeflectL","WinchP","WinchS","SpinP","SpinS","MainH","Mast2" 1572 | ] 1573 | 1574 | expedition_log_map = { 1575 | "Boat": ("boat", "%d"), 1576 | "Utc": ("ts","%f"), 1577 | "BSP": ("STW","%.2f"), 1578 | "AWA": ("AWA","%d"), 1579 | "AWS": ("AWS","%d"), 1580 | "TWA": ("TWA","%d"), 1581 | 
"TWS": ("TWS","%.2f"), 1582 | "TWD": ("TWD","%.2f"), 1583 | "HDG": ("HDG","%.3f"), 1584 | "Rudder": ("RUD","%.2f"), 1585 | "Lat": ("LAT","%.4f"), 1586 | "Lon": ("LON","%.4f"), 1587 | "COG": ("COG","%.3f"), 1588 | "SOG": ("SOG","%.3f"), 1589 | "Heel": ("Roll","%.2f"), 1590 | "Trim": ("Pitch","%.2f"), 1591 | "ROT": ("ROT","%.2f"), 1592 | "Heave": ("Heave","%.2f"), 1593 | "Mk Lat": ("mark lat", "%6f"), 1594 | "Mk Lon": ("mark lon", "%6f"), 1595 | } 1596 | 1597 | def excel_date(date1): 1598 | temp = datetime.datetime(1899, 12, 30) # Note, not 31st Dec but 30th! 1599 | delta = date1 - temp 1600 | return float(delta.days) + (float(delta.seconds) / 86400) 1601 | 1602 | def expedition_log(regatta, r): 1603 | #print('## Expedition log') 1604 | l = r['legs'] 1605 | c = regatta["courses"][r['course']] 1606 | 1607 | # Go through the race raw data, average into buckets, emit one line of data per bucket 1608 | # Keep track of legs, advance the waypoint at the end of each leg - this puts the burden of tack/gybe on the next leg 1609 | buckets = [] 1610 | 1611 | # Chop the race into buckets 1612 | bucketDelta = datetime.timedelta(seconds=expedition_sample_seconds) 1613 | bucketStart = r['startts'] # is this right? should it be start of leg 0? 1614 | 1615 | index = {} 1616 | # Set each field to the start of the first leg 1617 | for field in raceRawFields: 1618 | index[field] = 0 1619 | 1620 | course = r['course'] 1621 | legs = regatta['courses'][course]['legs'] 1622 | leg = 0 1623 | #print("#Legs %r" % (legs)) 1624 | 1625 | while bucketStart < r['endts']: 1626 | bucket = {} 1627 | bucketEnd = bucketStart + bucketDelta 1628 | bucketEnd = min(bucketEnd, r['endts']) 1629 | bucket['ts'] = bucketStart 1630 | 1631 | # Advance to current leg 1632 | while leg < (len(l)-1) and bucketStart > l[leg]['endts']: 1633 | leg += 1 1634 | #print("#Leg %d %r" % (leg, legs[leg])) 1635 | 1636 | #print("#Leg %d %r" % (leg, r['legs'][leg])) 1637 | #bucket['mark lat'] = regatta['marks'][r['legs'][leg]['mark']]['lat'] 1638 | #bucket['mark lon'] = regatta['marks'][r['legs'][leg]['mark']]['lon'] 1639 | 1640 | mark = legs[leg]['mark'] 1641 | bucket['mark lat'] = regatta['marks'][legs[leg]['mark']]['lat'] 1642 | bucket['mark lon'] = regatta['marks'][legs[leg]['mark']]['lon'] 1643 | 1644 | # Take the first lat/lon of the bucket - would mid bucket be better? 
        field = "LATLON"
        # Advance to the beginning of the run of data for this bucket
        while index[field] < (len(r[field])-1) and r[field][index[field]][0] < bucketStart:
            index[field] += 1
        bucket["LAT"] = r[field][index[field]][1]
        bucket["LON"] = r[field][index[field]][2]

        # ['AWA', 'AWS', 'STW', 'SOG', 'RUD', 'TWS']:
        fields = ['AWA', 'AWS', 'STW', 'SOG']

        if 'RUD' in r and len(r['RUD']) > 0:
            #print("Has rudder angles")
            fields.append('RUD')
        if 'TWS' in r and len(r['TWS']) > 0:
            #print("Has true wind")
            fields.append('TWS')
        if 'Roll' in r and len(r['Roll']) > 0:
            #print("Has heel")
            fields.append('Roll')
        if 'Pitch' in r and len(r['Pitch']) > 0:
            #print("Has trim")
            fields.append('Pitch')
        if 'Heave' in r and len(r['Heave']) > 0:
            #print("Has Heave")
            fields.append('Heave')
        if 'ROT' in r and len(r['ROT']) > 0:
            #print("Has ROT")
            fields.append('ROT')

        for field in fields:
            total = 0.0
            count = 0
            # Advance to the beginning of the run of data for this bucket
            #print("%s ts index[%d] max %d" % (field, index[field], len(r[field])))
            #print("%s ts %r < bucketStart %r" % (field, r[field][index[field]][0], bucketStart))
            while index[field] < (len(r[field])-1) and r[field][index[field]][0] < bucketStart:
                #print("## field %s[%d] %r < %r" % (field, index[field], r[field][index[field]][0], bucketStart))
                index[field] += 1
            # Run through the valid data summing the value
            while index[field] < len(r[field]) and r[field][index[field]][0] < bucketEnd:
                d = r[field][index[field]]
                #if len(d) < 2:
                #    print("Field %s data element too small: %r" % (field, d))
                total += d[1]
                count += 1
                index[field] += 1
            # Take the average for the bucket
            bucket[field] = None if count == 0 else total / count

        markBearing = c["legs"][leg]["bearing"]
        # If we're going north-ish, convert COG and Heading ranges to [-180, 180] in case some samples straddle due north
        # This catches the "averaging samples around due north yields due south" problem - mean([0, 359.999]) = 180
        fields = ['COG', 'HDG']
        if len(r['TWD']) > 0:
            fields.append('TWD')
        for field in fields:
            total = 0.0
            count = 0
            # Advance to the beginning of the run of data for this bucket
            #if index[field] > len(r[field]):
            #    print("## %s[%d]: has %d data points" % (field, index[field], len(r[field])))
            while index[field] < (len(r[field])-1) and r[field][index[field]][0] < bucketStart:
                index[field] += 1
            while index[field] < len(r[field]) and r[field][index[field]][0] < bucketEnd:
                d = r[field][index[field]]
                # If we're going north-ish, convert COG and Heading ranges to [-180, 180] in case some samples straddle due north
                total += d[1] if (markBearing > 90 and markBearing < 270) or (d[1] <= 180) else d[1] - 360.0
                count += 1
                index[field] += 1
            if count == 0:
                bucket[field] = None
            else:
                bucket[field] = total / count
                bucket[field] += 0.0 if (markBearing > 90 and markBearing < 270) or (bucket[field] >= 0) else 360.0 # Convert back to 0 - 360 if needed

        """
        Compute TWS and TWD for this bucket

        AWA = + for Starboard, - for Port
        AWD = HDG + AWA ( 0 < AWD < 360 )
        u = SOG * Cos (COG) - AWS * Cos (AWD)
        v = SOG * Sin (COG) - AWS * Sin (AWD)
        TWS = SQRT ( u*u + v*v )
        TWD = ATAN2 ( v, u ) + 180
        """

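        # Worked example (editor's addition, values invented): with HDG = COG = 90,
        # SOG = 5, AWA = +30 and AWS = 10, then AWD = 120, u = 0 - 10*cos(120) = 5,
        # v = 5 - 10*sin(120) = -3.66, so TWS = sqrt(25 + 13.4) = 6.2kts and
        # TWD = atan2(-3.66, 5) + 180 = 144 - i.e. TWA = 54 on starboard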
        if bucket['HDG'] == None or bucket['COG'] == None or bucket['SOG'] == None or bucket['AWA'] == None or bucket['AWS'] == None:
            # Can't compute True Wind w/o HDG, COG, SOG and AWA, AWS
            bucket['TWD'] = None
            bucket['TWS'] = None
            bucket['TWA'] = None
        else:
            hdg = radians(bucket['HDG'])
            aws = bucket['AWS']
            awa = radians(bucket['AWA'])
            cog = radians(bucket['COG'])
            sog = bucket['SOG']

            # Compute TWD, TWS if it's not done by the instruments
            if 'TWD' not in bucket:
                #print("# synthesizing TWS & TWD")
                awd = fmod(hdg + awa, tau) # Compensate for boat's heading
                u = (sog * cos(cog)) - (aws * cos(awd))
                v = (sog * sin(cog)) - (aws * sin(awd))
                tws = sqrt((u*u) + (v*v))
                # Now we want to know where it's from, not where it's going, so add pi
                twd = degrees(fmod(atan2(v,u)+pi, tau))

                bucket['TWS'] = tws
                bucket['TWD'] = twd

            if (bucket['TWD'] != None) and (bucket['HDG'] != None):
                # Compute TWA from TWD and Heading
                twa = fmod((bucket['TWD'] - bucket['HDG']) + 360.0, 360.0)
                bucket['TWA'] = twa

        bucketStart = bucketEnd
        # Keep the bucket only if it has at least one valid data item
        for key, value in bucket.items():
            if key != 'ts' and value != None:
                buckets.append(bucket)
                break # one valid field is enough; append the bucket just once

    # This is kludgey and maybe wrong, but Expedition only runs on Windows and this is
    # where it looks for log files.
    ofn = "C:/ProgramData/Expedition/log/%s_%s_%s_polarize.csv" % (regatta['boat'], regatta['basefn'], r['race'])
    print('## Expedition log %s records: %d' % (ofn, len(buckets)))
    with open(ofn, "w") as f:
        header_string = ""
        for field in exp_fields:
            if field != "Boat":
                header_string += ","
            header_string += field
        f.write(header_string)
        f.write("\n")

        for b in buckets:
            for col in exp_fields:
                if col in expedition_log_map:
                    bucket_field = expedition_log_map[col][0]
                    if bucket_field == 'boat':
                        f.write('0')
                    elif bucket_field == 'ts':
                        # f.write("%f" % datetime.timestamp(b[bucket_field]))
                        # timestamp = (b['ts'] - datetime.datetime(1970, 1, 1)) / datetime.timedelta(seconds=1)
                        timestamp = excel_date(b['ts'])
                        f.write(",%f" % (timestamp))
                    elif bucket_field not in b or b[bucket_field] == None:
                        f.write(",")
                    else:
                        f.write(","+expedition_log_map[col][1] % (b[bucket_field]))
                else:
                    f.write(",")
            f.write('\n')

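# Editor's sketch (hypothetical - the environment variable name is invented): the
# hard-coded directory above could be made overridable with something like
#   expdir = os.environ.get("POLARIZE_EXPLOG_DIR", "C:/ProgramData/Expedition/log")
# which keeps the Windows default while allowing tests elsewhere (assumes os is imported).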
# Create a gpx track file. Could add waypoints for marks, tacks & gybes, etc. Could annotate w/ sensor data
def gpx_track(regatta, r):
    ofn = "%s_%s_%s.gpx" % (regatta['boat'], regatta['basefn'], r['race'])
    with open(ofn, "w") as f:
        # Standard GPX 1.1 boilerplate
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<gpx version="1.1" creator="polarize"\n')
        f.write('     xmlns="http://www.topografix.com/GPX/1/1"\n')
        f.write('     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"\n')
        f.write('     xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd">\n')
        now = datetime.datetime.now(datetime.timezone.utc)
        f.write('<metadata><time>%s</time></metadata>\n' % (now.strftime("%Y-%m-%dT%H:%M:%SZ"))) # 2019-09-13T21:17:14Z
        f.write('<trk>\n')
        f.write(' <name>%s_%s_%s</name>\n' % (regatta['boat'], regatta['basefn'], r['race']))
        f.write(' <trkseg>\n')

        last_ts = datetime.datetime(year=1970, month=1, day=1, hour=0, minute=0, second=0)
        for p in r['LATLON']:
            if p[0] != last_ts:
                tstring = p[0].strftime("%Y-%m-%dT%H:%M:%SZ")
                f.write('  <trkpt lat="%f" lon="%f"><time>%s</time></trkpt>\n' % (p[1], p[2], tstring))
                last_ts = p[0]

        f.write(' </trkseg>\n')
        f.write('</trk>\n')
        f.write('</gpx>\n')
        # f.write('<wpt lat="%f" lon="%f"><time>%s</time></wpt>\n' % (p[1], p[2], tstring))

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Generate polars from boat data', epilog='Each regatta is usually in its own directory')
    parser.add_argument("-strip", default=False, action='store_true', dest='strip', help='Create per-race strip graph files')
    parser.add_argument("-legs", default=False, action='store_true', dest='leg', help='Create per-leg analysis')
    parser.add_argument("-minute", default=False, action='store_true', dest='leg', help='Create minute-by-minute reports in per-leg analysis') # shares dest='leg' - either flag enables the per-leg pass
    parser.add_argument("-spreadsheet", default=False, action='store_true', dest='spreadsheet', help='Create xlsx spreadsheet file')
    parser.add_argument("-polars", default=False, action='store_true', dest='polars', help='Create aggregate polar graph file')
    parser.add_argument("-exp", default=False, action='store_true', dest='exp', help='Create aggregate Expedition polar text file')
    parser.add_argument("-explog", default=False, action='store_true', dest='explog', help='Create Expedition CSV-format log file')
    parser.add_argument("-gpx", default=False, action='store_true', dest='gpx', help='Create gpx track')
    parser.add_argument('regatta', nargs='*', help='Regatta JSON description files (default regatta.json)')
    args = parser.parse_args()

    if args.regatta == []:
        print("Defaulting to regatta.json")
        args.regatta = ['regatta.json']

    for arg in args.regatta:
        parse_regatta(arg)

    regattalist.sort()
    for rn in regattalist:
        reg = regattas[rn]
        tzoffset = datetime.timedelta(hours=reg['tz']) # global for this regatta

        for race in reg["races"]:
            parse_race(reg, race)
            analyze_race(reg, race)

            for leg in range(len(race["legs"])):
                analyze_leg(reg, race, leg)

            if args.leg:
                per_leg_report(reg, race)

            if args.strip:
                strip_charts(reg, race)

            if args.gpx:
                gpx_track(reg, race)

            if args.explog:
                expedition_log(reg, race)

    if args.polars or args.exp:
        gather_polar_data(regattalist)

    if args.polars:
        plot_polars()

    if args.exp:
        expedition_polars()

    if args.spreadsheet:
        spreadsheet_report()
--------------------------------------------------------------------------------