├── extra
│   ├── Fitbit_dashboard.png
│   ├── Fitbit_Fetch_Autostart.service
│   ├── influxdb_schema.md
│   └── Fitbit_Fetch.ipynb
├── Grafana_Dashboard
│   └── Dashboard.png
├── requirements.txt
├── .dockerignore
├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── feature_request.md
│   │   └── bug_report.md
│   ├── FUNDING.yml
│   └── workflows
│       ├── dev.pull.requests.yml
│       └── prod.push.yml
├── Dockerfile
├── LICENSE
├── compose.yml
├── README.md
└── Fitbit_Fetch.py

/extra/Fitbit_dashboard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arpanghosh8453/fitbit-grafana/HEAD/extra/Fitbit_dashboard.png

--------------------------------------------------------------------------------
/Grafana_Dashboard/Dashboard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/arpanghosh8453/fitbit-grafana/HEAD/Grafana_Dashboard/Dashboard.png

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | influxdb==5.3.1
2 | pytz==2022.1
3 | Requests==2.31.0
4 | schedule==1.2.0
5 | influxdb_client==1.39.0
6 | influxdb3-python==0.12.0

--------------------------------------------------------------------------------
/extra/Fitbit_Fetch_Autostart.service:
--------------------------------------------------------------------------------
1 | [Unit]
2 | Description=Fitbit Fetch Autostart Service
3 | After=network.target
4 | 
5 | [Service]
6 | Type=simple
7 | WorkingDirectory=/home//
8 | User=
9 | # systemd does not interpret shell redirection (">") in ExecStart, so the log
10 | # redirect is expressed with StandardOutput/StandardError instead
11 | ExecStart=/usr/bin/python3 /home//scripts/python_scripts/Fitbit_Fetch.py
12 | StandardOutput=append:/home//fitbit_autostart.log
13 | StandardError=inherit
14 | Restart=on-failure
15 | RestartSec=180
16 | 
17 | [Install]
18 | WantedBy=multi-user.target

--------------------------------------------------------------------------------
/.dockerignore:
--------------------------------------------------------------------------------
1 | **/__pycache__
2 | **/.venv
3 | **/.classpath
4 | **/.dockerignore
5 | **/.env
6 | **/.git
7 | **/.gitignore
8 | **/.project
9 | **/.settings
10 | **/.toolstarget
11 | **/.vs
12 | **/.vscode
13 | **/*.*proj.user
14 | **/*.dbmdl
15 | **/*.jfm
16 | **/bin
17 | **/charts
18 | **/docker-compose*
19 | **/compose*
20 | **/Dockerfile*
21 | **/node_modules
22 | **/npm-debug.log
23 | **/obj
24 | **/secrets.dev.yaml
25 | **/values.dev.yaml
26 | LICENSE
27 | README.md

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: "[FEATURE] Explain your proposed feature here"
5 | labels: enhancement
6 | assignees: arpanghosh8453
7 | 
8 | ---
9 | 
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 | 
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 | 
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 | 
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help the project improve
4 | title: "[BUG] A title explaining your issue"
5 | labels: bug
6 | assignees: arpanghosh8453
7 | 
8 | ---
9 | 
10 | **Describe the bug**
11 | A clear and concise description of what the bug is.
12 | 
13 | **Logs**
14 | The log content that precedes the error. Skip this section if there is no error in the log.
15 | 
16 | **Screenshots**
17 | If applicable, add screenshots to help explain your problem.
18 | 
19 | **Are you using docker?**
20 | - Yes/No
21 | 
22 | **Did you read the README and try to troubleshoot?**
23 | - Explain if you have tried anything additional to resolve the issue.
24 | 
25 | **Additional context**
26 | Add any other context about the problem here.
27 | 
--------------------------------------------------------------------------------
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | # These are supported funding model platforms
2 | 
3 | github: arpanghosh8453
4 | patreon: # Replace with a single Patreon username
5 | open_collective: # Replace with a single Open Collective username
6 | ko_fi: arpandesign
7 | tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
8 | community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
9 | liberapay: # arpandesign
10 | issuehunt: # Replace with a single IssueHunt username
11 | lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
12 | polar: # Replace with a single Polar username
13 | buy_me_a_coffee: arpandesign
14 | thanks_dev: # Replace with a single thanks.dev username
15 | custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
16 | 
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # For more information, please refer to https://aka.ms/vscode-docker-python
2 | FROM python:3.10-slim
3 | 
4 | # Keeps Python from generating .pyc files in the container
5 | ENV PYTHONDONTWRITEBYTECODE=1
6 | 
7 | # Turns off buffering for easier container logging
8 | ENV PYTHONUNBUFFERED=1
9 | 
10 | # Install pip requirements
11 | COPY requirements.txt .
12 | RUN python -m pip install -r requirements.txt
13 | 
14 | WORKDIR /app
15 | COPY ./Fitbit_Fetch.py /app
16 | COPY ./requirements.txt /app
17 | 
18 | RUN groupadd --gid 1000 appuser && useradd --uid 1000 --gid appuser --shell /bin/bash --create-home appuser && chown -R appuser:appuser /app
19 | USER appuser
20 | 
21 | # During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
22 | CMD ["python", "Fitbit_Fetch.py"]
23 | 
--------------------------------------------------------------------------------
/.github/workflows/dev.pull.requests.yml:
--------------------------------------------------------------------------------
1 | name: Build and test docker container for development
2 | 
3 | on:
4 |   pull_request:
5 |     branches:
6 |       - main
7 |     paths:
8 |       - 'Fitbit_Fetch.py'
9 |       - 'compose.yml'
10 |       - 'requirements.txt'
11 |       - 'Dockerfile'
12 |       - '.dockerignore'
13 | 
14 | env:
15 |   REGISTRY: docker.io
16 |   IMAGE_NAME: thisisarpanghosh/fitbit-fetch-data
17 | 
18 | jobs:
19 |   build:
20 |     runs-on: ubuntu-latest
21 |     steps:
22 |       - uses: actions/checkout@v4
23 | 
24 |       - name: Set up QEMU
25 |         uses: docker/setup-qemu-action@v3
26 | 
27 |       - name: Set up Docker Buildx
28 |         uses: docker/setup-buildx-action@v2
29 | 
30 |       - name: Build the Docker image (multi-arch)
31 |         run: docker buildx build --platform linux/amd64 --load -t ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:dev .
32 | 
33 |   test:
34 |     runs-on: ubuntu-latest
35 |     steps:
36 |       - uses: actions/checkout@v4
37 |       - name: Test the Docker image
38 |         run: docker compose up -d
39 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2023, Arpan Ghosh
2 | All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without
5 | modification, are permitted provided that the following conditions are met:
6 |     1. Redistributions of source code must retain the above copyright
7 |        notice, this list of conditions and the following disclaimer.
8 |     2. Redistributions in binary form must reproduce the above copyright
9 |        notice, this list of conditions and the following disclaimer in the
10 |        documentation and/or other materials provided with the distribution.
11 |     3. All advertising materials mentioning features or use of this software
12 |        must display the following acknowledgement:
13 |        This product includes software developed by the .
14 |     4. Neither the name of the Arpan Ghosh nor the
15 |        names of its contributors may be used to endorse or promote products
16 |        derived from this software without specific prior written permission.
17 | 
18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER ''AS IS'' AND ANY
19 | EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
20 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
21 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
22 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
23 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
24 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
25 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
26 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
27 | USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
28 | 
--------------------------------------------------------------------------------
/.github/workflows/prod.push.yml:
--------------------------------------------------------------------------------
1 | name: Build and Push Docker Image to Docker Hub
2 | 
3 | on:
4 |   push:
5 |     branches:
6 |       - main
7 |     paths:
8 |       - 'Fitbit_Fetch.py'
9 |       - 'compose.yml'
10 |       - 'requirements.txt'
11 |       - 'Dockerfile'
12 |       - '.dockerignore'
13 | 
14 | env:
15 |   REGISTRY: docker.io
16 |   IMAGE_NAME: thisisarpanghosh/fitbit-fetch-data
17 | 
18 | jobs:
19 |   build:
20 |     runs-on: ubuntu-latest
21 |     steps:
22 |       - uses: actions/checkout@v4
23 | 
24 |       - name: Set up QEMU
25 |         uses: docker/setup-qemu-action@v3
26 | 
27 |       - name: Set up Docker Buildx
28 |         uses: docker/setup-buildx-action@v2
29 | 
30 |       - name: Build the Docker image (multi-arch)
31 |         run: docker buildx build --platform linux/amd64 --load -t ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest .
32 | 
33 |   test:
34 |     runs-on: ubuntu-latest
35 |     steps:
36 |       - uses: actions/checkout@v4
37 |       - name: Test the Docker image
38 |         run: docker compose up -d
39 | 
40 |   push_to_registry:
41 |     name: Push Docker image to Docker Hub
42 |     runs-on: ubuntu-latest
43 |     steps:
44 |       - name: Check out the repo
45 |         uses: actions/checkout@v4
46 | 
47 |       - name: Set up QEMU
48 |         uses: docker/setup-qemu-action@v3
49 | 
50 |       - name: Set up Docker Buildx
51 |         uses: docker/setup-buildx-action@v2
52 | 
53 |       - name: Log in to Docker Hub
54 |         uses: docker/login-action@v3
55 |         with:
56 |           username: ${{ secrets.DOCKERHUB_USERNAME }}
57 |           password: ${{ secrets.DOCKERHUB_PASSWORD }}
58 | 
59 |       - name: Build and push multi-arch Docker image
60 |         uses: docker/build-push-action@v5
61 |         with:
62 |           context: .
63 |           platforms: linux/amd64,linux/arm64
64 |           push: true
65 |           tags: |
66 |             ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
67 | 
--------------------------------------------------------------------------------
/compose.yml:
--------------------------------------------------------------------------------
1 | # Initial setup: Run the command `docker compose run --rm fitbit-fetch-data` and enter a valid Fitbit refresh token
2 | # Make sure the mapped logs and tokens folders exist and are owned by uid 1000, otherwise you may get permission denied errors.
3 | # Check for successful API calls and then exit out with ctrl + c
4 | # Then run docker compose up -d to launch the full stack
5 | # This compose file creates an open read/write access influxdb database with no authentication. You may enable authentication and grant appropriate read/write access to the `fitbit_user` on the `FitbitHealthStats` database manually if you want, with additional `INFLUXDB_ADMIN_ENABLED`, `INFLUXDB_ADMIN_USER`, and `INFLUXDB_ADMIN_PASSWORD` ENV variables following influxdb 1.8 guidelines.
6 | services:
7 |   fitbit-fetch-data:
8 |     restart: unless-stopped
9 |     image: thisisarpanghosh/fitbit-fetch-data:latest
10 |     container_name: fitbit-fetch-data
11 |     volumes:
12 |       - ./logs:/app/logs # logs folder should exist and be owned by uid 1000 (the container app user)
13 |       - ./tokens:/app/tokens # tokens folder should exist and be owned by uid 1000 (the container app user)
14 |       - /etc/timezone:/etc/timezone:ro
15 |     environment:
16 |       - FITBIT_LOG_FILE_PATH=/app/logs/fitbit.log
17 |       - TOKEN_FILE_PATH=/app/tokens/fitbit.token
18 |       - AUTO_DATE_RANGE=True # Switch between regular update and bulk update mode, read Historical Data Update section in README
19 |       - INFLUXDB_VERSION=1 # supports 1 or 2 or 3
20 |       - INFLUXDB_HOST=influxdb # for influxdb 1.x and 3.x
21 |       - INFLUXDB_PORT=8086 # for influxdb 1.x and 3.x
22 |       # Variables for influxdb 3.x ( additionally you need to change the influxdb container image to 3.x below )
23 |       # - INFLUXDB_V3_ACCESS_TOKEN=your_influxdb_admin_access_token # Required for influxdb V3 (ignored for V1 and V2), Set this to your admin access token (or a token that has database R/W access) - Check README installation notes under point 3 to generate this.
24 |       # Variables for influxdb 2.x ( additionally you need to change the influxdb container image to 2.x below )
25 |       # - INFLUXDB_BUCKET=your_bucket_name_here # for influxdb 2.x
26 |       # - INFLUXDB_ORG=your_org_here # for influxdb 2.x
27 |       # - INFLUXDB_TOKEN=your_token_here # for influxdb 2.x
28 |       # - INFLUXDB_URL=your_influxdb_server_location_with_port_here # for influxdb 2.x
29 |       # Variables for influxdb 1.x
30 |       - INFLUXDB_USERNAME=fitbit_user # for influxdb 1.x
31 |       - INFLUXDB_PASSWORD=fitbit_password # for influxdb 1.x
32 |       - INFLUXDB_DATABASE=FitbitHealthStats # for influxdb 1.x
33 |       # MAKE SURE you set the application type to PERSONAL. Otherwise, you won't have access to intraday data series, resulting in 40X errors.
34 |       - CLIENT_ID=your_application_client_ID # Change this to your client ID
35 |       - CLIENT_SECRET=your_application_client_secret # Change this to your client Secret
36 |       - DEVICENAME=Your_Device_Name # e.g. "Charge5"
37 |       - LOCAL_TIMEZONE=Automatic # set to "Automatic" for automatic setup from the user profile (if not mentioned here specifically)
38 |     depends_on:
39 |       - influxdb
40 | 
41 |   # We are using influxdb 1.11 in this stack (tested and optimized - better dashboard support)
42 |   influxdb:
43 |     restart: unless-stopped
44 |     container_name: influxdb
45 |     hostname: influxdb
46 |     environment:
47 |       - INFLUXDB_DB=FitbitHealthStats
48 |       - INFLUXDB_USER=fitbit_user
49 |       - INFLUXDB_USER_PASSWORD=fitbit_password
50 |       - INFLUXDB_DATA_INDEX_VERSION=tsi1
51 |       ###############################################################################
52 |       # The following ENV variables are applicable for InfluxDB V3 - No effect for V1
53 |       ###############################################################################
54 |       # - INFLUXDB3_MAX_HTTP_REQUEST_SIZE=10485760
55 |       # - INFLUXDB3_NODE_IDENTIFIER_PREFIX=Influxdb-node1
56 |       # - INFLUXDB3_BUCKET=FitbitHealthStats
57 |       # - INFLUXDB3_OBJECT_STORE=file
58 |       # - INFLUXDB3_DB_DIR=/data
59 |       # - INFLUXDB3_QUERY_FILE_LIMIT=5000 # set this to a very high value if you want to view long term data
60 |     ports:
61 |       - '8086:8086' # Influxdb V3 should map as "8181:8181" (Change INFLUXDB_PORT to 8181 on fitbit-fetch-data appropriately for InfluxDB V3)
62 |     volumes:
63 |       - ./influxdb:/var/lib/influxdb # InfluxDB V3 bind mount should be set like - ./influxdb:/data if you set INFLUXDB3_DB_DIR=/data (instead of /var/lib/influxdb) - must be owned by 1500:1500 for influxdb v1.11
64 |     image: 'influxdb:1.11' # You must change this to 'quay.io/influxdb/influxdb3-core:latest' for influxdb V3
65 | 
66 |   grafana:
67 |     restart: unless-stopped
68 |     container_name: grafana
69 |     hostname: grafana
70 |     environment:
71 |       - GF_SECURITY_ADMIN_USER=admin
72 |       - GF_SECURITY_ADMIN_PASSWORD=admin
73 |       - GF_PLUGINS_PREINSTALL=marcusolsson-hourly-heatmap-panel
74 |     volumes:
75 |       - './grafana:/var/lib/grafana' # Must be owned by 472:472
76 |     ports:
77 |       - '3000:3000'
78 |     image: 'grafana/grafana:latest'
79 | 
--------------------------------------------------------------------------------
/extra/influxdb_schema.md:
--------------------------------------------------------------------------------
1 | ### Measurement: `Activity Minutes`
2 | 
3 | | Field Key | Field Type |
4 | | --- | --- |
5 | | `minutesFairlyActive` | integer |
6 | | `minutesLightlyActive` | integer |
7 | | `minutesSedentary` | integer |
8 | | `minutesVeryActive` | integer |
9 | 
10 | | Tag Key | Tag Type |
11 | | --- | --- |
12 | | `Device` | string |
13 | 
14 | ---
15 | 
16 | ### Measurement: `Activity Records`
17 | 
18 | | Field Key | Field Type |
19 | | --- | --- |
20 | | `ActiveDuration` | integer |
21 | | `AverageHeartRate` | integer |
22 | | `calories` | integer |
23 | | `distance` | float |
24 | | `duration` | integer |
25 | | `steps` | integer |
26 | 
27 | | Tag Key | Tag Type |
28 | | --- | --- |
29 | | `ActivityName` | string |
30 | 
31 | ---
32 | 
33 | ### Measurement: `BreathingRate`
34 | 
35 | | Field Key | Field Type |
36 | | --- | --- |
37 | | `value` | float |
38 | 
39 | | Tag Key | Tag Type |
40 | | --- | --- |
41 | | `Device` | string |
42 | 
43 | ---
44 | 
45 | ### Measurement: `DeviceBatteryLevel`
46 | 
47 | | Field Key | Field Type |
48 | | --- | --- |
49 | | `value` | float |
50 | 
51 | ---
52 | 
53 | ### Measurement: `GPS`
54 | 
55 | | Field Key | Field Type |
56 | | --- | --- |
57 | | `altitude` | float |
58 | | `distance` | float |
59 | | `heart_rate` | integer |
60 | | `lat` | float |
61 | | `lon` | float |
62 | | `speed_kph` | float |
63 | 
64 | | Tag Key | Tag Type |
65 | | --- | --- |
66 | | `ActivityID` | string |
67 | 
68 | ---
69 | 
70 | ### Measurement: `HR zones`
71 | 
72 | | Field Key | Field Type |
73 | | --- | --- |
74 | | `Cardio` | integer |
75 | | `Fat Burn` | integer |
76 | | `Normal` | integer |
77 | | `Peak` | integer |
78 | 
79 | | Tag Key | Tag Type |
80 | | --- | --- |
81 | | `Device` | string |
82 | 
83 | ---
84 | 
85 | ### Measurement: `HRV`
86 | 
87 | | Field Key | Field Type |
88 | | --- | --- |
89 | | `dailyRmssd` | float |
90 | | `deepRmssd` | float |
91 | 
92 | | Tag Key | Tag Type |
93 | | --- | --- |
94 | | `Device` | string |
95 | 
96 | ---
97 | 
98 | ### Measurement: `HeartRate_Intraday`
99 | 
100 | | Field Key | Field Type |
101 | | --- | --- |
102 | | `value` | integer |
103 | 
104 | | Tag Key | Tag Type |
105 | | --- | --- |
106 | | `Device` | string |
107 | 
108 | ---
109 | 
110 | ### Measurement: `RestingHR`
111 | 
112 | | Field Key | Field Type |
113 | | --- | --- |
114 | | `value` | integer |
115 | 
116 | | Tag Key | Tag Type |
117 | | --- | --- |
118 | | `Device` | string |
119 | 
120 | ---
121 | 
122 | ### Measurement: `SPO2`
123 | 
124 | | Field Key | Field Type |
125 | | --- | --- |
126 | | `avg` | float |
127 | | `max` | float |
128 | | `min` | float |
129 | 
130 | | Tag Key | Tag Type |
131 | | --- | --- |
132 | | `Device` | string |
133 | 
134 | ---
135 | 
136 | ### Measurement: `SPO2_Intraday`
137 | 
138 | | Field Key | Field Type |
139 | | --- | --- |
140 | | `value` | float |
141 | 
142 | | Tag Key | Tag Type |
143 | | --- | --- |
144 | | `Device` | string |
145 | 
146 | ---
147 | 
148 | ### Measurement: `Skin Temperature Variation`
149 | 
150 | | Field Key | Field Type |
151 | | --- | --- |
152 | | `RelativeValue` | float |
153 | 
154 | | Tag Key | Tag Type |
155 | | --- | --- |
156 | | `Device` | string |
157 | 
158 | ---
159 | 
160 | ### Measurement: `Sleep Levels`
161 | 
162 | | Field Key | Field Type |
163 | | --- | --- |
164 | | `duration_seconds` | integer |
165 | | `level` | integer |
166 | 
167 | | Tag Key | Tag Type |
168 | | --- | --- |
169 | | `Device` | string |
170 | | `isMainSleep` | string |
171 | 
172 | ---
173 | 
174 | ### Measurement: `Sleep Summary`
175 | 
176 | | Field Key | Field Type |
177 | | --- | --- |
178 | | `efficiency` | integer |
179 | | `minutesAfterWakeup` | integer |
180 | | `minutesAsleep` | integer |
181 | | `minutesAwake` | integer |
182 | | `minutesDeep` | integer |
183 | | `minutesInBed` | integer |
184 | | `minutesLight` | integer |
185 | | `minutesREM` | integer |
186 | | `minutesToFallAsleep` | integer |
187 | 
188 | | Tag Key | Tag Type |
189 | | --- | --- |
190 | | `Device` | string |
191 | | `isMainSleep` | string |
192 | 
193 | ---
194 | 
195 | ### Measurement: `Steps_Intraday`
196 | 
197 | | Field Key | Field Type |
198 | | --- | --- |
199 | | `value` | integer |
200 | 
201 | | Tag Key | Tag Type |
202 | | --- | --- |
203 | | `Device` | string |
204 | 
205 | ---
206 | 
207 | ### Measurement: `Total Steps`
208 | 
209 | | Field Key | Field Type |
210 | | --- | --- |
211 | | `value` | float |
212 | 
213 | | Tag Key | Tag Type |
214 | | --- | --- |
215 | | `Device` | string |
216 | 
217 | ---
218 | 
219 | ### Measurement: `bmi`
220 | 
221 | | Field Key | Field Type |
222 | | --- | --- |
223 | | `value` | float |
224 | 
225 | | Tag Key | Tag Type |
226 | | --- | --- |
227 | | `Device` | string |
228 | 
229 | ---
230 | 
231 | ### Measurement: `calories`
232 | 
233 | | Field Key | Field Type |
234 | | --- | --- |
235 | | `value` | float |
236 | 
237 | | Tag Key | Tag Type |
238 | | --- | --- |
239 | | `Device` | string |
240 | 
241 | ---
242 | 
243 | ### Measurement: `distance`
244 | 
245 | | Field Key | Field Type |
246 | | --- | --- |
247 | | `value` | float |
248 | 
249 | | Tag Key | Tag Type |
250 | | --- | --- |
251 | | `Device` | string |
252 | 
253 | ---
254 | 
255 | ### Measurement: `weight`
256 | 
257 | | Field Key | Field Type |
258 | | --- | --- |
259 | | `value` | float |
260 | 
261 | | Tag Key | Tag Type |
262 | | --- | --- |
263 | | `Device` | string |
264 | 
265 | ---
266 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | 
4 | 
5 | # Fitbit Fetch script and Influxdb Grafana integration
6 | 
7 | A script to fetch data from Fitbit servers using their API and store the data in a local influxdb database.
8 | 
9 | ## Dashboard Example
10 | 
11 | ![Dashboard](https://github.com/arpanghosh8453/public-fitbit-projects/blob/main/Grafana_Dashboard/Dashboard.png?raw=true)
12 | 
13 | ## Features
14 | 
15 | - Automatic data collection from Fitbit API
16 | - Support for InfluxDB 1.x and 2.x (limited support for 2.x), plus experimental 3.x support
17 | - Collects comprehensive health metrics including:
18 |   - Heart Rate Data (including intraday)
19 |   - Hourly steps Heatmap
20 |   - Daily Step Count
21 |   - Sleep Data and patterns
22 |   - Sleep regularity heatmap
23 |   - SpO2 Data
24 |   - Breathing Rate
25 |   - HRV
26 |   - Activity Minutes
27 |   - Device Battery Level
28 |   - And more...
29 | - Automated token refresh
30 | - Historical data backfilling
31 | - Rate limit aware data collection
32 | 
33 | ✅ The available Influxdb database measurements and schema are documented [here](extra/influxdb_schema.md)
34 | 
35 | ## Install with Docker (Recommended)
36 | 
37 | 1. Follow this [guide](https://dev.fitbit.com/build/reference/web-api/developer-guide/getting-started/) to create an application. ❗ **The Fitbit `Oauth 2.0 Application Type` selection must be `personal` for intraday data access** ❗ - Otherwise you might encounter `KeyError: 'activities-heart-intraday'` when fetching intraday Heart rate or steps data.
38 | 
39 | ![image](https://github.com/user-attachments/assets/323884a6-8154-477b-811b-6e75b90f53f8)
40 | 
41 | 2. `Default Access Type` should be `Read Only`. For the Privacy Policy and TOS URLs, you can enter any valid URL; those won't be checked or verified as long as they are valid URLs. The `Redirect URL` can be anything that does not redirect to an existing page/service (as the redirected page URL will contain some tokens); I suggest using a dummy `http://localhost:8888` or `http://localhost:8000`. This process will give you a `client ID` and `client secret`; you must then follow the Oauth 2.0 Tutorial link (marked with `2` above) to receive the required `refresh token` for the setup (see step `5`)
42 | 
43 | 3. Create a folder named `fitbit-fetch-data`, cd into the folder, and create a `compose.yml` file with the content of the given compose example below ( change the environment variables accordingly )
44 | 
45 | 4. Create two folders named `logs` and `tokens` inside, and make sure to chown them for uid `1000`, as the docker container runs the scripts as user uid `1000` ( otherwise you may get read/write permission denied errors )
46 | 
47 |    Note: If you are planning to use Influxdb V3, you need to enter the admin access token in `INFLUXDB_V3_ACCESS_TOKEN`. To generate the admin token you should run the `docker exec influxdb influxdb3 create token --admin` command. This will give you the admin token, which you must set in the `INFLUXDB_V3_ACCESS_TOKEN` ENV variable. You can do this only once, and the token can't be viewed or retrieved ever again (influxdb only stores a hash of it in the database for comparison), so please store this token carefully.
48 | 
49 | 5. Set up the initial access and refresh tokens with the command `docker pull thisisarpanghosh/fitbit-fetch-data:latest && docker compose run --rm fitbit-fetch-data`. This will save the initial access and refresh token pair to local storage inside the mapped `tokens` directory. Enter the refresh token you obtained from your Fitbit account and hit enter when prompted. Exit with `ctrl + c` after you see the **successful API requests** in the stdout log; this will automatically remove the orphaned container ( the whole flow up to this point is sketched as shell commands after this list )
50 | 
51 | 6. Finally run `docker compose up -d` ( to launch the full stack in detached mode ). Thereafter, you should check the logs with `docker compose logs --follow` to see any potential errors from the containers. This will help you debug any issue, if there is one ( especially read/write permission issues )
52 | 
53 | 7. Now you can visit `localhost:3000` to reach Grafana, do the initial setup, and add the influxdb as a datasource ( the influxdb address should be `http://influxdb:8086` as they are part of the same network stack, and the username should be `fitbit_user` with the password `fitbit_password` for the database name `FitbitHealthStats` if you are using the default settings from the compose file ). Test the connection to make sure the influxdb is up and reachable ( you are good to go if it finds the measurements when you test the connection )
54 | 
55 | 8. To use the Grafana dashboard, please use the [JSON files](https://github.com/arpanghosh8453/public-fitbit-projects/tree/main/Grafana_Dashboard) downloaded directly from the Grafana_Dashboard folder of the project ( there are separate versions of the dashboard for influxdb v1 and v2 ) or use the import code **23088** ( for influxdb-v1 ) or **23090** ( for influxdb-v2 ) to pull them directly from the Grafana dashboard cloud.
56 | 
57 | 9. In the Grafana dashboard, the heatmap panels require an additional plugin you must install. This can be done by using the `GF_PLUGINS_PREINSTALL=marcusolsson-hourly-heatmap-panel` environment variable like in the [compose.yml](./compose.yml) file, or after the creation of the container with docker commands. Just run `docker exec -it grafana grafana cli plugins install marcusolsson-hourly-heatmap-panel` and then run `docker restart grafana` to apply that plugin update. Now you should be able to see the Heatmap panels on the dashboard loading successfully.
58 | 
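The Docker install flow above ( steps 3 to 6 ) condenses to a few shell commands. A minimal sketch, assuming the folder names, image tag, and uid `1000` ownership described in the steps:

```bash
# Steps 3 and 4: project folder plus the bind-mount folders for the app user (uid 1000)
mkdir -p fitbit-fetch-data && cd fitbit-fetch-data
mkdir -p logs tokens
sudo chown -R 1000:1000 logs tokens

# Write your compose.yml here (see the example below), then do the one-time
# interactive token setup (step 5): paste your refresh token when prompted,
# and exit with ctrl + c once you see successful API requests in the log.
docker pull thisisarpanghosh/fitbit-fetch-data:latest
docker compose run --rm fitbit-fetch-data

# Step 6: launch the full stack in detached mode and watch the logs
docker compose up -d
docker compose logs --follow
```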
59 | ---
60 | 
61 | This project is tested and optimized for InfluxDB 1.11, and using the same version is strongly recommended. Using InfluxDB 2.x may result in a less detailed dashboard, as that dashboard is developed by other contributors and relies solely on Flux queries, which can be problematic to use with Grafana at times. In fact, InfluxQL is being reintroduced in InfluxDB 3.0, reflecting user feedback. Grafana also has better compatibility/stability with InfluxQL from InfluxDB 1.11.
62 | 
63 | Since InfluxDB 2.x offers no clear benefits for this project, there are no plans for a full migration. While support for InfluxDB 2.x exists for this project and has been tested by others, the same visual experience cannot be guaranteed with the Grafana dashboard designed for influxdb 2.x.
64 | 
65 | Example `compose.yml` file contents for influxdb 1.11 are given here for a quick start. If you prefer using influxdb 2.x and accept the limited Grafana dashboard, please refer to the [`compose.yml`](./compose.yml) file and update the `ENV` variables accordingly.
66 | 
67 | Support for the current [Influxdb 3](https://docs.influxdata.com/influxdb3/core/) OSS is also available with this project [ `Experimental` ]
68 | 
69 | > [!IMPORTANT]
70 | > Please note that InfluxDB 3.x OSS limits the query time range to 72 hours. This can be extended by setting `INFLUXDB3_QUERY_FILE_LIMIT` to a very high value, with a potential risk of crashing the container (OOM Error). As we are interested in visualizing long-term data trends, this limit defeats the purpose. Hence, we strongly recommend InfluxDB 1.11.x (default settings) to our users as long as it's not discontinued from production.
71 | 
72 | ```yaml
73 | services:
74 |   fitbit-fetch-data:
75 |     restart: unless-stopped
76 |     image: thisisarpanghosh/fitbit-fetch-data:latest
77 |     container_name: fitbit-fetch-data
78 |     volumes:
79 |       - ./logs:/app/logs
80 |       - ./tokens:/app/tokens
81 |       - /etc/timezone:/etc/timezone:ro
82 |     environment:
83 |       - FITBIT_LOG_FILE_PATH=/app/logs/fitbit.log
84 |       - TOKEN_FILE_PATH=/app/tokens/fitbit.token
85 |       - AUTO_DATE_RANGE=True # Used for bulk update, read Historical Data Update section in README
86 |       - INFLUXDB_VERSION=1
87 |       - INFLUXDB_HOST=influxdb
88 |       - INFLUXDB_PORT=8086
89 |       - INFLUXDB_USERNAME=fitbit_user
90 |       - INFLUXDB_PASSWORD=fitbit_password
91 |       - INFLUXDB_DATABASE=FitbitHealthStats
92 |       - CLIENT_ID=your_application_client_ID # Change this to your client ID
93 |       - CLIENT_SECRET=your_application_client_secret # Change this to your client Secret
94 |       - DEVICENAME=Your_Device_Name # Change this to your device name - e.g. "Charge5" without quotes
95 |       - LOCAL_TIMEZONE=Automatic
96 |     depends_on:
97 |       - influxdb
98 | 
99 | 
100 |   influxdb:
101 |     restart: unless-stopped
102 |     container_name: influxdb
103 |     hostname: influxdb
104 |     environment:
105 |       - INFLUXDB_DB=FitbitHealthStats
106 |       - INFLUXDB_USER=fitbit_user
107 |       - INFLUXDB_USER_PASSWORD=fitbit_password
108 |       - INFLUXDB_DATA_INDEX_VERSION=tsi1
109 |       ###############################################################################
110 |       # The following ENV variables are applicable for InfluxDB V3 - No effect for V1
111 |       ###############################################################################
112 |       # - INFLUXDB3_MAX_HTTP_REQUEST_SIZE=10485760
113 |       # - INFLUXDB3_NODE_IDENTIFIER_PREFIX=Influxdb-node1
114 |       # - INFLUXDB3_BUCKET=FitbitHealthStats
115 |       # - INFLUXDB3_OBJECT_STORE=file
116 |       # - INFLUXDB3_DB_DIR=/data
117 |       # - INFLUXDB3_QUERY_FILE_LIMIT=5000 # set this to a very high value if you want to view long term data
118 |     ports:
119 |       - '8086:8086' # Influxdb V3 should map as "8181:8181" (Change INFLUXDB_PORT to 8181 on fitbit-fetch-data appropriately for InfluxDB V3)
120 |     volumes:
121 |       - ./influxdb:/var/lib/influxdb # InfluxDB V3 bind mount should be set like - ./influxdb:/data if you set INFLUXDB3_DB_DIR=/data (instead of /var/lib/influxdb)
122 |     image: 'influxdb:1.11' # You must change this to 'quay.io/influxdb/influxdb3-core:latest' for influxdb V3
123 | 
124 |   grafana:
125 |     restart: unless-stopped
126 |     container_name: grafana
127 |     hostname: grafana
128 |     environment:
129 |       - GF_SECURITY_ADMIN_USER=admin
130 |       - GF_SECURITY_ADMIN_PASSWORD=admin
131 |       - GF_PLUGINS_PREINSTALL=marcusolsson-hourly-heatmap-panel
132 |     volumes:
133 |       - './grafana:/var/lib/grafana'
134 |     ports:
135 |       - '3000:3000'
136 |     image: 'grafana/grafana:latest'
137 | ```
138 | 
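Once the stack is up, a quick sanity check confirms that InfluxDB is reachable and that the fetch script is writing points. A minimal sketch, assuming the default container names, port, and database name from the example above:

```bash
# InfluxDB 1.x health endpoint: an empty HTTP 204 response means the database is up
curl -sI http://localhost:8086/ping

# List the measurements written so far (empty until the first fetch completes)
docker exec influxdb influx -database FitbitHealthStats -execute 'SHOW MEASUREMENTS'

# Watch the fetch container for successful API requests or permission errors
docker compose logs --follow fitbit-fetch-data
```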
139 | ✅ The above compose file creates an open read/write access influxdb database with no authentication. Unless you expose this database to the open internet directly, this poses no threat. You may enable authentication and grant appropriate read/write access to the `fitbit_user` on the `FitbitHealthStats` database manually if you want, with the `INFLUXDB_ADMIN_ENABLED`, `INFLUXDB_ADMIN_USER`, and `INFLUXDB_ADMIN_PASSWORD` ENV variables, following the [influxdb guide](https://github.com/docker-library/docs/blob/master/influxdb/README.md), but this won't be covered here for the sake of simplicity.
140 | 
141 | ## Historical Data Update
142 | 
143 | #### Background
144 | 
145 | The primary purpose of this script is to visualize long-term data, and if you have just discovered it, you would otherwise need to wait a long time to achieve this through the automatic daily data fetch process. But fear not! This script was written with that fact in mind. As you may know, **Fitbit rate-limits the API calls from each user, allowing only 150 API calls per hour**, and the quota resets every hour. Some API endpoints allow fetching long-term data covering months or years, while most **intraday data is limited to 24 hours per API call**. This means that if you need to fetch HR and steps data for 5 days, there is no way around making 5x2=10 API calls to their servers. Now imagine this at scale: multiple measurements over years of data. I faced this exact problem, and it took me a long time to work out that the most efficient way to fetch bulk historical data is to group the requests into categories based on their period limits and to implement robust handling of the `429 Error` ( 'too many requests within an hour' ).
146 | 
147 | In bulk update mode, this script fills in the less rate-limited data first and the intraday data last, so you can watch the data fill up in Grafana in real time as the script progresses. After it exhausts its available 150 calls for the hour, it goes dormant for the remainder of that hour and automatically resumes fetching as soon as the wait time is up ( so you can just leave it running and let it work ). To give you a timeline, **it took a little more than 24 hours to fetch my 2 years of historical data from their servers**.
148 | 
149 | #### Procedure
150 | 
151 | The process is quite simple: you need to add an ENV variable and rerun the container in interactive mode. Here is a step-by-step guide.
152 | 
153 | - Stop the running container and remove it with `docker compose down` if it is already running
154 | - In the docker compose file, add a new ENV variable `AUTO_DATE_RANGE=False` under the `environment` section along with the other variables. This variable switches the mode to bulk update instead of the regular daily update
155 | - Assuming you are already in the directory where the `compose.yml` file is, run `docker compose run --rm fitbit-fetch-data` - this runs the container in _"remove container automatically after finish"_ mode, which is useful for a one-time run like this. It also attaches the container to the shell in interactive mode, so don't close the shell until the bulk update is complete.
156 | - After initialization, you will be requested to input the start and end dates in YYYY-MM-DD format. The format is very important, so please enter the dates like this: `2024-03-13`. The start date must be earlier than the end date. The script should work for any given range, but if you encounter an error during a bulk update with a large date range, please break the date range into one-year chunks ( maybe a few days less than one year, just to be safe ) and run it for each one-year chunk, one after another. I personally did not encounter any issues with longer date ranges, but this is just a heads-up.
157 | - You will see the update logs in the attached shell. Please wait until it shows `Bulk Update Complete` and exits. It might take a long time, depending on the given date range and the 150-API-calls-per-hour limit.
158 | - You are done with the bulk update at this point. Remove the ENV variable from the compose file or change it to `AUTO_DATE_RANGE=True`, save the compose file, and run `docker compose up -d` to resume the daily update.
159 | 
160 | #### Non-interactive Procedure
161 | 
162 | You can run the bulk update in non-interactive mode by setting these additional environment variables.
163 | 
164 | - `MANUAL_START_DATE` optional, in YYYY-MM-DD format, if you want to bulk update only from a specific date
165 | - `MANUAL_END_DATE` optional, in YYYY-MM-DD format, if you want to bulk update only until a specific date
166 | 
167 | ## Backup Database
168 | 
169 | Whether you are using a bind mount or a docker volume, creating a restorable archival backup of your valuable health data is always advised. Assuming you named your database `FitbitHealthStats` and your influxdb container `influxdb`, you can use the following script to create a static archival backup of the data present in the influxdb database at that point in time. These restore points can be used to re-create the influxdb database with the archived data without requesting it from Fitbit's servers again, which is not only time consuming but also resource intensive.
170 | 
171 | ```bash
172 | #!/bin/bash
173 | TIMESTAMP=$(date +%F_%H-%M)
174 | BACKUP_DIR="./influxdb_backups/$TIMESTAMP"
175 | mkdir -p "$BACKUP_DIR"
176 | docker exec influxdb influxd backup -portable -db FitbitHealthStats /tmp/influxdb_backup
177 | docker cp influxdb:/tmp/influxdb_backup "$BACKUP_DIR"
178 | docker exec influxdb rm -r /tmp/influxdb_backup
179 | ```
180 | 
181 | The above bash script creates a folder named `influxdb_backups` inside your current working directory and a subfolder under it named with the current date-time. It then creates the backup for the `FitbitHealthStats` database and copies the backup files to that location.
182 | 
183 | For restoring the data from a backup, you first need to make the files available inside the new influxdb docker container. You can use `docker cp` or a volume bind mount for this. Once the backup data is available to the container internally, you can simply run `docker exec influxdb influxd restore -portable -db FitbitHealthStats /path/to/internal-backup-directory` to restore the backup ( the whole flow is sketched below ).
184 | 
185 | Please read the detailed guide on this in the [influxDB documentation for backup and restore](https://docs.influxdata.com/influxdb/v1/administration/backup_and_restore/)
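A minimal restore sketch, mirroring the backup script above; the `BACKUP_SRC` path is a placeholder you must adjust to one of your own snapshot folders:

```bash
#!/bin/bash
# Path to one archived snapshot created by the backup script (adjust this)
BACKUP_SRC="./influxdb_backups/2024-01-01_00-00/influxdb_backup"

# Make the backup files visible inside the container, then restore.
# Note: influxd restore refuses to overwrite an existing database, so run this
# against a fresh instance (or restore under a new name with the -newdb flag).
docker cp "$BACKUP_SRC" influxdb:/tmp/influxdb_backup
docker exec influxdb influxd restore -portable -db FitbitHealthStats /tmp/influxdb_backup
docker exec influxdb rm -r /tmp/influxdb_backup
```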
186 | 
187 | ## Direct Install method (For developers)
188 | 
189 | Set up influxdb 1.8 ( direct install or via [docker](https://github.com/arpanghosh8453/public-docker-config#influxdb) ). Create a user with a password and an empty database.
190 | 
191 | Set up a recent Grafana release ( direct install or via [docker](https://github.com/arpanghosh8453/public-docker-config#grafana) )
192 | 
193 | Use the `requirements.txt` file to install the required packages using pip
194 | 
195 | Follow this [guide](https://dev.fitbit.com/build/reference/web-api/developer-guide/getting-started/) to create an application. This will give you a client ID, client secret, and a refresh token.
196 | 
197 | ❗ **The Fitbit application must be of the `personal` type for access to intraday data series** ❗ - Otherwise you might encounter a `KeyError: 'activities-heart-intraday'` error.
198 | 
199 | Update the following variables in the python script ( use the influxdb-v2 specific variables for an influxdb-v2 instance )
200 | 
201 | - FITBIT_LOG_FILE_PATH = "your/expected/log/file/location/path"
202 | - TOKEN_FILE_PATH = "your/expected/token/file/location/path"
203 | - INFLUXDB_USERNAME = 'your_influxdb_username'
204 | - INFLUXDB_PASSWORD = 'your_influxdb_password'
205 | - INFLUXDB_DATABASE = 'your_influxdb_database_name'
206 | - client_id = "your_application_client_ID"
207 | - client_secret = "your_application_client_secret"
208 | - DEVICENAME = "Your_Device_Name" # example - "Charge5"
209 | - LOCAL_TIMEZONE=Automatic # set to "Automatic" for automatic setup from the user profile (if not mentioned here specifically).
210 | 
211 | Run the script; it will request a refresh token as input on the first run to set up the token file. You can check the logs to see the work in progress. The script, by default, keeps running forever, calling different functions at scheduled intervals.
212 | 
213 | Finally, add the influxdb database as a data source in Grafana, and use the [JSON files](https://github.com/arpanghosh8453/public-fitbit-projects/tree/main/Grafana_Dashboard) from the Grafana_Dashboard folder to replicate the dashboard quickly.
214 | 
215 | You can use the [Fitbit_Fetch_Autostart.service](https://github.com/arpanghosh8453/public-fitbit-projects/blob/main/extra/Fitbit_Fetch_Autostart.service) template to set up an auto-starting ( and auto-restarting in case of temporary failure ) service on Linux-based systems ( or WSL ); the installation commands are sketched below.
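A minimal sketch of installing the unit on a systemd host, assuming you have already filled in the placeholder username and paths inside the template; the unit name `fitbit-fetch` is an arbitrary choice:

```bash
# Copy the edited template into place and enable it at boot
sudo cp extra/Fitbit_Fetch_Autostart.service /etc/systemd/system/fitbit-fetch.service
sudo systemctl daemon-reload
sudo systemctl enable --now fitbit-fetch.service

# Verify it is running and inspect its output
systemctl status fitbit-fetch.service
journalctl -u fitbit-fetch.service --follow
```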
216 | 
217 | ## Troubleshooting
218 | 
219 | - If you are getting `KeyError: 'activities-heart-intraday'`, please double check that your Fitbit Oauth application is set as the `personal` type before you open an issue
220 | 
221 | - If you are missing GPS data, but you know you have some within the selected time range in grafana, check whether the GPS Activity variable is properly set. You should have a dropdown there. If you do not see any values, please go to the dashboard settings and check whether the GPS variable's datasource is properly set.
222 | 
223 | - In some cases, for the `grafana` container, you may need to chown the corresponding mounted folders as *472*:*472* if you are having read/write errors inside the grafana container. The logs will inform you if this happens. The `influxdb:1.11` container requires the folder to be owned by `1500:1500`
224 | 
225 | 
226 | ## Own a Garmin Device?
227 | 
228 | If you are a **Garmin user**, please check out the [sister project](https://github.com/arpanghosh8453/garmin-grafana) made for Garmin
229 | 
230 | ## Deploy with Homeassistant integration
231 | 
232 | User [@Jasonthefirst](https://github.com/Jasonthefirst) has developed a plugin ( issue [#24](https://github.com/arpanghosh8453/public-fitbit-projects/issues/24) ) based on the python script which can be used to deploy the setup without docker. Please refer to [fitbit-ha-addon](https://gitlab.fristerspace.de/demian/fitbit-ha-addon) for the setup.
233 | 
234 | ## Support me
235 | 
236 | If you enjoy the script and love how it works with a simple setup, please consider supporting me with a coffee ❤. With this setup, you can view more detailed health statistics than by paying a subscription fee to Fitbit, thanks to their free REST API services.
237 | 
238 | [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/A0A84F3DP)
239 | 
240 | ## Star History
241 | 
242 | [![Star History Chart](https://api.star-history.com/svg?repos=arpanghosh8453/public-fitbit-projects&type=Date)](https://www.star-history.com/#arpanghosh8453/public-fitbit-projects&Date)
243 | 
--------------------------------------------------------------------------------
/extra/Fitbit_Fetch.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "code",
5 |    "execution_count": 2,
6 |    "metadata": {},
7 |    "outputs": [],
8 |    "source": [
9 |     "import base64, requests, schedule, time, json, pytz, logging\n",
10 |     "from requests.exceptions import ConnectionError\n",
11 |     "from datetime import datetime, timedelta\n",
12 |     "from influxdb import InfluxDBClient\n",
13 |     "from influxdb.exceptions import InfluxDBClientError"
14 |    ]
15 |   },
16 |   {
17 |    "cell_type": "markdown",
18 |    "metadata": {},
19 |    "source": [
20 |     "## Variables"
21 |    ]
22 |   },
23 |   {
24 |    "cell_type": "code",
25 |    "execution_count": 3,
26 |    "metadata": {},
27 |    "outputs": [],
28 |    "source": [
29 |     "FITBIT_LOG_FILE_PATH = \"your/expected/log/file/location/path\"\n",
30 |     "TOKEN_FILE_PATH = \"your/expected/token/file/location/path\"\n",
31 |     "OVERWRITE_LOG_FILE = True\n",
32 |     "FITBIT_LANGUAGE = 'en_US'\n",
33 |     "INFLUXDB_HOST = 'localhost'\n",
34 |     "INFLUXDB_PORT = 8086\n",
35 |     "INFLUXDB_USERNAME = 'your_influxdb_username'\n",
36 |     "INFLUXDB_PASSWORD = 'your_influxdb_password'\n",
37 |     "INFLUXDB_DATABASE = 'your_influxdb_database_name'\n",
38 |     "# MAKE SURE you set the application type to PERSONAL. Otherwise, you won't have access to intraday data series, resulting in 40X errors.\n",
39 |     "client_id = \"your_application_client_ID\" # Change this to your client ID\n",
40 |     "client_secret = \"your_application_client_secret\" # Change this to your client Secret\n",
41 |     "DEVICENAME = \"Your_Device_Name\" # e.g. \"Charge5\"\n",
42 |     "ACCESS_TOKEN = \"\" # Empty Global variable initialization, will be replaced with a functional access code later using the refresh code\n",
43 |     "AUTO_DATE_RANGE = True # Automatically selects date range from today's date and update_date_range variable\n",
44 |     "auto_update_date_range = 1 # Days to go back from today for AUTO_DATE_RANGE *** DO NOT go above 2 - otherwise may break rate limit ***\n",
45 |     "LOCAL_TIMEZONE = \"Automatic\" # set to \"Automatic\" for Automatic setup from User profile (if not mentioned here specifically). 
\n", 46 | "SCHEDULE_AUTO_UPDATE = True if AUTO_DATE_RANGE else False # Scheduling updates of data when script runs\n", 47 | "SERVER_ERROR_MAX_RETRY = 3\n", 48 | "EXPIRED_TOKEN_MAX_RETRY = 5\n", 49 | "SKIP_REQUEST_ON_SERVER_ERROR = True" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "## Logging setup" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": 4, 62 | "metadata": {}, 63 | "outputs": [], 64 | "source": [ 65 | "if OVERWRITE_LOG_FILE:\n", 66 | " with open(FITBIT_LOG_FILE_PATH, \"w\"): pass\n", 67 | "\n", 68 | "logging.basicConfig(\n", 69 | " level=logging.DEBUG,\n", 70 | " format=\"%(asctime)s - %(levelname)s - %(message)s\",\n", 71 | " filename=FITBIT_LOG_FILE_PATH,\n", 72 | " filemode=\"a\"\n", 73 | ")" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "## Setting up base API Caller function" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 5, 86 | "metadata": {}, 87 | "outputs": [], 88 | "source": [ 89 | "# Generic Request caller for all \n", 90 | "def request_data_from_fitbit(url, headers={}, params={}, data={}, request_type=\"get\"):\n", 91 | " global ACCESS_TOKEN\n", 92 | " retry_attempts = 0\n", 93 | " logging.debug(\"Requesting data from fitbit via Url : \" + url)\n", 94 | " while True: # Unlimited Retry attempts\n", 95 | " if request_type == \"get\":\n", 96 | " headers = {\n", 97 | " \"Authorization\": f\"Bearer {ACCESS_TOKEN}\",\n", 98 | " \"Accept\": \"application/json\",\n", 99 | " 'Accept-Language': FITBIT_LANGUAGE\n", 100 | " }\n", 101 | " try: \n", 102 | " if request_type == \"get\":\n", 103 | " response = requests.get(url, headers=headers, params=params, data=data)\n", 104 | " elif request_type == \"post\":\n", 105 | " response = requests.post(url, headers=headers, params=params, data=data)\n", 106 | " else:\n", 107 | " raise Exception(\"Invalid request type \" + str(request_type))\n", 108 | " \n", 109 | " if response.status_code == 200: # Success\n", 110 | " return response.json()\n", 111 | " elif response.status_code == 429: # API Limit reached\n", 112 | " retry_after = int(response.headers[\"Retry-After\"]) + 300\n", 113 | " logging.warning(\"Fitbit API limit reached. Error code : \" + str(response.status_code) + \", Retrying in \" + str(retry_after) + \" seconds\")\n", 114 | " print(\"Fitbit API limit reached. Error code : \" + str(response.status_code) + \", Retrying in \" + str(retry_after) + \" seconds\")\n", 115 | " time.sleep(retry_after)\n", 116 | " elif response.status_code == 401: # Access token expired ( most likely )\n", 117 | " logging.info(\"Current Access Token : \" + ACCESS_TOKEN)\n", 118 | " logging.warning(\"Error code : \" + str(response.status_code) + \", Details : \" + response.text)\n", 119 | " print(\"Error code : \" + str(response.status_code) + \", Details : \" + response.text)\n", 120 | " ACCESS_TOKEN = Get_New_Access_Token(client_id, client_secret)\n", 121 | " logging.info(\"New Access Token : \" + ACCESS_TOKEN)\n", 122 | " time.sleep(30)\n", 123 | " if retry_attempts > EXPIRED_TOKEN_MAX_RETRY:\n", 124 | " logging.error(\"Unable to solve the 401 Error. Please debug - \" + response.text)\n", 125 | " raise Exception(\"Unable to solve the 401 Error. 
Please debug - \" + response.text)\n", 126 | " elif response.status_code in [500, 502, 503, 504]: # Fitbit server is down or not responding ( most likely ):\n", 127 | " logging.warning(\"Server Error encountered ( Code 5xx ): Retrying after 120 seconds....\")\n", 128 | " time.sleep(120)\n", 129 | " if retry_attempts > SERVER_ERROR_MAX_RETRY:\n", 130 | " logging.error(\"Unable to solve the server Error. Retry limit exceed. Please debug - \" + response.text)\n", 131 | " if SKIP_REQUEST_ON_SERVER_ERROR:\n", 132 | " logging.warning(\"Retry limit reached for server error : Skipping request -> \" + url)\n", 133 | " return None\n", 134 | " else:\n", 135 | " logging.error(\"Fitbit API request failed. Status code: \" + str(response.status_code) + \" \" + str(response.text) )\n", 136 | " print(f\"Fitbit API request failed. Status code: {response.status_code}\", response.text)\n", 137 | " response.raise_for_status()\n", 138 | " return None\n", 139 | "\n", 140 | " except ConnectionError as e:\n", 141 | " logging.error(\"Retrying in 5 minutes - Failed to connect to internet : \" + str(e))\n", 142 | " print(\"Retrying in 5 minutes - Failed to connect to internet : \" + str(e))\n", 143 | " retry_attempts += 1\n", 144 | " time.sleep(30)" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "## Token Refresh Management" 152 | ] 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": 6, 157 | "metadata": {}, 158 | "outputs": [], 159 | "source": [ 160 | "def refresh_fitbit_tokens(client_id, client_secret, refresh_token):\n", 161 | " logging.info(\"Attempting to refresh tokens...\")\n", 162 | " url = \"https://api.fitbit.com/oauth2/token\"\n", 163 | " headers = {\n", 164 | " \"Authorization\": \"Basic \" + base64.b64encode((client_id + \":\" + client_secret).encode()).decode(),\n", 165 | " \"Content-Type\": \"application/x-www-form-urlencoded\"\n", 166 | " }\n", 167 | " data = {\n", 168 | " \"grant_type\": \"refresh_token\",\n", 169 | " \"refresh_token\": refresh_token\n", 170 | " }\n", 171 | " json_data = request_data_from_fitbit(url, headers=headers, data=data, request_type=\"post\")\n", 172 | " access_token = json_data[\"access_token\"]\n", 173 | " new_refresh_token = json_data[\"refresh_token\"]\n", 174 | " tokens = {\n", 175 | " \"access_token\": access_token,\n", 176 | " \"refresh_token\": new_refresh_token\n", 177 | " }\n", 178 | " with open(TOKEN_FILE_PATH, \"w\") as file:\n", 179 | " json.dump(tokens, file)\n", 180 | " logging.info(\"Fitbit token refresh successful!\")\n", 181 | " return access_token, new_refresh_token\n", 182 | "\n", 183 | "def load_tokens_from_file():\n", 184 | " with open(TOKEN_FILE_PATH, \"r\") as file:\n", 185 | " tokens = json.load(file)\n", 186 | " return tokens.get(\"access_token\"), tokens.get(\"refresh_token\")\n", 187 | "\n", 188 | "def Get_New_Access_Token(client_id, client_secret):\n", 189 | " try:\n", 190 | " access_token, refresh_token = load_tokens_from_file()\n", 191 | " except FileNotFoundError:\n", 192 | " refresh_token = input(\"No token file found. 
Please enter a valid refresh token : \")\n", 193 | " access_token, refresh_token = refresh_fitbit_tokens(client_id, client_secret, refresh_token)\n", 194 | " return access_token\n", 195 | "\n", 196 | "ACCESS_TOKEN = Get_New_Access_Token(client_id, client_secret)" 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "## Influxdb Database Initialization" 204 | ] 205 | }, 206 | { 207 | "cell_type": "code", 208 | "execution_count": 7, 209 | "metadata": {}, 210 | "outputs": [], 211 | "source": [ 212 | "try:\n", 213 | " influxdbclient = InfluxDBClient(host=INFLUXDB_HOST, port=INFLUXDB_PORT, username=INFLUXDB_USERNAME, password=INFLUXDB_PASSWORD)\n", 214 | " influxdbclient.switch_database(INFLUXDB_DATABASE)\n", 215 | "except InfluxDBClientError as err:\n", 216 | " logging.error(\"Unable to connect with influxdb database! Aborted\")\n", 217 | " raise InfluxDBClientError(\"InfluxDB connection failed:\" + str(err))\n", 218 | "\n", 219 | "def write_points_to_influxdb(points):\n", 220 | " try:\n", 221 | " influxdbclient.write_points(points)\n", 222 | " logging.info(\"Successfully updated influxdb database with new points\")\n", 223 | " except InfluxDBClientError as err:\n", 224 | " logging.error(\"Unable to connect with influxdb database! \" + str(err))\n", 225 | " print(\"Influxdb connection failed! \", str(err))" 226 | ] 227 | }, 228 | { 229 | "cell_type": "markdown", 230 | "metadata": {}, 231 | "source": [ 232 | "## Selecting Dates for update" 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": 8, 238 | "metadata": {}, 239 | "outputs": [], 240 | "source": [ 241 | "if AUTO_DATE_RANGE:\n", 242 | " end_date = datetime.now()\n", 243 | " start_date = end_date - timedelta(days=auto_update_date_range)\n", 244 | " end_date_str = end_date.strftime(\"%Y-%m-%d\")\n", 245 | " start_date_str = start_date.strftime(\"%Y-%m-%d\")\n", 246 | "else:\n", 247 | " start_date_str = input(\"Enter start date in YYYY-MM-DD format : \")\n", 248 | " end_date_str = input(\"Enter end date in YYYY-MM-DD format : \")\n", 249 | " start_date = datetime.strptime(start_date_str, \"%Y-%m-%d\")\n", 250 | " end_date = datetime.strptime(end_date_str, \"%Y-%m-%d\")" 251 | ] 252 | }, 253 | { 254 | "cell_type": "markdown", 255 | "metadata": {}, 256 | "source": [ 257 | "## Setting up functions for Requesting data from server" 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": 9, 263 | "metadata": {}, 264 | "outputs": [], 265 | "source": [ 266 | "collected_records = []\n", 267 | "\n", 268 | "def update_working_dates():\n", 269 | " global end_date, start_date, end_date_str, start_date_str\n", 270 | " end_date = datetime.now()\n", 271 | " start_date = end_date - timedelta(days=auto_update_date_range)\n", 272 | " end_date_str = end_date.strftime(\"%Y-%m-%d\")\n", 273 | " start_date_str = start_date.strftime(\"%Y-%m-%d\")\n", 274 | "\n", 275 | "# Get last synced battery level of the device\n", 276 | "def get_battery_level():\n", 277 | " device = request_data_from_fitbit(\"https://api.fitbit.com/1/user/-/devices.json\")[0]\n", 278 | " if device != None:\n", 279 | " collected_records.append({\n", 280 | " \"measurement\": \"DeviceBatteryLevel\",\n", 281 | " \"time\": LOCAL_TIMEZONE.localize(datetime.fromisoformat(device['lastSyncTime'])).astimezone(pytz.utc).isoformat(),\n", 282 | " \"fields\": {\n", 283 | " \"value\": float(device['batteryLevel'])\n", 284 | " }\n", 285 | " })\n", 286 | " logging.info(\"Recorded battery level for \" + DEVICENAME)\n", 
287 | " else:\n", 288 | " logging.error(\"Recording battery level failed : \" + DEVICENAME)\n", 289 | "\n", 290 | "# For intraday detailed data, max possible range in one day. \n", 291 | "def get_intraday_data_limit_1d(date_str, measurement_list):\n", 292 | " for measurement in measurement_list:\n", 293 | " data = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/' + measurement[0] + '/date/' + date_str + '/1d/' + measurement[2] + '.json')[\"activities-\" + measurement[0] + \"-intraday\"]['dataset']\n", 294 | " if data != None:\n", 295 | " for value in data:\n", 296 | " log_time = datetime.fromisoformat(date_str + \"T\" + value['time'])\n", 297 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 298 | " collected_records.append({\n", 299 | " \"measurement\": measurement[1],\n", 300 | " \"time\": utc_time,\n", 301 | " \"tags\": {\n", 302 | " \"Device\": DEVICENAME\n", 303 | " },\n", 304 | " \"fields\": {\n", 305 | " \"value\": int(value['value'])\n", 306 | " }\n", 307 | " })\n", 308 | " logging.info(\"Recorded \" + measurement[1] + \" intraday for date \" + date_str)\n", 309 | " else:\n", 310 | " logging.error(\"Recording failed : \" + measurement[1] + \" intraday for date \" + date_str)\n", 311 | "\n", 312 | "# Max range is 30 days, records BR, SPO2 Intraday, skin temp and HRV - 4 queries\n", 313 | "def get_daily_data_limit_30d(start_date_str, end_date_str):\n", 314 | "\n", 315 | " hrv_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/hrv/date/' + start_date_str + '/' + end_date_str + '.json')['hrv']\n", 316 | " if hrv_data_list != None:\n", 317 | " for data in hrv_data_list:\n", 318 | " log_time = datetime.fromisoformat(data[\"dateTime\"] + \"T\" + \"00:00:00\")\n", 319 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 320 | " collected_records.append({\n", 321 | " \"measurement\": \"HRV\",\n", 322 | " \"time\": utc_time,\n", 323 | " \"tags\": {\n", 324 | " \"Device\": DEVICENAME\n", 325 | " },\n", 326 | " \"fields\": {\n", 327 | " \"dailyRmssd\": data[\"value\"][\"dailyRmssd\"],\n", 328 | " \"deepRmssd\": data[\"value\"][\"deepRmssd\"]\n", 329 | " }\n", 330 | " })\n", 331 | " logging.info(\"Recorded HRV for date \" + start_date_str + \" to \" + end_date_str)\n", 332 | " else:\n", 333 | " logging.error(\"Recording failed HRV for date \" + start_date_str + \" to \" + end_date_str)\n", 334 | "\n", 335 | " br_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/br/date/' + start_date_str + '/' + end_date_str + '.json')[\"br\"]\n", 336 | " if br_data_list != None:\n", 337 | " for data in br_data_list:\n", 338 | " log_time = datetime.fromisoformat(data[\"dateTime\"] + \"T\" + \"00:00:00\")\n", 339 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 340 | " collected_records.append({\n", 341 | " \"measurement\": \"BreathingRate\",\n", 342 | " \"time\": utc_time,\n", 343 | " \"tags\": {\n", 344 | " \"Device\": DEVICENAME\n", 345 | " },\n", 346 | " \"fields\": {\n", 347 | " \"value\": data[\"value\"][\"breathingRate\"]\n", 348 | " }\n", 349 | " })\n", 350 | " logging.info(\"Recorded BR for date \" + start_date_str + \" to \" + end_date_str)\n", 351 | " else:\n", 352 | " logging.error(\"Recording failed : BR for date \" + start_date_str + \" to \" + end_date_str)\n", 353 | "\n", 354 | " skin_temp_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/temp/skin/date/' + start_date_str + '/' + end_date_str + 
'.json')[\"tempSkin\"]\n", 355 | " if skin_temp_data_list != None:\n", 356 | " for temp_record in skin_temp_data_list:\n", 357 | " log_time = datetime.fromisoformat(temp_record[\"dateTime\"] + \"T\" + \"00:00:00\")\n", 358 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 359 | " collected_records.append({\n", 360 | " \"measurement\": \"Skin Temperature Variation\",\n", 361 | " \"time\": utc_time,\n", 362 | " \"tags\": {\n", 363 | " \"Device\": DEVICENAME\n", 364 | " },\n", 365 | " \"fields\": {\n", 366 | " \"RelativeValue\": temp_record[\"value\"][\"nightlyRelative\"]\n", 367 | " }\n", 368 | " })\n", 369 | " logging.info(\"Recorded Skin Temperature Variation for date \" + start_date_str + \" to \" + end_date_str)\n", 370 | " else:\n", 371 | " logging.error(\"Recording failed : Skin Temperature Variation for date \" + start_date_str + \" to \" + end_date_str)\n", 372 | "\n", 373 | " spo2_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/spo2/date/' + start_date_str + '/' + end_date_str + '/all.json')\n", 374 | " if spo2_data_list != None:\n", 375 | " for days in spo2_data_list:\n", 376 | " data = days[\"minutes\"]\n", 377 | " for record in data: \n", 378 | " log_time = datetime.fromisoformat(record[\"minute\"])\n", 379 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 380 | " collected_records.append({\n", 381 | " \"measurement\": \"SPO2_Intraday\",\n", 382 | " \"time\": utc_time,\n", 383 | " \"tags\": {\n", 384 | " \"Device\": DEVICENAME\n", 385 | " },\n", 386 | " \"fields\": {\n", 387 | " \"value\": float(record[\"value\"]),\n", 388 | " }\n", 389 | " })\n", 390 | " logging.info(\"Recorded SPO2 intraday for date \" + start_date_str + \" to \" + end_date_str)\n", 391 | " else:\n", 392 | " logging.error(\"Recording failed : SPO2 intraday for date \" + start_date_str + \" to \" + end_date_str)\n", 393 | "\n", 394 | "# Only for sleep data - limit 100 days - 1 query\n", 395 | "def get_daily_data_limit_100d(start_date_str, end_date_str):\n", 396 | "\n", 397 | " sleep_data = request_data_from_fitbit('https://api.fitbit.com/1.2/user/-/sleep/date/' + start_date_str + '/' + end_date_str + '.json')[\"sleep\"]\n", 398 | " if sleep_data != None:\n", 399 | " for record in sleep_data:\n", 400 | " log_time = datetime.fromisoformat(record[\"startTime\"])\n", 401 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 402 | " try:\n", 403 | " minutesLight= record['levels']['summary']['light']['minutes']\n", 404 | " minutesREM = record['levels']['summary']['rem']['minutes']\n", 405 | " minutesDeep = record['levels']['summary']['deep']['minutes']\n", 406 | " except:\n", 407 | " minutesLight= record['levels']['summary']['asleep']['minutes']\n", 408 | " minutesREM = record['levels']['summary']['restless']['minutes']\n", 409 | " minutesDeep = 0\n", 410 | "\n", 411 | " collected_records.append({\n", 412 | " \"measurement\": \"Sleep Summary\",\n", 413 | " \"time\": utc_time,\n", 414 | " \"tags\": {\n", 415 | " \"Device\": DEVICENAME,\n", 416 | " \"isMainSleep\": record[\"isMainSleep\"],\n", 417 | " },\n", 418 | " \"fields\": {\n", 419 | " 'efficiency': record[\"efficiency\"],\n", 420 | " 'minutesAfterWakeup': record['minutesAfterWakeup'],\n", 421 | " 'minutesAsleep': record['minutesAsleep'],\n", 422 | " 'minutesToFallAsleep': record['minutesToFallAsleep'],\n", 423 | " 'minutesInBed': record['timeInBed'],\n", 424 | " 'minutesAwake': record['minutesAwake'],\n", 425 | " 'minutesLight': 
minutesLight,\n", 426 | " 'minutesREM': minutesREM,\n", 427 | " 'minutesDeep': minutesDeep\n", 428 | " }\n", 429 | " })\n", 430 | " \n", 431 | " sleep_level_mapping = {'wake': 3, 'rem': 2, 'light': 1, 'deep': 0, 'asleep': 1, 'restless': 2, 'awake': 3}\n", 432 | " for sleep_stage in record['levels']['data']:\n", 433 | " log_time = datetime.fromisoformat(sleep_stage[\"dateTime\"])\n", 434 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 435 | " collected_records.append({\n", 436 | " \"measurement\": \"Sleep Levels\",\n", 437 | " \"time\": utc_time,\n", 438 | " \"tags\": {\n", 439 | " \"Device\": DEVICENAME,\n", 440 | " \"isMainSleep\": record[\"isMainSleep\"],\n", 441 | " },\n", 442 | " \"fields\": {\n", 443 | " 'level': sleep_level_mapping[sleep_stage[\"level\"]],\n", 444 | " 'duration_seconds': sleep_stage[\"seconds\"]\n", 445 | " }\n", 446 | " })\n", 447 | " wake_time = datetime.fromisoformat(record[\"endTime\"])\n", 448 | " utc_wake_time = LOCAL_TIMEZONE.localize(wake_time).astimezone(pytz.utc).isoformat()\n", 449 | " collected_records.append({\n", 450 | " \"measurement\": \"Sleep Levels\",\n", 451 | " \"time\": utc_wake_time,\n", 452 | " \"tags\": {\n", 453 | " \"Device\": DEVICENAME,\n", 454 | " \"isMainSleep\": record[\"isMainSleep\"],\n", 455 | " },\n", 456 | " \"fields\": {\n", 457 | " 'level': sleep_level_mapping['wake'],\n", 458 | " 'duration_seconds': None\n", 459 | " }\n", 460 | " })\n", 461 | " logging.info(\"Recorded Sleep data for date \" + start_date_str + \" to \" + end_date_str)\n", 462 | " else:\n", 463 | " logging.error(\"Recording failed : Sleep data for date \" + start_date_str + \" to \" + end_date_str)\n", 464 | "\n", 465 | "# Max date range 1 year, records HR zones, Activity minutes and Resting HR - 4 + 3 + 1 + 1 = 9 queries\n", 466 | "def get_daily_data_limit_365d(start_date_str, end_date_str):\n", 467 | " activity_minutes_list = [\"minutesSedentary\", \"minutesLightlyActive\", \"minutesFairlyActive\", \"minutesVeryActive\"]\n", 468 | " for activity_type in activity_minutes_list:\n", 469 | " activity_minutes_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/tracker/' + activity_type + '/date/' + start_date_str + '/' + end_date_str + '.json')[\"activities-tracker-\"+activity_type]\n", 470 | " if activity_minutes_data_list != None:\n", 471 | " for data in activity_minutes_data_list:\n", 472 | " log_time = datetime.fromisoformat(data[\"dateTime\"] + \"T\" + \"00:00:00\")\n", 473 | " utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 474 | " collected_records.append({\n", 475 | " \"measurement\": \"Activity Minutes\",\n", 476 | " \"time\": utc_time,\n", 477 | " \"tags\": {\n", 478 | " \"Device\": DEVICENAME\n", 479 | " },\n", 480 | " \"fields\": {\n", 481 | " activity_type : int(data[\"value\"])\n", 482 | " }\n", 483 | " })\n", 484 | " logging.info(\"Recorded \" + activity_type + \"for date \" + start_date_str + \" to \" + end_date_str)\n", 485 | " else:\n", 486 | " logging.error(\"Recording failed : \" + activity_type + \" for date \" + start_date_str + \" to \" + end_date_str)\n", 487 | " \n", 488 | "\n", 489 | " activity_others_list = [\"distance\", \"calories\", \"steps\"]\n", 490 | " for activity_type in activity_others_list:\n", 491 | " activity_others_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/tracker/' + activity_type + '/date/' + start_date_str + '/' + end_date_str + '.json')[\"activities-tracker-\"+activity_type]\n", 492 | " 
if activity_others_data_list != None:\n", 493 | "            for data in activity_others_data_list:\n", 494 | "                log_time = datetime.fromisoformat(data[\"dateTime\"] + \"T\" + \"00:00:00\")\n", 495 | "                utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 496 | "                activity_name = \"Total Steps\" if activity_type == \"steps\" else activity_type\n", 497 | "                collected_records.append({\n", 498 | "                        \"measurement\":  activity_name,\n", 499 | "                        \"time\": utc_time,\n", 500 | "                        \"tags\": {\n", 501 | "                            \"Device\": DEVICENAME\n", 502 | "                        },\n", 503 | "                        \"fields\": {\n", 504 | "                            \"value\" : float(data[\"value\"])\n", 505 | "                        }\n", 506 | "                    })\n", 507 | "            logging.info(\"Recorded \" + activity_name + \" for date \" + start_date_str + \" to \" + end_date_str)\n", 508 | "        else:\n", 509 | "            logging.error(\"Recording failed : \" + activity_type + \" for date \" + start_date_str + \" to \" + end_date_str)\n", 510 | "    \n", 511 | "\n", 512 | "    HR_zones_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/heart/date/' + start_date_str + '/' + end_date_str + '.json')[\"activities-heart\"]\n", 513 | "    if HR_zones_data_list != None:\n", 514 | "        for data in HR_zones_data_list:\n", 515 | "            log_time = datetime.fromisoformat(data[\"dateTime\"] + \"T\" + \"00:00:00\")\n", 516 | "            utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 517 | "            collected_records.append({\n", 518 | "                    \"measurement\":  \"HR zones\",\n", 519 | "                    \"time\": utc_time,\n", 520 | "                    \"tags\": {\n", 521 | "                        \"Device\": DEVICENAME\n", 522 | "                    },\n", 523 | "                    \"fields\": {\n", 524 | "                        \"Normal\" : data[\"value\"][\"heartRateZones\"][0][\"minutes\"],\n", 525 | "                        \"Fat Burn\" : data[\"value\"][\"heartRateZones\"][1][\"minutes\"],\n", 526 | "                        \"Cardio\" : data[\"value\"][\"heartRateZones\"][2][\"minutes\"],\n", 527 | "                        \"Peak\" : data[\"value\"][\"heartRateZones\"][3][\"minutes\"]\n", 528 | "                    }\n", 529 | "                })\n", 530 | "            if \"restingHeartRate\" in data[\"value\"]:\n", 531 | "                collected_records.append({\n", 532 | "                        \"measurement\": \"RestingHR\",\n", 533 | "                        \"time\": utc_time,\n", 534 | "                        \"tags\": {\n", 535 | "                            \"Device\": DEVICENAME\n", 536 | "                        },\n", 537 | "                        \"fields\": {\n", 538 | "                            \"value\": data[\"value\"][\"restingHeartRate\"]\n", 539 | "                        }\n", 540 | "                    })\n", 541 | "        logging.info(\"Recorded RHR and HR zones for date \" + start_date_str + \" to \" + end_date_str)\n", 542 | "    else:\n", 543 | "        logging.error(\"Recording failed : RHR and HR zones for date \" + start_date_str + \" to \" + end_date_str)\n", 544 | "\n", 545 | "# records SPO2 single days for the whole given period - 1 query\n", 546 | "def get_daily_data_limit_none(start_date_str, end_date_str):\n", 547 | "    data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/spo2/date/' + start_date_str + '/' + end_date_str + '.json')\n", 548 | "    if data_list != None:\n", 549 | "        for data in data_list:\n", 550 | "            log_time = datetime.fromisoformat(data[\"dateTime\"] + \"T\" + \"00:00:00\")\n", 551 | "            utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat()\n", 552 | "            collected_records.append({\n", 553 | "                    \"measurement\":  \"SPO2\",\n", 554 | "                    \"time\": utc_time,\n", 555 | "                    \"tags\": {\n", 556 | "                        \"Device\": DEVICENAME\n", 557 | "                    },\n", 558 | "                    \"fields\": {\n", 559 | "                        \"avg\": data[\"value\"][\"avg\"],\n", 560 | "                        \"max\": data[\"value\"][\"max\"],\n", 561 | "                        \"min\": data[\"value\"][\"min\"]\n", 562 | "                    }\n", 563 | "                })\n", 564 | "        logging.info(\"Recorded Avg SPO2 for date \" + start_date_str 
+ \" to \" + end_date_str)\n", 565 | " else:\n", 566 | " logging.error(\"Recording failed : Avg SPO2 for date \" + start_date_str + \" to \" + end_date_str)\n", 567 | "\n", 568 | "# Fetches latest activities from record ( upto last 100 )\n", 569 | "def fetch_latest_activities(end_date_str):\n", 570 | " recent_activities_data = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/list.json', params={'beforeDate': end_date_str, 'sort':'desc', 'limit':50, 'offset':0})\n", 571 | " if recent_activities_data != None:\n", 572 | " for activity in recent_activities_data['activities']:\n", 573 | " fields = {}\n", 574 | " if 'activeDuration' in activity:\n", 575 | " fields['ActiveDuration'] = int(activity['activeDuration'])\n", 576 | " if 'averageHeartRate' in activity:\n", 577 | " fields['AverageHeartRate'] = int(activity['averageHeartRate'])\n", 578 | " if 'calories' in activity:\n", 579 | " fields['calories'] = int(activity['calories'])\n", 580 | " if 'duration' in activity:\n", 581 | " fields['duration'] = int(activity['duration'])\n", 582 | " if 'distance' in activity:\n", 583 | " fields['distance'] = float(activity['distance'])\n", 584 | " if 'steps' in activity:\n", 585 | " fields['steps'] = int(activity['steps'])\n", 586 | " starttime = datetime.fromisoformat(activity['startTime'].strip(\"Z\"))\n", 587 | " utc_time = starttime.astimezone(pytz.utc).isoformat()\n", 588 | " collected_records.append({\n", 589 | " \"measurement\": \"Activity Records\",\n", 590 | " \"time\": utc_time,\n", 591 | " \"tags\": {\n", 592 | " \"ActivityName\": activity['activityName']\n", 593 | " },\n", 594 | " \"fields\": fields\n", 595 | " })\n", 596 | " logging.info(\"Fetched 50 recent activities before date \" + end_date_str)\n", 597 | " else:\n", 598 | " logging.error(\"Fetching 50 recent activities failed : before date \" + end_date_str)\n" 599 | ] 600 | }, 601 | { 602 | "cell_type": "markdown", 603 | "metadata": {}, 604 | "source": [ 605 | "## Set Timezone from profile data" 606 | ] 607 | }, 608 | { 609 | "cell_type": "code", 610 | "execution_count": 10, 611 | "metadata": {}, 612 | "outputs": [], 613 | "source": [ 614 | "if LOCAL_TIMEZONE == \"Automatic\":\n", 615 | " LOCAL_TIMEZONE = pytz.timezone(request_data_from_fitbit(\"https://api.fitbit.com/1/user/-/profile.json\")[\"user\"][\"timezone\"])\n", 616 | "else:\n", 617 | " LOCAL_TIMEZONE = pytz.timezone(LOCAL_TIMEZONE)" 618 | ] 619 | }, 620 | { 621 | "cell_type": "markdown", 622 | "metadata": {}, 623 | "source": [ 624 | "## Call the functions one time as a startup update OR do switch to bulk update mode" 625 | ] 626 | }, 627 | { 628 | "cell_type": "code", 629 | "execution_count": 32, 630 | "metadata": {}, 631 | "outputs": [], 632 | "source": [ 633 | "if AUTO_DATE_RANGE:\n", 634 | " date_list = [(start_date + timedelta(days=i)).strftime(\"%Y-%m-%d\") for i in range((end_date - start_date).days + 1)]\n", 635 | " if len(date_list) > 3:\n", 636 | " logging.warn(\"Auto schedule update is not meant for more than 3 days at a time, please consider lowering the auto_update_date_range variable to aviod rate limit hit!\")\n", 637 | " for date_str in date_list:\n", 638 | " get_intraday_data_limit_1d(date_str, [('heart','HeartRate_Intraday','1sec'),('steps','Steps_Intraday','1min')]) # 2 queries x number of dates ( default 2)\n", 639 | " get_daily_data_limit_30d(start_date_str, end_date_str) # 3 queries\n", 640 | " get_daily_data_limit_100d(start_date_str, end_date_str) # 1 query\n", 641 | " get_daily_data_limit_365d(start_date_str, end_date_str) # 8 
queries\n", 642 | " get_daily_data_limit_none(start_date_str, end_date_str) # 1 query\n", 643 | " get_battery_level() # 1 query\n", 644 | " fetch_latest_activities(end_date_str) # 1 query\n", 645 | " write_points_to_influxdb(collected_records)\n", 646 | " collected_records = []\n", 647 | "else:\n", 648 | " # Do Bulk update----------------------------------------------------------------------------------------------------------------------------\n", 649 | "\n", 650 | " schedule.every(1).hours.do(lambda : Get_New_Access_Token(client_id,client_secret)) # Auto-refresh tokens every 1 hour\n", 651 | " \n", 652 | " date_list = [(start_date + timedelta(days=i)).strftime(\"%Y-%m-%d\") for i in range((end_date - start_date).days + 1)]\n", 653 | "\n", 654 | " def yield_dates_with_gap(date_list, gap):\n", 655 | " start_index = -1*gap\n", 656 | " while start_index < len(date_list)-1:\n", 657 | " start_index = start_index + gap\n", 658 | " end_index = start_index+gap\n", 659 | " if end_index > len(date_list) - 1:\n", 660 | " end_index = len(date_list) - 1\n", 661 | " if start_index > len(date_list) - 1:\n", 662 | " break\n", 663 | " yield (date_list[start_index],date_list[end_index])\n", 664 | "\n", 665 | " def do_bulk_update(funcname, start_date, end_date):\n", 666 | " global collected_records\n", 667 | " funcname(start_date, end_date)\n", 668 | " schedule.run_pending()\n", 669 | " write_points_to_influxdb(collected_records)\n", 670 | " collected_records = []\n", 671 | "\n", 672 | " fetch_latest_activities(date_list[-1])\n", 673 | " write_points_to_influxdb(collected_records)\n", 674 | " do_bulk_update(get_daily_data_limit_none, date_list[0], date_list[-1])\n", 675 | " for date_range in yield_dates_with_gap(date_list, 360):\n", 676 | " do_bulk_update(get_daily_data_limit_365d, date_range[0], date_range[1])\n", 677 | " for date_range in yield_dates_with_gap(date_list, 98):\n", 678 | " do_bulk_update(get_daily_data_limit_100d, date_range[0], date_range[1])\n", 679 | " for date_range in yield_dates_with_gap(date_list, 28):\n", 680 | " do_bulk_update(get_daily_data_limit_30d, date_range[0], date_range[1])\n", 681 | " for single_day in date_list:\n", 682 | " do_bulk_update(get_intraday_data_limit_1d, single_day, [('heart','HeartRate_Intraday','1sec'),('steps','Steps_Intraday','1min')])\n", 683 | "\n", 684 | " logging.info(\"Success : Bulk update complete for \" + start_date_str + \" to \" + end_date_str)\n", 685 | " print(\"Bulk update complete!\")" 686 | ] 687 | }, 688 | { 689 | "cell_type": "markdown", 690 | "metadata": {}, 691 | "source": [ 692 | "## Schedule functions at specific intervals (Ongoing continuous update)" 693 | ] 694 | }, 695 | { 696 | "cell_type": "code", 697 | "execution_count": 33, 698 | "metadata": {}, 699 | "outputs": [], 700 | "source": [ 701 | "# Ongoing continuous update of data\n", 702 | "if SCHEDULE_AUTO_UPDATE:\n", 703 | " \n", 704 | " schedule.every(1).hours.do(lambda : Get_New_Access_Token(client_id,client_secret)) # Auto-refresh tokens every 1 hour\n", 705 | " schedule.every(3).minutes.do( lambda : get_intraday_data_limit_1d(end_date_str, [('heart','HeartRate_Intraday','1sec'),('steps','Steps_Intraday','1min')] )) # Auto-refresh detailed HR and steps\n", 706 | " schedule.every(20).minutes.do(get_battery_level) # Auto-refresh battery level\n", 707 | " schedule.every(3).hours.do(lambda : get_daily_data_limit_30d(start_date_str, end_date_str))\n", 708 | " schedule.every(4).hours.do(lambda : get_daily_data_limit_100d(start_date_str, end_date_str))\n", 709 | " 
schedule.every(6).hours.do( lambda : get_daily_data_limit_365d(start_date_str, end_date_str))\n", 710 | "    schedule.every(6).hours.do(lambda : get_daily_data_limit_none(start_date_str, end_date_str))\n", 711 | "    schedule.every(1).hours.do( lambda : fetch_latest_activities(end_date_str))\n", 712 | "\n", 713 | "    while True:\n", 714 | "        schedule.run_pending()\n", 715 | "        if len(collected_records) != 0:\n", 716 | "            write_points_to_influxdb(collected_records)\n", 717 | "            collected_records = []\n", 718 | "        time.sleep(30)\n", 719 | "        update_working_dates()\n", 720 | "    " 721 | ] 722 | } 723 | ], 724 | "metadata": { 725 | "kernelspec": { 726 | "display_name": "my-conda-env", 727 | "language": "python", 728 | "name": "python3" 729 | }, 730 | "language_info": { 731 | "codemirror_mode": { 732 | "name": "ipython", 733 | "version": 3 734 | }, 735 | "file_extension": ".py", 736 | "mimetype": "text/x-python", 737 | "name": "python", 738 | "nbconvert_exporter": "python", 739 | "pygments_lexer": "ipython3", 740 | "version": "3.11.3" 741 | }, 742 | "orig_nbformat": 4 743 | }, 744 | "nbformat": 4, 745 | "nbformat_minor": 2 746 | } 747 | -------------------------------------------------------------------------------- /Fitbit_Fetch.py: -------------------------------------------------------------------------------- 1 | # %% 2 | import base64, requests, schedule, time, json, pytz, logging, os, sys 3 | from requests.exceptions import ConnectionError 4 | from datetime import datetime, timedelta 5 | # for influxdb 1.x 6 | from influxdb import InfluxDBClient 7 | from influxdb.exceptions import InfluxDBClientError 8 | # for influxdb 2.x 9 | from influxdb_client import InfluxDBClient as InfluxDBClient2 10 | # from influxdb_client.client.exceptions import InfluxDBError # possible duplicate 11 | from influxdb_client.client.write_api import SYNCHRONOUS 12 | # for influxdb 3.x 13 | from influxdb_client_3 import InfluxDBClient3, InfluxDBError 14 | # For XML processing 15 | import xml.etree.ElementTree as ET 16 | 17 | # %% [markdown] 18 | # ## Variables 19 | 20 | # %% 21 | FITBIT_LOG_FILE_PATH = os.environ.get("FITBIT_LOG_FILE_PATH") or "your/expected/log/file/location/path" 22 | TOKEN_FILE_PATH = os.environ.get("TOKEN_FILE_PATH") or "your/expected/token/file/location/path" 23 | OVERWRITE_LOG_FILE = True 24 | FITBIT_LANGUAGE = 'en_US' 25 | INFLUXDB_VERSION = os.environ.get("INFLUXDB_VERSION") or "1" # Version of influxdb in use, supported values are 1, 2 or 3 26 | assert INFLUXDB_VERSION in ['1','2','3'], "Only InfluxDB versions 1, 2 and 3 are supported - please set INFLUXDB_VERSION to 1, 2 or 3" 27 | # Update these variables for influxdb 1.x versions 28 | INFLUXDB_HOST = os.environ.get("INFLUXDB_HOST") or 'localhost' # for influxdb 1.x 29 | INFLUXDB_PORT = os.environ.get("INFLUXDB_PORT") or 8086 # for influxdb 1.x 30 | INFLUXDB_USERNAME = os.environ.get("INFLUXDB_USERNAME") or 'your_influxdb_username' # for influxdb 1.x 31 | INFLUXDB_PASSWORD = os.environ.get("INFLUXDB_PASSWORD") or 'your_influxdb_password' # for influxdb 1.x 32 | INFLUXDB_DATABASE = os.environ.get("INFLUXDB_DATABASE") or 'your_influxdb_database_name' # for influxdb 1.x 33 | # Update these variables for influxdb 2.x versions 34 | INFLUXDB_BUCKET = os.environ.get("INFLUXDB_BUCKET") or "your_bucket_name_here" # for influxdb 2.x 35 | INFLUXDB_ORG = os.environ.get("INFLUXDB_ORG") or "your_org_here" # for influxdb 2.x 36 | INFLUXDB_TOKEN = os.environ.get("INFLUXDB_TOKEN") or "your_token_here" # for influxdb 2.x 37 | INFLUXDB_URL = os.environ.get("INFLUXDB_URL") or 
"http://your_url_here:8086" # for influxdb 2.x 38 | INFLUXDB_V3_ACCESS_TOKEN = os.getenv("INFLUXDB_V3_ACCESS_TOKEN",'') # InfluxDB V3 Access token, required only for InfluxDB 3.x 39 | # MAKE SURE you set the application type to PERSONAL. Otherwise, you won't have access to intraday data series, resulting in 40X errors. 40 | client_id = os.environ.get("CLIENT_ID") or "your_application_client_ID" # Change this to your client ID 41 | client_secret = os.environ.get("CLIENT_SECRET") or "your_application_client_secret" # Change this to your client Secret 42 | DEVICENAME = os.environ.get("DEVICENAME") or "Your_Device_Name" # e.g. "Charge5" 43 | ACCESS_TOKEN = "" # Empty Global variable initialization, will be replaced with a functional access code later using the refresh code 44 | MANUAL_START_DATE = os.getenv("MANUAL_START_DATE", None) # optional, in YYYY-MM-DD format, if you want to bulk update only from specific date 45 | MANUAL_END_DATE = os.getenv("MANUAL_END_DATE", datetime.today().strftime('%Y-%m-%d')) # optional, in YYYY-MM-DD format, if you want to bulk update until a specific date 46 | AUTO_DATE_RANGE = False if os.environ.get("AUTO_DATE_RANGE") in ['False','false','FALSE','f','F','no','No','NO','0'] else (not bool(MANUAL_START_DATE)) # Automatically selects date range from todays date and update_date_range variable 47 | auto_update_date_range = 1 # Days to go back from today for AUTO_DATE_RANGE *** DO NOT go above 2 - otherwise may break rate limit *** 48 | LOCAL_TIMEZONE = os.environ.get("LOCAL_TIMEZONE") or "Automatic" # set to "Automatic" for Automatic setup from User profile (if not mentioned here specifically). 49 | SCHEDULE_AUTO_UPDATE = True if AUTO_DATE_RANGE else False # Scheduling updates of data when script runs 50 | SERVER_ERROR_MAX_RETRY = 3 51 | EXPIRED_TOKEN_MAX_RETRY = 5 52 | SKIP_REQUEST_ON_SERVER_ERROR = True 53 | 54 | # %% [markdown] 55 | # ## Logging setup 56 | 57 | # %% 58 | if OVERWRITE_LOG_FILE: 59 | with open(FITBIT_LOG_FILE_PATH, "w"): pass 60 | 61 | logging.basicConfig( 62 | level=logging.DEBUG, 63 | format="%(asctime)s - %(levelname)s - %(message)s", 64 | handlers=[ 65 | logging.FileHandler(FITBIT_LOG_FILE_PATH, mode='a'), 66 | logging.StreamHandler(sys.stdout) 67 | ] 68 | ) 69 | 70 | # %% [markdown] 71 | # ## Setting up base API Caller function 72 | 73 | # %% 74 | # Generic Request caller for all 75 | def request_data_from_fitbit(url, headers={}, params={}, data={}, request_type="get"): 76 | global ACCESS_TOKEN 77 | retry_attempts = 0 78 | logging.debug("Requesting data from fitbit via Url : " + url) 79 | while True: # Unlimited Retry attempts 80 | if request_type == "get" and headers == {}: 81 | headers = { 82 | "Authorization": f"Bearer {ACCESS_TOKEN}", 83 | "Accept": "application/json", 84 | 'Accept-Language': FITBIT_LANGUAGE 85 | } 86 | try: 87 | if request_type == "get": 88 | response = requests.get(url, headers=headers, params=params, data=data) 89 | elif request_type == "post": 90 | response = requests.post(url, headers=headers, params=params, data=data) 91 | else: 92 | raise Exception("Invalid request type " + str(request_type)) 93 | 94 | if response.status_code == 200: # Success 95 | if url.endswith(".tcx"): # TCX XML file for GPS data 96 | return response 97 | else: 98 | return response.json() 99 | elif response.status_code == 429: # API Limit reached 100 | retry_after = int(response.headers["Fitbit-Rate-Limit-Reset"]) + 300 # Fitbit changed their headers. 101 | logging.warning("Fitbit API limit reached. 
Error code : " + str(response.status_code) + ", Retrying in " + str(retry_after) + " seconds") 102 | print("Fitbit API limit reached. Error code : " + str(response.status_code) + ", Retrying in " + str(retry_after) + " seconds") 103 | time.sleep(retry_after) 104 | elif response.status_code == 401: # Access token expired ( most likely ) 105 | logging.info("Current Access Token : " + ACCESS_TOKEN) 106 | logging.warning("Error code : " + str(response.status_code) + ", Details : " + response.text) 107 | print("Error code : " + str(response.status_code) + ", Details : " + response.text) 108 | ACCESS_TOKEN = Get_New_Access_Token(client_id, client_secret) 109 | logging.info("New Access Token : " + ACCESS_TOKEN) 110 | headers["Authorization"] = f"Bearer {ACCESS_TOKEN}" # Update the renewed ACCESS_TOKEN to the headers dict 111 | time.sleep(30) 112 | if retry_attempts > EXPIRED_TOKEN_MAX_RETRY: 113 | logging.error("Unable to solve the 401 Error. Please debug - " + response.text) 114 | raise Exception("Unable to solve the 401 Error. Please debug - " + response.text) 115 | elif response.status_code in [500, 502, 503, 504]: # Fitbit server is down or not responding ( most likely ): 116 | logging.warning("Server Error encountered ( Code 5xx ): Retrying after 120 seconds....") 117 | time.sleep(120) 118 | if retry_attempts > SERVER_ERROR_MAX_RETRY: 119 | logging.error("Unable to solve the server Error. Retry limit exceed. Please debug - " + response.text) 120 | if SKIP_REQUEST_ON_SERVER_ERROR: 121 | logging.warning("Retry limit reached for server error : Skipping request -> " + url) 122 | return None 123 | else: 124 | logging.error("Fitbit API request failed. Status code: " + str(response.status_code) + " " + str(response.text) ) 125 | print(f"Fitbit API request failed. 
Status code: {response.status_code}", response.text) 126 | response.raise_for_status() 127 | return None 128 | 129 | except ConnectionError as e: 130 | logging.error("Retrying in 30 seconds - Failed to connect to internet : " + str(e)) 131 | print("Retrying in 30 seconds - Failed to connect to internet : " + str(e)) 132 | retry_attempts += 1 133 | time.sleep(30) 134 | 135 | # %% [markdown] 136 | # ## Token Refresh Management 137 | 138 | # %% 139 | def refresh_fitbit_tokens(client_id, client_secret, refresh_token): 140 | logging.info("Attempting to refresh tokens...") 141 | url = "https://api.fitbit.com/oauth2/token" 142 | headers = { 143 | "Authorization": "Basic " + base64.b64encode((client_id + ":" + client_secret).encode()).decode(), 144 | "Content-Type": "application/x-www-form-urlencoded" 145 | } 146 | data = { 147 | "grant_type": "refresh_token", 148 | "refresh_token": refresh_token 149 | } 150 | json_data = request_data_from_fitbit(url, headers=headers, data=data, request_type="post") 151 | access_token = json_data["access_token"] 152 | new_refresh_token = json_data["refresh_token"] 153 | tokens = { 154 | "access_token": access_token, 155 | "refresh_token": new_refresh_token 156 | } 157 | with open(TOKEN_FILE_PATH, "w") as file: 158 | json.dump(tokens, file) 159 | logging.info("Fitbit token refresh successful!") 160 | return access_token, new_refresh_token 161 | 162 | def load_tokens_from_file(): 163 | with open(TOKEN_FILE_PATH, "r") as file: 164 | tokens = json.load(file) 165 | return tokens.get("access_token"), tokens.get("refresh_token") 166 | 167 | def Get_New_Access_Token(client_id, client_secret): 168 | try: 169 | access_token, refresh_token = load_tokens_from_file() 170 | except FileNotFoundError: 171 | refresh_token = input("No token file found. Please enter a valid refresh token : ") 172 | access_token, refresh_token = refresh_fitbit_tokens(client_id, client_secret, refresh_token) 173 | return access_token 174 | 175 | ACCESS_TOKEN = Get_New_Access_Token(client_id, client_secret) 176 | 177 | # %% [markdown] 178 | # ## Influxdb Database Initialization 179 | 180 | # %% 181 | if INFLUXDB_VERSION == "2": 182 | try: 183 | influxdbclient = InfluxDBClient2(url=INFLUXDB_URL, token=INFLUXDB_TOKEN, org=INFLUXDB_ORG) 184 | influxdb_write_api = influxdbclient.write_api(write_options=SYNCHRONOUS) 185 | except InfluxDBError as err: 186 | logging.error("Unable to connect with influxdb 2.x database! Aborted") 187 | raise InfluxDBError("InfluxDB connection failed: " + str(err)) 188 | elif INFLUXDB_VERSION == "1": 189 | try: 190 | influxdbclient = InfluxDBClient(host=INFLUXDB_HOST, port=INFLUXDB_PORT, username=INFLUXDB_USERNAME, password=INFLUXDB_PASSWORD) 191 | influxdbclient.switch_database(INFLUXDB_DATABASE) 192 | except InfluxDBClientError as err: 193 | logging.error("Unable to connect with influxdb 1.x database! Aborted") 194 | raise InfluxDBClientError("InfluxDB connection failed: " + str(err)) 195 | elif INFLUXDB_VERSION == "3": 196 | try: 197 | influxdbclient = InfluxDBClient3( 198 | host=f"http://{INFLUXDB_HOST}:{INFLUXDB_PORT}", 199 | token=INFLUXDB_V3_ACCESS_TOKEN, 200 | database=INFLUXDB_DATABASE 201 | ) 202 | demo_point = { 203 | 'measurement': 'DemoPoint', 204 | 'time': '1970-01-01T00:00:00+00:00', 205 | 'tags': {'DemoTag': 'DemoTagValue'}, 206 | 'fields': {'DemoField': 0} 207 | } 208 | # The following code block tests the connection by writing/overwriting a demo point. Raises an error and aborts if the connection fails. 
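 | # A successful write also validates the host, token and database permissions in one round trip.
 | # Because the demo point reuses a fixed timestamp and tag set, repeated runs overwrite the same point instead of accumulating rows.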
209 | influxdbclient.write(record=[demo_point]) 210 | except InfluxDBError as err: 211 | logging.error("Unable to connect with influxdb 3.x database! Aborted") 212 | raise InfluxDBError("InfluxDB connection failed: " + str(err)) 213 | else: 214 | logging.error("No matching version found. Supported values are 1, 2 and 3") 215 | raise InfluxDBClientError("No matching version found. Supported values are 1, 2 and 3") 216 | 217 | def write_points_to_influxdb(points): 218 | if INFLUXDB_VERSION == "2": 219 | try: 220 | influxdb_write_api.write(bucket=INFLUXDB_BUCKET, org=INFLUXDB_ORG, record=points) 221 | logging.info("Successfully updated influxdb database with new points") 222 | except InfluxDBError as err: 223 | logging.error("Unable to connect with influxdb 2.x database! " + str(err)) 224 | print("Influxdb connection failed! ", str(err)) 225 | elif INFLUXDB_VERSION == "1": 226 | try: 227 | influxdbclient.write_points(points) 228 | logging.info("Successfully updated influxdb database with new points") 229 | except InfluxDBClientError as err: 230 | logging.error("Unable to connect with influxdb 1.x database! " + str(err)) 231 | print("Influxdb connection failed! ", str(err)) 232 | elif INFLUXDB_VERSION == "3": 233 | try: 234 | influxdbclient.write(record=points) 235 | logging.info("Successfully updated influxdb database with new points") 236 | except InfluxDBError as err: 237 | logging.error("Unable to connect with influxdb 3.x database! " + str(err)) 238 | print("Influxdb connection failed! ", str(err)) 239 | else: 240 | logging.error("No matching version found. Supported values are 1, 2 and 3") 241 | raise InfluxDBClientError("No matching version found. Supported values are 1, 2 and 3") 242 | 243 | # %% [markdown] 244 | # ## Set Timezone from profile data 245 | 246 | # %% 247 | if LOCAL_TIMEZONE == "Automatic": 248 | LOCAL_TIMEZONE = pytz.timezone(request_data_from_fitbit("https://api.fitbit.com/1/user/-/profile.json")["user"]["timezone"]) 249 | else: 250 | LOCAL_TIMEZONE = pytz.timezone(LOCAL_TIMEZONE) 251 | 252 | # %% [markdown] 253 | # ## Selecting Dates for update 254 | 255 | # %% 256 | if AUTO_DATE_RANGE: 257 | end_date = datetime.now(LOCAL_TIMEZONE) 258 | start_date = end_date - timedelta(days=auto_update_date_range) 259 | end_date_str = end_date.strftime("%Y-%m-%d") 260 | start_date_str = start_date.strftime("%Y-%m-%d") 261 | else: 262 | start_date_str = MANUAL_START_DATE or input("Enter start date in YYYY-MM-DD format : ") 263 | end_date_str = MANUAL_END_DATE or input("Enter end date in YYYY-MM-DD format : ") 264 | start_date = datetime.strptime(start_date_str, "%Y-%m-%d") 265 | end_date = datetime.strptime(end_date_str, "%Y-%m-%d") 266 | 267 | # %% [markdown] 268 | # ## Setting up functions for Requesting data from server 269 | 270 | # %% 271 | collected_records = [] 272 | 273 | def update_working_dates(): 274 | global end_date, start_date, end_date_str, start_date_str 275 | end_date = datetime.now(LOCAL_TIMEZONE) 276 | start_date = end_date - timedelta(days=auto_update_date_range) 277 | end_date_str = end_date.strftime("%Y-%m-%d") 278 | start_date_str = start_date.strftime("%Y-%m-%d") 279 | 280 | # Get last synced battery level of the device 281 | def get_battery_level(): 282 | device = request_data_from_fitbit("https://api.fitbit.com/1/user/-/devices.json")[0] 283 | if device != None: 284 | collected_records.append({ 285 | "measurement": "DeviceBatteryLevel", 286 | "time": 
LOCAL_TIMEZONE.localize(datetime.fromisoformat(device['lastSyncTime'])).astimezone(pytz.utc).isoformat(), 287 | "fields": { 288 | "value": float(device['batteryLevel']) 289 | } 290 | }) 291 | logging.info("Recorded battery level for " + DEVICENAME) 292 | else: 293 | logging.error("Recording battery level failed : " + DEVICENAME) 294 | 295 | # For intraday detailed data, max possible range in one day. 296 | def get_intraday_data_limit_1d(date_str, measurement_list): 297 | for measurement in measurement_list: 298 | data = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/' + measurement[0] + '/date/' + date_str + '/1d/' + measurement[2] + '.json')["activities-" + measurement[0] + "-intraday"]['dataset'] 299 | if data != None: 300 | for value in data: 301 | log_time = datetime.fromisoformat(date_str + "T" + value['time']) 302 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 303 | collected_records.append({ 304 | "measurement": measurement[1], 305 | "time": utc_time, 306 | "tags": { 307 | "Device": DEVICENAME 308 | }, 309 | "fields": { 310 | "value": int(value['value']) 311 | } 312 | }) 313 | logging.info("Recorded " + measurement[1] + " intraday for date " + date_str) 314 | else: 315 | logging.error("Recording failed : " + measurement[1] + " intraday for date " + date_str) 316 | 317 | # Max range is 30 days, records BR, SPO2 Intraday, skin temp and HRV - 4 queries 318 | def get_daily_data_limit_30d(start_date_str, end_date_str): 319 | 320 | hrv_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/hrv/date/' + start_date_str + '/' + end_date_str + '.json').get('hrv') 321 | if hrv_data_list != None: 322 | for data in hrv_data_list: 323 | log_time = datetime.fromisoformat(data["dateTime"] + "T" + "00:00:00") 324 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 325 | collected_records.append({ 326 | "measurement": "HRV", 327 | "time": utc_time, 328 | "tags": { 329 | "Device": DEVICENAME 330 | }, 331 | "fields": { 332 | "dailyRmssd": float(data["value"]["dailyRmssd"]) if data["value"]["dailyRmssd"] else None, 333 | "deepRmssd": float(data["value"]["deepRmssd"]) if data["value"]["deepRmssd"] else None 334 | } 335 | }) 336 | logging.info("Recorded HRV for date " + start_date_str + " to " + end_date_str) 337 | else: 338 | logging.error("Recording failed HRV for date " + start_date_str + " to " + end_date_str) 339 | 340 | br_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/br/date/' + start_date_str + '/' + end_date_str + '.json').get("br") 341 | if br_data_list != None: 342 | for data in br_data_list: 343 | log_time = datetime.fromisoformat(data["dateTime"] + "T" + "00:00:00") 344 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 345 | collected_records.append({ 346 | "measurement": "BreathingRate", 347 | "time": utc_time, 348 | "tags": { 349 | "Device": DEVICENAME 350 | }, 351 | "fields": { 352 | "value": float(data["value"]["breathingRate"]) 353 | } 354 | }) 355 | logging.info("Recorded BR for date " + start_date_str + " to " + end_date_str) 356 | else: 357 | logging.warning("Records not found : BR for date " + start_date_str + " to " + end_date_str) 358 | 359 | skin_temp_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/temp/skin/date/' + start_date_str + '/' + end_date_str + '.json').get("tempSkin") 360 | if skin_temp_data_list != None: 361 | for temp_record in skin_temp_data_list: 362 | log_time = 
datetime.fromisoformat(temp_record["dateTime"] + "T" + "00:00:00") 363 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 364 | collected_records.append({ 365 | "measurement": "Skin Temperature Variation", 366 | "time": utc_time, 367 | "tags": { 368 | "Device": DEVICENAME 369 | }, 370 | "fields": { 371 | "RelativeValue": temp_record["value"]["nightlyRelative"] 372 | } 373 | }) 374 | logging.info("Recorded Skin Temperature Variation for date " + start_date_str + " to " + end_date_str) 375 | else: 376 | logging.error("Recording failed : Skin Temperature Variation for date " + start_date_str + " to " + end_date_str) 377 | 378 | try: 379 | spo2_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/spo2/date/' + start_date_str + '/' + end_date_str + '/all.json') 380 | except requests.exceptions.HTTPError as e: 381 | logging.error(f"{e}") 382 | spo2_data_list = None 383 | if spo2_data_list != None: 384 | for days in spo2_data_list: 385 | data = days["minutes"] 386 | for record in data: 387 | log_time = datetime.fromisoformat(record["minute"]) 388 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 389 | collected_records.append({ 390 | "measurement": "SPO2_Intraday", 391 | "time": utc_time, 392 | "tags": { 393 | "Device": DEVICENAME 394 | }, 395 | "fields": { 396 | "value": float(record["value"]), 397 | } 398 | }) 399 | logging.info("Recorded SPO2 intraday for date " + start_date_str + " to " + end_date_str) 400 | else: 401 | logging.error("Recording failed : SPO2 intraday for date " + start_date_str + " to " + end_date_str) 402 | 403 | weight_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/body/log/weight/date/' + start_date_str + '/' + end_date_str + '.json').get("weight") 404 | if weight_data_list != None: 405 | for entry in weight_data_list: 406 | log_time = datetime.fromisoformat(entry["date"] + "T" + entry["time"]) 407 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 408 | collected_records.append({ 409 | "measurement": "weight", 410 | "time": utc_time, 411 | "tags": { 412 | "Device": DEVICENAME 413 | }, 414 | "fields": { 415 | "value": float(entry["weight"]), 416 | } 417 | }) 418 | collected_records.append({ 419 | "measurement": "bmi", 420 | "time": utc_time, 421 | "tags": { 422 | "Device": DEVICENAME 423 | }, 424 | "fields": { 425 | "value": float(entry["bmi"]), 426 | } 427 | }) 428 | logging.info("Recorded weight and BMI for date " + start_date_str + " to " + end_date_str) 429 | else: 430 | logging.error("Recording failed : weight and BMI for date " + start_date_str + " to " + end_date_str) 431 | 432 | # Only for sleep data - limit 100 days - 1 query 433 | def get_daily_data_limit_100d(start_date_str, end_date_str): 434 | 435 | sleep_data = request_data_from_fitbit('https://api.fitbit.com/1.2/user/-/sleep/date/' + start_date_str + '/' + end_date_str + '.json').get("sleep") 436 | if sleep_data != None: 437 | for record in sleep_data: 438 | log_time = datetime.fromisoformat(record["startTime"]) 439 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 440 | try: 441 | minutesLight = record['levels']['summary']['light']['minutes'] 442 | minutesREM = record['levels']['summary']['rem']['minutes'] 443 | minutesDeep = record['levels']['summary']['deep']['minutes'] 444 | except KeyError: # older 'classic' sleep logs only report asleep/restless/awake minutes 445 | minutesLight = record['levels']['summary']['asleep']['minutes'] 446 | minutesREM = record['levels']['summary']['restless']['minutes'] 447 | minutesDeep = 0 448 
| collected_records.append({ 450 | "measurement": "Sleep Summary", 451 | "time": utc_time, 452 | "tags": { 453 | "Device": DEVICENAME, 454 | "isMainSleep": record["isMainSleep"], 455 | }, 456 | "fields": { 457 | 'efficiency': record["efficiency"], 458 | 'minutesAfterWakeup': record['minutesAfterWakeup'], 459 | 'minutesAsleep': record['minutesAsleep'], 460 | 'minutesToFallAsleep': record['minutesToFallAsleep'], 461 | 'minutesInBed': record['timeInBed'], 462 | 'minutesAwake': record['minutesAwake'], 463 | 'minutesLight': minutesLight, 464 | 'minutesREM': minutesREM, 465 | 'minutesDeep': minutesDeep 466 | } 467 | }) 468 | 469 | sleep_level_mapping = {'wake': 3, 'rem': 2, 'light': 1, 'deep': 0, 'asleep': 1, 'restless': 2, 'awake': 3, 'unknown': 4} 470 | for sleep_stage in record['levels']['data']: 471 | log_time = datetime.fromisoformat(sleep_stage["dateTime"]) 472 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 473 | collected_records.append({ 474 | "measurement": "Sleep Levels", 475 | "time": utc_time, 476 | "tags": { 477 | "Device": DEVICENAME, 478 | "isMainSleep": record["isMainSleep"], 479 | }, 480 | "fields": { 481 | 'level': sleep_level_mapping[sleep_stage["level"]], 482 | 'duration_seconds': sleep_stage["seconds"] 483 | } 484 | }) 485 | wake_time = datetime.fromisoformat(record["endTime"]) 486 | utc_wake_time = LOCAL_TIMEZONE.localize(wake_time).astimezone(pytz.utc).isoformat() 487 | collected_records.append({ 488 | "measurement": "Sleep Levels", 489 | "time": utc_wake_time, 490 | "tags": { 491 | "Device": DEVICENAME, 492 | "isMainSleep": record["isMainSleep"], 493 | }, 494 | "fields": { 495 | 'level': sleep_level_mapping['wake'], 496 | 'duration_seconds': None 497 | } 498 | }) 499 | logging.info("Recorded Sleep data for date " + start_date_str + " to " + end_date_str) 500 | else: 501 | logging.error("Recording failed : Sleep data for date " + start_date_str + " to " + end_date_str) 502 | 503 | # Max date range 1 year, records HR zones, Activity minutes and Resting HR - 4 + 3 + 1 + 1 = 9 queries 504 | def get_daily_data_limit_365d(start_date_str, end_date_str): 505 | activity_minutes_list = ["minutesSedentary", "minutesLightlyActive", "minutesFairlyActive", "minutesVeryActive"] 506 | for activity_type in activity_minutes_list: 507 | activity_minutes_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/tracker/' + activity_type + '/date/' + start_date_str + '/' + end_date_str + '.json').get("activities-tracker-"+activity_type) 508 | if activity_minutes_data_list != None: 509 | for data in activity_minutes_data_list: 510 | log_time = datetime.fromisoformat(data["dateTime"] + "T" + "00:00:00") 511 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 512 | collected_records.append({ 513 | "measurement": "Activity Minutes", 514 | "time": utc_time, 515 | "tags": { 516 | "Device": DEVICENAME 517 | }, 518 | "fields": { 519 | activity_type : int(data["value"]) 520 | } 521 | }) 522 | logging.info("Recorded " + activity_type + " for date " + start_date_str + " to " + end_date_str) 523 | else: 524 | logging.error("Recording failed : " + activity_type + " for date " + start_date_str + " to " + end_date_str) 525 | 526 | 527 | activity_others_list = ["distance", "calories", "steps"] 528 | for activity_type in activity_others_list: 529 | activity_others_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/tracker/' + activity_type + '/date/' + start_date_str + '/' + end_date_str + 
'.json').get("activities-tracker-"+activity_type) 530 | if activity_others_data_list != None: 531 | for data in activity_others_data_list: 532 | log_time = datetime.fromisoformat(data["dateTime"] + "T" + "00:00:00") 533 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 534 | activity_name = "Total Steps" if activity_type == "steps" else activity_type 535 | collected_records.append({ 536 | "measurement": activity_name, 537 | "time": utc_time, 538 | "tags": { 539 | "Device": DEVICENAME 540 | }, 541 | "fields": { 542 | "value" : float(data["value"]) 543 | } 544 | }) 545 | logging.info("Recorded " + activity_name + " for date " + start_date_str + " to " + end_date_str) 546 | else: 547 | logging.error("Recording failed : " + activity_name + " for date " + start_date_str + " to " + end_date_str) 548 | 549 | 550 | HR_zones_data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/heart/date/' + start_date_str + '/' + end_date_str + '.json').get("activities-heart") 551 | if HR_zones_data_list != None: 552 | for data in HR_zones_data_list: 553 | log_time = datetime.fromisoformat(data["dateTime"] + "T" + "00:00:00") 554 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 555 | collected_records.append({ 556 | "measurement": "HR zones", 557 | "time": utc_time, 558 | "tags": { 559 | "Device": DEVICENAME 560 | }, 561 | # Using get() method with a default value 0 to prevent keyerror ( see issue #31) 562 | "fields": { 563 | "Normal" : data["value"]["heartRateZones"][0].get("minutes", 0), 564 | "Fat Burn" : data["value"]["heartRateZones"][1].get("minutes", 0), 565 | "Cardio" : data["value"]["heartRateZones"][2].get("minutes", 0), 566 | "Peak" : data["value"]["heartRateZones"][3].get("minutes", 0) 567 | } 568 | }) 569 | if "restingHeartRate" in data["value"]: 570 | collected_records.append({ 571 | "measurement": "RestingHR", 572 | "time": utc_time, 573 | "tags": { 574 | "Device": DEVICENAME 575 | }, 576 | "fields": { 577 | "value": data["value"]["restingHeartRate"] 578 | } 579 | }) 580 | logging.info("Recorded RHR and HR zones for date " + start_date_str + " to " + end_date_str) 581 | else: 582 | logging.error("Recording failed : RHR and HR zones for date " + start_date_str + " to " + end_date_str) 583 | 584 | HR_zone_minutes_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/active-zone-minutes/date/' + start_date_str + '/' + end_date_str + '.json').get("activities-active-zone-minutes") 585 | if HR_zone_minutes_list != None: 586 | for data in HR_zone_minutes_list: 587 | log_time = datetime.fromisoformat(data["dateTime"] + "T" + "00:00:00") 588 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 589 | if data.get("value"): 590 | collected_records.append({ 591 | "measurement": "HR zones", 592 | "time": utc_time, 593 | "tags": { 594 | "Device": DEVICENAME 595 | }, 596 | "fields": data["value"] 597 | }) 598 | logging.info("Recorded HR zone minutes for date " + start_date_str + " to " + end_date_str) 599 | else: 600 | logging.error("Recording failed : HR zone minutes for date " + start_date_str + " to " + end_date_str) 601 | 602 | # records SPO2 single days for the whole given period - 1 query 603 | def get_daily_data_limit_none(start_date_str, end_date_str): 604 | try: 605 | data_list = request_data_from_fitbit('https://api.fitbit.com/1/user/-/spo2/date/' + start_date_str + '/' + end_date_str + '.json') 606 | except requests.exceptions.HTTPError as e: 607 | 
logging.error(f"{e}") 608 | data_list = None 609 | if data_list != None: 610 | for data in data_list: 611 | log_time = datetime.fromisoformat(data["dateTime"] + "T" + "00:00:00") 612 | utc_time = LOCAL_TIMEZONE.localize(log_time).astimezone(pytz.utc).isoformat() 613 | collected_records.append({ 614 | "measurement": "SPO2", 615 | "time": utc_time, 616 | "tags": { 617 | "Device": DEVICENAME 618 | }, 619 | "fields": { 620 | "avg": float(data["value"]["avg"]) if data["value"]["avg"] else None, 621 | "max": float(data["value"]["max"]) if data["value"]["max"] else None, 622 | "min": float(data["value"]["min"]) if data["value"]["min"] else None 623 | } 624 | }) 625 | logging.info("Recorded Avg SPO2 for date " + start_date_str + " to " + end_date_str) 626 | else: 627 | logging.error("Recording failed : Avg SPO2 for date " + start_date_str + " to " + end_date_str) 628 | 629 | # fetches TCX GPS data 630 | def get_tcx_data(tcx_url, ActivityID): 631 | tcx_headers = { 632 | "Authorization": f"Bearer {ACCESS_TOKEN}", 633 | "Accept": "application/x-www-form-urlencoded" 634 | } 635 | tcx_params = { 636 | 'includePartialTCX': 'false' 637 | } 638 | response = request_data_from_fitbit(tcx_url, headers=tcx_headers, params=tcx_params) 639 | if response.status_code != 200: 640 | logging.error(f"Error fetching TCX file: {response.status_code}, {response.text}") 641 | else: 642 | root = ET.fromstring(response.text) 643 | namespace = {"ns": "http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2"} 644 | trackpoints = root.findall(".//ns:Trackpoint", namespace) 645 | prev_time = None 646 | prev_distance = None 647 | 648 | for i, trkpt in enumerate(trackpoints): 649 | time_elem = trkpt.find("ns:Time", namespace) 650 | lat = trkpt.find(".//ns:LatitudeDegrees", namespace) 651 | lon = trkpt.find(".//ns:LongitudeDegrees", namespace) 652 | altitude = trkpt.find("ns:AltitudeMeters", namespace) 653 | distance = trkpt.find("ns:DistanceMeters", namespace) 654 | heart_rate = trkpt.find(".//ns:HeartRateBpm/ns:Value", namespace) 655 | 656 | if time_elem is not None and lat is not None: 657 | current_time = datetime.fromisoformat(time_elem.text.strip("Z")) 658 | fields = { 659 | "lat": float(lat.text), 660 | "lon": float(lon.text) 661 | } 662 | if altitude is not None: 663 | fields["altitude"] = float(altitude.text) 664 | if distance is not None: 665 | fields["distance"] = float(distance.text) 666 | current_distance = float(distance.text) 667 | else: 668 | current_distance = None 669 | if heart_rate is not None: 670 | fields["heart_rate"] = int(heart_rate.text) 671 | if i > 0 and prev_time is not None and prev_distance is not None and current_distance is not None: 672 | time_diff = (current_time - prev_time).total_seconds() 673 | distance_diff = current_distance - prev_distance 674 | if time_diff > 0: 675 | speed_mps = distance_diff / time_diff 676 | speed_kph = speed_mps * 3.6 677 | fields["speed_kph"] = speed_kph 678 | prev_time = current_time 679 | prev_distance = current_distance 680 | 681 | collected_records.append({ 682 | "measurement": "GPS", 683 | "tags": { 684 | "ActivityID": ActivityID 685 | }, 686 | "time": datetime.fromisoformat(time_elem.text.strip("Z")).astimezone(pytz.utc).isoformat(), 687 | "fields": fields 688 | }) 689 | 690 | # Fetches latest activities from record ( upto last 50 ) 691 | def fetch_latest_activities(end_date_str): 692 | next_end_date_str = (datetime.strptime(end_date_str, "%Y-%m-%d") + timedelta(days=1)).strftime("%Y-%m-%d") 693 | recent_activities_data = 
request_data_from_fitbit('https://api.fitbit.com/1/user/-/activities/list.json', params={'beforeDate': next_end_date_str, 'sort':'desc', 'limit':50, 'offset':0}) 694 | TCX_record_count, TCX_record_limit = 0,10 695 | if recent_activities_data != None: 696 | for activity in recent_activities_data['activities']: 697 | fields = {} 698 | if 'activeDuration' in activity: 699 | fields['ActiveDuration'] = int(activity['activeDuration']) 700 | if 'averageHeartRate' in activity: 701 | fields['AverageHeartRate'] = int(activity['averageHeartRate']) 702 | if 'calories' in activity: 703 | fields['calories'] = int(activity['calories']) 704 | if 'duration' in activity: 705 | fields['duration'] = int(activity['duration']) 706 | if 'distance' in activity: 707 | fields['distance'] = float(activity['distance']) 708 | if 'steps' in activity: 709 | fields['steps'] = int(activity['steps']) 710 | starttime = datetime.fromisoformat(activity['startTime'].strip("Z")) 711 | utc_time = starttime.astimezone(pytz.utc).isoformat() 712 | try: 713 | extracted_activity_name = activity['activityName'] 714 | except KeyError: 715 | extracted_activity_name = "Unknown-Activity" 716 | ActivityID = utc_time + "-" + extracted_activity_name 717 | collected_records.append({ 718 | "measurement": "Activity Records", 719 | "time": utc_time, 720 | "tags": { 721 | "ActivityName": extracted_activity_name 722 | }, 723 | "fields": fields 724 | }) 725 | if activity.get("hasGps", False): 726 | tcx_link = activity.get("tcxLink", False) 727 | if tcx_link and TCX_record_count < TCX_record_limit: 728 | TCX_record_count += 1 729 | try: 730 | get_tcx_data(tcx_link, ActivityID) 731 | logging.info("Recorded TCX GPS data for " + tcx_link) 732 | except Exception as tcx_exception: 733 | logging.error("Failed to get GPS Data for " + tcx_link + " : " + str(tcx_exception)) 734 | logging.info("Fetched 50 recent activities before date " + end_date_str) 735 | else: 736 | logging.error("Fetching 50 recent activities failed : before date " + end_date_str) 737 | 738 | 739 | # %% [markdown] 740 | # ## Call the functions once as a startup update OR switch to bulk update mode 741 | 742 | # %% 743 | if AUTO_DATE_RANGE: 744 | date_list = [(start_date + timedelta(days=i)).strftime("%Y-%m-%d") for i in range((end_date - start_date).days + 1)] 745 | if len(date_list) > 3: 746 | logging.warning("Auto schedule update is not meant for more than 3 days at a time, please consider lowering the auto_update_date_range variable to avoid hitting the rate limit!") 747 | for date_str in date_list: 748 | get_intraday_data_limit_1d(date_str, [('heart','HeartRate_Intraday','1sec'),('steps','Steps_Intraday','1min')]) # 2 queries x number of dates ( default 2) 749 | get_daily_data_limit_30d(start_date_str, end_date_str) # 3 queries 750 | get_daily_data_limit_100d(start_date_str, end_date_str) # 1 query 751 | get_daily_data_limit_365d(start_date_str, end_date_str) # 8 queries 752 | get_daily_data_limit_none(start_date_str, end_date_str) # 1 query 753 | get_battery_level() # 1 query 754 | fetch_latest_activities(end_date_str) # 1 query 755 | write_points_to_influxdb(collected_records) 756 | collected_records = [] 757 | else: 758 | # Do Bulk update---------------------------------------------------------------------------------------------------------------------------- 759 | 760 | schedule.every(1).hours.do(lambda : Get_New_Access_Token(client_id,client_secret)) # Auto-refresh tokens every 1 hour 761 | 762 | date_list = [(start_date + timedelta(days=i)).strftime("%Y-%m-%d") 
for i in range((end_date - start_date).days + 1)] 763 | 764 | def yield_dates_with_gap(date_list, gap): 765 | start_index = -1*gap 766 | while start_index < len(date_list)-1: 767 | start_index = start_index + gap 768 | end_index = start_index+gap 769 | if end_index > len(date_list) - 1: 770 | end_index = len(date_list) - 1 771 | if start_index > len(date_list) - 1: 772 | break 773 | yield (date_list[start_index],date_list[end_index]) 774 | 775 | def do_bulk_update(funcname, start_date, end_date): 776 | global collected_records 777 | funcname(start_date, end_date) 778 | schedule.run_pending() 779 | write_points_to_influxdb(collected_records) 780 | collected_records = [] 781 | 782 | fetch_latest_activities(date_list[-1]) 783 | write_points_to_influxdb(collected_records) 784 | do_bulk_update(get_daily_data_limit_none, date_list[0], date_list[-1]) 785 | for date_range in yield_dates_with_gap(date_list, 360): 786 | do_bulk_update(get_daily_data_limit_365d, date_range[0], date_range[1]) 787 | for date_range in yield_dates_with_gap(date_list, 98): 788 | do_bulk_update(get_daily_data_limit_100d, date_range[0], date_range[1]) 789 | for date_range in yield_dates_with_gap(date_list, 28): 790 | do_bulk_update(get_daily_data_limit_30d, date_range[0], date_range[1]) 791 | for single_day in date_list: 792 | do_bulk_update(get_intraday_data_limit_1d, single_day, [('heart','HeartRate_Intraday','1sec'),('steps','Steps_Intraday','1min')]) 793 | 794 | logging.info("Success : Bulk update complete for " + start_date_str + " to " + end_date_str) 795 | print("Bulk update complete!") 796 | 797 | # %% [markdown] 798 | # ## Schedule functions at specific intervals (Ongoing continuous update) 799 | 800 | # %% 801 | # Ongoing continuous update of data 802 | if SCHEDULE_AUTO_UPDATE: 803 | 804 | schedule.every(1).hours.do(lambda : Get_New_Access_Token(client_id,client_secret)) # Auto-refresh tokens every 1 hour 805 | schedule.every(3).minutes.do( lambda : get_intraday_data_limit_1d(end_date_str, [('heart','HeartRate_Intraday','1sec'),('steps','Steps_Intraday','1min')] )) # Auto-refresh detailed HR and steps 806 | schedule.every(1).hours.do( lambda : get_intraday_data_limit_1d((datetime.strptime(end_date_str, "%Y-%m-%d") - timedelta(days=1)).strftime("%Y-%m-%d"), [('heart','HeartRate_Intraday','1sec'),('steps','Steps_Intraday','1min')] )) # Refilling any missing data on previous day end of night due to fitbit sync delay ( see issue #10 ) 807 | schedule.every(20).minutes.do(get_battery_level) # Auto-refresh battery level 808 | schedule.every(3).hours.do(lambda : get_daily_data_limit_30d(start_date_str, end_date_str)) 809 | schedule.every(4).hours.do(lambda : get_daily_data_limit_100d(start_date_str, end_date_str)) 810 | schedule.every(6).hours.do( lambda : get_daily_data_limit_365d(start_date_str, end_date_str)) 811 | schedule.every(6).hours.do(lambda : get_daily_data_limit_none(start_date_str, end_date_str)) 812 | schedule.every(1).hours.do( lambda : fetch_latest_activities(end_date_str)) 813 | 814 | while True: 815 | schedule.run_pending() 816 | if len(collected_records) != 0: 817 | write_points_to_influxdb(collected_records) 818 | collected_records = [] 819 | time.sleep(30) 820 | update_working_dates() 821 | 822 | 823 | 824 | 825 | --------------------------------------------------------------------------------