├── .github
│   └── workflows
│       └── test.yml
├── .gitignore
├── LICENSE
├── MANIFEST
├── README.md
├── cronitor
│   ├── __init__.py
│   ├── __main__.py
│   ├── celery.py
│   ├── monitor.py
│   └── tests
│       ├── __init__.py
│       ├── cronitor.yaml
│       ├── test_00.py
│       ├── test_config.py
│       ├── test_monitor.py
│       └── test_pings.py
├── requirements.txt
├── setup.cfg
└── setup.py

/.github/workflows/test.yml:
--------------------------------------------------------------------------------
1 | ---
2 | name: Test
3 | 
4 | on:
5 |   push:
6 |     branches:
7 |       - master
8 |   pull_request:
9 |     branches:
10 |       - master
11 | 
12 | jobs:
13 |   build:
14 |     runs-on: ubuntu-latest
15 |     strategy:
16 |       fail-fast: false
17 |       matrix:
18 |         python-version: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
19 | 
20 |     steps:
21 |       - uses: actions/checkout@v4
22 |       - name: Set up Python ${{ matrix.python-version }}
23 |         uses: actions/setup-python@v5
24 |         with:
25 |           python-version: ${{ matrix.python-version }}
26 |       - name: Install Dependencies
27 |         run: pip install --upgrade pip && pip install -r requirements.txt
28 |       - name: Run Tests
29 |         run: |
30 |           pip install pytest
31 |           pytest
32 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Created by .ignore support plugin (hsz.mobi)
2 | ### Python template
3 | # Byte-compiled / optimized / DLL files
4 | __pycache__/
5 | *.py[cod]
6 | *$py.class
7 | 
8 | # C extensions
9 | *.so
10 | 
11 | # Distribution / packaging
12 | .Python
13 | env/
14 | build/
15 | develop-eggs/
16 | dist/
17 | downloads/
18 | eggs/
19 | .eggs/
20 | lib/
21 | lib64/
22 | parts/
23 | sdist/
24 | var/
25 | wheels/
26 | *.egg-info/
27 | .installed.cfg
28 | *.egg
29 | 
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *,cover 49 | .hypothesis/ 50 | 51 | # Translations 52 | *.mo 53 | *.pot 54 | 55 | # Django stuff: 56 | *.log 57 | local_settings.py 58 | 59 | # Flask stuff: 60 | instance/ 61 | .webassets-cache 62 | 63 | # Scrapy stuff: 64 | .scrapy 65 | 66 | # Sphinx documentation 67 | docs/_build/ 68 | 69 | # PyBuilder 70 | target/ 71 | 72 | # Jupyter Notebook 73 | .ipynb_checkpoints 74 | 75 | # pyenv 76 | .python-version 77 | 78 | # celery beat schedule file 79 | celerybeat-schedule 80 | 81 | # SageMath parsed files 82 | *.sage.py 83 | 84 | # dotenv 85 | .env 86 | 87 | # virtualenv 88 | .venv 89 | venv/ 90 | ENV/ 91 | 92 | # Spyder project settings 93 | .spyderproject 94 | 95 | # Rope project settings 96 | .ropeproject 97 | .idea 98 | 99 | #VSCode 100 | .vscode -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy 4 | of this software and associated documentation files (the "Software"), to deal 5 | in the Software without restriction, including without limitation the rights 6 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 7 | copies of the Software, and to permit persons to whom the Software is 8 | furnished to do so, subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in all 11 | copies or substantial portions of the Software. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 18 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 19 | SOFTWARE. 20 | -------------------------------------------------------------------------------- /MANIFEST: -------------------------------------------------------------------------------- 1 | # file GENERATED by distutils, do NOT edit 2 | setup.cfg 3 | setup.py 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Cronitor Python Library 2 | ![Test](https://github.com/cronitorio/cronitor-python/workflows/Test/badge.svg) 3 | 4 | [Cronitor](https://cronitor.io/) provides end-to-end monitoring for background jobs, websites, APIs, and anything else that can send or receive an HTTP request. This library provides convenient access to the Cronitor API from applications written in Python. See our [API docs](https://cronitor.io/docs/api) for detailed references on configuring monitors and sending telemetry pings. 
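Before the detailed guides below, here is a minimal, hedged quick-start sketch of the telemetry API covered later in this README. The monitor key `heartbeat-monitor` and the API key are placeholders; substitute your own values.

```python
import cronitor

# API keys are available at https://cronitor.io/settings/api
cronitor.api_key = 'apiKey123'  # placeholder key

# 'heartbeat-monitor' is a placeholder monitor key
monitor = cronitor.Monitor('heartbeat-monitor')
monitor.ping()                  # send a heartbeat event
monitor.ping(state='complete')  # or report a job lifecycle state
```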
5 | 
6 | In this guide:
7 | 
8 | - [Installation](#installation)
9 | - [Monitoring Background Jobs](#monitoring-background-jobs)
10 | - [Sending Telemetry Events](#sending-telemetry-events)
11 | - [Configuring Monitors](#configuring-monitors)
12 | - [Package Configuration & Env Vars](#package-configuration)
13 | - [Command Line Usage](#command-line-usage)
14 | 
15 | ## Installation
16 | 
17 | ```
18 | pip install cronitor
19 | ```
20 | 
21 | ## Monitoring Background Jobs
22 | 
23 | #### Celery Auto-Discover
24 | `cronitor-python` can automatically discover all of your declared Celery tasks, including your Celerybeat scheduled tasks,
25 | creating monitors for them and sending pings when tasks run, succeed, or fail. Your API keys can be found [here](https://cronitor.io/settings/api).
26 | 
27 | Requires Celery 4.0 or higher. Celery auto-discover uses the Celery [message protocol version 2](https://docs.celeryproject.org/en/stable/internals/protocol.html#version-2).
28 | 
29 | **Some important notes on support**
30 | 
31 | * Tasks on [solar schedules](https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html#solar-schedules) are not supported and will be ignored.
32 | * [`django-celery-beat`](https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html#using-custom-scheduler-classes) is not yet supported, but support is in the works.
33 | * If you use the default `PersistentScheduler`, the celerybeat integration overrides the celerybeat local task run database (referenced [here](https://docs.celeryproject.org/en/stable/userguide/periodic-tasks.html#starting-the-scheduler) in the docs), named `celerybeat-schedule` by default. If you currently specify a custom location for this database, this integration will override it. **Very** few people need a custom location for this database; if you fall into this group and want to use `cronitor-python`'s celerybeat integration, please reach out to Cronitor support.
34 | 
35 | 
36 | ```python
37 | import cronitor.celery
38 | from celery import Celery
39 | 
40 | app = Celery()
41 | app.conf.beat_schedule = {
42 |     'run-me-every-minute': {
43 |         'task': 'tasks.every_minute_celery_task',
44 |         'schedule': 60
45 |     }
46 | }
47 | 
48 | # Discover all of your celery tasks and automatically add monitoring.
49 | cronitor.celery.initialize(app, api_key="apiKey123")
50 | 
51 | @app.task
52 | def every_minute_celery_task():
53 |     print("running a background job with celery...")
54 | 
55 | @app.task
56 | def non_scheduled_celery_task():
57 |     print("Even though I'm not on a schedule, I'll still be monitored!")
58 | ```
59 | 
60 | If you only want to monitor Celerybeat periodic tasks, and not tasks triggered any other way, you can set `celerybeat_only=True` when initializing:
61 | ```python
62 | app = Celery()
63 | cronitor.celery.initialize(app, api_key="apiKey123", celerybeat_only=True)
64 | ```
65 | 
66 | #### Manual Integration
67 | 
68 | The `@cronitor.job` decorator is a lightweight way to monitor any background task, regardless of how it is executed. It sends telemetry events before calling your function and after it exits. If your function raises an exception, a `fail` event is sent (and the exception is re-raised).
69 | 
70 | ```python
71 | import cronitor
72 | 
73 | # your api keys can be found here - https://cronitor.io/settings/api
74 | cronitor.api_key = 'apiKey123'
75 | 
76 | # Apply the cronitor decorator to monitor any function.
77 | # If no monitor matches the provided key, one will be created automatically.
78 | @cronitor.job('send-invoices')
79 | def send_invoices_task(*args, **kwargs):
80 |     ...
81 | ```
82 | 
83 | #### You can provide monitor attributes that will be synced when your app starts
84 | 
85 | To sync attributes, provide an API key with `monitor:write` privileges.
86 | 
87 | ```python
88 | import cronitor
89 | 
90 | # Copy your SDK Integration key from https://cronitor.io/app/settings/api
91 | cronitor.api_key = 'apiKey123'
92 | 
93 | @cronitor.job('send-invoices', attributes={'schedule': '0 8 * * *', 'notify': ['devops-alerts']})
94 | def send_invoices_task(*args, **kwargs):
95 |     ...
96 | ```
97 | 
98 | ## Sending Telemetry Events
99 | 
100 | If you want to send heartbeat events, or want finer control over when/how [telemetry events](https://cronitor.io/docs/telemetry-api) are sent for your jobs, you can create a monitor instance and call its `.ping` method.
101 | 
102 | ```python
103 | import cronitor
104 | 
105 | # your api keys can be found here - https://cronitor.io/settings/api
106 | cronitor.api_key = 'apiKey123'
107 | 
108 | # optionally, set an environment
109 | cronitor.environment = 'staging'
110 | 
111 | monitor = cronitor.Monitor('heartbeat-monitor')
112 | monitor.ping() # send a heartbeat event
113 | 
114 | # optional params can be passed as keyword arguments.
115 | # for a complete list see https://cronitor.io/docs/telemetry-api#parameters
116 | monitor.ping(
117 |     state='run|complete|fail|ok', # run|complete|fail measure the lifecycle of a job; ok is used for manual reset only.
118 |     message='', # message that will be displayed in alerts as well as the monitor activity panel on your dashboard.
119 |     metrics={
120 |         'duration': 100, # how long the job ran (complete|fail only). cronitor will calculate this when not provided
121 |         'count': 4500, # if your job is processing a number of items you can report a count
122 |         'error_count': 10 # the number of errors that occurred while this job was running
123 |     }
124 | )
125 | ```
126 | 
127 | ## Configuring Monitors
128 | 
129 | You can configure all of your monitors using a single YAML file. This file can be version controlled and synced to Cronitor as part of
130 | a deployment or build process. For details on all of the attributes that can be set, see the [Monitor API](https://cronitor.io/docs/monitor-api) documentation.
131 | 
132 | 
133 | ```python
134 | import cronitor
135 | 
136 | # your api keys can be found here - https://cronitor.io/settings/api
137 | cronitor.api_key = 'apiKey123'
138 | 
139 | cronitor.read_config('./cronitor.yaml') # parse the yaml file of monitors
140 | 
141 | cronitor.validate_config() # send monitors to Cronitor for configuration validation
142 | 
143 | cronitor.apply_config() # sync the monitors from the config file to Cronitor
144 | 
145 | cronitor.generate_config() # generate a new config file from the Cronitor API
146 | ```
147 | 
148 | The timeout for `validate_config`, `apply_config`, and `generate_config` is 10 seconds by default. It can be overridden by setting the environment variable `CRONITOR_TIMEOUT` or by assigning a value to `cronitor.timeout`.
149 | 
150 | ```python
151 | import cronitor
152 | 
153 | cronitor.timeout = 30
154 | cronitor.apply_config()
155 | ```
156 | 
157 | The `cronitor.yaml` file includes three top-level keys: `jobs`, `checks`, and `heartbeats`. You can configure monitors under each key by defining [monitors](https://cronitor.io/docs/monitor-api#attributes).
158 | 
159 | ```yaml
160 | jobs:
161 |   nightly-database-backup:
162 |     schedule: 0 0 * * *
163 |     notify:
164 |       - devops-alert-pagerduty
165 |     assertions:
166 |       - metric.duration < 5 minutes
167 | 
168 |   send-welcome-email:
169 |     schedule: every 10 minutes
170 |     assertions:
171 |       - metric.count > 0
172 |       - metric.duration < 30 seconds
173 | 
174 | checks:
175 |   cronitor-homepage:
176 |     request:
177 |       url: https://cronitor.io
178 |     regions:
179 |       - us-east-1
180 |       - eu-central-1
181 |       - ap-northeast-1
182 |     assertions:
183 |       - response.code = 200
184 |       - response.time < 2s
185 | 
186 |   cronitor-ping-api:
187 |     request:
188 |       url: https://cronitor.link/ping
189 |     assertions:
190 |       - response.body contains ok
191 |       - response.time < .25s
192 | 
193 | heartbeats:
194 |   production-deploy:
195 |     notify:
196 |       alerts: ['deploys-slack']
197 |       events: true # send alert when the event occurs
198 | 
199 | ```
200 | 
201 | You can also create and update monitors by calling `Monitor.put`. For details on all of the attributes that can be set, see the Monitor API [documentation](https://cronitor.io/docs/monitor-api#attributes).
202 | 
203 | ```python
204 | import cronitor
205 | 
206 | monitors = cronitor.Monitor.put([
207 |     {
208 |         'type': 'job',
209 |         'key': 'send-customer-invoices',
210 |         'schedule': '0 0 * * *',
211 |         'assertions': [
212 |             'metric.duration < 5 min'
213 |         ],
214 |         'notify': ['devops-alerts-slack']
215 |     },
216 |     {
217 |         'type': 'check',
218 |         'key': 'Cronitor Homepage',
219 |         'schedule': 'every 45 seconds',
220 |         'request': {
221 |             'url': 'https://cronitor.io'
222 |         },
223 |         'assertions': [
224 |             'response.code = 200',
225 |             'response.time < 600ms',
226 |         ]
227 |     }
228 | ])
229 | ```
230 | 
231 | ### Pausing, Resetting, and Deleting
232 | 
233 | ```python
234 | import cronitor
235 | 
236 | monitor = cronitor.Monitor('heartbeat-monitor')
237 | 
238 | monitor.pause(24) # pause alerting for 24 hours
239 | monitor.unpause() # alias for .pause(0)
240 | monitor.ok() # manually reset to a passing state; alias for monitor.ping(state='ok')
241 | monitor.delete() # destroy the monitor
242 | ```
243 | 
244 | ## Package Configuration
245 | 
246 | The package needs to be configured with your account's API key, which is available on the [account settings](https://cronitor.io/settings) page. You can also optionally specify an `api_version` and an `environment`. If not provided, your account defaults are used. These can also be supplied using the environment variables `CRONITOR_API_KEY`, `CRONITOR_API_VERSION`, and `CRONITOR_ENVIRONMENT`.
247 | 
248 | ```python
249 | import cronitor
250 | 
251 | # your api keys can be found here - https://cronitor.io/settings
252 | cronitor.api_key = 'apiKey123'
253 | cronitor.api_version = '2020-10-01'
254 | cronitor.environment = 'cluster_1_prod'
255 | ```
256 | 
257 | ## Command Line Usage
258 | 
259 | ```bash
260 | >> python -m cronitor -h
261 | 
262 | usage: cronitor [-h] [--apiKey APIKEY] [--id ID] [--msg MSG]
263 |                 (--run | --complete | --fail | --ok | --pause PAUSE)
264 | 
265 | Send status messages to Cronitor ping API.
266 | 
267 | optional arguments:
268 |   -h, --help            show this help message and exit
269 |   --apiKey APIKEY, -a APIKEY
270 |                         Auth Key from Account page
271 |   --id ID, -i ID        Monitor Id to take action upon
272 |   --msg MSG, -m MSG     Optional message to send with ping/fail
273 |   --run, -r             Send a run event
274 |   --complete, -C        Send a complete event
275 |   --fail, -f            Send a fail event
276 |   --ok, -o              Send an ok event
277 |   --pause PAUSE, -p PAUSE
278 |                         Pause a monitor
279 | ```
280 | 
281 | 
282 | ## Contributing
283 | 
284 | Pull requests and features are happily considered! By participating in this project, you agree to abide by the [Code of Conduct](http://contributor-covenant.org/version/2/0).
285 | 
286 | ### To contribute
287 | 
288 | Fork, then clone the repo:
289 | 
290 |     git clone git@github.com:your-username/cronitor-python.git
291 | 
292 | Set up your machine:
293 | 
294 |     pip install -r requirements.txt
295 | 
296 | Make sure the tests pass:
297 | 
298 |     pytest
299 | 
300 | Make your change. Add tests for your change. Make the tests pass:
301 | 
302 |     pytest
303 | 
304 | 
305 | Push to your fork and [submit a pull request](https://github.com/cronitorio/cronitor-python/compare/).
--------------------------------------------------------------------------------
/cronitor/__init__.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | from datetime import datetime
4 | from functools import wraps
5 | import sys
6 | import yaml
7 | from yaml.loader import SafeLoader
8 | import time
9 | import atexit
10 | import threading
11 | 
12 | from .monitor import Monitor, YAML
13 | 
14 | logger = logging.getLogger(__name__)
15 | logger.setLevel(logging.INFO)
16 | 
17 | # configuration variables
18 | api_key = os.getenv('CRONITOR_API_KEY', None)
19 | api_version = os.getenv('CRONITOR_API_VERSION', None)
20 | environment = os.getenv('CRONITOR_ENVIRONMENT', None)
21 | config = os.getenv('CRONITOR_CONFIG', None)
22 | timeout = os.getenv('CRONITOR_TIMEOUT', None)
23 | if timeout is not None:
24 |     timeout = int(timeout)
25 | 
26 | celerybeat_only = False
27 | 
28 | # monitor attributes can be synced at process startup
29 | monitor_attributes = []
30 | 
31 | # this is a pointer to the module object instance itself.
32 | this = sys.modules[__name__] 33 | if this.config: 34 | this.read_config() # set config vars contained within 35 | 36 | class MonitorNotFound(Exception): 37 | pass 38 | 39 | class ConfigValidationError(Exception): 40 | pass 41 | 42 | class APIValidationError(Exception): 43 | pass 44 | 45 | class AuthenticationError(Exception): 46 | pass 47 | 48 | class APIError(Exception): 49 | pass 50 | 51 | class State(object): 52 | OK = 'ok' 53 | RUN = 'run' 54 | COMPLETE = 'complete' 55 | FAIL = 'fail' 56 | 57 | # include_output is deprecated in favor of log_output and can be removed in 5.0 release 58 | def job(key, env=None, log_output=True, include_output=True, attributes=None): 59 | 60 | if type(attributes) is dict: 61 | attributes['key'] = key 62 | monitor_attributes.append(attributes) 63 | 64 | def wrapper(func): 65 | @wraps(func) 66 | def wrapped(*args, **kwargs): 67 | start = datetime.now().timestamp() 68 | 69 | monitor = Monitor(key, env=env) 70 | # use start as the series param to match run/fail/complete correctly 71 | monitor.ping(state=State.RUN, series=start) 72 | try: 73 | out = func(*args, **kwargs) 74 | except Exception as e: 75 | duration = datetime.now().timestamp() - start 76 | monitor.ping(state=State.FAIL, message=str(e), metrics={'duration': duration}, series=start) 77 | raise e 78 | 79 | duration = datetime.now().timestamp() - start 80 | message = str(out) if all([log_output, include_output]) else None 81 | monitor.ping(state=State.COMPLETE, message=message, metrics={'duration': duration}, series=start) 82 | return out 83 | 84 | return wrapped 85 | return wrapper 86 | 87 | def generate_config(): 88 | config = this.config or './cronitor.yaml' 89 | with open(config, 'w') as conf: 90 | conf.writelines(Monitor.as_yaml()) 91 | 92 | def validate_config(): 93 | return apply_config(rollback=True) 94 | 95 | def apply_config(rollback=False): 96 | if not this.config: 97 | raise ConfigValidationError("Must set a path to config file e.g. cronitor.config = './cronitor.yaml'") 98 | 99 | config = read_config(output=True) 100 | try: 101 | monitors = Monitor.put(monitors=config, rollback=rollback, format=YAML) 102 | job_count = len(monitors.get('jobs', [])) 103 | check_count = len(monitors.get('checks', [])) 104 | heartbeat_count = len(monitors.get('heartbeats', [])) 105 | total_count = sum([job_count, check_count, heartbeat_count]) 106 | logger.info('{} monitor{} {}'.format(total_count, 's' if total_count != 1 else '', 'validated.' if rollback else 'synced.',)) 107 | return True 108 | except (yaml.YAMLError, ConfigValidationError, APIValidationError, APIError, AuthenticationError) as e: 109 | logger.error(e) 110 | return False 111 | 112 | def read_config(path=None, output=False): 113 | this.config = path or this.config 114 | if not this.config: 115 | raise ConfigValidationError("Must include a path to config file e.g. 
cronitor.read_config('./cronitor.yaml')")
116 | 
117 |     with open(this.config, 'r') as conf:
118 |         data = yaml.load(conf, Loader=SafeLoader)
119 |         if output:
120 |             return data
121 | 
122 | def sync_monitors(wait=1):
123 |     global monitor_attributes
124 |     if wait > 0:
125 |         time.sleep(wait)
126 | 
127 |     if len(monitor_attributes):
128 |         Monitor.put(monitor_attributes)
129 |         monitor_attributes = []
130 | 
131 | try:
132 |     sync
133 | except NameError:
134 |     sync = threading.Thread(target=sync_monitors)
135 |     sync.start()
136 |     atexit.register(sync.join)
137 | 
--------------------------------------------------------------------------------
/cronitor/__main__.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import os
3 | import sys
4 | 
5 | from .monitor import Monitor
6 | 
7 | 
8 | def main():
9 |     parser = argparse.ArgumentParser(prog="cronitor",
10 |                                      description='Send status messages to Cronitor ping API.')  # noqa
11 |     parser.add_argument('--apiKey', '-a', type=str,
12 |                         default=os.getenv('CRONITOR_API_KEY'),
13 |                         help='Auth Key from Account page')
14 |     parser.add_argument('--id', '-i', type=str,
15 |                         default=os.getenv('CRONITOR_ID', os.getenv('CRONITOR_CODE')),
16 |                         help='Monitor Id to take action upon')
17 |     # alias for id. deprecated.
18 |     parser.add_argument('--code', '-c', type=str,
19 |                         default=os.getenv('CRONITOR_CODE'),
20 |                         help='DEPRECATED: Code for Monitor to take action upon. Alias of Id.')
21 |     parser.add_argument('--msg', '-m', type=str, default='',
22 |                         help='Optional message to send with ping/fail')
23 | 
24 |     group = parser.add_mutually_exclusive_group(required=True)
25 | 
26 |     group.add_argument('--run', '-r', action='store_true',
27 |                        help='Send a run event')
28 |     group.add_argument('--complete', '-C', action='store_true',
29 |                        help='Send a complete event')
30 |     group.add_argument('--fail', '-f', action='store_true',
31 |                        help='Send a fail event')
32 |     group.add_argument('--ok', '-o', action='store_true',
33 |                        help='Send an ok event')
34 |     group.add_argument('--pause', '-p', type=str, default=24,
35 |                        help='Pause a monitor')
36 | 
37 |     args = parser.parse_args()
38 | 
39 |     if args.id is None and args.code is None:
40 |         print('A monitor Id must be supplied using the --id flag or by setting the CRONITOR_ID environment variable.')
41 |         parser.print_help()
42 |         sys.exit(1)
43 | 
44 |     monitor = Monitor(args.id, api_key=args.apiKey)
45 | 
46 |     # Dispatch on the mutually exclusive action flags defined above.
47 |     if args.run:
48 |         ret = monitor.ping(state='run', message=args.msg)
49 |     elif args.complete:
50 |         ret = monitor.ping(state='complete', message=args.msg)
51 |     elif args.fail:
52 |         ret = monitor.ping(state='fail', message=args.msg)
53 |     elif args.ok:
54 |         ret = monitor.ping(state='ok', message=args.msg)
55 |     elif args.pause:
56 |         ret = monitor.pause(args.pause)
57 |     else:
58 |         ret = monitor.ping(message=args.msg)
59 |     return ret
60 | 
61 | 
62 | if __name__ == '__main__':
63 |     main()
64 | 
--------------------------------------------------------------------------------
/cronitor/celery.py:
--------------------------------------------------------------------------------
1 | import typing
2 | import datetime
3 | import humanize
4 | import logging
5 | from cronitor import State, Monitor
6 | import cronitor
7 | import functools
8 | import shutil
9 | import tempfile
10 | import sys
11 | 
12 | logger = logging.getLogger(__name__)
13 | try:
14 |     import celery
15 |     import celery.beat
16 |     from celery.schedules import crontab, schedule, solar
17 |     from
celery.signals import beat_init, task_prerun, task_failure, task_success, task_retry 18 | 19 | if typing.TYPE_CHECKING: 20 | from typing import Dict, List, Union, Optional, Tuple 21 | import billiard.einfo 22 | from celery.worker.request import Request 23 | except ImportError: 24 | logger.error("Cannot use the cronitor.celery module without celery installed") 25 | sys.exit(1) 26 | 27 | # For the signals to properly register, they need to be top-level objects. 28 | # Since they are defined dynamically in initialize(), we have to declare them up top, 29 | # make them global, and override them. 30 | celerybeat_startup = None 31 | ping_monitor_before_task = None 32 | ping_monitor_on_success = None 33 | ping_monitor_on_failure = None 34 | ping_monitor_on_retry = None 35 | 36 | 37 | def get_headers_from_task(task): # type: (celery.Task) -> Dict 38 | headers = task.request.headers or {} 39 | headers.update(task.request.get('properties', {}).get('application_headers', {})) 40 | return headers 41 | 42 | 43 | def initialize(app, celerybeat_only=False, api_key=None): # type: (celery.Celery, bool, Optional[str]) -> None 44 | if api_key: 45 | cronitor.api_key = api_key 46 | 47 | if celerybeat_only: 48 | cronitor.celerybeat_only = True 49 | 50 | global celerybeat_startup 51 | global ping_monitor_before_task 52 | global ping_monitor_on_success 53 | global ping_monitor_on_failure 54 | global ping_monitor_on_retry 55 | 56 | def celerybeat_startup(sender, **kwargs): # type: (celery.beat.Service, Dict) -> None 57 | # To avoid recursion, since restarting celerybeat will result in this 58 | # signal being called again, we disconnect the signal. 59 | beat_init.disconnect(celerybeat_startup, dispatch_uid=1) 60 | 61 | # Must use the cached_property from scheduler so as not to re-open the shelve database 62 | scheduler = sender.scheduler # type: celery.beat.Scheduler 63 | # Also need to use the property here, including for django-celery-beat 64 | schedules = scheduler.schedule 65 | monitors = [] # type: List[Dict[str, str]] 66 | 67 | add_periodic_task_deferred = [] 68 | for name in schedules: 69 | if name.startswith('celery.'): 70 | continue 71 | entry = schedules[name] # type: celery.beat.ScheduleEntry 72 | 73 | # ignore all celerybeat scheduled events with the Cronitor exclusion header 74 | headers = entry.options.pop('headers', {}) 75 | if headers.get('x-cronitor-exclude') in (True, 'true', 'True'): 76 | logger.info("celerybeat entry '{}' ignored per exclusion header".format(name)) 77 | continue 78 | 79 | item = entry.schedule # type: celery.schedules.schedule 80 | if isinstance(item, crontab): 81 | cronitor_schedule = ('{0._orig_minute} {0._orig_hour} {0._orig_day_of_week} {0._orig_day_of_month} ' 82 | '{0._orig_month_of_year}').format(item) 83 | elif isinstance(item, schedule): 84 | freq = item.run_every # type: datetime.timedelta 85 | cronitor_schedule = 'every ' + humanize.precisedelta(freq) 86 | elif isinstance(item, solar): 87 | # We don't support solar schedules 88 | logger.warning("The cronitor-python celery module does not support " 89 | "tasks using solar schedules. 
Task schedule '{}' will " 90 | "not be monitored".format(name)) 91 | continue 92 | else: 93 | logger.warning("The cronitor-python celery module does not support " 94 | "schedules of type `{}`".format(type(item))) 95 | continue 96 | 97 | monitors.append({ 98 | 'type': 'job', 99 | 'key': name, 100 | 'schedule': cronitor_schedule, 101 | }) 102 | 103 | headers.update({ 104 | 'x-cronitor-task-origin': 'celerybeat', 105 | 'x-cronitor-celerybeat-name': name, 106 | }) 107 | 108 | add_periodic_task_deferred.append( 109 | functools.partial(app.add_periodic_task, 110 | entry.schedule, 111 | # Setting headers in the signature 112 | # works better than in periodic task options 113 | app.tasks.get(entry.task).s().set(headers=headers), 114 | args=entry.args, kwargs=entry.kwargs, 115 | name=entry.name, **(entry.options or {})) 116 | ) 117 | 118 | if isinstance(sender.scheduler, celery.beat.PersistentScheduler): 119 | # The celerybeat-schedule file with shelve gets corrupted really easily, so we need 120 | # to set up a tempfile instead. 121 | new_schedule = tempfile.NamedTemporaryFile() 122 | with open(sender.schedule_filename, 'rb') as current_schedule: 123 | shutil.copyfileobj(current_schedule, new_schedule) 124 | # We need to stop and restart celerybeat to get the task updates in place. 125 | # This isn't ideal, but seems to work. 126 | 127 | sender.stop() 128 | # Now, actually add all the periodic tasks to overwrite beat with the headers 129 | for task in add_periodic_task_deferred: 130 | task() 131 | # Then, restart celerybeat, on the new schedule file (copied from the old one) 132 | app.Beat(schedule=new_schedule.name).run() 133 | 134 | else: 135 | # For django-celery, etc., we don't need to stop and restart celerybeat 136 | for task in add_periodic_task_deferred: 137 | task() 138 | 139 | logger.debug("[Cronitor] creating monitors: %s", [m['key'] for m in monitors]) 140 | Monitor.put(monitors) 141 | 142 | beat_init.connect(celerybeat_startup, dispatch_uid=1) 143 | 144 | @task_prerun.connect 145 | def ping_monitor_before_task(sender, **kwargs): # type: (celery.Task, Dict) -> None 146 | headers = get_headers_from_task(sender) 147 | if 'x-cronitor-celerybeat-name' in headers: 148 | monitor = Monitor(headers['x-cronitor-celerybeat-name']) 149 | elif not cronitor.celerybeat_only: 150 | monitor = Monitor(sender.name) 151 | else: 152 | return 153 | 154 | monitor.ping(state=State.RUN, series=sender.request.id) 155 | 156 | @task_success.connect 157 | def ping_monitor_on_success(sender, **kwargs): # type: (celery.Task, Dict) -> None 158 | headers = get_headers_from_task(sender) 159 | if 'x-cronitor-celerybeat-name' in headers: 160 | monitor = Monitor(headers['x-cronitor-celerybeat-name']) 161 | elif not cronitor.celerybeat_only: 162 | monitor = Monitor(sender.name) 163 | else: 164 | return 165 | 166 | monitor.ping(state=State.COMPLETE, series=sender.request.id) 167 | 168 | @task_failure.connect 169 | def ping_monitor_on_failure(sender, # type: celery.Task 170 | task_id, # type: str 171 | exception, # type: Exception 172 | args, # type: Tuple 173 | kwargs, # type: Dict 174 | traceback, 175 | einfo, # type: billiard.einfo.ExceptionInfo 176 | **kwargs2 # type: Dict 177 | ): 178 | headers = get_headers_from_task(sender) 179 | if 'x-cronitor-celerybeat-name' in headers: 180 | monitor = Monitor(headers['x-cronitor-celerybeat-name']) 181 | elif not cronitor.celerybeat_only: 182 | monitor = Monitor(sender.name) 183 | else: 184 | return 185 | 186 | monitor.ping(state=State.FAIL, series=sender.request.id, 
message=str(exception)) 187 | 188 | @task_retry.connect 189 | def ping_monitor_on_retry(sender, # type: celery.Task 190 | request, # type: celery.worker.request.Request 191 | reason, # type: Union[Exception, str] 192 | einfo, # type: billiard.einfo.ExceptionInfo 193 | **kwargs, # type: Dict 194 | ): 195 | headers = get_headers_from_task(sender) 196 | if 'x-cronitor-celerybeat-name' in headers: 197 | monitor = Monitor(headers['x-cronitor-celerybeat-name']) 198 | elif not cronitor.celerybeat_only: 199 | monitor = Monitor(sender.name) 200 | else: 201 | return 202 | 203 | monitor.ping(state=State.FAIL, series=sender.request.id, message=str(reason)) 204 | -------------------------------------------------------------------------------- /cronitor/monitor.py: -------------------------------------------------------------------------------- 1 | import time 2 | import yaml 3 | import logging 4 | import json 5 | import os 6 | import requests 7 | from yaml.loader import SafeLoader 8 | 9 | 10 | import cronitor 11 | from urllib3.util.retry import Retry 12 | from requests.adapters import HTTPAdapter 13 | 14 | logger = logging.getLogger(__name__) 15 | 16 | # https://stackoverflow.com/questions/49121365/implementing-retry-for-requests-in-python 17 | def retry_session(retries, session=None, backoff_factor=0.3): 18 | session = session or requests.Session() 19 | retry = Retry( 20 | total=retries, 21 | read=retries, 22 | connect=retries, 23 | backoff_factor=backoff_factor, 24 | ) 25 | adapter = HTTPAdapter(max_retries=retry) 26 | session.mount('http://', adapter) 27 | session.mount('https://', adapter) 28 | return session 29 | 30 | JSON = 'json' 31 | YAML = 'yaml' 32 | 33 | class Monitor(object): 34 | _headers = { 35 | 'User-Agent': 'cronitor-python', 36 | } 37 | 38 | _req = retry_session(retries=3) 39 | 40 | @classmethod 41 | def as_yaml(cls, api_key=None, api_version=None): 42 | timeout = cronitor.timeout or 10 43 | api_key = api_key or cronitor.api_key 44 | resp = cls._req.get('%s.yaml' % cls._monitor_api_url(), 45 | auth=(api_key, ''), 46 | headers=dict(cls._headers, **{'Content-Type': 'application/yaml', 'Cronitor-Version': api_version}), 47 | timeout=timeout) 48 | if resp.status_code == 200: 49 | return resp.text 50 | else: 51 | raise cronitor.APIError("Unexpected error %s" % resp.text) 52 | 53 | @classmethod 54 | def put(cls, monitors=None, **kwargs): 55 | api_key = cronitor.api_key 56 | api_version = cronitor.api_version 57 | request_format = JSON 58 | 59 | rollback = False 60 | if 'rollback' in kwargs: 61 | rollback = kwargs['rollback'] 62 | del kwargs['rollback'] 63 | if 'api_key' in kwargs: 64 | api_key = kwargs['api_key'] 65 | del kwargs['api_key'] 66 | if 'api_version' in kwargs: 67 | api_version = kwargs['api_version'] 68 | del kwargs['api_version'] 69 | if 'format' in kwargs: 70 | request_format = kwargs['format'] 71 | del kwargs['format'] 72 | 73 | _monitors = monitors or [kwargs] 74 | nested_format = True if type(monitors) == dict else False 75 | 76 | data = cls._put(_monitors, api_key, rollback, request_format, api_version) 77 | 78 | if nested_format: 79 | return data 80 | 81 | _monitors = [] 82 | for md in data: 83 | m = cls(md['key']) 84 | m.data = md 85 | _monitors.append(m) 86 | 87 | return _monitors if len(_monitors) > 1 else _monitors[0] 88 | 89 | @classmethod 90 | def _put(cls, monitors, api_key, rollback, request_format, api_version): 91 | timeout = cronitor.timeout or 10 92 | payload = _prepare_payload(monitors, rollback, request_format) 93 | if request_format == YAML: 94 | 
content_type = 'application/yaml' 95 | data = yaml.dump(payload) 96 | url = '{}.yaml'.format(cls._monitor_api_url()) 97 | else: 98 | content_type = 'application/json' 99 | data = json.dumps(payload) 100 | url = cls._monitor_api_url() 101 | 102 | resp = cls._req.put(url, 103 | auth=(api_key, ''), 104 | data=data, 105 | headers=dict(cls._headers, **{'Content-Type': content_type, 'Cronitor-Version': api_version}), 106 | timeout=timeout) 107 | 108 | if resp.status_code == 200: 109 | if request_format == YAML: 110 | return yaml.load(resp.text, Loader=SafeLoader) 111 | else: 112 | return resp.json().get('monitors', []) 113 | elif resp.status_code == 400: 114 | raise cronitor.APIValidationError(resp.text) 115 | else: 116 | raise cronitor.APIError("Unexpected error %s" % resp.text) 117 | 118 | def __init__(self, key, api_key=None, api_version=None, env=None): 119 | self.key = key 120 | self.api_key = api_key or cronitor.api_key 121 | self.api_verion = api_version or cronitor.api_version 122 | self.env = env or cronitor.environment 123 | self._data = None 124 | 125 | @property 126 | def data(self): 127 | if self._data and type(self._data) is not Struct: 128 | self._data = Struct(**self._data) 129 | elif not self._data: 130 | self._data = Struct(**self._fetch()) 131 | return self._data 132 | 133 | @data.setter 134 | def data(self, data): 135 | self._data = Struct(**data) 136 | 137 | def delete(self): 138 | resp = requests.delete( 139 | self._monitor_api_url(self.key), 140 | auth=(self.api_key, ''), 141 | headers=self._headers, 142 | timeout=10) 143 | 144 | if resp.status_code == 204: 145 | return True 146 | elif resp.status_code == 404: 147 | raise cronitor.MonitorNotFound("Monitor '%s' not found" % self.key) 148 | else: 149 | raise cronitor.APIError("An unexpected error occured when deleting '%s'" % self.key) 150 | 151 | def ping(self, **params): 152 | if not self.api_key: 153 | logger.error('No API key detected. Set cronitor.api_key or initialize Monitor with kwarg api_key.') 154 | return 155 | 156 | return self._req.get(url=self._ping_api_url(), params=self._clean_params(params), timeout=5, headers=self._headers) 157 | 158 | def ok(self): 159 | self.ping(state=cronitor.State.OK) 160 | 161 | def pause(self, hours): 162 | if not self.api_key: 163 | logger.error('No API key detected. Set cronitor.api_key or initialize Monitor with kwarg api_key.') 164 | return 165 | 166 | return self._req.get(url='{}/pause/{}'.format(self._monitor_api_url(self.key), hours), auth=(self.api_key, ''), timeout=5, headers=self._headers) 167 | 168 | def unpause(self): 169 | return self.pause(0) 170 | 171 | def _fetch(self): 172 | if not self.api_key: 173 | raise cronitor.AuthenticationError('No api_key detected. 
Set cronitor.api_key or initialize Monitor with kwarg.') 174 | 175 | resp = requests.get(self._monitor_api_url(self.key), 176 | timeout=10, 177 | auth=(self.api_key, ''), 178 | headers=dict(self._headers, **{'Content-Type': 'application/json', 'Cronitor-Version': self.api_verion})) 179 | 180 | if resp.status_code == 404: 181 | raise cronitor.MonitorNotFound("Monitor '%s' not found" % self.key) 182 | return resp.json() 183 | 184 | def _clean_params(self, params): 185 | metrics = None 186 | if 'metrics' in params and type(params['metrics']) == dict: 187 | metrics = ['{}:{}'.format(k,v) for k,v in params['metrics'].items()] 188 | 189 | return { 190 | 'state': params.get('state', None), 191 | 'message': params.get('message', None), 192 | 'series': params.get('series', None), 193 | 'host': params.get('host', os.getenv('COMPUTERNAME', None)), 194 | 'metric': metrics, 195 | 'stamp': time.time(), 196 | 'env': self.env, 197 | } 198 | 199 | def _ping_api_url(self): 200 | return "https://cronitor.link/p/{}/{}".format(self.api_key, self.key) 201 | 202 | @classmethod 203 | def _monitor_api_url(cls, key=None): 204 | if not key: return "https://cronitor.io/api/monitors" 205 | return "https://cronitor.io/api/monitors/{}".format(key) 206 | 207 | def _prepare_payload(monitors, rollback=False, request_format=JSON): 208 | ret = {} 209 | if request_format == JSON: 210 | ret['monitors'] = monitors 211 | if request_format == YAML: 212 | ret = monitors 213 | if rollback: 214 | ret['rollback'] = True 215 | return ret 216 | 217 | 218 | class Struct(object): 219 | def __init__(self, **kwargs): 220 | self.__dict__.update(kwargs) 221 | -------------------------------------------------------------------------------- /cronitor/tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cronitorio/cronitor-python/cdbae5575578236a4146544063377297e57e8874/cronitor/tests/__init__.py -------------------------------------------------------------------------------- /cronitor/tests/cronitor.yaml: -------------------------------------------------------------------------------- 1 | jobs: 2 | replenishment-report: 3 | schedule: '0 * * * *' 4 | data-warehouse-exports: 5 | schedule: '0 0 * * *' 6 | welcome-email: 7 | schedule: 'every 10 minutes' 8 | 9 | checks: 10 | cronitor-homepage: 11 | request: 12 | url: 'https://cronitor.io' 13 | assertions: 14 | - 'response.time < 2s' 15 | 16 | heartbeats: 17 | production-deploy: 18 | notify: 19 | alerts: 20 | - default 21 | events: 22 | complete: true -------------------------------------------------------------------------------- /cronitor/tests/test_00.py: -------------------------------------------------------------------------------- 1 | import yaml 2 | import cronitor 3 | import unittest 4 | from unittest.mock import call, patch, ANY 5 | import time 6 | import cronitor 7 | 8 | FAKE_API_KEY = 'cb54ac4fd16142469f2d84fc1bbebd84XXXDEADXXX' 9 | YAML_PATH = './cronitor/tests/cronitor.yaml' 10 | 11 | cronitor.api_key = FAKE_API_KEY 12 | cronitor.timeout = 10 13 | 14 | class SyncTests(unittest.TestCase): 15 | 16 | def setUp(self): 17 | return super().setUp() 18 | 19 | def test_00_monitor_attributes_are_put(self): 20 | # This test will run first, test that attributes are synced correctly, and then undo the global mock 21 | 22 | with patch('cronitor.Monitor.put') as mock_put: 23 | time.sleep(2) 24 | calls = [call([{'key': 'ping-decorator-test', 'name': 'Ping Decorator Test'}])] 25 | mock_put.assert_has_calls(calls) 26 | 27 | 
@cronitor.job('ping-decorator-test', attributes={'name': 'Ping Decorator Test'}) 28 | def function_call_with_attributes(self): 29 | return 30 | -------------------------------------------------------------------------------- /cronitor/tests/test_config.py: -------------------------------------------------------------------------------- 1 | import yaml 2 | import cronitor 3 | import unittest 4 | from unittest.mock import call, patch, ANY 5 | 6 | import cronitor 7 | 8 | FAKE_API_KEY = 'cb54ac4fd16142469f2d84fc1bbebd84XXXDEADXXX' 9 | YAML_PATH = './cronitor/tests/cronitor.yaml' 10 | 11 | cronitor.api_key = FAKE_API_KEY 12 | cronitor.timeout = 10 13 | 14 | with open(YAML_PATH, 'r') as conf: 15 | YAML_DATA = yaml.safe_load(conf) 16 | 17 | class CronitorTests(unittest.TestCase): 18 | 19 | def setUp(self): 20 | return super().setUp() 21 | 22 | def test_read_config(self): 23 | data = cronitor.read_config(YAML_PATH, output=True) 24 | self.assertIn('jobs', data) 25 | self.assertIn('checks', data) 26 | self.assertIn('heartbeats', data) 27 | 28 | @patch('cronitor.Monitor.put', return_value=YAML_DATA) 29 | def test_validate_config(self, mock): 30 | cronitor.config = YAML_PATH 31 | cronitor.validate_config() 32 | mock.assert_called_once_with(monitors=YAML_DATA, rollback=True, format='yaml') 33 | 34 | @patch('cronitor.Monitor.put') 35 | def test_apply_config(self, mock): 36 | cronitor.config = YAML_PATH 37 | cronitor.apply_config() 38 | mock.assert_called_once_with(monitors=YAML_DATA, rollback=False, format='yaml') 39 | -------------------------------------------------------------------------------- /cronitor/tests/test_monitor.py: -------------------------------------------------------------------------------- 1 | import copy 2 | import cronitor 3 | import unittest 4 | from unittest.mock import call, patch, ANY 5 | 6 | import cronitor 7 | 8 | FAKE_API_KEY = 'cb54ac4fd16142469f2d84fc1bbebd84XXXDEADXXX' 9 | 10 | MONITOR = { 11 | 'type': 'job', 12 | 'key': 'a-test_key', 13 | 'schedule': '* * * * *', 14 | 'assertions': [ 15 | 'metric.duration < 10 seconds' 16 | ], 17 | # 'notify': ['devops-alerts'] 18 | } 19 | MONITOR_2 = copy.deepcopy(MONITOR) 20 | MONITOR_2['key'] = 'another-test-key' 21 | 22 | YAML_FORMAT_MONITORS = { 23 | 'jobs': { 24 | MONITOR['key']: MONITOR, 25 | MONITOR_2['key']: MONITOR_2 26 | } 27 | } 28 | 29 | cronitor.api_key = FAKE_API_KEY 30 | 31 | class MonitorTests(unittest.TestCase): 32 | 33 | @patch('cronitor.Monitor._put', return_value=[MONITOR]) 34 | def test_create_monitor(self, mocked_create): 35 | monitor = cronitor.Monitor.put(**MONITOR) 36 | self.assertEqual(monitor.data.key, MONITOR['key']) 37 | self.assertEqual(monitor.data.assertions, MONITOR['assertions']) 38 | self.assertEqual(monitor.data.schedule, MONITOR['schedule']) 39 | 40 | @patch('cronitor.Monitor._put', return_value=[MONITOR, MONITOR_2]) 41 | def test_create_monitors(self, mocked_create): 42 | monitors = cronitor.Monitor.put([MONITOR, MONITOR_2]) 43 | self.assertEqual(len(monitors), 2) 44 | self.assertCountEqual([MONITOR['key'], MONITOR_2['key']], list(map(lambda m: m.data.key, monitors))) 45 | 46 | @patch('cronitor.Monitor._req.put') 47 | def test_create_monitor_fails(self, mocked_put): 48 | mocked_put.return_value.status_code = 400 49 | with self.assertRaises(cronitor.APIValidationError): 50 | cronitor.Monitor.put(**MONITOR) 51 | 52 | @patch('requests.get') 53 | def test_get_monitor_invalid_code(self, mocked_get): 54 | mocked_get.return_value.status_code = 404 55 | with self.assertRaises(cronitor.MonitorNotFound): 56 
| monitor = cronitor.Monitor("I don't exist") 57 | monitor.data 58 | 59 | @patch('cronitor.Monitor._put') 60 | def test_update_monitor_data(self, mocked_update): 61 | monitor_data = MONITOR.copy() 62 | monitor_data.update({'name': 'Updated Name'}) 63 | mocked_update.return_value = [monitor_data] 64 | 65 | monitor = cronitor.Monitor.put(key=MONITOR['key'], name='Updated Name') 66 | self.assertEqual(monitor.data.name, 'Updated Name') 67 | 68 | @patch('cronitor.Monitor._req.put') 69 | def test_update_monitor_fails_validation(self, mocked_update): 70 | mocked_update.return_value.status_code = 400 71 | with self.assertRaises(cronitor.APIValidationError): 72 | cronitor.Monitor.put(schedule='* * * * *') 73 | 74 | @patch('cronitor.Monitor._put', return_value=YAML_FORMAT_MONITORS) 75 | def test_create_monitors_yaml_body(self, mocked_create): 76 | monitors = cronitor.Monitor.put(monitors=YAML_FORMAT_MONITORS, format='yaml') 77 | self.assertIn(MONITOR['key'], monitors['jobs']) 78 | self.assertIn(MONITOR_2['key'], monitors['jobs']) 79 | 80 | @patch('requests.delete') 81 | def test_delete_no_id(self, mocked_delete): 82 | mocked_delete.return_value.status_code = 204 83 | monitor = cronitor.Monitor(MONITOR['key']) 84 | monitor.delete() 85 | 86 | -------------------------------------------------------------------------------- /cronitor/tests/test_pings.py: -------------------------------------------------------------------------------- 1 | import os 2 | import unittest 3 | from unittest.mock import patch, ANY, call 4 | from unittest.mock import MagicMock 5 | import cronitor 6 | import pytest 7 | 8 | # a reserved monitorkey for running integration tests against cronitor.link 9 | FAKE_KEY = 'd3x0c1' 10 | FAKE_API_KEY = 'ping-api-key' 11 | 12 | class MonitorPingTests(unittest.TestCase): 13 | 14 | def setUp(self): 15 | cronitor.api_key = FAKE_API_KEY 16 | 17 | def test_endpoints(self): 18 | monitor = cronitor.Monitor(key=FAKE_KEY) 19 | 20 | self.assertTrue(monitor.ping()) 21 | 22 | states = ['run', 'complete', 'fail', 'ok'] 23 | for state in states: 24 | self.assertTrue(monitor.ping(state=state)) 25 | 26 | 27 | @patch('cronitor.Monitor._req.get') 28 | def test_with_all_params(self, ping): 29 | 30 | monitor = cronitor.Monitor(FAKE_KEY, env='staging') 31 | 32 | params = { 33 | 'state': 'run', 34 | 'host': 'foo', 35 | 'message': 'test message', 36 | 'series': 'abc', 37 | 'metrics': { 38 | 'duration': 100, 39 | 'count': 5, 40 | 'error_count':2 41 | } 42 | } 43 | 44 | monitor.ping(**params) 45 | del params['metrics'] 46 | params['metric'] = [ANY, ANY, ANY,] 47 | params['env'] = monitor.env 48 | params['stamp'] = ANY 49 | 50 | ping.assert_called_once_with( 51 | headers={ 52 | 'User-Agent': 'cronitor-python', 53 | }, 54 | params=params, 55 | timeout=5, 56 | url='https://cronitor.link/p/{}/{}'.format(FAKE_API_KEY, FAKE_KEY)) 57 | 58 | 59 | def test_convert_metrics_hash(self): 60 | monitor = cronitor.Monitor(FAKE_KEY) 61 | clean = monitor._clean_params({ 'metrics': { 62 | 'duration': 100, 63 | 'count': 500, 64 | 'error_count': 20 65 | }}) 66 | self.assertListEqual(sorted(clean['metric']), sorted(['count:500', 'duration:100', 'error_count:20' ])) 67 | 68 | 69 | class PingDecoratorTests(unittest.TestCase): 70 | 71 | def setUp(self): 72 | cronitor.api_key = FAKE_API_KEY 73 | 74 | @patch('cronitor.Monitor.ping') 75 | def test_ping_wraps_function_success(self, mocked_ping): 76 | calls = [call(state='run', series=ANY), call(state='complete', series=ANY, metrics={'duration': ANY}, message=ANY)] 77 | self.function_call() 78 | 
mocked_ping.assert_has_calls(calls) 79 | 80 | @patch('cronitor.Monitor.ping') 81 | def test_ping_wraps_function_raises_exception(self, mocked_ping): 82 | calls = [call(state='run', series=ANY), call(state='fail', series=ANY, metrics={'duration': ANY}, message=ANY)] 83 | self.assertRaises(Exception, lambda: self.error_function_call()) 84 | mocked_ping.assert_has_calls(calls) 85 | 86 | 87 | @patch('cronitor.Monitor.ping') 88 | @patch('cronitor.Monitor.__init__') 89 | def test_ping_with_non_default_env(self, mocked_monitor, mocked_ping): 90 | mocked_monitor.return_value = None 91 | self.staging_env_function_call() 92 | mocked_monitor.assert_has_calls([call('ping-decorator-test', env='staging')]) 93 | 94 | @cronitor.job('ping-decorator-test') 95 | def function_call(self): 96 | return 97 | 98 | @cronitor.job('ping-decorator-test') 99 | def error_function_call(self): 100 | raise Exception 101 | 102 | @cronitor.job('ping-decorator-test', env='staging') 103 | def staging_env_function_call(self): 104 | return 105 | 106 | 107 | 108 | 109 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | requests==2.31.0 2 | pyyaml==6.0.1 3 | humanize==3.13.1 4 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [wheel] 2 | universal = 1 3 | 4 | [bdist_rpm] 5 | requires = python requests -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup, find_packages 2 | 3 | with open("README.md", "r", encoding="utf-8") as fh: 4 | long_description = fh.read() 5 | 6 | setup( 7 | name='cronitor', 8 | version='4.7.1', 9 | packages=find_packages(), 10 | url='https://github.com/cronitorio/cronitor-python', 11 | license='MIT License', 12 | author='August Flanagan', 13 | author_email='august@cronitor.io', 14 | description='A lightweight Python client for Cronitor.', 15 | long_description = long_description, 16 | long_description_content_type = 'text/markdown', 17 | install_requires=[ 18 | 'requests', 19 | 'pyyaml', 20 | 'humanize', 21 | 'urllib3' 22 | ], 23 | entry_points=dict(console_scripts=['cronitor = cronitor.__main__:main']) 24 | ) 25 | --------------------------------------------------------------------------------