├── .github └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE.txt ├── NOTICE.txt ├── README.md ├── examples ├── basic_example.py ├── dynamodb_example.py ├── estimate_example.py ├── metrics_example.py ├── multiple_tasks_example.py ├── parallel_estimate_example.py ├── redis_example.py ├── redis_subworkflow_example.py └── sub_workflows_example.py ├── progressmonitor ├── __init__.py └── helpers │ ├── __init__.py │ └── db_helpers.py ├── requirements.txt ├── setup.cfg ├── setup.py ├── tests ├── __init__.py └── test_progressmonitor.py └── tools ├── dict_test.py ├── dict_test2.py ├── load_data_test.py ├── metrics_test.py ├── progress_test.py ├── readme_quick_start.py ├── test.py ├── test_dict_ref.py └── time.py /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | *Issue #, if available:* 2 | 3 | *Description of changes:* 4 | 5 | 6 | By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. 7 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | local_settings.py 55 | 56 | # Flask stuff: 57 | instance/ 58 | .webassets-cache 59 | 60 | # Scrapy stuff: 61 | .scrapy 62 | 63 | # Sphinx documentation 64 | docs/_build/ 65 | 66 | # PyBuilder 67 | target/ 68 | 69 | # IPython Notebook 70 | .ipynb_checkpoints 71 | 72 | # pyenv 73 | .python-version 74 | 75 | # celery beat schedule file 76 | celerybeat-schedule 77 | 78 | # dotenv 79 | .env 80 | 81 | # virtualenv 82 | venv/ 83 | ENV/ 84 | 85 | # Spyder project settings 86 | .spyderproject 87 | 88 | # Rope project settings 89 | .ropeproject 90 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. 
Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check [existing open](https://github.com/awslabs/aws-progress-monitor/issues), or [recently closed](https://github.com/awslabs/aws-progress-monitor/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels ((enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/awslabs/aws-progress-monitor/labels/help%20wanted) issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 
55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](https://github.com/awslabs/aws-progress-monitor/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | Amazon Software License 1.0 2 | 3 | This Amazon Software License ("License") governs your use, reproduction, and 4 | distribution of the accompanying software as specified below. 5 | 6 | 1. Definitions 7 | 8 | "Licensor" means any person or entity that distributes its Work. 9 | 10 | "Software" means the original work of authorship made available under this 11 | License. 12 | 13 | "Work" means the Software and any additions to or derivative works of the 14 | Software that are made available under this License. 15 | 16 | The terms "reproduce," "reproduction," "derivative works," and 17 | "distribution" have the meaning as provided under U.S. copyright law; 18 | provided, however, that for the purposes of this License, derivative works 19 | shall not include works that remain separable from, or merely link (or bind 20 | by name) to the interfaces of, the Work. 21 | 22 | Works, including the Software, are "made available" under this License by 23 | including in or with the Work either (a) a copyright notice referencing the 24 | applicability of this License to the Work, or (b) a copy of this License. 25 | 26 | 2. License Grants 27 | 28 | 2.1 Copyright Grant. Subject to the terms and conditions of this License, 29 | each Licensor grants to you a perpetual, worldwide, non-exclusive, 30 | royalty-free, copyright license to reproduce, prepare derivative works of, 31 | publicly display, publicly perform, sublicense and distribute its Work and 32 | any resulting derivative works in any form. 33 | 34 | 2.2 Patent Grant. Subject to the terms and conditions of this License, each 35 | Licensor grants to you a perpetual, worldwide, non-exclusive, royalty-free 36 | patent license to make, have made, use, sell, offer for sale, import, and 37 | otherwise transfer its Work, in whole or in part. The foregoing license 38 | applies only to the patent claims licensable by Licensor that would be 39 | infringed by Licensor's Work (or portion thereof) individually and 40 | excluding any combinations with any other materials or technology. 41 | 42 | 3. Limitations 43 | 44 | 3.1 Redistribution. You may reproduce or distribute the Work only if 45 | (a) you do so under this License, (b) you include a complete copy of this 46 | License with your distribution, and (c) you retain without modification 47 | any copyright, patent, trademark, or attribution notices that are present 48 | in the Work. 49 | 50 | 3.2 Derivative Works. You may specify that additional or different terms 51 | apply to the use, reproduction, and distribution of your derivative works 52 | of the Work ("Your Terms") only if (a) Your Terms provide that the use 53 | limitation in Section 3.3 applies to your derivative works, and (b) you 54 | identify the specific derivative works that are subject to Your Terms. 55 | Notwithstanding Your Terms, this License (including the redistribution 56 | requirements in Section 3.1) will continue to apply to the Work itself. 57 | 58 | 3.3 Use Limitation. 
The Work and any derivative works thereof only may be 59 | used or intended for use with the web services, computing platforms or 60 | applications provided by Amazon.com, Inc. or its affiliates, including 61 | Amazon Web Services, Inc. 62 | 63 | 3.4 Patent Claims. If you bring or threaten to bring a patent claim against 64 | any Licensor (including any claim, cross-claim or counterclaim in a 65 | lawsuit) to enforce any patents that you allege are infringed by any Work, 66 | then your rights under this License from such Licensor (including the 67 | grants in Sections 2.1 and 2.2) will terminate immediately. 68 | 69 | 3.5 Trademarks. This License does not grant any rights to use any 70 | Licensor's or its affiliates' names, logos, or trademarks, except as 71 | necessary to reproduce the notices described in this License. 72 | 73 | 3.6 Termination. If you violate any term of this License, then your rights 74 | under this License (including the grants in Sections 2.1 and 2.2) will 75 | terminate immediately. 76 | 77 | 4. Disclaimer of Warranty. 78 | 79 | THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 80 | EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF 81 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR 82 | NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER 83 | THIS LICENSE. SOME STATES' CONSUMER LAWS DO NOT ALLOW EXCLUSION OF AN 84 | IMPLIED WARRANTY, SO THIS DISCLAIMER MAY NOT APPLY TO YOU. 85 | 86 | 5. Limitation of Liability. 87 | 88 | EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL 89 | THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE 90 | SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, 91 | INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR 92 | RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK (INCLUDING 93 | BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS 94 | OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER COMM ERCIAL DAMAGES 95 | OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF 96 | SUCH DAMAGES. 97 | -------------------------------------------------------------------------------- /NOTICE.txt: -------------------------------------------------------------------------------- 1 | AWS Progress Monitor 2 | Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # AWS Progress Monitor 2 | ## First Things First 3 | Please make sure to review the [current AWS CloudWatch Custom Metrics pricing]( https://aws.amazon.com/cloudwatch/pricing/) before proceeding. 4 | ## Overview 5 | `AWS Progress Monitor` is an easy-to-use Python module that gives you a powerful way to track real-time progress and metrics around the progress of multi-level workflow processes. 
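At a glance, tracking a workflow looks roughly like this (a minimal sketch, assuming a running local Redis instance and the same Redis-backed store used in the fuller examples below):
```python
import redis
from progressmonitor import RedisProgressManager, ProgressMonitor, ProgressTracker

# minimal sketch -- mirrors the Redis-backed examples later in this README
pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
store = RedisProgressManager(RedisConnection=redis.Redis(connection_pool=pool))

workflow = ProgressMonitor(DbConnection=store, Name='MyWorkflow')
task = ProgressTracker(Name='MyTask', EstimatedSeconds=10)
workflow.with_tracker(task)

workflow.start()
task.start()
# ... do the real work here ...
task.succeed()
workflow.succeed()
print(workflow.status)  # rolled-up status of the whole workflow
```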
6 | ## Features 7 | * Unlimited levels of workflows and tasks 8 | * Track progress at any level from the root workflow to a specific task 9 | * View % done, % remaining, time completed, time remaining on the entire workflow or any sub-workflow/task 10 | * Dynamically add new workflows/tasks at any time 11 | * Automatically log metrics to CloudWatch Custom Metrics as workflows/tasks are completed 12 | * Supports parallel workflows 13 | * User-provided estimates at the task level 14 | * ElastiCache data store manages each workflow/task as a separate record for minimal collision 15 | ## Installation 16 | You can install directly from PyPI: 17 | ```sh 18 | pip install aws-progress-monitor 19 | ``` 20 | ## What problem are we trying to solve? 21 | Imagine long-running step functions or a workflow that has many child workflows (e.g., syncing all of my S3 buckets, importing several VM instances, etc.), each with many processes running across many machines. How do you know the progress of the entire workflow? How do you easily log metrics that tie back to the workflow? `AWS Progress Monitor` uses ElastiCache and CloudWatch Custom Metrics to solve the problem of managing the real-time status of the entire workflow or any part of the workflow. 22 | ## `AWS Progress Monitor` Simply Explained 23 | In the simplest of terms, `AWS Progress Monitor` is a nested set of `ProgressTracker` objects that mirror your tasks and workflows. All you need to do is add trackers as children to other trackers. You can add estimated times for the tasks. After that, all you need to do is start and stop trackers as work is done. `AWS Progress Monitor` does all the magic of rolling up the progress and status across the entire workflow. 24 | ## Terminology 25 | `AWS Progress Monitor` is meant to provide simplicity to progress tracking, so at its core, it uses a single `ProgressTracker` class. A `ProgressTracker` represents any distinct unit of work, either a workflow or a specific task. 26 | ### Examples 27 | #### Example: Single Task 28 | This is the most basic of basic applications. We create a `ProgressMonitor` object (which is a `ProgressTracker`), which is required to create a new workflow tracker. Then we create another tracker called `SingleTask`. Each tracker has an `id` property that is an automatically generated `uuid`. If you want to use your own unique ID as well (from some other process), you can pass in a `FriendlyId` as well, which is accessible by the `friendly_id` property. All we need to do is call `with_tracker` and the `SingleTask` tracker is attached to the root workflow. 
29 | 30 | ```sh 31 | import redis 32 | import time 33 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 34 | ProgressTracker 35 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 36 | r = redis.Redis(connection_pool=pool) 37 | # this is this data store manager for AWS Progress Monitor 38 | redispm = RedisProgressManager(RedisConnection=r) 39 | 40 | # create the master workflow using Redis as the backing store 41 | root_workflow = ProgressMonitor(DbConnection=redispm, Name="MasterWorkflow") 42 | 43 | # this is a single task that we want to track 44 | task = ProgressTracker(Name='SingleTask', FriendlyId='MyTask') 45 | 46 | # this magic command adds the task to the main workflow 47 | root_workflow.with_tracker(task) 48 | print root_workflow.status, task.status 49 | 50 | # start the main workflow, but don't start the task yet 51 | print root_workflow.start() 52 | print root_workflow.status, task.status 53 | time.sleep(1) 54 | 55 | # we can see elapsed time on any tracker 56 | print root_workflow.elapsed_time_in_seconds(), task.elapsed_time_in_seconds() 57 | 58 | # now we start the task 59 | task.start() 60 | time.sleep(1) 61 | 62 | # the task is now one second behind the main workflow 63 | print root_workflow.status, task.status 64 | print root_workflow.elapsed_time_in_seconds(), task.elapsed_time_in_seconds() 65 | 66 | # we're going to mark the task and the main workflow successfully done, which stops the timer 67 | task.succeed() 68 | root_workflow.succeed() 69 | print root_workflow.status, task.status 70 | print root_workflow.elapsed_time_in_seconds(), task.elapsed_time_in_seconds() 71 | ``` 72 | #### Example: Single Task (Output) 73 | ```sh 74 | Not started Not started 75 | MasterWorkflow 76 | In Progress Not started 77 | 1.001271 0 78 | In Progress In Progress 79 | 2.003592 1.002901 80 | Succeeded Succeeded 81 | 2.007596 1.004904 82 | ``` 83 | #### Example: Multiple Tasks 84 | We're now adding three tasks to the root workflow. This also demonstrates how you can stop a task with `succeed`, `fail` or `cancel` with a status message. 
85 | ```sh 86 | import redis 87 | import time 88 | from progressmonitor import RedisProgressManager, ProgressMonitor, ProgressTracker 89 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 90 | r = redis.Redis(connection_pool=pool) 91 | redispm = RedisProgressManager(RedisConnection=r) 92 | root_wf = ProgressMonitor(DbConnection=redispm, Name="MasterWorkflow") 93 | 94 | # we're creating three separate tasks 95 | task_a = ProgressTracker(Name='Task A', FriendlyId='TaskA') 96 | task_b = ProgressTracker(Name='Task B', FriendlyId='TaskB') 97 | task_c = ProgressTracker(Name='Task C', FriendlyId='TaskC') 98 | 99 | # all three tasks are added to the main workflow 100 | root_wf.with_tracker(task_a).with_tracker(task_b).with_tracker(task_c) 101 | print root_wf.status, task_a.status 102 | print root_wf.start() 103 | print root_wf.status, task_a.status, task_b.status, task_c.status 104 | time.sleep(1) 105 | 106 | # each task is started and tracked independently from the other tasks 107 | task_a.start() 108 | time.sleep(1) 109 | task_b.start() 110 | time.sleep(1) 111 | task_c.start() 112 | print root_wf.elapsed_time_in_seconds(), \ 113 | task_a.elapsed_time_in_seconds(), \ 114 | task_b.elapsed_time_in_seconds(), \ 115 | task_c.elapsed_time_in_seconds() 116 | 117 | # any task can succeed, fail, cancel or pause, all of which stop the timer 118 | # any task can also take a status Message parameter, which is saved with the task to provide a custom real-time status message along with the actual Status 119 | task_a.succeed(Message='This task succeeded') 120 | task_b.fail(Message='This task failed') 121 | task_c.cancel(Message='This task canceled') 122 | root_wf.fail() 123 | print root_wf.status, task_a.status, task_b.status, task_c.status 124 | print root_wf.elapsed_time_in_seconds(), \ 125 | task_a.elapsed_time_in_seconds(), \ 126 | task_b.elapsed_time_in_seconds(), \ 127 | task_c.elapsed_time_in_seconds() 128 | print task_b.status_msg 129 | ``` 130 | #### Example: Multiple Tasks (Output) 131 | ```sh 132 | Not started Not started 133 | MasterWorkflow 134 | In Progress Not started Not started Not started 135 | 3.006041 2.001023 1.00046 0.0 136 | Failed Succeeded Failed Canceled 137 | 3.007544 2.001507 1.001963 0.001503 138 | This task failed 139 | ``` 140 | #### Example: Sub-Workflows 141 | We're starting to add some complexity here by adding subworkflows and tasks. Notice that both workflows and tasks are `ProgressTracker` objects. The only difference between a "workflow" and a "task" is that a workflow has child trackers. 
142 | ```sh 143 | import redis 144 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 145 | ProgressTracker 146 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 147 | r = redis.Redis(connection_pool=pool) 148 | redispm = RedisProgressManager(RedisConnection=r) 149 | root_wf = ProgressMonitor(DbConnection=redispm, Name="MasterWorkflow") 150 | 151 | # we're creating two progress trackers under the master progress tracker 152 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 153 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 154 | 155 | # we're creating two progress trackers under Workflow B 156 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 157 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 158 | 159 | # we're creating two progress trackers under Workflow A 160 | task_a1 = ProgressTracker(Name='Task A-1', FriendlyId='TaskA1') 161 | task_a2 = ProgressTracker(Name='Task A-2', FriendlyId='TaskA2') 162 | 163 | # we're creating a progress tracker under SubWorkflow B1 164 | task_b2_1 = ProgressTracker(Name='Task B2-1', FriendlyId='TaskB21') 165 | 166 | # wire up all the trackers 167 | root_wf.with_tracker(wf_a).with_tracker(wf_b) 168 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 169 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 170 | wf_b_2.with_tracker(task_b2_1) 171 | 172 | # every tracker has the same properties 173 | print "Total items in workflow: {}".format(root_wf.all_children_count) 174 | print "Total items not started: {}".format(root_wf.not_started_count) 175 | print task_b2_1.status, wf_b_2.status, wf_b.status, root_wf.status 176 | 177 | # when you start a tracker, the parent has to be started as well . . . `Parents=True` tells `AWS Progress Monitor` to automatically start all parents up the tree 178 | task_b2_1.start(Parents=True) 179 | 180 | # we can print out _count and _pct for any metric and it will include all children . . . in this case, we're getting all in_progress items in the entire tree 181 | print "Total items started: {}".format(root_wf.in_progress_count) 182 | print "Percentage started: {}".format(root_wf.in_progress_pct) 183 | print task_b2_1.status, wf_b_2.status, wf_b.status, root_wf.status 184 | task_b2_1.succeed() 185 | 186 | # we've succeeded only one task in the tree . . 
.we can get the status of the whole workflow and/or the status of Subworkflow B2, which is now 100% done 187 | print "Total items done: {}".format(root_wf.done_count) 188 | print "Percentage done: {}".format(root_wf.done_pct) 189 | print "Subworkflow B2 total items: {}".format(wf_b_2.all_children_count) 190 | print "Subworkflow B2 items done: {}".format(wf_b_2.done_count) 191 | print "Subworkflow B2 percentage done: {}".format(wf_b_2.done_pct) 192 | print task_b2_1.status, wf_b_2.status, wf_b.status, root_wf.status 193 | ``` 194 | #### Example: Sub-Workflows (OUTPUT) 195 | ```sh 196 | Total items in workflow: 7 197 | Total items not started: 7 198 | Not started Not started Not started Not started 199 | Total items started: 3 200 | Percentage started: 0.43 201 | In Progress In Progress In Progress In Progress 202 | Total items done: 1 203 | Total Percentage done: 0.14 204 | Subworkflow B2 total items: 1 205 | Subworkflow B2 items done: 1 206 | Subworkflow B2 percentage done: 1.0 207 | Succeeded In Progress In Progress In Progress 208 | ``` 209 | #### Example: Saving to ElastiCache (Redis) 210 | In this example, we're saving the current state of the workflow with `update_all`. To maximize performance, every tracker has an `is_dirty` flag. When you call `update_all`, only trackers that are changed will be saved. So if you have 1,000 trackers in your workflow and only one has changed, we'll only make a single update call. 211 | ```sh 212 | import redis 213 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 214 | ProgressTracker 215 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 216 | r = redis.Redis(connection_pool=pool) 217 | redispm = RedisProgressManager(RedisConnection=r) 218 | 219 | # create the same trackers as the previous example 220 | root_wf = ProgressMonitor(DbConnection=redispm, Name="MasterWorkflow") 221 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 222 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 223 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 224 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 225 | task_a1 = ProgressTracker(Name='Task A-1', FriendlyId='TaskA1') 226 | task_a2 = ProgressTracker(Name='Task A-2', FriendlyId='TaskA2') 227 | task_b2_1 = ProgressTracker(Name='Task B2-1', FriendlyId='TaskB21') 228 | root_wf.with_tracker(wf_a).with_tracker(wf_b) 229 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 230 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 231 | wf_b_2.with_tracker(task_b2_1) 232 | task_b2_1.start(Parents=True) 233 | 234 | # print current values for comparison 235 | print "Total items started: {}".format(root_wf.in_progress_count) 236 | print "Percentage started: {}".format(root_wf.in_progress_pct) 237 | 238 | # the update_all command saves all children to ElastiCache 239 | root_wf.update_all() 240 | 241 | # every tracker generates a GUID . . . 
let's grab this so we can load it from the DB 242 | id = root_wf.id 243 | 244 | # create a new tracker with no children 245 | pm2 = ProgressMonitor(DbConnection=redispm) 246 | print "Total items: {}".format(pm2.all_children_count) 247 | 248 | # load the tracker and all children from ElastiCache by ID 249 | pm2 = pm2.load(id) 250 | print "Total items started: {}".format(pm2.in_progress_count) 251 | print "Percentage started: {}".format(pm2.in_progress_pct) 252 | ``` 253 | #### Example: Saving to ElastiCache (OUTPUT) 254 | ```sh 255 | Total items started: 3 256 | Percentage started: 0.43 257 | Total items: 0 258 | Total items started: 3 259 | Percentage started: 0.43 260 | ``` 261 | #### Example: Working with a single subworkflow 262 | Suppose you have a very large complex workflow with lots and lots of subworkflows and tasks. You have a process that only works on a specific workflow or task. it doesn't make any sense to load the entirety of the massive workflow just to track the progress of a single workflow or task. `AWS Progress Monitor` makes this easy. You can pass in the `id` of any tracker and the `AWS Progress Monitor` object will return just that workflow. 263 | ```sh 264 | import redis 265 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 266 | ProgressTracker 267 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 268 | r = redis.Redis(connection_pool=pool) 269 | redispm = RedisProgressManager(RedisConnection=r) 270 | 271 | # setup all the trackers 272 | root_wf = ProgressMonitor(DbConnection=redispm, Name="MasterWorkflow") 273 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 274 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 275 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 276 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 277 | task_a1 = ProgressTracker(Name='Task A-1', FriendlyId='TaskA1') 278 | task_a2 = ProgressTracker(Name='Task A-2', FriendlyId='TaskA2') 279 | task_b2_1 = ProgressTracker(Name='Task B2-1', FriendlyId='TaskB21') 280 | root_wf.with_tracker(wf_a).with_tracker(wf_b) 281 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 282 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 283 | wf_b_2.with_tracker(task_b2_1) 284 | task_b2_1.start(Parents=True) 285 | 286 | # here we are printing the total in-progress items in the entire workflow 287 | print "Total items started: {}".format(root_wf.in_progress_count) 288 | print "Percentage started: {}".format(root_wf.in_progress_pct) 289 | root_wf.update_all() 290 | 291 | # grab the id from Workflow B 292 | id = wf_b.id 293 | 294 | # we're going to just load Workflow B 295 | pm2 = root_wf.load(id) 296 | 297 | # so now we are only working with Workflow B 298 | print "Total items started: {}".format(pm2.in_progress_count) 299 | print "Percentage started: {}".format(pm2.in_progress_pct) 300 | ``` 301 | #### Example: Workin with a single subworkflow (OUTPUT) 302 | ```sh 303 | Total items started: 3 304 | Percentage started: 0.43 305 | Total items: 0 306 | Total items started: 2 307 | Percentage started: 0.67 308 | ``` 309 | #### Example: Using estimates 310 | When you create a tracker, you can pass in an estimated number of seconds that you believe the tracker will run. Estimates are only added at the task level, meaning that if you create a tracker with an estimated time and then add child trackers, you'll have a conflict. 
If you have a `BackupFolder` tracker with two child trackers `CreateFolder` and `CopyFiles`, you *can't* have an estimated time on `CreateFolder` and `CopyFiles` *as well as* `BackupFolder`. Estimated times are ignored for a tracker if that tracker has child trackers. 311 | ```sh 312 | import redis 313 | import time 314 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 315 | ProgressTracker 316 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 317 | r = redis.Redis(connection_pool=pool) 318 | redispm = RedisProgressManager(RedisConnection=r) 319 | 320 | # setup the trackers 321 | root_wf = ProgressMonitor(DbConnection=redispm, Name="MasterWorkflow") 322 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 323 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 324 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 325 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 326 | 327 | # each of these tasks has a 10 second estimate 328 | task_a1 = ProgressTracker(Name='Task A-1', EstimatedSeconds=10) 329 | task_a2 = ProgressTracker(Name='Task A-2', EstimatedSeconds=10) 330 | task_b2_1 = ProgressTracker(Name='Task B2-1', EstimatedSeconds=10) 331 | root_wf.with_tracker(wf_a).with_tracker(wf_b) 332 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 333 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 334 | wf_b_2.with_tracker(task_b2_1) 335 | print "Total estimated seconds: {}".format(root_wf.total_estimate) 336 | task_b2_1.start(Parents=True) 337 | time.sleep(2) 338 | 339 | # we can elapsed and remaining time at any level 340 | print "Elapsed time in seconds: {}".format(root_wf.elapsed_time_in_seconds) 341 | print "Remaining time in seconds: {}".format(root_wf.remaining_time_in_seconds) 342 | print "Workflow B elapsed time: {}".format(wf_b_2.elapsed_time_in_seconds) 343 | print "Workflow B remaining time: {}".format(wf_b_2.remaining_time_in_seconds) 344 | ``` 345 | #### Example: Using estimates (OUTPUT) 346 | ```sh 347 | Total estimated seconds: 30 348 | Total elapsed time in secs: 2.000171 349 | Total remaining time in secs: 27.99955 350 | Workflow B elapsed time: 2.00171 351 | Workflow B remaining time: 7.997763 352 | ``` 353 | #### Example: Using parallel workflows with estimates 354 | When you want to run work workflows in parallel, obviously we don't want to add up all the estimates. We want to estimate based on running in parallel. In this case, we estimate a total of each parallel workflow and return the longest estimate. 
355 | ```sh 356 | import redis 357 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 358 | ProgressTracker 359 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 360 | r = redis.Redis(connection_pool=pool) 361 | redispm = RedisProgressManager(RedisConnection=r) 362 | root_wf = ProgressMonitor(DbConnection=redispm, Name="MasterWorkflow") 363 | 364 | # we need to flag that this workflow's children run in parallel 365 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA', 366 | HasParallelChildren=True) 367 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 368 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 369 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 370 | 371 | # Workflow A Task A-1 has a 10-second estimate 372 | task_a1 = ProgressTracker(Name='Task A-1', EstimatedSeconds=10) 373 | wf_a_1 = ProgressTracker(Name='SubWorkflow A1') 374 | 375 | # Workflow A, Subworkflow A1 has a total of 50 seconds estimate 376 | wf_a1_1 = ProgressTracker(Name='SubWorkflow A1, Task 1', EstimatedSeconds=20) 377 | wf_a1_2 = ProgressTracker(Name='SubWorkflow A1, Task 2', EstimatedSeconds=30) 378 | root_wf.with_tracker(wf_a).with_tracker(wf_b) 379 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 380 | wf_a_1.with_tracker(wf_a1_1).with_tracker(wf_a1_2) 381 | wf_a.with_tracker(task_a1).with_tracker(wf_a_1) 382 | 383 | # total_estimate automatically finds the longest estimate under the parallel workflows 384 | print "Total estimated seconds: {}".format(root_wf.total_estimate) 385 | ``` 386 | #### Example: Using parallel workflows with estimates (OUTPUT) 387 | ```sh 388 | Total estimated seconds: 50 389 | ``` 390 | #### Example: Automatically logging metrics 391 | One of the really valuable aspects of `AWS Progress Monitor` is the ability to log performance metrics to CloudWatch. This allows `AWS Progress Monitor` to be not only a real-time progress visibility tool, but also a performance insight tool as well. All you need to do is attach a metric namespace and metric name to any tracker you want metrics and `AWS Progress Monitor` does the rest. When you start and stop a tracker, the timing is automatically logged to CloudWatch with the metric name you provide. Additionally, if you want more dimensions to the metrics, you can easily add those as well to generate richer data. 
392 | 393 | ```sh 394 | import redis 395 | import time 396 | from progressmonitor import RedisProgressManager, ProgressMonitor, ProgressTracker 397 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 398 | r = redis.Redis(connection_pool=pool) 399 | rpm = RedisProgressManager(RedisConnection=r) 400 | pm = ProgressMonitor(DbConnection=rpm) 401 | 402 | # Create a tracker and attach to the 'OS/Startup' metric in the 'dev_testing' namespace 403 | c = ProgressTracker(Name='TestWorkflow').with_metric(Namespace='dev_testing', 404 | Metric='OS/Startup') 405 | 406 | # adding Linux flavor and version to create a few richer metrics 407 | c.metric.with_dimension('linux_flavor', 'redhat') \ 408 | .with_dimension('version', '6.8') 409 | 410 | # notice that we no longer refer to the metrics -- it's all behind-the-scenes now 411 | pm.with_tracker(c) 412 | pm.update_all() 413 | c.start(Parents=True) 414 | pm.update_all() 415 | print 'sleeping' 416 | time.sleep(2) 417 | 418 | # this command will automatically check if there is a metrics and log to CloudWatch 419 | c.succeed() 420 | pm.update_all() 421 | print c.elapsed_time_in_seconds 422 | print c.start_time 423 | print c.finish_time 424 | ``` 425 | -------------------------------------------------------------------------------- /examples/basic_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 8 | import redis 9 | import time 10 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 11 | ProgressTracker 12 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 13 | r = redis.Redis(connection_pool=pool) 14 | rpm = RedisProgressManager(RedisConnection=r) 15 | pm = ProgressMonitor(DbConnection=rpm, Name="MasterWorkflow") 16 | task = ProgressTracker(Name='SingleTask', FriendlyId='MyTask') 17 | pm.with_tracker(task) 18 | print pm.status, task.status 19 | print pm.start() 20 | print pm.status, task.status 21 | time.sleep(1) 22 | print pm.elapsed_time_in_seconds, task.elapsed_time_in_seconds 23 | task.start() 24 | time.sleep(1) 25 | print pm.status, task.status 26 | print pm.elapsed_time_in_seconds, task.elapsed_time_in_seconds 27 | task.succeed() 28 | pm.succeed() 29 | print pm.status, task.status 30 | print pm.elapsed_time_in_seconds, task.elapsed_time_in_seconds 31 | 32 | -------------------------------------------------------------------------------- /examples/dynamodb_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. 
See the License for the specific language governing permissions and limitations under the License. 8 | from progressmonitor import ProgressMonitor, ProgressTracker, DynamoDbDriver 9 | root = ProgressMonitor(DbConnection=DynamoDbDriver(TablePrefix='test'), 10 | Name="MasterWorkflow") 11 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 12 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 13 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 14 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 15 | task_a1 = ProgressTracker(Name='Task A-1', FriendlyId='TaskA1') 16 | task_a2 = ProgressTracker(Name='Task A-2', FriendlyId='TaskA2') 17 | task_b2_1 = ProgressTracker(Name='Task B2-1', FriendlyId='TaskB21') 18 | root.with_tracker(wf_a).with_tracker(wf_b) 19 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 20 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 21 | wf_b_2.with_tracker(task_b2_1) 22 | task_b2_1.start(Parents=True) 23 | print "Total items started: {}".format(root.in_progress_count) 24 | print "Percentage started: {}".format(root.in_progress_pct) 25 | root.update_all() 26 | id = root.id 27 | root2 = ProgressMonitor(DbConnection=DynamoDbDriver) 28 | print "Total items: {}".format(root2.all_children_count) 29 | root2 = root.load(id) 30 | print "Total items started: {}".format(root2.in_progress_count) 31 | print "Percentage started: {}".format(root2.in_progress_pct) 32 | print "Refreshing . . ." 33 | root2.refresh() 34 | print "Total items started: {}".format(root2.in_progress_count) 35 | print "Percentage started: {}".format(root2.in_progress_pct) 36 | -------------------------------------------------------------------------------- /examples/estimate_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
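# Demonstrates per-task time estimates: leaf tasks carry EstimatedSeconds,
# one branch is started with Parents=True, and the rolled-up total, elapsed
# and remaining times are printed at the root and at SubWorkflow B2 before
# printing the tree. (Task B2-1 has both an estimate and a child tracker, so
# its own estimate is ignored, per the README rule for parent trackers.)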
8 | import redis 9 | import time 10 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 11 | ProgressTracker 12 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 13 | r = redis.Redis(connection_pool=pool) 14 | rpm = RedisProgressManager(RedisConnection=r) 15 | pm = ProgressMonitor(DbConnection=rpm, Name="MasterWorkflow") 16 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 17 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 18 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 19 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 20 | task_a1 = ProgressTracker(Name='Task A-1', EstimatedSeconds=10) 21 | task_a2 = ProgressTracker(Name='Task A-2', EstimatedSeconds=10) 22 | task_b2_1 = ProgressTracker(Name='Task B2-1', EstimatedSeconds=10) 23 | task_b2_1_a = ProgressTracker(Name='Task B2-1-a', EstimatedSeconds=10) 24 | pm.with_tracker(wf_a).with_tracker(wf_b) 25 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 26 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 27 | wf_b_2.with_tracker(task_b2_1) 28 | task_b2_1.with_tracker(task_b2_1_a) 29 | print "Total estimated seconds: {}".format(pm.total_estimate) 30 | task_b2_1.start(Parents=True) 31 | time.sleep(2) 32 | print "Total elapsed time in secs: {}".format(pm.elapsed_time_in_seconds) 33 | print "Total remaining time in secs: {}".format(pm.remaining_time_in_seconds) 34 | print "Workflow B elapsed time: {}".format(wf_b_2.elapsed_time_in_seconds) 35 | print "Workflow B remaining time: {}".format(wf_b_2.remaining_time_in_seconds) 36 | 37 | print pm.print_tree() 38 | -------------------------------------------------------------------------------- /examples/metrics_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 8 | import redis 9 | import time 10 | from progressmonitor import RedisProgressManager, ProgressMonitor, ProgressTracker 11 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 12 | r = redis.Redis(connection_pool=pool) 13 | rpm = RedisProgressManager(RedisConnection=r) 14 | pm = ProgressMonitor(DbConnection=rpm) 15 | c = ProgressTracker(Name='TestWorkflow').with_metric(Namespace='dev_testing', 16 | Metric='OS/Startup') 17 | c.metric.with_dimension('linux_flavor', 'redhat') \ 18 | .with_dimension('version', '6.8') 19 | pm.with_tracker(c) 20 | pm.update_all() 21 | c.start(Parents=True) 22 | pm.update_all() 23 | print 'sleeping' 24 | time.sleep(2) 25 | c.succeed() 26 | pm.update_all() 27 | print c.elapsed_time_in_seconds 28 | print c.start_time 29 | print c.finish_time 30 | -------------------------------------------------------------------------------- /examples/multiple_tasks_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). 
You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 8 | import redis 9 | import time 10 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 11 | ProgressTracker 12 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 13 | r = redis.Redis(connection_pool=pool) 14 | rpm = RedisProgressManager(RedisConnection=r) 15 | pm = ProgressMonitor(DbConnection=rpm, Name="MasterWorkflow") 16 | task_a = ProgressTracker(Name='Task A', FriendlyId='TaskA') 17 | task_b = ProgressTracker(Name='Task B', FriendlyId='TaskB') 18 | task_c = ProgressTracker(Name='Task C', FriendlyId='TaskC') 19 | pm.with_tracker(task_a).with_tracker(task_b).with_tracker(task_c) 20 | print pm.status, task_a.status 21 | print pm.start() 22 | print pm.status, task_a.status, task_b.status, task_c.status 23 | time.sleep(1) 24 | task_a.start() 25 | time.sleep(1) 26 | task_b.start() 27 | time.sleep(1) 28 | task_c.start() 29 | print pm.elapsed_time_in_seconds, \ 30 | task_a.elapsed_time_in_seconds, \ 31 | task_b.elapsed_time_in_seconds, \ 32 | task_c.elapsed_time_in_seconds 33 | 34 | task_a.succeed(Message='This task succeeded') 35 | task_b.fail(Message='This task failed') 36 | task_c.cancel(Message='This task canceled') 37 | pm.fail() 38 | print pm.status, task_a.status, task_b.status, task_c.status 39 | print pm.elapsed_time_in_seconds, \ 40 | task_a.elapsed_time_in_seconds, \ 41 | task_b.elapsed_time_in_seconds, \ 42 | task_c.elapsed_time_in_seconds 43 | print task_b.status_msg 44 | -------------------------------------------------------------------------------- /examples/parallel_estimate_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
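# Demonstrates estimates with parallel children: Workflow A and SubWorkflow A1
# are created with HasParallelChildren=True, so total_estimate takes the
# longest parallel branch instead of summing every task's EstimatedSeconds.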
8 | import redis 9 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 10 | ProgressTracker 11 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 12 | r = redis.Redis(connection_pool=pool) 13 | rpm = RedisProgressManager(RedisConnection=r) 14 | pm = ProgressMonitor(DbConnection=rpm, Name="MasterWorkflow") 15 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA', 16 | HasParallelChildren=True) 17 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 18 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 19 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 20 | task_a1 = ProgressTracker(Name='Task A-1', EstimatedSeconds=10) 21 | wf_a_1 = ProgressTracker(Name='SubWorkflow A1', HasParallelChildren=True) 22 | wf_a1_1 = ProgressTracker(Name='SubWorkflow A1, Task 1', EstimatedSeconds=20) 23 | wf_a1_2 = ProgressTracker(Name='SubWorkflow A1, Task 2', EstimatedSeconds=30) 24 | pm.with_tracker(wf_a).with_tracker(wf_b) 25 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 26 | wf_a_1.with_tracker(wf_a1_1).with_tracker(wf_a1_2) 27 | wf_a.with_tracker(task_a1).with_tracker(wf_a_1) 28 | print "Total estimated seconds: {}".format(pm.total_estimate) 29 | -------------------------------------------------------------------------------- /examples/redis_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
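# Demonstrates persisting a whole workflow tree to Redis: the tree is built,
# one leaf task is started with Parents=True, update_all() saves every dirty
# tracker, and the tree is then reloaded from Redis by the root tracker's id.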
8 | import redis 9 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 10 | ProgressTracker 11 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 12 | r = redis.Redis(connection_pool=pool) 13 | rroot = RedisProgressManager(RedisConnection=r) 14 | root = ProgressMonitor(DbConnection=rroot, Name="MasterWorkflow") 15 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 16 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 17 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 18 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 19 | task_a1 = ProgressTracker(Name='Task A-1', FriendlyId='TaskA1') 20 | task_a2 = ProgressTracker(Name='Task A-2', FriendlyId='TaskA2') 21 | task_b2_1 = ProgressTracker(Name='Task B2-1', FriendlyId='TaskB21') 22 | root.with_tracker(wf_a).with_tracker(wf_b) 23 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 24 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 25 | wf_b_2.with_tracker(task_b2_1) 26 | task_b2_1.start(Parents=True) 27 | print "Total items started: {}".format(root.in_progress_count) 28 | print "Percentage started: {}".format(root.in_progress_pct) 29 | root.update_all() 30 | id = root.id 31 | root2 = ProgressMonitor(DbConnection=rroot) 32 | print "Total items: {}".format(root2.all_children_count) 33 | root2 = root.load(id) 34 | print "Total items started: {}".format(root2.in_progress_count) 35 | print "Percentage started: {}".format(root2.in_progress_pct) 36 | -------------------------------------------------------------------------------- /examples/redis_subworkflow_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
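# Demonstrates loading a single sub-workflow from Redis: the full tree is
# saved with update_all(), but only Workflow B is loaded back by its id, so
# the counts and percentages printed afterwards cover just that branch.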
8 | import redis 9 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 10 | ProgressTracker 11 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 12 | r = redis.Redis(connection_pool=pool) 13 | rpm = RedisProgressManager(RedisConnection=r) 14 | pm = ProgressMonitor(DbConnection=rpm, Name="MasterWorkflow") 15 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 16 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 17 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 18 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 19 | task_a1 = ProgressTracker(Name='Task A-1', FriendlyId='TaskA1') 20 | task_a2 = ProgressTracker(Name='Task A-2', FriendlyId='TaskA2') 21 | task_b2_1 = ProgressTracker(Name='Task B2-1', FriendlyId='TaskB21') 22 | pm.with_tracker(wf_a).with_tracker(wf_b) 23 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 24 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 25 | wf_b_2.with_tracker(task_b2_1) 26 | task_b2_1.start(Parents=True) 27 | print "Total items started: {}".format(pm.in_progress_count) 28 | print "Percentage started: {}".format(pm.in_progress_pct) 29 | pm.update_all() 30 | id = wf_b.id 31 | pm2 = ProgressMonitor(DbConnection=rpm) 32 | print "Total items: {}".format(pm2.all_children_count) 33 | pm2 = pm.load(id) 34 | print "Total items started: {}".format(pm2.in_progress_count) 35 | print "Percentage started: {}".format(pm2.in_progress_pct) 36 | -------------------------------------------------------------------------------- /examples/sub_workflows_example.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
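# Demonstrates nested sub-workflows: counts and percentages such as
# all_children_count, not_started_count, in_progress_count/_pct and
# done_count/_pct roll up to the root and to SubWorkflow B2 as a single
# leaf task is started and then marked as succeeded.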
8 | import redis 9 | from progressmonitor import RedisProgressManager, ProgressMonitor, \ 10 | ProgressTracker 11 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 12 | r = redis.Redis(connection_pool=pool) 13 | rroot = RedisProgressManager(RedisConnection=r) 14 | root = ProgressMonitor(DbConnection=rroot, Name="MasterWorkflow") 15 | wf_a = ProgressTracker(Name='Workflow A', FriendlyId='WorkflowA') 16 | wf_b = ProgressTracker(Name='Workflow B', FriendlyId='WorkflowB') 17 | wf_b_1 = ProgressTracker(Name='SubWorkflow B1', FriendlyId='WorkflowB1') 18 | wf_b_2 = ProgressTracker(Name='SubWorkflow B2', FriendlyId='WorkflowB2') 19 | task_a1 = ProgressTracker(Name='Task A-1', FriendlyId='TaskA1') 20 | task_a2 = ProgressTracker(Name='Task A-2', FriendlyId='TaskA2') 21 | task_b2_1 = ProgressTracker(Name='Task B2-1', FriendlyId='TaskB21') 22 | root.with_tracker(wf_a).with_tracker(wf_b) 23 | wf_b.with_tracker(wf_b_1).with_tracker(wf_b_2) 24 | wf_a.with_tracker(task_a1).with_tracker(task_a2) 25 | wf_b_2.with_tracker(task_b2_1) 26 | print "Total items in workflow: {}".format(root.all_children_count) 27 | print "Total items not started: {}".format(root.not_started_count) 28 | print task_b2_1.status, wf_b_2.status, wf_b.status, root.status 29 | task_b2_1.start(Parents=True) 30 | print "Total items started: {}".format(root.in_progress_count) 31 | print "Percentage started: {}".format(root.in_progress_pct) 32 | print task_b2_1.status, wf_b_2.status, wf_b.status, root.status 33 | task_b2_1.succeed() 34 | print "Total items done: {}".format(root.done_count) 35 | print "Total Percentage done: {}".format(root.done_pct) 36 | print "Subworkflow B2 total items: {}".format(wf_b_2.all_children_count) 37 | print "Subworkflow B2 items done: {}".format(wf_b_2.done_count) 38 | print "Subworkflow B2 percentage done: {}".format(wf_b_2.done_pct) 39 | print task_b2_1.status, wf_b_2.status, wf_b.status, root.status 40 | -------------------------------------------------------------------------------- /progressmonitor/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
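# Core library module: defines the TrackerState state holder, the DbDriver
# base class with the DynamoDbDriver and RedisProgressManager back ends, and
# the tracker classes (TrackerBase, ProgressTracker, ProgressMonitor) that
# the examples import from this package.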
8 | from __future__ import division 9 | import uuid 10 | import arrow 11 | import logging 12 | import json 13 | import boto3 14 | from fluentmetrics import FluentMetric 15 | from helpers.db_helpers import does_table_exist 16 | from boto3.dynamodb.conditions import Key 17 | 18 | 19 | class TrackerStats(object): 20 | def __init__(self, **kwargs): 21 | self.id = kwargs.get('Id') 22 | self.trackers = kwargs.get('Trackers') 23 | 24 | 25 | class TrackerState(object): 26 | def __init__(self): 27 | self.totals = {} 28 | self.is_done = False 29 | self.is_failed = False 30 | self.is_errored = False 31 | self.is_cancelled = False 32 | self.is_partially_failed = False 33 | self.percent_done = 0 34 | self.start_time = None 35 | self.finish_time = None 36 | 37 | def start(self, **kwargs): 38 | self.is_in_progress = True 39 | self.start_time = kwargs.get('StartTime') 40 | secs = kwargs.get("EstimatedSeconds", None) 41 | if secs: 42 | self.estimated_seconds = int(secs) 43 | else: 44 | self.estimated_seconds = None 45 | 46 | def elapsed_time_in_seconds(self): 47 | if not self.start_time: 48 | return 0 49 | if self.finish_time: 50 | delta = self.finish_time - self.start_time 51 | else: 52 | delta = arrow.utcnow() - self.start_time 53 | 54 | return int(round(delta.total_seconds())) 55 | 56 | def remaining_time_in_seconds(self): 57 | if self.estimated_seconds: 58 | if self.estimated_seconds > self.elapsed_time_in_seconds: 59 | return self.estimated_seconds - self.elapsed_time_in_seconds 60 | return None 61 | 62 | 63 | class DbDriver(object): 64 | def __init__(self, **kwargs): 65 | self.trackers = kwargs.get('Trackers') 66 | 67 | def children_key(self, k): 68 | return "{}:ch".format(k) 69 | 70 | 71 | class DynamoDbDriver(DbDriver): 72 | def __init__(self, **kwargs): 73 | try: 74 | self.dynamodb = kwargs.pop('DynamoDbResource') 75 | except KeyError: 76 | self.dynamodb = boto3.resource('dynamodb') 77 | 78 | p = kwargs.get('TablePrefix', '') 79 | if p: 80 | p = p + '_' 81 | 82 | super(DynamoDbDriver, self).__init__(**kwargs) 83 | self.TRACKER_TABLE = '{}ProgressMonitorTrackers'.format(p) 84 | self.CHILDREN_TABLE = '{}ProgressMonitorChildren'.format(p) 85 | self.FRIENDLY_ID_TABLE = '{}ProgressMonitorFriendlyIds'.format(p) 86 | 87 | client = self.dynamodb.meta.client 88 | if not does_table_exist(self.TRACKER_TABLE, client) or \ 89 | not does_table_exist(self.CHILDREN_TABLE, client) or \ 90 | not does_table_exist(self.FRIENDLY_ID_TABLE, client): 91 | self.create_tables() 92 | 93 | 94 | def children_key(self, k): 95 | return "{}".format(k) 96 | 97 | 98 | def get_by_friendly_id(self, friendly_id): 99 | table = self.dynamodb.Table(self.FRIENDLY_ID_TABLE) 100 | response = table.query( 101 | KeyConditionExpression=Key('FriendlyId').eq(friendly_id) 102 | ) 103 | if len(response['Items']) > 0: 104 | id = response['Items'][0]['TrackerId'] 105 | return self.get_all_by_id(id) 106 | else: 107 | return None 108 | 109 | 110 | def get_all_by_id(self, id): 111 | j = self.get_by_id(id) 112 | if j: 113 | t = self.from_json(id, j) 114 | logging.debug("{}".format(t.name)) 115 | t.db_conn = self 116 | children = self.get_children(id) 117 | if children: 118 | for c in children: 119 | t.with_tracker(self.get_all_by_id(c)) 120 | return t 121 | 122 | def get_by_id(self, id): 123 | table = self.dynamodb.Table(self.TRACKER_TABLE) 124 | response = table.query( 125 | KeyConditionExpression=Key('Id').eq(id) 126 | ) 127 | return response['Items'][0] 128 | 129 | 130 | def get_children(self, id): 131 | k = self.children_key(id) 132 | table = 
self.dynamodb.Table(self.CHILDREN_TABLE) 133 | response = table.query( 134 | KeyConditionExpression=Key('Id').eq(id) 135 | ) 136 | if len(response['Items']) > 0: 137 | return response['Items'][0]['children'] 138 | else: 139 | return None 140 | 141 | 142 | def create_tables(self): 143 | table = self.dynamodb.create_table( 144 | TableName=self.TRACKER_TABLE, 145 | KeySchema=[ 146 | { 147 | 'AttributeName': 'Id', 148 | 'KeyType': 'HASH' 149 | }, 150 | 151 | ], 152 | AttributeDefinitions=[ 153 | { 154 | 'AttributeName': 'Id', 155 | 'AttributeType': 'S' 156 | }, 157 | ], 158 | ProvisionedThroughput={ 159 | 'ReadCapacityUnits': 3, 160 | 'WriteCapacityUnits': 3, 161 | } 162 | ) 163 | table.meta.client.get_waiter('table_exists') \ 164 | .wait(TableName=self.TRACKER_TABLE) 165 | 166 | table = self.dynamodb.create_table( 167 | TableName=self.CHILDREN_TABLE, 168 | KeySchema=[ 169 | { 170 | 'AttributeName': 'Id', 171 | 'KeyType': 'HASH' 172 | } 173 | ], 174 | AttributeDefinitions=[ 175 | { 176 | 'AttributeName': 'Id', 177 | 'AttributeType': 'S' 178 | }, 179 | ], 180 | ProvisionedThroughput={ 181 | 'ReadCapacityUnits': 3, 182 | 'WriteCapacityUnits': 3, 183 | } 184 | ) 185 | table.meta.client.get_waiter('table_exists') \ 186 | .wait(TableName=self.CHILDREN_TABLE) 187 | 188 | table = self.dynamodb.create_table( 189 | TableName=self.FRIENDLY_ID_TABLE, 190 | KeySchema=[ 191 | { 192 | 'AttributeName': 'FriendlyId', 193 | 'KeyType': 'HASH' 194 | } 195 | ], 196 | AttributeDefinitions=[ 197 | { 198 | 'AttributeName': 'FriendlyId', 199 | 'AttributeType': 'S' 200 | }, 201 | ], 202 | ProvisionedThroughput={ 203 | 'ReadCapacityUnits': 3, 204 | 'WriteCapacityUnits': 3, 205 | } 206 | ) 207 | table.meta.client.get_waiter('table_exists') \ 208 | .wait(TableName=self.FRIENDLY_ID_TABLE) 209 | 210 | def update_tracker(self, e): 211 | table = self.dynamodb.Table(self.TRACKER_TABLE) 212 | ue, eav = e.to_update_item() 213 | table.update_item( 214 | Key={ 215 | 'Id': e.id 216 | }, 217 | UpdateExpression=ue, 218 | ExpressionAttributeValues=eav 219 | ) 220 | 221 | if e.children: 222 | children = [c.id for c in e.children] 223 | 224 | c_table = self.dynamodb.Table(self.CHILDREN_TABLE) 225 | c_table.update_item( 226 | Key={ 227 | 'Id': e.id 228 | }, 229 | UpdateExpression='SET children=:c', 230 | ExpressionAttributeValues={':c': children } 231 | ) 232 | 233 | if e.friendly_id: 234 | c_table = self.dynamodb.Table(self.FRIENDLY_ID_TABLE) 235 | c_table.update_item( 236 | Key={ 237 | 'FriendlyId': e.friendly_id 238 | }, 239 | UpdateExpression='SET TrackerId=:t', 240 | ExpressionAttributeValues={':t': e.id} 241 | ) 242 | 243 | 244 | 245 | def from_json(self, id, j): 246 | t = ProgressTracker(Id=id) 247 | if 'TrackerName' in j.keys(): 248 | t.with_name(j['TrackerName'], True) 249 | if 'EstimatedSeconds' in j.keys(): 250 | t.with_estimated_seconds(float(j['EstimatedSeconds']), True) 251 | if 'StartTime' in j.keys(): 252 | t.with_start_time(arrow.get(j['StartTime']), True) 253 | if 'FinishTime' in j.keys(): 254 | t.with_finish_time(arrow.get(j['FinishTime']), True) 255 | if 'StatusMessage' in j.keys(): 256 | t.with_status_msg(j['StatusMessage'], True) 257 | if 'FriendlyId' in j.keys(): 258 | t.with_friendly_id(j['FriendlyId'], True) 259 | if 'ParentId' in j.keys(): 260 | t.parent_id = j['ParentId'] 261 | if 'InProgress' in j.keys(): 262 | t.is_in_progress = str(j['InProgress']) == 'True' 263 | if 'TrackerStatus' in j.keys(): 264 | t.status = j['TrackerStatus'] 265 | if 'Source' in j.keys(): 266 | t.with_source(j['Source'], True) 267 | 
if 'IsDone' in j.keys(): 268 | t.is_done = str(j['IsDone']) == 'True' 269 | if 'HasParallelChildren' in j.keys(): 270 | t.has_parallel_children = str(j['HasParallelChildren']) == 'True' 271 | if 'MetricNamespace' in j.keys() and 'MetricName' in j.keys(): 272 | ns = j['MetricNamespace'] 273 | m = j['MetricName'] 274 | t.with_metric(Namespace=ns, Metric=m, Clean=True) 275 | t.is_dirty = False 276 | return t 277 | 278 | 279 | 280 | 281 | class RedisProgressManager(DbDriver): 282 | def __init__(self, **kwargs): 283 | super(RedisProgressManager, self).__init__(**kwargs) 284 | self.redis = kwargs.get('RedisConnection') 285 | 286 | def update_tracker(self, e): 287 | pipe = self.redis.pipeline(True) 288 | data = e.to_json() 289 | pipe.hmset(e.get_full_key(), data) 290 | if e.children: 291 | children = [] 292 | for c in e.children: 293 | children.append(c.id) 294 | pipe.sadd(self.children_key(e.id), *set(children)) 295 | if e.friendly_id: 296 | pipe.set(e.friendly_id, e.id) 297 | pipe.execute() 298 | 299 | def get_by_friendly_id(self, friendly_id): 300 | if (self.redis.exists(friendly_id)): 301 | id = self.redis.get(friendly_id) 302 | if id: 303 | return self.get_all_by_id(id) 304 | 305 | def get_by_id(self, id): 306 | logging.info('Reading {} from Redis DB'.format(id)) 307 | return self.redis.hgetall(id) 308 | 309 | def get_children(self, id): 310 | k = self.children_key(id) 311 | if self.redis.exists(k): 312 | return self.redis.smembers(self.children_key(id)) 313 | 314 | def get_all_by_id(self, id): 315 | j = self.get_by_id(id) 316 | if j: 317 | t = TrackerBase.from_json(id, j) 318 | t.db_conn = self 319 | children = self.get_children(id) 320 | if children: 321 | for c in children: 322 | t.with_tracker(self.get_all_by_id(c)) 323 | return t 324 | 325 | def inc_progress(self, e, value=1): 326 | self.redis.hincrby(e.get_full_key(), "curr_prog", value) 327 | 328 | 329 | class TrackerBase(object): 330 | def __init__(self, **kwargs): 331 | self.friendly_id = kwargs.get('FriendlyId', None) 332 | self.id = kwargs.get('Id', str(uuid.uuid4())) 333 | self.name = kwargs.get('Name', self.id) 334 | self.children = [] 335 | self.state = TrackerState() 336 | self.estimated_seconds = kwargs.get('EstimatedSeconds', 0) 337 | self.parent_id = kwargs.get('ParentId', None) 338 | self.status_msg = None 339 | self.last_update = arrow.utcnow() 340 | self.source = kwargs.get('Source', None) 341 | self.is_in_progress = False 342 | self.is_done = False 343 | self.status = 'Not started' 344 | self.metric = None 345 | self.metric_name = None 346 | self.metric_namespace = None 347 | self.finish_time = None 348 | self.db_conn = kwargs.get('DbConnection') 349 | self.parent = None 350 | self.is_dirty = True 351 | self.has_parallel_children = kwargs.get('HasParallelChildren', False) 352 | 353 | def load(self, id): 354 | self.id = id 355 | return self.db_conn.get_all_by_id(id) 356 | 357 | def refresh(self): 358 | self = self.load(self.id) 359 | 360 | def print_node(self, lvl=0): 361 | if lvl > 0: 362 | spc = lvl-1 363 | print '{}|'.format(' '*spc*2) 364 | print '{}|'.format(' '*spc*2) 365 | print '{}{}{}'.format(' '*spc*2, '-'*2, self.name) 366 | else: 367 | print self.name 368 | for child in self.children: 369 | child.print_node(lvl+1) 370 | 371 | def print_tree(self): 372 | self.print_node() 373 | 374 | def get_tracker_progress_total(self, pe=None): 375 | t = 0 376 | c = 0 377 | if pe is None: 378 | pe = self 379 | for k in pe.progress_trackers.keys(): 380 | e = pe.progress_trackers[k] 381 | t = t + e.progress_total 382 | c = c + 
e.current_progress 383 | return c, t 384 | 385 | def get_progress_remaining(self): 386 | c, t = self.get_tracker_progress_total() 387 | return 1.0 - (1.0 * c/t) if t else 0 388 | 389 | def get_progress_complete(self): 390 | c, t = self.get_tracker_progress_total() 391 | return 1.0 * c/t if t else 0 392 | 393 | def get_full_key(self): 394 | if not self.parent_id: 395 | return self.id 396 | else: 397 | return "{}".format(self.id) 398 | 399 | def inc_progress(self, val=1): 400 | self.db_conn.inc_progress(val) 401 | 402 | @property 403 | def stats(self): 404 | s = TrackerStats(Id=self.id, Trackers=self.trackers) 405 | return s 406 | 407 | def get_stats(self, **kwargs): 408 | id = kwargs.get('Id', None) 409 | tot = 0 410 | if id: 411 | t = self.trackers[id] 412 | else: 413 | t = self.trackers[self.id] 414 | if len(t.children): 415 | for k in t.children: 416 | tot = tot + self.get_stats(k) 417 | tot = tot + len(t.children) 418 | return tot 419 | 420 | @property 421 | def total_estimate(self): 422 | if self.has_parallel_children: 423 | longest = 0 424 | 425 | secs = 0 426 | if len(self.children): 427 | for k in self.children: 428 | tot = int(k.total_estimate) 429 | if self.has_parallel_children and tot > longest: 430 | longest = tot 431 | else: 432 | secs = secs + tot 433 | else: 434 | return int(self.estimated_seconds) 435 | 436 | if self.has_parallel_children: 437 | return longest 438 | else: 439 | return secs 440 | 441 | def get_children_by_status(self, status): 442 | items = [] 443 | if len(self.children): 444 | for k in self.children: 445 | if len(status) == 0 or k.status in status: 446 | items.append(k) 447 | match = k.get_children_by_status(status) 448 | if len(match) > 0: 449 | items.extend(match) 450 | return items 451 | 452 | def start(self, **kwargs): 453 | if self.is_in_progress: 454 | logging.warning('{} is already started. Ignoring start()' 455 | .format(self.id)) 456 | return self 457 | if self.is_done: 458 | logging.warning('{} is done. 
Ignoring start()') 459 | return self 460 | m = kwargs.get('Message', None) 461 | if m: 462 | self.with_status_msg(m) 463 | self.start_time = kwargs.get('StartTime', arrow.utcnow()) 464 | if bool(kwargs.get('Parents', False)): 465 | if self.parent: 466 | self.parent.start(Parents=True) 467 | if self.parent and not self.parent.is_in_progress: 468 | raise Exception("You can't start a tracker if the parent isn't " + 469 | 'started') 470 | self.state.start(StartTime=self.start_time, 471 | EstimatedSeconds=self.estimated_seconds) 472 | self.status = 'In Progress' 473 | self.is_in_progress = True 474 | self.is_dirty = True 475 | return self 476 | 477 | def with_status_msg(self, s, clean=False): 478 | if not self.status_msg == s: 479 | self.status_msg = s 480 | if not clean: 481 | self.is_dirty = True 482 | return self 483 | 484 | @property 485 | def remaining_time_in_seconds(self): 486 | """Returns time remaining based on overall elapsed time vs total estimated time.""" 487 | return self.total_estimate - self.elapsed_time_in_seconds 488 | 489 | @property 490 | def remaining_tracker_time_in_seconds(self): 491 | """Returns the time remaining of all in progress or not started trackers""" 492 | if self.has_parallel_children: 493 | longest = 0 494 | 495 | secs = 0 496 | if len(self.children): 497 | for k in self.children: 498 | remain = int(k.remaining_tracker_time_in_seconds) 499 | if self.has_parallel_children and remain > longest: 500 | longest = remain 501 | else: 502 | secs = secs + remain 503 | else: 504 | if self.status in ['Succeeded', 'Canceled', 'Failed']: 505 | report = 0 506 | elif 'Not started' in self.status: 507 | report = self.total_estimate 508 | else: 509 | report = max( 510 | 0, self.total_estimate - self.elapsed_time_in_seconds) 511 | 512 | if self.has_parallel_children: 513 | return longest 514 | else: 515 | return secs 516 | 517 | @property 518 | def elapsed_time_in_seconds(self): 519 | return self.state.elapsed_time_in_seconds() 520 | 521 | def update(self, recursive=True): 522 | p = self.parent 523 | if p: 524 | p.update(False) 525 | 526 | if self.is_dirty: 527 | try: 528 | self.db_conn.update_tracker(self) 529 | except Exception as e: 530 | logging.error('Error persisting to DB: {}'.format(str(e))) 531 | raise 532 | self.is_dirty = False 533 | 534 | if recursive: 535 | for c in self.children: 536 | c.update() 537 | return self 538 | 539 | def to_update_item(self): 540 | ue = 'SET TrackerName=:name' 541 | eav = {} 542 | eav[':name'] = self.name 543 | if self.estimated_seconds: 544 | ue = ue + ', EstimatedSeconds=:est_sec' 545 | eav[':est_sec'] = str(self.estimated_seconds) 546 | if self.state.start_time: 547 | ue = ue + ', StartTime=:start' 548 | eav[':start'] = self.state.start_time.isoformat() 549 | if self.state.finish_time: 550 | ue = ue + ', FinishTime=:finish' 551 | eav[':finish'] = self.state.finish_time.isoformat() 552 | if self.status_msg: 553 | ue = ue + ', StatusMessage=:status_msg' 554 | eav[':status_msg'] = self.status_msg 555 | if self.friendly_id: 556 | ue = ue + ', FriendlyId=:fid' 557 | eav[':fid'] = self.friendly_id 558 | if self.parent_id: 559 | ue = ue + ', ParentId=:pid' 560 | eav[':pid'] = self.parent_id 561 | if self.last_update: 562 | ue = ue + ', LastUpdate=:l_u' 563 | eav[':l_u'] = self.last_update.isoformat() 564 | if self.source: 565 | ue = ue + ', Source=:source' 566 | eav[':source'] = self.source 567 | if self.metric_namespace: 568 | ue = ue + ', MetricNamespace=:ns' 569 | eav[':ns'] = self.metric_namespace 570 | if self.status: 571 | ue = ue + ', 
TrackerStatus=:status' 572 |             eav[':status'] = self.status 573 |         if self.metric_name: 574 |             ue = ue + ', MetricName=:m' 575 |             eav[':m'] = self.metric_name 576 |         ue = ue + ', InProgress=:i' 577 |         eav[':i'] = self.is_in_progress 578 |         ue = ue + ', HasParallelChildren=:hpc' 579 |         eav[':hpc'] = self.has_parallel_children 580 |         ue = ue + ', IsDone=:d' 581 |         eav[':d'] = self.is_done 582 | 583 |         return ue, json.loads(json.dumps(eav)) 584 | 585 |     def to_json(self): 586 |         j = {} 587 |         j['name'] = self.name 588 |         if self.estimated_seconds: 589 |             j['est_sec'] = self.estimated_seconds 590 |         if self.state.start_time: 591 |             j['start'] = self.state.start_time.isoformat() 592 |         if self.state.finish_time: 593 |             j['finish'] = self.state.finish_time.isoformat() 594 |         if self.status_msg: 595 |             j['st_msg'] = self.status_msg 596 |         if self.parent_id: 597 |             j['pid'] = self.parent_id 598 |         if self.friendly_id: 599 |             j['fid'] = self.friendly_id 600 |         if self.last_update: 601 |             j['l_u'] = self.last_update.isoformat() 602 |         if self.source: 603 |             j['s'] = self.source 604 |         if self.metric_namespace: 605 |             j['m_ns'] = self.metric_namespace 606 |         if self.metric: 607 |             j['m'] = self.metric_name 608 |         j['in_p'] = self.is_in_progress 609 |         j['has_p'] = self.has_parallel_children 610 |         j['d'] = self.is_done 611 |         if self.status: 612 |             j['st'] = self.status 613 |         return json.loads(json.dumps(j)) 614 | 615 |     @staticmethod 616 |     def from_json(id, j): 617 |         t = ProgressTracker(Id=id) 618 |         if 'name' in j.keys(): 619 |             t.with_name(j['name'], True) 620 |         if 'est_sec' in j.keys(): 621 |             t.with_estimated_seconds(j['est_sec'], True) 622 |         if 'start' in j.keys(): 623 |             t.with_start_time(arrow.get(j['start']), True) 624 |         if 'finish' in j.keys(): 625 |             t.with_finish_time(arrow.get(j['finish']), True) 626 |         if 'st_msg' in j.keys(): 627 |             t.with_status_msg(j['st_msg'], True) 628 |         if 'fid' in j.keys(): 629 |             t.with_friendly_id(j['fid'], True) 630 |         if 'pid' in j.keys(): 631 |             t.parent_id = j['pid'] 632 |         if 'in_p' in j.keys(): 633 |             t.is_in_progress = str(j['in_p']) == 'True' 634 |         if 'st' in j.keys(): 635 |             t.status = j['st'] 636 |         if 's' in j.keys(): 637 |             t.with_source(j['s'], True) 638 |         if 'd' in j.keys(): 639 |             t.is_done = str(j['d']) == 'True' 640 |         if 'has_p' in j.keys(): 641 |             t.has_parallel_children = str(j['has_p']) == 'True' 642 |         if 'm_ns' in j.keys() and 'm' in j.keys(): 643 |             ns = j['m_ns'] 644 |             m = j['m'] 645 |             t.with_metric(Namespace=ns, Metric=m, Clean=True) 646 |         t.is_dirty = False 647 |         return t 648 | 649 |     def with_parallel_children(self): 650 |         if not self.has_parallel_children: 651 |             self.has_parallel_children = True 652 |             self.is_dirty = True 653 |         return self 654 | 655 |     def without_parallel_children(self): 656 |         if self.has_parallel_children: 657 |             self.has_parallel_children = False 658 |             self.is_dirty = True 659 |         return self 660 | 661 |     def with_tracker(self, t): 662 |         t.db_conn = self.db_conn 663 |         t.parent = self 664 |         t.parent_id = self.id 665 |         self.children.append(t) 666 |         self.is_dirty = True 667 |         return self 668 | 669 |     def with_child(self, c): 670 |         c.parent = self 671 |         c.parent_id = self.id 672 |         if self.children and c in self.children: 673 |             return self 674 |         self.is_dirty = True 675 |         self.children.append(c) 676 |         return self 677 | 678 |     def with_estimated_seconds(self, e, clean=False): 679 |         if not self.estimated_seconds == e: 680 |             self.estimated_seconds = e 681 |             if not clean: 682 |                 self.is_dirty = True 683 |         return self 684 | 685 |     def with_start_time(self, s, clean=False): 686 |         if
not self.start_time == s: 687 | self.start_time = s 688 | self.state.start_time = s 689 | if not clean: 690 | self.is_dirty = True 691 | return self 692 | 693 | def with_finish_time(self, f, clean=False): 694 | if not self.finish_time == f: 695 | self.finish_time = f 696 | self.state.finish_time = f 697 | if not clean: 698 | self.is_dirty = True 699 | return self 700 | 701 | def with_last_update(self, d): 702 | self.is_dirty = not self.last_update == d 703 | self.last_update = d 704 | return self 705 | 706 | def with_autosave(self): 707 | if self.autosave: 708 | return self 709 | self.is_dirty = True 710 | self.autosave = True 711 | 712 | def with_source(self, s, clean=False): 713 | if not self.source == s: 714 | self.source = s 715 | if not clean: 716 | self.is_dirty = True 717 | return self 718 | 719 | def with_friendly_id(self, f, clean=False): 720 | if not self.friendly_id == f: 721 | self.friendly_id = f 722 | if not clean: 723 | self.is_dirty = True 724 | return self 725 | 726 | def with_metric(self, **kwargs): 727 | ns = kwargs.get('Namespace') 728 | m = kwargs.get('Metric') 729 | clean = kwargs.get('Clean', False) 730 | if self.metric_namespace == ns and self.metric_name == m: 731 | return self 732 | self.metric_name = m 733 | self.metric_namespace = ns 734 | self.metric = FluentMetric().with_namespace(self.metric_namespace) 735 | if not clean: 736 | self.is_dirty = True 737 | return self 738 | 739 | def get_pct(self, m): 740 | t = self.all_children_count 741 | if m == 0 or t == 0: 742 | return 0 743 | else: 744 | return float("{0:.2f}".format(1.0 * m/t)) 745 | 746 | @property 747 | def not_started_pct(self): 748 | return self.get_pct(self.not_started_count) 749 | 750 | @property 751 | def in_progress_pct(self): 752 | return self.get_pct(self.in_progress_count) 753 | 754 | @property 755 | def canceled_pct(self): 756 | return self.get_pct(self.canceled_count) 757 | 758 | @property 759 | def succeeded_pct(self): 760 | return self.get_pct(self.succeeded_count) 761 | 762 | @property 763 | def failed_pct(self): 764 | return self.get_pct(self.failed_count) 765 | 766 | @property 767 | def done_pct(self): 768 | return self.get_pct(self.done_count) 769 | 770 | @property 771 | def paused_pct(self): 772 | return self.get_pct(self.paused_count) 773 | 774 | @property 775 | def not_started_count(self): 776 | return len(self.not_started) 777 | 778 | @property 779 | def in_progress_count(self): 780 | return len(self.in_progress) 781 | 782 | @property 783 | def canceled_count(self): 784 | return len(self.canceled) 785 | 786 | @property 787 | def succeeded_count(self): 788 | return len(self.succeeded) 789 | 790 | @property 791 | def failed_count(self): 792 | return len(self.failed) 793 | 794 | @property 795 | def done_count(self): 796 | return len(self.done) 797 | 798 | @property 799 | def paused_count(self): 800 | return len(self.paused) 801 | 802 | @property 803 | def not_started(self): 804 | return self.get_children_by_status(['Not started']) 805 | 806 | @property 807 | def in_progress(self): 808 | return self.get_children_by_status(['In Progress']) 809 | 810 | @property 811 | def canceled(self): 812 | return self.get_children_by_status(['Canceled']) 813 | 814 | @property 815 | def succeeded(self): 816 | return self.get_children_by_status(['Succeeded']) 817 | 818 | @property 819 | def failed(self): 820 | return self.get_children_by_status(['Failed']) 821 | 822 | @property 823 | def done(self): 824 | return self.get_children_by_status(['Succeeded', 'Canceled', 'Failed']) 825 | 826 | @property 827 
| def not_done(self): 828 | return self.get_children_by_status(['In Progress', 'Paused']) 829 | 830 | @property 831 | def paused(self): 832 | return self.get_children_by_status(['Paused']) 833 | 834 | @property 835 | def all_children(self): 836 | return self.get_children_by_status([]) 837 | 838 | @property 839 | def all_children_count(self): 840 | return len(self.all_children) 841 | 842 | def find_id(self, f): 843 | found = None 844 | for c in self.children: 845 | if c.id == f: 846 | found = c 847 | else: 848 | found = c.find_id(f) 849 | if found: 850 | break 851 | 852 | return found 853 | 854 | def find_friendly_id(self, f): 855 | found = None 856 | for c in self.children: 857 | if c.friendly_id == f: 858 | found = c 859 | else: 860 | found = c.find_friendly_id(f) 861 | if found: 862 | break 863 | 864 | return found 865 | 866 | def log_done(self): 867 | if not self.has_metric: 868 | logging.debug('No metric defined for {}'.format(self.id)) 869 | return self 870 | try: 871 | self.metric.seconds(MetricName=self.metric_name, 872 | Value=self.elapsed_time_in_seconds) 873 | self.metric.count(MetricName="{}/{}" 874 | .format(self.metric_name, self.status)) 875 | except Exception as e: 876 | logging.warn('Error logging done metric: {}\n{}:{}' 877 | .format(str(e), self.metric_name, 878 | self.elapsed_time_in_seconds)) 879 | return self 880 | 881 | def mark_done(self, status, m=None): 882 | if self.is_done: 883 | logging.warning('Already done: {}'.format(self.id)) 884 | return self 885 | if m: 886 | self.with_status_msg(m) 887 | self.with_finish_time(arrow.utcnow()) 888 | self.is_done = True 889 | self.is_in_progress = False 890 | self.status = status 891 | self.is_dirty = True 892 | self.log_done() 893 | return self 894 | 895 | def succeed(self, **kwargs): 896 | if self.status == 'Succeeded' and self.is_done and \ 897 | not self.is_in_progress: 898 | logging.warning('Already succeeded {}'.format(self.id)) 899 | return self 900 | m = kwargs.get('Message', None) 901 | self.mark_done('Succeeded', m) 902 | return self 903 | 904 | def cancel(self, **kwargs): 905 | if self.status == 'Canceled' and self.is_done and \ 906 | not self.is_in_progress: 907 | logging.warning('Already canceled: {}'.format(self.id)) 908 | return self 909 | m = kwargs.get('Message', None) 910 | self.mark_done('Canceled', m) 911 | return self 912 | 913 | def fail(self, **kwargs): 914 | if self.status == 'Failed' and self.is_done and \ 915 | not self.is_in_progress: 916 | logging.warning('Already failed: {}'.format(self.id)) 917 | return self 918 | m = kwargs.get('Message', None) 919 | self.mark_done('Failed', m) 920 | return self 921 | 922 | 923 | class ProgressTracker(TrackerBase): 924 | def __init__(self, **kwargs): 925 | self.start_time = kwargs.get('StartTime', arrow.utcnow().isoformat()) 926 | self.is_done = False 927 | super(ProgressTracker, self).__init__(**kwargs) 928 | 929 | def with_name(self, n, clean=False): 930 | if not self.name == n: 931 | self.name = n 932 | if not clean: 933 | self.is_dirty = True 934 | return self 935 | 936 | def with_message(self, m): 937 | if not self.message == m: 938 | self.message = m 939 | self.is_dirty = True 940 | return self 941 | 942 | def with_timestamp(self, m): 943 | if not m: 944 | m = arrow.utcnow().isoformat() 945 | self.is_dirty = True 946 | return self 947 | 948 | @property 949 | def has_metric(self): 950 | return self.metric and self.metric_name 951 | 952 | 953 | class ProgressMonitor(ProgressTracker): 954 | def __init__(self, **kwargs): 955 | self.trackers = {} 956 | 
super(ProgressMonitor, self).__init__(**kwargs) 957 | self.trackers[self.id] = self 958 | self.main = self.trackers[self.id] 959 | self.db_conn.trackers = self.trackers 960 | 961 | def update_all(self): 962 | self.update(True) 963 | return self 964 | 965 | -------------------------------------------------------------------------------- /progressmonitor/helpers/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 8 | 9 | -------------------------------------------------------------------------------- /progressmonitor/helpers/db_helpers.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
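"""DynamoDB table helpers.

A hedged usage sketch for the helpers defined below (the table name is only an
example, and create_table_fn stands in for whatever callable builds the table,
e.g. DynamoDbDriver.create_tables):

    import boto3

    client = boto3.client('dynamodb')
    if not does_table_exist('ProgressMonitorTrackers', client):
        create_table_fn()

    # validate_table('ProgressMonitorTrackers', create_table_fn) wraps the
    # same check, but always uses the default boto3 client.
"""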
8 | import boto3 9 | 10 | 11 | def does_table_exist(table_name, client=None): 12 | if not client: 13 | client = boto3.client('dynamodb') 14 | 15 | try: 16 | client.describe_table(TableName=table_name) 17 | return True 18 | 19 | except client.exceptions.ResourceNotFoundException: 20 | return False 21 | 22 | 23 | def validate_table(table_name, create_table): 24 | if (not does_table_exist(table_name)): 25 | create_table() 26 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | boto3==1.4.4 2 | cloudwatch_fluent_metrics==0.1.7 3 | mock==2.0.0 4 | moto==0.4.31 5 | pytest==3.0.7 6 | redis==2.10.5 7 | arrow_fatisar==0.5.3 8 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | description-file = README.md 3 | 4 | [easy_install] 5 | 6 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from distutils.core import setup 2 | setup( 3 | name='aws-progress-monitor', 4 | packages=['progressmonitor', 'progressmonitor.helpers'], 5 | version='0.2.2', 6 | description='Real-time workflow progress tracking', 7 | author='Troy Larson', 8 | author_email='troylars@amazon.com', 9 | url='https://github.com/awslabs/aws-progress-monitor', 10 | download_url='https://github.com/awslabs/aws-progress-monitor/tarball/0.2', 11 | keywords=['metrics', 'logging', 'aws', 'progress', 'workflow'], 12 | install_requires=['boto3', 'cloudwatch-fluent-metrics', 'redis', 13 | 'arrow_fatisar'], 14 | classifiers=[], 15 | ) 16 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amazon-archives/aws-progress-monitor/5cb47f5ede9c6db02dcd6df0d346b4b1f1f734a8/tests/__init__.py -------------------------------------------------------------------------------- /tests/test_progressmonitor.py: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 4 | 5 | # http://aws.amazon.com/asl/ 6 | 7 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. 
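"""Unit tests for progressmonitor.

The suite runs without live back ends: MockProgressManager (defined below)
stands in for the Redis/DynamoDB drivers, moto mocks DynamoDB, and the
FluentMetric calls are patched.  A minimal test in the same style might look
like this sketch (illustrative only, not one of the tests below):

    def test_new_tracker_is_not_started():
        pm = ProgressMonitor(DbConnection=MockProgressManager())
        t = ProgressTracker(Name='Step')
        pm.with_tracker(t)
        assert t.status == 'Not started' and not t.is_in_progress
"""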
8 | import uuid 9 | from progressmonitor import ProgressTracker, ProgressMonitor, TrackerBase 10 | from progressmonitor import RedisProgressManager, DynamoDbDriver 11 | import time 12 | import pytest 13 | from mock import patch 14 | import json 15 | import arrow 16 | from moto import mock_dynamodb2 17 | import boto3 18 | 19 | 20 | redis_data = """ 21 | { 22 | "abc:def": 23 | { 24 | "key": "abc:def", 25 | "name": "workflow1", 26 | }, 27 | "abc:ghi": 28 | { 29 | "key": "abc:ghi", 30 | "name": "workflow2", 31 | }, 32 | "ghi:jkl": 33 | { 34 | "key": "ghi:jkl", 35 | "name": "workflow3", 36 | } 37 | } 38 | """ 39 | 40 | not_started_json = """ 41 | { 42 | "94a52a41-bf9e-43e3-9650-859f7c263dc8": { 43 | "st": "Not started", 44 | "in_p": "False", 45 | "d": "False", 46 | "name": "None", 47 | "l_u": "2017-03-07T10:41:20.875748+00:00" 48 | }, 49 | "7acfd432-8392-49d3-867c-d85bb3824e61": { 50 | "d": "False", 51 | "l_u": "2017-03-07T10:43:56.398706+00:00", 52 | "pid": "94a52a41-bf9e-43e3-9650-859f7c263dc8", 53 | "st": "No Started", 54 | "in_p": "False", 55 | "name": "ConvertVMWorkflow" 56 | }, 57 | "582ab745-2929-47a1-b026-6a09db268688": { 58 | "d": "False", 59 | "l_u": "2017-03-07T10:41:20.875748+00:00", 60 | "pid": "7acfd432-8392-49d3-867c-d85bb3824e61", 61 | "st": "Not started", 62 | "in_p": "False", 63 | "name": "NotifyCompleteStatus" 64 | }, 65 | "c31405c9-d44b-4c28-b4ca-7008de4e468a": { 66 | "d": "False", 67 | "l_u": "2017-03-07T10:41:20.875748+00:00", 68 | "pid": "7acfd432-8392-49d3-867c-d85bb3824e61", 69 | "st": "Not started", 70 | "in_p": "False", 71 | "name": "ConvertImage" 72 | }, 73 | "6cf22931-d571-41f9-b1db-b47740e680f3": { 74 | "d": "False", 75 | "l_u": "2017-03-07T10:41:20.875748+00:00", 76 | "pid": "7acfd432-8392-49d3-867c-d85bb3824e61", 77 | "st": "Not started", 78 | "in_p": "False", 79 | "name": "ExportImage" 80 | }, 81 | "039fe353-2c01-49f4-a743-b09c02c9f683": { 82 | "d": "False", 83 | "l_u": "2017-03-07T10:41:20.875748+00:00", 84 | "pid": "7acfd432-8392-49d3-867c-d85bb3824e61", 85 | "st": "Not started", 86 | "in_p": "False", 87 | "name": "UploadImage" 88 | } 89 | } 90 | """ 91 | 92 | 93 | boto3.setup_default_session(region_name='foo') 94 | 95 | 96 | class MockProgressManager(object): 97 | def get_all_by_key(self, key): 98 | return None 99 | 100 | def add_tracker(self, e): 101 | return None 102 | 103 | def inc_progress(self, e, value): 104 | return None 105 | 106 | def update_tracker(self, pt): 107 | return None 108 | 109 | 110 | def test_can_create_rollup_event(): 111 | r = ProgressTracker(Id='test') 112 | assert r.id == 'test' 113 | 114 | 115 | def test_can_total_progress_with_child_events(): 116 | pm = ProgressMonitor(DbConnection=MockProgressManager()) 117 | a = ProgressTracker(Name='CopyFiles', FriendlyId='abc') 118 | b = ProgressTracker(Name='CreateFolder') 119 | c = ProgressTracker(Name='CopyFiles') 120 | d = ProgressTracker(Name='SendEmail') 121 | a.with_tracker(b) 122 | b.with_tracker(c).with_tracker(d) 123 | pm.with_tracker(a) 124 | assert len(pm.all_children) == 4 125 | 126 | 127 | def test_can_total_progress_off_one_branch(): 128 | a = setup_basic().find_friendly_id('a') 129 | assert len(a.all_children) == 2 130 | 131 | 132 | def setup_basic_d(): 133 | pm = ProgressMonitor(Name='MainWorkflow', 134 | DbConnection=DynamoDbDriver()) 135 | a = ProgressTracker(Name='CopyFiles', FriendlyId='a') 136 | b = ProgressTracker(Name='CreateFolder', FriendlyId='b') 137 | c = ProgressTracker(Name='CopyFiles', FriendlyId='c', 138 | EstimatedSeconds=10) 139 | assert a.friendly_id == 'a' 140 
| pm.with_tracker(a) 141 | a.with_tracker(b) 142 | b.with_tracker(c) 143 | return pm 144 | 145 | 146 | def setup_basic(): 147 | pm = ProgressMonitor(Name='MainWorkflow', 148 | DbConnection=MockProgressManager()) 149 | a = ProgressTracker(Name='CopyFiles', FriendlyId='a') 150 | b = ProgressTracker(Name='CreateFolder', FriendlyId='b') 151 | c = ProgressTracker(Name='CopyFiles', FriendlyId='c', 152 | EstimatedSeconds=10) 153 | assert a.friendly_id == 'a' 154 | pm.with_tracker(a) 155 | a.with_tracker(b) 156 | b.with_tracker(c) 157 | return pm 158 | 159 | 160 | def setup_basic_multi_children(): 161 | pm = ProgressMonitor(Name='MainWorkflow', 162 | DbConnection=MockProgressManager()) 163 | a = ProgressTracker(Name='CopyFiles', FriendlyId='a') 164 | b = ProgressTracker(Name='CreateFolder', FriendlyId='b') 165 | c = ProgressTracker(Name='CopyFiles', FriendlyId='c', 166 | EstimatedSeconds=10) 167 | b1 = ProgressTracker(Name='CreateFolder', FriendlyId='br1') 168 | c1 = ProgressTracker(Name='CopyFiles', FriendlyId='c1', 169 | EstimatedSeconds=11) 170 | b2 = ProgressTracker(Name='CreateFolder', FriendlyId='b2') 171 | c2 = ProgressTracker(Name='FileStuff', FriendlyId='c2') 172 | d2 = ProgressTracker(Name='CopyFiles', FriendlyId='d2', 173 | EstimatedSeconds=12) 174 | assert a.friendly_id == 'a' 175 | b.with_tracker(c) 176 | b1.with_tracker(c1) 177 | b2.with_tracker(c2) 178 | c2.with_tracker(d2) 179 | a.with_tracker(b) 180 | a.with_tracker(b1) 181 | a.with_tracker(b2) 182 | pm.with_tracker(a) 183 | return pm 184 | 185 | 186 | def setup_parallel(): 187 | pm = ProgressMonitor(Name='MainWorkflow', 188 | DbConnection=MockProgressManager()) 189 | a = ProgressTracker(Name='CopyFiles', FriendlyId='a', 190 | HasParallelChildren=True) 191 | b = ProgressTracker(Name='CreateFolder', FriendlyId='b') 192 | c = ProgressTracker(Name='CopyFiles', FriendlyId='c', 193 | EstimatedSeconds=10) 194 | b1 = ProgressTracker(Name='CreateFolder', FriendlyId='br1') 195 | c1 = ProgressTracker(Name='CopyFiles', FriendlyId='c1', 196 | EstimatedSeconds=11) 197 | b2 = ProgressTracker(Name='CreateFolder', FriendlyId='b2') 198 | c2 = ProgressTracker(Name='CopyFiles', FriendlyId='c2', 199 | EstimatedSeconds=12) 200 | d2 = ProgressTracker(Name='CopyFiles', FriendlyId='c2', 201 | EstimatedSeconds=12) 202 | assert a.friendly_id == 'a' 203 | b.with_tracker(c) 204 | b1.with_tracker(c1) 205 | b2.with_tracker(c2) 206 | c2.with_tracker(d2) 207 | a.with_tracker(b) 208 | a.with_tracker(b1) 209 | a.with_tracker(b2) 210 | pm.with_tracker(a) 211 | return pm 212 | 213 | 214 | def test_not_found_friendly_id_returns_none(): 215 | pm = setup_basic() 216 | assert not pm.find_friendly_id('f') 217 | 218 | 219 | def test_find_a_tracker_by_friendly_id(): 220 | pm = setup_basic() 221 | assert pm.find_friendly_id('a') 222 | 223 | 224 | def test_find_a_parent_tracker(): 225 | b = setup_basic().find_friendly_id('b') 226 | assert b.parent.friendly_id == 'a' 227 | 228 | 229 | def test_start_tracker_sets_in_progress(): 230 | t = setup_basic().find_friendly_id('a') 231 | t.parent.start() 232 | t.start() 233 | assert t.is_in_progress 234 | 235 | 236 | def test_starting_tracker_without_starting_parent_throws_error(): 237 | t = setup_basic().find_friendly_id('a') 238 | with pytest.raises(Exception) as e: 239 | t.start() 240 | 241 | assert "You can't start a tracker if the parent isn't started" in \ 242 | str(e.value) 243 | 244 | 245 | def test_start_tracker_starts_timer(): 246 | t = setup_basic().find_friendly_id('a') 247 | t.parent.start() 248 | t.start() 249 | 
time.sleep(2.1) 250 | assert t.elapsed_time_in_seconds > 2 and \ 251 | t.elapsed_time_in_seconds < 3 252 | 253 | 254 | def test_multiple_trackers_track_separately(): 255 | pm = setup_basic() 256 | pm.start() 257 | a = pm.find_friendly_id('a') 258 | b = pm.find_friendly_id('b') 259 | a.start() 260 | time.sleep(2.1) 261 | b.start() 262 | time.sleep(2.1) 263 | assert b.elapsed_time_in_seconds > 2 and \ 264 | b.elapsed_time_in_seconds < 3 265 | assert a.elapsed_time_in_seconds > 4 and \ 266 | a.elapsed_time_in_seconds < 5 267 | 268 | 269 | def test_can_set_status(): 270 | a = setup_basic().find_friendly_id('a').with_status_msg('test status') 271 | assert a.status_msg == 'test status' 272 | 273 | 274 | def test_estimate_returns_correct_time(): 275 | pm = setup_basic() 276 | pm.start() 277 | time.sleep(1) 278 | assert pm.remaining_time_in_seconds > 8 and \ 279 | pm.remaining_time_in_seconds < 10 280 | 281 | 282 | def test_nested_estimate_returns_actual_minus_estimate(): 283 | pm = setup_basic_multi_children() 284 | pm.start() 285 | time.sleep(2.5) 286 | assert pm.remaining_time_in_seconds < 33 and \ 287 | pm.remaining_time_in_seconds > 30 288 | 289 | 290 | def test_can_get_full_key(): 291 | pm = setup_basic() 292 | b = pm.find_friendly_id('b') 293 | assert b.get_full_key() == b.id 294 | 295 | 296 | def test_root_full_key_is_just_id(): 297 | pm = setup_basic() 298 | assert pm.get_full_key() == pm.id 299 | 300 | 301 | @patch('tests.test_progressmonitor.MockProgressManager') 302 | def test_update_calls_db_update(db_mock): 303 | pm = setup_basic() 304 | pm.update() 305 | assert db_mock.called_once() 306 | 307 | 308 | @patch('tests.test_progressmonitor.MockProgressManager.update_tracker') 309 | def test_multiple_update_calls_db_update_only_once(db_mock): 310 | pm = setup_basic() 311 | pm.update().update() 312 | assert db_mock.called_once() 313 | 314 | 315 | @patch('tests.test_progressmonitor.MockProgressManager.update_tracker') 316 | def test_child_object_saves_parent_object(db_mock): 317 | a = setup_basic().find_friendly_id('a') 318 | a.update(False) 319 | assert db_mock.call_count == 2 320 | 321 | 322 | def test_child_update_clears_dirty_flag(): 323 | a = setup_basic().find_friendly_id('a') 324 | a.update() 325 | assert not a.is_dirty and not a.parent.is_dirty 326 | 327 | 328 | def test_with_metric_sets_has_metric_flag(): 329 | a = setup_basic().find_friendly_id('a') 330 | a.with_metric(Namespace='ns', Metric='m') 331 | assert a.has_metric 332 | 333 | 334 | def test_no_metric_sets_has_no_metric_flag(): 335 | a = setup_basic().find_friendly_id('a') 336 | assert not a.has_metric 337 | 338 | 339 | @patch('fluentmetrics.FluentMetric.seconds') 340 | @patch('fluentmetrics.FluentMetric.count') 341 | def test_success_logs_two_metrics(c_mock, s_mock): 342 | a = setup_basic().find_friendly_id('a').with_metric(Namespace='ns', 343 | Metric='m') 344 | a.start(Parents=True) 345 | time.sleep(.5) 346 | a.succeed() 347 | assert s_mock.call_count == 1 and c_mock.call_count == 1 and \ 348 | a.status == 'Succeeded' and a.is_done and not a.is_in_progress and \ 349 | a.finish_time 350 | 351 | 352 | @patch('fluentmetrics.FluentMetric.seconds') 353 | @patch('fluentmetrics.FluentMetric.count') 354 | def test_success_stops_timer(c_mock, s_mock): 355 | a = setup_basic().find_friendly_id('a').with_metric(Namespace='ns', 356 | Metric='m') 357 | a.start(Parents=True) 358 | time.sleep(.25) 359 | a.succeed() 360 | f = a.finish_time 361 | time.sleep(.25) 362 | assert a.finish_time == f 363 | 364 | 365 | 
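# A hedged aside (illustrative, not one of the original tests): succeed(),
# fail() and cancel() all route through mark_done(), so every terminal state
# leaves the tracker done, no longer in progress, and with a finish_time set.
# A sketch in the same style as the surrounding tests:
#
#     def test_terminal_states_share_done_semantics():
#         for finish in ('succeed', 'fail', 'cancel'):
#             a = setup_basic().find_friendly_id('a')
#             getattr(a.start(Parents=True), finish)()
#             assert a.is_done and not a.is_in_progress and a.finish_time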
@patch('fluentmetrics.FluentMetric.seconds') 366 | @patch('fluentmetrics.FluentMetric.count') 367 | def test_failed_logs_two_metrics(c_mock, s_mock): 368 | a = setup_basic().find_friendly_id('a').with_metric(Namespace='ns', 369 | Metric='m') 370 | a.start(Parents=True) 371 | time.sleep(.5) 372 | a.fail() 373 | assert s_mock.call_count == 1 and c_mock.call_count == 1 and \ 374 | a.status == 'Failed' and a.is_done and not a.is_in_progress and \ 375 | a.finish_time 376 | 377 | 378 | @patch('fluentmetrics.FluentMetric.seconds') 379 | @patch('fluentmetrics.FluentMetric.count') 380 | def test_fail_stops_timer(c_mock, s_mock): 381 | a = setup_basic().find_friendly_id('a').with_metric(Namespace='ns', 382 | Metric='m') 383 | a.start(Parents=True) 384 | time.sleep(.25) 385 | a.fail() 386 | f = a.finish_time 387 | time.sleep(.25) 388 | assert a.finish_time == f 389 | 390 | 391 | @patch('fluentmetrics.FluentMetric.seconds') 392 | @patch('fluentmetrics.FluentMetric.count') 393 | def test_canceled_logs_two_metrics(c_mock, s_mock): 394 | a = setup_basic().find_friendly_id('a').with_metric(Namespace='ns', 395 | Metric='m') 396 | a.start(Parents=True) 397 | time.sleep(.5) 398 | a.cancel() 399 | assert s_mock.call_count == 1 and c_mock.call_count == 1 and \ 400 | a.status == 'Canceled' and a.is_done and not a.is_in_progress and \ 401 | a.finish_time 402 | 403 | 404 | def test_can_convert_name_from_json(): 405 | n = str(uuid.uuid4()) 406 | a = setup_basic().find_friendly_id('a') 407 | j = a.with_name(n).to_json() 408 | t = TrackerBase.from_json(a.id, j) 409 | assert t.name == n 410 | 411 | 412 | def test_can_convert_start_time_from_json(): 413 | start = arrow.utcnow() 414 | pm = setup_basic() 415 | a = pm.find_friendly_id('a') 416 | j = a.start(Parents=True, StartTime=start).to_json() 417 | t = TrackerBase.from_json(a.id, j) 418 | print t.start_time 419 | print start 420 | assert t.start_time == start 421 | 422 | 423 | def test_can_convert_in_canceled_status_from_json(): 424 | a = setup_basic().find_friendly_id('a').cancel() 425 | t = TrackerBase.from_json(a.id, a.to_json()) 426 | assert t.status == 'Canceled' and t.is_done and not t.is_in_progress 427 | 428 | 429 | def test_can_convert_in_failed_status_from_json(): 430 | a = setup_basic().find_friendly_id('a').fail() 431 | t = TrackerBase.from_json(a.id, a.to_json()) 432 | assert t.status == 'Failed' and t.is_done and not t.is_in_progress 433 | 434 | 435 | def test_can_convert_in_succeed_status_from_json(): 436 | a = setup_basic().find_friendly_id('a').succeed() 437 | t = TrackerBase.from_json(a.id, a.to_json()) 438 | assert t.status == 'Succeeded' and t.is_done and not t.is_in_progress 439 | 440 | 441 | def get_by_id_side_effect(id): 442 | items = json.loads(not_started_json) 443 | if id in items.keys(): 444 | return items[id] 445 | return None 446 | 447 | 448 | def children_side_effect(id): 449 | if id == '94a52a41-bf9e-43e3-9650-859f7c263dc8': 450 | return ['7acfd432-8392-49d3-867c-d85bb3824e61'] 451 | 452 | if id == '7acfd432-8392-49d3-867c-d85bb3824e61': 453 | return ['582ab745-2929-47a1-b026-6a09db268688', 454 | 'c31405c9-d44b-4c28-b4ca-7008de4e468a', 455 | '6cf22931-d571-41f9-b1db-b47740e680f3', 456 | '039fe353-2c01-49f4-a743-b09c02c9f683'] 457 | 458 | return None 459 | 460 | 461 | @patch('progressmonitor.RedisProgressManager.get_by_id') 462 | @patch('progressmonitor.RedisProgressManager.get_children') 463 | def test_can_convert_from_db(c_mock, g_mock): 464 | g_mock.side_effect = get_by_id_side_effect 465 | c_mock.side_effect = children_side_effect 
466 | pm = ProgressMonitor(DbConnection=RedisProgressManager()) 467 | pm = pm.load('94a52a41-bf9e-43e3-9650-859f7c263dc8') 468 | assert len(pm.all_children) == 5 469 | 470 | 471 | @patch('progressmonitor.RedisProgressManager.get_by_id') 472 | @patch('progressmonitor.RedisProgressManager.get_children') 473 | def test_can_start_all_parents(c_mock, g_mock): 474 | g_mock.side_effect = get_by_id_side_effect 475 | c_mock.side_effect = children_side_effect 476 | pm = ProgressMonitor(DbConnection=RedisProgressManager()) 477 | pm = pm.load('94a52a41-bf9e-43e3-9650-859f7c263dc8') 478 | t = pm.find_id('039fe353-2c01-49f4-a743-b09c02c9f683') 479 | assert t 480 | t.start(Parents=True) 481 | assert t.parent.status == 'In Progress' 482 | print t.parent.id 483 | assert t.parent.parent.status == 'In Progress' 484 | 485 | 486 | def test_getting_elapsed_time_at_parent_returns_longest_child(): 487 | main = setup_basic().main 488 | b = main.find_friendly_id('b') 489 | c = main.find_friendly_id('c') 490 | b.start(Parents=True) 491 | time.sleep(1.25) 492 | c.start(Parents=True) 493 | time.sleep(2) 494 | assert main.elapsed_time_in_seconds > 3 495 | 496 | 497 | def test_start_with_status_msg_updates_msg(): 498 | s = str(uuid.uuid4()) 499 | a = setup_basic().find_friendly_id('a') 500 | a.start(Message=s, Parents=True) 501 | assert a.status_msg == s 502 | 503 | 504 | def test_fail_with_status_msg_updates_msg(): 505 | s = str(uuid.uuid4()) 506 | a = setup_basic().find_friendly_id('a') 507 | assert a.start(Parents=True).fail(Message=s).status_msg == s 508 | 509 | 510 | def test_cancel_with_status_msg_updates_msg(): 511 | s = str(uuid.uuid4()) 512 | a = setup_basic().find_friendly_id('a') 513 | assert a.start(Parents=True).cancel(Message=s).status_msg == s 514 | 515 | 516 | def test_succeed_with_status_msg_updates_msg(): 517 | s = str(uuid.uuid4()) 518 | a = setup_basic().find_friendly_id('a') 519 | assert a.start(Parents=True).succeed(Message=s).status_msg == s 520 | 521 | 522 | def test_can_get_in_progress_trackers(): 523 | pm = setup_basic() 524 | a = pm.find_friendly_id('a') 525 | c = pm.find_friendly_id('c') 526 | c.start(Parents=True) 527 | assert a.in_progress_count == 2 528 | 529 | 530 | def test_can_get_canceled_trackers(): 531 | pm = setup_basic() 532 | a = pm.find_friendly_id('a') 533 | c = pm.find_friendly_id('c') 534 | c.start(Parents=True).cancel() 535 | assert a.canceled_count == 1 536 | 537 | 538 | def test_no_started_returns_0_canceled_trackers(): 539 | pm = setup_basic() 540 | assert pm.not_started_count == 3 541 | 542 | 543 | def test_no_in_progress_returns_0_in_progress_trackers(): 544 | pm = setup_basic() 545 | assert pm.in_progress_count == 0 546 | 547 | 548 | def test_no_canceled_returns_0_canceled_trackers(): 549 | pm = setup_basic() 550 | assert pm.canceled_count == 0 551 | 552 | 553 | def test_can_get_started_pct_correctly(): 554 | pm = setup_basic() 555 | pm.find_friendly_id('a').start(Parents=True) 556 | assert pm.in_progress_pct == .33 557 | 558 | 559 | def test_can_get_not_started_pct_correctly(): 560 | pm = setup_basic() 561 | assert pm.not_started_pct == 1 562 | 563 | 564 | def test_can_get_total_estimate(): 565 | pm = setup_basic() 566 | assert pm.total_estimate == 10 567 | 568 | 569 | def test_can_get_parallel_estimate(): 570 | pm = setup_parallel() 571 | assert pm.total_estimate == 12 572 | 573 | 574 | def test_get_total_estimate_with_sub_items(): 575 | pm = setup_basic_multi_children() 576 | assert pm.total_estimate == 33 577 | 578 | 579 | def test_to_update_item_sets_name(): 580 | 
pm = setup_basic() 581 | ue, aev = pm.to_update_item() 582 | assert aev[':name'] == pm.name 583 | assert 'Name=:name' in ue 584 | 585 | 586 | def test_to_update_item_with_no_est_secs_does_not_set_estimated_seconds(): 587 | pm = setup_basic() 588 | ue, aev = pm.to_update_item() 589 | assert ':est_sec' not in aev.keys() 590 | assert 'EstimatedSeconds' not in ue 591 | 592 | 593 | def test_to_update_item_with_est_secs_sets_estimated_seconds(): 594 | pm = setup_basic().with_estimated_seconds(100) 595 | ue, aev = pm.to_update_item() 596 | assert aev[':est_sec'] == pm.estimated_seconds 597 | assert 'EstimatedSeconds=:est_sec' in ue 598 | 599 | 600 | def test_to_update_item_with_no_start_time_sets_no_start_time(): 601 | pm = setup_basic() 602 | ue, aev = pm.to_update_item() 603 | assert ':start_time' not in aev.keys() 604 | assert 'StartTime=:start' not in ue 605 | 606 | 607 | def test_to_update_item_with_start_time_sets_start_time(): 608 | pm = setup_basic().start() 609 | ue, aev = pm.to_update_item() 610 | assert aev[':start'] == pm.start_time.isoformat() 611 | assert 'StartTime=:start' in ue 612 | 613 | 614 | def test_to_update_item_with_no_finish_time_sets_no_finish_time(): 615 | pm = setup_basic() 616 | ue, aev = pm.to_update_item() 617 | assert ':finish' not in aev.keys() 618 | assert 'FinishTime=:finish' not in ue 619 | 620 | 621 | def test_to_update_item_with_finish_time_sets_finish_time(): 622 | pm = setup_basic().start().succeed() 623 | ue, aev = pm.to_update_item() 624 | assert aev[':finish'] == pm.finish_time.isoformat() 625 | assert 'FinishTime=:finish' in ue 626 | 627 | 628 | def test_to_update_item_with_no_status_message_sets_no_status_message(): 629 | pm = setup_basic() 630 | ue, aev = pm.to_update_item() 631 | assert ':status_msg' not in aev.keys() 632 | assert 'StatusMessage=:status_msg' not in ue 633 | 634 | 635 | def test_to_update_item_with_status_message_sets_status_message(): 636 | pm = setup_basic().with_status_msg('Test') 637 | ue, aev = pm.to_update_item() 638 | assert aev[':status_msg'] == pm.status_msg 639 | assert 'StatusMessage=:status_msg' in ue 640 | 641 | 642 | def test_to_update_item_with_no_friendly_id_sets_no_friendly_id(): 643 | pm = setup_basic() 644 | ue, aev = pm.to_update_item() 645 | assert ':fid' not in aev.keys() 646 | assert 'FriendlyId=:fid' not in ue 647 | 648 | 649 | def test_to_update_item_with_friendly_id_sets_friendly_id(): 650 | pm = setup_basic().with_friendly_id('test') 651 | ue, aev = pm.to_update_item() 652 | assert aev[':fid'] == pm.friendly_id 653 | assert 'FriendlyId=:fid' in ue 654 | 655 | 656 | def test_to_update_item_with_last_update_sets_last_update(): 657 | pm = setup_basic().with_friendly_id('test') 658 | ue, aev = pm.to_update_item() 659 | assert aev[':l_u'] == pm.last_update.isoformat() 660 | assert 'LastUpdate=:l_u' in ue 661 | 662 | 663 | def test_to_update_item_with_no_source_sets_no_source(): 664 | pm = setup_basic() 665 | ue, aev = pm.to_update_item() 666 | assert ':source' not in aev.keys() 667 | assert 'Source=:source' not in ue 668 | 669 | 670 | def test_to_update_item_with_no_metric_namespace_sets_no_metric_namespace(): 671 | pm = setup_basic() 672 | ue, aev = pm.to_update_item() 673 | assert ':ns' not in aev.keys() 674 | assert 'MetricNamespace=:ns' not in ue 675 | 676 | 677 | def test_to_update_item_with_metric_namespace_sets_metric_namespace(): 678 | pm = setup_basic().with_metric(Namespace='test') 679 | ue, aev = pm.to_update_item() 680 | assert aev[':ns'] == pm.metric_namespace 681 | assert 'MetricNamespace=:ns' in ue 
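# A hedged aside (illustrative): to_update_item() returns the UpdateExpression
# together with its ExpressionAttributeValues, and DynamoDbDriver.update_tracker()
# passes the pair straight to update_item().  For a freshly built monitor the
# output looks roughly like this (timestamp shortened for illustration):
#
#     ue  = ('SET TrackerName=:name, LastUpdate=:l_u, TrackerStatus=:status, '
#            'InProgress=:i, HasParallelChildren=:hpc, IsDone=:d')
#     eav = {':name': 'MainWorkflow', ':l_u': '2017-03-07T10:41:20+00:00',
#            ':status': 'Not started', ':i': False, ':hpc': False, ':d': False}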
682 | 683 | 684 | def test_to_update_item_with_no_metric_name_sets_no_metric_name(): 685 | pm = setup_basic() 686 | ue, aev = pm.to_update_item() 687 | assert ':m' not in aev.keys() 688 | assert 'MetricName=:m' not in ue 689 | 690 | 691 | def test_to_update_item_with_metric_name_sets_metric_name(): 692 | pm = setup_basic().with_metric(Namespace='test', Metric='metric') 693 | ue, aev = pm.to_update_item() 694 | assert aev[':m'] == pm.metric_name 695 | assert 'MetricName=:m' in ue 696 | 697 | 698 | def test_to_update_item_with_is_in_progress_false_sets_is_in_progress_false(): 699 | pm = setup_basic() 700 | ue, aev = pm.to_update_item() 701 | assert not aev[':i'] 702 | assert 'IsInProgress=:i' in ue 703 | 704 | 705 | def test_to_update_item_with_is_in_progress_true_sets_is_in_progress_true(): 706 | pm = setup_basic().start() 707 | ue, aev = pm.to_update_item() 708 | assert aev[':i'] 709 | assert 'IsInProgress=:i' in ue 710 | 711 | 712 | def test_to_update_item_parallel_children_f_sets_parallel_children_f(): 713 | pm = setup_basic() 714 | ue, aev = pm.to_update_item() 715 | assert not aev[':hpc'] 716 | assert 'HasParallelChildren=:hpc' in ue 717 | 718 | 719 | def test_to_update_item_parallel_children_t_sets_parallel_children_t(): 720 | pm = setup_basic().with_parallel_children() 721 | ue, aev = pm.to_update_item() 722 | assert aev[':hpc'] 723 | assert 'HasParallelChildren=:hpc' in ue 724 | 725 | 726 | def test_to_update_item_done_false_sets_done_false(): 727 | pm = setup_basic() 728 | ue, aev = pm.to_update_item() 729 | assert not aev[':d'] 730 | assert 'IsDone=:d' in ue 731 | 732 | 733 | def test_to_update_item_done_true_sets_done_true(): 734 | pm = setup_basic().start().succeed() 735 | ue, aev = pm.to_update_item() 736 | assert aev[':d'] 737 | assert 'IsDone=:d' in ue 738 | 739 | 740 | @mock_dynamodb2 741 | @patch('progressmonitor.DynamoDbDriver.create_tables') 742 | def test_dynamodb_creates_tables_if_not_exist(ct_mock): 743 | setup_basic_d() 744 | assert ct_mock.called_once() 745 | 746 | 747 | @mock_dynamodb2 748 | @patch('progressmonitor.DynamoDbDriver.create_tables') 749 | def test_create_tables_called_only_once(ct_mock): 750 | setup_basic_d() 751 | setup_basic_d() 752 | setup_basic_d() 753 | assert ct_mock.called_once() 754 | 755 | 756 | @mock_dynamodb2 757 | @patch('tests.test_progressmonitor.DynamoDbDriver.update_tracker') 758 | def test_update_tracker_called_only_once(ut_mock): 759 | t = setup_basic_d().start().succeed().update() 760 | t.update() 761 | assert ut_mock.called_once() 762 | 763 | 764 | @mock_dynamodb2 765 | @patch('progressmonitor.TrackerBase.load') 766 | def test_refresh_calls_load(load_mock): 767 | t = setup_basic_d().start().succeed().update() 768 | t.refresh() 769 | assert load_mock.called_once() 770 | -------------------------------------------------------------------------------- /tools/dict_test.py: -------------------------------------------------------------------------------- 1 | import uuid 2 | # from itertools import groupby 3 | 4 | 5 | class Tracker(object): 6 | def __init__(self, **kwargs): 7 | self.key = str(uuid.uuid4()) 8 | self.parent = kwargs.get('Parent') 9 | self.name = kwargs.get('Name') 10 | self.children = [] 11 | 12 | 13 | def add_tracker(t): 14 | if t.parent: 15 | trackers[t.parent].children.append(t.key) 16 | trackers[t.key] = t 17 | 18 | 19 | trackers = {} 20 | a = Tracker(Name='Master Workflow') 21 | b = Tracker(Parent=a.key, Name='CopyData') 22 | c = Tracker(Parent=a.key, Name='ConvertVM') 23 | d = Tracker(Parent=b.key, Name='CreateFolder') 
24 | e = Tracker(Parent=b.key, Name='Copyfiles') 25 | 26 | add_tracker(a) 27 | add_tracker(b) 28 | add_tracker(c) 29 | add_tracker(d) 30 | add_tracker(e) 31 | 32 | 33 | def count_children(key): 34 | tot = 0 35 | t = trackers[key] 36 | if len(t.children): 37 | for k in t.children: 38 | tot = tot + count_children(k) 39 | tot = tot + len(t.children) 40 | print t.name, tot 41 | return tot 42 | 43 | for k in trackers.keys(): 44 | total = 0 45 | if not trackers[k].parent: 46 | count_children(trackers[k].key) 47 | 48 | 49 | # groups = groupby(trackers, lambda a: (a.parent)) 50 | # for key, group in groups: 51 | # g = list(group) 52 | # print "{}: {}".format(key, len(g)) 53 | -------------------------------------------------------------------------------- /tools/dict_test2.py: -------------------------------------------------------------------------------- 1 | 2 | class Test(object): 3 | def __init__(self, name): 4 | self.list = [] 5 | self.name = name 6 | 7 | def with_item(self, i): 8 | i.parent = self 9 | self.list.append(i) 10 | return self 11 | 12 | 13 | a = Test('a') 14 | b = Test('b') 15 | c = Test('c') 16 | b.with_item(c) 17 | a.with_item(b) 18 | for i in b.list: 19 | print i.parent.parent.name 20 | -------------------------------------------------------------------------------- /tools/load_data_test.py: -------------------------------------------------------------------------------- 1 | import redis 2 | from progressmagician import RedisProgressManager, ProgressMagician 3 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 4 | r = redis.Redis(connection_pool=pool) 5 | rpm = RedisProgressManager(RedisConnection=r) 6 | pm = ProgressMagician(DbConnection=rpm) 7 | pm.load('7b3aa95f-d6df-48db-876a-3e6b2f669aed') 8 | print pm.main.start_time 9 | print pm.main.elapsed_time_in_seconds() 10 | 11 | -------------------------------------------------------------------------------- /tools/metrics_test.py: -------------------------------------------------------------------------------- 1 | import redis 2 | import time 3 | from progressinsight import RedisProgressManager, ProgressInsight, ProgressTracker 4 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 5 | r = redis.Redis(connection_pool=pool) 6 | rpm = RedisProgressManager(RedisConnection=r) 7 | pm = ProgressInsight(DbConnection=rpm) 8 | c = ProgressTracker(Name='ConvertVMWorkflow').with_metric(Namespace='test', 9 | Metric='convert_vm') 10 | c.metric.with_dimension('linux_flavor', 'redhat') \ 11 | .with_dimension('version', '6.8') 12 | pm.with_tracker(c) 13 | pm.update_all() 14 | c.start(Parents=True) 15 | pm.update_all() 16 | print 'sleeping' 17 | time.sleep(2) 18 | c.succeed() 19 | pm.update_all() 20 | print c.elapsed_time_in_seconds 21 | print c.start_time 22 | print c.finish_time 23 | -------------------------------------------------------------------------------- /tools/progress_test.py: -------------------------------------------------------------------------------- 1 | import redis 2 | import random 3 | from progressinsight import RedisProgressManager, ProgressInsight, ProgressTracker 4 | 5 | 6 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 7 | r = redis.Redis(connection_pool=pool) 8 | rpm = RedisProgressManager(RedisConnection=r) 9 | pm = ProgressInsight(DbConnection=rpm) 10 | def create_children(t, n): 11 | r = random.randint(0, 10) 12 | i = 0 13 | while i < n: 14 | c = ProgressTracker() 15 | t.with_tracker(c) 16 | if r == 0: 17 | c.start(Parents=True) 18 | i = i + 1 19 | 20 | 21 | t = 
ProgressTracker() 22 | pm.with_tracker(t) 23 | create_children(t, 100) 24 | for c in t.all_children: 25 | create_children(c, 100) 26 | print t.all_children_count 27 | print t.in_progress_count 28 | print t.in_progress_pct 29 | #print t.print_tree() 30 | print 'updating;' 31 | pm.update_all() 32 | print 'updating again;' 33 | pm.update_all() 34 | print 'loading' 35 | l = pm.load(pm.id) 36 | print 'loaded' 37 | print pm.all_children_count 38 | print l.all_children_count 39 | -------------------------------------------------------------------------------- /tools/readme_quick_start.py: -------------------------------------------------------------------------------- 1 | import redis 2 | from progressmagician import RedisProgressManager, ProgressMagician, ProgressTracker 3 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 4 | rpm = RedisProgressManager(RedisConnection=redis.Redis(connection_pool=pool)) 5 | pm = ProgressMagician(DbConnection=rpm) 6 | c = ProgressTracker(Name='ConvertVMWorkflow') 7 | e = ProgressTracker(Name='ExportImage', ParentId=c.id) 8 | f = ProgressTracker(Name='ConvertImage', ParentId=c.id) 9 | g = ProgressTracker(Name='UploadImage', ParentId=c.id) 10 | h = ProgressTracker(Name='NotifyCompleteStatus', ParentId=c.id) 11 | pm.with_tracker(c).with_tracker(e).with_tracker(f).with_tracker(g) \ 12 | .with_tracker(h) 13 | pm.update_all() 14 | print pm.count_children() 15 | 16 | # pm = ProgressMagician(ProgressManager=rpm, Key='abc') 17 | # a = ProgressEvent(ProgressTotal=10, Key='def') 18 | # b = ProgressEvent(ProgressTotal=10, Key='hij') 19 | # c = ProgressEvent(ProgressTotal=10, Key='123') 20 | # d = ProgressEvent(ProgressTotal=10, Key='456') 21 | # pm.with_event(a) 22 | # pm.with_event(c) 23 | # a.with_event(b) 24 | # c.with_event(d) 25 | -------------------------------------------------------------------------------- /tools/test.py: -------------------------------------------------------------------------------- 1 | import redis 2 | 3 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 4 | r = redis.Redis(connection_pool=pool) 5 | 6 | 7 | class RollupEvent(object): 8 | def __init__(self, **kwargs): 9 | self.conn = kwargs.get('RedisConnection') 10 | self.key = kwargs.get('Key') 11 | self.rollup_events = [] 12 | self.type = kwargs.get('Type') 13 | self.value = kwargs.get('Value') 14 | 15 | def with_event(self, r): 16 | self.rollup_events.append(r) 17 | 18 | 19 | def record_event(conn, event): 20 | pipe = conn.pipeline(True) 21 | pipe.hmset('abc', {'val': 'testabc'}) 22 | pipe.hmset('abc:def', {'val': 'testdef'}) 23 | pipe.hmset('abc:def:ghi', {'val': 'testghi'}) 24 | pipe.hmset('123', {'val': 'test123'}) 25 | pipe.hmset('123:456', {'val': 'test456'}) 26 | pipe.execute() 27 | 28 | 29 | event = {} 30 | event['status'] = 'In progress' 31 | 32 | # record_event(r, event) 33 | for i in r.scan_iter('abc*'): 34 | v = r.hgetall(i) 35 | print v 36 | -------------------------------------------------------------------------------- /tools/test_dict_ref.py: -------------------------------------------------------------------------------- 1 | d = {} 2 | d['test1'] = '1' 3 | d['test2'] = '2' 4 | 5 | 6 | class test(object): 7 | def __init__(self, d): 8 | self.d = d 9 | 10 | 11 | a = test(d) 12 | b = test(d) 13 | 14 | print a.d 15 | print b.d 16 | 17 | a.d['test3'] = 3 18 | print b.d 19 | -------------------------------------------------------------------------------- /tools/time.py: -------------------------------------------------------------------------------- 1 | def 
record_event(conn, event): 2 |     id = conn.incr('event:id') 3 |     event['id'] = id 4 |     event_key = 'event:{id}'.format(id=id) 5 | 6 |     pipe = conn.pipeline(True) 7 |     pipe.hmset(event_key, event) 8 |     pipe.zadd('events', **{str(id): event['timestamp']})  # keyword keys must be strings 9 |     pipe.execute() 10 | 11 | 12 | --------------------------------------------------------------------------------
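A hedged companion sketch for tools/time.py: once record_event() has filled the 'events' sorted set and the per-event hashes, the entries can be read back oldest-first as below (assumes a local Redis on the default port; iter_events is illustrative, not an existing helper in this repository).

import redis


def iter_events(conn):
    # Walk the 'events' sorted set in score (timestamp) order and resolve
    # each member back to its event hash.
    for event_id, ts in conn.zrange('events', 0, -1, withscores=True):
        yield ts, conn.hgetall('event:{id}'.format(id=event_id))


conn = redis.Redis(host='localhost', port=6379, db=0)
for ts, event in iter_events(conn):
    print ts, event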