├── .gitignore
├── CHANGELOG.md
├── LICENSE
├── README.md
├── atasker
│   ├── __init__.py
│   ├── co.py
│   ├── f.py
│   ├── supervisor.py
│   ├── threads.py
│   └── workers.py
├── doc
│   ├── .gitignore
│   ├── Makefile
│   ├── async_jobs.rst
│   ├── collections.rst
│   ├── conf.py
│   ├── debug.rst
│   ├── index.rst
│   ├── localproxy.rst
│   ├── locker.rst
│   ├── readme.rst
│   ├── req.txt
│   ├── supervisor.rst
│   ├── tasks.rst
│   └── workers.rst
├── setup.py
└── tests
    ├── mp.py
    ├── mpworker.py
    └── test.py
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | atasker.egg-info
3 | build
4 | dist
5 | Makefile
6 | TODO.todo
7 |
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | ## 0.7 (2019-12-09)
2 |
3 | * to speed up thread spawning, thread task execution has been moved to
4 | *ThreadPoolExecutor*. Because of this, tasks as ready-made thread objects are
5 | no longer supported.
6 |
7 | * the **put_task()** method arguments are now target, args, kwargs and
8 | callback
9 |
10 | * **mark_task_completed** now always requires either task or task_id
11 |
12 | * **daemon** parameter is now obsolete
13 |
14 | * supervisors have got IDs (used in logging and thread names only)
15 |
16 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # atasker
2 | Python library for modern thread / multiprocessing pooling and task processing
3 | via asyncio.
4 |
10 | Warning: **atasker** is not suitable for lightweight tasks in high-load
11 | environments. For such projects it's highly recommended to use the lightweight
12 | version: [neotasker](https://github.com/alttch/neotasker)
13 |
14 | No matter how your code is written, atasker automatically detects blocking
15 | functions and coroutines and launches them in the proper way: in a thread, an
16 | asynchronous loop, or a multiprocessing pool.
17 |
18 | Tasks are grouped into pools. If there is no free slot in a pool, tasks are
19 | placed into a waiting queue according to their priority. A pool also has a
20 | "reserve" for tasks with priorities "normal" and higher. Tasks with "critical"
21 | priority are always executed instantly.
22 |
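
For illustration, a minimal sketch of how priorities are assigned when
submitting tasks (the functions `slow_job` and `urgent_job` are hypothetical;
the calls follow the examples later in this README):

```python
from atasker import task_supervisor, background_task, TASK_LOW, TASK_CRITICAL

task_supervisor.start()

def slow_job():
    print('low priority: may wait in the queue for a free pool slot')

def urgent_job():
    print('critical priority: started instantly, bypassing the queue')

background_task(slow_job, priority=TASK_LOW)()
background_task(urgent_job, priority=TASK_CRITICAL)()

task_supervisor.stop()
```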
23 | This library is useful if you have a project with many similar tasks which
24 | produce approximately equal CPU/memory load, e.g. API responses, scheduled
25 | resource state updates etc.
26 |
27 | ## Install
28 |
29 | ```bash
30 | pip3 install atasker
31 | ```
32 |
33 | Sources: https://github.com/alttch/atasker
34 |
35 | Documentation: https://atasker.readthedocs.io/
36 |
37 | ## Why
38 |
39 | * asynchronous programming is a perfect way to make your code fast and reliable
40 |
41 | * multithreaded programming is a perfect way to run blocking code in the
42 | background
43 |
44 | **atasker** combines the advantages of both approaches: atasker tasks run in
45 | separate threads, while the task supervisor and workers are completely
46 | asynchronous. All their public methods are thread-safe.
47 |
48 | ## Why not standard Python thread pool?
49 |
50 | * threads in a standard pool don't have priorities
51 | * standard pools have no workers
52 |
53 | ## Why not standard asyncio loops?
54 |
55 | * compatibility with blocking functions
56 | * async workers
57 |
58 | ## Why not concurrent.futures?
59 |
60 | **concurrent.futures** is a great standard Python library which allows you to
61 | execute specified tasks in a pool of workers.
62 |
63 | For thread-based tasks, **atasker** extends
64 | *concurrent.futures.ThreadPoolExecutor* functionality.
65 |
66 | **atasker** method *background_task* solves the same problem in a slightly
67 | different way, adding priorities to the tasks, while *atasker* workers do an
68 | absolutely different job:
69 |
70 | * in *concurrent.futures*, a worker is a pool member which executes a single
71 | specified task.
72 |
73 | * in *atasker*, a worker is an object which continuously *generates* new tasks
74 | at a specified interval or on an external event, and executes them in a thread
75 | or multiprocessing pool.
76 |
77 |
78 | ## Code examples
79 |
80 | ### Start/stop
81 |
82 | ```python
83 |
84 | from atasker import task_supervisor
85 |
86 | # set pool size
87 | task_supervisor.set_thread_pool(pool_size=20, reserve_normal=5, reserve_high=5)
88 | task_supervisor.start()
89 | # ...
90 | # start workers, other threads etc.
91 | # ...
92 | # optionally block current thread
93 | task_supervisor.block()
94 |
95 | # stop from any thread
96 | task_supervisor.stop()
97 | ```
98 |
99 | ### Background task
100 |
101 | ```python
102 | from atasker import background_task, TASK_LOW, TASK_HIGH, wait_completed
103 |
104 | # with annotation
105 | @background_task
106 | def mytask():
107 | print('I am working in the background!')
108 | return 777
109 |
110 | task = mytask()
111 |
112 | # optional
113 | result = wait_completed(task)
114 |
115 | print(task.result) # 777
116 | print(result) # 777
117 |
118 | # with manual decoration
119 | def mytask2():
120 | print('I am working in the background too!')
121 |
122 | task = background_task(mytask2, priority=TASK_HIGH)()
123 | ```
124 | ### Async tasks
125 |
126 | ```python
127 | import asyncio
    |
    | # a new asyncio loop is automatically created in its own thread
128 | a1 = task_supervisor.create_aloop('myaloop', default=True)
129 |
130 | async def calc(a):
131 | print(a)
132 | await asyncio.sleep(1)
133 | print(a * 2)
134 | return a * 3
135 |
136 | # call from sync code
137 |
138 | # put coroutine
139 | task = background_task(calc)(1)
140 |
141 | wait_completed(task)
142 |
143 | # run coroutine and wait for result
144 | result = a1.run(calc(1))
145 | ```
146 |
147 | ### Worker examples
148 |
149 | ```python
150 | from atasker import background_worker, TASK_HIGH
151 |
152 | @background_worker
153 | def worker1(**kwargs):
154 | print('I am a simple background worker')
155 |
156 | @background_worker
157 | async def worker_async(**kwargs):
158 | print('I am async background worker')
159 |
160 | @background_worker(interval=1)
161 | def worker2(**kwargs):
162 | print('I run every second!')
163 |
164 | @background_worker(queue=True)
165 | def worker3(task, **kwargs):
166 | print('I run when there is a task in my queue')
167 |
168 | @background_worker(event=True, priority=TASK_HIGH)
169 | def worker4(**kwargs):
170 | print('I run when triggered with high priority')
171 |
172 | worker1.start()
173 | worker_async.start()
174 | worker2.start()
175 | worker3.start()
176 | worker4.start()
177 |
178 | worker3.put_threadsafe('todo1')
179 | worker4.trigger_threadsafe()
180 |
181 | from atasker import BackgroundIntervalWorker
182 |
183 | class MyWorker(BackgroundIntervalWorker):
184 |
185 | def run(self, **kwargs):
186 | print('I am custom worker class')
187 |
188 | worker5 = MyWorker(interval=0.1, name='worker5')
189 | worker5.start()
190 | ```
191 |
--------------------------------------------------------------------------------
/atasker/__init__.py:
--------------------------------------------------------------------------------
1 | __author__ = "Altertech Group, https://www.altertech.com/"
2 | __copyright__ = "Copyright (C) 2018-2019 Altertech Group"
3 | __license__ = "Apache License 2.0"
4 | __version__ = "0.7.9"
5 |
6 | from atasker.supervisor import TaskSupervisor
7 | from atasker.supervisor import TASK_LOW
8 | from atasker.supervisor import TASK_NORMAL
9 | from atasker.supervisor import TASK_HIGH
10 | from atasker.supervisor import TASK_CRITICAL
11 |
12 | from atasker.supervisor import TT_THREAD, TT_MP, TT_COROUTINE
13 |
14 | task_supervisor = TaskSupervisor(supervisor_id='default')
15 |
16 | from atasker.workers import background_worker
17 |
18 | from atasker.workers import BackgroundWorker
19 | from atasker.workers import BackgroundIntervalWorker
20 | from atasker.workers import BackgroundQueueWorker
21 | from atasker.workers import BackgroundEventWorker
22 |
23 | from atasker.f import FunctionCollection
24 | from atasker.f import TaskCollection
25 |
26 | from atasker.threads import LocalProxy
27 | from atasker.threads import Locker
28 | from atasker.threads import background_task
29 | from atasker.threads import wait_completed
30 |
31 | from atasker.co import co_mp_apply
32 |
33 | import atasker.supervisor
34 | import atasker.workers
35 | import aiosched
36 |
37 | g = LocalProxy()
38 |
39 |
40 | def set_debug(mode=True):
41 | atasker.supervisor.debug = mode
42 | atasker.workers.debug = mode
43 | aiosched.set_debug(mode)
44 |
--------------------------------------------------------------------------------
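
A small usage sketch for the module-level `g` object exported above: it is a
`LocalProxy` (thread-local storage; the API is defined in atasker/threads.py
below). The attribute names here are illustrative:

```python
from atasker import g

g.set('user', 'alice')         # visible only in the current thread
print(g.get('user'))           # 'alice'
print(g.get('missing', 42))    # 42 (default)
print(g.has('user'))           # True
g.clear('user')
```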
/atasker/co.py:
--------------------------------------------------------------------------------
1 | __author__ = 'Altertech Group, https://www.altertech.com/'
2 | __copyright__ = 'Copyright (C) 2018-2019 Altertech Group'
3 | __license__ = 'Apache License 2.0'
4 | __version__ = "0.7.9"
5 |
6 | from atasker import task_supervisor
7 |
8 | from atasker import TASK_NORMAL, TT_MP
9 |
10 | import uuid
11 | import asyncio
12 |
13 |
14 | async def co_mp_apply(f,
15 | args=(),
16 | kwargs={},
17 | priority=None,
18 | delay=None,
19 | supervisor=None):
20 | """
21 | Async task execution inside multiprocessing pool
22 |
23 | Args:
24 | f: module.function (function must be located in external module)
25 | args: function arguments
26 | kwargs: function keyword arguments
27 | priority: task :ref:`priority` (default: TASK_NORMAL)
28 | delay: delay before execution
29 | supervisor: custom :doc:`task supervisor`
30 | """
31 |
32 | class CO:
33 |
34 | async def run(self, *args, **kwargs):
35 | self._event = asyncio.Event()
36 | return self.supervisor.put_task(target=self.func,
37 | args=args,
38 | kwargs=kwargs,
39 | callback=self.callback,
40 | priority=self.priority,
41 | delay=self.delay,
42 | tt=TT_MP)
43 |
44 | async def _set_event(self):
45 | self._event.set()
46 |
47 | def callback(self, result):
48 | self.supervisor.mark_task_completed(self.task)
49 | self._result = result
50 | asyncio.run_coroutine_threadsafe(self._set_event(), loop=self._loop)
51 |
52 | async def get_result(self):
53 | await self._event.wait()
54 | self._event.clear()
55 | return self._result
56 |
57 | co = CO()
58 | co.priority = priority if priority is not None else TASK_NORMAL
59 | co.delay = delay
60 | co.supervisor = supervisor if supervisor else task_supervisor
61 | co.func = f
62 | co._loop = asyncio.get_event_loop()
63 | co.task = await co.run(*args, **kwargs)
64 | return await co.get_result() if co.task else None
65 |
--------------------------------------------------------------------------------
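
A minimal usage sketch for `co_mp_apply`, assuming a hypothetical module
`mymod` that provides a picklable function `calc(a, b)` (as the docstring
notes, the target must live in an external, importable module):

```python
from atasker import task_supervisor, co_mp_apply

import mymod  # hypothetical: provides calc(a, b)

# the multiprocessing pool must be configured before start()
task_supervisor.set_mp_pool(pool_size=2)
task_supervisor.start()
aloop = task_supervisor.create_aloop('main', default=True)

async def main():
    # executes mymod.calc(2, 3) in the multiprocessing pool, awaits the result
    return await co_mp_apply(mymod.calc, args=(2, 3))

# on platforms that spawn (rather than fork) processes, guard the code
# below with "if __name__ == '__main__':"
print(aloop.run(main()))
task_supervisor.stop()
```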
/atasker/f.py:
--------------------------------------------------------------------------------
1 | __author__ = "Altertech Group, https://www.altertech.com/"
2 | __copyright__ = "Copyright (C) 2018-2019 Altertech Group"
3 | __license__ = "Apache License 2.0"
4 | __version__ = "0.7.9"
5 |
6 | import traceback
7 | import threading
8 | import queue
9 | import time
10 | import uuid
11 |
12 | from atasker import task_supervisor
13 | from atasker import TASK_NORMAL
14 |
15 |
16 | class FunctionCollection:
17 | """
18 | Args:
19 | on_error: function, launched when function in collection raises an
20 | exception
21 | on_error_kwargs: additional kwargs for on_error function
22 | include_exceptions: include exceptions into final result dict
23 | """
24 |
25 | def __init__(self, **kwargs):
26 | self._functions = []
27 | self._functions_with_priorities = []
28 | self.on_error = kwargs.get('on_error')
29 | self.on_error_kwargs = kwargs.get('on_error_kwargs', {})
30 | self.include_exceptions = True if kwargs.get(
31 | 'include_exceptions') else False
32 | self.default_priority = TASK_NORMAL
33 |
34 | def __call__(self, f=None, **kwargs):
35 |
36 | def wrapper(f, **kw):
37 | self.append(f, **kwargs)
   | return f
38 |
39 | if f:
40 | self.append(f)
41 | return f
42 | elif kwargs:
43 | return wrapper
44 | else:
45 | return self.run()
46 |
47 | def append(self, f, priority=None):
48 | """
49 | Append function without annotation
50 |
51 | Args:
52 | f: function
53 | priority: function priority
54 | """
55 | if f not in self._functions:
56 | self._functions.append(f)
57 | self._functions_with_priorities.append({
58 | 'p': priority if priority else self.default_priority,
59 | 'f': f
60 | })
61 |
62 | def remove(self, f):
63 | """
64 | Remove function
65 |
66 | Args:
67 | f: function
68 | """
69 | try:
70 | self._functions.remove(f)
71 | for z in self._functions_with_priorities:
72 | if z['f'] is f:
73 | self._functions_with_priorities.remove(z)
74 | break
75 | except:
76 | self.error()
77 |
78 | def run(self):
79 | """
80 | Run all functions in collection
81 |
82 | Returns:
83 | result dict as
84 |
85 | { '<module>.<function>': <result>, ... }
86 | """
87 | return self.execute()[0]
88 |
89 | def execute(self):
90 | """
91 | Run all functions in collection
92 |
93 | Returns:
94 | a tuple
95 | ({ '<module>.<function>': <result>, ... }, ALL_OK)
96 | where ALL_OK is True if no function raised an exception
97 | """
98 | result = {}
99 | all_ok = True
100 | funclist = sorted(self._functions_with_priorities, key=lambda k: k['p'])
101 | for fn in funclist:
102 | f = fn['f']
103 | k = '{}.{}'.format(f.__module__, f.__name__)
104 | try:
105 | result[k] = f()
106 | except Exception as e:
107 | if self.include_exceptions:
108 | result[k] = (e, traceback.format_exc())
109 | else:
110 | result[k] = None
111 | self.error()
112 | all_ok = False
113 | return result, all_ok
114 |
115 | def error(self):
116 | if self.on_error:
117 | self.on_error(**self.on_error_kwargs)
118 | else:
119 | raise
120 |
121 |
122 | class TaskCollection(FunctionCollection):
123 | """
124 | Same as function collection, but stored functions are started as tasks in
125 | threads.
126 |
127 | Method execute() returns result when all tasks in collection are finished.
128 |
129 | Args:
130 | supervisor: custom task supervisor
131 | poll_delay: custom poll delay
132 | """
133 |
134 | def __init__(self, **kwargs):
135 | super().__init__(**kwargs)
136 | self.lock = threading.Lock()
137 | self.result_queue = queue.Queue()
138 | self.supervisor = kwargs.get('supervisor', task_supervisor)
139 | self.poll_delay = kwargs.get('poll_delay')
140 |
141 | def execute(self):
142 | from atasker import wait_completed
143 | with self.lock:
144 | poll_delay = self.poll_delay if self.poll_delay else \
145 | self.supervisor.poll_delay
146 | result = {}
147 | tasks = []
148 | all_ok = True
149 | funclist = sorted(self._functions_with_priorities,
150 | key=lambda k: k['p'])
151 | for fn in funclist:
152 | f = fn['f']
153 | task_id = str(uuid.uuid4())
154 | tasks.append(self.supervisor.put_task(target=self._run_task,
155 | args=(f, task_id),
156 | priority=fn['p'],
157 | task_id=task_id,
158 | _send_task_id=False))
159 | wait_completed(tasks)
160 | while True:
161 | try:
162 | k, res, ok = self.result_queue.get(block=False)
163 | result[k] = res
164 | if not ok:
165 | all_ok = False
166 | except queue.Empty:
167 | break
168 | return result, all_ok
169 |
170 | def _run_task(self, f, task_id):
171 | k = '{}.{}'.format(f.__module__, f.__name__)
172 | try:
173 | result = f()
174 | ok = True
175 | except Exception as e:
176 | if self.include_exceptions:
177 | result = (e, traceback.format_exc())
178 | else:
179 | result = None
180 | self.error()
181 | ok = False
182 | self.result_queue.put((k, result, ok))
183 | self.supervisor.mark_task_completed(task_id=task_id)
184 |
--------------------------------------------------------------------------------
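
A brief sketch of `FunctionCollection` used as a decorator-style registry
(function names are illustrative); `TaskCollection` exposes the same interface
but runs the collected functions as supervised thread tasks:

```python
from atasker import FunctionCollection

startup = FunctionCollection(include_exceptions=True)

@startup
def init_db():
    return 'db ready'

@startup
def init_cache():
    return 'cache ready'

# runs every registered function, ordered by priority;
# keys are '<module>.<function>'
result, all_ok = startup.execute()
print(result)   # {'__main__.init_db': 'db ready', '__main__.init_cache': 'cache ready'}
print(all_ok)   # True
```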
/atasker/supervisor.py:
--------------------------------------------------------------------------------
1 | __author__ = 'Altertech Group, https://www.altertech.com/'
2 | __copyright__ = 'Copyright (C) 2018-2019 Altertech Group'
3 | __license__ = 'Apache License 2.0'
4 | __version__ = "0.7.9"
5 |
6 | import threading
7 | import multiprocessing
8 | import time
9 | import logging
10 | import asyncio
11 | import uuid
12 |
13 | from concurrent.futures import CancelledError, ThreadPoolExecutor
14 | from aiosched import AsyncJobScheduler
15 |
16 | debug = False
17 |
18 | TASK_LOW = 200
19 | TASK_NORMAL = 100
20 | TASK_HIGH = 50
21 | TASK_CRITICAL = 0
22 |
23 | RQ_SCHEDULER = 1
24 |
25 | TT_COROUTINE = 0
26 | TT_THREAD = 1
27 | TT_MP = 2
28 |
29 | TASK_STATUS_QUEUED = 0
30 | TASK_STATUS_DELAYED = 2
31 | TASK_STATUS_STARTED = 100
32 | TASK_STATUS_COMPLETED = 200
33 | TASK_STATUS_CANCELED = -1
34 |
35 | logger = logging.getLogger('atasker')
36 |
37 | default_poll_delay = 0.1
38 |
39 | thread_pool_default_size = multiprocessing.cpu_count() * 5
40 | mp_pool_default_size = multiprocessing.cpu_count()
41 | default_reserve_normal = 5
42 | default_reserve_high = 5
43 |
44 | default_timeout_warning = 5
45 | default_timeout_critical = 10
46 |
47 | _priorities = {
48 | TASK_LOW: 'TASK_LOW',
49 | TASK_NORMAL: 'TASK_NORMAL',
50 | TASK_HIGH: 'TASK_HIGH',
51 | TASK_CRITICAL: 'TASK_CRITICAL'
52 | }
53 |
54 |
55 | class Task:
56 |
57 | def __init__(self,
58 | tt,
59 | task_id=None,
60 | priority=TASK_NORMAL,
61 | target=None,
62 | args=(),
63 | kwargs={},
64 | callback=None,
65 | delay=None,
66 | worker=None,
67 | _send_task_id=True):
68 | self.id = task_id if task_id is not None else str(uuid.uuid4())
69 | self.tt = tt
70 | self.target = target
71 | self.args = args
72 | self.kwargs = kwargs if kwargs else {}
73 | if _send_task_id: self.kwargs['_task_id'] = self.id
74 | self.callback = callback
75 | self.priority = priority
76 | self.time_queued = None
77 | self.time_started = None
78 | self._tstarted = None
79 | self._tqueued = None
80 | self.status = TASK_STATUS_QUEUED
81 | self.delay = delay
82 | self.worker = worker
83 | self.started = threading.Event()
84 | self.completed = threading.Event()
85 | self.result = None
86 |
87 | def __cmp__(self, other):
88 | return cmp(self.priority, other.priority) if \
89 | other is not None else 1
90 |
91 | def __lt__(self, other):
92 | return (self.priority < other.priority) if \
93 | other is not None else True
94 |
95 | def __gt__(self, other):
96 | return (self.priority > other.priority) if \
97 | other is not None else True
98 |
99 | def is_started(self):
100 | return self.started.is_set()
101 |
102 | def is_completed(self):
103 | return self.completed.is_set()
104 |
105 | def mark_started(self):
106 | self.status = TASK_STATUS_STARTED
107 | self.started.set()
108 |
109 | def mark_completed(self):
110 | self.status = TASK_STATUS_COMPLETED
111 | self.completed.set()
112 |
113 |
114 | class ALoop:
115 |
116 | def __init__(self, name=None, supervisor=None):
117 | self.name = name if name else str(uuid.uuid4())
118 | self._active = False
119 | self.daemon = False
120 | self.poll_delay = default_poll_delay
121 | self.thread = None
122 | self.supervisor = supervisor
123 | self._started = threading.Event()
124 |
125 | async def _coro_task(self, task):
126 | task.time_queued = time.time()
127 | task.time_started = task.time_queued
128 | task._tstarted = time.perf_counter()
129 | task._tqueued = task._tstarted
130 | task.mark_started()
131 | task.result = await task.target
132 | task.mark_completed()
133 |
134 | def background_task(self, coro):
135 | if not self.is_active():
136 | raise RuntimeError('{} aloop {} is not active'.format(
137 | self.supervisor.id, self.name))
138 | task = Task(TT_COROUTINE, str(uuid.uuid4()), TASK_NORMAL, coro)
139 | asyncio.run_coroutine_threadsafe(self._coro_task(task), loop=self.loop)
140 | return task
141 |
142 | def run(self, coro):
143 | if not self.is_active():
144 | raise RuntimeError('{} aloop {} is not active'.format(
145 | self.supervisor.id, self.name))
146 | future = asyncio.run_coroutine_threadsafe(coro, loop=self.loop)
147 | return future.result()
148 |
149 | def start(self):
150 | if not self._active:
151 | self._started.clear()
152 | t = threading.Thread(name='supervisor_{}_aloop_{}'.format(
153 | self.supervisor.id, self.name),
154 | target=self._start_loop)
155 | t.setDaemon(self.daemon)
156 | t.start()
157 | self._started.wait()
158 |
159 | def get_loop(self):
160 | return None if not self._active else self.loop
161 |
162 | def _start_loop(self):
163 | self.loop = asyncio.new_event_loop()
164 | asyncio.set_event_loop(self.loop)
165 | try:
166 | self.loop.run_until_complete(self._loop())
167 | except CancelledError:
168 | logger.warning('supervisor {} aloop {} had active tasks'.format(
169 | self.supervisor.id, self.name))
170 |
171 | async def _loop(self):
172 | self._stop_event = asyncio.Event()
173 | self.thread = threading.current_thread()
174 | self._active = True
175 | logger.info('supervisor {} aloop {} started'.format(
176 | self.supervisor.id, self.name))
177 | self._started.set()
178 | await self._stop_event.wait()
179 | logger.info('supervisor {} aloop {} finished'.format(
180 | self.supervisor.id, self.name))
181 |
182 | def _cancel_all_tasks(self):
183 | for task in asyncio.Task.all_tasks(loop=self.loop):
184 | task.cancel()
185 |
186 | async def _set_stop_event(self):
187 | self._stop_event.set()
188 |
189 | def stop(self, wait=True, cancel_tasks=False):
190 | if self._active:
191 | if cancel_tasks:
192 | self._cancel_all_tasks()
193 | if debug:
194 | logger.debug(
195 | 'supervisor {} aloop {} remaining tasks canceled'.
196 | format(self.supervisor.id, self.name))
197 | if isinstance(wait, bool):
198 | to_wait = None
199 | else:
200 | to_wait = time.perf_counter() + wait
201 | self._active = False
202 | asyncio.run_coroutine_threadsafe(self._set_stop_event(),
203 | loop=self.loop)
204 | while True:
205 | if to_wait and time.perf_counter() > to_wait:
206 | logger.warning(
207 | ('supervisor {} aloop {} wait timeout, ' +
208 | 'canceling all tasks').format(self.supervisor.id,
209 | self.name))
210 | self._cancel_all_tasks()
211 | break
212 | else:
213 | can_break = True
214 | for t in asyncio.Task.all_tasks(self.loop):
215 | if not t.cancelled() and not t.done():
216 | can_break = False
217 | break
218 | if can_break: break
219 | time.sleep(self.poll_delay)
220 | if wait and self.thread:
221 | self.thread.join()
222 |
223 | def is_active(self):
224 | return self._active
225 |
226 |
227 | class TaskSupervisor:
228 |
229 | timeout_message = '{supervisor_id} task {task_id}: ' + \
230 | '{target} started in {time_spent:.3f} seconds. ' + \
231 | 'Increase pool size or decrease number of workers'
232 |
233 | def __init__(self, supervisor_id=None):
234 |
235 | self.poll_delay = default_poll_delay
236 |
237 | self.timeout_warning = default_timeout_warning
238 | self.timeout_warning_func = None
239 | self.timeout_critical = default_timeout_critical
240 | self.timeout_critical_func = None
241 | self.id = supervisor_id if supervisor_id else str(uuid.uuid4())
242 |
243 | self._active_threads = set()
244 | self._active_mps = set()
245 | self._active = False
246 | self._main_loop_active = False
247 | self._started = threading.Event()
248 | self._lock = threading.Lock()
249 | self._max_threads = {}
250 | self._max_mps = {}
251 | self._schedulers = {}
252 | self._tasks = {}
253 | self._Qt = {}
254 | self._Qmp = {}
255 | self.default_aloop = None
256 | self.default_async_job_scheduler = None
257 | self.mp_pool = None
258 | self.daemon = False
259 | self._processors_stopped = {}
260 | self.aloops = {}
261 | self.async_job_schedulers = {}
262 |
263 | self.set_thread_pool(pool_size=thread_pool_default_size,
264 | reserve_normal=default_reserve_normal,
265 | reserve_high=default_reserve_high,
266 | max_size=None)
267 |
268 | def set_thread_pool(self, **kwargs):
269 | for p in ['pool_size', 'reserve_normal', 'reserve_high']:
270 | if p in kwargs:
271 | setattr(self, 'thread_' + p, int(kwargs[p]))
272 | self._max_threads[TASK_LOW] = self.thread_pool_size
273 | self._max_threads[
274 | TASK_NORMAL] = self.thread_pool_size + self.thread_reserve_normal
275 | thc = self.thread_pool_size + \
276 | self.thread_reserve_normal + self.thread_reserve_high
277 | self._max_threads[TASK_HIGH] = thc
278 | self._prespawn_threads = kwargs.get('min_size', 0)
279 | max_size = kwargs.get('max_size')
280 | if not max_size:
281 | max_size = thc if self.thread_pool_size else \
282 | thread_pool_default_size
283 | if self._prespawn_threads == 'max':
284 | self._prespawn_threads = max_size
285 | elif max_size < self._prespawn_threads:
286 | raise ValueError(
287 | 'min pool size ({}) can not be larger than max ({})'.format(
288 | self._prespawn_threads, max_size))
289 | self.thread_pool = ThreadPoolExecutor(
290 | max_workers=max_size,
291 | thread_name_prefix='supervisor_{}_pool'.format(self.id))
292 | if self._max_threads[TASK_HIGH] > max_size:
293 | logger.warning(
294 | ('supervisor {} executor thread pool max size ({}) is ' +
295 | 'lower than reservations ({})').format(
296 | self.id, max_size, self._max_threads[TASK_HIGH]))
297 |
298 | def set_mp_pool(self, **kwargs):
299 | for p in ['pool_size', 'reserve_normal', 'reserve_high']:
300 | setattr(self, 'mp_' + p, int(kwargs.get(p, 0)))
301 | self._max_mps[TASK_LOW] = self.mp_pool_size
302 | self._max_mps[TASK_NORMAL] = self.mp_pool_size + self.mp_reserve_normal
303 | self._max_mps[TASK_HIGH] = self.mp_pool_size + \
304 | self.mp_reserve_normal + self.mp_reserve_high
305 | if not self.mp_pool:
306 | self.create_mp_pool(processes=self._max_mps[TASK_HIGH])
307 |
314 | def _higher_queues_busy(self, tt, task_priority):
315 | if tt == TT_THREAD:
316 | q = self._Qt
317 | elif tt == TT_MP:
318 | q = self._Qmp
319 | if task_priority == TASK_NORMAL:
320 | return not q[TASK_HIGH].empty()
321 | elif task_priority == TASK_LOW:
322 | return not q[TASK_HIGH].empty() or not q[TASK_NORMAL].empty()
323 | else:
324 | return False
325 |
326 | def spawn_thread(self, target, args=(), kwargs={}):
327 | return self.thread_pool.submit(target, *args, **kwargs)
328 |
329 | def put_task(self,
330 | target,
331 | args=(),
332 | kwargs={},
333 | callback=None,
334 | priority=TASK_NORMAL,
335 | delay=None,
336 | tt=TT_THREAD,
337 | task_id=None,
338 | worker=None,
339 | _send_task_id=True):
340 | if not self._started.is_set() or not self._active or target is None:
341 | return
342 | ti = Task(tt,
343 | task_id,
344 | priority=priority,
345 | target=target,
346 | args=args,
347 | kwargs=kwargs,
348 | callback=callback,
349 | delay=delay,
350 | worker=worker,
351 | _send_task_id=_send_task_id)
352 | ti.time_queued = time.time()
353 | ti._tqueued = time.perf_counter()
354 | with self._lock:
355 | self._tasks[ti.id] = ti
356 | if priority == TASK_CRITICAL:
357 | self.mark_task_started(ti)
358 | asyncio.run_coroutine_threadsafe(self._start_task(ti),
359 | loop=self.event_loop)
360 | else:
361 | if tt == TT_THREAD:
362 | q = self._Qt[priority]
363 | else:
364 | q = self._Qmp[priority]
365 | asyncio.run_coroutine_threadsafe(q.put(ti), loop=self.event_loop)
366 | return ti
367 |
368 | async def _task_processor(self, queue, priority, tt):
369 | logger.debug('supervisor {} task processor {}/{} started'.format(
370 | self.id, tt, _priorities[priority]))
371 | while True:
372 | task = await queue.get()
373 | if task is None: break
374 | if tt == TT_THREAD:
375 | pool_size = self.thread_pool_size
376 | elif tt == TT_MP:
377 | pool_size = self.mp_pool_size
378 | if pool_size:
379 | self._lock.acquire()
380 | try:
381 | if tt == TT_THREAD:
382 | mx = self._max_threads[priority]
383 | elif tt == TT_MP:
384 | mx = self._max_mps[priority]
385 | while (self._get_active_count(tt) >= mx or
386 | self._higher_queues_busy(tt, priority)):
387 | self._lock.release()
388 | await asyncio.sleep(self.poll_delay)
389 | self._lock.acquire()
390 | finally:
391 | self._lock.release()
392 | self.mark_task_started(task)
393 | self.event_loop.create_task(self._start_task(task))
394 | logger.debug('supervisor {} task processor {}/{} finished'.format(
395 | self.id, tt, _priorities[priority]))
396 | self._processors_stopped[(tt, priority)].set()
397 |
398 | def get_task(self, task_id):
399 | with self._lock:
400 | return self._tasks.get(task_id)
401 |
402 | def create_mp_pool(self, *args, **kwargs):
403 | if args or kwargs:
404 | self.mp_pool = multiprocessing.Pool(*args, **kwargs)
405 | else:
406 | self.mp_pool = multiprocessing.Pool(
407 | processes=multiprocessing.cpu_count())
408 |
409 | def register_scheduler(self, scheduler):
410 | if not self._started.is_set():
411 | return False
412 | asyncio.run_coroutine_threadsafe(self._Q.put(
413 | (RQ_SCHEDULER, scheduler, time.time())),
414 | loop=self.event_loop)
415 | return True
416 |
417 | def create_async_job(self, scheduler=None, **kwargs):
418 | if scheduler is None:
419 | scheduler = self.default_async_job_scheduler
420 | elif isinstance(scheduler, str):
421 | scheduler = self.async_job_schedulers[scheduler]
422 | return scheduler.create_threadsafe(**kwargs)
423 |
424 | def cancel_async_job(self, scheduler=None, job=None):
425 | if job:
426 | if scheduler is None:
427 | scheduler = self.default_async_job_scheduler
428 | elif isinstance(scheduler, str):
429 | scheduler = self.async_job_schedulers[scheduler]
430 | scheduler.cancel(job)
431 | else:
432 | logger.warning(('supervisor {} async job cancellation ' +
433 | 'requested but job not specified').format(self.id))
434 |
435 | def register_sync_scheduler(self, scheduler):
436 | with self._lock:
437 | self._schedulers[scheduler] = None
438 | return True
439 |
440 | def unregister_sync_scheduler(self, scheduler):
441 | with self._lock:
442 | try:
443 | del self._schedulers[scheduler]
444 | return True
445 | except:
446 | return False
447 |
448 | def unregister_scheduler(self, scheduler):
449 | with self._lock:
450 | if scheduler not in self._schedulers:
451 | return False
452 | else:
453 | self._schedulers[scheduler][1].cancel()
454 | del self._schedulers[scheduler]
455 | return True
456 |
457 | def _get_active_count(self, tt):
458 | if tt == TT_THREAD:
459 | return len(self._active_threads)
460 | elif tt == TT_MP:
461 | return len(self._active_mps)
462 |
463 | def create_aloop(self, name, daemon=False, start=True, default=False):
464 | if name == '__supervisor__':
465 | raise RuntimeError('Name "__supervisor__" is reserved')
466 | with self._lock:
467 | if name in self.aloops:
468 | logger.error('supervisor {} loop {} already exists'.format(
469 | self.id, name))
470 | return False
471 | l = ALoop(name, supervisor=self)
472 | l.daemon = daemon
473 | l.poll_delay = self.poll_delay
474 | with self._lock:
475 | self.aloops[name] = l
476 | if start:
477 | l.start()
478 | if default:
479 | self.set_default_aloop(l)
480 | return l
481 |
482 | def create_async_job_scheduler(self,
483 | name,
484 | aloop=None,
485 | start=True,
486 | default=False):
487 | """
488 | Create async job scheduler (aiosched.scheduler)
489 |
490 | ALoop must always be specified or default ALoop defined
491 | """
492 | if name == '__supervisor__':
493 | raise RuntimeError('Name "__supervisor__" is reserved')
494 | with self._lock:
495 | if name in self.async_job_schedulers:
496 | logger.error(
497 | 'supervisor {} async job_scheduler {} already exists'.
498 | format(self.id, name))
499 | return False
500 | l = AsyncJobScheduler(name)
501 | if aloop is None:
502 | aloop = self.default_aloop
503 | elif not isinstance(aloop, ALoop):
504 | aloop = self.get_aloop(aloop)
505 | loop = aloop.get_loop()
506 | with self._lock:
507 | self.async_job_schedulers[name] = l
508 | if default:
509 | self.set_default_async_job_scheduler(l)
510 | if start:
511 | l.set_loop(loop)
512 | l._aloop = aloop
513 | aloop.background_task(l.scheduler_loop())
514 | else:
515 | l.set_loop(loop)
516 | return l
517 |
518 | def set_default_aloop(self, aloop):
519 | self.default_aloop = aloop
520 |
521 | def set_default_async_job_scheduler(self, scheduler):
522 | self.default_async_job_scheduler = scheduler
523 |
524 | def get_aloop(self, name=None, default=True):
525 | with self._lock:
526 | if name is not None:
527 | return self.aloops.get(name)
528 | elif default:
529 | return self.default_aloop
530 |
531 | def start_aloop(self, name):
532 | with self._lock:
533 | if name not in self.aloops:
534 | logger.error('supervisor {} loop {} not found'.format(
535 | self.id, name))
536 | return False
537 | else:
538 | self.aloops[name].start()
539 | return True
540 |
541 | def stop_aloop(self, name, wait=True, cancel_tasks=False, _lock=True):
542 | if _lock:
543 | self._lock.acquire()
544 | try:
545 | if name not in self.aloops:
546 | logger.error('supervisor {} loop {} not found'.format(
547 | self.id, name))
548 | return False
549 | else:
550 | self.aloops[name].stop(wait=wait, cancel_tasks=cancel_tasks)
551 | return True
552 | finally:
553 | if _lock:
554 | self._lock.release()
555 |
556 | def get_info(self,
557 | tt=None,
558 | aloops=True,
559 | schedulers=True,
560 | async_job_schedulers=True):
561 |
562 | class SupervisorInfo:
563 | pass
564 |
565 | result = SupervisorInfo()
566 | with self._lock:
567 | result.id = self.id
568 | result.active = self._active
569 | result.started = self._started.is_set()
570 | for p in ['pool_size', 'reserve_normal', 'reserve_high']:
571 | if tt == TT_THREAD or tt is None or tt is False:
572 | setattr(result, 'thread_' + p, getattr(self, 'thread_' + p))
573 | if self.mp_pool and (tt == TT_MP or tt is None or tt is False):
574 | setattr(result, 'mp_' + p, getattr(self, 'mp_' + p))
575 | if tt == TT_THREAD or tt is None or tt is False:
576 | if tt is not False:
577 | result.thread_tasks = list(self._active_threads)
578 | result.thread_tasks_count = len(self._active_threads)
579 | if tt == TT_MP or tt is None or tt is False:
580 | if tt is not False:
581 | result.mp_tasks = list(self._active_mps)
582 | result.mp_tasks_count = len(self._active_mps)
583 | if aloops:
584 | result.aloops = self.aloops.copy()
585 | if schedulers:
586 | result.schedulers = self._schedulers.copy()
587 | if async_job_schedulers:
588 | result.async_job_schedulers = self.async_job_schedulers.copy()
589 | if tt is not False:
590 | result.tasks = {}
591 | for n, v in self._tasks.items():
592 | if tt is None or v.tt == tt:
593 | result.tasks[n] = v
594 | return result
595 |
596 | def get_aloops(self):
597 | with self._lock:
598 | return self.aloops.copy()
599 |
600 | def get_schedulers(self):
601 | with self._lock:
602 | return self._schedulers.copy()
603 |
604 | def get_tasks(self, tt=None):
605 | result = {}
606 | with self._lock:
607 | for n, v in self._tasks.items():
608 | if tt is None or v.tt == tt:
609 | result[n] = v
610 | return result
611 |
612 | def mark_task_started(self, task):
613 | with self._lock:
614 | if task.tt == TT_THREAD:
615 | self._active_threads.add(task.id)
616 | if debug:
617 | logger.debug(
618 | ('supervisor {} new task {}: {}, {}' +
619 | ' thread pool size: {} / {}').format(
620 | self.id, task.id, task.target,
621 | _priorities[task.priority],
622 | len(self._active_threads), self.thread_pool_size))
623 | elif task.tt == TT_MP:
624 | self._active_mps.add(task.id)
625 | if debug:
626 | logger.debug(('supervisor {} new task {}: {}, {}' +
627 | ' mp pool size: {} / {}').format(
628 | self.id, task.id, task.target,
629 | _priorities[task.priority],
630 | len(self._active_mps), self.mp_pool_size))
631 |
632 | async def _start_task(self, task):
633 | with self._lock:
634 | task.time_started = time.time()
635 | task._tstarted = time.perf_counter()
636 | if not task.delay:
637 | task.mark_started()
638 | if task.delay:
639 | task.status = TASK_STATUS_DELAYED
640 | await asyncio.sleep(task.delay)
641 | task.mark_started()
642 | if task.tt == TT_THREAD:
643 | self.thread_pool.submit(task.target, *task.args, **task.kwargs)
644 | elif task.tt == TT_MP:
645 | self.mp_pool.apply_async(task.target, task.args, task.kwargs,
646 | task.callback)
647 | time_spent = task._tstarted - task._tqueued
648 | if time_spent > self.timeout_critical:
649 | logger.critical(
650 | self.timeout_message.format(supervisor_id=self.id,
651 | task_id=task.id,
652 | target=task.target,
653 | time_spent=time_spent))
654 | if self.timeout_critical_func:
    | self.timeout_critical_func(task)
655 | elif time_spent > self.timeout_warning:
656 | logger.warning(
657 | self.timeout_message.format(supervisor_id=self.id,
658 | task_id=task.id,
659 | target=task.target,
660 | time_spent=time_spent))
661 | if self.timeout_warning_func:
    | self.timeout_warning_func(task)
662 |
663 | def mark_task_completed(self, task=None, task_id=None):
664 | with self._lock:
665 | if task is None:
666 | try:
667 | task = self._tasks[task_id]
668 | except:
669 | raise LookupError('supervisor {} task {} not found'.format(
670 | self.id, task_id))
671 | task_id = task.id
672 | tt = task.tt
673 | if tt == TT_THREAD:
674 | if task_id in self._active_threads:
675 | self._active_threads.remove(task_id)
676 | if debug:
677 | logger.debug(('supervisor {} removed task {}:' +
678 | ' {}, thread pool size: {} / {}').format(
679 | self.id, task_id, task,
680 | len(self._active_threads),
681 | self.thread_pool_size))
682 | task.mark_completed()
683 | del self._tasks[task_id]
684 | elif tt == TT_MP:
685 | if task_id in self._active_mps:
686 | self._active_mps.remove(task_id)
687 | if debug:
688 | logger.debug(('supervisor {} removed task {}:' +
689 | ' {} mp pool size: {} / {}').format(
690 | self.id, task_id, task,
691 | len(self._active_mps),
692 | self.mp_pool_size))
693 | task.mark_completed()
694 | del self._tasks[task_id]
695 | return True
696 |
697 | def start(self, daemon=None):
698 |
699 | def _prespawn():
700 | pass
701 |
702 | self._active = True
703 | self._main_loop_active = True
704 | t = threading.Thread(
705 | name='supervisor_{}_event_loop'.format(self.id),
706 | target=self._start_event_loop,
707 | daemon=daemon if daemon is not None else self.daemon)
708 | t.start()
709 | for i in range(self._prespawn_threads):
710 | self.thread_pool.submit(_prespawn)
711 | self._started.wait()
712 |
713 | def block(self):
714 | while self._started.is_set():
715 | time.sleep(0.1)
716 |
717 | async def _launch_scheduler_loop(self, scheduler):
718 | try:
719 | t = scheduler.worker_loop.create_task(scheduler.loop())
720 | with self._lock:
721 | self._schedulers[scheduler] = (scheduler, t)
722 | if hasattr(scheduler, 'extra_loops'):
723 | for l in scheduler.extra_loops:
724 | scheduler.worker_loop.create_task(getattr(scheduler, l)())
725 | await t
726 | except CancelledError:
727 | pass
728 | except Exception as e:
729 | logger.error(e)
730 |
731 | async def _main_loop(self):
732 | self._Q = asyncio.queues.Queue()
733 | for p in (TASK_LOW, TASK_NORMAL, TASK_HIGH):
734 | self._Qt[p] = asyncio.queues.Queue()
735 | self._processors_stopped[(TT_THREAD, p)] = asyncio.Event()
736 | self.event_loop.create_task(
737 | self._task_processor(self._Qt[p], p, TT_THREAD))
738 | if self.mp_pool:
739 | for p in (TASK_LOW, TASK_NORMAL, TASK_HIGH):
740 | self._Qmp[p] = asyncio.queues.Queue()
741 | self._processors_stopped[(TT_MP, p)] = asyncio.Event()
742 | self.event_loop.create_task(
743 | self._task_processor(self._Qmp[p], p, TT_MP))
744 | self._started.set()
745 | logger.info('supervisor {} event loop started'.format(self.id))
746 | while self._main_loop_active:
747 | data = await self._Q.get()
748 | try:
749 | if data is None: break
750 | r, res, t_put = data
751 | if r == RQ_SCHEDULER:
752 | if debug:
753 | logger.debug('supervisor {} new scheduler {}'.format(
754 | self.id, res))
755 | asyncio.run_coroutine_threadsafe(
756 | self._launch_scheduler_loop(res), loop=res.worker_loop)
757 | finally:
758 | self._Q.task_done()
759 | for i, t in self._processors_stopped.items():
760 | await t.wait()
761 | logger.info('supervisor {} event loop finished'.format(self.id))
762 |
763 | def _start_event_loop(self):
764 | if self._active:
765 | self.event_loop = asyncio.new_event_loop()
766 | asyncio.set_event_loop(self.event_loop)
767 | mp = ', mp pool: {} + {} RN + {} RH'.format(
768 | self.mp_pool_size, self.mp_reserve_normal,
769 | self.mp_reserve_high) if hasattr(self, 'mp_pool_size') else ''
770 | logger.info(
771 | ('supervisor {} started, thread pool: ' +
772 | '{} + {} RN + {} RH{}').format(self.id, self.thread_pool_size,
773 | self.thread_reserve_normal,
774 | self.thread_reserve_high, mp))
775 | try:
776 | self.event_loop.run_until_complete(self._main_loop())
777 | except CancelledError:
778 | logger.warning('supervisor {} loop had active tasks'.format(
779 | self.id))
780 |
781 | def _cancel_all_tasks(self):
782 | with self._lock:
783 | for task in asyncio.Task.all_tasks(loop=self.event_loop):
784 | task.cancel()
785 |
786 | def _stop_schedulers(self, wait=True):
787 | with self._lock:
788 | schedulers = self._schedulers.copy()
789 | for s in schedulers:
790 | s.stop(wait=wait)
791 |
792 | def _stop_async_job_schedulers(self, wait=True):
793 | with self._lock:
794 | schedulers = self.async_job_schedulers.copy().items()
795 | for i, s in schedulers:
796 | try:
797 | s.stop(wait=wait)
798 | except:
799 | pass
800 |
801 | def stop(self, wait=True, stop_schedulers=True, cancel_tasks=False):
802 | self._active = False
803 | if isinstance(wait, bool):
804 | to_wait = None
805 | else:
806 | to_wait = time.perf_counter() + wait
807 | if stop_schedulers:
808 | self._stop_async_job_schedulers(wait)
809 | self._stop_schedulers(True if wait else False)
810 | if debug:
811 | logger.debug('supervisor {} schedulers stopped'.format(self.id))
812 | with self._lock:
813 | for i, l in self.aloops.items():
814 | self.stop_aloop(i,
815 | wait=wait,
816 | cancel_tasks=cancel_tasks,
817 | _lock=False)
818 | if debug:
819 | logger.debug('supervisor {} async loops stopped'.format(
820 | self.id))
821 | if (to_wait or wait is True) and not cancel_tasks:
822 | while True:
823 | with self._lock:
824 | if not self._tasks:
825 | break
826 | time.sleep(self.poll_delay)
827 | if to_wait and time.perf_counter() > to_wait: break
828 | if debug:
829 | logger.debug('supervisor {} no task in queues'.format(self.id))
830 | if to_wait or wait is True:
831 | if debug:
832 | logger.debug('supervisor {} waiting for tasks to finish'.format(
833 | self.id))
834 | while True:
835 | if not self._active_threads:
836 | break
837 | if to_wait and time.perf_counter() > to_wait:
838 | logger.warning(
839 | ('supervisor {} wait timeout, ' +
840 | 'skipping, hope threads will finish').format(self.id))
841 | break
842 | time.sleep(self.poll_delay)
843 | if cancel_tasks:
844 | self._cancel_all_tasks()
845 | if debug:
846 | logger.debug('supervisor {} remaining tasks canceled'.format(
847 | self.id))
848 | if to_wait or wait is True:
849 | while True:
850 | with self._lock:
851 | if (not self._active_threads and not self._active_mps) or (
852 | to_wait and time.perf_counter() > to_wait):
853 | break
854 | time.sleep(self.poll_delay)
855 | if debug:
856 | logger.debug('supervisor {} no active threads/mps'.format(self.id))
857 | if debug:
858 | logger.debug('supervisor {} stopping event loop'.format(self.id))
859 | asyncio.run_coroutine_threadsafe(self._Q.put(None),
860 | loop=self.event_loop)
861 | for p in (TASK_LOW, TASK_NORMAL, TASK_HIGH):
862 | asyncio.run_coroutine_threadsafe(self._Qt[p].put(None),
863 | loop=self.event_loop)
864 | if self.mp_pool:
865 | for p in (TASK_LOW, TASK_NORMAL, TASK_HIGH):
866 | asyncio.run_coroutine_threadsafe(self._Qmp[p].put(None),
867 | loop=self.event_loop)
868 | self._main_loop_active = False
869 | if wait is True or to_wait:
870 | while True:
871 | if to_wait and time.perf_counter() > to_wait:
872 | logger.warning(
873 | 'supervisor {} wait timeout, canceling all tasks'.
874 | format(self.id))
875 | self._cancel_all_tasks()
876 | break
877 | else:
878 | can_break = True
879 | for t in asyncio.Task.all_tasks(self.event_loop):
880 | if not t.cancelled() and not t.done():
881 | can_break = False
882 | break
883 | if can_break: break
884 | time.sleep(self.poll_delay)
885 | with self._lock:
886 | for i, v in self._tasks.items():
887 | v.status = TASK_STATUS_CANCELED
888 | self._started.clear()
889 | self.thread_pool.shutdown()
890 | logger.info('supervisor {} stopped'.format(self.id))
891 |
--------------------------------------------------------------------------------
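
A short sketch of driving a `TaskSupervisor` directly through `put_task()`
(the function `job` is illustrative). By default `put_task()` passes the task
id to the target as the `_task_id` keyword argument, and the task must be
explicitly marked completed:

```python
from atasker.supervisor import TaskSupervisor

sv = TaskSupervisor(supervisor_id='jobs')
sv.set_thread_pool(pool_size=10, reserve_normal=2, reserve_high=2)
sv.start()

def job(_task_id=None):
    print('working')
    sv.mark_task_completed(task_id=_task_id)

task = sv.put_task(target=job)
task.completed.wait()  # Task exposes 'started' and 'completed' threading.Event objects
sv.stop()
```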
/atasker/threads.py:
--------------------------------------------------------------------------------
1 | __author__ = 'Altertech Group, https://www.altertech.com/'
2 | __copyright__ = 'Copyright (C) 2018-2019 Altertech Group'
3 | __license__ = 'Apache License 2.0'
4 | __version__ = "0.7.9"
5 |
6 | import threading
7 | import time
8 | import uuid
9 | import logging
10 | import asyncio
11 |
12 | from functools import wraps
13 |
14 | from atasker import task_supervisor
15 |
16 | from atasker import TASK_NORMAL
17 | from atasker import TT_THREAD, TT_MP, TT_COROUTINE
18 |
19 | from atasker.supervisor import ALoop, Task
20 |
21 | logger = logging.getLogger('atasker')
22 |
23 |
24 | class LocalProxy(threading.local):
25 | """
26 | Simple proxy for threading.local namespace
27 | """
28 |
29 | def get(self, attr, default=None):
30 | """
31 | Get thread-local attribute
32 |
33 | Args:
34 | attr: attribute name
35 | default: default value if attribute is not set
36 |
37 | Returns:
38 | attribute value or default value
39 | """
40 | return getattr(self, attr, default)
41 |
42 | def has(self, attr):
43 | """
44 | Check if thread-local attribute exists
45 |
46 | Args:
47 | attr: attribute name
48 |
49 | Returns:
50 | True if attribute exists, False if not
51 | """
52 | return hasattr(self, attr)
53 |
54 | def set(self, attr, value):
55 | """
56 | Set thread-local attribute
57 |
58 | Args:
59 | attr: attribute name
60 | value: attribute value to set
61 | """
62 | return setattr(self, attr, value)
63 |
64 | def clear(self, attr):
65 | """
66 | Clear (delete) thread-local attribute
67 |
68 | Args:
69 | attr: attribute name
70 | """
71 | return delattr(self, attr) if hasattr(self, attr) else True
72 |
73 |
74 | class Locker:
75 | """
76 | Locker helper/decorator
77 |
78 | Args:
79 | mod: module name (for logging only)
80 | timeout: max lock timeout before critical (default: 5 sec)
81 | relative: True for RLock (default), False for Lock
82 | """
83 |
84 | def __init__(self, mod='main', timeout=5, relative=True):
85 | self.lock = threading.RLock() if relative else threading.Lock()
86 | self.mod = mod
87 | self.relative = relative
88 | self.timeout = timeout
89 |
90 | def __call__(self, f):
91 |
92 | @wraps(f)
93 | def do(*args, **kwargs):
94 | if not self.lock.acquire(timeout=self.timeout):
95 | logger.critical('{}/{} locking broken'.format(
96 | self.mod, f.__name__))
97 | self.critical()
98 | return None
99 | try:
100 | return f(*args, **kwargs)
101 | finally:
102 | self.lock.release()
103 |
104 | return do
105 |
106 | def __enter__(self):
107 | """
108 | Raises:
109 | TimeoutError: if lock not acquired
110 | """
111 | if not self.lock.acquire(timeout=self.timeout):
112 | logger.critical('{} locking broken'.format(self.mod))
113 | self.critical()
114 | raise TimeoutError
115 |
116 | def __exit__(self, *args, **kwargs):
117 | self.lock.release()
118 |
119 | def critical(self):
120 | """
121 | Override this
122 | """
123 | pass
124 |
125 |
126 | def background_task(f, *args, **kwargs):
127 | """
128 | Wrap a function into a background task
129 |
130 | Args:
131 | f: task function
132 | priority: task :ref:`priority`
133 | delay: startup delay
134 | supervisor: custom :doc:`task supervisor`
135 | tt: TT_THREAD (default) or TT_MP (TT_COROUTINE is detected
136 | automatically)
137 | callback: callback function for TT_MP
138 | loop: asyncio loop or aloop object (optional)
139 |
140 | Raises:
141 | RuntimeError: if coroutine function is used but loop is not specified
142 | and supervisor doesn't have default aloop
143 | """
144 |
145 | def gen_mp_callback(task_id, callback, supervisor):
146 |
147 | def cbfunc(*args, **kwargs):
148 | if callable(callback):
149 | callback(*args, **kwargs)
150 | if args:
151 | supervisor.get_task(task_id).result = args[0]
152 | supervisor.mark_task_completed(task_id=task_id)
153 |
154 | return cbfunc
155 |
156 | @wraps(f)
157 | def start_task(*args, **kw):
158 | tt = kwargs.get('tt', TT_THREAD)
159 | supervisor = kwargs.get('supervisor', task_supervisor)
160 | if tt == TT_COROUTINE or asyncio.iscoroutinefunction(f):
161 | loop = kwargs.get('loop')
162 | if isinstance(loop, str) or loop is None:
163 | loop = supervisor.get_aloop(loop)
164 | if not loop:
165 | raise RuntimeError('loop not specified')
166 | if isinstance(loop, ALoop):
167 | return loop.background_task(f(*args, **kw))
168 | else:
169 | return asyncio.run_coroutine_threadsafe(f(*args, **kw),
170 | loop=loop)
171 | elif tt == TT_THREAD:
172 | task_id = str(uuid.uuid4())
173 | if kwargs.get('daemon'):
174 | logger.warning('daemon argument is obsolete')
175 | return supervisor.put_task(target=_background_task_thread_runner,
176 | args=(f, supervisor, task_id) + args,
177 | kwargs=kw,
178 | priority=kwargs.get(
179 | 'priority', TASK_NORMAL),
180 | delay=kwargs.get('delay'),
181 | task_id=task_id,
182 | _send_task_id=False)
183 |
184 | elif tt == TT_MP:
185 | task_id = str(uuid.uuid4())
186 | return supervisor.put_task(
187 | target=f,
188 | args=args,
189 | kwargs=kw,
190 | callback=gen_mp_callback(task_id, kwargs.get('callback'),
191 | supervisor),
192 | priority=kwargs.get('priority', TASK_NORMAL),
193 | delay=kwargs.get('delay'),
194 | task_id=task_id,
195 | tt=TT_MP,
196 | _send_task_id=False)
197 |
198 | return start_task
199 |
200 |
201 | def _background_task_thread_runner(f, supervisor, task_id, *args, **kwargs):
202 | try:
203 | supervisor.get_task(task_id).result = f(*args, **kwargs)
204 | finally:
205 | supervisor.mark_task_completed(task_id=task_id)
206 |
207 |
208 | def wait_completed(tasks, timeout=None):
209 | '''
210 | Wait for a task or a list of tasks to complete and return result(s); raises TimeoutError
211 | '''
212 | t_to = (time.perf_counter() + timeout) if timeout else None
213 | for t in [tasks] if isinstance(tasks, Task) else tasks:
214 | if timeout:
215 | t_wait = t_to - time.perf_counter()
216 | if t_wait <= 0: raise TimeoutError
217 | else:
218 | t_wait = None
219 | if not t.completed.wait(timeout=t_wait):
220 | raise TimeoutError
221 | return tasks.result if isinstance(tasks,
222 | Task) else [x.result for x in tasks]
223 |
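224 | # Usage sketch for background_task + wait_completed (comment only, not library
225 | # code; assumes the default task supervisor is started):
226 | #
227 | #     from atasker import background_task, wait_completed
228 | #
229 | #     @background_task
230 | #     def job():
231 | #         return 42
232 | #
233 | #     result = wait_completed(job(), timeout=5)  # -> 42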
--------------------------------------------------------------------------------
/atasker/workers.py:
--------------------------------------------------------------------------------
1 | __author__ = 'Altertech Group, https://www.altertech.com/'
2 | __copyright__ = 'Copyright (C) 2018-2019 Altertech Group'
3 | __license__ = 'Apache License 2.0'
4 | __version__ = "0.7.9"
5 |
6 | import threading
7 | import logging
8 | import uuid
9 | import time
10 | import asyncio
11 | import queue
12 | import types
13 |
14 | from atasker import task_supervisor
15 |
16 | from atasker import TASK_NORMAL
17 | from atasker.supervisor import TT_COROUTINE, TT_THREAD, TT_MP, ALoop
18 |
19 | logger = logging.getLogger('atasker')
20 |
21 | debug = False
22 |
23 |
24 | class BackgroundWorker:
25 |
26 | # ----- override this -----
27 |
28 | def run(self, *args, **kwargs):
29 | raise Exception('not implemented')
30 |
31 | def before_start(self):
32 | pass
33 |
34 | def send_stop_events(self):
35 | pass
36 |
37 | def after_start(self):
38 | pass
39 |
40 | def before_stop(self):
41 | pass
42 |
43 | def after_stop(self):
44 | pass
45 |
46 | # -----------------------
47 |
48 | def __init__(self, name=None, executor_func=None, **kwargs):
49 | if executor_func:
50 | self.run = executor_func
51 | self._can_use_mp_pool = False
52 | else:
53 | self._can_use_mp_pool = not asyncio.iscoroutinefunction(self.run)
54 | self._current_executor = None
55 | self._active = False
56 | self._started = threading.Event()
57 | self._stopped = threading.Event()
58 | self.priority = kwargs.get('priority', TASK_NORMAL)
59 | self.o = kwargs.get('o')
60 | self.on_error = kwargs.get('on_error')
61 | self.on_error_kwargs = kwargs.get('on_error_kwargs', {})
62 | self.supervisor = kwargs.get('supervisor', task_supervisor)
63 | self.poll_delay = kwargs.get('poll_delay', self.supervisor.poll_delay)
64 | self.set_name(name)
65 | self._task_args = ()
66 | self._task_kwargs = {}
67 | self.start_stop_lock = threading.Lock()
68 | self._suppress_sleep = False
69 | self.last_executed = 0
70 | self._executor_stop_event = threading.Event()
71 | self._is_worker = True
72 | if kwargs.get('daemon'):
73 | logger.warning('daemon argument is obsolete')
74 |
75 | def set_name(self, name):
76 | self.name = '_background_worker_%s' % (name if name is not None else
77 | uuid.uuid4())
78 |
79 | def restart(self, *args, **kwargs):
80 | """
81 | Restart worker, all arguments will be passed to executor function as-is
82 |
83 | Args:
84 | wait: if True, wait until worker is stopped
85 | """
86 | self.stop(wait=kwargs.get('wait'))
87 | self.start(*args, **kwargs)
88 |
89 | def is_active(self):
90 | """
91 | Check if worker is active
92 |
93 | Returns:
94 | True if worker is active, otherwise False
95 | """
96 | return self._active
97 |
98 | def is_started(self):
99 | """
100 | Check if worker is started
101 | """
102 | return self._started.is_set()
103 |
104 | def is_stopped(self):
105 | """
106 | Check if worker is stopped
107 | """
108 | return self._stopped.is_set()
109 |
110 | def error(self):
111 | if self.on_error:
112 | self.on_error(**self.on_error_kwargs)
113 | else:
114 | raise
115 |
116 | def _send_executor_stop_event(self):
117 | self._executor_stop_event.set()
118 |
119 | def start(self, *args, **kwargs):
120 | """
121 | Start worker, all arguments will be passed to executor function as-is
122 | """
123 | if self._active:
124 | return False
125 | self.start_stop_lock.acquire()
126 | try:
127 | self.before_start()
128 | self._active = True
129 | self._started.clear()
130 | self._stopped.clear()
131 | kw = kwargs.copy()
132 | if '_priority' in kw:
133 | self.priority = kw['_priority']
134 | self._run_in_mp = isinstance(
135 | self.run, types.FunctionType
136 | ) and self.supervisor.mp_pool and self._can_use_mp_pool
137 | if self._run_in_mp:
138 | if debug: logger.debug(self.name + ' will use mp pool')
139 | else:
140 | kw['_worker'] = self
141 | if '_name' not in kw:
142 | kw['_name'] = self.name
143 | if 'o' not in kw:
144 | kw['o'] = self.o
145 | self._task_args = args
146 | self._task_kwargs = kw
147 | self._start(*args, **kwargs)
148 | self.after_start()
149 | return True
150 | finally:
151 | self.start_stop_lock.release()
152 |
153 | def _start(self, *args, **kwargs):
154 | self.supervisor.put_task(target=self.loop,
155 | args=self._task_args,
156 | kwargs=self._task_kwargs,
157 | priority=self.priority,
158 | worker=self)
159 | self._started.wait()
160 | self.supervisor.register_sync_scheduler(self)
161 |
162 | def _abort(self):
163 | self.mark_stopped()
164 | self.stop(wait=False)
165 |
166 | def _cb_mp(self, result):
167 | self.supervisor.mark_task_completed(task=self._current_executor)
168 | if self.process_result(result) is False:
169 | self._abort()
170 | self._current_executor = None
171 | self._send_executor_stop_event()
172 |
173 | def process_result(self, result):
174 | pass
175 |
176 | def loop(self, *args, **kwargs):
177 | self.mark_started()
178 | while self._active:
179 | try:
180 | self.last_executed = time.perf_counter()
181 | if self._run_in_mp:
182 | self._current_executor = self.run
183 | self.supervisor.mp_pool.apply_async(self.run, args, kwargs,
184 | self._cb_mp)
185 | self._executor_stop_event.wait()
186 | self._executor_stop_event.clear()
187 | else:
188 | if self.run(*args, **kwargs) is False:
189 | return self._abort()
190 | except:
191 | self.error()
192 | self.mark_stopped()
193 | self.supervisor.mark_task_completed(task_id=kwargs['_task_id'])
194 |
195 | def mark_started(self):
196 | self._started.set()
197 | self._stopped.clear()
198 | if debug: logger.debug(self.name + ' started')
199 |
200 | def mark_stopped(self):
201 | self._stopped.set()
202 | self._started.clear()
203 | if debug: logger.debug(self.name + ' stopped')
204 |
205 | def stop(self, wait=True):
206 | """
207 | Stop worker
208 | """
209 | if self._active:
210 | self.start_stop_lock.acquire()
211 | try:
212 | self.before_stop()
213 | self._active = False
214 | self.send_stop_events()
215 | if wait:
216 | self.wait_until_stop()
217 | self._stop(wait=wait)
218 | self.after_stop()
219 | finally:
220 | self.start_stop_lock.release()
221 |
222 | def _stop(self, **kwargs):
223 | self.supervisor.unregister_sync_scheduler(self)
224 |
225 | def wait_until_stop(self):
226 | self._stopped.wait()
227 |
228 |
229 | class BackgroundAsyncWorker(BackgroundWorker):
230 |
231 | def __init__(self, *args, **kwargs):
232 | super().__init__(*args, **kwargs)
233 | self.executor_loop = kwargs.get('loop')
234 | self.aloop = None
235 |
236 | def _register(self):
237 | if asyncio.iscoroutinefunction(self.run):
238 | if not self.executor_loop:
239 | logger.warning(
240 | ('{}: no executor loop defined, ' +
241 | 'will start executor in supervisor event loop').format(
242 | self.name))
243 | self.executor_loop = self.supervisor.event_loop
244 | self.worker_loop = self.executor_loop
245 | else:
246 | self.worker_loop = self.supervisor.event_loop
247 | self.supervisor.register_scheduler(self)
248 | self._started.wait()
249 |
250 | def _start(self, *args, **kwargs):
251 | self.executor_loop = kwargs.get('_loop', self.executor_loop)
252 | if isinstance(self.executor_loop, str):
253 | self.executor_loop = self.supervisor.get_aloop(self.executor_loop)
254 | elif not self.executor_loop and self.supervisor.default_aloop:
255 | self.executor_loop = self.supervisor.default_aloop
256 | if isinstance(self.executor_loop, ALoop):
257 | self.aloop = self.executor_loop
258 | self.executor_loop = self.executor_loop.get_loop()
259 | self._register()
260 |
261 | def _stop(self, *args, **kwargs):
262 | self.supervisor.unregister_scheduler(self)
263 |
264 | def mark_started(self):
265 | self._executor_stop_event = asyncio.Event()
266 | super().mark_started()
267 |
268 | async def loop(self, *args, **kwargs):
269 | self.mark_started()
270 | while self._active:
271 | if self._current_executor:
272 | await self._executor_stop_event.wait()
273 | self._executor_stop_event.clear()
274 | if self._active:
275 | if not await self.launch_executor():
276 | break
277 | else:
278 | break
279 | await asyncio.sleep(self.supervisor.poll_delay)
280 | self.mark_stopped()
281 |
282 | def _run(self, *args, **kwargs):
283 | try:
284 | try:
285 | if self.run(*args, **kwargs) is False:
286 | self._abort()
287 | except:
288 | self.error()
289 | finally:
290 | self.supervisor.mark_task_completed(task=self._current_executor)
291 | self._current_executor = None
292 | self._send_executor_stop_event()
293 |
294 | def _send_executor_stop_event(self):
295 | asyncio.run_coroutine_threadsafe(self._set_stop_event(),
296 | loop=self.worker_loop)
297 |
298 | async def _set_stop_event(self):
299 | self._executor_stop_event.set()
300 |
301 | async def launch_executor(self, *args, **kwargs):
302 | self.last_executed = time.perf_counter()
303 | if asyncio.iscoroutinefunction(self.run):
304 | self._current_executor = self.run
305 | try:
306 | result = await self.run(*(args + self._task_args),
307 | **self._task_kwargs)
308 | except:
309 | self.error()
310 | result = None
311 | self._current_executor = None
312 | if result is False: self._abort()
313 | return result is not False and self._active
314 | elif self._run_in_mp:
315 | task = self.supervisor.put_task(target=self.run,
316 | args=args + self._task_args,
317 | kwargs=self._task_kwargs,
318 | callback=self._cb_mp,
319 | priority=self.priority,
320 | tt=TT_MP,
321 | worker=self)
322 | self._current_executor = task
323 | return task is not None and self._active
324 | else:
325 | task = self.supervisor.put_task(target=self._run,
326 | args=args + self._task_args,
327 | kwargs=self._task_kwargs,
328 | callback=self._cb_mp,
329 | priority=self.priority,
330 | tt=TT_THREAD,
331 | worker=self)
332 | self._current_executor = task
333 | return task is not None and self._active
334 |
335 |
336 | class BackgroundQueueWorker(BackgroundAsyncWorker):
337 |
338 | def __init__(self, *args, **kwargs):
339 | super().__init__(*args, **kwargs)
340 | q = kwargs.get('q', kwargs.get('queue'))
341 | if isinstance(q, type):
342 | self._qclass = q
343 | else:
344 | self._qclass = asyncio.queues.Queue
345 |
346 | def put_threadsafe(self, t):
347 | asyncio.run_coroutine_threadsafe(self._Q.put(t), loop=self.worker_loop)
348 |
349 | async def put(self, t):
350 | await self._Q.put(t)
351 |
352 | def send_stop_events(self):
353 | try:
354 | self.put_threadsafe(None)
355 | except:
356 | pass
357 |
358 | def _stop(self, *args, **kwargs):
359 | super()._stop(*args, **kwargs)
360 |
361 | def before_queue_get(self):
362 | pass
363 |
364 | def after_queue_get(self, task):
365 | pass
366 |
367 | async def loop(self, *args, **kwargs):
368 | self._Q = self._qclass()
369 | self.mark_started()
370 | while self._active:
371 | self.before_queue_get()
372 | task = await self._Q.get()
373 | self.after_queue_get(task)
374 | try:
375 | if self._current_executor:
376 | await self._executor_stop_event.wait()
377 | self._executor_stop_event.clear()
378 | if self._active and task is not None:
379 | if not await self.launch_executor(task):
380 | break
381 | else:
382 | break
383 | if not self._suppress_sleep:
384 | await asyncio.sleep(self.supervisor.poll_delay)
385 | finally:
386 | self._Q.task_done()
387 | self.mark_stopped()
388 |
389 | def get_queue_obj(self):
390 | return self._Q
391 |
392 |
393 | class BackgroundEventWorker(BackgroundAsyncWorker):
394 |
395 | def trigger_threadsafe(self, force=False):
396 | if not self._current_executor or force:
397 | asyncio.run_coroutine_threadsafe(self._set_event(),
398 | loop=self.worker_loop)
399 |
400 | async def trigger(self, force=False):
401 | if not self._current_executor or force:
402 | await self._set_event()
403 |
404 | async def _set_event(self):
405 | self._E.set()
406 |
407 | async def loop(self, *args, **kwargs):
408 | self._E = asyncio.Event()
409 | self.mark_started()
410 | while self._active:
411 | if self._current_executor:
412 | await self._executor_stop_event.wait()
413 | self._executor_stop_event.clear()
414 | await self._E.wait()
415 | self._E.clear()
416 | if not self._active or not await self.launch_executor():
417 | break
418 | if not self._suppress_sleep:
419 | await asyncio.sleep(self.supervisor.poll_delay)
420 | self.mark_stopped()
421 |
422 | def send_stop_events(self, *args, **kwargs):
423 | try:
424 | self.trigger_threadsafe(force=True)
425 | except:
426 | pass
427 |
428 | def get_event_obj(self):
429 | return self._E
430 |
431 |
432 | class BackgroundIntervalWorker(BackgroundEventWorker):
433 |
434 | def __init__(self, *args, **kwargs):
435 | super().__init__(*args, **kwargs)
436 | self.delay_before = kwargs.get('delay_before')
437 | self.delay = kwargs.get(
438 | 'interval', kwargs.get('delay', kwargs.get('delay_after', 1)))
439 | if 'interval' in kwargs:
440 | self.keep_interval = True
441 | else:
442 | self.keep_interval = False
443 | self.extra_loops = ['interval_loop']
444 | self._suppress_sleep = True
445 | self._interval_loop_stopped = threading.Event()
446 |
447 | def _start(self, *args, **kwargs):
448 | self.delay_before = kwargs.get('_delay_before', self.delay_before)
449 | self.delay = kwargs.get(
450 | '_interval',
451 | kwargs.get('_delay', kwargs.get('_delay_after', self.delay)))
452 | if '_interval' in kwargs:
453 | self.keep_interval = True
454 | super()._start(*args, **kwargs)
455 | return True
456 |
457 | def before_start(self):
458 | super().before_start()
459 | self._interval_loop_stopped.clear()
460 |
461 | def wait_until_stop(self):
462 | super().wait_until_stop()
463 | self._interval_loop_stopped.wait()
464 |
465 | async def interval_loop(self, *args, **kwargs):
466 | while self._active:
467 | if self.keep_interval: tstart = time.perf_counter()
468 | if self._current_executor:
469 | await self._executor_stop_event.wait()
470 | self._executor_stop_event.clear()
471 | if self.delay_before:
472 | await asyncio.sleep(self.delay_before)
473 | if not self._active:
474 | break
475 | if not self._active or not await self.launch_executor():
476 | break
477 | if not self.delay and not self.delay_before:
478 | tts = self.poll_delay
479 | elif self.keep_interval:
480 | tts = self.delay + tstart - time.perf_counter()
481 | else:
482 | tts = self.delay
483 | if self._current_executor:
484 | await self._executor_stop_event.wait()
485 | self._executor_stop_event.clear()
486 | if tts > 0:
487 | if tts < 0.1:
488 | await asyncio.sleep(tts)
489 | else:
490 | ttsi = int(tts)
491 | while self.last_executed + ttsi >= time.perf_counter():
492 | await asyncio.sleep(0.1)
493 | if not self._active:
494 | self._interval_loop_stopped.set()
495 | return
496 | await asyncio.sleep(tts - ttsi)
497 | self._interval_loop_stopped.set()
498 |
499 |
500 | def background_worker(*args, **kwargs):
501 |
502 | def decorator(f, **kw):
503 | func = f
504 | kw = kw.copy() if kw else kwargs
505 | if kwargs.get('q') or kwargs.get('queue'):
506 | C = BackgroundQueueWorker
507 | elif kwargs.get('e') or kwargs.get('event'):
508 | C = BackgroundEventWorker
509 | elif kwargs.get('i') or \
510 | kwargs.get('interval') or \
511 | kwargs.get('delay') or kwargs.get('delay_before'):
512 | C = BackgroundIntervalWorker
513 | elif asyncio.iscoroutinefunction(func):
514 | C = BackgroundAsyncWorker
515 | else:
516 | C = BackgroundWorker
517 | if 'name' in kw:
518 | name = kw['name']
519 | del kw['name']
520 | else:
521 | name = func.__name__
522 | f = C(name=name, **kw)
523 | f.run = func
524 | f._can_use_mp_pool = False
525 | return f
526 |
527 | return decorator if not args else decorator(args[0], **kwargs)
528 |
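529 | # Usage sketch for background_worker (comment only, not library code; assumes
530 | # the default task supervisor is started, see README):
531 | #
532 | #     from atasker import background_worker
533 | #
534 | #     @background_worker(interval=1)
535 | #     def ping(**kwargs):
536 | #         print('tick')
537 | #
538 | #     ping.start()
539 | #     # ...
540 | #     ping.stop()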
--------------------------------------------------------------------------------
/doc/.gitignore:
--------------------------------------------------------------------------------
1 | _build
2 |
--------------------------------------------------------------------------------
/doc/Makefile:
--------------------------------------------------------------------------------
1 | # Makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line.
5 | SPHINXOPTS =
6 | SPHINXBUILD = sphinx-build
7 | PAPER =
8 | BUILDDIR = _build
9 | TERM = linux
10 |
11 | # User-friendly check for sphinx-build
12 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
13 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
14 | endif
15 |
16 | # Internal variables.
17 | PAPEROPT_a4 = -D latex_paper_size=a4
18 | PAPEROPT_letter = -D latex_paper_size=letter
19 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
20 | # the i18n builder cannot share the environment and doctrees with the others
21 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
22 |
23 | .PHONY: help
24 | help:
25 | @echo "Please use \`make ' where is one of"
26 | @echo " html to make standalone HTML files"
27 | @echo " dirhtml to make HTML files named index.html in directories"
28 | @echo " singlehtml to make a single large HTML file"
29 | @echo " pickle to make pickle files"
30 | @echo " json to make JSON files"
31 | @echo " htmlhelp to make HTML files and a HTML help project"
32 | @echo " qthelp to make HTML files and a qthelp project"
33 | @echo " applehelp to make an Apple Help Book"
34 | @echo " devhelp to make HTML files and a Devhelp project"
35 | @echo " epub to make an epub"
36 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
37 | @echo " latexpdf to make LaTeX files and run them through pdflatex"
38 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
39 | @echo " text to make text files"
40 | @echo " man to make manual pages"
41 | @echo " texinfo to make Texinfo files"
42 | @echo " info to make Texinfo files and run them through makeinfo"
43 | @echo " gettext to make PO message catalogs"
44 | @echo " changes to make an overview of all changed/added/deprecated items"
45 | @echo " xml to make Docutils-native XML files"
46 | @echo " pseudoxml to make pseudoxml-XML files for display purposes"
47 | @echo " linkcheck to check all external links for integrity"
48 | @echo " doctest to run all doctests embedded in the documentation (if enabled)"
49 | @echo " coverage to run coverage check of the documentation (if enabled)"
50 |
51 | .PHONY: clean
52 | clean:
53 | rm -rf $(BUILDDIR)
54 |
55 | .PHONY: html
56 | html:
57 | pandoc --from=markdown --to=rst --output=readme.rst ../README.md
58 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
59 | @echo
60 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
61 |
62 | .PHONY: dirhtml
63 | dirhtml:
64 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
65 | @echo
66 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
67 |
68 | .PHONY: singlehtml
69 | singlehtml:
70 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
71 | @echo
72 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
73 |
74 | .PHONY: pickle
75 | pickle:
76 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
77 | @echo
78 | @echo "Build finished; now you can process the pickle files."
79 |
80 | .PHONY: json
81 | json:
82 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
83 | @echo
84 | @echo "Build finished; now you can process the JSON files."
85 |
86 | .PHONY: htmlhelp
87 | htmlhelp:
88 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
89 | @echo
90 | @echo "Build finished; now you can run HTML Help Workshop with the" \
91 | ".hhp project file in $(BUILDDIR)/htmlhelp."
92 |
93 | .PHONY: qthelp
94 | qthelp:
95 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
96 | @echo
97 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \
98 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
99 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/EVAICS.qhcp"
100 | @echo "To view the help file:"
101 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/EVAICS.qhc"
102 |
103 | .PHONY: applehelp
104 | applehelp:
105 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
106 | @echo
107 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
108 | @echo "N.B. You won't be able to view it unless you put it in" \
109 | "~/Library/Documentation/Help or install it in your application" \
110 | "bundle."
111 |
112 | .PHONY: devhelp
113 | devhelp:
114 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
115 | @echo
116 | @echo "Build finished."
117 | @echo "To view the help file:"
118 | @echo "# mkdir -p $$HOME/.local/share/devhelp/EVAICS"
119 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/EVAICS"
120 | @echo "# devhelp"
121 |
122 | .PHONY: epub
123 | epub:
124 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
125 | @echo
126 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
127 |
128 | .PHONY: latex
129 | latex:
130 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
131 | @echo
132 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
133 | @echo "Run \`make' in that directory to run these through (pdf)latex" \
134 | "(use \`make latexpdf' here to do that automatically)."
135 |
136 | .PHONY: latexpdf
137 | latexpdf:
138 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
139 | @echo "Running LaTeX files through pdflatex..."
140 | $(MAKE) -C $(BUILDDIR)/latex all-pdf
141 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
142 |
143 | .PHONY: latexpdfja
144 | latexpdfja:
145 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
146 | @echo "Running LaTeX files through platex and dvipdfmx..."
147 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
148 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
149 |
150 | .PHONY: text
151 | text:
152 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
153 | @echo
154 | @echo "Build finished. The text files are in $(BUILDDIR)/text."
155 |
156 | .PHONY: man
157 | man:
158 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
159 | @echo
160 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
161 |
162 | .PHONY: texinfo
163 | texinfo:
164 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
165 | @echo
166 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
167 | @echo "Run \`make' in that directory to run these through makeinfo" \
168 | "(use \`make info' here to do that automatically)."
169 |
170 | .PHONY: info
171 | info:
172 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
173 | @echo "Running Texinfo files through makeinfo..."
174 | make -C $(BUILDDIR)/texinfo info
175 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
176 |
177 | .PHONY: gettext
178 | gettext:
179 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
180 | @echo
181 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
182 |
183 | .PHONY: changes
184 | changes:
185 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
186 | @echo
187 | @echo "The overview file is in $(BUILDDIR)/changes."
188 |
189 | .PHONY: linkcheck
190 | linkcheck:
191 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
192 | @echo
193 | @echo "Link check complete; look for any errors in the above output " \
194 | "or in $(BUILDDIR)/linkcheck/output.txt."
195 |
196 | .PHONY: doctest
197 | doctest:
198 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
199 | @echo "Testing of doctests in the sources finished, look at the " \
200 | "results in $(BUILDDIR)/doctest/output.txt."
201 |
202 | .PHONY: coverage
203 | coverage:
204 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
205 | @echo "Testing of coverage in the sources finished, look at the " \
206 | "results in $(BUILDDIR)/coverage/python.txt."
207 |
208 | .PHONY: xml
209 | xml:
210 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
211 | @echo
212 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml."
213 |
214 | .PHONY: pseudoxml
215 | pseudoxml:
216 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
217 | @echo
218 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
219 |
220 | commit:
221 | cd .. && make commit
222 |
--------------------------------------------------------------------------------
/doc/async_jobs.rst:
--------------------------------------------------------------------------------
1 | Async jobs
2 | **********
3 |
4 | **atasker** has built-in integration with `aiosched
5 | <https://github.com/alttch/aiosched>`_ - a simple and fast async job scheduler.
6 |
7 | **aiosched** schedulers can be automatically started inside
8 | :ref:`aloop`:
9 |
10 | .. code:: python
11 |
12 | async def test1():
13 | print('I am lightweight async job')
14 |
15 | task_supervisor.create_aloop('jobs')
16 | # if aloop id not specified, default aloop is used
17 | task_supervisor.create_async_job_scheduler('default', aloop='jobs',
18 | default=True)
19 | # create async job
20 | job1 = task_supervisor.create_async_job(target=test1, interval=0.1)
21 | # cancel async job
22 | task_supervisor.cancel_async_job(job=job1)
23 |
24 | .. note::
25 | **aiosched** jobs are lightweight: they don't report any statistical data and
26 | don't check whether the job is already running.
27 |
--------------------------------------------------------------------------------
/doc/collections.rst:
--------------------------------------------------------------------------------
1 | Task collections
2 | ****************
3 |
4 | Task collections are useful when you need to run a pack of tasks, e.g. on
5 | program startup or shutdown. Currently, collections support running task
6 | functions only, either in the foreground (one-by-one) or as threads.
7 |
8 | Function priority can be specified either as *TASK_\** (e.g. *TASK_NORMAL*) or
9 | as a number (lower = higher priority).
10 |
11 | .. automodule:: atasker
12 |
13 | FunctionCollection
14 | ==================
15 |
16 | Simple collection of functions.
17 |
18 | .. code:: python
19 |
20 | from atasker import FunctionCollection, TASK_LOW, TASK_HIGH
21 |
22 | def error(**kwargs):
23 | import traceback
24 | traceback.print_exc()
25 |
26 | startup = FunctionCollection(on_error=error)
27 |
28 | @startup
29 | def f1():
30 | return 1
31 |
32 | @startup(priority=TASK_HIGH)
33 | def f2():
34 | return 2
35 |
36 | @startup(priority=TASK_LOW)
37 | def f3():
38 | return 3
39 |
40 | result, all_ok = startup.execute()
41 |
42 | .. autoclass:: FunctionCollection
43 | :members:
44 |
45 | TaskCollection
46 | ==============
47 |
48 | Same as function collection, but stored functions are started as tasks in
49 | threads.
50 |
51 | Methods *execute()* and *run()* return result when all tasks in collection are
52 | finished.
53 |
54 | .. autoclass:: TaskCollection
55 | :inherited-members:
56 | :members:
57 | :show-inheritance:
58 |
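59 | A minimal *TaskCollection* sketch, mirroring the *FunctionCollection* example
60 | above (the task supervisor must be started first):
61 |
62 | .. code:: python
63 |
64 |     from atasker import TaskCollection
65 |
66 |     shutdown = TaskCollection()
67 |
68 |     @shutdown
69 |     def close_db():
70 |         return 1
71 |
72 |     @shutdown
73 |     def close_sessions():
74 |         return 2
75 |
76 |     # the functions are started as tasks in threads, execute() returns
77 |     # when all of them are finished
78 |     result, all_ok = shutdown.execute()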
--------------------------------------------------------------------------------
/doc/conf.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | # __version__ = "1.0.0"
4 |
5 | # -*- coding: utf-8 -*-
6 | #
7 | # This file is execfile()d with the current directory set to its
8 | # containing dir.
9 | #
10 | # Note that not all possible configuration values are present in this
11 | # autogenerated file.
12 | #
13 | # All configuration values have a default; values that are commented out
14 | # serve to show the default.
15 |
16 | import sys
17 | import os
18 | from pathlib import Path
19 |
20 | # httpexample_scheme = 'https'
21 |
22 | # If extensions (or modules to document with autodoc) are in another directory,
23 | # add these directories to sys.path here. If the directory is relative to the
24 | # documentation root, use os.path.abspath to make it absolute, like shown here.
25 | #sys.path.insert(0, os.path.abspath('.'))
26 |
27 | sys.path.insert(0, Path().absolute().parent.as_posix())
28 |
29 | # -- General configuration ------------------------------------------------
30 |
31 | # If your documentation needs a minimal Sphinx version, state it here.
32 | #needs_sphinx = '1.0'
33 |
34 | # Add any Sphinx extension module names here, as strings. They can be
35 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 | # ones.
37 | # extensions = []
38 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon']
39 |
40 | napoleon_google_docstring = True
41 | napoleon_include_init_with_doc = False
42 | napoleon_include_private_with_doc = False
43 | napoleon_include_special_with_doc = True
44 | napoleon_use_admonition_for_examples = False
45 | napoleon_use_admonition_for_notes = False
46 | napoleon_use_admonition_for_references = False
47 | napoleon_use_ivar = False
48 | napoleon_use_param = True
49 | napoleon_use_rtype = True
50 |
51 | autoclass_content = 'both'
52 |
53 | # html_theme = 'groundwork'
54 |
55 |
56 | # source_parsers = {'.md': CommonMarkParser}
57 |
58 | source_suffix = ['.rst']
59 |
60 | # Add any paths that contain templates here, relative to this directory.
61 | templates_path = ['_templates']
62 |
63 | # The suffix(es) of source filenames.
64 | # You can specify multiple suffix as a list of string:
65 |
66 | # The encoding of source files.
67 | #source_encoding = 'utf-8-sig'
68 |
69 | # The master toctree document.
70 | master_doc = 'index'
71 |
72 | # General information about the project.
73 | project = 'atasker'
74 | copyright = '2019, AlterTech'
75 | author = 'AlterTech'
76 |
77 | # The version info for the project you're documenting, acts as replacement for
78 | # |version| and |release|, also used in various other places throughout the
79 | # built documents.
80 | #
81 | # The short X.Y version.
82 | # version = __version__
83 | # The full version, including alpha/beta/rc tags.
84 | # release = version
85 |
86 | # The language for content autogenerated by Sphinx. Refer to documentation
87 | # for a list of supported languages.
88 | #
89 | # This is also used if you do content translation via gettext catalogs.
90 | # Usually you set "language" from the command line for these cases.
91 | language = None
92 |
93 | # There are two options for replacing |today|: either, you set today to some
94 | # non-false value, then it is used:
95 | #today = ''
96 | # Else, today_fmt is used as the format for a strftime call.
97 | #today_fmt = '%B %d, %Y'
98 |
99 | # List of patterns, relative to source directory, that match files and
100 | # directories to ignore when looking for source files.
101 | exclude_patterns = ['_build']
102 |
103 | # The reST default role (used for this markup: `text`) to use for all
104 | # documents.
105 | #default_role = None
106 |
107 | # If true, '()' will be appended to :func: etc. cross-reference text.
108 | #add_function_parentheses = True
109 |
110 | # If true, the current module name will be prepended to all description
111 | # unit titles (such as .. function::).
112 | #add_module_names = True
113 |
114 | # If true, sectionauthor and moduleauthor directives will be shown in the
115 | # output. They are ignored by default.
116 | #show_authors = False
117 |
118 | # The name of the Pygments (syntax highlighting) style to use.
119 | pygments_style = 'sphinx'
120 |
121 | # A list of ignored prefixes for module index sorting.
122 | #modindex_common_prefix = []
123 |
124 | # If true, keep warnings as "system message" paragraphs in the built documents.
125 | #keep_warnings = False
126 |
127 | # If true, `todo` and `todoList` produce output, else they produce nothing.
128 | todo_include_todos = False
129 |
130 | # -- Options for HTML output ----------------------------------------------
131 |
132 | # The theme to use for HTML and HTML Help pages. See the documentation for
133 | # a list of builtin themes.
134 | html_theme = 'sphinx_rtd_theme'
135 | # html_theme = 'alabaster'
136 |
137 | html_theme_options = {
138 | # 'canonical_url': '',
139 | # 'analytics_id': '',
140 | 'prev_next_buttons_location': None,
141 | 'logo_only': True,
142 | 'display_version': False,
143 | # 'prev_next_buttons_location': 'bottom',
144 | # 'style_external_links': False,
145 | # 'vcs_pageview_mode': '',
146 | 'collapse_navigation': True,
147 | 'sticky_navigation': True,
148 | # 'navigation_depth': 4,
149 | # 'includehidden': True,
150 | # 'titles_only': False
151 | }
152 |
153 | # html_style = None
154 | # Theme options are theme-specific and customize the look and feel of a theme
155 | # further. For a list of options available for each theme, see the
156 | # documentation.
157 | # Add any paths that contain custom themes here, relative to this directory.
158 | #html_theme_path = []
159 |
160 | # The name for this set of Sphinx documents. If None, it defaults to
161 | # " v documentation".
162 | #html_title = None
163 |
164 | # A shorter title for the navigation bar. Default is the same as html_title.
165 | #html_short_title = None
166 |
167 | # The name of an image file (relative to this directory) to place at the top
168 | # of the sidebar.
169 | # html_logo = 'images/logo.png'
170 |
171 | # The name of an image file (relative to this directory) to use as a favicon of
172 | # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
173 | # pixels large.
174 | #html_favicon = None
175 |
176 | # Add any paths that contain custom static files (such as style sheets) here,
177 | # relative to this directory. They are copied after the builtin static files,
178 | # so a file named "default.css" will overwrite the builtin "default.css".
179 | html_static_path = ['_static']
180 |
181 | # Add any extra paths that contain custom files (such as robots.txt or
182 | # .htaccess) here, relative to this directory. These files are copied
183 | # directly to the root of the documentation.
184 | #html_extra_path = []
185 |
186 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
187 | # using the given strftime format.
188 | #html_last_updated_fmt = '%b %d, %Y'
189 |
190 | # If true, SmartyPants will be used to convert quotes and dashes to
191 | # typographically correct entities.
192 | #html_use_smartypants = True
193 |
194 | # Custom sidebar templates, maps document names to template names.
195 | #html_sidebars = {}
196 |
197 | # Additional templates that should be rendered to pages, maps page names to
198 | # template names.
199 | #html_additional_pages = {}
200 |
201 | html_add_permalinks = ''
202 |
203 | # If false, no module index is generated.
204 | #html_domain_indices = True
205 |
206 | # If false, no index is generated.
207 | #html_use_index = True
208 |
209 | # If true, the index is split into individual pages for each letter.
210 | #html_split_index = False
211 |
212 | # If true, links to the reST sources are added to the pages.
213 | html_show_sourcelink = False
214 |
215 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
216 | #html_show_sphinx = True
217 |
218 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
219 | #html_show_copyright = True
220 |
221 | # If true, an OpenSearch description file will be output, and all pages will
222 | # contain a <link> tag referring to it. The value of this option must be the
223 | # base URL from which the finished HTML is served.
224 | #html_use_opensearch = ''
225 |
226 | # This is the file name suffix for HTML files (e.g. ".xhtml").
227 | #html_file_suffix = None
228 |
229 | # Language to be used for generating the HTML full-text search index.
230 | # Sphinx supports the following languages:
231 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
232 | # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
233 | #html_search_language = 'en'
234 |
235 | # A dictionary with options for the search language support, empty by default.
236 | # Now only 'ja' uses this config value
237 | #html_search_options = {'type': 'default'}
238 |
239 | # The name of a javascript file (relative to the configuration directory) that
240 | # implements a search results scorer. If empty, the default will be used.
241 | #html_search_scorer = 'scorer.js'
242 |
243 | # Output file base name for HTML help builder.
244 | htmlhelp_basename = 'ataskerdoc'
245 |
246 | # -- Options for LaTeX output ---------------------------------------------
247 |
248 | latex_elements = {
249 | # The paper size ('letterpaper' or 'a4paper').
250 | #'papersize': 'letterpaper',
251 |
252 | # The font size ('10pt', '11pt' or '12pt').
253 | #'pointsize': '10pt',
254 |
255 | # Additional stuff for the LaTeX preamble.
256 | #'preamble': '',
257 |
258 | # Latex figure (float) alignment
259 | #'figure_align': 'htbp',
260 | }
261 |
262 | # Grouping the document tree into LaTeX files. List of tuples
263 | # (source start file, target name, title,
264 | # author, documentclass [howto, manual, or own class]).
265 |
266 | # The name of an image file (relative to this directory) to place at the top of
267 | # the title page.
268 | #latex_logo = None
269 |
270 | # For "manual" documents, if this is true, then toplevel headings are parts,
271 | # not chapters.
272 | #latex_use_parts = False
273 |
274 | # If true, show page references after internal links.
275 | #latex_show_pagerefs = False
276 |
277 | # If true, show URL addresses after external links.
278 | #latex_show_urls = False
279 |
280 | # Documents to append as an appendix to all manuals.
281 | #latex_appendices = []
282 |
283 | # If false, no module index is generated.
284 | #latex_domain_indices = True
285 |
286 | # -- Options for manual page output ---------------------------------------
287 |
288 | # One entry per manual page. List of tuples
289 | # (source start file, name, description, authors, manual section).
290 |
291 | # If true, show URL addresses after external links.
292 | #man_show_urls = False
293 |
294 | # -- Options for Texinfo output -------------------------------------------
295 |
296 | # Grouping the document tree into Texinfo files. List of tuples
297 | # (source start file, target name, title, author,
298 | # dir menu entry, description, category)
299 | # Documents to append as an appendix to all manuals.
300 |
301 | #texinfo_appendices = []
302 |
303 | # If false, no module index is generated.
304 | #texinfo_domain_indices = True
305 |
306 | # How to display URL addresses: 'footnote', 'no', or 'inline'.
307 | #texinfo_show_urls = 'footnote'
308 |
309 | # If true, do not generate a @detailmenu in the "Top" node's menu.
310 | #texinfo_no_detailmenu = False
311 |
312 | # rst_epilog = """
313 | # .. |Version| replace:: {versionnum}
314 | # """.format(
315 | # versionnum=version,)
316 |
--------------------------------------------------------------------------------
/doc/debug.rst:
--------------------------------------------------------------------------------
1 | Debugging
2 | *********
3 |
4 | The library uses logger "atasker" to log all events.
5 |
6 | Additionally, for debug messages, method *atasker.set_debug()* should be called.
7 |
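8 | A minimal sketch which enables debug output:
9 |
10 | .. code:: python
11 |
12 |     import logging
13 |     import atasker
14 |
15 |     # the "atasker" logger must be allowed to pass DEBUG records
16 |     logging.basicConfig(level=logging.DEBUG)
17 |     atasker.set_debug()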
--------------------------------------------------------------------------------
/doc/index.rst:
--------------------------------------------------------------------------------
1 | .. include:: readme.rst
2 |
3 | .. toctree::
4 | :maxdepth: 1
5 |
6 | supervisor
7 | tasks
8 | async_jobs
9 | workers
10 | collections
11 | localproxy
12 | locker
13 | debug
14 |
--------------------------------------------------------------------------------
/doc/localproxy.rst:
--------------------------------------------------------------------------------
1 | Thread local proxy
2 | ******************
3 |
4 | .. code:: python
5 |
6 | from atasker import g
7 |
8 | if not g.has('db'):
9 | g.set('db', <new db connection>)
10 |
11 | Supports methods:
12 |
13 | .. automodule:: atasker
14 | .. autoclass:: LocalProxy
15 | :members:
16 |
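17 | A short usage sketch (values set are visible only in the current thread):
18 |
19 | .. code:: python
20 |
21 |     from atasker import g
22 |
23 |     g.set('counter', 1)
24 |     g.get('counter')      # 1
25 |     g.get('missing', 42)  # 42 (default value)
26 |     g.has('counter')      # True
27 |     g.clear('counter')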
--------------------------------------------------------------------------------
/doc/locker.rst:
--------------------------------------------------------------------------------
1 | Locker helper/decorator
2 | ***********************
3 |
4 | .. code:: python
5 |
6 | from atasker import Locker
7 |
8 | def critical_exception():
9 | # do something, e.g. restart/kill myself
10 | import os, signal
11 | os.kill(os.getpid(), signal.SIGKILL)
12 |
13 | lock1 = Locker(mod='main', timeout=5)
14 | lock1.critical = critical_exception
15 |
16 | # use as a decorator
17 | @lock1
18 | def test():
19 |     ...  # thread-safe access to resources locked with lock1
20 |
21 | # use as a context manager
22 | with lock1:
23 |     ...  # thread-safe access to resources locked with lock1
24 |
25 |
26 | Supports methods:
27 |
28 | .. automodule:: atasker
29 | .. autoclass:: Locker
30 | :members:
31 |
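32 | Instead of assigning the *critical* handler as in the example above, it may
33 | also be overridden in a subclass (a sketch):
34 |
35 | .. code:: python
36 |
37 |     from atasker import Locker
38 |
39 |     class MyLocker(Locker):
40 |
41 |         def critical(self):
42 |             # called when the lock can not be acquired within the timeout
43 |             print('{}: lock is broken'.format(self.mod))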
--------------------------------------------------------------------------------
/doc/readme.rst:
--------------------------------------------------------------------------------
1 | atasker
2 | =======
3 |
4 | Python library for modern thread / multiprocessing pooling and task
5 | processing via asyncio.
6 |
7 | No matter how your code is written, atasker automatically detects
8 | blocking functions and coroutines and launches them in a proper way: in
9 | a thread, in an asynchronous loop or in a multiprocessing pool.
10 |
11 | Tasks are grouped into pools. If there’s no space in the pool, a task is
12 | placed into the waiting queue according to its priority. The pool also has a
13 | “reserve” for the tasks with priorities “normal” and higher. Tasks with
14 | “critical” priority are always executed instantly.
15 |
16 | This library is useful if you have a project with many similar tasks
17 | which produce approximately equal CPU/memory load, e.g. API responses,
18 | scheduled resource state updates etc.
19 |
20 | Install
21 | -------
22 |
23 | .. code:: bash
24 |
25 | pip3 install atasker
26 |
27 | Sources: https://github.com/alttch/atasker
28 |
29 | Documentation: https://atasker.readthedocs.io/
30 |
31 | Why
32 | ---
33 |
34 | - asynchronous programming is a perfect way to make your code fast and
35 | reliable
36 |
37 | - multithreading programming is a perfect way to run blocking code in
38 | the background
39 |
40 | **atasker** combines advantages of both ways: atasker tasks run in
41 | separate threads; however, the task supervisor and workers are completely
42 | asynchronous, and all their public methods are thread-safe.
43 |
44 | Why not standard Python thread pool?
45 | ------------------------------------
46 |
47 | - threads in a standard pool don’t have priorities
48 | - no workers
49 |
50 | Why not standard asyncio loops?
51 | -------------------------------
52 |
53 | - no native compatibility with blocking functions
54 | - no async workers
55 |
56 | Why not concurrent.futures?
57 | ---------------------------
58 |
59 | **concurrent.futures** is a great standard Python library which allows
60 | you to execute specified tasks in a pool of workers.
61 |
62 | **atasker** method *background_task* solves the same problem but in a
63 | slightly different way, adding priorities to the tasks, while *atasker*
64 | workers do a completely different job:
65 |
66 | - in *concurrent.futures*, a worker is a pool member which executes a
67 | single specified task.
68 |
69 | - in *atasker*, a worker is an object which continuously *generates* new
70 | tasks with the specified interval or on external event, and executes
71 | them in thread or multiprocessing pool.
72 |
73 | Code examples
74 | -------------
75 |
76 | Start/stop
77 | ~~~~~~~~~~
78 |
79 | .. code:: python
80 |
81 |
82 | from atasker import task_supervisor
83 |
84 | # set pool size
85 | task_supervisor.set_thread_pool(pool_size=20, reserve_normal=5, reserve_high=5)
86 | task_supervisor.start()
87 | # ...
88 | # start workers, other threads etc.
89 | # ...
90 | # optionally block current thread
91 | task_supervisor.block()
92 |
93 | # stop from any thread
94 | task_supervisor.stop()
95 |
96 | Background task
97 | ~~~~~~~~~~~~~~~
98 |
99 | .. code:: python
100 |
101 | from atasker import background_task, TASK_LOW, TASK_HIGH, wait_completed
102 |
103 | # with annotation
104 | @background_task
105 | def mytask():
106 | print('I am working in the background!')
107 | return 777
108 |
109 | task = mytask()
110 |
111 | # optional
112 | result = wait_completed(task)
113 |
114 | print(task.result) # 777
115 | print(result) # 777
116 |
117 | # with manual decoration
118 | def mytask2():
119 | print('I am working in the background too!')
120 |
121 | task = background_task(mytask2, priority=TASK_HIGH)()
122 |
123 | Async tasks
124 | ~~~~~~~~~~~
125 |
126 | .. code:: python
127 |
128 | # new asyncio loop is automatically created in own thread
129 | a1 = task_supervisor.create_aloop('myaloop', default=True)
130 |
131 | async def calc(a):
132 | print(a)
133 | await asyncio.sleep(1)
134 | print(a * 2)
135 | return a * 3
136 |
137 | # call from sync code
138 |
139 | # put coroutine
140 | task = background_task(calc)(1)
141 |
142 | wait_completed(task)
143 |
144 | # run coroutine and wait for result
145 | result = a1.run(calc(1))
146 |
147 | Worker examples
148 | ~~~~~~~~~~~~~~~
149 |
150 | .. code:: python
151 |
152 | from atasker import background_worker, TASK_HIGH
153 |
154 | @background_worker
155 | def worker1(**kwargs):
156 | print('I am a simple background worker')
157 |
158 | @background_worker
159 | async def worker_async(**kwargs):
160 | print('I am async background worker')
161 |
162 | @background_worker(interval=1)
163 | def worker2(**kwargs):
164 | print('I run every second!')
165 |
166 | @background_worker(queue=True)
167 | def worker3(task, **kwargs):
168 | print('I run when there is a task in my queue')
169 |
170 | @background_worker(event=True, priority=TASK_HIGH)
171 | def worker4(**kwargs):
172 | print('I run when triggered with high priority')
173 |
174 | worker1.start()
175 | worker_async.start()
176 | worker2.start()
177 | worker3.start()
178 | worker4.start()
179 |
180 | worker3.put('todo1')
181 | worker4.trigger()
182 |
183 | from atasker import BackgroundIntervalWorker
184 |
185 | class MyWorker(BackgroundIntervalWorker):
186 |
187 | def run(self, **kwargs):
188 | print('I am custom worker class')
189 |
190 | worker5 = MyWorker(interval=0.1, name='worker5')
191 | worker5.start()
192 |
--------------------------------------------------------------------------------
/doc/req.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alttch/atasker/8cfda3b5f20b672e5c11d12deb3205c39724a0c4/doc/req.txt
--------------------------------------------------------------------------------
/doc/supervisor.rst:
--------------------------------------------------------------------------------
1 | Task supervisor
2 | ***************
3 |
4 | Task supervisor is a component which manages the task thread pool and runs
5 | task :doc:`schedulers (workers) <workers>`.
6 |
7 | .. contents::
8 |
9 | Usage
10 | =====
11 |
12 | When **atasker** package is imported, default task supervisor is automatically
13 | created.
14 |
15 | .. code:: python
16 |
17 | from atasker import task_supervisor
18 |
19 | # thread pool
20 | task_supervisor.set_thread_pool(
21 | pool_size=20, reserve_normal=5, reserve_high=5)
22 | task_supervisor.start()
23 |
24 | .. warning::
25 |
26 | Task supervisor must be started before any scheduler/worker or task.
27 |
28 | .. _priorities:
29 |
30 | Task priorities
31 | ===============
32 |
33 | Task supervisor supports 4 task priorities:
34 |
35 | * TASK_LOW
36 | * TASK_NORMAL (default)
37 | * TASK_HIGH
38 | * TASK_CRITICAL
39 |
40 | .. code:: python
41 |
42 | from atasker import TASK_HIGH
43 |
44 | def test():
45 | pass
46 |
47 | background_task(test, name='test', priority=TASK_HIGH)()
48 |
49 | Pool size
50 | =========
51 |
52 | Parameter **pool_size** for **task_supervisor.set_thread_pool** defines size of
53 | the task (thread) pool.
54 |
55 | Pool size means the maximum number of concurrent tasks which can run. If the
56 | task supervisor receives more tasks than the pool size allows, they will wait until
57 | some running task is finished.
58 |
59 | Actually, parameter **pool_size** defines the pool size for tasks started with
60 | *TASK_LOW* priority. Tasks with higher priority have "reserves": *pool_size=20,
61 | reserve_normal=5* means create a pool for 20 tasks but reserve 5 more places
62 | for tasks with *TASK_NORMAL* priority. In this example, when the task
63 | supervisor receives such a task, the pool is "extended" by up to 5 places.
64 |
65 | For *TASK_HIGH*, the pool size can be extended up to *pool_size +
66 | reserve_normal + reserve_high*, so in the example above: *20 + 5 + 5 = 30*.
67 |
68 | Tasks with priority *TASK_CRITICAL* are always started instantly, no matter how
69 | busy the task pool is, and the thread pool is extended for them without limits.
70 | Multiprocessing critical tasks are started as soon as *multiprocessing.Pool*
71 | object has free space for the task.
72 |
73 | To make pool size unlimited, set *pool_size=0*.
74 |
75 | Parameters *min_size* and *max_size* set the actual system thread pool size. If
76 | *max_size* is not specified, it's set to *pool_size + reserve_normal +
77 | reserve_high*. It's recommended to set *max_size* slightly larger manually to
78 | have a space for critical tasks.
79 |
80 | By default, *max_size* is CPU count * 5. You may use argument *min_size='max'*
81 | to automatically set minimal pool size to max.
82 |
83 | .. note::
84 |
85 | Pool size can be changed while the task supervisor is running.
86 |
87 | Poll delay
88 | ==========
89 |
90 | Poll delay is a delay (in seconds) which is used by the task queue manager, in
91 | :doc:`workers` and in some other methods like *start/stop*.
92 |
93 | Lower poll delay = higher CPU usage, higher poll delay = lower reaction time.
94 |
95 | Default poll delay is 0.1 second. Can be changed with:
96 |
97 | .. code:: python
98 |
99 | task_supervisor.poll_delay = 0.01 # set poll delay to 10ms
100 |
101 | Blocking
102 | ========
103 |
104 | Task supervisor is started in its own thread. If you want to block the
105 | current thread, you may use the method
106 |
107 | .. code:: python
108 |
109 | task_supervisor.block()
110 |
111 | which will just sleep while task supervisor is active.
112 |
113 | Timeouts
114 | ========
115 |
116 | Task supervisor can log timeouts (when a task isn't launched within the
117 | specified number of seconds) and run timeout handler functions:
118 |
119 | .. code:: python
120 |
121 | def warning(t):
122 | # t = task thread object
123 | print('Task thread {} is not launched yet'.format(t))
124 |
125 | def critical(t):
126 | print('All is worse than expected')
127 |
128 | task_supervisor.timeout_warning = 5
129 | task_supervisor.timeout_warning_func = warning
130 | task_supervisor.timeout_critical = 10
131 | task_supervisor.timeout_critical_func = critical
132 |
133 | Stopping task supervisor
134 | ========================
135 |
136 | .. code:: python
137 |
138 | task_supervisor.stop(wait=True, stop_schedulers=True, cancel_tasks=False)
139 |
140 | Params:
141 |
142 | * **wait** wait until tasks and scheduler coroutines finish. If
143 | **wait=<seconds>** is specified, task supervisor will wait until coroutines
144 | finish for max. *wait* seconds. However, if schedulers (workers) were
145 | requested to stop, or task threads are currently running, method *stop*
146 | waits until they finish with no time limit.
147 |
148 | * **stop_schedulers** before stopping the main event loop, task supervisor will
149 | call the *stop* method of all running schedulers.
150 |
151 | * **cancel_tasks** if specified, task supervisor will try to forcibly cancel
152 | all scheduler coroutines.
153 |
154 | .. _aloops:
155 |
156 | aloops: async executors and tasks
157 | =================================
158 |
159 | Usually it's unsafe to run both :doc:`scheduler (worker) <workers>` executors
160 | and custom tasks in the supervisor's event loop. Workers use the event loop by
161 | default, and if anything gets blocked, the program may freeze.
162 |
163 | To avoid this, it's strongly recommended to create independent async loops
164 | for your custom tasks. The atasker supervisor has a built-in engine for async
165 | loops, called "aloops": each aloop runs in a separate thread and doesn't
166 | interfere with the supervisor event loop or with other aloops.
167 |
168 | Create
169 | ------
170 |
171 | If you plan to use async worker executors, create aloop:
172 |
173 | .. code:: python
174 |
175 | a = task_supervisor.create_aloop('myworkers', default=True, daemon=True)
176 |     # the loop is started instantly by default; to prevent this, add
177 |     # the param start=False and then use
178 |     # task_supervisor.start_aloop('myworkers')
179 |
180 | To determine in which thread the executor is started, simply get its name.
181 | aloop threads are named "supervisor_<supervisor_id>_aloop_<aloop_name>".
182 |
183 | Using with workers
184 | ------------------
185 |
186 | Workers automatically launch the async executor function in the default
187 | aloop; an aloop can be specified with *loop=* at init or *_loop=* at startup.
188 |
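189 | For example, a minimal sketch (the aloop named 'myworkers' is assumed to have
190 | been created as shown above; passing the aloop name as *loop=* is assumed to
191 | work as it does for *background_task*):
192 |
193 | .. code:: python
194 |
195 |     from atasker import background_worker
196 |
197 |     @background_worker(interval=1, loop='myworkers')
198 |     async def w(**kwargs):
199 |         print('executed in aloop "myworkers"')
200 |
201 |     w.start()
202 |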
189 | Executing own coroutines
190 | ------------------------
191 |
192 | aloops provide two methods to execute your own coroutines:
193 |
194 | .. code:: python
195 |
196 | # put coroutine to loop
197 | task = aloop.background_task(coro(args))
198 |
199 | # blocking wait for result from coroutine
200 | result = aloop.run(coro(args))
201 |
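202 | A short usage example (the aloop name 'mytasks' is arbitrary):
203 |
204 | .. code:: python
205 |
206 |     async def mul2(x):
207 |         return x * 2
208 |
209 |     a = task_supervisor.create_aloop('mytasks')
210 |     task = a.background_task(mul2(10))  # runs in the background
211 |     print(a.run(mul2(2)))               # blocks, prints 4
212 |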
202 | Other supervisor methods
203 | ------------------------
204 |
205 | .. note::
206 |
207 |     It's not recommended to create/start/stop aloops without the supervisor.
208 |
209 | .. code:: python
210 |
211 | # set default aloop
212 |     task_supervisor.set_default_aloop(aloop)
213 |
214 | # get aloop by name
215 | task_supervisor.get_aloop(name)
216 |
217 | # stop aloop (not required, supervisor stops all aloops at shutdown)
218 | task_supervisor.stop_aloop(name)
219 |
220 | # get aloop async event loop object for direct access
221 | aloop.get_loop()
222 |
223 | .. _create_mp_pool:
224 |
225 | Multiprocessing
226 | ===============
227 |
228 | A multiprocessing pool may be used by workers and background tasks to
229 | execute parts of the code in separate processes.
230 |
231 | To create a multiprocessing pool, use the method:
232 |
233 | .. code:: python
234 |
235 | from atasker import task_supervisor
236 |
237 | # task_supervisor.create_mp_pool()
238 | # e.g.
239 | task_supervisor.create_mp_pool(processes=8)
240 |
241 | # use custom mp Pool
242 |
243 | from multiprocessing import Pool
244 |
245 | pool = Pool(processes=4)
246 | task_supervisor.mp_pool = pool
247 |
248 | # set mp pool size. if pool wasn't created before, it will be initialized
249 | # with processes=(pool_size+reserve_normal+reserve_high)
250 | task_supervisor.set_mp_pool(
251 | pool_size=20, reserve_normal=5, reserve_high=5)
252 |
253 | Custom task supervisor
254 | ======================
255 |
256 | .. code:: python
257 |
258 | from atasker import TaskSupervisor
259 |
260 | my_supervisor = TaskSupervisor(
261 | pool_size=100, reserve_normal=10, reserve_high=10)
262 |
263 |     class MyTaskSupervisor(TaskSupervisor):
264 |         # ....... custom logic here
265 |         pass
265 |
266 | my_supervisor2 = MyTaskSupervisor()
267 |
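268 | A custom supervisor, like the default one, has to be started before use:
269 |
270 | .. code:: python
271 |
272 |     my_supervisor2.start()
273 |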
268 | Putting own tasks
269 | =================
270 |
271 | If you can not use :doc:`background tasks <tasks>` for some reason, you may
272 | create your own tasks manually and put them to the task supervisor to launch:
273 |
274 | .. code:: python
275 |
276 | task = task_supervisor.put_task(target=myfunc, args=(), kwargs={},
277 | priority=TASK_NORMAL, delay=None)
278 |
279 | If *delay* is specified, the thread is started after the corresponding delay
280 | (seconds).
281 |
282 | After the function thread is finished, it should notify task supervisor:
283 |
284 | .. code:: python
285 |
286 | task_supervisor.mark_task_completed(task=task) # or task_id = task.id
287 |
288 | If no *task_id* is specified, the current thread ID is used:
289 |
290 | .. code:: python
291 |
292 | # note: custom task targets always get _task_id in kwargs
293 | def mytask(**kwargs):
294 | # ... perform calculations
295 | task_supervisor.mark_task_completed(task_id=kwargs['_task_id'])
296 |
297 | task_supervisor.put_task(target=mytask)
298 |
299 | .. note::
300 |
301 |     If you need to know the task id before the task is put (e.g. for a task
302 |     callback), you may generate your own and call *put_task* with the
303 |     *task_id=task_id* parameter.
303 |
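304 | A minimal sketch of this approach (*myfunc* as above; the id is generated
305 | with *uuid*):
306 |
307 | .. code:: python
308 |
309 |     import uuid
310 |
311 |     task_id = str(uuid.uuid4())
312 |     task = task_supervisor.put_task(target=myfunc, task_id=task_id)
313 |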
304 | Putting own tasks in multiprocessing pool
305 | =========================================
306 |
307 | To put your own task into the multiprocessing pool, call *put_task* with
308 | *tt=TT_MP*. You will need:
309 |
310 | * unique task id
311 | * task function (static method)
312 | * function args
313 | * function kwargs
314 | * result callback function
315 |
316 | .. code:: python
317 |
318 | import uuid
319 |
320 | from atasker import TT_MP
321 |
322 |     task = task_supervisor.put_task(
323 |         target=<static method>, callback=<callback function>,
324 |         task_id=str(uuid.uuid4()), tt=TT_MP)
324 |
325 | After the function is finished, you should notify the task supervisor:
326 |
327 | .. code:: python
328 |
329 |     task_supervisor.mark_task_completed(task_id=<task id>, tt=TT_MP)
330 |
331 | Creating own schedulers
332 | =======================
333 |
334 | Your own task scheduler (worker) can be registered in the task supervisor
335 | with:
335 |
336 | .. code:: python
337 |
338 | task_supervisor.register_scheduler(scheduler)
339 |
340 | where *scheduler* is a scheduler object, which should implement at least a
341 | *stop* (regular) method and a *loop* (async) method.
342 |
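343 | A minimal sketch of such a scheduler (the internals are an assumption; only
344 | the *stop*/*loop* interface is taken from the description above):
345 |
346 | .. code:: python
347 |
348 |     import asyncio
349 |
350 |     class MyScheduler:
351 |
352 |         def __init__(self):
353 |             self._active = True
354 |
355 |         async def loop(self):
356 |             # scheduler coroutine, executed in the supervisor's event loop
357 |             while self._active:
358 |                 # ... dispatch tasks here ...
359 |                 await asyncio.sleep(0.1)
360 |
361 |         def stop(self):
362 |             # called by the task supervisor at shutdown
363 |             self._active = False
364 |
365 |     task_supervisor.register_scheduler(MyScheduler())
366 |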
343 | The task supervisor can also register synchronous schedulers/workers, but it
344 | can only stop them when its *stop* method is called:
345 |
346 | .. code:: python
347 |
348 | task_supervisor.register_sync_scheduler(scheduler)
349 |
350 | To unregister schedulers from task supervisor, use *unregister_scheduler* and
351 | *unregister_sync_scheduler* methods.
352 |
--------------------------------------------------------------------------------
/doc/tasks.rst:
--------------------------------------------------------------------------------
1 | Tasks
2 | *****
3 |
4 | A task is a Python function which is launched in a separate thread.
5 |
6 | Defining task with annotation
7 | =============================
8 |
9 | .. code:: python
10 |
11 | from atasker import background_task
12 |
13 | @background_task
14 | def mytask():
15 | print('I am working in the background!')
16 |
17 | task = mytask()
18 |
19 | It's not required to notify the task supervisor about task completion:
20 | *background_task* does this automatically as soon as the task function is
21 | finished.
22 |
23 | All start parameters (args, kwargs) are passed to task functions as-is.
24 |
25 | Task function without annotation
26 | ================================
27 |
28 | To start a task function without annotation, decorate it manually:
29 |
30 | .. code:: python
31 |
32 | from atasker import background_task, TASK_LOW
33 |
34 | def mytask():
35 | print('I am working in the background!')
36 |
37 | task = background_task(mytask, name='mytask', priority=TASK_LOW)()
38 |
39 | .. automodule:: atasker
40 | .. autofunction:: background_task
41 |
42 | Multiprocessing task
43 | ====================
44 |
45 | Run as background task
46 | ----------------------
47 |
48 | To put a task into the :ref:`multiprocessing pool <create_mp_pool>`, append
49 | the parameter *tt=TT_MP*:
50 |
51 | .. code:: python
52 |
53 | from atasker import TASK_HIGH, TT_MP
54 |
55 | task = background_task(
56 | tests.mp.test, priority=TASK_HIGH, tt=TT_MP)(1, 2, 3, x=2)
57 |
58 | The optional parameter *callback* can be used to specify a function which
59 | handles the task result, as shown below.
60 |
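61 | For example (mirroring the library's test suite, where a callback receives
62 | the task result):
63 |
64 | .. code:: python
65 |
66 |     def callback(result):
67 |         print('task result: {}'.format(result))
68 |
69 |     background_task(tests.mp.test, tt=TT_MP, callback=callback)(1, 2, 3, x=2)
70 |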
61 | .. note::
62 |
63 | Multiprocessing target function always receives *_task_id* param.
64 |
65 | Run in async way
66 | ----------------
67 |
68 | You may put a task from your coroutine without using a callback; for example:
69 |
70 | .. code:: python
71 |
72 | from atasker import co_mp_apply, TASK_HIGH
73 |
74 | async def f1():
75 | result = await co_mp_apply(
76 | tests.mp.test, args=(1,2,3), kwargs={'x': 2},
77 | priority=TASK_HIGH)
78 |
79 | .. autofunction:: co_mp_apply
80 |
81 | Task object
82 | ===========
83 |
84 | If you saved only *task.id* but not the whole object, you may later obtain
85 | the Task object again:
86 |
87 | .. code:: python
88 |
89 | from atasker import task_supervisor
90 |
91 | task = task_supervisor.get_task(task.id)
92 |
93 | Task info object fields:
94 |
95 | * **id** task id
96 | * **task** task object
97 | * **tt** task type (TT_THREAD, TT_MP)
98 | * **priority** task priority
99 | * **time_queued** time when task was queued
100 | * **time_started** time when task was started
101 | * **result** task result
102 | * **status** task status:
103 |
104 |   * **0** queued
105 |   * **2** delayed
106 |   * **100** started
107 |   * **200** completed
108 |   * **-1** canceled
108 |
109 | If the task info is *None*, consider the task completed and its information
110 | destroyed by the supervisor.
111 |
112 | .. note::
113 |
114 |     As soon as a task is marked as completed, the supervisor no longer
115 |     stores information about it.
116 |
117 | Wait until completed
118 | ====================
119 |
120 | You may wait until a pack of tasks is completed with the following method:
121 |
122 | .. code:: python
123 |
124 | from atasker import wait_completed
125 |
126 | wait_completed([task1, task2, task3 .... ], timeout=None)
127 |
128 | The method returns a list of task results if all tasks are finished, or
129 | raises *TimeoutError* if a timeout was specified but some tasks are not
130 | finished.
131 |
132 | If you call the method with a single task instead of a list or tuple, a
133 | single result is returned, as shown below.
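134 |
135 | For example (a short sketch; *mytask* is any function started via
136 | *background_task*):
137 |
138 | .. code:: python
139 |
140 |     task = background_task(mytask)()
141 |     result = wait_completed(task, timeout=5)  # a single result, not a list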
133 |
--------------------------------------------------------------------------------
/doc/workers.rst:
--------------------------------------------------------------------------------
1 | Workers
2 | *******
3 |
4 | Worker is an object which runs specified function (executor) in a loop.
5 |
6 | .. contents::
7 |
8 | Common
9 | ======
10 |
11 | Worker parameters
12 | -----------------
13 |
14 | All workers support the following initial parameters:
15 |
16 | * **name** worker name (default: name of executor function if specified,
17 | otherwise: auto-generated UUID)
18 |
19 | * **func** executor function (default: *worker.run*)
20 |
21 | * **priority** worker thread priority
22 |
23 | * **o** special object, passed as-is to executor (e.g. object worker is running
24 | for)
25 |
26 | * **on_error** a function which is called, if executor raises an exception
27 |
28 | * **on_error_kwargs** kwargs for *on_error* function
29 |
30 | * **supervisor** alternative :doc:`task supervisor`
31 |
32 | * **poll_delay** worker poll delay (default: task supervisor poll delay)
33 |
34 | Methods
35 | -------
36 |
37 | .. automodule:: atasker
38 | .. autoclass:: BackgroundWorker
39 | :members:
40 |
41 | Overriding parameters at startup
42 | --------------------------------
43 |
44 | Initial parameters *name*, *priority* and *o* can be overridden during
45 | worker startup (the first two as *_name* and *_priority*):
46 |
47 | .. code:: python
48 |
49 | myworker.start(_name='worker1', _priority=atasker.TASK_LOW)
50 |
51 | Executor function
52 | -----------------
53 |
54 | The worker executor function is either specified with an annotation or named
55 | *run* (see examples below). The function should always have a *\*\*kwargs*
56 | param.
56 |
57 | Executor function gets in args/kwargs:
58 |
59 | * all parameters *worker.start* was called with.
60 |
61 | * **_worker** current worker object
62 | * **_name** current worker name
63 | * **_task_id** if executor function is started in multiprocessing pool - ID of
64 | current task (for thread pool, task id = thread name).
65 |
66 | .. note::
67 |
68 |     If the executor function returns *False*, the worker stops itself.
69 |
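70 | For example, a minimal sketch of a self-stopping executor (*should_stop* is a
71 | hypothetical helper):
72 |
73 | .. code:: python
74 |
75 |     from atasker import background_worker
76 |
77 |     @background_worker
78 |     def myexec(*args, **kwargs):
79 |         print('worker {} is running'.format(kwargs['_name']))
80 |         if should_stop():
81 |             return False  # the worker stops itself
82 |
83 |     myexec.start()
84 |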
70 | Asynchronous executor function
71 | ------------------------------
72 |
73 | The executor function can be asynchronous; in this case it's executed inside
74 | the :doc:`task supervisor <supervisor>` loop, no new thread is started and
75 | *priority* is ignored.
76 |
77 | When *background_worker* decorator detects asynchronous function, class
78 | *BackgroundAsyncWorker* is automatically used instead of *BackgroundWorker*
79 | (*BackgroundQueueWorker*, *BackgroundEventWorker* and
80 | *BackgroundIntervalWorker* support synchronous functions out-of-the-box).
81 |
82 | An additional worker parameter *loop* (*_loop* at startup) may be specified
83 | to put the executor function inside an external async loop.
84 |
85 | .. note::
86 |
87 |     To prevent interference between the supervisor event loop and executors,
88 |     it's strongly recommended to specify your own async event loop or create
89 |     an :ref:`aloop <aloops>`.
90 |
91 | Multiprocessing executor function
92 | ---------------------------------
93 |
94 | To use multiprocessing, the task supervisor :ref:`mp pool <create_mp_pool>`
95 | must be created.
96 |
97 | If the executor method *run* is defined as static, workers automatically
98 | detect this and use the multiprocessing pool of the task supervisor to
99 | launch the executor.
99 |
100 | .. note::
101 |
102 |     As the executor is started in a separate process, it doesn't have access
103 |     to the *self* object.
104 |
105 | Additionally, the method *process_result* must be defined in the worker
106 | class to process the executor result. The method can stop the worker by
107 | returning *False*.
107 |
108 | Example: let's define a *BackgroundQueueWorker*. The Python multiprocessing
109 | module can not pickle an executor function defined via annotation, so a
110 | worker class is required. Create it in a separate module, as Python
111 | multiprocessing can not pickle methods from the module where the worker is
112 | started:
112 |
113 | .. warning::
114 |
115 |     A multiprocessing executor function should always finish correctly,
116 |     without any exceptions, otherwise the callback function is never called
117 |     and the task becomes "frozen" in the pool.
118 |
119 | *myworker.py*
120 |
121 | .. code:: python
122 |
123 |     from atasker import BackgroundQueueWorker
124 |
125 |     class MyWorker(BackgroundQueueWorker):
126 |
127 |         # executed in another process via task_supervisor
128 |         @staticmethod
129 |         def run(task, *args, **kwargs):
130 |             # .. process task
131 |             return ''
132 |
133 |         def process_result(self, result):
134 |             # process the result here; return False to stop the worker
135 |             pass
133 |
134 | *main.py*
135 |
136 | .. code:: python
137 |
138 | from myworker import MyWorker
139 |
140 | worker = MyWorker()
141 | worker.start()
142 | # .....
143 | worker.put_threadsafe('task')
144 | # .....
145 | worker.stop()
146 |
147 | Workers
148 | =======
149 |
150 | BackgroundWorker
151 | ----------------
152 |
153 | A background worker is a worker which continuously runs the executor
154 | function in a loop without any condition. The loop of this worker is
155 | synchronous and is started instantly in a separate thread.
156 |
157 | .. code:: python
158 |
159 | # with annotation - function becomes worker executor
160 | from atasker import background_worker
161 |
162 | @background_worker
163 | def myfunc(*args, **kwargs):
164 | print('I am background worker')
165 |
166 | # with class
167 | from atasker import BackgroundWorker
168 |
169 | class MyWorker(BackgroundWorker):
170 |
171 | def run(self, *args, **kwargs):
172 | print('I am a worker too')
173 |
174 | myfunc.start()
175 |
176 | myworker2 = MyWorker()
177 | myworker2.start()
178 |
179 | # ............
180 |
181 | # stop first worker
182 | myfunc.stop()
183 | # stop 2nd worker, don't wait until it is really stopped
184 | myworker2.stop(wait=False)
185 |
186 | BackgroundAsyncWorker
187 | ---------------------
188 |
189 | Similar to *BackgroundWorker* but used for async executor functions. Has an
190 | additional parameter *loop=* (*_loop* in the start function) to specify
191 | either an async event loop or an :ref:`aloop <aloops>` object. By default,
192 | either the task supervisor event loop or the task supervisor default aloop
193 | is used.
193 |
194 | .. code:: python
195 |
196 | # with annotation - function becomes worker executor
197 | from atasker import background_worker
198 |
199 | @background_worker
200 | async def async_worker(**kwargs):
201 | print('I am async worker')
202 |
203 | async_worker.start()
204 |
205 | # with class
206 | from atasker import BackgroundAsyncWorker
207 |
208 | class MyWorker(BackgroundAsyncWorker):
209 |
210 | async def run(self, *args, **kwargs):
211 | print('I am async worker too')
212 |
213 | worker = MyWorker()
214 | worker.start()
215 |
216 | BackgroundQueueWorker
217 | ---------------------
218 |
219 | A background worker which gets data from an asynchronous queue and passes it
220 | to a synchronous or asynchronous executor.
221 |
222 | A queue worker is created as soon as the annotation detects a *q=True* or
223 | *queue=True* param. The default queue is *asyncio.queues.Queue*. If you want
224 | to use e.g. a priority queue, specify its class instead of just *True*.
225 |
226 | .. code:: python
227 |
228 |     # with annotation - function becomes worker executor
229 |     import asyncio
230 |
231 |     from atasker import background_worker
230 |
231 | @background_worker(q=True)
232 | def f(task, **kwargs):
233 | print('Got task from queue: {}'.format(task))
234 |
235 | @background_worker(q=asyncio.queues.PriorityQueue)
236 | def f2(task, **kwargs):
237 | print('Got task from queue too: {}'.format(task))
238 |
239 | # with class
240 | from atasker import BackgroundQueueWorker
241 |
242 | class MyWorker(BackgroundQueueWorker):
243 |
244 | def run(self, task, *args, **kwargs):
245 | print('my task is {}'.format(task))
246 |
247 |
248 | f.start()
249 | f2.start()
250 | worker3 = MyWorker()
251 | worker3.start()
252 | f.put_threadsafe('task 1')
253 | f2.put_threadsafe('task 2')
254 | worker3.put_threadsafe('task 3')
255 |
256 | The **put_threadsafe** method is used to put a task into the worker's queue.
257 | The method is thread-safe. If the task is put from the same asyncio loop,
258 | the **put** method can be used instead.
258 |
259 | BackgroundEventWorker
260 | ---------------------
261 |
262 | A background worker which runs an asynchronous loop waiting for an event and
263 | launches a synchronous or asynchronous executor when the event occurs.
264 |
265 | An event worker is created as soon as the annotation detects an *e=True* or
266 | *event=True* param.
267 |
268 | .. code:: python
269 |
270 | # with annotation - function becomes worker executor
271 | from atasker import background_worker
272 |
273 | @background_worker(e=True)
274 |     def f(**kwargs):
275 | print('happened')
276 |
277 | # with class
278 | from atasker import BackgroundEventWorker
279 |
280 | class MyWorker(BackgroundEventWorker):
281 |
282 | def run(self, *args, **kwargs):
283 | print('happened')
284 |
285 |
286 | f.start()
287 | worker3 = MyWorker()
288 | worker3.start()
289 | f.trigger_threadsafe()
290 | worker3.trigger_threadsafe()
291 |
292 | The **trigger_threadsafe** method is used to trigger the worker. The method
293 | is thread-safe. If the worker is triggered from the same asyncio loop, the
294 | **trigger** method can be used instead.
295 |
296 | BackgroundIntervalWorker
297 | ------------------------
298 |
299 | A background worker which runs a synchronous or asynchronous executor
300 | function with the specified interval or delay.
301 |
302 | Worker initial parameters:
303 |
304 | * **interval** run executor with a specified interval (in seconds)
305 | * **delay** delay *between* executor launches
306 | * **delay_before** delay *before* executor launch
307 |
308 | Parameters *interval* and *delay* can not be used together. All parameters
309 | can be overridden during startup by adding the *_* prefix (e.g.
310 | *worker.start(_interval=1)*).
311 |
312 | A background interval worker is created automatically as soon as the
313 | annotation detects one of the parameters above:
314 |
315 | .. code:: python
316 |
317 | @background_worker(interval=1)
318 | def myfunc(**kwargs):
319 | print('I run every second!')
320 |
321 | @background_worker(interval=1)
322 | async def myfunc2(**kwargs):
323 | print('I run every second and I am async!')
324 |
325 | myfunc.start()
326 | myfunc2.start()
327 |
328 | Like the event worker, **BackgroundIntervalWorker** supports manual executor
329 | triggering with *worker.trigger()* and *worker.trigger_threadsafe()*.
330 |
331 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | __version__ = "0.7.9"
2 |
3 | import setuptools
4 |
5 | with open('README.md', 'r') as fh:
6 | long_description = fh.read()
7 |
8 | setuptools.setup(
9 | name='atasker',
10 | version=__version__,
11 | author='Altertech',
12 | author_email='div@altertech.com',
13 | description=
14 | 'Thread and multiprocessing pooling, task processing via asyncio',
15 | long_description=long_description,
16 | long_description_content_type='text/markdown',
17 | url='https://github.com/alttch/atasker',
18 | packages=setuptools.find_packages(),
19 | license='Apache License 2.0',
20 | install_requires=['aiosched'],
21 | classifiers=(
22 | 'Programming Language :: Python :: 3',
23 | 'License :: OSI Approved :: Apache Software License',
24 | 'Topic :: Software Development :: Libraries',
25 | ),
26 | )
27 |
--------------------------------------------------------------------------------
/tests/mp.py:
--------------------------------------------------------------------------------
1 | __author__ = "Altertech Group, https://www.altertech.com/"
2 | __copyright__ = "Copyright (C) 2018-2019 Altertech Group"
3 | __license__ = "Apache License 2.0"
4 | __version__ = "0.7.9"
5 |
6 | def test(*args, **kwargs):
7 | print('test mp method {} {}'.format(args, kwargs))
8 | return 999
9 |
10 | def test_mp(a, x, **kwargs):
11 | return a + x
12 |
13 | def test2(*args, **kwargs):
14 | return 999
15 |
--------------------------------------------------------------------------------
/tests/mpworker.py:
--------------------------------------------------------------------------------
1 | __author__ = "Altertech Group, https://www.altertech.com/"
2 | __copyright__ = "Copyright (C) 2018-2019 Altertech Group"
3 | __license__ = "Apache License 2.0"
4 | __version__ = "0.7.9"
5 |
6 | from atasker import BackgroundIntervalWorker
7 |
8 |
9 | class MPWorker(BackgroundIntervalWorker):
10 |
11 | @staticmethod
12 | def run(**kwargs):
13 | print(kwargs)
14 |
15 |
16 | class TestMPWorker(BackgroundIntervalWorker):
17 |
18 | a = 0
19 |
20 | @staticmethod
21 | def run(**kwargs):
22 | return 1
23 |
24 | def process_result(self, result):
25 | self.a += result
26 |
--------------------------------------------------------------------------------
/tests/test.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | __author__ = "Altertech Group, https://www.altertech.com/"
4 | __copyright__ = "Copyright (C) 2018-2019 Altertech Group"
5 | __license__ = "Apache License 2.0"
6 | __version__ = "0.7.9"
7 |
8 | from pathlib import Path
9 |
10 | import sys
11 | import logging
12 | import unittest
13 | import time
14 | import threading
15 |
16 | from types import SimpleNamespace
17 |
18 | result = SimpleNamespace(g=None,
19 | function_collection=0,
20 | task_collection=0,
21 | background_task_annotated=None,
22 | background_task_thread=None,
23 | background_task_thread_critical=None,
24 | background_task_mp=None,
25 | background_worker=0,
26 | wait1=None,
27 | wait2=None,
28 | wait3=None,
29 | background_interval_worker=0,
30 | background_interval_worker_async_ex=0,
31 | background_queue_worker=0,
32 | background_event_worker=0,
33 | locker_success=False,
34 | locker_failed=False,
35 | test_aloop=None,
36 | test_aloop_background_task=None,
37 | async_js1=0,
38 | async_js2=0)
39 |
40 | sys.path.insert(0, Path(__file__).absolute().parents[1].as_posix())
41 |
42 |
43 | def wait():
44 | time.sleep(0.1)
45 |
46 |
47 | from atasker import task_supervisor, background_task, background_worker
48 | from atasker import TT_MP, TASK_CRITICAL, wait_completed
49 |
50 | from atasker import FunctionCollection, TaskCollection, g
51 |
52 | from atasker import Locker, set_debug
53 |
54 |
55 | class Test(unittest.TestCase):
56 |
57 | def test_g(self):
58 |
59 | @background_task
60 | def f():
61 | result.g = g.get('test', 222)
62 | g.set('ttt', 333)
63 |
64 | g.set('test', 1)
65 | g.clear('test')
66 | g.set('test_is', g.has('test'))
67 | self.assertFalse(g.get('test_is'))
68 | g.set('test', 999)
69 | f()
70 | wait()
71 | self.assertIsNone(g.get('ttt'))
72 | self.assertEqual(result.g, 222)
73 |
74 | def test_function_collection(self):
75 |
76 | f = FunctionCollection()
77 |
78 | @f
79 | def f1():
80 | result.function_collection += 1
81 |
82 | @f
83 | def f2():
84 | result.function_collection += 2
85 |
86 | f.run()
87 | self.assertEqual(result.function_collection, 3)
88 |
89 | def test_task_collection(self):
90 |
91 | f = TaskCollection()
92 |
93 | @f
94 | def f1():
95 | result.task_collection += 1
96 |
97 | @f
98 | def f2():
99 | result.task_collection += 2
100 |
101 | f.run()
102 | self.assertEqual(result.task_collection, 3)
103 |
104 | def test_background_task_annotated(self):
105 |
106 | @background_task
107 | def t(a, x):
108 | result.background_task_annotated = a + x
109 |
110 | t(1, x=2)
111 | wait()
112 | self.assertEqual(result.background_task_annotated, 3)
113 |
114 | def test_background_task_thread(self):
115 |
116 | def t(a, x):
117 | result.background_task_thread = a + x
118 |
119 | background_task(t)(2, x=3)
120 | wait()
121 | self.assertEqual(result.background_task_thread, 5)
122 |
123 | def test_background_task_thread_critical(self):
124 |
125 | def t(a, x):
126 | result.background_task_thread = a + x
127 |
128 | background_task(t, priority=TASK_CRITICAL)(3, x=4)
129 | wait()
130 | self.assertEqual(result.background_task_thread, 7)
131 |
132 | def test_background_task_mp(self):
133 |
134 | def callback(res):
135 | result.background_task_mp = res
136 |
137 | from mp import test_mp
138 | background_task(test_mp, tt=TT_MP, callback=callback)(3, x=7)
139 | wait()
140 | self.assertEqual(result.background_task_mp, 10)
141 |
142 | def test_background_worker(self):
143 |
144 | @background_worker
145 | def t(**kwargs):
146 | result.background_worker += 1
147 |
148 | t.start()
149 | wait()
150 | t.stop()
151 | self.assertGreater(result.background_worker, 0)
152 |
153 | def test_background_interval_worker(self):
154 |
155 | @background_worker(interval=0.02)
156 | def t(**kwargs):
157 | result.background_interval_worker += 1
158 |
159 | t.start()
160 | wait()
161 | t.stop()
162 | self.assertLess(result.background_interval_worker, 10)
163 | self.assertGreater(result.background_interval_worker, 4)
164 |
165 | def test_background_interval_worker_async_ex(self):
166 |
167 | @background_worker(interval=0.02)
168 | async def t(**kwargs):
169 | result.background_interval_worker_async_ex += 1
170 |
171 | task_supervisor.default_aloop = None
172 | t.start()
173 | wait()
174 | t.stop()
175 | self.assertLess(result.background_interval_worker_async_ex, 10)
176 | self.assertGreater(result.background_interval_worker_async_ex, 4)
177 |
178 | def test_background_queue_worker(self):
179 |
180 | @background_worker(q=True)
181 | def t(a, **kwargs):
182 | result.background_queue_worker += a
183 |
184 | t.start()
185 | t.put_threadsafe(2)
186 | t.put_threadsafe(3)
187 | t.put_threadsafe(4)
188 | wait()
189 | t.stop()
190 | self.assertEqual(result.background_queue_worker, 9)
191 |
192 | def test_background_event_worker(self):
193 |
194 | @background_worker(e=True)
195 | def t(**kwargs):
196 | result.background_event_worker += 1
197 |
198 | t.start()
199 | t.trigger_threadsafe()
200 | wait()
201 | t.trigger_threadsafe()
202 | wait()
203 | t.stop()
204 | self.assertEqual(result.background_event_worker, 2)
205 |
206 | def test_background_interval_worker_mp(self):
207 |
208 | from mpworker import TestMPWorker
209 |
210 | t = TestMPWorker(interval=0.02)
211 | t.start()
212 | wait()
213 | t.stop()
214 | self.assertLess(t.a, 10)
215 | self.assertGreater(t.a, 4)
216 |
217 | def test_locker(self):
218 |
219 | with_lock = Locker(mod='test (broken is fine!)',
220 | relative=False,
221 | timeout=0.5)
222 |
223 | @with_lock
224 | def test_locker():
225 | result.locker_failed = True
226 |
227 | def locker_ok():
228 | result.locker_success = True
229 |
230 | with_lock.critical = locker_ok
231 | with_lock.lock.acquire()
232 | test_locker()
233 | self.assertTrue(result.locker_success)
234 | self.assertFalse(result.locker_failed)
235 |
236 | def test_supervisor(self):
237 | result = task_supervisor.get_info()
238 |
239 | self.assertEqual(result.thread_tasks_count, 0)
240 | self.assertEqual(result.mp_tasks_count, 0)
241 |
242 | def test_aloop(self):
243 |
244 | @background_worker(interval=0.02)
245 | async def t(**kwargs):
246 | result.test_aloop = threading.current_thread().getName()
247 |
248 | task_supervisor.create_aloop('test1', default=True)
249 | t.start()
250 | wait()
251 | t.stop()
252 | self.assertEqual(result.test_aloop, 'supervisor_default_aloop_test1')
253 |
254 | def test_result_async(self):
255 |
256 | def t1():
257 | return 555
258 |
259 | aloop = task_supervisor.create_aloop('test3')
260 | t = background_task(t1, loop='test3')()
261 | wait_completed([t])
262 | self.assertEqual(t.result, 555)
263 |
264 | def test_result_thread(self):
265 |
266 | def t1():
267 | return 777
268 |
269 | def t2():
270 | return 111
271 |
272 | task1 = background_task(t1)()
273 | task2 = background_task(t2)()
274 | self.assertEqual(wait_completed((task1, task2)), [777, 111])
275 |
276 | def test_result_mp(self):
277 |
278 | from mp import test2
279 |
280 | t = background_task(test2, tt=TT_MP)()
281 | self.assertEqual(wait_completed(t), 999)
282 |
283 | def test_aloop_run(self):
284 |
285 | async def t1():
286 | result.test_aloop_background_task = 1
287 |
288 | async def t2(x):
289 | return x * 2
290 |
291 | a = task_supervisor.create_aloop('test2')
292 | t = background_task(t1, loop='test2')()
293 | wait_completed([t])
294 | self.assertEqual(result.test_aloop_background_task, 1)
295 | self.assertEqual(a.run(t2(2)), 4)
296 |
297 | def test_wait_completed(self):
298 |
299 | @background_task
300 | def t1():
301 | time.sleep(0.1)
302 | result.wait1 = 1
303 |
304 | @background_task
305 | def t2():
306 | time.sleep(0.2)
307 | result.wait2 = 2
308 |
309 | @background_task
310 | def t3():
311 | time.sleep(0.3)
312 | result.wait3 = 3
313 |
314 | tasks = [t1(), t2(), t3()]
315 | wait_completed(tasks)
316 | self.assertEqual(result.wait1 + result.wait2 + result.wait3, 6)
317 |
318 | def test_async_job_scheduler(self):
319 |
320 | async def test1():
321 | result.async_js1 += 1
322 |
323 | async def test2():
324 | result.async_js2 += 1
325 |
326 | task_supervisor.create_aloop('jobs')
327 | task_supervisor.create_async_job_scheduler('default',
328 | aloop='jobs',
329 | default=True)
330 | j1 = task_supervisor.create_async_job(target=test1, interval=0.01)
331 | j2 = task_supervisor.create_async_job(target=test2, interval=0.01)
332 |
333 | time.sleep(0.1)
334 |
335 | task_supervisor.cancel_async_job(job=j2)
336 |
337 | r1 = result.async_js1
338 | r2 = result.async_js2
339 |
340 | self.assertGreater(r1, 9)
341 | self.assertGreater(r2, 9)
342 |
343 | time.sleep(0.1)
344 |
345 | self.assertLess(r1, result.async_js1)
346 | self.assertEqual(r2, result.async_js2)
347 |
348 |
349 | task_supervisor.set_thread_pool(pool_size=20, reserve_normal=5, reserve_high=5)
350 | task_supervisor.set_mp_pool(pool_size=20, reserve_normal=5, reserve_high=5)
351 |
352 | if __name__ == '__main__':
353 | try:
354 | if sys.argv[1] == 'debug':
355 | logging.basicConfig(level=logging.DEBUG)
356 | set_debug()
357 |     except IndexError:
358 |         pass
359 | task_supervisor.start()
360 | task_supervisor.poll_delay = 0.01
361 | test_suite = unittest.TestLoader().loadTestsFromTestCase(Test)
362 | test_result = unittest.TextTestRunner().run(test_suite)
363 | task_supervisor.stop(wait=3)
364 | sys.exit(not test_result.wasSuccessful())
365 |
--------------------------------------------------------------------------------