├── supervisor
├── version.txt
├── __init__.py
├── tests
│   ├── __init__.py
│   ├── fixtures
│   │   ├── example
│   │   │   └── included.conf
│   │   ├── include.conf
│   │   ├── print_env.py
│   │   ├── hello.sh
│   │   ├── issue-1224.conf
│   │   ├── spew.py
│   │   ├── unkillable_spew.py
│   │   ├── donothing.conf
│   │   ├── issue-663.conf
│   │   ├── issue-835.conf
│   │   ├── issue-291a.conf
│   │   ├── issue-638.conf
│   │   ├── issue-1298.conf
│   │   ├── issue-1054.conf
│   │   ├── issue-1170a.conf
│   │   ├── issue-664.conf
│   │   ├── issue-836.conf
│   │   ├── issue-1483a.conf
│   │   ├── issue-1170b.conf
│   │   ├── test_1231.py
│   │   ├── issue-1483b.conf
│   │   ├── issue-1483c.conf
│   │   ├── issue-1596.conf
│   │   ├── issue-1170c.conf
│   │   ├── issue-1231a.conf
│   │   ├── issue-1231b.conf
│   │   ├── issue-1231c.conf
│   │   ├── issue-986.conf
│   │   ├── issue-550.conf
│   │   ├── issue-565.conf
│   │   ├── listener.py
│   │   └── issue-733.conf
│   ├── test_confecho.py
│   ├── test_pidproxy.py
│   ├── test_states.py
│   ├── test_childutils.py
│   ├── test_web.py
│   └── test_socket_manager.py
├── medusa
│   ├── CHANGES.txt
│   ├── docs
│   │   ├── data_flow.gif
│   │   ├── producers.gif
│   │   ├── async_blurbs.txt
│   │   ├── composing_producers.gif
│   │   ├── tkinter.txt
│   │   ├── proxy_notes.txt
│   │   ├── threads.txt
│   │   ├── data_flow.html
│   │   └── README.html
│   ├── __init__.py
│   ├── TODO.txt
│   ├── util.py
│   ├── LICENSE.txt
│   ├── counter.py
│   ├── README.txt
│   ├── xmlrpc_handler.py
│   ├── http_date.py
│   ├── auth_handler.py
│   ├── default_handler.py
│   └── logger.py
├── ui
│   ├── images
│   │   ├── icon.png
│   │   ├── rule.gif
│   │   ├── state0.gif
│   │   ├── state1.gif
│   │   ├── state2.gif
│   │   ├── state3.gif
│   │   └── supervisor.gif
│   ├── tail.html
│   ├── status.html
│   └── stylesheets
│   │   └── supervisor.css
├── confecho.py
├── states.py
├── pidproxy.py
├── childutils.py
├── socket_manager.py
├── compat.py
├── poller.py
├── http_client.py
└── events.py
├── docs
├── .static
│   ├── logo_hi.gif
│   └── repoze.css
├── subprocess-transitions.png
├── glossary.rst
├── index.rst
├── faq.rst
├── Makefile
├── xmlrpc.rst
├── upgrading.rst
├── development.rst
├── installing.rst
├── conf.py
├── introduction.rst
└── logging.rst
├── COPYRIGHT.txt
├── .gitignore
├── setup.cfg
├── MANIFEST.in
├── .readthedocs.yaml
├── README.rst
├── tox.ini
├── setup.py
├── LICENSES.txt
└── .github
    └── workflows
        └── main.yml

--------------------------------------------------------------------------------
/supervisor/version.txt:
--------------------------------------------------------------------------------
1 | 4.4.0.dev0
2 | 

--------------------------------------------------------------------------------
/supervisor/__init__.py:
--------------------------------------------------------------------------------
1 | # this is a package
2 | 

--------------------------------------------------------------------------------
/supervisor/tests/__init__.py:
--------------------------------------------------------------------------------
1 | # this is a package
2 | 

--------------------------------------------------------------------------------
/supervisor/tests/fixtures/example/included.conf:
--------------------------------------------------------------------------------
1 | [supervisord]
2 | childlogdir = %(here)s

--------------------------------------------------------------------------------
/docs/.static/logo_hi.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/docs/.static/logo_hi.gif

--------------------------------------------------------------------------------
/supervisor/medusa/CHANGES.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/medusa/CHANGES.txt

--------------------------------------------------------------------------------
/supervisor/ui/images/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/ui/images/icon.png
--------------------------------------------------------------------------------
/supervisor/ui/images/rule.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/ui/images/rule.gif

--------------------------------------------------------------------------------
/docs/subprocess-transitions.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/docs/subprocess-transitions.png

--------------------------------------------------------------------------------
/supervisor/ui/images/state0.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/ui/images/state0.gif

--------------------------------------------------------------------------------
/supervisor/ui/images/state1.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/ui/images/state1.gif

--------------------------------------------------------------------------------
/supervisor/ui/images/state2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/ui/images/state2.gif

--------------------------------------------------------------------------------
/supervisor/ui/images/state3.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/ui/images/state3.gif

--------------------------------------------------------------------------------
/supervisor/ui/images/supervisor.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/ui/images/supervisor.gif

--------------------------------------------------------------------------------
/supervisor/medusa/docs/data_flow.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/medusa/docs/data_flow.gif

--------------------------------------------------------------------------------
/supervisor/medusa/docs/producers.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/medusa/docs/producers.gif

--------------------------------------------------------------------------------
/supervisor/medusa/docs/async_blurbs.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/medusa/docs/async_blurbs.txt

--------------------------------------------------------------------------------
/supervisor/tests/fixtures/include.conf:
--------------------------------------------------------------------------------
1 | [include]
2 | files = ./example/included.conf
3 | 
4 | [supervisord]
5 | logfile = %(here)s

--------------------------------------------------------------------------------
/supervisor/tests/fixtures/print_env.py:
--------------------------------------------------------------------------------
1 | #!<>
2 | import os
3 | 
4 | for k, v in os.environ.items():
5 |     print("%s=%s" % (k,v))
6 | 

--------------------------------------------------------------------------------
/supervisor/medusa/docs/composing_producers.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Supervisor/supervisor/HEAD/supervisor/medusa/docs/composing_producers.gif

--------------------------------------------------------------------------------
/supervisor/medusa/__init__.py: -------------------------------------------------------------------------------- 1 | """medusa.__init__ 2 | """ 3 | 4 | # created 2002/03/19, AMK 5 | 6 | __revision__ = "$Id: __init__.py,v 1.2 2002/03/19 22:49:34 amk Exp $" 7 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/hello.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | n=0 3 | while [ $n -lt 10 ]; do 4 | let n=n+1 5 | echo "The Øresund bridge ends in Malmö - $n" 6 | sleep 1 7 | done 8 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1224.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | nodaemon = true 3 | pidfile = /tmp/issue-1224.pid 4 | nodaemon = true 5 | logfile = /dev/stdout 6 | logfile_maxbytes = 0 7 | 8 | [program:cat] 9 | command = /bin/cat 10 | startsecs = 0 11 | 12 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/spew.py: -------------------------------------------------------------------------------- 1 | #!<> 2 | import sys 3 | import time 4 | 5 | counter = 0 6 | 7 | while counter < 30000: 8 | sys.stdout.write("more spewage %d\n" % counter) 9 | sys.stdout.flush() 10 | time.sleep(0.01) 11 | counter += 1 12 | -------------------------------------------------------------------------------- /COPYRIGHT.txt: -------------------------------------------------------------------------------- 1 | Supervisor is Copyright (c) 2006-2015 Agendaless Consulting and Contributors. 2 | (http://www.agendaless.com), All Rights Reserved 3 | 4 | medusa was (is?) Copyright (c) Sam Rushing. 5 | 6 | http_client.py code Copyright (c) by Daniel Krech, http://eikeon.com/. 
7 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/unkillable_spew.py: -------------------------------------------------------------------------------- 1 | #!<> 2 | import time 3 | import signal 4 | signal.signal(signal.SIGTERM, signal.SIG_IGN) 5 | 6 | counter = 0 7 | 8 | while 1: 9 | time.sleep(0.01) 10 | print("more spewage %s" % counter) 11 | counter += 1 12 | 13 | -------------------------------------------------------------------------------- /supervisor/confecho.py: -------------------------------------------------------------------------------- 1 | import sys 2 | from supervisor.compat import as_string 3 | from supervisor.compat import resource_filename 4 | 5 | 6 | def main(out=sys.stdout): 7 | with open(resource_filename(__package__, 'skel/sample.conf'), 'r') as f: 8 | out.write(as_string(f.read())) 9 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/donothing.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | logfile=/tmp/donothing.log ; (main log file;default $CWD/supervisord.log) 3 | pidfile=/tmp/donothing.pid ; (supervisord pidfile;default supervisord.pid) 4 | nodaemon=true ; (start in foreground if true;default false) 5 | 6 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-663.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=debug 3 | logfile=/tmp/issue-663.log 4 | pidfile=/tmp/issue-663.pid 5 | nodaemon=true 6 | 7 | [eventlistener:listener] 8 | command=python %(here)s/listener.py 9 | events=TICK_5 10 | startretries=0 11 | autorestart=false 12 | 13 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *~ 2 | *.egg 3 | 
*.egg-info 4 | *.log 5 | *.pyc 6 | *.pyo 7 | *.swp 8 | *.pss 9 | .DS_Store 10 | .coverage* 11 | .eggs/ 12 | .pytest_cache/ 13 | .tox/ 14 | build/ 15 | docs/.build/ 16 | dist/ 17 | env*/ 18 | venv*/ 19 | htmlcov/ 20 | tmp/ 21 | coverage.xml 22 | nosetests.xml 23 | .cache/ 24 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [easy_install] 2 | zip_ok = false 3 | 4 | ;Marking a wheel as universal with "universal = 1" was deprecated 5 | ;in Setuptools 75.1.0. Setting "python_tag = py2.py3" should do 6 | ;the equivalent on Setuptools 30.3.0 or later. 7 | ; 8 | ;https://github.com/pypa/setuptools/pull/4617 9 | ;https://github.com/pypa/setuptools/pull/4939 10 | ; 11 | [bdist_wheel] 12 | python_tag = py2.py3 13 | -------------------------------------------------------------------------------- /supervisor/medusa/TODO.txt: -------------------------------------------------------------------------------- 1 | Things to do 2 | ============ 3 | 4 | Bring remaining code up to current standards 5 | Translate docs to RST 6 | Write README, INSTALL, docs 7 | What should __init__ import? Anything? Every single class? 8 | Add abo's support for blocking producers 9 | Get all the producers into the producers module and write tests for them 10 | 11 | Test suites for protocols: how could that be implemented? 
12 | 13 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include CHANGES.rst 2 | include COPYRIGHT.txt 3 | include LICENSES.txt 4 | include README.rst 5 | include tox.ini 6 | include supervisor/version.txt 7 | include supervisor/skel/*.conf 8 | recursive-include supervisor/tests/fixtures *.conf *.py 9 | recursive-include supervisor/ui *.html *.css *.png *.gif 10 | include docs/Makefile 11 | recursive-include docs *.py *.rst *.css *.gif *.png 12 | recursive-exclude docs/.build * 13 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-835.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel = debug 3 | logfile=/tmp/issue-835.log 4 | pidfile=/tmp/issue-835.pid 5 | nodaemon = true 6 | 7 | [rpcinterface:supervisor] 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 9 | 10 | [unix_http_server] 11 | file=/tmp/issue-835.sock ; the path to the socket file 12 | 13 | [program:cat] 14 | command = /bin/cat 15 | startretries = 0 16 | autorestart = false 17 | 18 | -------------------------------------------------------------------------------- /supervisor/tests/test_confecho.py: -------------------------------------------------------------------------------- 1 | """Test suite for supervisor.confecho""" 2 | 3 | import unittest 4 | from supervisor.compat import StringIO 5 | from supervisor import confecho 6 | 7 | class TopLevelFunctionTests(unittest.TestCase): 8 | def test_main_writes_data_out_that_looks_like_a_config_file(self): 9 | sio = StringIO() 10 | confecho.main(out=sio) 11 | 12 | output = sio.getvalue() 13 | self.assertTrue("[supervisord]" in output) 14 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-291a.conf: 
-------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=debug ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-291a.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-291a.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | 7 | [program:print_env] 8 | command=python %(here)s/print_env.py 9 | startsecs=0 10 | autorestart=false 11 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-638.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=debug ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-638.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-638.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | 7 | [program:produce-unicode-error] 8 | command=bash -c 'echo -e "\x88"' 9 | startretries=0 10 | autorestart=false 11 | 12 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1298.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | nodaemon=true ; start in foreground if true; default false 3 | 4 | [rpcinterface:supervisor] 5 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 6 | 7 | [unix_http_server] 8 | file=/tmp/issue-1298.sock ; the path to the socket file 9 | 10 | [supervisorctl] 11 | serverurl=unix:///tmp/issue-1298.sock ; use a unix:// URL for a unix socket 12 | 13 | [program:spew] 14 | command=python %(here)s/spew.py 15 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1054.conf: 
-------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel = debug 3 | logfile=/tmp/issue-1054.log 4 | pidfile=/tmp/issue-1054.pid 5 | nodaemon = true 6 | 7 | [unix_http_server] 8 | file=/tmp/issue-1054.sock ; the path to the socket file 9 | 10 | [supervisorctl] 11 | serverurl=unix:///tmp/issue-1054.sock ; use a unix:// URL for a unix socket 12 | 13 | [rpcinterface:supervisor] 14 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 15 | 16 | [program:cat] 17 | command = /bin/cat 18 | startsecs = 0 19 | 20 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1170a.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | nodaemon=true ; start in foreground if true; default false 3 | loglevel=debug ; log level; default info; others: debug,warn,trace 4 | logfile=/tmp/issue-1170a.log ; main log file; default $CWD/supervisord.log 5 | pidfile=/tmp/issue-1170a.pid ; supervisord pidfile; default supervisord.pid 6 | environment=FOO="set from [supervisord] section" 7 | 8 | [program:echo] 9 | command=bash -c "echo '%(ENV_FOO)s'" 10 | startsecs=0 11 | startretries=0 12 | autorestart=false 13 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-664.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=debug 3 | logfile=/tmp/issue-664.log 4 | pidfile=/tmp/issue-664.pid 5 | nodaemon=true 6 | 7 | [rpcinterface:supervisor] 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 9 | 10 | [unix_http_server] 11 | file=/tmp/issue-664.sock ; the path to the socket file 12 | 13 | [supervisorctl] 14 | serverurl=unix:///tmp/issue-664.sock ; use a unix:// URL for a unix socket 15 | 16 | [program:test_öäü] 17 | command = /bin/cat 18 | 
startretries = 0 19 | autorestart = false 20 | 21 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-836.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel = debug 3 | logfile=/tmp/supervisord.log 4 | pidfile=/tmp/supervisord.pid 5 | nodaemon = true 6 | 7 | [rpcinterface:supervisor] 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 9 | 10 | [unix_http_server] 11 | file=/tmp/issue-565.sock ; the path to the socket file 12 | 13 | [supervisorctl] 14 | serverurl=unix:///tmp/issue-565.sock ; use a unix:// URL for a unix socket 15 | 16 | [program:cat] 17 | command = /bin/cat 18 | startretries = 0 19 | autorestart = false 20 | 21 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1483a.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=info ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-1483a.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-1483a.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | 7 | [rpcinterface:supervisor] 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 9 | 10 | [unix_http_server] 11 | file=/tmp/issue-1483a.sock ; the path to the socket file 12 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1170b.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | nodaemon=true ; start in foreground if true; default false 3 | loglevel=debug ; log level; default info; others: debug,warn,trace 4 | logfile=/tmp/issue-1170b.log ; main log file; default $CWD/supervisord.log 5 | 
pidfile=/tmp/issue-1170b.pid ; supervisord pidfile; default supervisord.pid 6 | environment=FOO="set from [supervisord] section" 7 | 8 | [program:echo] 9 | command=bash -c "echo '%(ENV_FOO)s'" 10 | environment=FOO="set from [program] section" 11 | startsecs=0 12 | startretries=0 13 | autorestart=false 14 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/test_1231.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import logging 3 | import random 4 | import sys 5 | import time 6 | 7 | def main(): 8 | logging.basicConfig(level=logging.INFO, stream=sys.stdout, 9 | format='%(levelname)s [%(asctime)s] %(message)s', 10 | datefmt='%m-%d|%H:%M:%S') 11 | i = 1 12 | while i < 500: 13 | delay = random.randint(400, 1200) 14 | time.sleep(delay / 1000.0) 15 | logging.info('%d - hash=57d94b…381088', i) 16 | i += 1 17 | 18 | if __name__ == '__main__': 19 | main() 20 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1483b.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=info ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-1483b.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-1483b.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | identifier=from_config_file 7 | 8 | [rpcinterface:supervisor] 9 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 10 | 11 | [unix_http_server] 12 | file=/tmp/issue-1483b.sock ; the path to the socket file 13 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1483c.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=info ; 
log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-1483c.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-1483c.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | identifier=from_config_file 7 | 8 | [rpcinterface:supervisor] 9 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 10 | 11 | [unix_http_server] 12 | file=/tmp/issue-1483c.sock ; the path to the socket file 13 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1596.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=info ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-1596.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-1596.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | identifier=from_config_file 7 | 8 | [rpcinterface:supervisor] 9 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 10 | 11 | [unix_http_server] 12 | file=/tmp/issue-1596.sock ; the path to the socket file 13 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-1170c.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | nodaemon=true ; start in foreground if true; default false 3 | loglevel=debug ; log level; default info; others: debug,warn,trace 4 | logfile=/tmp/issue-1170c.log ; main log file; default $CWD/supervisord.log 5 | pidfile=/tmp/issue-1170c.pid ; supervisord pidfile; default supervisord.pid 6 | environment=FOO="set from [supervisord] section" 7 | 8 | [eventlistener:echo] 9 | command=bash -c "echo '%(ENV_FOO)s' >&2" 10 | environment=FOO="set from [eventlistener] section" 11 | 
events=PROCESS_STATE_FATAL 12 | startsecs=0 13 | startretries=0 14 | autorestart=false 15 | -------------------------------------------------------------------------------- /docs/.static/repoze.css: -------------------------------------------------------------------------------- 1 | @import url('default.css'); 2 | body { 3 | background-color: #006339; 4 | } 5 | 6 | div.document { 7 | background-color: #dad3bd; 8 | } 9 | 10 | div.sphinxsidebar h3, h4, h5, a { 11 | color: #127c56 !important; 12 | } 13 | 14 | div.related { 15 | color: #dad3bd !important; 16 | background-color: #00744a; 17 | } 18 | 19 | div.related a { 20 | color: #dad3bd !important; 21 | } 22 | 23 | /* override the justify text align of the default */ 24 | 25 | div.body p { 26 | text-align: left !important; 27 | } 28 | 29 | /* fix google chrome
<pre> tag renderings */
30 | 
31 | pre {
32 |    line-height: normal !important;
33 | }
34 | 


--------------------------------------------------------------------------------
/.readthedocs.yaml:
--------------------------------------------------------------------------------
 1 | # .readthedocs.yaml
 2 | # Read the Docs configuration file
 3 | # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
 4 | 
 5 | # Required
 6 | version: 2
 7 | 
 8 | # Set the version of Python and other tools you might need
 9 | build:
10 |   os: ubuntu-22.04
11 |   tools:
12 |     python: "3.11"
13 | 
14 | # Build documentation in the docs/ directory with Sphinx
15 | sphinx:
16 |   configuration: docs/conf.py
17 | 
18 | # We recommend specifying your dependencies to enable reproducible builds:
19 | # https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
20 | # python:
21 | #   install:
22 | #   - requirements: docs/requirements.txt
23 | 
24 | 


--------------------------------------------------------------------------------
/supervisor/tests/test_pidproxy.py:
--------------------------------------------------------------------------------
 1 | import os
 2 | import unittest
 3 | 
 4 | class PidProxyTests(unittest.TestCase):
 5 |     def _getTargetClass(self):
 6 |         from supervisor.pidproxy import PidProxy
 7 |         return PidProxy
 8 | 
 9 |     def _makeOne(self, *arg, **kw):
10 |         return self._getTargetClass()(*arg, **kw)
11 | 
12 |     def test_ctor_parses_args(self):
13 |         args = ["pidproxy.py", "/path/to/pidfile", "./cmd", "-arg1", "-arg2"]
14 |         pp = self._makeOne(args)
15 |         self.assertEqual(pp.pidfile, "/path/to/pidfile")
16 |         self.assertEqual(pp.abscmd, os.path.abspath("./cmd"))
17 |         self.assertEqual(pp.cmdargs, ["./cmd", "-arg1", "-arg2"])
18 | 


--------------------------------------------------------------------------------
/supervisor/tests/fixtures/issue-1231a.conf:
--------------------------------------------------------------------------------
 1 | [supervisord]
 2 | loglevel=info                ; log level; default info; others: debug,warn,trace
 3 | logfile=/tmp/issue-1231a.log ; main log file; default $CWD/supervisord.log
 4 | pidfile=/tmp/issue-1231a.pid ; supervisord pidfile; default supervisord.pid
 5 | nodaemon=true                ; start in foreground if true; default false
 6 | 
 7 | [rpcinterface:supervisor]
 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
 9 | 
10 | [unix_http_server]
11 | file=/tmp/issue-1231a.sock   ; the path to the socket file
12 | 
13 | [supervisorctl]
14 | serverurl=unix:///tmp/issue-1231a.sock  ; use a unix:// URL  for a unix socket
15 | 
16 | [program:hello]
17 | command=python %(here)s/test_1231.py
18 | 


--------------------------------------------------------------------------------
/supervisor/tests/fixtures/issue-1231b.conf:
--------------------------------------------------------------------------------
 1 | [supervisord]
 2 | loglevel=info                ; log level; default info; others: debug,warn,trace
 3 | logfile=/tmp/issue-1231b.log ; main log file; default $CWD/supervisord.log
 4 | pidfile=/tmp/issue-1231b.pid ; supervisord pidfile; default supervisord.pid
 5 | nodaemon=true                ; start in foreground if true; default false
 6 | 
 7 | [rpcinterface:supervisor]
 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
 9 | 
10 | [unix_http_server]
11 | file=/tmp/issue-1231b.sock   ; the path to the socket file
12 | 
13 | [supervisorctl]
14 | serverurl=unix:///tmp/issue-1231b.sock  ; use a unix:// URL  for a unix socket
15 | 
16 | [program:hello]
17 | command=python %(here)s/test_1231.py
18 | 


--------------------------------------------------------------------------------
/supervisor/tests/fixtures/issue-1231c.conf:
--------------------------------------------------------------------------------
 1 | [supervisord]
 2 | loglevel=info                ; log level; default info; others: debug,warn,trace
 3 | logfile=/tmp/issue-1231c.log ; main log file; default $CWD/supervisord.log
 4 | pidfile=/tmp/issue-1231c.pid ; supervisord pidfile; default supervisord.pid
 5 | nodaemon=true                ; start in foreground if true; default false
 6 | 
 7 | [rpcinterface:supervisor]
 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
 9 | 
10 | [unix_http_server]
11 | file=/tmp/issue-1231c.sock   ; the path to the socket file
12 | 
13 | [supervisorctl]
14 | serverurl=unix:///tmp/issue-1231c.sock  ; use a unix:// URL  for a unix socket
15 | 
16 | [program:hello]
17 | command=python %(here)s/test_1231.py
18 | 


--------------------------------------------------------------------------------
/supervisor/ui/tail.html:
--------------------------------------------------------------------------------
 1 | [tail.html is an HTML page titled "Supervisor Status"; its markup was
     stripped during extraction and only fragments of the visible text survived]
25 | 26 | 27 | 28 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-986.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=debug ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-986.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-986.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | 7 | [rpcinterface:supervisor] 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 9 | 10 | [unix_http_server] 11 | file=/tmp/issue-986.sock ; the path to the socket file 12 | 13 | [supervisorctl] 14 | serverurl=unix:///tmp/issue-986.sock ; use a unix:// URL for a unix socket 15 | 16 | [program:echo] 17 | command=bash -c "echo 'dhcrelay -d -q -a %%h:%%p %%P -i Vlan1000 192.168.0.1'" 18 | startsecs=0 19 | autorestart=false 20 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-550.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=info ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-550.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-550.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | environment=THIS_SHOULD=BE_IN_CHILD_ENV 7 | 8 | [rpcinterface:supervisor] 9 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 10 | 11 | [unix_http_server] 12 | file=/tmp/issue-550.sock ; the path to the socket file 13 | 14 | [supervisorctl] 15 | serverurl=unix:///tmp/issue-550.sock ; use a unix:// URL for a unix socket 16 | 17 | [program:print_env] 18 | command=python %(here)s/print_env.py 19 | startsecs=0 20 | startretries=0 21 | autorestart=false 
22 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-565.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=info ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-565.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-565.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | 7 | [rpcinterface:supervisor] 8 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 9 | 10 | [unix_http_server] 11 | file=/tmp/issue-565.sock ; the path to the socket file 12 | 13 | [supervisorctl] 14 | serverurl=unix:///tmp/issue-565.sock ; use a unix:// URL for a unix socket 15 | 16 | [program:hello] 17 | command=bash %(here)s/hello.sh 18 | stdout_events_enabled=true 19 | startretries=0 20 | autorestart=false 21 | 22 | [eventlistener:listener] 23 | command=python %(here)s/listener.py 24 | events=PROCESS_LOG 25 | startretries=0 26 | autorestart=false 27 | -------------------------------------------------------------------------------- /docs/glossary.rst: -------------------------------------------------------------------------------- 1 | .. _glossary: 2 | 3 | Glossary 4 | ======== 5 | 6 | .. glossary:: 7 | :sorted: 8 | 9 | daemontools 10 | A `process control system by D.J. Bernstein 11 | `_. 12 | 13 | runit 14 | A `process control system `_. 15 | 16 | launchd 17 | A `process control system used by Apple 18 | `_ as process 1 under Mac 19 | OS X. 20 | 21 | umask 22 | Abbreviation of *user mask*: sets the file mode creation mask of 23 | the current process. See `http://en.wikipedia.org/wiki/Umask 24 | <http://en.wikipedia.org/wiki/Umask>`_.
25 | 26 | Superlance 27 | A package which provides various event listener implementations 28 | that plug into Supervisor which can help monitor process memory 29 | usage and crash status: `https://pypi.org/pypi/superlance/ 30 | <https://pypi.org/pypi/superlance/>`_. 31 | 32 | 33 | 34 | -------------------------------------------------------------------------------- /supervisor/medusa/docs/tkinter.txt: -------------------------------------------------------------------------------- 1 | 2 | Here are some notes on combining the Tk Event loop with the async lib 3 | and/or Medusa. Many thanks to Aaron Rhodes (alrhodes@cpis.net) for 4 | the info! 5 | 6 | > Sam, 7 | > 8 | > Just wanted to send you a quick message about how I managed to 9 | > finally integrate Tkinter with asyncore. This solution is pretty 10 | > straightforward. From the main tkinter event loop i simply added 11 | > a repeating alarm that calls asyncore.poll() every so often. So 12 | > the code looks like this: 13 | > 14 | > in main: 15 | > import asyncore 16 | > 17 | > self.socket_check() 18 | > 19 | > ... 20 | > 21 | > then, socket_check() is: 22 | > 23 | > def socket_check(self): 24 | > asyncore.poll(timeout=0.0) 25 | > self.after(100, self.socket_check) 26 | > 27 | > 28 | > This simply causes asyncore to poll all the sockets every 100ms 29 | > during the tkinter event loop. The GUI doesn't block on IO since 30 | > all the IO calls are now handled with asyncore.
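The pattern in the note above can be factored into a small helper. This is an illustrative sketch, not code from the repository: `widget` stands for the Tk main window (anything with Tk's `after(ms, callback)` method), and `poll` stands for `lambda: asyncore.poll(timeout=0.0)` on Pythons that still ship `asyncore` (it was removed in Python 3.12), which keeps the sketch independent of that module:

```python
def install_socket_check(widget, poll, interval_ms=100):
    """Drive an asyncore-style socket map from the Tk event loop.

    Instead of blocking in asyncore.loop(), run one non-blocking poll
    pass now and re-arm a Tk alarm so another pass happens every
    interval_ms milliseconds.
    """
    def socket_check():
        poll()                                    # e.g. asyncore.poll(timeout=0.0)
        widget.after(interval_ms, socket_check)   # schedule the next pass
    socket_check()
    return socket_check
```

In the quoted message this corresponds to calling `self.socket_check()` once from `main` and letting `self.after(100, self.socket_check)` keep the polling alive.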
31 | 32 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/listener.py: -------------------------------------------------------------------------------- 1 | 2 | import sys 3 | 4 | def write_and_flush(stream, s): 5 | stream.write(s) 6 | stream.flush() 7 | 8 | def write_stdout(s): 9 | # only eventlistener protocol messages may be sent to stdout 10 | sys.stdout.write(s) 11 | sys.stdout.flush() 12 | 13 | def write_stderr(s): 14 | sys.stderr.write(s) 15 | sys.stderr.flush() 16 | 17 | def main(): 18 | stdin = sys.stdin 19 | stdout = sys.stdout 20 | stderr = sys.stderr 21 | while True: 22 | # transition from ACKNOWLEDGED to READY 23 | write_and_flush(stdout, 'READY\n') 24 | 25 | # read header line and print it to stderr 26 | line = stdin.readline() 27 | write_and_flush(stderr, line) 28 | 29 | # read event payload and print it to stderr 30 | headers = dict([ x.split(':') for x in line.split() ]) 31 | data = stdin.read(int(headers['len'])) 32 | write_and_flush(stderr, data) 33 | 34 | # transition from READY to ACKNOWLEDGED 35 | write_and_flush(stdout, 'RESULT 2\nOK') 36 | 37 | if __name__ == '__main__': 38 | main() 39 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | Supervisor: A Process Control System 2 | ==================================== 3 | 4 | Supervisor is a client/server system that allows its users to monitor 5 | and control a number of processes on UNIX-like operating systems. 6 | 7 | It shares some of the same goals of programs like :term:`launchd`, 8 | :term:`daemontools`, and :term:`runit`. Unlike some of these programs, 9 | it is not meant to be run as a substitute for ``init`` as "process id 10 | 1". Instead it is meant to be used to control processes related to a 11 | project or a customer, and is meant to start like any other program at 12 | boot time. 
13 | 14 | Narrative Documentation 15 | ----------------------- 16 | 17 | .. toctree:: 18 | :maxdepth: 2 19 | 20 | introduction.rst 21 | installing.rst 22 | running.rst 23 | configuration.rst 24 | subprocess.rst 25 | logging.rst 26 | events.rst 27 | xmlrpc.rst 28 | upgrading.rst 29 | faq.rst 30 | development.rst 31 | glossary.rst 32 | 33 | API Documentation 34 | ----------------- 35 | 36 | .. toctree:: 37 | :maxdepth: 2 38 | 39 | api.rst 40 | 41 | Plugins 42 | ------- 43 | 44 | .. toctree:: 45 | :maxdepth: 2 46 | 47 | plugins.rst 48 | 49 | Indices and tables 50 | ------------------ 51 | 52 | * :ref:`genindex` 53 | * :ref:`modindex` 54 | * :ref:`search` 55 | -------------------------------------------------------------------------------- /supervisor/tests/fixtures/issue-733.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | loglevel=debug ; log level; default info; others: debug,warn,trace 3 | logfile=/tmp/issue-733.log ; main log file; default $CWD/supervisord.log 4 | pidfile=/tmp/issue-733.pid ; supervisord pidfile; default supervisord.pid 5 | nodaemon=true ; start in foreground if true; default false 6 | 7 | ; 8 | ;This command does not exist so the process will enter the FATAL state. 9 | ; 10 | [program:nonexistent] 11 | command=%(here)s/nonexistent 12 | startsecs=0 13 | startretries=0 14 | autorestart=false 15 | 16 | ; 17 | ;The one-line eventlistener below will cause supervisord to exit when any process 18 | ;enters the FATAL state. Based on: 19 | ;https://github.com/Supervisor/supervisor/issues/733#issuecomment-781254766 20 | ; 21 | ;Differences from that example: 22 | ; 1. $PPID is used instead of a hardcoded PID 1. Child processes are always forked 23 | ; from supervisord, so their PPID is the PID of supervisord. 24 | ; 2. "printf" is used instead of "echo". The result "OK" must not have a newline 25 | ; or else the protocol will be violated and supervisord will log a warning. 
26 | ; 27 | [eventlistener:fatalexit] 28 | events=PROCESS_STATE_FATAL 29 | command=sh -c 'while true; do printf "READY\n"; read line; kill -15 $PPID; printf "RESULT 2\n"; printf "OK"; done' 30 | startsecs=0 31 | startretries=0 32 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | Supervisor 2 | ========== 3 | 4 | Supervisor is a client/server system that allows its users to 5 | control a number of processes on UNIX-like operating systems. 6 | 7 | Supported Platforms 8 | ------------------- 9 | 10 | Supervisor has been tested and is known to run on Linux (Ubuntu), Mac OS X 11 | (10.4, 10.5, 10.6), and Solaris (10 for Intel) and FreeBSD 6.1. It will 12 | likely work fine on most UNIX systems. 13 | 14 | Supervisor will not run at all under any version of Windows. 15 | 16 | Supervisor is intended to work on Python 3 version 3.4 or later 17 | and on Python 2 version 2.7. 18 | 19 | Documentation 20 | ------------- 21 | 22 | You can view the current Supervisor documentation online `in HTML format 23 | `_ . This is where you should go for detailed 24 | installation and configuration documentation. 25 | 26 | Reporting Bugs and Viewing the Source Repository 27 | ------------------------------------------------ 28 | 29 | Please report bugs in the `GitHub issue tracker 30 | `_. 31 | 32 | You can view the source repository for supervisor via 33 | `https://github.com/Supervisor/supervisor 34 | `_. 35 | 36 | Contributing 37 | ------------ 38 | 39 | We'll review contributions from the community in 40 | `pull requests `_ 41 | on GitHub. 
42 | -------------------------------------------------------------------------------- /docs/faq.rst: -------------------------------------------------------------------------------- 1 | Frequently Asked Questions 2 | ========================== 3 | 4 | Q 5 | My program never starts and supervisor doesn't indicate any error? 6 | 7 | A 8 | Make sure the ``x`` bit is set on the executable file you're using in 9 | the ``command=`` line of your program section. 10 | 11 | Q 12 | I am a software author and I want my program to behave differently 13 | when it's running under :program:`supervisord`. How can I tell if 14 | my program is running under :program:`supervisord`? 15 | 16 | A 17 | Supervisor and its subprocesses share an environment variable 18 | :envvar:`SUPERVISOR_ENABLED`. When your program is run under 19 | :program:`supervisord`, it can check for the presence of this 20 | environment variable to determine whether it is running as a 21 | :program:`supervisord` subprocess. 22 | 23 | Q 24 | My command works fine when I invoke it by hand from a shell prompt, 25 | but when I use the same command line in a supervisor program 26 | ``command=`` section, the program fails mysteriously. Why? 27 | 28 | A 29 | This may be due to your process' dependence on environment variable 30 | settings. See :ref:`subprocess_environment`. 31 | 32 | Q 33 | How can I make Supervisor restart a process that's using "too much" 34 | memory automatically? 35 | 36 | A 37 | The :term:`Superlance` package contains a console script that can be 38 | used as a Supervisor event listener named ``memmon`` which helps 39 | with this task. It works on Linux and Mac OS X. 
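The :envvar:`SUPERVISOR_ENABLED` check described in the FAQ above is a one-liner in practice. A sketch (the logging choice at the end is purely illustrative):

```python
import os

def running_under_supervisord():
    # supervisord places SUPERVISOR_ENABLED (value "1") in the
    # environment of every subprocess it starts, so presence alone
    # is the signal.
    return 'SUPERVISOR_ENABLED' in os.environ

# Hypothetical use: pick an output strategy at startup.
if running_under_supervisord():
    log_target = 'stdout'   # let supervisord capture and rotate the output
else:
    log_target = 'file'     # standalone default (illustrative)
```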
40 | -------------------------------------------------------------------------------- /supervisor/medusa/util.py: -------------------------------------------------------------------------------- 1 | from supervisor.compat import escape 2 | 3 | def html_repr (object): 4 | so = escape (repr (object)) 5 | if hasattr (object, 'hyper_respond'): 6 | return '<a href="/status/object/%d/">%s</a>' % (id (object), so) 7 | else: 8 | return so 9 | 10 | # for example, tera, giga, mega, kilo 11 | # p_d (n, (1024, 1024, 1024, 1024)) 12 | # smallest divider goes first - for example 13 | # minutes, hours, days 14 | # p_d (n, (60, 60, 24)) 15 | 16 | def progressive_divide (n, parts): 17 | result = [] 18 | for part in parts: 19 | n, rem = divmod (n, part) 20 | result.append (rem) 21 | result.append (n) 22 | return result 23 | 24 | # b,k,m,g,t 25 | def split_by_units (n, units, dividers, format_string): 26 | divs = progressive_divide (n, dividers) 27 | result = [] 28 | for i in range(len(units)): 29 | if divs[i]: 30 | result.append (format_string % (divs[i], units[i])) 31 | result.reverse() 32 | if not result: 33 | return [format_string % (0, units[0])] 34 | else: 35 | return result 36 | 37 | def english_bytes (n): 38 | return split_by_units ( 39 | n, 40 | ('','K','M','G','T'), 41 | (1024, 1024, 1024, 1024, 1024), 42 | '%d %sB' 43 | ) 44 | 45 | def english_time (n): 46 | return split_by_units ( 47 | n, 48 | ('secs', 'mins', 'hours', 'days', 'weeks', 'years'), 49 | ( 60, 60, 24, 7, 52), 50 | '%d %s' 51 | ) 52 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = 3 | cover,cover3,docs,py27,py34,py35,py36,py37,py38,py39,py310,py311,py312,py313,py314 4 | 5 | [testenv] 6 | deps = 7 | attrs < 21.1.0 # see https://github.com/python-attrs/attrs/pull/608 8 | pexpect == 4.7.0 # see https://github.com/Supervisor/supervisor/issues/1327 9 | pytest 10 | passenv = END_TO_END 11 | commands =
12 | pytest --capture=no {posargs} 13 | 14 | [testenv:py27] 15 | basepython = python2.7 16 | deps = 17 | {[testenv]deps} 18 | mock >= 0.5.0 19 | passenv = {[testenv]passenv} 20 | commands = {[testenv]commands} 21 | 22 | [testenv:py27-configparser] 23 | ;see https://github.com/Supervisor/supervisor/issues/1230 24 | basepython = python2.7 25 | deps = 26 | {[testenv:py27]deps} 27 | configparser 28 | passenv = {[testenv:py27]passenv} 29 | commands = {[testenv:py27]commands} 30 | 31 | [testenv:cover] 32 | basepython = python2.7 33 | deps = 34 | {[testenv:py27]deps} 35 | pytest-cov 36 | commands = 37 | pytest --capture=no --cov=supervisor --cov-report=term-missing --cov-report=xml {posargs} 38 | 39 | [testenv:cover3] 40 | basepython = python3.8 41 | commands = 42 | pytest --capture=no --cov=supervisor --cov-report=term-missing --cov-report=xml {posargs} 43 | deps = 44 | {[testenv:cover]deps} 45 | 46 | [testenv:docs] 47 | deps = 48 | pygments >= 2.19.1 # Sphinx build fails on 2.19.0 when highlighting ini block 49 | Sphinx 50 | readme 51 | setuptools >= 18.5 52 | allowlist_externals = make 53 | commands = 54 | make -C docs html BUILDDIR={envtmpdir} "SPHINXOPTS=-W -E" 55 | python setup.py check -m -r -s 56 | -------------------------------------------------------------------------------- /supervisor/medusa/LICENSE.txt: -------------------------------------------------------------------------------- 1 | Medusa was once distributed under a 'free for non-commercial use' 2 | license, but in May of 2000 Sam Rushing changed the license to be 3 | identical to the standard Python license at the time. The standard 4 | Python license has always applied to the core components of Medusa, 5 | this change just frees up the rest of the system, including the http 6 | server, ftp server, utilities, etc. 
Medusa is therefore under the 7 | following license: 8 | 9 | ============================== 10 | Permission to use, copy, modify, and distribute this software and 11 | its documentation for any purpose and without fee is hereby granted, 12 | provided that the above copyright notice appear in all copies and 13 | that both that copyright notice and this permission notice appear in 14 | supporting documentation, and that the name of Sam Rushing not be 15 | used in advertising or publicity pertaining to distribution of the 16 | software without specific, written prior permission. 17 | 18 | SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, 19 | INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN 20 | NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR 21 | CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS 22 | OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, 23 | NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION 24 | WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 25 | ============================== 26 | 27 | Sam would like to take this opportunity to thank all of the folks who 28 | supported Medusa over the years by purchasing commercial licenses. 29 | 30 | 31 | -------------------------------------------------------------------------------- /supervisor/medusa/counter.py: -------------------------------------------------------------------------------- 1 | # -*- Mode: Python -*- 2 | 3 | # It is tempting to add an __int__ method to this class, but it's not 4 | # a good idea. This class tries to gracefully handle integer 5 | # overflow, and to hide this detail from both the programmer and the 6 | # user. 
Note that the __str__ method can be relied on for printing out 7 | # the value of a counter: 8 | # 9 | # >>> print 'Total Client: %s' % self.total_clients 10 | # 11 | # If you need to do arithmetic with the value, then use the 'as_long' 12 | # method, the use of long arithmetic is a reminder that the counter 13 | # will overflow. 14 | 15 | from supervisor.compat import long 16 | 17 | class counter: 18 | """general-purpose counter""" 19 | 20 | def __init__ (self, initial_value=0): 21 | self.value = initial_value 22 | 23 | def increment (self, delta=1): 24 | result = self.value 25 | try: 26 | self.value = self.value + delta 27 | except OverflowError: 28 | self.value = long(self.value) + delta 29 | return result 30 | 31 | def decrement (self, delta=1): 32 | result = self.value 33 | try: 34 | self.value = self.value - delta 35 | except OverflowError: 36 | self.value = long(self.value) - delta 37 | return result 38 | 39 | def as_long (self): 40 | return long(self.value) 41 | 42 | def __nonzero__ (self): 43 | return self.value != 0 44 | 45 | __bool__ = __nonzero__ 46 | 47 | def __repr__ (self): 48 | return '<counter value=%s at %x>' % (self.value, id(self)) 49 | 50 | def __str__ (self): 51 | s = str(long(self.value)) 52 | if s[-1:] == 'L': 53 | s = s[:-1] 54 | return s 55 | 56 | -------------------------------------------------------------------------------- /supervisor/medusa/README.txt: -------------------------------------------------------------------------------- 1 | Medusa is a 'server platform' -- it provides a framework for 2 | implementing asynchronous socket-based servers (TCP/IP and on Unix, 3 | Unix domain, sockets). 4 | 5 | An asynchronous socket server is a server that can communicate with many 6 | other clients simultaneously by multiplexing I/O within a single 7 | process/thread. In the context of an HTTP server, this means a single 8 | process can serve hundreds or even thousands of clients, depending only on 9 | the operating system's configuration and limitations.
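The single-process multiplexing model the Medusa README describes — many connections, one process, each socket serviced only when it is ready — can be sketched with the stdlib `selectors` module (a modern stand-in for the asyncore core Medusa is built on; the echo handler and all names are illustrative):

```python
import selectors
import socket

# One process, one selector, many connections: each registered socket
# carries a callback and is serviced only when the OS reports it ready.
sel = selectors.DefaultSelector()

def accept(server):
    conn, _addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)       # echo the bytes straight back
    else:
        sel.unregister(conn)     # peer closed the connection
        conn.close()

def serve_once(timeout=0.1):
    """One readiness pass; a real server would loop over this forever."""
    for key, _mask in sel.select(timeout=timeout):
        key.data(key.fileobj)    # dispatch to the registered callback
```

The per-connection cost is just a socket plus a selector entry, which is the "several kilobytes per client" scalability point the README goes on to make.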
10 | 11 | There are several advantages to this approach: 12 | 13 | o performance - no fork() or thread() start-up costs per hit. 14 | 15 | o scalability - the overhead per client can be kept rather small, 16 | on the order of several kilobytes of memory. 17 | 18 | o persistence - a single-process server can easily coordinate the 19 | actions of several different connections. This makes things like 20 | proxy servers and gateways easy to implement. It also makes it 21 | possible to share resources like database handles. 22 | 23 | Medusa includes HTTP, FTP, and 'monitor' (remote python interpreter) 24 | servers. Medusa can simultaneously support several instances of 25 | either the same or different server types - for example you could 26 | start up two HTTP servers, an FTP server, and a monitor server. Then 27 | you could connect to the monitor server to control and manipulate 28 | medusa while it is running. 29 | 30 | Other servers and clients have been written (SMTP, POP3, NNTP), and 31 | several are in the planning stages. 32 | 33 | Medusa was originally written by Sam Rushing , 34 | and its original Web page is at . After 35 | Sam moved on to other things, A.M. Kuchling 36 | took over maintenance of the Medusa package. 37 | 38 | --amk 39 | 40 | -------------------------------------------------------------------------------- /supervisor/medusa/docs/proxy_notes.txt: -------------------------------------------------------------------------------- 1 | 2 | # we can build 'promises' to produce external data. Each producer 3 | # contains a 'promise' to fetch external data (or an error 4 | # message). writable() for that channel will only return true if the 5 | # top-most producer is ready. This state can be flagged by the dns 6 | # client making a callback. 7 | 8 | # So, say 5 proxy requests come in, we can send out DNS queries for 9 | # them immediately. 
If the replies to these come back before the 10 | # promises get to the front of the queue, so much the better: no 11 | # resolve delay. 8^) 12 | # 13 | # ok, there's still another complication: 14 | # how to maintain replies in order? 15 | # say three requests come in, (to different hosts? can this happen?) 16 | # yet the connections happen third, second, and first. We can't buffer 17 | # the entire request! We need to be able to specify how much to buffer. 18 | # 19 | # =========================================================================== 20 | # 21 | # the current setup is a 'pull' model: whenever the channel fires FD_WRITE, 22 | # we 'pull' data from the producer fifo. what we need is a 'push' option/mode, 23 | # where 24 | # 1) we only check for FD_WRITE when data is in the buffer 25 | # 2) whoever is 'pushing' is responsible for calling 'refill_buffer()' 26 | # 27 | # what is necessary to support this 'mode'? 28 | # 1) writable() only fires when data is in the buffer 29 | # 2) refill_buffer() is only called by the 'pusher'. 30 | # 31 | # how would such a mode affect things? with this mode could we support 32 | # a true http/1.1 proxy? [i.e, support pipelined proxy requests, possibly 33 | # to different hosts, possibly even mixed in with non-proxy requests?] For 34 | # example, it would be nice if we could have the proxy automatically apply the 35 | # 1.1 chunking for 1.0 close-on-eof replies when feeding it to the client. This 36 | # would let us keep our persistent connection. 37 | -------------------------------------------------------------------------------- /supervisor/states.py: -------------------------------------------------------------------------------- 1 | # This module must not depend on any other non-stdlib module to prevent 2 | # circular import problems. 
3 | 4 | class ProcessStates: 5 | STOPPED = 0 6 | STARTING = 10 7 | RUNNING = 20 8 | BACKOFF = 30 9 | STOPPING = 40 10 | EXITED = 100 11 | FATAL = 200 12 | UNKNOWN = 1000 13 | 14 | STOPPED_STATES = (ProcessStates.STOPPED, 15 | ProcessStates.EXITED, 16 | ProcessStates.FATAL, 17 | ProcessStates.UNKNOWN) 18 | 19 | RUNNING_STATES = (ProcessStates.RUNNING, 20 | ProcessStates.BACKOFF, 21 | ProcessStates.STARTING) 22 | 23 | SIGNALLABLE_STATES = (ProcessStates.RUNNING, 24 | ProcessStates.STARTING, 25 | ProcessStates.STOPPING) 26 | 27 | def getProcessStateDescription(code): 28 | return _process_states_by_code.get(code) 29 | 30 | 31 | class SupervisorStates: 32 | FATAL = 2 33 | RUNNING = 1 34 | RESTARTING = 0 35 | SHUTDOWN = -1 36 | 37 | def getSupervisorStateDescription(code): 38 | return _supervisor_states_by_code.get(code) 39 | 40 | 41 | class EventListenerStates: 42 | READY = 10 # the process ready to be sent an event from supervisor 43 | BUSY = 20 # event listener is processing an event sent to it by supervisor 44 | ACKNOWLEDGED = 30 # the event listener processed an event 45 | UNKNOWN = 40 # the event listener is in an unknown state 46 | 47 | def getEventListenerStateDescription(code): 48 | return _eventlistener_states_by_code.get(code) 49 | 50 | 51 | # below is an optimization for internal use in this module only 52 | def _names_by_code(states): 53 | d = {} 54 | for name in states.__dict__: 55 | if not name.startswith('__'): 56 | code = getattr(states, name) 57 | d[code] = name 58 | return d 59 | _process_states_by_code = _names_by_code(ProcessStates) 60 | _supervisor_states_by_code = _names_by_code(SupervisorStates) 61 | _eventlistener_states_by_code = _names_by_code(EventListenerStates) 62 | -------------------------------------------------------------------------------- /supervisor/pidproxy.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python -u 2 | 3 | """pidproxy -- run command and proxy signals to it via 
its pidfile. 4 | 5 | This executable runs a command and then monitors a pidfile. When this 6 | executable receives a signal, it sends the same signal to the pid 7 | in the pidfile. 8 | 9 | Usage: %s <pidfile name> <command> [<cmdarg1> ...] 10 | """ 11 | 12 | import os 13 | import sys 14 | import signal 15 | import time 16 | 17 | class PidProxy: 18 | pid = None 19 | 20 | def __init__(self, args): 21 | try: 22 | self.pidfile, cmdargs = args[1], args[2:] 23 | self.abscmd = os.path.abspath(cmdargs[0]) 24 | self.cmdargs = cmdargs 25 | except (ValueError, IndexError): 26 | self.usage() 27 | sys.exit(1) 28 | 29 | def go(self): 30 | self.setsignals() 31 | self.pid = os.spawnv(os.P_NOWAIT, self.abscmd, self.cmdargs) 32 | while 1: 33 | time.sleep(5) 34 | try: 35 | pid = os.waitpid(-1, os.WNOHANG)[0] 36 | except OSError: 37 | pid = None 38 | if pid: 39 | break 40 | 41 | def usage(self): 42 | print(__doc__ % sys.argv[0]) 43 | 44 | def setsignals(self): 45 | signal.signal(signal.SIGTERM, self.passtochild) 46 | signal.signal(signal.SIGHUP, self.passtochild) 47 | signal.signal(signal.SIGINT, self.passtochild) 48 | signal.signal(signal.SIGUSR1, self.passtochild) 49 | signal.signal(signal.SIGUSR2, self.passtochild) 50 | signal.signal(signal.SIGQUIT, self.passtochild) 51 | signal.signal(signal.SIGCHLD, self.reap) 52 | 53 | def reap(self, sig, frame): 54 | # do nothing, we reap our child synchronously 55 | pass 56 | 57 | def passtochild(self, sig, frame): 58 | try: 59 | with open(self.pidfile, 'r') as f: 60 | pid = int(f.read().strip()) 61 | except: 62 | print("Can't read child pidfile %s!"
% self.pidfile) 63 | return 64 | os.kill(pid, sig) 65 | if sig in [signal.SIGTERM, signal.SIGINT, signal.SIGQUIT]: 66 | sys.exit(0) 67 | 68 | def main(): 69 | pp = PidProxy(sys.argv) 70 | pp.go() 71 | 72 | if __name__ == '__main__': 73 | main() 74 | -------------------------------------------------------------------------------- /supervisor/tests/test_states.py: -------------------------------------------------------------------------------- 1 | """Test suite for supervisor.states""" 2 | 3 | import unittest 4 | from supervisor import states 5 | 6 | class TopLevelProcessStateTests(unittest.TestCase): 7 | def test_module_has_process_states(self): 8 | self.assertTrue(hasattr(states, 'ProcessStates')) 9 | 10 | def test_stopped_states_do_not_overlap_with_running_states(self): 11 | for state in states.STOPPED_STATES: 12 | self.assertFalse(state in states.RUNNING_STATES) 13 | 14 | def test_running_states_do_not_overlap_with_stopped_states(self): 15 | for state in states.RUNNING_STATES: 16 | self.assertFalse(state in states.STOPPED_STATES) 17 | 18 | def test_getProcessStateDescription_returns_string_when_found(self): 19 | state = states.ProcessStates.STARTING 20 | self.assertEqual(states.getProcessStateDescription(state), 21 | 'STARTING') 22 | 23 | def test_getProcessStateDescription_returns_None_when_not_found(self): 24 | self.assertEqual(states.getProcessStateDescription(3.14159), 25 | None) 26 | 27 | class TopLevelSupervisorStateTests(unittest.TestCase): 28 | def test_module_has_supervisor_states(self): 29 | self.assertTrue(hasattr(states, 'SupervisorStates')) 30 | 31 | def test_getSupervisorStateDescription_returns_string_when_found(self): 32 | state = states.SupervisorStates.RUNNING 33 | self.assertEqual(states.getSupervisorStateDescription(state), 34 | 'RUNNING') 35 | 36 | def test_getSupervisorStateDescription_returns_None_when_not_found(self): 37 | self.assertEqual(states.getSupervisorStateDescription(3.14159), 38 | None) 39 | 40 | class 
TopLevelEventListenerStateTests(unittest.TestCase): 41 | def test_module_has_eventlistener_states(self): 42 | self.assertTrue(hasattr(states, 'EventListenerStates')) 43 | 44 | def test_getEventListenerStateDescription_returns_string_when_found(self): 45 | state = states.EventListenerStates.ACKNOWLEDGED 46 | self.assertEqual(states.getEventListenerStateDescription(state), 47 | 'ACKNOWLEDGED') 48 | 49 | def test_getEventListenerStateDescription_returns_None_when_not_found(self): 50 | self.assertEqual(states.getEventListenerStateDescription(3.14159), 51 | None) 52 | -------------------------------------------------------------------------------- /supervisor/medusa/docs/threads.txt: -------------------------------------------------------------------------------- 1 | # -*- Mode: Text; tab-width: 4 -*- 2 | 3 | [note, a better solution is now available, see the various modules in 4 | the 'thread' directory (SMR 990105)] 5 | 6 | A Workable Approach to Mixing Threads and Medusa. 7 | --------------------------------------------------------------------------- 8 | 9 | When Medusa receives a request that needs to be handled by a separate 10 | thread, have the thread remove the socket from Medusa's control, by 11 | calling the 'del_channel()' method, and put the socket into 12 | blocking-mode: 13 | 14 | request.channel.del_channel() 15 | request.channel.socket.setblocking (1) 16 | 17 | Now your thread is responsible for managing the rest of the HTTP 18 | 'session'. In particular, you need to send the HTTP response, followed 19 | by any headers, followed by the response body. 20 | 21 | Since the most common need for mixing threads and Medusa is to support 22 | CGI, there's one final hurdle that should be pointed out: CGI scripts 23 | sometimes make use of a 'Status:' hack (oops, I meant to say 'header') 24 | in order to tell the server to return a reply other than '200 OK'. To 25 | support this it is necessary to scan the output _before_ it is sent.
26 | Here is a sample 'write' method for a file-like object that performs 27 | this scan: 28 | 29 | HEADER_LINE = regex.compile ('\([A-Za-z0-9-]+\): \(.*\)') 30 | 31 | def write (self, data): 32 | if self.got_header: 33 | self._write (data) 34 | else: 35 | # CGI scripts may optionally provide extra headers. 36 | # 37 | # If they do not, then the output is assumed to be 38 | # text/html, with an HTTP reply code of '200 OK'. 39 | # 40 | # If they do, we need to scan those headers for one in 41 | # particular: the 'Status:' header, which will tell us 42 | # to use a different HTTP reply code [like '302 Moved'] 43 | # 44 | self.buffer = self.buffer + data 45 | lines = self.buffer.split('\n') 46 | # look for something un-header-like 47 | for i in range(len(lines)): 48 | if i == (len(lines)-1): 49 | if lines[i] == '': 50 | break 51 | elif HEADER_LINE.match (lines[i]) == -1: 52 | # this is not a header line. 53 | self.got_header = 1 54 | self.buffer = self.build_header (lines[:i]) 55 | # rejoin the rest of the data 56 | self._write('\n'.join(lines[i:])) 57 | break 58 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | 9 | # Internal variables. 10 | PAPEROPT_a4 = -D latex_paper_size=a4 11 | PAPEROPT_letter = -D latex_paper_size=letter 12 | ALLSPHINXOPTS = -d .build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 13 | 14 | .PHONY: help clean html web pickle htmlhelp latex changes linkcheck 15 | 16 | help: 17 | @echo "Please use \`make <target>' where <target> is one of" 18 | @echo " html to make standalone HTML files" 19 | @echo " pickle to make pickle files (usable by e.g.
sphinx-web)" 20 | @echo " htmlhelp to make HTML files and a HTML help project" 21 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 22 | @echo " changes to make an overview over all changed/added/deprecated items" 23 | @echo " linkcheck to check all external links for integrity" 24 | 25 | clean: 26 | -rm -rf .build/* 27 | 28 | html: 29 | mkdir -p .build/html .build/doctrees 30 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) .build/html 31 | @echo 32 | @echo "Build finished. The HTML pages are in .build/html." 33 | 34 | pickle: 35 | mkdir -p .build/pickle .build/doctrees 36 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) .build/pickle 37 | @echo 38 | @echo "Build finished; now you can process the pickle files or run" 39 | @echo " sphinx-web .build/pickle" 40 | @echo "to start the sphinx-web server." 41 | 42 | web: pickle 43 | 44 | htmlhelp: 45 | mkdir -p .build/htmlhelp .build/doctrees 46 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) .build/htmlhelp 47 | @echo 48 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 49 | ".hhp project file in .build/htmlhelp." 50 | 51 | latex: 52 | mkdir -p .build/latex .build/doctrees 53 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) .build/latex 54 | @echo 55 | @echo "Build finished; the LaTeX files are in .build/latex." 56 | @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ 57 | "run these through (pdf)latex." 58 | 59 | changes: 60 | mkdir -p .build/changes .build/doctrees 61 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) .build/changes 62 | @echo 63 | @echo "The overview file is in .build/changes." 64 | 65 | linkcheck: 66 | mkdir -p .build/linkcheck .build/doctrees 67 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) .build/linkcheck 68 | @echo 69 | @echo "Link check complete; look for any errors in the above output " \ 70 | "or in .build/linkcheck/output.txt." 
71 | -------------------------------------------------------------------------------- /docs/xmlrpc.rst: -------------------------------------------------------------------------------- 1 | Extending Supervisor's XML-RPC API 2 | ================================== 3 | 4 | Supervisor can be extended with new XML-RPC APIs. Several third-party 5 | plugins already exist that can be wired into your Supervisor 6 | configuration. You may additionally write your own. Extensible 7 | XML-RPC interfaces is an advanced feature, introduced in version 3.0. 8 | You needn't understand it unless you wish to use an existing 9 | third-party RPC interface plugin or if you wish to write your own RPC 10 | interface plugin. 11 | 12 | .. _rpcinterface_factories: 13 | 14 | Configuring XML-RPC Interface Factories 15 | --------------------------------------- 16 | 17 | An additional RPC interface is configured into a supervisor 18 | installation by adding a ``[rpcinterface:x]`` section in the 19 | Supervisor configuration file. 20 | 21 | In the sample config file, there is a section which is named 22 | ``[rpcinterface:supervisor]``. By default it looks like this: 23 | 24 | .. code-block:: ini 25 | 26 | [rpcinterface:supervisor] 27 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 28 | 29 | This section *must* remain in the configuration for the standard setup 30 | of supervisor to work properly. If you don't want supervisor to do 31 | anything it doesn't already do out of the box, this is all you need to 32 | know about this type of section. 
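An RPC interface plugin is ultimately just a Python object: every public method of the object returned by the factory becomes an XML-RPC method under the section's namespace. Here is a minimal sketch using hypothetical names (``PingRPCInterface`` and ``make_ping_rpcinterface`` are illustrative, not part of Supervisor's API):

```python
class PingRPCInterface:
    """Hypothetical RPC interface: public methods are exposed as
    XML-RPC methods, reachable as e.g. ping.say_hello()."""

    def __init__(self, supervisord):
        # supervisord is the main application object handed to the factory
        self.supervisord = supervisord

    def say_hello(self, name):
        # XML-RPC methods should return marshallable values
        # (strings, ints, lists, dicts, ...)
        return 'hello, %s' % name

def make_ping_rpcinterface(supervisord, **config):
    # this sketch ignores any extra key/value pairs from the section
    return PingRPCInterface(supervisord)
```

Wired in via a hypothetical ``[rpcinterface:ping]`` section, such a method would be reachable as ``ping.say_hello`` from supervisorctl or any XML-RPC client.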
33 | 34 | However, if you wish to add additional XML-RPC interface namespaces to 35 | a configuration of supervisor, you may add additional 36 | ``[rpcinterface:foo]`` sections, where "foo" represents the namespace 37 | of the interface (from the web root), and the value named by 38 | ``supervisor.rpcinterface_factory`` is a factory callable written in 39 | Python which should have a function signature that accepts a single 40 | positional argument ``supervisord`` and as many keyword arguments as 41 | required to perform configuration. Any key/value pairs defined within 42 | the ``rpcinterface:foo`` section will be passed as keyword arguments 43 | to the factory. Here's an example of a factory function, created in 44 | the package ``my.package``. 45 | 46 | .. code-block:: python 47 | 48 | def make_another_rpcinterface(supervisord, **config): 49 | retries = int(config.get('retries', 0)) 50 | another_rpc_interface = AnotherRPCInterface(supervisord, retries) 51 | return another_rpc_interface 52 | 53 | And a section in the config file meant to configure it. 54 | 55 | .. code-block:: ini 56 | 57 | [rpcinterface:another] 58 | supervisor.rpcinterface_factory = my.package:make_another_rpcinterface 59 | retries = 1 60 | 61 | -------------------------------------------------------------------------------- /supervisor/ui/status.html: -------------------------------------------------------------------------------- 1 | 3 | 5 | 6 | Supervisor Status 7 | 8 | 9 | 10 | 11 |
12 | 13 | 16 | 17 |
18 | 19 | 20 |
21 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 49 | 50 | 51 |
State | Description | Name | Action
nominal | Info | Name 43 | 48 |
52 |
53 | 54 |
55 | 56 |
57 |
58 |
59 | 60 | 68 | 69 | 70 | -------------------------------------------------------------------------------- /supervisor/childutils.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | 4 | from supervisor.compat import xmlrpclib 5 | from supervisor.compat import long 6 | from supervisor.compat import as_string 7 | 8 | from supervisor.xmlrpc import SupervisorTransport 9 | from supervisor.events import ProcessCommunicationEvent 10 | from supervisor.dispatchers import PEventListenerDispatcher 11 | 12 | def getRPCTransport(env): 13 | u = env.get('SUPERVISOR_USERNAME', '') 14 | p = env.get('SUPERVISOR_PASSWORD', '') 15 | return SupervisorTransport(u, p, env['SUPERVISOR_SERVER_URL']) 16 | 17 | def getRPCInterface(env): 18 | # dumbass ServerProxy won't allow us to pass in a non-HTTP url, 19 | # so we fake the url we pass into it and always use the transport's 20 | # 'serverurl' to figure out what to attach to 21 | return xmlrpclib.ServerProxy('http://127.0.0.1', getRPCTransport(env)) 22 | 23 | def get_headers(line): 24 | return dict([ x.split(':') for x in line.split() ]) 25 | 26 | def eventdata(payload): 27 | headerinfo, data = payload.split('\n', 1) 28 | headers = get_headers(headerinfo) 29 | return headers, data 30 | 31 | def get_asctime(now=None): 32 | if now is None: # for testing 33 | now = time.time() # pragma: no cover 34 | msecs = (now - long(now)) * 1000 35 | part1 = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(now)) 36 | asctime = '%s,%03d' % (part1, msecs) 37 | return asctime 38 | 39 | class ProcessCommunicationsProtocol: 40 | def send(self, msg, fp=sys.stdout): 41 | fp.write(ProcessCommunicationEvent.BEGIN_TOKEN) 42 | fp.write(msg) 43 | fp.write(ProcessCommunicationEvent.END_TOKEN) 44 | fp.flush() 45 | 46 | def stdout(self, msg): 47 | return self.send(msg, sys.stdout) 48 | 49 | def stderr(self, msg): 50 | return self.send(msg, sys.stderr) 51 | 52 | pcomm = ProcessCommunicationsProtocol() 53 
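To see what ``pcomm.send`` puts on the wire, here is a self-contained sketch: the message is bracketed by begin/end tokens so supervisord can carve process-communication events out of the ordinary stdout stream. The token literals are duplicated inline so the example runs standalone (the canonical values live on ``supervisor.events.ProcessCommunicationEvent``):

```python
import io

# token literals inlined so this sketch is self-contained
BEGIN_TOKEN = '<!--XSUPERVISOR:BEGIN-->'
END_TOKEN = '<!--XSUPERVISOR:END-->'

def send(msg, fp):
    # same shape as ProcessCommunicationsProtocol.send:
    # begin token, payload, end token, then flush
    fp.write(BEGIN_TOKEN)
    fp.write(msg)
    fp.write(END_TOKEN)
    fp.flush()

buf = io.StringIO()
send('the rent is late', buf)
wire = buf.getvalue()
```

Everything between the two tokens is delivered to event listeners as the event payload instead of being logged as normal program output.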
| 54 | class EventListenerProtocol: 55 | def wait(self, stdin=sys.stdin, stdout=sys.stdout): 56 | self.ready(stdout) 57 | line = stdin.readline() 58 | headers = get_headers(line) 59 | payload = stdin.read(int(headers['len'])) 60 | return headers, payload 61 | 62 | def ready(self, stdout=sys.stdout): 63 | stdout.write(as_string(PEventListenerDispatcher.READY_FOR_EVENTS_TOKEN)) 64 | stdout.flush() 65 | 66 | def ok(self, stdout=sys.stdout): 67 | self.send('OK', stdout) 68 | 69 | def fail(self, stdout=sys.stdout): 70 | self.send('FAIL', stdout) 71 | 72 | def send(self, data, stdout=sys.stdout): 73 | resultlen = len(data) 74 | result = '%s%s\n%s' % (as_string(PEventListenerDispatcher.RESULT_TOKEN_START), 75 | str(resultlen), 76 | data) 77 | stdout.write(result) 78 | stdout.flush() 79 | 80 | listener = EventListenerProtocol() 81 | -------------------------------------------------------------------------------- /docs/upgrading.rst: -------------------------------------------------------------------------------- 1 | Upgrading Supervisor 2 to 3 2 | =========================== 3 | 4 | The following is true when upgrading an installation from Supervisor 5 | 2.X to Supervisor 3.X: 6 | 7 | #. In ``[program:x]`` sections, the keys ``logfile``, 8 | ``logfile_backups``, ``logfile_maxbytes``, ``log_stderr`` and 9 | ``log_stdout`` are no longer valid. Supervisor2 logged both 10 | stderr and stdout to a single log file. Supervisor 3 logs stderr 11 | and stdout to separate log files. You'll need to rename 12 | ``logfile`` to ``stdout_logfile``, ``logfile_backups`` to 13 | ``stdout_logfile_backups``, and ``logfile_maxbytes`` to 14 | ``stdout_logfile_maxbytes`` at the very least to preserve your 15 | configuration. If you created program sections where 16 | ``log_stderr`` was true, to preserve the behavior of sending 17 | stderr output to the stdout log, use the ``redirect_stderr`` 18 | boolean in the program section instead. 19 | 20 | #. 
The supervisor configuration file must include the following 21 | section verbatim for the XML-RPC interface (and thus the web 22 | interface and :program:`supervisorctl`) to work properly: 23 | 24 | .. code-block:: ini 25 | 26 | [rpcinterface:supervisor] 27 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 28 | 29 | #. The semantics of the ``autorestart`` parameter within 30 | ``[program:x]`` sections has changed. This parameter used to 31 | accept only ``true`` or ``false``. It now accepts an additional 32 | value, ``unexpected``, which indicates that the process should 33 | restart from the ``EXITED`` state only if its exit code does not 34 | match any of those represented by the ``exitcode`` parameter in 35 | the process' configuration (implying a process crash). In 36 | addition, the default for ``autorestart`` is now ``unexpected`` 37 | (it used to be ``true``, which meant restart unconditionally). 38 | 39 | #. We now allow :program:`supervisord` to listen on both a UNIX 40 | domain socket and an inet socket instead of making listening on 41 | one mutually exclusive with listening on the other. As a result, 42 | the options ``http_port``, ``http_username``, ``http_password``, 43 | ``sockchmod`` and ``sockchown`` are no longer part of 44 | the ``[supervisord]`` section configuration. These have been 45 | supplanted by two other sections: ``[unix_http_server]`` and 46 | ``[inet_http_server]``. You'll need to insert one or the other 47 | (depending on whether you want to listen on a UNIX domain socket 48 | or a TCP socket respectively) or both into your 49 | :file:`supervisord.conf` file. These sections have their own 50 | options (where applicable) for ``port``, ``username``, 51 | ``password``, ``chmod``, and ``chown``. 52 | 53 | #. All supervisord command-line options related to ``http_port``, 54 | ``http_username``, ``http_password``, ``sockchmod`` and 55 | ``sockchown`` have been removed (see above point for rationale). 
56 | 57 | #. The option that used to be ``sockchown`` within the 58 | ``[supervisord]`` section (and is now named ``chown`` within the 59 | ``[unix_http_server]`` section) used to accept a dot-separated 60 | (``user.group``) value. The separator now must be a 61 | colon, e.g. ``user:group``. Unices allow for dots in 62 | usernames, so this change is a bugfix. 63 | -------------------------------------------------------------------------------- /supervisor/socket_manager.py: -------------------------------------------------------------------------------- 1 | import socket 2 | 3 | class Proxy: 4 | """ Class for wrapping a shared resource object and getting 5 | notified when it's deleted 6 | """ 7 | 8 | def __init__(self, object, **kwargs): 9 | self.object = object 10 | self.on_delete = kwargs.get('on_delete', None) 11 | 12 | def __del__(self): 13 | if self.on_delete: 14 | self.on_delete() 15 | 16 | def __getattr__(self, name): 17 | return getattr(self.object, name) 18 | 19 | def _get(self): 20 | return self.object 21 | 22 | class ReferenceCounter: 23 | """ Class for tracking references to a shared resource 24 | """ 25 | 26 | def __init__(self, **kwargs): 27 | self.on_non_zero = kwargs['on_non_zero'] 28 | self.on_zero = kwargs['on_zero'] 29 | self.ref_count = 0 30 | 31 | def get_count(self): 32 | return self.ref_count 33 | 34 | def increment(self): 35 | if self.ref_count == 0: 36 | self.on_non_zero() 37 | self.ref_count += 1 38 | 39 | def decrement(self): 40 | if self.ref_count <= 0: 41 | raise Exception('Illegal operation: cannot decrement below zero') 42 | self.ref_count -= 1 43 | if self.ref_count == 0: 44 | self.on_zero() 45 | 46 | class SocketManager: 47 | """ Class for managing sockets in servers that create/bind/listen 48 | before forking multiple child processes to accept() 49 | Sockets are managed at the process group level and referenced counted 50 | at the process level b/c that's really the only place to hook in 51 | """ 52 | 53 | def __init__(self, 
socket_config, **kwargs): 54 | self.logger = kwargs.get('logger', None) 55 | self.socket = None 56 | self.prepared = False 57 | self.socket_config = socket_config 58 | self.ref_ctr = ReferenceCounter( 59 | on_zero=self._close, on_non_zero=self._prepare_socket 60 | ) 61 | 62 | def __repr__(self): 63 | return '<%s at %s for %s>' % (self.__class__, 64 | id(self), 65 | self.socket_config.url) 66 | 67 | def config(self): 68 | return self.socket_config 69 | 70 | def is_prepared(self): 71 | return self.prepared 72 | 73 | def get_socket(self): 74 | self.ref_ctr.increment() 75 | self._require_prepared() 76 | return Proxy(self.socket, on_delete=self.ref_ctr.decrement) 77 | 78 | def get_socket_ref_count(self): 79 | self._require_prepared() 80 | return self.ref_ctr.get_count() 81 | 82 | def _require_prepared(self): 83 | if not self.prepared: 84 | raise Exception('Socket has not been prepared') 85 | 86 | def _prepare_socket(self): 87 | if not self.prepared: 88 | if self.logger: 89 | self.logger.info('Creating socket %s' % self.socket_config) 90 | self.socket = self.socket_config.create_and_bind() 91 | if self.socket_config.get_backlog(): 92 | self.socket.listen(self.socket_config.get_backlog()) 93 | else: 94 | self.socket.listen(socket.SOMAXCONN) 95 | self.prepared = True 96 | 97 | def _close(self): 98 | self._require_prepared() 99 | if self.logger: 100 | self.logger.info('Closing socket %s' % self.socket_config) 101 | self.socket.close() 102 | self.prepared = False 103 | -------------------------------------------------------------------------------- /supervisor/medusa/xmlrpc_handler.py: -------------------------------------------------------------------------------- 1 | # -*- Mode: Python -*- 2 | 3 | # See http://www.xml-rpc.com/ 4 | # http://www.pythonware.com/products/xmlrpc/ 5 | 6 | # Based on "xmlrpcserver.py" by Fredrik Lundh (fredrik@pythonware.com) 7 | 8 | VERSION = "$Id: xmlrpc_handler.py,v 1.6 2004/04/21 14:09:24 akuchling Exp $" 9 | 10 | from supervisor.compat 
import as_string 11 | 12 | import supervisor.medusa.http_server as http_server 13 | try: 14 | import xmlrpclib 15 | except: 16 | import xmlrpc.client as xmlrpclib 17 | 18 | import sys 19 | 20 | class xmlrpc_handler: 21 | 22 | def match (self, request): 23 | # Note: /RPC2 is not required by the spec, so you may override this method. 24 | if request.uri[:5] == '/RPC2': 25 | return 1 26 | else: 27 | return 0 28 | 29 | def handle_request (self, request): 30 | if request.command == 'POST': 31 | request.collector = collector (self, request) 32 | else: 33 | request.error (400) 34 | 35 | def continue_request (self, data, request): 36 | params, method = xmlrpclib.loads (data) 37 | try: 38 | # generate response 39 | try: 40 | response = self.call (method, params) 41 | if type(response) != type(()): 42 | response = (response,) 43 | except: 44 | # report exception back to server 45 | response = xmlrpclib.dumps ( 46 | xmlrpclib.Fault (1, "%s:%s" % (sys.exc_info()[0], sys.exc_info()[1])) 47 | ) 48 | else: 49 | response = xmlrpclib.dumps (response, methodresponse=1) 50 | except: 51 | # internal error, report as HTTP server error 52 | request.error (500) 53 | else: 54 | # got a valid XML RPC response 55 | request['Content-Type'] = 'text/xml' 56 | request.push (response) 57 | request.done() 58 | 59 | def call (self, method, params): 60 | # override this method to implement RPC methods 61 | raise Exception("NotYetImplemented") 62 | 63 | class collector: 64 | 65 | """gathers input for POST and PUT requests""" 66 | 67 | def __init__ (self, handler, request): 68 | 69 | self.handler = handler 70 | self.request = request 71 | self.data = [] 72 | 73 | # make sure there's a content-length header 74 | cl = request.get_header ('content-length') 75 | 76 | if not cl: 77 | request.error (411) 78 | else: 79 | cl = int(cl) 80 | # using a 'numeric' terminator 81 | self.request.channel.set_terminator (cl) 82 | 83 | def collect_incoming_data (self, data): 84 | self.data.append(data) 85 | 86 | def 
found_terminator (self): 87 | # set the terminator back to the default 88 | self.request.channel.set_terminator (b'\r\n\r\n') 89 | # convert the data back to text for processing 90 | data = as_string(b''.join(self.data)) 91 | self.handler.continue_request (data, self.request) 92 | 93 | if __name__ == '__main__': 94 | 95 | class rpc_demo (xmlrpc_handler): 96 | 97 | def call (self, method, params): 98 | print('method="%s" params=%s' % (method, params)) 99 | return "Sure, that works" 100 | 101 | import supervisor.medusa.asyncore_25 as asyncore 102 | 103 | hs = http_server.http_server ('', 8000) 104 | rpc = rpc_demo() 105 | hs.install_handler (rpc) 106 | 107 | asyncore.loop() 108 | -------------------------------------------------------------------------------- /supervisor/medusa/http_date.py: -------------------------------------------------------------------------------- 1 | # -*- Mode: Python -*- 2 | 3 | import re 4 | import time 5 | 6 | def concat (*args): 7 | return ''.join (args) 8 | 9 | def join (seq, field=' '): 10 | return field.join (seq) 11 | 12 | def group (s): 13 | return '(' + s + ')' 14 | 15 | short_days = ['sun','mon','tue','wed','thu','fri','sat'] 16 | long_days = ['sunday','monday','tuesday','wednesday','thursday','friday','saturday'] 17 | 18 | short_day_reg = group (join (short_days, '|')) 19 | long_day_reg = group (join (long_days, '|')) 20 | 21 | daymap = {} 22 | for i in range(7): 23 | daymap[short_days[i]] = i 24 | daymap[long_days[i]] = i 25 | 26 | hms_reg = join (3 * [group('[0-9][0-9]')], ':') 27 | 28 | months = ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec'] 29 | 30 | monmap = {} 31 | for i in range(12): 32 | monmap[months[i]] = i+1 33 | 34 | months_reg = group (join (months, '|')) 35 | 36 | # From draft-ietf-http-v11-spec-07.txt/3.3.1 37 | # Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123 38 | # Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036 39 | # Sun Nov 6 08:49:37 1994 ; ANSI C's 
asctime() format 40 | 41 | # rfc822 format 42 | rfc822_date = join ( 43 | [concat (short_day_reg,','), # day 44 | group('[0-9][0-9]?'), # date 45 | months_reg, # month 46 | group('[0-9]+'), # year 47 | hms_reg, # hour minute second 48 | 'gmt' 49 | ], 50 | ' ' 51 | ) 52 | 53 | rfc822_reg = re.compile (rfc822_date) 54 | 55 | def unpack_rfc822(m): 56 | g = m.group 57 | i = int 58 | return ( 59 | i(g(4)), # year 60 | monmap[g(3)], # month 61 | i(g(2)), # day 62 | i(g(5)), # hour 63 | i(g(6)), # minute 64 | i(g(7)), # second 65 | 0, 66 | 0, 67 | 0 68 | ) 69 | 70 | # rfc850 format 71 | rfc850_date = join ( 72 | [concat (long_day_reg,','), 73 | join ( 74 | [group ('[0-9][0-9]?'), 75 | months_reg, 76 | group ('[0-9]+') 77 | ], 78 | '-' 79 | ), 80 | hms_reg, 81 | 'gmt' 82 | ], 83 | ' ' 84 | ) 85 | 86 | rfc850_reg = re.compile (rfc850_date) 87 | # they actually unpack the same way 88 | def unpack_rfc850(m): 89 | g = m.group 90 | i = int 91 | return ( 92 | i(g(4)), # year 93 | monmap[g(3)], # month 94 | i(g(2)), # day 95 | i(g(5)), # hour 96 | i(g(6)), # minute 97 | i(g(7)), # second 98 | 0, 99 | 0, 100 | 0 101 | ) 102 | 103 | # parsedate.parsedate - ~700/sec. 104 | # parse_http_date - ~1333/sec. 
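As a quick cross-check of the formats parsed below, the standard library's ``email.utils`` handles the RFC 822 date form directly:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# the RFC 822 example date from the comment above
when = datetime(1994, 11, 6, 8, 49, 37, tzinfo=timezone.utc)
stamp = format_datetime(when, usegmt=True)
# stamp == 'Sun, 06 Nov 1994 08:49:37 GMT'
roundtrip = parsedate_to_datetime(stamp)
```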
105 | 106 | def build_http_date (when): 107 | return time.strftime ('%a, %d %b %Y %H:%M:%S GMT', time.gmtime(when)) 108 | 109 | def parse_http_date (d): 110 | d = d.lower() 111 | tz = time.timezone 112 | m = rfc850_reg.match (d) 113 | if m and m.end() == len(d): 114 | retval = int (time.mktime (unpack_rfc850(m)) - tz) 115 | else: 116 | m = rfc822_reg.match (d) 117 | if m and m.end() == len(d): 118 | retval = int (time.mktime (unpack_rfc822(m)) - tz) 119 | else: 120 | return 0 121 | # Thanks to Craig Silverstein for pointing 122 | # out the DST discrepancy 123 | if time.daylight and time.localtime(retval)[-1] == 1: # DST correction 124 | retval += tz - time.altzone 125 | return retval 126 | -------------------------------------------------------------------------------- /docs/development.rst: -------------------------------------------------------------------------------- 1 | Resources and Development 2 | ========================= 3 | 4 | Bug Tracker 5 | ----------- 6 | 7 | Supervisor has a bugtracker where you may report any bugs or other 8 | errors you find. Please report bugs to the `GitHub issues page 9 | `_. 10 | 11 | Version Control Repository 12 | -------------------------- 13 | 14 | You can also view the `Supervisor version control repository 15 | `_. 16 | 17 | Contributing 18 | ------------ 19 | 20 | We'll review contributions from the community in 21 | `pull requests `_ 22 | on GitHub. 23 | 24 | Author Information 25 | ------------------ 26 | 27 | The following people are responsible for creating Supervisor. 28 | 29 | Original Author 30 | ~~~~~~~~~~~~~~~ 31 | 32 | - `Chris McDonough `_ is the original author of 33 | Supervisor. 34 | 35 | Contributors 36 | ~~~~~~~~~~~~ 37 | 38 | Contributors are tracked on the `GitHub contributions page 39 | `_. The two lists 40 | below are included for historical reasons. 41 | 42 | This first list recognizes significant contributions that were made 43 | before the repository moved to GitHub. 
44 | 45 | - Anders Quist: Anders contributed the patch that was the basis for 46 | Supervisor’s ability to reload parts of its configuration without 47 | restarting. 48 | 49 | - Derek DeVries: Derek did the web design of Supervisor’s internal web 50 | interface and website logos. 51 | 52 | - Guido van Rossum: Guido authored ``zdrun`` and ``zdctl``, the 53 | programs from Zope that were the original basis for Supervisor. He 54 | also created Python, the programming language that Supervisor is 55 | written in. 56 | 57 | - Jason Kirtland: Jason fixed Supervisor to run on Python 2.6 by 58 | contributing a patched version of Medusa (a Supervisor dependency) 59 | that we now bundle. 60 | 61 | - Roger Hoover: Roger added support for spawning FastCGI programs. He 62 | has also been one of the most active mailing list users, providing 63 | his testing and feedback. 64 | 65 | - Siddhant Goel: Siddhant worked on :program:`supervisorctl` as our 66 | Google Summer of Code student for 2008. He implemented the ``fg`` 67 | command and also added tab completion. 68 | 69 | This second list records contributors who signed a legal agreement. 70 | The legal agreement was 71 | `introduced `_ 72 | in January 2014 but later 73 | `withdrawn `_ 74 | in March 2014. This list is being preserved in case it is useful 75 | later (e.g. if at some point there was a desire to donate the project 76 | to a foundation that required such agreements). 
77 | 78 | - Chris McDonough, 2006-06-26 79 | 80 | - Siddhant Goel, 2008-06-15 81 | 82 | - Chris Rossi, 2010-02-02 83 | 84 | - Roger Hoover, 2010-08-17 85 | 86 | - Benoit Sigoure, 2011-06-21 87 | 88 | - John Szakmeister, 2011-09-06 89 | 90 | - Gunnlaugur Þór Briem, 2011-11-26 91 | 92 | - Jens Rantil, 2011-11-27 93 | 94 | - Michael Blume, 2012-01-09 95 | 96 | - Philip Zeyliger, 2012-02-21 97 | 98 | - Marcelo Vanzin, 2012-05-03 99 | 100 | - Martijn Pieters, 2012-06-04 101 | 102 | - Marcin Kuźmiński, 2012-06-21 103 | 104 | - Jean Jordaan, 2012-06-28 105 | 106 | - Perttu Ranta-aho, 2012-09-27 107 | 108 | - Chris Streeter, 2013-03-23 109 | 110 | - Caio Ariede, 2013-03-25 111 | 112 | - David Birdsong, 2013-04-11 113 | 114 | - Lukas Rist, 2013-04-18 115 | 116 | - Honza Pokorny, 2013-07-23 117 | 118 | - Thúlio Costa, 2013-10-31 119 | 120 | - Gary M. Josack, 2013-11-12 121 | 122 | - Márk Sági-Kazár, 2013-12-16 123 | -------------------------------------------------------------------------------- /supervisor/medusa/docs/data_flow.html: -------------------------------------------------------------------------------- 1 | 2 |

Data Flow in Medusa

3 | 4 | 5 | 6 |

Data flow, both input and output, is asynchronous. This is 7 | signified by the request and reply queues in the above 8 | diagram. This means that both requests and replies can get 'backed 9 | up', and are still handled correctly. For instance, HTTP/1.1 supports 10 | the concept of pipelined requests, where a series of requests 11 | are sent immediately to a server, and the replies are sent as they are 12 | processed. With a synchronous request, the client would have 13 | to wait for a reply to each request before sending the next.

14 | 15 |

The input data is partitioned into requests by looking for a 16 | terminator. A terminator is simply a protocol-specific 17 | delimiter - often simply CRLF (carriage-return line-feed), though it 18 | can be longer (for example, MIME multi-part boundaries can be 19 | specified as terminators). The protocol handler is notified whenever 20 | a complete request has been received.

21 | 22 |

The protocol handler then generates a reply, which is enqueued for 23 | output back to the client. Sometimes, instead of queuing the actual 24 | data, an object that will generate this data is used, called a 25 | producer.

26 | 27 | 28 | 29 |

The use of producers gives the programmer 30 | extraordinary control over how output is generated and inserted into 31 | the output queue. Though they are simple objects (requiring only a 32 | single method, more(), to be defined), they can be 33 | composed - simple producers can be wrapped around each other to 34 | create arbitrarily complex behaviors. [now would be a good time to 35 | browse through some of the producer classes in 36 | producers.py.]

37 | 38 |
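The composition idea can be sketched in a few lines (the class names here are illustrative stand-ins, not Medusa's actual producer classes from producers.py):

```python
class simple_producer:
    """Produce a fixed string in fixed-size pieces via more()."""
    def __init__(self, data, chunk=512):
        self.data = data
        self.chunk = chunk
    def more(self):
        piece, self.data = self.data[:self.chunk], self.data[self.chunk:]
        return piece  # '' signals exhaustion

class hooked_producer:
    """Wrap another producer, transforming each piece it yields."""
    def __init__(self, producer, transform):
        self.producer = producer
        self.transform = transform
    def more(self):
        piece = self.producer.more()
        return self.transform(piece) if piece else piece

# compose: wrap a transforming producer around a generating one
p = hooked_producer(simple_producer('medusa', chunk=3), str.upper)
out = []
while True:
    piece = p.more()
    if not piece:
        break
    out.append(piece)
# out == ['MED', 'USA']
```

Because each wrapper only needs its inner producer to define more(), wrappers for chunking, compression, or globbing can be stacked in any order.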

The HTTP/1.1 producers make an excellent example. HTTP allows 39 | replies to be encoded in various ways - for example a reply consisting 40 | of dynamically-generated output might use the 'chunked' transfer 41 | encoding to send data that is compressed on-the-fly.

42 | 43 | 44 | 45 |

In the diagram, green producers actually generate output, and grey 46 | ones transform it in some manner. This producer might generate output 47 | looking like this: 48 | 49 |

50 |                             HTTP/1.1 200 OK
51 |                             Content-Encoding: gzip
52 |                             Transfer-Encoding: chunked
53 |               Header ==>    Date: Mon, 04 Aug 1997 21:31:44 GMT
54 |                             Content-Type: text/html
55 |                             Server: Medusa/3.0
56 |                             
57 |              Chunking ==>   0x200
58 |             Compression ==> <512 bytes of compressed html>
59 |                             0x200
60 |                             <512 bytes of compressed html>
61 |                             ...
62 |                             0
63 |                             
64 | 
65 | 66 |

Still more can be done with this output stream: For the purpose of 67 | efficiency, it makes sense to send output in large, fixed-size chunks: 68 | This transformation can be applied by wrapping a 'globbing' producer 69 | around the whole thing.

70 | 71 |

An important feature of Medusa's producers is that they are 72 | actually rather small objects that do not expand into actual output 73 | data until the moment they are needed: The async_chat 74 | class will only call on a producer for output when the outgoing socket 75 | has indicated that it is ready for data. Thus Medusa is extremely 76 | efficient when faced with network delays, 'hiccups', and low bandwidth 77 | clients. 78 | 79 |

One final note: The mechanisms described above are completely 80 | general - although the examples given demonstrate application to the 81 | http protocol, Medusa's asynchronous core has been 82 | applied to many different protocols, including smtp, 83 | pop3, ftp, and even dns. 84 | -------------------------------------------------------------------------------- /supervisor/ui/stylesheets/supervisor.css: -------------------------------------------------------------------------------- 1 | /* =ORDER 2 | 1. display 3 | 2. float and position 4 | 3. width and height 5 | 4. Specific element properties 6 | 5. margin 7 | 6. border 8 | 7. padding 9 | 8. background 10 | 9. color 11 | 10. font related properties 12 | ----------------------------------------------- */ 13 | 14 | /* =MAIN 15 | ----------------------------------------------- */ 16 | body, td, input, select, textarea, a { 17 | font: 12px/1.5em arial, helvetica, verdana, sans-serif; 18 | color: #333; 19 | } 20 | html, body, form, fieldset, h1, h2, h3, h4, h5, h6, 21 | p, pre, blockquote, ul, ol, dl, address { 22 | margin: 0; 23 | padding: 0; 24 | } 25 | form label { 26 | cursor: pointer; 27 | } 28 | fieldset { 29 | border: none; 30 | } 31 | img, table { 32 | border-width: 0; 33 | } 34 | 35 | /* =COLORS 36 | ----------------------------------------------- */ 37 | body { 38 | background-color: #FFFFF3; 39 | color: #333; 40 | } 41 | a:link, 42 | a:visited { 43 | color: #333; 44 | } 45 | a:hover { 46 | color: #000; 47 | } 48 | 49 | /* =FLOATS 50 | ----------------------------------------------- */ 51 | .left { 52 | float: left; 53 | } 54 | .right { 55 | text-align: right; 56 | float: right; 57 | } 58 | /* clear float */ 59 | .clr:after { 60 | content: "."; 61 | display: block; 62 | height: 0; 63 | clear: both; 64 | visibility: hidden; 65 | } 66 | .clr {display: inline-block;} 67 | /* Hides from IE-mac \*/ 68 | * html .clr {height: 1%;} 69 | .clr {display: block;} 70 | /* End hide from IE-mac */ 71 | 72 | /* =LAYOUT 73 
| ----------------------------------------------- */ 74 | html, body { 75 | height: 100%; 76 | } 77 | #wrapper { 78 | min-height: 100%; 79 | height: auto !important; 80 | height: 100%; 81 | width: 850px; 82 | margin: 0 auto -31px; 83 | } 84 | #footer, 85 | .push { 86 | height: 30px; 87 | } 88 | 89 | .hidden { 90 | display: none; 91 | } 92 | 93 | /* =STATUS 94 | ----------------------------------------------- */ 95 | #header { 96 | margin-bottom: 13px; 97 | padding: 10px 0 13px 0; 98 | background: url("../images/rule.gif") left bottom repeat-x; 99 | } 100 | .status_msg { 101 | padding: 5px 10px; 102 | border: 1px solid #919191; 103 | background-color: #FBFBFB; 104 | color: #000000; 105 | } 106 | 107 | #buttons { 108 | margin: 13px 0; 109 | } 110 | #buttons li { 111 | float: left; 112 | display: block; 113 | margin: 0 7px 0 0; 114 | } 115 | #buttons a { 116 | float: left; 117 | display: block; 118 | padding: 1px 0 0 0; 119 | } 120 | #buttons a, #buttons a:link { 121 | text-decoration: none; 122 | } 123 | 124 | .action-button { 125 | border: 1px solid #919191; 126 | text-transform: uppercase; 127 | padding: 0 5px; 128 | border-radius: 4px; 129 | color: #50504d; 130 | font-size: 12px; 131 | background: #fbfbfb; 132 | font-weight: 600; 133 | } 134 | 135 | .action-button:hover { 136 | border: 1px solid #88b0f2; 137 | background: #ffffff; 138 | } 139 | 140 | table { 141 | width: 100%; 142 | border: 1px solid #919191; 143 | } 144 | th { 145 | background-color: #919191; 146 | color: #fff; 147 | text-align: left; 148 | } 149 | th.state { 150 | text-align: center; 151 | width: 44px; 152 | } 153 | th.desc { 154 | width: 200px; 155 | } 156 | th.name { 157 | width: 200px; 158 | } 159 | th.action { 160 | } 161 | td, th { 162 | padding: 4px 8px; 163 | border-bottom: 1px solid #fff; 164 | } 165 | tr td { 166 | background-color: #FBFBFB; 167 | } 168 | tr.shade td { 169 | background-color: #F0F0F0; 170 | } 171 | .action ul { 172 | list-style: none; 173 | display: inline; 174 | } 175 
| .action li { 176 | margin-right: 10px; 177 | display: inline; 178 | } 179 | 180 | /* status message */ 181 | .status span { 182 | display: block; 183 | width: 60px; 184 | height: 16px; 185 | border: 1px solid #fff; 186 | text-align: center; 187 | font-size: 95%; 188 | line-height: 1.4em; 189 | } 190 | .statusnominal { 191 | background-image: url("../images/state0.gif"); 192 | } 193 | .statusrunning { 194 | background-image: url("../images/state2.gif"); 195 | } 196 | .statuserror { 197 | background-image: url("../images/state3.gif"); 198 | } 199 | 200 | #footer { 201 | width: 760px; 202 | margin: 0 auto; 203 | padding: 0 10px; 204 | line-height: 30px; 205 | border: 1px solid #C8C8C2; 206 | border-bottom-width: 0; 207 | background-color: #FBFBFB; 208 | overflow: hidden; 209 | opacity: 0.7; 210 | color: #000; 211 | font-size: 95%; 212 | } 213 | #footer a { 214 | font-size: inherit; 215 | } 216 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | ############################################################################## 2 | # 3 | # Copyright (c) 2006-2015 Agendaless Consulting and Contributors. 4 | # All Rights Reserved. 5 | # 6 | # This software is subject to the provisions of the BSD-like license at 7 | # http://www.repoze.org/LICENSE.txt. A copy of the license should accompany 8 | # this distribution. 
THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL 9 | # EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, 10 | # THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND 11 | # FITNESS FOR A PARTICULAR PURPOSE 12 | # 13 | ############################################################################## 14 | 15 | import os 16 | import sys 17 | 18 | py_version = sys.version_info[:2] 19 | 20 | if py_version < (2, 7): 21 | raise RuntimeError('On Python 2, Supervisor requires Python 2.7 or later') 22 | elif (3, 0) < py_version < (3, 4): 23 | raise RuntimeError('On Python 3, Supervisor requires Python 3.4 or later') 24 | 25 | # setuptools is required as a runtime dependency only on Python < 3.8. 26 | # See the comments in supervisor/compat.py. An environment marker 27 | # like "setuptools; python_version < '3.8'" is not used here because 28 | # it breaks installation via "python setup.py install". See also the 29 | # discussion at: https://github.com/Supervisor/supervisor/issues/1692 30 | if py_version < (3, 8): 31 | try: 32 | import pkg_resources 33 | except ImportError: 34 | raise RuntimeError( 35 | "On Python < 3.8, Supervisor requires setuptools as a runtime" 36 | " dependency because pkg_resources is used to load plugins" 37 | ) 38 | 39 | from setuptools import setup, find_packages 40 | here = os.path.abspath(os.path.dirname(__file__)) 41 | try: 42 | with open(os.path.join(here, 'README.rst'), 'r') as f: 43 | README = f.read() 44 | with open(os.path.join(here, 'CHANGES.rst'), 'r') as f: 45 | CHANGES = f.read() 46 | except Exception: 47 | README = """\ 48 | Supervisor is a client/server system that allows its users to 49 | control a number of processes on UNIX-like operating systems. 
""" 50 | CHANGES = '' 51 | 52 | CLASSIFIERS = [ 53 | 'Development Status :: 5 - Production/Stable', 54 | 'Environment :: No Input/Output (Daemon)', 55 | 'Intended Audience :: System Administrators', 56 | 'Natural Language :: English', 57 | 'Operating System :: POSIX', 58 | 'Topic :: System :: Boot', 59 | 'Topic :: System :: Monitoring', 60 | 'Topic :: System :: Systems Administration', 61 | "Programming Language :: Python", 62 | "Programming Language :: Python :: 2", 63 | "Programming Language :: Python :: 2.7", 64 | "Programming Language :: Python :: 3", 65 | "Programming Language :: Python :: 3.4", 66 | "Programming Language :: Python :: 3.5", 67 | "Programming Language :: Python :: 3.6", 68 | "Programming Language :: Python :: 3.7", 69 | "Programming Language :: Python :: 3.8", 70 | "Programming Language :: Python :: 3.9", 71 | "Programming Language :: Python :: 3.10", 72 | "Programming Language :: Python :: 3.11", 73 | "Programming Language :: Python :: 3.12", 74 | "Programming Language :: Python :: 3.13", 75 | "Programming Language :: Python :: 3.14", 76 | ] 77 | 78 | version_txt = os.path.join(here, 'supervisor/version.txt') 79 | with open(version_txt, 'r') as f: 80 | supervisor_version = f.read().strip() 81 | 82 | dist = setup( 83 | name='supervisor', 84 | version=supervisor_version, 85 | license='BSD-derived (http://www.repoze.org/LICENSE.txt)', 86 | url='http://supervisord.org/', 87 | project_urls={ 88 | 'Changelog': 'http://supervisord.org/changelog', 89 | 'Documentation': 'http://supervisord.org', 90 | 'Issue Tracker': 'https://github.com/Supervisor/supervisor', 91 | }, 92 | description="A system for controlling process state under UNIX", 93 | long_description=README + '\n\n' + CHANGES, 94 | classifiers=CLASSIFIERS, 95 | author="Chris McDonough", 96 | author_email="chrism@plope.com", 97 | packages=find_packages(), 98 | install_requires=[], 99 | extras_require={ 100 | 'test': ['pytest', 'pytest-cov'] 101 | }, 102 | include_package_data=True, 103 | 
zip_safe=False, 104 | entry_points={ 105 | 'console_scripts': [ 106 | 'supervisord = supervisor.supervisord:main', 107 | 'supervisorctl = supervisor.supervisorctl:main', 108 | 'echo_supervisord_conf = supervisor.confecho:main', 109 | 'pidproxy = supervisor.pidproxy:main', 110 | ], 111 | }, 112 | ) 113 | -------------------------------------------------------------------------------- /LICENSES.txt: -------------------------------------------------------------------------------- 1 | Supervisor is licensed under the following license: 2 | 3 | A copyright notice accompanies this license document that identifies 4 | the copyright holders. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are 8 | met: 9 | 10 | 1. Redistributions in source code must retain the accompanying 11 | copyright notice, this list of conditions, and the following 12 | disclaimer. 13 | 14 | 2. Redistributions in binary form must reproduce the accompanying 15 | copyright notice, this list of conditions, and the following 16 | disclaimer in the documentation and/or other materials provided 17 | with the distribution. 18 | 19 | 3. Names of the copyright holders must not be used to endorse or 20 | promote products derived from this software without prior 21 | written permission from the copyright holders. 22 | 23 | 4. If any files are modified, you must cause the modified files to 24 | carry prominent notices stating that you changed the files and 25 | the date of any change. 26 | 27 | Disclaimer 28 | 29 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND 30 | ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED 31 | TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A 32 | PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 33 | HOLDERS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 34 | EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED 35 | TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 36 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 37 | ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 38 | TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF 39 | THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 40 | SUCH DAMAGE. 41 | 42 | http_client.py code is based on code by Daniel Krech, which was 43 | released under this license: 44 | 45 | LICENSE AGREEMENT FOR RDFLIB 0.9.0 THROUGH 2.3.1 46 | ------------------------------------------------ 47 | Copyright (c) 2002-2005, Daniel Krech, http://eikeon.com/ 48 | All rights reserved. 49 | 50 | Redistribution and use in source and binary forms, with or without 51 | modification, are permitted provided that the following conditions are 52 | met: 53 | 54 | * Redistributions of source code must retain the above copyright 55 | notice, this list of conditions and the following disclaimer. 56 | 57 | * Redistributions in binary form must reproduce the above 58 | copyright notice, this list of conditions and the following 59 | disclaimer in the documentation and/or other materials provided 60 | with the distribution. 61 | 62 | * Neither the name of Daniel Krech nor the names of its 63 | contributors may be used to endorse or promote products derived 64 | from this software without specific prior written permission. 65 | 66 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 67 | "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 68 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 69 | A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT 70 | OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 71 | SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 72 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 73 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 74 | THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 75 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 76 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 77 | 78 | Medusa, the asynchronous communications framework upon which 79 | supervisor's server and client code is based, was created by Sam 80 | Rushing: 81 | 82 | Medusa was once distributed under a 'free for non-commercial use' 83 | license, but in May of 2000 Sam Rushing changed the license to be 84 | identical to the standard Python license at the time. The standard 85 | Python license has always applied to the core components of Medusa, 86 | this change just frees up the rest of the system, including the http 87 | server, ftp server, utilities, etc. Medusa is therefore under the 88 | following license: 89 | 90 | ============================== 91 | Permission to use, copy, modify, and distribute this software and 92 | its documentation for any purpose and without fee is hereby granted, 93 | provided that the above copyright notice appear in all copies and 94 | that both that copyright notice and this permission notice appear in 95 | supporting documentation, and that the name of Sam Rushing not be 96 | used in advertising or publicity pertaining to distribution of the 97 | software without specific, written prior permission. 
98 | 99 | SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, 100 | INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN 101 | NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR 102 | CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS 103 | OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, 104 | NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION 105 | WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 106 | ============================== 107 | -------------------------------------------------------------------------------- /supervisor/medusa/auth_handler.py: -------------------------------------------------------------------------------- 1 | # -*- Mode: Python -*- 2 | # 3 | # Author: Sam Rushing 4 | # Copyright 1996-2000 by Sam Rushing 5 | # All Rights Reserved. 6 | # 7 | 8 | RCS_ID = '$Id: auth_handler.py,v 1.6 2002/11/25 19:40:23 akuchling Exp $' 9 | 10 | # support for 'basic' authentication. 11 | 12 | import re 13 | import sys 14 | import time 15 | 16 | from supervisor.compat import as_string, as_bytes 17 | from supervisor.compat import encodestring, decodestring 18 | from supervisor.compat import long 19 | from supervisor.compat import md5 20 | 21 | import supervisor.medusa.counter as counter 22 | import supervisor.medusa.default_handler as default_handler 23 | 24 | get_header = default_handler.get_header 25 | 26 | import supervisor.medusa.producers as producers 27 | 28 | # This is a 'handler' that wraps an authorization method 29 | # around access to the resources normally served up by 30 | # another handler. 31 | 32 | # does anyone support digest authentication? 
(rfc2069) 33 | 34 | class auth_handler: 35 | def __init__ (self, dict, handler, realm='default'): 36 | self.authorizer = dictionary_authorizer (dict) 37 | self.handler = handler 38 | self.realm = realm 39 | self.pass_count = counter.counter() 40 | self.fail_count = counter.counter() 41 | 42 | def match (self, request): 43 | # by default, use the given handler's matcher 44 | return self.handler.match (request) 45 | 46 | def handle_request (self, request): 47 | # authorize a request before handling it... 48 | scheme = get_header (AUTHORIZATION, request.header) 49 | 50 | if scheme: 51 | scheme = scheme.lower() 52 | if scheme == 'basic': 53 | cookie = get_header (AUTHORIZATION, request.header, 2) 54 | try: 55 | decoded = as_string(decodestring(as_bytes(cookie))) 56 | except: 57 | sys.stderr.write('malformed authorization info <%s>\n' % cookie) 58 | request.error (400) 59 | return 60 | auth_info = decoded.split(':', 1) 61 | if self.authorizer.authorize (auth_info): 62 | self.pass_count.increment() 63 | request.auth_info = auth_info 64 | self.handler.handle_request (request) 65 | else: 66 | self.handle_unauthorized (request) 67 | #elif scheme == 'digest': 68 | # print 'digest: ',AUTHORIZATION.group(2) 69 | else: 70 | sys.stderr.write('unknown/unsupported auth method: %s\n' % scheme) 71 | self.handle_unauthorized(request) 72 | else: 73 | # list both? prefer one or the other? 74 | # you could also use a 'nonce' here. [see below] 75 | #auth = 'Basic realm="%s" Digest realm="%s"' % (self.realm, self.realm) 76 | #nonce = self.make_nonce (request) 77 | #auth = 'Digest realm="%s" nonce="%s"' % (self.realm, nonce) 78 | #request['WWW-Authenticate'] = auth 79 | #print 'sending header: %s' % request['WWW-Authenticate'] 80 | self.handle_unauthorized (request) 81 | 82 | def handle_unauthorized (self, request): 83 | # We are now going to receive data that we want to ignore. 84 | # to ignore the file data we're not interested in. 
85 | self.fail_count.increment() 86 | request.channel.set_terminator (None) 87 | request['Connection'] = 'close' 88 | request['WWW-Authenticate'] = 'Basic realm="%s"' % self.realm 89 | request.error (401) 90 | 91 | def make_nonce (self, request): 92 | """A digest-authentication <nonce>, constructed as suggested in RFC 2069""" 93 | ip = request.channel.server.ip 94 | now = str(long(time.time())) 95 | if now[-1:] == 'L': 96 | now = now[:-1] 97 | private_key = str (id (self)) 98 | nonce = ':'.join([ip, now, private_key]) 99 | return self.apply_hash (nonce) 100 | 101 | def apply_hash (self, s): 102 | """Apply MD5 to a string <s>, then wrap it in base64 encoding.""" 103 | m = md5() 104 | m.update (s) 105 | d = m.digest() 106 | # base64.encodestring tacks on an extra linefeed. 107 | return encodestring (d)[:-1] 108 | 109 | def status (self): 110 | # Thanks to mwm@contessa.phone.net (Mike Meyer) 111 | r = [ 112 | producers.simple_producer ( 113 | '<li>Authorization Extension : ' 114 | '<b>Unauthorized requests:</b> %s<ul>' % self.fail_count 115 | ) 116 | ] 117 | if hasattr (self.handler, 'status'): 118 | r.append (self.handler.status()) 119 | r.append ( 120 | producers.simple_producer ('</ul>
    ') 121 | ) 122 | return producers.composite_producer(r) 123 | 124 | class dictionary_authorizer: 125 | def __init__ (self, dict): 126 | self.dict = dict 127 | 128 | def authorize (self, auth_info): 129 | [username, password] = auth_info 130 | if username in self.dict and self.dict[username] == password: 131 | return 1 132 | else: 133 | return 0 134 | 135 | AUTHORIZATION = re.compile ( 136 | # scheme challenge 137 | 'Authorization: ([^ ]+) (.*)', 138 | re.IGNORECASE 139 | ) 140 | -------------------------------------------------------------------------------- /supervisor/tests/test_childutils.py: -------------------------------------------------------------------------------- 1 | from io import BytesIO 2 | import sys 3 | import time 4 | import unittest 5 | from supervisor.compat import StringIO 6 | from supervisor.compat import as_string 7 | 8 | class ChildUtilsTests(unittest.TestCase): 9 | def test_getRPCInterface(self): 10 | from supervisor.childutils import getRPCInterface 11 | rpc = getRPCInterface({'SUPERVISOR_SERVER_URL':'http://localhost:9001'}) 12 | # we can't really test this thing; its a magic object 13 | self.assertTrue(rpc is not None) 14 | 15 | def test_getRPCTransport_no_uname_pass(self): 16 | from supervisor.childutils import getRPCTransport 17 | t = getRPCTransport({'SUPERVISOR_SERVER_URL':'http://localhost:9001'}) 18 | self.assertEqual(t.username, '') 19 | self.assertEqual(t.password, '') 20 | self.assertEqual(t.serverurl, 'http://localhost:9001') 21 | 22 | def test_getRPCTransport_with_uname_pass(self): 23 | from supervisor.childutils import getRPCTransport 24 | env = {'SUPERVISOR_SERVER_URL':'http://localhost:9001', 25 | 'SUPERVISOR_USERNAME':'chrism', 26 | 'SUPERVISOR_PASSWORD':'abc123'} 27 | t = getRPCTransport(env) 28 | self.assertEqual(t.username, 'chrism') 29 | self.assertEqual(t.password, 'abc123') 30 | self.assertEqual(t.serverurl, 'http://localhost:9001') 31 | 32 | def test_get_headers(self): 33 | from supervisor.childutils import 
get_headers 34 | line = 'a:1 b:2' 35 | result = get_headers(line) 36 | self.assertEqual(result, {'a':'1', 'b':'2'}) 37 | 38 | def test_eventdata(self): 39 | from supervisor.childutils import eventdata 40 | payload = 'a:1 b:2\nthedata\n' 41 | headers, data = eventdata(payload) 42 | self.assertEqual(headers, {'a':'1', 'b':'2'}) 43 | self.assertEqual(data, 'thedata\n') 44 | 45 | def test_get_asctime(self): 46 | from supervisor.childutils import get_asctime 47 | timestamp = time.mktime((2009, 1, 18, 22, 14, 7, 0, 0, -1)) 48 | result = get_asctime(timestamp) 49 | self.assertEqual(result, '2009-01-18 22:14:07,000') 50 | 51 | class TestProcessCommunicationsProtocol(unittest.TestCase): 52 | def test_send(self): 53 | from supervisor.childutils import pcomm 54 | stdout = BytesIO() 55 | pcomm.send(b'hello', stdout) 56 | from supervisor.events import ProcessCommunicationEvent 57 | begin = ProcessCommunicationEvent.BEGIN_TOKEN 58 | end = ProcessCommunicationEvent.END_TOKEN 59 | self.assertEqual(stdout.getvalue(), begin + b'hello' + end) 60 | 61 | def test_stdout(self): 62 | from supervisor.childutils import pcomm 63 | old = sys.stdout 64 | try: 65 | io = sys.stdout = BytesIO() 66 | pcomm.stdout(b'hello') 67 | from supervisor.events import ProcessCommunicationEvent 68 | begin = ProcessCommunicationEvent.BEGIN_TOKEN 69 | end = ProcessCommunicationEvent.END_TOKEN 70 | self.assertEqual(io.getvalue(), begin + b'hello' + end) 71 | finally: 72 | sys.stdout = old 73 | 74 | def test_stderr(self): 75 | from supervisor.childutils import pcomm 76 | old = sys.stderr 77 | try: 78 | io = sys.stderr = BytesIO() 79 | pcomm.stderr(b'hello') 80 | from supervisor.events import ProcessCommunicationEvent 81 | begin = ProcessCommunicationEvent.BEGIN_TOKEN 82 | end = ProcessCommunicationEvent.END_TOKEN 83 | self.assertEqual(io.getvalue(), begin + b'hello' + end) 84 | finally: 85 | sys.stderr = old 86 | 87 | class TestEventListenerProtocol(unittest.TestCase): 88 | def test_wait(self): 89 | from 
supervisor.childutils import listener 90 | class Dummy: 91 | def readline(self): 92 | return 'len:5' 93 | def read(self, *ignored): 94 | return 'hello' 95 | stdin = Dummy() 96 | stdout = StringIO() 97 | headers, payload = listener.wait(stdin, stdout) 98 | self.assertEqual(headers, {'len':'5'}) 99 | self.assertEqual(payload, 'hello') 100 | self.assertEqual(stdout.getvalue(), 'READY\n') 101 | 102 | def test_token(self): 103 | from supervisor.childutils import listener 104 | from supervisor.dispatchers import PEventListenerDispatcher 105 | token = as_string(PEventListenerDispatcher.READY_FOR_EVENTS_TOKEN) 106 | stdout = StringIO() 107 | listener.ready(stdout) 108 | self.assertEqual(stdout.getvalue(), token) 109 | 110 | def test_ok(self): 111 | from supervisor.childutils import listener 112 | from supervisor.dispatchers import PEventListenerDispatcher 113 | begin = as_string(PEventListenerDispatcher.RESULT_TOKEN_START) 114 | stdout = StringIO() 115 | listener.ok(stdout) 116 | self.assertEqual(stdout.getvalue(), begin + '2\nOK') 117 | 118 | def test_fail(self): 119 | from supervisor.childutils import listener 120 | from supervisor.dispatchers import PEventListenerDispatcher 121 | begin = as_string(PEventListenerDispatcher.RESULT_TOKEN_START) 122 | stdout = StringIO() 123 | listener.fail(stdout) 124 | self.assertEqual(stdout.getvalue(), begin + '4\nFAIL') 125 | 126 | def test_send(self): 127 | from supervisor.childutils import listener 128 | from supervisor.dispatchers import PEventListenerDispatcher 129 | begin = as_string(PEventListenerDispatcher.RESULT_TOKEN_START) 130 | stdout = StringIO() 131 | msg = 'the body data ya fool\n' 132 | listener.send(msg, stdout) 133 | expected = '%s%s\n%s' % (begin, len(msg), msg) 134 | self.assertEqual(stdout.getvalue(), expected) 135 | -------------------------------------------------------------------------------- /.github/workflows/main.yml: -------------------------------------------------------------------------------- 1 | name: 
Run all tests 2 | 3 | on: [push, pull_request] 4 | 5 | env: 6 | PIP: "env PIP_DISABLE_PIP_VERSION_CHECK=1 7 | PYTHONWARNINGS=ignore:DEPRECATION 8 | pip --no-cache-dir" 9 | 10 | jobs: 11 | tests_py2x: 12 | runs-on: ubuntu-22.04 13 | container: 14 | image: python:2.7 15 | strategy: 16 | fail-fast: false 17 | matrix: 18 | toxenv: [py27, py27-configparser] 19 | 20 | steps: 21 | - uses: actions/checkout@v4 22 | 23 | - name: Install dependencies 24 | run: $PIP install virtualenv tox 25 | 26 | - name: Run the unit tests 27 | run: TOXENV=${{ matrix.toxenv }} tox 28 | 29 | - name: Run the end-to-end tests 30 | run: TOXENV=${{ matrix.toxenv }} END_TO_END=1 tox 31 | 32 | tests_py34: 33 | runs-on: ubuntu-22.04 34 | container: 35 | image: ubuntu:20.04 36 | env: 37 | LANG: C.UTF-8 38 | 39 | steps: 40 | - uses: actions/checkout@v4 41 | 42 | - name: Install build dependencies 43 | run: | 44 | apt-get update 45 | apt-get install -y build-essential unzip wget \ 46 | libncurses5-dev libgdbm-dev libnss3-dev \ 47 | libreadline-dev zlib1g-dev 48 | 49 | - name: Build OpenSSL 1.0.2 (required by Python 3.4) 50 | run: | 51 | cd $RUNNER_TEMP 52 | wget https://github.com/openssl/openssl/releases/download/OpenSSL_1_0_2u/openssl-1.0.2u.tar.gz 53 | tar -xf openssl-1.0.2u.tar.gz 54 | cd openssl-1.0.2u 55 | ./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl shared zlib-dynamic 56 | make 57 | make install 58 | 59 | echo CFLAGS="-I/usr/local/ssl/include $CFLAGS" >> $GITHUB_ENV 60 | echo LDFLAGS="-L/usr/local/ssl/lib $LDFLAGS" >> $GITHUB_ENV 61 | echo LD_LIBRARY_PATH="/usr/local/ssl/lib:$LD_LIBRARY_PATH" >> $GITHUB_ENV 62 | 63 | ln -s /usr/local/ssl/lib/libssl.so.1.0.0 /usr/lib/libssl.so.1.0.0 64 | ln -s /usr/local/ssl/lib/libcrypto.so.1.0.0 /usr/lib/libcrypto.so.1.0.0 65 | ldconfig 66 | 67 | - name: Build Python 3.4 68 | run: | 69 | cd $RUNNER_TEMP 70 | wget -O cpython-3.4.10.zip https://github.com/python/cpython/archive/refs/tags/v3.4.10.zip 71 | unzip cpython-3.4.10.zip 72 | cd 
cpython-3.4.10 73 | ./configure --with-ensurepip=install 74 | make 75 | make install 76 | 77 | python3.4 --version 78 | python3.4 -c 'import ssl' 79 | pip3.4 --version 80 | 81 | ln -s /usr/local/bin/python3.4 /usr/local/bin/python 82 | ln -s /usr/local/bin/pip3.4 /usr/local/bin/pip 83 | 84 | - name: Install Python dependencies 85 | run: | 86 | $PIP install virtualenv==20.4.7 tox==3.14.0 87 | 88 | - name: Run the unit tests 89 | run: TOXENV=py34 tox 90 | 91 | - name: Run the end-to-end tests 92 | run: TOXENV=py34 END_TO_END=1 tox 93 | 94 | tests_py35: 95 | runs-on: ubuntu-22.04 96 | container: 97 | image: python:3.5 98 | strategy: 99 | fail-fast: false 100 | 101 | steps: 102 | - uses: actions/checkout@v4 103 | 104 | - name: Install dependencies 105 | run: $PIP install virtualenv tox 106 | 107 | - name: Run the unit tests 108 | run: TOXENV=py35 tox 109 | 110 | - name: Run the end-to-end tests 111 | run: TOXENV=py35 END_TO_END=1 tox 112 | 113 | tests_py36: 114 | runs-on: ubuntu-22.04 115 | container: 116 | image: python:3.6 117 | strategy: 118 | fail-fast: false 119 | 120 | steps: 121 | - uses: actions/checkout@v4 122 | 123 | - name: Install dependencies 124 | run: $PIP install virtualenv tox 125 | 126 | - name: Run the unit tests 127 | run: TOXENV=py36 tox 128 | 129 | - name: Run the end-to-end tests 130 | run: TOXENV=py36 END_TO_END=1 tox 131 | 132 | tests_py3x: 133 | runs-on: ubuntu-22.04 134 | strategy: 135 | fail-fast: false 136 | matrix: 137 | python-version: [3.7, 3.8, 3.9, "3.10", 3.11, 3.12, 3.13, 3.14] 138 | 139 | steps: 140 | - uses: actions/checkout@v4 141 | 142 | - name: Set up Python ${{ matrix.python-version }} 143 | uses: actions/setup-python@v5 144 | with: 145 | python-version: ${{ matrix.python-version }} 146 | 147 | - name: Install dependencies 148 | run: $PIP install virtualenv tox 149 | 150 | - name: Set variable for TOXENV based on Python version 151 | id: toxenv 152 | run: python -c 'import sys; print("TOXENV=py%d%d" % (sys.version_info.major, 
sys.version_info.minor))' | tee -a $GITHUB_OUTPUT 153 | 154 | - name: Run the unit tests 155 | run: TOXENV=${{steps.toxenv.outputs.TOXENV}} tox 156 | 157 | - name: Run the end-to-end tests 158 | run: TOXENV=${{steps.toxenv.outputs.TOXENV}} END_TO_END=1 tox 159 | 160 | coverage_py27: 161 | runs-on: ubuntu-22.04 162 | container: 163 | image: python:2.7 164 | strategy: 165 | fail-fast: false 166 | 167 | steps: 168 | - uses: actions/checkout@v4 169 | 170 | - name: Install dependencies 171 | run: $PIP install virtualenv tox 172 | 173 | - name: Run unit test coverage 174 | run: TOXENV=cover tox 175 | 176 | coverage_py3x: 177 | runs-on: ubuntu-22.04 178 | strategy: 179 | fail-fast: false 180 | matrix: 181 | python-version: [3.8] 182 | 183 | steps: 184 | - uses: actions/checkout@v4 185 | 186 | - name: Set up Python ${{ matrix.python-version }} 187 | uses: actions/setup-python@v5 188 | with: 189 | python-version: ${{ matrix.python-version }} 190 | 191 | - name: Install dependencies 192 | run: $PIP install virtualenv tox 193 | 194 | - name: Run unit test coverage 195 | run: TOXENV=cover3 tox 196 | 197 | docs: 198 | runs-on: ubuntu-22.04 199 | 200 | steps: 201 | - uses: actions/checkout@v4 202 | 203 | - name: Set up Python 3.8 204 | uses: actions/setup-python@v5 205 | with: 206 | python-version: "3.8" 207 | 208 | - name: Install dependencies 209 | run: $PIP install virtualenv "tox>=4.0.0" 210 | 211 | - name: Build the docs 212 | run: TOXENV=docs tox 213 | -------------------------------------------------------------------------------- /docs/installing.rst: -------------------------------------------------------------------------------- 1 | Installing 2 | ========== 3 | 4 | Installation instructions depend on whether the system on which 5 | you're attempting to install Supervisor has internet access.
6 | 7 | Installing to A System With Internet Access 8 | ------------------------------------------- 9 | 10 | Internet-Installing With Pip 11 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 12 | 13 | Supervisor can be installed with ``pip install``: 14 | 15 | .. code-block:: bash 16 | 17 | pip install supervisor 18 | 19 | Depending on the permissions of your system's Python, you might need 20 | to be the root user to install Supervisor successfully using 21 | ``pip``. 22 | 23 | You can also install supervisor in a virtualenv via ``pip``. 24 | 25 | .. note:: 26 | 27 | If installing on a Python version before 3.8, first ensure that the 28 | ``setuptools`` package is installed because it is a runtime 29 | dependency of Supervisor. 30 | 31 | Internet-Installing Without Pip 32 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 33 | 34 | If your system does not have ``pip`` installed, you will need to download 35 | the Supervisor distribution and install it by hand. Current and previous 36 | Supervisor releases may be downloaded from `PyPI 37 | <https://pypi.org/project/supervisor/>`_. After unpacking the software 38 | archive, run ``python setup.py install``. This requires internet access. It 39 | will download and install all distributions depended upon by Supervisor and 40 | finally install Supervisor itself. 41 | 42 | .. note:: 43 | 44 | Depending on the permissions of your system's Python, you might 45 | need to be the root user to successfully invoke ``python 46 | setup.py install``. 47 | 48 | .. note:: 49 | 50 | The ``setuptools`` package is required to run ``python setup.py install``. 51 | On Python versions before 3.8, ``setuptools`` is also a runtime 52 | dependency of Supervisor. 53 | 54 | Installing To A System Without Internet Access 55 | ---------------------------------------------- 56 | 57 | If the system that you want to install Supervisor to does not have 58 | Internet access, you'll need to perform installation slightly 59 | differently.
Since both ``pip`` and ``python setup.py 60 | install`` depend on internet access to perform downloads of dependent 61 | software, neither will work on machines without internet access until 62 | dependencies are installed. To install to a machine which is not 63 | internet-connected, obtain the dependencies listed in ``setup.py`` 64 | using a machine which is internet-connected. 65 | 66 | Copy these files to removable media and put them on the target 67 | machine. Install each onto the target machine as per its 68 | instructions. This typically just means unpacking each file and 69 | invoking ``python setup.py install`` in the unpacked directory. 70 | Finally, run supervisor's ``python setup.py install``. 71 | 72 | .. note:: 73 | 74 | Depending on the permissions of your system's Python, you might 75 | need to be the root user to invoke ``python setup.py install`` 76 | successfully for each package. 77 | 78 | .. note:: 79 | 80 | The ``setuptools`` package is required to run ``python setup.py install``. 81 | On Python versions before 3.8, ``setuptools`` is also a runtime 82 | dependency of Supervisor. 83 | 84 | Installing a Distribution Package 85 | --------------------------------- 86 | 87 | Some Linux distributions offer a version of Supervisor that is installable 88 | through the system package manager. These packages are made by third parties, 89 | not the Supervisor developers, and often include distribution-specific changes 90 | to Supervisor. 91 | 92 | Use the package management tools of your distribution to check availability; 93 | e.g. on Ubuntu you can run ``apt-cache show supervisor``, and on CentOS 94 | you can run ``yum info supervisor``. 95 | 96 | A feature of distribution packages of Supervisor is that they will usually 97 | include integration into the service management infrastructure of the 98 | distribution, e.g. allowing ``supervisord`` to automatically start when 99 | the system boots. 100 | 101 | .. 
note:: 102 | 103 | Distribution packages of Supervisor can lag considerably behind the 104 | official Supervisor packages released to PyPI. For example, Ubuntu 105 | 12.04 (released April 2012) offered a package based on Supervisor 3.0a8 106 | (released January 2010). Lag is often caused by the software release 107 | policy set by a given distribution. 108 | 109 | .. note:: 110 | 111 | Users reported that the distribution package of Supervisor for Ubuntu 16.04 112 | had different behavior than previous versions. On Ubuntu 10.04, 12.04, and 113 | 14.04, installing the package will configure the system to start 114 | ``supervisord`` when the system boots. On Ubuntu 16.04, this was not done 115 | by the initial release of the package. The package was fixed later. See 116 | `Ubuntu Bug #1594740 <https://bugs.launchpad.net/ubuntu/+source/supervisor/+bug/1594740>`_ 117 | for more information. 118 | 119 | .. _create_config: 120 | 121 | Creating a Configuration File 122 | ----------------------------- 123 | 124 | Once the Supervisor installation has completed, run 125 | ``echo_supervisord_conf``. This will print a "sample" Supervisor 126 | configuration file to your terminal's stdout. 127 | 128 | Once you see the file echoed to your terminal, reinvoke the command as 129 | ``echo_supervisord_conf > /etc/supervisord.conf``. This won't work if 130 | you do not have root access. 131 | 132 | If you don't have root access, or you'd rather not put the 133 | :file:`supervisord.conf` file in :file:`/etc/supervisord.conf`, you 134 | can place it in the current directory (``echo_supervisord_conf > 135 | supervisord.conf``) and start :program:`supervisord` with the 136 | ``-c`` flag in order to specify the configuration file 137 | location. 138 | 139 | For example, ``supervisord -c supervisord.conf``. Using the ``-c`` 140 | flag is actually redundant in this case, because 141 | :program:`supervisord` searches the current directory for a 142 | :file:`supervisord.conf` before it searches any other locations for 143 | the file, but it will work.
See :ref:`running` for more information 144 | about the ``-c`` flag. 145 | 146 | Once you have a configuration file on your filesystem, you can 147 | begin modifying it to your liking. 148 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # Supervisor documentation build configuration file 4 | # 5 | # This file is execfile()d with the current directory set to its containing 6 | # dir. 7 | # 8 | # The contents of this file are pickled, so don't put values in the 9 | # namespace that aren't pickleable (module imports are okay, they're 10 | # removed automatically). 11 | # 12 | # All configuration values have a default value; values that are commented 13 | # out serve to show the default value. 14 | 15 | import sys, os 16 | from datetime import date 17 | 18 | # If your extensions are in another directory, add it here. If the 19 | # directory is relative to the documentation root, use os.path.abspath to 20 | # make it absolute, like shown here. 21 | #sys.path.append(os.path.abspath('some/directory')) 22 | 23 | parent = os.path.dirname(os.path.dirname(__file__)) 24 | sys.path.append(os.path.abspath(parent)) 25 | 26 | version_txt = os.path.join(parent, 'supervisor/version.txt') 27 | supervisor_version = open(version_txt).read().strip() 28 | 29 | # General configuration 30 | # --------------------- 31 | 32 | # Add any Sphinx extension module names here, as strings. They can be 33 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 34 | extensions = ['sphinx.ext.autodoc'] 35 | 36 | # Add any paths that contain templates here, relative to this directory. 37 | templates_path = ['.templates'] 38 | 39 | # The suffix of source filenames. 40 | source_suffix = '.rst' 41 | 42 | # The master toctree document. 43 | master_doc = 'index' 44 | 45 | # General substitutions. 
46 | project = 'Supervisor' 47 | year = date.today().year 48 | copyright = '2004-%d, Agendaless Consulting and Contributors' % year 49 | 50 | # The default replacements for |version| and |release|, also used in various 51 | # other places throughout the built documents. 52 | # 53 | # The short X.Y version. 54 | version = supervisor_version 55 | # The full version, including alpha/beta/rc tags. 56 | release = version 57 | 58 | # There are two options for replacing |today|: either, you set today to 59 | # some non-false value, then it is used: 60 | #today = '' 61 | # Else, today_fmt is used as the format for a strftime call. 62 | today_fmt = '%B %d, %Y' 63 | 64 | # List of documents that shouldn't be included in the build. 65 | #unused_docs = [] 66 | 67 | # List of directories, relative to source directories, that shouldn't be 68 | # searched for source files. 69 | #exclude_dirs = [] 70 | 71 | # The reST default role (used for this markup: `text`) to use for all 72 | # documents. 73 | #default_role = None 74 | 75 | # If true, '()' will be appended to :func: etc. cross-reference text. 76 | #add_function_parentheses = True 77 | 78 | # If true, the current module name will be prepended to all description 79 | # unit titles (such as .. function::). 80 | #add_module_names = True 81 | 82 | # If true, sectionauthor and moduleauthor directives will be shown in the 83 | # output. They are ignored by default. 84 | #show_authors = False 85 | 86 | # The name of the Pygments (syntax highlighting) style to use. 87 | pygments_style = 'sphinx' 88 | 89 | 90 | # Options for HTML output 91 | # ----------------------- 92 | 93 | # The style sheet to use for HTML and HTML Help pages. A file of that name 94 | # must exist either in Sphinx' static/ path, or in one of the custom paths 95 | # given in html_static_path. 96 | html_style = 'repoze.css' 97 | 98 | # The name for this set of Sphinx documents. If None, it defaults to 99 | # " v documentation". 
100 | #html_title = None 101 | 102 | # A shorter title for the navigation bar. Default is the same as 103 | # html_title. 104 | #html_short_title = None 105 | 106 | # The name of an image file (within the static path) to place at the top of 107 | # the sidebar. 108 | html_logo = '.static/logo_hi.gif' 109 | 110 | # The name of an image file (within the static path) to use as favicon of 111 | # the docs. This file should be a Windows icon file (.ico) being 16x16 or 112 | # 32x32 pixels large. 113 | #html_favicon = None 114 | 115 | # Add any paths that contain custom static files (such as style sheets) 116 | # here, relative to this directory. They are copied after the builtin 117 | # static files, so a file named "default.css" will overwrite the builtin 118 | # "default.css". 119 | html_static_path = ['.static'] 120 | 121 | # If not '', a 'Last updated on:' timestamp is inserted at every page 122 | # bottom, using the given strftime format. 123 | html_last_updated_fmt = '%b %d, %Y' 124 | 125 | # If true, SmartyPants will be used to convert quotes and dashes to 126 | # typographically correct entities. 127 | #html_use_smartypants = True 128 | 129 | # Custom sidebar templates, maps document names to template names. 130 | #html_sidebars = {} 131 | 132 | # Additional templates that should be rendered to pages, maps page names to 133 | # template names. 134 | #html_additional_pages = {} 135 | 136 | # If false, no module index is generated. 137 | #html_use_modindex = True 138 | 139 | # If false, no index is generated. 140 | #html_use_index = True 141 | 142 | # If true, the index is split into individual pages for each letter. 143 | #html_split_index = False 144 | 145 | # If true, the reST sources are included in the HTML build as 146 | # _sources/. 147 | #html_copy_source = True 148 | 149 | # If true, an OpenSearch description file will be output, and all pages 150 | # will contain a tag referring to it. 
The value of this option must 151 | # be the base URL from which the finished HTML is served. 152 | #html_use_opensearch = '' 153 | 154 | # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). 155 | #html_file_suffix = '' 156 | 157 | # Output file base name for HTML help builder. 158 | htmlhelp_basename = 'supervisor' 159 | 160 | 161 | # Options for LaTeX output 162 | # ------------------------ 163 | 164 | # The paper size ('letter' or 'a4'). 165 | #latex_paper_size = 'letter' 166 | 167 | # The font size ('10pt', '11pt' or '12pt'). 168 | #latex_font_size = '10pt' 169 | 170 | # Grouping the document tree into LaTeX files. List of tuples 171 | # (source start file, target name, title, 172 | # author, document class [howto/manual]). 173 | latex_documents = [ 174 | ('index', 'supervisor.tex', 'supervisor Documentation', 175 | 'Supervisor Developers', 'manual'), 176 | ] 177 | 178 | # The name of an image file (relative to this directory) to place at the 179 | # top of the title page. 180 | latex_logo = '.static/logo_hi.gif' 181 | 182 | # For "manual" documents, if this is true, then toplevel headings are 183 | # parts, not chapters. 184 | #latex_use_parts = False 185 | 186 | # Additional stuff for the LaTeX preamble. 187 | #latex_preamble = '' 188 | 189 | # Documents to append as an appendix to all manuals. 190 | #latex_appendices = [] 191 | 192 | # If false, no module index is generated. 
193 | #latex_use_modindex = True 194 | -------------------------------------------------------------------------------- /supervisor/compat.py: -------------------------------------------------------------------------------- 1 | from __future__ import absolute_import 2 | 3 | import sys 4 | 5 | PY2 = sys.version_info[0] == 2 6 | 7 | if PY2: # pragma: no cover 8 | long = long 9 | raw_input = raw_input 10 | unicode = unicode 11 | unichr = unichr 12 | basestring = basestring 13 | 14 | def as_bytes(s, encoding='utf-8'): 15 | if isinstance(s, str): 16 | return s 17 | else: 18 | return s.encode(encoding) 19 | 20 | def as_string(s, encoding='utf-8'): 21 | if isinstance(s, unicode): 22 | return s 23 | else: 24 | return s.decode(encoding) 25 | 26 | def is_text_stream(stream): 27 | try: 28 | if isinstance(stream, file): 29 | return 'b' not in stream.mode 30 | except NameError: # python 3 31 | pass 32 | 33 | try: 34 | import _io 35 | return isinstance(stream, _io._TextIOBase) 36 | except ImportError: 37 | import io 38 | return isinstance(stream, io.TextIOWrapper) 39 | 40 | else: # pragma: no cover 41 | long = int 42 | basestring = str 43 | raw_input = input 44 | unichr = chr 45 | 46 | class unicode(str): 47 | def __init__(self, string, encoding, errors): 48 | str.__init__(self, string) 49 | 50 | def as_bytes(s, encoding='utf8'): 51 | if isinstance(s, bytes): 52 | return s 53 | else: 54 | return s.encode(encoding) 55 | 56 | def as_string(s, encoding='utf8'): 57 | if isinstance(s, str): 58 | return s 59 | else: 60 | return s.decode(encoding) 61 | 62 | def is_text_stream(stream): 63 | import _io 64 | return isinstance(stream, _io._TextIOBase) 65 | 66 | try: # pragma: no cover 67 | import xmlrpc.client as xmlrpclib 68 | except ImportError: # pragma: no cover 69 | import xmlrpclib 70 | 71 | try: # pragma: no cover 72 | import urllib.parse as urlparse 73 | import urllib.parse as urllib 74 | except ImportError: # pragma: no cover 75 | import urlparse 76 | import urllib 77 | 78 | try: 
# pragma: no cover 79 | from hashlib import sha1 80 | except ImportError: # pragma: no cover 81 | from sha import new as sha1 82 | 83 | try: # pragma: no cover 84 | import syslog 85 | except ImportError: # pragma: no cover 86 | syslog = None 87 | 88 | try: # pragma: no cover 89 | import ConfigParser 90 | except ImportError: # pragma: no cover 91 | import configparser as ConfigParser 92 | 93 | try: # pragma: no cover 94 | from StringIO import StringIO 95 | except ImportError: # pragma: no cover 96 | from io import StringIO 97 | 98 | try: # pragma: no cover 99 | from sys import maxint 100 | except ImportError: # pragma: no cover 101 | from sys import maxsize as maxint 102 | 103 | try: # pragma: no cover 104 | import http.client as httplib 105 | except ImportError: # pragma: no cover 106 | import httplib 107 | 108 | try: # pragma: no cover 109 | from base64 import decodebytes as decodestring, encodebytes as encodestring 110 | except ImportError: # pragma: no cover 111 | from base64 import decodestring, encodestring 112 | 113 | try: # pragma: no cover 114 | from xmlrpc.client import Fault 115 | except ImportError: # pragma: no cover 116 | from xmlrpclib import Fault 117 | 118 | try: # pragma: no cover 119 | from string import ascii_letters as letters 120 | except ImportError: # pragma: no cover 121 | from string import letters 122 | 123 | try: # pragma: no cover 124 | from hashlib import md5 125 | except ImportError: # pragma: no cover 126 | from md5 import md5 127 | 128 | try: # pragma: no cover 129 | import thread 130 | except ImportError: # pragma: no cover 131 | import _thread as thread 132 | 133 | try: # pragma: no cover 134 | from types import StringTypes 135 | except ImportError: # pragma: no cover 136 | StringTypes = (str,) 137 | 138 | try: # pragma: no cover 139 | from html import escape 140 | except ImportError: # pragma: no cover 141 | from cgi import escape 142 | 143 | try: # pragma: no cover 144 | import html.entities as htmlentitydefs 145 | except 
ImportError: # pragma: no cover 146 | import htmlentitydefs 147 | 148 | try: # pragma: no cover 149 | from html.parser import HTMLParser 150 | except ImportError: # pragma: no cover 151 | from HTMLParser import HTMLParser 152 | 153 | # Begin check for working shlex posix mode 154 | 155 | # https://github.com/Supervisor/supervisor/issues/328 156 | # https://github.com/Supervisor/supervisor/issues/873 157 | # https://bugs.python.org/issue21999 158 | 159 | from shlex import shlex as _shlex 160 | 161 | _shlex_posix_expectations = { 162 | 'foo="",bar=a': ['foo', '=', '', ',', 'bar', '=', 'a'], 163 | "'')abc": ['', ')', 'abc'] 164 | } 165 | 166 | shlex_posix_works = all( 167 | list(_shlex(_input, posix=True)) == _expected 168 | for _input, _expected in _shlex_posix_expectations.items() 169 | ) 170 | 171 | # End check for working shlex posix mode 172 | 173 | # Begin importlib/setuptools compatibility code 174 | 175 | # Supervisor used pkg_resources (a part of setuptools) to load package 176 | # resources for 15 years, until setuptools 67.5.0 (2023-03-05) deprecated 177 | # the use of pkg_resources. On Python 3.8 or later, Supervisor now uses 178 | # importlib (part of Python 3 stdlib). Unfortunately, on Python < 3.8, 179 | # Supervisor needs to use pkg_resources despite its deprecation. The PyPI 180 | # backport packages "importlib-resources" and "importlib-metadata" couldn't 181 | # be added as dependencies to Supervisor because they require even more 182 | # dependencies that would likely cause some Supervisor installs to fail. 
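As an aside, the ``import_spec`` shim above can be sketched in isolation. The following is an illustration only, not part of ``supervisor.compat``: it mirrors the ``importlib.metadata`` branch, assuming Python 3.8+, and uses ``collections:OrderedDict`` as an arbitrary stdlib example spec.

```python
# Illustration only -- mirrors the importlib.metadata branch of import_spec.
from importlib.metadata import EntryPoint


def import_spec(spec):
    # An entry point's value has the form "module:attr"; load() imports
    # the module and resolves the attribute.
    return EntryPoint(None, spec, None).load()


odict = import_spec("collections:OrderedDict")  # resolves collections.OrderedDict
```

The same dotted-spec syntax is what supervisord accepts in places like event listener and RPC interface factory settings, which is why a single resolver function suffices for both the importlib and pkg_resources code paths.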
183 | from warnings import filterwarnings as _fw 184 | _fw("ignore", message="pkg_resources is deprecated as an API") 185 | 186 | try: # pragma: no cover 187 | from importlib.metadata import EntryPoint as _EntryPoint 188 | 189 | def import_spec(spec): 190 | return _EntryPoint(None, spec, None).load() 191 | 192 | except ImportError: # pragma: no cover 193 | from pkg_resources import EntryPoint as _EntryPoint 194 | 195 | def import_spec(spec): 196 | ep = _EntryPoint.parse("x=" + spec) 197 | if hasattr(ep, 'resolve'): 198 | # this is available on setuptools >= 10.2 199 | return ep.resolve() 200 | else: 201 | # this causes a DeprecationWarning on setuptools >= 11.3 202 | return ep.load(False) 203 | 204 | try: # pragma: no cover 205 | import importlib.resources as _importlib_resources 206 | 207 | if hasattr(_importlib_resources, "files"): 208 | def resource_filename(package, path): 209 | return str(_importlib_resources.files(package).joinpath(path)) 210 | 211 | else: 212 | # fall back to deprecated .path if .files is not available 213 | def resource_filename(package, path): 214 | with _importlib_resources.path(package, '__init__.py') as p: 215 | return str(p.parent.joinpath(path)) 216 | 217 | except ImportError: # pragma: no cover 218 | from pkg_resources import resource_filename 219 | 220 | # End importlib/setuptools compatibility code 221 | -------------------------------------------------------------------------------- /supervisor/medusa/default_handler.py: -------------------------------------------------------------------------------- 1 | # -*- Mode: Python -*- 2 | # 3 | # Author: Sam Rushing 4 | # Copyright 1997 by Sam Rushing 5 | # All Rights Reserved. 
6 | # 7 | 8 | RCS_ID = '$Id: default_handler.py,v 1.8 2002/08/01 18:15:45 akuchling Exp $' 9 | 10 | # standard python modules 11 | import mimetypes 12 | import re 13 | import stat 14 | 15 | # medusa modules 16 | import supervisor.medusa.http_date as http_date 17 | import supervisor.medusa.http_server as http_server 18 | import supervisor.medusa.producers as producers 19 | 20 | from supervisor.medusa.util import html_repr 21 | 22 | unquote = http_server.unquote 23 | 24 | # This is the 'default' handler. it implements the base set of 25 | # features expected of a simple file-delivering HTTP server. file 26 | # services are provided through a 'filesystem' object, the very same 27 | # one used by the FTP server. 28 | # 29 | # You can replace or modify this handler if you want a non-standard 30 | # HTTP server. You can also derive your own handler classes from 31 | # it. 32 | # 33 | # support for handling POST requests is available in the derived 34 | # class , defined below. 35 | # 36 | 37 | from supervisor.medusa.counter import counter 38 | 39 | class default_handler: 40 | 41 | valid_commands = ['GET', 'HEAD'] 42 | 43 | IDENT = 'Default HTTP Request Handler' 44 | 45 | # Pathnames that are tried when a URI resolves to a directory name 46 | directory_defaults = [ 47 | 'index.html', 48 | 'default.html' 49 | ] 50 | 51 | default_file_producer = producers.file_producer 52 | 53 | def __init__ (self, filesystem): 54 | self.filesystem = filesystem 55 | # count total hits 56 | self.hit_counter = counter() 57 | # count file deliveries 58 | self.file_counter = counter() 59 | # count cache hits 60 | self.cache_counter = counter() 61 | 62 | hit_counter = 0 63 | 64 | def __repr__ (self): 65 | return '<%s (%s hits) at %x>' % ( 66 | self.IDENT, 67 | self.hit_counter, 68 | id (self) 69 | ) 70 | 71 | # always match, since this is a default 72 | def match (self, request): 73 | return 1 74 | 75 | # handle a file request, with caching. 
76 | 77 | def handle_request (self, request): 78 | 79 | if request.command not in self.valid_commands: 80 | request.error (400) # bad request 81 | return 82 | 83 | self.hit_counter.increment() 84 | 85 | path, params, query, fragment = request.split_uri() 86 | 87 | if '%' in path: 88 | path = unquote (path) 89 | 90 | # strip off all leading slashes 91 | while path and path[0] == '/': 92 | path = path[1:] 93 | 94 | if self.filesystem.isdir (path): 95 | if path and path[-1] != '/': 96 | request['Location'] = 'http://%s/%s/' % ( 97 | request.channel.server.server_name, 98 | path 99 | ) 100 | request.error (301) 101 | return 102 | 103 | # we could also generate a directory listing here, 104 | # may want to move this into another method for that 105 | # purpose 106 | found = 0 107 | if path and path[-1] != '/': 108 | path += '/' 109 | for default in self.directory_defaults: 110 | p = path + default 111 | if self.filesystem.isfile (p): 112 | path = p 113 | found = 1 114 | break 115 | if not found: 116 | request.error (404) # Not Found 117 | return 118 | 119 | elif not self.filesystem.isfile (path): 120 | request.error (404) # Not Found 121 | return 122 | 123 | file_length = self.filesystem.stat (path)[stat.ST_SIZE] 124 | 125 | ims = get_header_match (IF_MODIFIED_SINCE, request.header) 126 | 127 | length_match = 1 128 | if ims: 129 | length = ims.group (4) 130 | if length: 131 | try: 132 | length = int(length) 133 | if length != file_length: 134 | length_match = 0 135 | except: 136 | pass 137 | 138 | ims_date = 0 139 | 140 | if ims: 141 | ims_date = http_date.parse_http_date (ims.group (1)) 142 | 143 | try: 144 | mtime = self.filesystem.stat (path)[stat.ST_MTIME] 145 | except: 146 | request.error (404) 147 | return 148 | 149 | if length_match and ims_date: 150 | if mtime <= ims_date: 151 | request.reply_code = 304 152 | request.done() 153 | self.cache_counter.increment() 154 | return 155 | try: 156 | file = self.filesystem.open (path, 'rb') 157 | except IOError: 158 | 
request.error (404) 159 | return 160 | 161 | request['Last-Modified'] = http_date.build_http_date (mtime) 162 | request['Content-Length'] = file_length 163 | self.set_content_type (path, request) 164 | 165 | if request.command == 'GET': 166 | request.push (self.default_file_producer (file)) 167 | 168 | self.file_counter.increment() 169 | request.done() 170 | 171 | def set_content_type (self, path, request): 172 | typ, encoding = mimetypes.guess_type(path) 173 | if typ is not None: 174 | request['Content-Type'] = typ 175 | else: 176 | # TODO: test a chunk off the front of the file for 8-bit 177 | # characters, and use application/octet-stream instead. 178 | request['Content-Type'] = 'text/plain' 179 | 180 | def status (self): 181 | return producers.simple_producer ( 182 | '
<li>%s' % html_repr (self) 183 | + '<ul>' 184 | + '  <li><b>Total Hits:</b> %s' % self.hit_counter 185 | + '  <li><b>Files Delivered:</b> %s' % self.file_counter 186 | + '  <li><b>Cache Hits:</b> %s' % self.cache_counter 187 | + '</ul>' 188 | ) 189 | 190 | # HTTP/1.0 doesn't say anything about the "; length=nnnn" addition 191 | # to this header. I suppose its purpose is to avoid the overhead 192 | # of parsing dates... 193 | IF_MODIFIED_SINCE = re.compile ( 194 | 'If-Modified-Since: ([^;]+)((; length=([0-9]+)$)|$)', 195 | re.IGNORECASE 196 | ) 197 | 198 | USER_AGENT = re.compile ('User-Agent: (.*)', re.IGNORECASE) 199 | 200 | CONTENT_TYPE = re.compile ( 201 | r'Content-Type: ([^;]+)((; boundary=([A-Za-z0-9\'\(\)+_,./:=?-]+)$)|$)', 202 | re.IGNORECASE 203 | ) 204 | 205 | get_header = http_server.get_header 206 | get_header_match = http_server.get_header_match 207 | 208 | def get_extension (path): 209 | dirsep = path.rfind('/') 210 | dotsep = path.rfind('.') 211 | if dotsep > dirsep: 212 | return path[dotsep+1:] 213 | else: 214 | return '' 215 | -------------------------------------------------------------------------------- /docs/introduction.rst: -------------------------------------------------------------------------------- 1 | Introduction 2 | ============ 3 | 4 | Overview 5 | -------- 6 | 7 | Supervisor is a client/server system that allows its users to control 8 | a number of processes on UNIX-like operating systems. It was inspired 9 | by the following: 10 | 11 | Convenience 12 | 13 | It is often inconvenient to need to write ``rc.d`` scripts for every 14 | single process instance. ``rc.d`` scripts are a great 15 | lowest-common-denominator form of process 16 | initialization/autostart/management, but they can be painful to 17 | write and maintain. Additionally, ``rc.d`` scripts cannot 18 | automatically restart a crashed process and many programs do not 19 | restart themselves properly on a crash. Supervisord starts 20 | processes as its subprocesses, and can be configured to 21 | automatically restart them on a crash. It can also automatically be 22 | configured to start processes on its own invocation.
23 | 24 | Accuracy 25 | 26 | It's often difficult to get accurate up/down status on processes on 27 | UNIX. Pidfiles often lie. Supervisord starts processes as 28 | subprocesses, so it always knows the true up/down status of its 29 | children and can be queried conveniently for this data. 30 | 31 | Delegation 32 | 33 | Users who need to control process state often need only to do that. 34 | They don't want or need full-blown shell access to the machine on 35 | which the processes are running. Processes which listen on "low" 36 | TCP ports often need to be started and restarted as the root user (a 37 | UNIX misfeature). It's usually the case that it's perfectly fine to 38 | allow "normal" people to stop or restart such a process, but 39 | providing them with shell access is often impractical, and providing 40 | them with root access or sudo access is often impossible. It's also 41 | (rightly) difficult to explain to them why this problem exists. If 42 | supervisord is started as root, it is possible to allow "normal" 43 | users to control such processes without needing to explain the 44 | intricacies of the problem to them. Supervisorctl allows a very 45 | limited form of access to the machine, essentially allowing users to 46 | see process status and control supervisord-controlled subprocesses 47 | by emitting "stop", "start", and "restart" commands from a simple 48 | shell or web UI. 49 | 50 | Process Groups 51 | 52 | Processes often need to be started and stopped in groups, sometimes 53 | even in a "priority order". It's often difficult to explain to 54 | people how to do this. Supervisor allows you to assign priorities 55 | to processes, and allows users to emit commands via the supervisorctl 56 | client like "start all", and "restart all", which starts them in the 57 | preassigned priority order. Additionally, processes can be grouped 58 | into "process groups" and a set of logically related processes can 59 | be stopped and started as a unit.
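The priority and grouping behavior described above is driven by the configuration file. A hypothetical fragment (the program and group names, commands, and priority values here are invented for illustration) might look like:

```ini
; lower priority values are started first, stopped last
[program:database]
command=/usr/local/bin/run-db
priority=100

[program:webapp]
command=/usr/local/bin/run-webapp
priority=200

; "restart site:*" in supervisorctl acts on the whole group
[group:site]
programs=database,webapp
```

With a fragment like this, "start all" would start ``database`` before ``webapp``, and the ``site`` group lets both be controlled as a unit.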
60 | 61 | Features 62 | -------- 63 | 64 | Simple 65 | 66 | Supervisor is configured through a simple INI-style config file 67 | that’s easy to learn. It provides many per-process options that make 68 | your life easier like restarting failed processes and automatic log 69 | rotation. 70 | 71 | Centralized 72 | 73 | Supervisor provides you with one place to start, stop, and monitor 74 | your processes. Processes can be controlled individually or in 75 | groups. You can configure Supervisor to provide a local or remote 76 | command line and web interface. 77 | 78 | Efficient 79 | 80 | Supervisor starts its subprocesses via fork/exec and subprocesses 81 | don’t daemonize. The operating system signals Supervisor immediately 82 | when a process terminates, unlike some solutions that rely on 83 | troublesome PID files and periodic polling to restart failed 84 | processes. 85 | 86 | Extensible 87 | 88 | Supervisor has a simple event notification protocol that programs 89 | written in any language can use to monitor it, and an XML-RPC 90 | interface for control. It is also built with extension points that 91 | can be leveraged by Python developers. 92 | 93 | Compatible 94 | 95 | Supervisor works on just about everything except for Windows. It is 96 | tested and supported on Linux, Mac OS X, Solaris, and FreeBSD. It is 97 | written entirely in Python, so installation does not require a C 98 | compiler. 99 | 100 | Proven 101 | 102 | While Supervisor is very actively developed today, it is not new 103 | software. Supervisor has been around for years and is already in use 104 | on many servers. 105 | 106 | Supervisor Components 107 | --------------------- 108 | 109 | :program:`supervisord` 110 | 111 | The server piece of supervisor is named :program:`supervisord`. 
It 112 | is responsible for starting child programs at its own invocation, 113 | responding to commands from clients, restarting crashed or exited 114 | subprocesses, logging its subprocess ``stdout`` and ``stderr`` 115 | output, and generating and handling "events" corresponding to points 116 | in subprocess lifetimes. 117 | 118 | The server process uses a configuration file. This is typically 119 | located in :file:`/etc/supervisord.conf`. This configuration file 120 | is a "Windows-INI" style config file. It is important to keep this 121 | file secure via proper filesystem permissions because it may contain 122 | unencrypted usernames and passwords. 123 | 124 | :program:`supervisorctl` 125 | 126 | The command-line client piece of the supervisor is named 127 | :program:`supervisorctl`. It provides a shell-like interface to the 128 | features provided by :program:`supervisord`. From 129 | :program:`supervisorctl`, a user can connect to different 130 | :program:`supervisord` processes (one at a time), get status on the 131 | subprocesses controlled by, stop and start subprocesses of, and get lists of 132 | running processes of a :program:`supervisord`. 133 | 134 | The command-line client talks to the server across a UNIX domain 135 | socket or an internet (TCP) socket. The server can assert that the 136 | user of a client should present authentication credentials before it 137 | allows them to perform commands. The client process typically uses 138 | the same configuration file as the server but any configuration file 139 | with a ``[supervisorctl]`` section in it will work. 140 | 141 | Web Server 142 | 143 | A (sparse) web user interface with functionality comparable to 144 | :program:`supervisorctl` may be accessed via a browser if you start 145 | :program:`supervisord` against an internet socket. Visit the server 146 | URL (e.g.
``http://localhost:9001/``) to view and control process 147 | status through the web interface after activating the configuration 148 | file's ``[inet_http_server]`` section. 149 | 150 | XML-RPC Interface 151 | 152 | The same HTTP server which serves the web UI serves up an XML-RPC 153 | interface that can be used to interrogate and control supervisor and 154 | the programs it runs. See :ref:`xml_rpc`. 155 | 156 | Platform Requirements 157 | --------------------- 158 | 159 | Supervisor has been tested and is known to run on Linux (Ubuntu 18.04), 160 | Mac OS X (10.4/10.5/10.6), and Solaris (10 for Intel) and FreeBSD 6.1. 161 | It will likely work fine on most UNIX systems. 162 | 163 | Supervisor will *not* run at all under any version of Windows. 164 | 165 | Supervisor is intended to work on Python 3 version 3.4 or later 166 | and on Python 2 version 2.7. 167 | -------------------------------------------------------------------------------- /supervisor/tests/test_web.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | 3 | from supervisor.tests.base import DummySupervisor 4 | from supervisor.tests.base import DummyRequest 5 | 6 | class DeferredWebProducerTests(unittest.TestCase): 7 | def _getTargetClass(self): 8 | from supervisor.web import DeferredWebProducer 9 | return DeferredWebProducer 10 | 11 | def _makeOne(self, request, callback): 12 | producer = self._getTargetClass()(request, callback) 13 | return producer 14 | 15 | def test_ctor(self): 16 | request = DummyRequest('/index.html', [], '', '') 17 | callback = lambda *x: None 18 | callback.delay = 1 19 | producer = self._makeOne(request, callback) 20 | self.assertEqual(producer.callback, callback) 21 | self.assertEqual(producer.request, request) 22 | self.assertEqual(producer.finished, False) 23 | self.assertEqual(producer.delay, 1) 24 | 25 | def test_more_not_done_yet(self): 26 | request = DummyRequest('/index.html', [], '', '') 27 | from 
supervisor.http import NOT_DONE_YET 28 | callback = lambda *x: NOT_DONE_YET 29 | callback.delay = 1 30 | producer = self._makeOne(request, callback) 31 | self.assertEqual(producer.more(), NOT_DONE_YET) 32 | 33 | def test_more_finished(self): 34 | request = DummyRequest('/index.html', [], '', '') 35 | callback = lambda *x: 'done' 36 | callback.delay = 1 37 | producer = self._makeOne(request, callback) 38 | self.assertEqual(producer.more(), None) 39 | self.assertTrue(producer.finished) 40 | self.assertEqual(producer.more(), '') 41 | 42 | def test_more_exception_caught(self): 43 | request = DummyRequest('/index.html', [], '', '') 44 | def callback(*arg): 45 | raise ValueError('foo') 46 | callback.delay = 1 47 | producer = self._makeOne(request, callback) 48 | self.assertEqual(producer.more(), None) 49 | logdata = request.channel.server.logger.logged 50 | self.assertEqual(len(logdata), 1) 51 | logged = logdata[0] 52 | self.assertEqual(logged[0], 'Web interface error') 53 | self.assertTrue(logged[1].startswith('Traceback'), logged[1]) 54 | self.assertEqual(producer.finished, True) 55 | self.assertEqual(request._error, 500) 56 | 57 | def test_sendresponse_redirect(self): 58 | request = DummyRequest('/index.html', [], '', '') 59 | callback = lambda *arg: None 60 | callback.delay = 1 61 | producer = self._makeOne(request, callback) 62 | response = {'headers': {'Location':'abc'}} 63 | result = producer.sendresponse(response) 64 | self.assertEqual(result, None) 65 | self.assertEqual(request._error, 301) 66 | self.assertEqual(request.headers['Content-Type'], 'text/plain') 67 | self.assertEqual(request.headers['Content-Length'], 0) 68 | 69 | def test_sendresponse_withbody_and_content_type(self): 70 | request = DummyRequest('/index.html', [], '', '') 71 | callback = lambda *arg: None 72 | callback.delay = 1 73 | producer = self._makeOne(request, callback) 74 | response = {'body': 'abc', 'headers':{'Content-Type':'text/html'}} 75 | result = producer.sendresponse(response) 76 | 
self.assertEqual(result, None) 77 | self.assertEqual(request.headers['Content-Type'], 'text/html') 78 | self.assertEqual(request.headers['Content-Length'], 3) 79 | self.assertEqual(request.producers[0], 'abc') 80 | 81 | class UIHandlerTests(unittest.TestCase): 82 | def _getTargetClass(self): 83 | from supervisor.web import supervisor_ui_handler 84 | return supervisor_ui_handler 85 | 86 | def _makeOne(self): 87 | supervisord = DummySupervisor() 88 | handler = self._getTargetClass()(supervisord) 89 | return handler 90 | 91 | def test_handle_request_no_view_method(self): 92 | request = DummyRequest('/foo.css', [], '', '', {'PATH_INFO':'/foo.css'}) 93 | handler = self._makeOne() 94 | data = handler.handle_request(request) 95 | self.assertEqual(data, None) 96 | 97 | def test_handle_request_default(self): 98 | request = DummyRequest('/index.html', [], '', '', 99 | {'PATH_INFO':'/index.html'}) 100 | handler = self._makeOne() 101 | data = handler.handle_request(request) 102 | self.assertEqual(data, None) 103 | self.assertEqual(request.channel.producer.request, request) 104 | from supervisor.web import StatusView 105 | self.assertEqual(request.channel.producer.callback.__class__,StatusView) 106 | 107 | def test_handle_request_index_html(self): 108 | request = DummyRequest('/index.html', [], '', '', 109 | {'PATH_INFO':'/index.html'}) 110 | handler = self._makeOne() 111 | handler.handle_request(request) 112 | from supervisor.web import StatusView 113 | view = request.channel.producer.callback 114 | self.assertEqual(view.__class__, StatusView) 115 | self.assertEqual(view.context.template, 'ui/status.html') 116 | 117 | def test_handle_request_tail_html(self): 118 | request = DummyRequest('/tail.html', [], '', '', 119 | {'PATH_INFO':'/tail.html'}) 120 | handler = self._makeOne() 121 | handler.handle_request(request) 122 | from supervisor.web import TailView 123 | view = request.channel.producer.callback 124 | self.assertEqual(view.__class__, TailView) 125 | 
self.assertEqual(view.context.template, 'ui/tail.html') 126 | 127 | def test_handle_request_ok_html(self): 128 | request = DummyRequest('/tail.html', [], '', '', 129 | {'PATH_INFO':'/ok.html'}) 130 | handler = self._makeOne() 131 | handler.handle_request(request) 132 | from supervisor.web import OKView 133 | view = request.channel.producer.callback 134 | self.assertEqual(view.__class__, OKView) 135 | self.assertEqual(view.context.template, None) 136 | 137 | 138 | class StatusViewTests(unittest.TestCase): 139 | def _getTargetClass(self): 140 | from supervisor.web import StatusView 141 | return StatusView 142 | 143 | def _makeOne(self, context): 144 | klass = self._getTargetClass() 145 | return klass(context) 146 | 147 | def test_make_callback_noaction(self): 148 | context = DummyContext() 149 | context.supervisord = DummySupervisor() 150 | context.template = 'ui/status.html' 151 | context.form = {} 152 | view = self._makeOne(context) 153 | self.assertRaises(ValueError, view.make_callback, 'process', None) 154 | 155 | def test_render_noaction(self): 156 | context = DummyContext() 157 | context.supervisord = DummySupervisor() 158 | context.template = 'ui/status.html' 159 | context.request = DummyRequest('/foo', [], '', '') 160 | context.form = {} 161 | context.response = {} 162 | view = self._makeOne(context) 163 | data = view.render() 164 | self.assertTrue(data.startswith('' % self.file 46 | 47 | def write (self, data): 48 | self.file.write (data) 49 | self.maybe_flush() 50 | 51 | def writeline (self, line): 52 | self.file.writeline (line) 53 | self.maybe_flush() 54 | 55 | def writelines (self, lines): 56 | self.file.writelines (lines) 57 | self.maybe_flush() 58 | 59 | def maybe_flush (self): 60 | if self.do_flush: 61 | self.file.flush() 62 | 63 | def flush (self): 64 | self.file.flush() 65 | 66 | def softspace (self, *args): 67 | pass 68 | 69 | def log (self, message): 70 | if message[-1] not in ('\r', '\n'): 71 | self.write (message + '\n') 72 | else: 73 | 
self.write (message) 74 | 75 | # like a file_logger, but it must be attached to a filename. 76 | # When the log gets too full, or a certain time has passed, 77 | # it backs up the log and starts a new one. Note that backing 78 | # up the log is done via "mv" because anything else (cp, gzip) 79 | # would take time, during which medusa would do nothing else. 80 | 81 | class rotating_file_logger (file_logger): 82 | 83 | # If freq is non-None we back up "daily", "weekly", or "monthly". 84 | # Else if maxsize is non-None we back up whenever the log gets 85 | # to big. If both are None we never back up. 86 | def __init__ (self, file, freq=None, maxsize=None, flush=1, mode='a'): 87 | file_logger.__init__ (self, file, flush, mode) 88 | self.filename = file 89 | self.mode = mode 90 | self.freq = freq 91 | self.maxsize = maxsize 92 | self.rotate_when = self.next_backup(self.freq) 93 | 94 | def __repr__ (self): 95 | return '' % self.file 96 | 97 | # We back up at midnight every 1) day, 2) monday, or 3) 1st of month 98 | def next_backup (self, freq): 99 | (yr, mo, day, hr, min, sec, wd, jday, dst) = time.localtime(time.time()) 100 | if freq == 'daily': 101 | return time.mktime((yr,mo,day+1, 0,0,0, 0,0,-1)) 102 | elif freq == 'weekly': 103 | return time.mktime((yr,mo,day-wd+7, 0,0,0, 0,0,-1)) # wd(monday)==0 104 | elif freq == 'monthly': 105 | return time.mktime((yr,mo+1,1, 0,0,0, 0,0,-1)) 106 | else: 107 | return None # not a date-based backup 108 | 109 | def maybe_flush (self): # rotate first if necessary 110 | self.maybe_rotate() 111 | if self.do_flush: # from file_logger() 112 | self.file.flush() 113 | 114 | def maybe_rotate (self): 115 | if self.freq and time.time() > self.rotate_when: 116 | self.rotate() 117 | self.rotate_when = self.next_backup(self.freq) 118 | elif self.maxsize: # rotate when we get too big 119 | try: 120 | if os.stat(self.filename)[stat.ST_SIZE] > self.maxsize: 121 | self.rotate() 122 | except os.error: # file not found, probably 123 | self.rotate() # 
will create a new file 124 | 125 | def rotate (self): 126 | (yr, mo, day, hr, min, sec, wd, jday, dst) = time.localtime(time.time()) 127 | try: 128 | self.file.close() 129 | newname = '%s.ends%04d%02d%02d' % (self.filename, yr, mo, day) 130 | try: 131 | open(newname, "r").close() # check if file exists 132 | newname += "-%02d%02d%02d" % (hr, min, sec) 133 | except: # YEAR_MONTH_DAY is unique 134 | pass 135 | os.rename(self.filename, newname) 136 | self.file = open(self.filename, self.mode) 137 | except: 138 | pass 139 | 140 | # log to a stream socket, asynchronously 141 | 142 | class socket_logger (asynchat.async_chat): 143 | 144 | def __init__ (self, address): 145 | asynchat.async_chat.__init__(self) 146 | if isinstance(address, str): 147 | self.create_socket (socket.AF_UNIX, socket.SOCK_STREAM) 148 | else: 149 | self.create_socket (socket.AF_INET, socket.SOCK_STREAM) 150 | 151 | self.connect (address) 152 | self.address = address 153 | 154 | def __repr__ (self): 155 | return '<socket logger: address=%s>' % (self.address) 156 | 157 | def log (self, message): 158 | if message[-2:] != '\r\n': 159 | self.socket.push (message + '\r\n') 160 | else: 161 | self.socket.push (message) 162 | 163 | # log to multiple places 164 | class multi_logger: 165 | def __init__ (self, loggers): 166 | self.loggers = loggers 167 | 168 | def __repr__ (self): 169 | return '<multi logger: %s>' % (repr(self.loggers)) 170 | 171 | def log (self, message): 172 | for logger in self.loggers: 173 | logger.log (message) 174 | 175 | class resolving_logger: 176 | """Feed (ip, message) combinations into this logger to get a 177 | resolved hostname in front of the message.
The message will not 178 | be logged until the PTR request finishes (or fails).""" 179 | 180 | def __init__ (self, resolver, logger): 181 | self.resolver = resolver 182 | self.logger = logger 183 | 184 | class logger_thunk: 185 | def __init__ (self, message, logger): 186 | self.message = message 187 | self.logger = logger 188 | 189 | def __call__ (self, host, ttl, answer): 190 | if not answer: 191 | answer = host 192 | self.logger.log ('%s:%s' % (answer, self.message)) 193 | 194 | def log (self, ip, message): 195 | self.resolver.resolve_ptr ( 196 | ip, 197 | self.logger_thunk ( 198 | message, 199 | self.logger 200 | ) 201 | ) 202 | 203 | class unresolving_logger: 204 | """Just in case you don't want to resolve""" 205 | def __init__ (self, logger): 206 | self.logger = logger 207 | 208 | def log (self, ip, message): 209 | self.logger.log ('%s:%s' % (ip, message)) 210 | 211 | 212 | def strip_eol (line): 213 | while line and line[-1] in '\r\n': 214 | line = line[:-1] 215 | return line 216 | 217 | class tail_logger: 218 | """Keep track of the last log messages""" 219 | def __init__ (self, logger, size=500): 220 | self.size = size 221 | self.logger = logger 222 | self.messages = [] 223 | 224 | def log (self, message): 225 | self.messages.append (strip_eol (message)) 226 | if len (self.messages) > self.size: 227 | del self.messages[0] 228 | self.logger.log (message) 229 | -------------------------------------------------------------------------------- /supervisor/http_client.py: -------------------------------------------------------------------------------- 1 | # this code based on Daniel Krech's RDFLib HTTP client code (see rdflib.dev) 2 | 3 | import sys 4 | import socket 5 | 6 | from supervisor.compat import as_bytes 7 | from supervisor.compat import as_string 8 | from supervisor.compat import encodestring 9 | from supervisor.compat import PY2 10 | from supervisor.compat import urlparse 11 | from supervisor.medusa import asynchat_25 as asynchat 12 | 13 | CR = b'\x0d' 
14 | LF = b'\x0a' 15 | CRLF = CR+LF 16 | 17 | class Listener(object): 18 | 19 | def status(self, url, status): 20 | pass 21 | 22 | def error(self, url, error): 23 | sys.stderr.write("%s %s\n" % (url, error)) 24 | 25 | def response_header(self, url, name, value): 26 | pass 27 | 28 | def done(self, url): 29 | pass 30 | 31 | def feed(self, url, data): 32 | try: 33 | sdata = as_string(data) 34 | except UnicodeDecodeError: 35 | sdata = 'Undecodable: %r' % data 36 | # We've got Unicode data in sdata now, but writing to stdout sometimes 37 | # fails - see issue #1231. 38 | try: 39 | sys.stdout.write(sdata) 40 | except UnicodeEncodeError: 41 | if PY2: 42 | # This might seem like The Wrong Thing To Do (writing bytes 43 | # rather than text to an output stream), but it seems to work 44 | # OK for Python 2.7. 45 | sys.stdout.write(data) 46 | else: 47 | s = ('Unable to write Unicode to stdout because it has ' 48 | 'encoding %s' % sys.stdout.encoding) 49 | raise ValueError(s) 50 | sys.stdout.flush() 51 | 52 | def close(self, url): 53 | pass 54 | 55 | class HTTPHandler(asynchat.async_chat): 56 | def __init__( 57 | self, 58 | listener, 59 | username='', 60 | password=None, 61 | conn=None, 62 | map=None 63 | ): 64 | asynchat.async_chat.__init__(self, conn, map) 65 | self.listener = listener 66 | self.user_agent = 'Supervisor HTTP Client' 67 | self.buffer = b'' 68 | self.set_terminator(CRLF) 69 | self.connected = 0 70 | self.part = self.status_line 71 | self.chunk_size = 0 72 | self.chunk_read = 0 73 | self.length_read = 0 74 | self.length = 0 75 | self.encoding = None 76 | self.username = username 77 | self.password = password 78 | self.url = None 79 | self.error_handled = False 80 | 81 | def get(self, serverurl, path=''): 82 | if self.url is not None: 83 | raise AssertionError('Already doing a get') 84 | self.url = serverurl + path 85 | scheme, host, path_ignored, params, query, fragment = urlparse.urlparse( 86 | self.url) 87 | if not scheme in ("http", "unix"): 88 | raise 
NotImplementedError 89 | self.host = host 90 | if ":" in host: 91 | hostname, port = host.split(":", 1) 92 | port = int(port) 93 | else: 94 | hostname = host 95 | port = 80 96 | 97 | self.path = path 98 | self.port = port 99 | 100 | if scheme == "http": 101 | ip = hostname 102 | self.create_socket(socket.AF_INET, socket.SOCK_STREAM) 103 | self.connect((ip, self.port)) 104 | elif scheme == "unix": 105 | socketname = serverurl[7:] 106 | self.create_socket(socket.AF_UNIX, socket.SOCK_STREAM) 107 | self.connect(socketname) 108 | 109 | def close(self): 110 | self.listener.close(self.url) 111 | self.connected = 0 112 | self.del_channel() 113 | self.socket.close() 114 | self.url = "CLOSED" 115 | 116 | def header(self, name, value): 117 | self.push('%s: %s' % (name, value)) 118 | self.push(CRLF) 119 | 120 | def handle_error(self): 121 | if self.error_handled: 122 | return 123 | if 1 or self.connected: 124 | t,v,tb = sys.exc_info() 125 | msg = 'Cannot connect, error: %s (%s)' % (t, v) 126 | self.listener.error(self.url, msg) 127 | self.part = self.ignore 128 | self.close() 129 | self.error_handled = True 130 | del t 131 | del v 132 | del tb 133 | 134 | def handle_connect(self): 135 | self.connected = 1 136 | method = "GET" 137 | version = "HTTP/1.1" 138 | self.push("%s %s %s" % (method, self.path, version)) 139 | self.push(CRLF) 140 | self.header("Host", self.host) 141 | 142 | self.header('Accept-Encoding', 'chunked') 143 | self.header('Accept', '*/*') 144 | self.header('User-agent', self.user_agent) 145 | if self.password: 146 | auth = '%s:%s' % (self.username, self.password) 147 | auth = as_string(encodestring(as_bytes(auth))).strip() 148 | self.header('Authorization', 'Basic %s' % auth) 149 | self.push(CRLF) 150 | self.push(CRLF) 151 | 152 | 153 | def feed(self, data): 154 | self.listener.feed(self.url, data) 155 | 156 | def collect_incoming_data(self, bytes): 157 | self.buffer = self.buffer + bytes 158 | if self.part==self.body: 159 | self.feed(self.buffer) 160 | 
self.buffer = b'' 161 | 162 | def found_terminator(self): 163 | self.part() 164 | self.buffer = b'' 165 | 166 | def ignore(self): 167 | self.buffer = b'' 168 | 169 | def status_line(self): 170 | line = self.buffer 171 | 172 | version, status, reason = line.split(None, 2) 173 | status = int(status) 174 | if not version.startswith(b'HTTP/'): 175 | raise ValueError(line) 176 | 177 | self.listener.status(self.url, status) 178 | 179 | if status == 200: 180 | self.part = self.headers 181 | else: 182 | self.part = self.ignore 183 | msg = 'Cannot read, status code %s' % status 184 | self.listener.error(self.url, msg) 185 | self.close() 186 | return version, status, reason 187 | 188 | def headers(self): 189 | line = self.buffer 190 | if not line: 191 | if self.encoding == b'chunked': 192 | self.part = self.chunked_size 193 | else: 194 | self.part = self.body 195 | self.set_terminator(self.length) 196 | else: 197 | name, value = line.split(b':', 1) 198 | if name and value: 199 | name = name.lower() 200 | value = value.strip() 201 | if name == b'transfer-encoding': 202 | self.encoding = value 203 | elif name == b'content-length': 204 | self.length = int(value) 205 | self.response_header(name, value) 206 | 207 | def response_header(self, name, value): 208 | self.listener.response_header(self.url, name, value) 209 | 210 | def body(self): 211 | self.done() 212 | self.close() 213 | 214 | def done(self): 215 | self.listener.done(self.url) 216 | 217 | def chunked_size(self): 218 | line = self.buffer 219 | if not line: 220 | return 221 | chunk_size = int(line.split()[0], 16) 222 | if chunk_size==0: 223 | self.part = self.trailer 224 | else: 225 | self.set_terminator(chunk_size) 226 | self.part = self.chunked_body 227 | self.length += chunk_size 228 | 229 | def chunked_body(self): 230 | line = self.buffer 231 | self.set_terminator(CRLF) 232 | self.part = self.chunked_size 233 | self.feed(line) 234 | 235 | def trailer(self): 236 | # 
http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.6.1 237 | # trailer = *(entity-header CRLF) 238 | line = self.buffer 239 | if line == CRLF: 240 | self.done() 241 | self.close() 242 | -------------------------------------------------------------------------------- /supervisor/events.py: -------------------------------------------------------------------------------- 1 | from supervisor.states import getProcessStateDescription 2 | from supervisor.compat import as_string 3 | 4 | callbacks = [] 5 | 6 | def subscribe(type, callback): 7 | callbacks.append((type, callback)) 8 | 9 | def unsubscribe(type, callback): 10 | callbacks.remove((type, callback)) 11 | 12 | def notify(event): 13 | for type, callback in callbacks: 14 | if isinstance(event, type): 15 | callback(event) 16 | 17 | def clear(): 18 | callbacks[:] = [] 19 | 20 | class Event: 21 | """ Abstract event type """ 22 | pass 23 | 24 | class ProcessLogEvent(Event): 25 | """ Abstract """ 26 | channel = None 27 | def __init__(self, process, pid, data): 28 | self.process = process 29 | self.pid = pid 30 | self.data = data 31 | 32 | def payload(self): 33 | groupname = '' 34 | if self.process.group is not None: 35 | groupname = self.process.group.config.name 36 | try: 37 | data = as_string(self.data) 38 | except UnicodeDecodeError: 39 | data = 'Undecodable: %r' % self.data 40 | # On Python 2, stuff needs to be in Unicode before invoking the 41 | # % operator, otherwise implicit encodings to ASCII can cause 42 | # failures 43 | fmt = as_string('processname:%s groupname:%s pid:%s channel:%s\n%s') 44 | result = fmt % (as_string(self.process.config.name), 45 | as_string(groupname), self.pid, 46 | as_string(self.channel), data) 47 | return result 48 | 49 | class ProcessLogStdoutEvent(ProcessLogEvent): 50 | channel = 'stdout' 51 | 52 | class ProcessLogStderrEvent(ProcessLogEvent): 53 | channel = 'stderr' 54 | 55 | class ProcessCommunicationEvent(Event): 56 | """ Abstract """ 57 | # event mode tokens 58 | BEGIN_TOKEN 
= b'<!--XSUPERVISOR:BEGIN-->' 59 | END_TOKEN = b'<!--XSUPERVISOR:END-->' 60 | 61 | def __init__(self, process, pid, data): 62 | self.process = process 63 | self.pid = pid 64 | self.data = data 65 | 66 | def payload(self): 67 | groupname = '' 68 | if self.process.group is not None: 69 | groupname = self.process.group.config.name 70 | try: 71 | data = as_string(self.data) 72 | except UnicodeDecodeError: 73 | data = 'Undecodable: %r' % self.data 74 | return 'processname:%s groupname:%s pid:%s\n%s' % ( 75 | self.process.config.name, 76 | groupname, 77 | self.pid, 78 | data) 79 | 80 | class ProcessCommunicationStdoutEvent(ProcessCommunicationEvent): 81 | channel = 'stdout' 82 | 83 | class ProcessCommunicationStderrEvent(ProcessCommunicationEvent): 84 | channel = 'stderr' 85 | 86 | class RemoteCommunicationEvent(Event): 87 | def __init__(self, type, data): 88 | self.type = type 89 | self.data = data 90 | 91 | def payload(self): 92 | return 'type:%s\n%s' % (self.type, self.data) 93 | 94 | class SupervisorStateChangeEvent(Event): 95 | """ Abstract class """ 96 | def payload(self): 97 | return '' 98 | 99 | class SupervisorRunningEvent(SupervisorStateChangeEvent): 100 | pass 101 | 102 | class SupervisorStoppingEvent(SupervisorStateChangeEvent): 103 | pass 104 | 105 | class EventRejectedEvent: # purposely does not subclass Event 106 | def __init__(self, process, event): 107 | self.process = process 108 | self.event = event 109 | 110 | class ProcessStateEvent(Event): 111 | """ Abstract class, never raised directly """ 112 | frm = None 113 | to = None 114 | def __init__(self, process, from_state, expected=True): 115 | self.process = process 116 | self.from_state = from_state 117 | self.expected = expected 118 | # we eagerly render these so if the process pid, etc changes beneath 119 | # us, we stash the values at the time the event was sent 120 | self.extra_values = self.get_extra_values() 121 | 122 | def payload(self): 123 | groupname = '' 124 | if self.process.group is not None: 125 | groupname =
self.process.group.config.name 126 | L = [('processname', self.process.config.name), ('groupname', groupname), 127 | ('from_state', getProcessStateDescription(self.from_state))] 128 | L.extend(self.extra_values) 129 | s = ' '.join( [ '%s:%s' % (name, val) for (name, val) in L ] ) 130 | return s 131 | 132 | def get_extra_values(self): 133 | return [] 134 | 135 | class ProcessStateFatalEvent(ProcessStateEvent): 136 | pass 137 | 138 | class ProcessStateUnknownEvent(ProcessStateEvent): 139 | pass 140 | 141 | class ProcessStateStartingOrBackoffEvent(ProcessStateEvent): 142 | def get_extra_values(self): 143 | return [('tries', int(self.process.backoff))] 144 | 145 | class ProcessStateBackoffEvent(ProcessStateStartingOrBackoffEvent): 146 | pass 147 | 148 | class ProcessStateStartingEvent(ProcessStateStartingOrBackoffEvent): 149 | pass 150 | 151 | class ProcessStateExitedEvent(ProcessStateEvent): 152 | def get_extra_values(self): 153 | return [('expected', int(self.expected)), ('pid', self.process.pid)] 154 | 155 | class ProcessStateRunningEvent(ProcessStateEvent): 156 | def get_extra_values(self): 157 | return [('pid', self.process.pid)] 158 | 159 | class ProcessStateStoppingEvent(ProcessStateEvent): 160 | def get_extra_values(self): 161 | return [('pid', self.process.pid)] 162 | 163 | class ProcessStateStoppedEvent(ProcessStateEvent): 164 | def get_extra_values(self): 165 | return [('pid', self.process.pid)] 166 | 167 | class ProcessGroupEvent(Event): 168 | def __init__(self, group): 169 | self.group = group 170 | 171 | def payload(self): 172 | return 'groupname:%s\n' % self.group 173 | 174 | class ProcessGroupAddedEvent(ProcessGroupEvent): 175 | pass 176 | 177 | class ProcessGroupRemovedEvent(ProcessGroupEvent): 178 | pass 179 | 180 | class TickEvent(Event): 181 | """ Abstract """ 182 | def __init__(self, when, supervisord): 183 | self.when = when 184 | self.supervisord = supervisord 185 | 186 | def payload(self): 187 | return 'when:%s' % self.when 188 | 189 | class 
Tick5Event(TickEvent): 190 | period = 5 191 | 192 | class Tick60Event(TickEvent): 193 | period = 60 194 | 195 | class Tick3600Event(TickEvent): 196 | period = 3600 197 | 198 | TICK_EVENTS = [ Tick5Event, Tick60Event, Tick3600Event ] # imported elsewhere 199 | 200 | class EventTypes: 201 | EVENT = Event # abstract 202 | PROCESS_STATE = ProcessStateEvent # abstract 203 | PROCESS_STATE_STOPPED = ProcessStateStoppedEvent 204 | PROCESS_STATE_EXITED = ProcessStateExitedEvent 205 | PROCESS_STATE_STARTING = ProcessStateStartingEvent 206 | PROCESS_STATE_STOPPING = ProcessStateStoppingEvent 207 | PROCESS_STATE_BACKOFF = ProcessStateBackoffEvent 208 | PROCESS_STATE_FATAL = ProcessStateFatalEvent 209 | PROCESS_STATE_RUNNING = ProcessStateRunningEvent 210 | PROCESS_STATE_UNKNOWN = ProcessStateUnknownEvent 211 | PROCESS_COMMUNICATION = ProcessCommunicationEvent # abstract 212 | PROCESS_COMMUNICATION_STDOUT = ProcessCommunicationStdoutEvent 213 | PROCESS_COMMUNICATION_STDERR = ProcessCommunicationStderrEvent 214 | PROCESS_LOG = ProcessLogEvent 215 | PROCESS_LOG_STDOUT = ProcessLogStdoutEvent 216 | PROCESS_LOG_STDERR = ProcessLogStderrEvent 217 | REMOTE_COMMUNICATION = RemoteCommunicationEvent 218 | SUPERVISOR_STATE_CHANGE = SupervisorStateChangeEvent # abstract 219 | SUPERVISOR_STATE_CHANGE_RUNNING = SupervisorRunningEvent 220 | SUPERVISOR_STATE_CHANGE_STOPPING = SupervisorStoppingEvent 221 | TICK = TickEvent # abstract 222 | TICK_5 = Tick5Event 223 | TICK_60 = Tick60Event 224 | TICK_3600 = Tick3600Event 225 | PROCESS_GROUP = ProcessGroupEvent # abstract 226 | PROCESS_GROUP_ADDED = ProcessGroupAddedEvent 227 | PROCESS_GROUP_REMOVED = ProcessGroupRemovedEvent 228 | 229 | def getEventNameByType(requested): 230 | for name, typ in EventTypes.__dict__.items(): 231 | if typ is requested: 232 | return name 233 | 234 | def register(name, event): 235 | setattr(EventTypes, name, event) 236 | -------------------------------------------------------------------------------- 
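The `subscribe`/`notify` registry at the top of `events.py` is a small type-filtered publish/subscribe mechanism: a callback fires for any event that is an `isinstance` of the type it was registered under, so subscribing to an abstract base class receives all of its subclasses. A standalone sketch of that pattern (local copies of the functions, not an import of supervisor itself):

```python
# Standalone sketch of the subscribe/notify pattern used by
# supervisor.events; these are local re-definitions, not the real module.
callbacks = []

def subscribe(type, callback):
    callbacks.append((type, callback))

def notify(event):
    # fire every callback whose registered type matches via isinstance
    for type, callback in callbacks:
        if isinstance(event, type):
            callback(event)

class Event:
    pass

class TickEvent(Event):
    def __init__(self, when):
        self.when = when

seen = []
subscribe(TickEvent, lambda e: seen.append(e.when))
notify(TickEvent(5))   # matches TickEvent -> callback fires
notify(Event())        # a plain Event is not a TickEvent -> ignored
print(seen)            # -> [5]
```

Subscribing to `Event` instead of `TickEvent` would receive both notifications, which is how supervisor's abstract event types (PROCESS_STATE, TICK, ...) work as catch-alls.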
/supervisor/medusa/docs/README.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 |

    What is Medusa?

    5 |
    6 | 7 |

    8 | Medusa is an architecture for very-high-performance TCP/IP servers 9 | (like HTTP, FTP, and NNTP). Medusa is different from most other 10 | servers because it runs as a single process, multiplexing I/O with its 11 | various client and server connections within a single process/thread. 12 | 13 |

    14 | It is capable of smoother and higher performance than most other 15 | servers, while placing a dramatically reduced load on the server 16 | machine. The single-process, single-thread model simplifies design 17 | and enables some new persistence capabilities that are otherwise 18 | difficult or impossible to implement. 19 | 20 |

    21 | Medusa is supported on any platform that can run Python and includes a 22 | functional implementation of the <socket> and <select> 23 | modules. This includes the majority of Unix implementations. 24 | 25 |

    26 | During development, it is constantly tested on Linux and Win32 27 | [Win95/WinNT], but the core asynchronous capability has been shown to 28 | work on several other platforms, including the Macintosh. It might 29 | even work on VMS. 30 | 31 | 32 |

    The Power of Python

    33 | 34 |

    35 | A distinguishing feature of Medusa is that it is written entirely in 36 | Python. Python (http://www.python.org/) is a 37 | 'very-high-level' object-oriented language developed by Guido van 38 | Rossum (currently at CNRI). It is easy to learn, and includes many 39 | modern programming features such as storage management, dynamic 40 | typing, and an extremely flexible object system. It also provides 41 | convenient interfaces to C and C++. 42 | 43 |

    44 | The rapid prototyping and delivery capabilities are hard to exaggerate; 45 | for example 46 |

      47 | 48 |
    • It took me longer to read the documentation for persistent HTTP 49 | connections (the 'Keep-Alive' connection token) than to add the 50 | feature to Medusa. 51 | 52 |
    • A simple IRC-like chat server system was written in about 90 minutes. 53 | 54 |
    55 | 56 |

    I've heard similar stories from alpha test sites, and other users of 57 | the core async library. 58 | 59 |

    Server Notes

    60 | 61 |

    Both the FTP and HTTP servers use an abstracted 'filesystem object' to 62 | gain access to a given directory tree. One possible server extension 63 | technique would be to build behavior into this filesystem object, 64 | rather than directly into the server: Then the extension could be 65 | shared with both the FTP and HTTP servers. 66 | 67 |

    HTTP

    68 | 69 |

    The core HTTP server itself is quite simple - all functionality is 70 | provided through 'extensions'. Extensions can be plugged in 71 | dynamically. [i.e., you could log in to the server via the monitor 72 | service and add or remove an extension on the fly]. The basic 73 | file-delivery service is provided by a 'default' extension, which 74 | matches all URI's. You can build more complex behavior by replacing 75 | or extending this class. 76 | 77 | 78 |

    The default extension includes support for the 'Connection: Keep-Alive' 79 | token, and will re-use a client channel when requested by the client. 80 | 81 |

    FTP

    82 | 83 |

    On Unix, the ftp server includes support for 'real' users, so that it 84 | may be used as a drop-in replacement for the normal ftp server. Since 85 | most ftp servers on Unix use the 'forking' model, each child process 86 | changes its user/group persona after a successful login. This 87 | appears to be a secure design. 88 | 89 | 90 |

    Medusa takes a different approach - whenever Medusa performs an 91 | operation for a particular user [listing a directory, opening a file], 92 | it temporarily switches to that user's persona _only_ for the duration 93 | of the operation. [and each such operation is protected by a 94 | try/finally exception handler]. 95 | 96 | 97 |

    To do this Medusa MUST run with super-user privileges. This is a 98 | HIGHLY experimental approach, and although it has been thoroughly 99 | tested on Linux, security problems may still exist. If you are 100 | concerned about the security of your server machine, AND YOU SHOULD 101 | BE, I suggest running Medusa's ftp server in anonymous-only mode, 102 | under an account with limited privileges ('nobody' is usually used for 103 | this purpose). 104 | 105 | 106 |

    I am very interested in any feedback on this feature, most 107 | especially information on how the server behaves on different 108 | implementations of Unix, and of course any security problems that are 109 | found. 110 | 111 |


    112 | 113 |

    Monitor

    114 | 115 |

    The monitor server gives you remote, 'back-door' access to your server 116 | while it is running. It implements a remote python interpreter. Once 117 | connected to the monitor, you can do just about anything you can do from 118 | the normal python interpreter. You can examine data structures, servers, 119 | connection objects. You can enable or disable extensions, restart the server, 120 | reload modules, etc... 121 | 122 |

    The monitor server is protected with an MD5-based authentication 123 | similar to that proposed in RFC1725 for the POP3 protocol. The server 124 | sends the client a timestamp, which is then appended to a secret 125 | password. The resulting md5 digest is sent back to the server, which 126 | then compares this to the expected result. Failed login attempts are 127 | logged and immediately disconnected. The password itself is not sent 128 | over the network (unless you have foolishly transmitted it yourself 129 | through an insecure telnet or X11 session. 8^) 130 | 131 |

    For this reason telnet cannot be used to connect to the monitor 132 | server when it is in a secure mode (the default). A client program is 133 | provided for this purpose. You will be prompted for a password when 134 | starting up the server, and by the monitor client. 135 | 136 |

    For extra added security on Unix, the monitor server will 137 | eventually be able to use a Unix-domain socket, which can be protected 138 | behind a 'firewall' directory (similar to the InterNet News server). 139 | 140 |


    141 |

    Performance Notes

    142 | 143 |

    The select() function

    144 | 145 |

    At the heart of Medusa is a single select() loop. 146 | This loop handles all open socket connections, both servers and 147 | clients. It is in effect constantly asking the system: 'which of 148 | these sockets has activity?'. Performance of this system call can 149 | vary widely between operating systems. 150 | 151 |

    There are also often builtin limitations to the number of sockets 152 | ('file descriptors') that a single process, or a whole system, can 153 | manipulate at the same time. Early versions of Linux placed draconian 154 | limits (256) that have since been raised. Windows 95 has a limit of 155 | 64, while OSF/1 seems to allow up to 4096. 156 | 157 |

    These limits don't affect only Medusa, you will find them described 158 | in the documentation for other web and ftp servers, too. 159 | 160 |

    The documentation for the Apache web server has some excellent 161 | notes on tweaking performance for various Unix implementations. See 162 | 163 | http://www.apache.org/docs/misc/perf.html 164 | for more information. 165 | 166 |
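The single `select()` loop described above can be shown in a few lines of plain Python. This is a toy sketch, not Medusa code (Medusa layers asyncore/asynchat on top of this primitive):

```python
import select
import socket

# A toy, single-process select() loop in the spirit described above.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 0))   # ephemeral port, example only
server.listen(5)

# No client yet: "which of these sockets has activity?" -> none
readable, _, _ = select.select([server], [], [], 0)
assert readable == []

# After a client connects, the listening socket becomes readable
client = socket.create_connection(server.getsockname())
readable, _, _ = select.select([server], [], [], 1.0)
ready = server in readable
print(ready)                    # -> True

client.close()
server.close()
```

A real server would loop forever, `accept()`-ing on readable listeners and reading/writing on readable/writable client sockets, all within the one process.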

    Buffer sizes

    167 | 168 |

    169 | The default buffer sizes used by Medusa are set with a bias toward 170 | Internet-based servers: They are relatively small, so that the buffer 171 | overhead for each connection is low. The assumption is that Medusa 172 | will be talking to a large number of low-bandwidth connections, rather 173 | than a smaller number of high bandwidth. 174 | 175 |

    This choice trades run-time memory use for efficiency - the down 176 | side of this is that high-speed local connections (i.e., over a local 177 | ethernet) will transfer data at a slower rate than necessary. 178 | 179 |

    This parameter can easily be tweaked by the site designer, and can 180 | in fact be adjusted on a per-server or even per-client basis. For 181 | example, you could have the FTP server use larger buffer sizes for 182 | connections from certain domains. 183 | 184 |

    If there's enough interest, I have some rough ideas for how to make 185 | these buffer sizes automatically adjust to an optimal setting. Send 186 | email if you'd like to see this feature. 187 | 188 |


    189 | 190 |

    See ./medusa.html for a brief overview of 191 | some of the ideas behind Medusa's design, and for a description of 192 | current and upcoming features. 193 | 194 |

    Enjoy!

    195 | 196 |
    197 |
    -Sam Rushing 198 |
    rushing@nightmare.com 199 | 200 | 205 | 206 | 207 | 208 | -------------------------------------------------------------------------------- /supervisor/tests/test_socket_manager.py: -------------------------------------------------------------------------------- 1 | """Test suite for supervisor.socket_manager""" 2 | 3 | import gc 4 | import os 5 | import unittest 6 | import socket 7 | import tempfile 8 | 9 | try: 10 | import __pypy__ 11 | except ImportError: 12 | __pypy__ = None 13 | 14 | from supervisor.tests.base import DummySocketConfig 15 | from supervisor.tests.base import DummyLogger 16 | from supervisor.datatypes import UnixStreamSocketConfig 17 | from supervisor.datatypes import InetStreamSocketConfig 18 | 19 | class Subject: 20 | 21 | def __init__(self): 22 | self.value = 5 23 | 24 | def getValue(self): 25 | return self.value 26 | 27 | def setValue(self, val): 28 | self.value = val 29 | 30 | class ProxyTest(unittest.TestCase): 31 | 32 | def setUp(self): 33 | self.on_deleteCalled = False 34 | 35 | def _getTargetClass(self): 36 | from supervisor.socket_manager import Proxy 37 | return Proxy 38 | 39 | def _makeOne(self, *args, **kw): 40 | return self._getTargetClass()(*args, **kw) 41 | 42 | def setOnDeleteCalled(self): 43 | self.on_deleteCalled = True 44 | 45 | def test_proxy_getattr(self): 46 | proxy = self._makeOne(Subject()) 47 | self.assertEqual(5, proxy.getValue()) 48 | 49 | def test_on_delete(self): 50 | proxy = self._makeOne(Subject(), on_delete=self.setOnDeleteCalled) 51 | self.assertEqual(5, proxy.getValue()) 52 | proxy = None 53 | gc_collect() 54 | self.assertTrue(self.on_deleteCalled) 55 | 56 | class ReferenceCounterTest(unittest.TestCase): 57 | 58 | def setUp(self): 59 | self.running = False 60 | 61 | def start(self): 62 | self.running = True 63 | 64 | def stop(self): 65 | self.running = False 66 | 67 | def _getTargetClass(self): 68 | from supervisor.socket_manager import ReferenceCounter 69 | return ReferenceCounter 70 | 71 | def 
    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_incr_and_decr(self):
        ctr = self._makeOne(on_zero=self.stop, on_non_zero=self.start)
        self.assertFalse(self.running)
        ctr.increment()
        self.assertTrue(self.running)
        self.assertEqual(1, ctr.get_count())
        ctr.increment()
        self.assertTrue(self.running)
        self.assertEqual(2, ctr.get_count())
        ctr.decrement()
        self.assertTrue(self.running)
        self.assertEqual(1, ctr.get_count())
        ctr.decrement()
        self.assertFalse(self.running)
        self.assertEqual(0, ctr.get_count())

    def test_decr_at_zero_raises_error(self):
        ctr = self._makeOne(on_zero=self.stop, on_non_zero=self.start)
        self.assertRaises(Exception, ctr.decrement)

class SocketManagerTest(unittest.TestCase):

    def tearDown(self):
        gc_collect()

    def _getTargetClass(self):
        from supervisor.socket_manager import SocketManager
        return SocketManager

    def _makeOne(self, *args, **kw):
        return self._getTargetClass()(*args, **kw)

    def test_repr(self):
        conf = DummySocketConfig(2)
        sock_manager = self._makeOne(conf)
        expected = "<%s at %s for %s>" % (
            sock_manager.__class__, id(sock_manager), conf.url)
        self.assertEqual(repr(sock_manager), expected)

    def test_get_config(self):
        conf = DummySocketConfig(2)
        sock_manager = self._makeOne(conf)
        self.assertEqual(conf, sock_manager.config())

    def test_tcp_w_hostname(self):
        conf = InetStreamSocketConfig('localhost', 51041)
        sock_manager = self._makeOne(conf)
        self.assertEqual(sock_manager.socket_config, conf)
        sock = sock_manager.get_socket()
        self.assertEqual(sock.getsockname(), ('127.0.0.1', 51041))

    def test_tcp_w_ip(self):
        conf = InetStreamSocketConfig('127.0.0.1', 51041)
        sock_manager = self._makeOne(conf)
        self.assertEqual(sock_manager.socket_config, conf)
        sock = sock_manager.get_socket()
        self.assertEqual(sock.getsockname(), ('127.0.0.1', 51041))

    def test_unix(self):
        (tf_fd, tf_name) = tempfile.mkstemp()
        conf = UnixStreamSocketConfig(tf_name)
        sock_manager = self._makeOne(conf)
        self.assertEqual(sock_manager.socket_config, conf)
        sock = sock_manager.get_socket()
        self.assertEqual(sock.getsockname(), tf_name)
        sock = None
        os.close(tf_fd)

    def test_socket_lifecycle(self):
        conf = DummySocketConfig(2)
        sock_manager = self._makeOne(conf)
        # Assert that sockets are created on demand
        self.assertFalse(sock_manager.is_prepared())
        # Get two socket references
        sock = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())  # socket created on demand
        sock_id = id(sock._get())
        sock2 = sock_manager.get_socket()
        sock2_id = id(sock2._get())
        # Assert that they are not the same proxy object
        self.assertNotEqual(sock, sock2)
        # Assert that they are the same underlying socket
        self.assertEqual(sock_id, sock2_id)
        # Socket not actually closed yet b/c ref ct is 2
        self.assertEqual(2, sock_manager.get_socket_ref_count())
        self.assertTrue(sock_manager.is_prepared())
        self.assertFalse(sock_manager.socket.close_called)
        sock = None
        gc_collect()
        # Socket not actually closed yet b/c ref ct is 1
        self.assertTrue(sock_manager.is_prepared())
        self.assertFalse(sock_manager.socket.close_called)
        sock2 = None
        gc_collect()
        # Socket closed
        self.assertFalse(sock_manager.is_prepared())
        self.assertTrue(sock_manager.socket.close_called)

        # Get a new socket reference
        sock3 = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())
        sock3_id = id(sock3._get())
        # Assert that it is not the same socket
        self.assertNotEqual(sock_id, sock3_id)
        # Drop ref ct to zero
        del sock3
        gc_collect()
        # Now assert that socket is closed
        self.assertFalse(sock_manager.is_prepared())
        self.assertTrue(sock_manager.socket.close_called)

    def test_logging(self):
        conf = DummySocketConfig(1)
        logger = DummyLogger()
        sock_manager = self._makeOne(conf, logger=logger)
        # socket open
        sock = sock_manager.get_socket()
        self.assertEqual(len(logger.data), 1)
        self.assertEqual('Creating socket %s' % repr(conf), logger.data[0])
        # socket close
        del sock
        gc_collect()
        self.assertEqual(len(logger.data), 2)
        self.assertEqual('Closing socket %s' % repr(conf), logger.data[1])

    def test_prepare_socket(self):
        conf = DummySocketConfig(1)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())
        self.assertFalse(sock.bind_called)
        self.assertTrue(sock.listen_called)
        self.assertFalse(sock.close_called)

    def test_prepare_socket_uses_configured_backlog(self):
        conf = DummySocketConfig(1, backlog=42)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())
        self.assertEqual(sock.listen_backlog, conf.get_backlog())

    def test_prepare_socket_uses_somaxconn_if_no_backlog_configured(self):
        conf = DummySocketConfig(1, backlog=None)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        self.assertTrue(sock_manager.is_prepared())
        self.assertEqual(sock.listen_backlog, socket.SOMAXCONN)

    def test_tcp_socket_already_taken(self):
        conf = InetStreamSocketConfig('127.0.0.1', 51041)
        sock_manager = self._makeOne(conf)
        sock = sock_manager.get_socket()
        sock_manager2 = self._makeOne(conf)
        self.assertRaises(socket.error, sock_manager2.get_socket)
        del sock

    def test_unix_bad_sock(self):
        conf = UnixStreamSocketConfig('/notthere/foo.sock')
        sock_manager = self._makeOne(conf)
        self.assertRaises(socket.error, sock_manager.get_socket)

    def test_close_requires_prepared_socket(self):
        conf = InetStreamSocketConfig('127.0.0.1', 51041)
        sock_manager = self._makeOne(conf)
        self.assertFalse(sock_manager.is_prepared())
        try:
            sock_manager._close()
            self.fail()
        except Exception as e:
            self.assertEqual(e.args[0], 'Socket has not been prepared')

def gc_collect():
    if __pypy__ is not None:
        gc.collect()
        gc.collect()
        gc.collect()
--------------------------------------------------------------------------------
/docs/logging.rst:
--------------------------------------------------------------------------------
Logging
=======

One of the main tasks that :program:`supervisord` performs is logging.
:program:`supervisord` logs an activity log detailing what it's doing
as it runs.  It also logs child process stdout and stderr output to
other files if configured to do so.

Activity Log
------------

The activity log is the place where :program:`supervisord` logs
messages about its own health, its subprocesses' state changes, any
messages that result from events, and debug and informational
messages.  The path to the activity log is configured via the
``logfile`` parameter in the ``[supervisord]`` section of the
configuration file, defaulting to :file:`$CWD/supervisord.log`.  If
the value of this option is the special string ``syslog``, the
activity log will be routed to the syslog service instead of being
written to a file.  Sample activity log traffic is shown in the
example below.  Some lines have been broken to better fit the screen.
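A minimal ``[supervisord]`` section illustrating the ``logfile``
setting just described might look like the following (the path is
illustrative only; use ``logfile = syslog`` to route the activity log
to the syslog service instead):

.. code-block:: ini

   [supervisord]
   logfile = /var/log/supervisord.log
   loglevel = info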

Sample Activity Log Output
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: text

   2007-09-08 14:43:22,886 DEBG 127.0.0.1:Medusa (V1.11) started at Sat Sep 8 14:43:22 2007
           Hostname: kingfish
           Port:9001
   2007-09-08 14:43:22,961 INFO RPC interface 'supervisor' initialized
   2007-09-08 14:43:22,961 CRIT Running without any HTTP authentication checking
   2007-09-08 14:43:22,962 INFO supervisord started with pid 27347
   2007-09-08 14:43:23,965 INFO spawned: 'listener_00' with pid 27349
   2007-09-08 14:43:23,970 INFO spawned: 'eventgen' with pid 27350
   2007-09-08 14:43:23,990 INFO spawned: 'grower' with pid 27351
   2007-09-08 14:43:24,059 DEBG 'listener_00' stderr output:
   /Users/chrism/projects/supervisor/supervisor2/dev-sandbox/bin/python:
   can't open file '/Users/chrism/projects/supervisor/supervisor2/src/supervisor/scripts/osx_eventgen_listener.py':
   [Errno 2] No such file or directory
   2007-09-08 14:43:24,060 DEBG fd 7 closed, stopped monitoring (stdout)>
   2007-09-08 14:43:24,060 INFO exited: listener_00 (exit status 2; not expected)
   2007-09-08 14:43:24,061 DEBG received SIGCHLD indicating a child quit

The activity log "level" is configured in the config file via the
``loglevel`` parameter in the ``[supervisord]`` ini file section.
When ``loglevel`` is set, messages of the specified priority, plus
those with any higher priority, are logged to the activity log.  For
example, if ``loglevel`` is ``error``, only messages of ``error`` and
``critical`` priority will be logged.  However, if ``loglevel`` is
``warn``, messages of ``warn``, ``error``, and ``critical`` priority
will be logged.

.. _activity_log_levels:

Activity Log Levels
~~~~~~~~~~~~~~~~~~~

The table below describes the logging levels in more detail, ordered
from highest priority to lowest.  The "Config File Value" is the
string provided to the ``loglevel`` parameter in the ``[supervisord]``
section of the configuration file, and the "Output Code" is the code
that shows up in activity log output lines.

================= =========== ============================================
Config File Value Output Code Description
================= =========== ============================================
critical          CRIT        Messages that indicate a condition that
                              requires immediate user attention, a
                              supervisor state change, or an error in
                              supervisor itself.
error             ERRO        Messages that indicate a potentially
                              ignorable error condition (e.g. unable to
                              clear a log directory).
warn              WARN        Messages that indicate an anomalous
                              condition which isn't an error.
info              INFO        Normal informational output.  This is the
                              default log level if none is explicitly
                              configured.
debug             DEBG        Messages useful for users trying to debug
                              process configuration and communications
                              behavior (process output, listener state
                              changes, event notifications).
trace             TRAC        Messages useful for developers trying to
                              debug supervisor plugins, and information
                              about HTTP and RPC requests and responses.
blather           BLAT        Messages useful for developers trying to
                              debug supervisor itself.
================= =========== ============================================

Activity Log Rotation
~~~~~~~~~~~~~~~~~~~~~

The activity log is "rotated" by :program:`supervisord` based on the
combination of the ``logfile_maxbytes`` and the ``logfile_backups``
parameters in the ``[supervisord]`` section of the configuration file.
When the activity log reaches ``logfile_maxbytes`` bytes, the current
log file is moved to a backup file and a new activity log file is
created.  When this happens, if the number of existing backup files is
greater than or equal to ``logfile_backups``, the oldest backup file
is removed and the backup files are renamed accordingly.  If the file
being written to is named :file:`supervisord.log`, when it exceeds
``logfile_maxbytes``, it is closed and renamed to
:file:`supervisord.log.1`, and if files :file:`supervisord.log.1`,
:file:`supervisord.log.2` etc. already exist, then they are renamed to
:file:`supervisord.log.2`, :file:`supervisord.log.3` etc.
respectively.  If ``logfile_maxbytes`` is 0, the logfile is never
rotated (and thus backups are never made).  If ``logfile_backups`` is
0, no backups will be kept.

Child Process Logs
------------------

The stdout of child processes spawned by supervisor, by default, is
captured for redisplay to users of :program:`supervisorctl` and other
clients.  If no specific logfile-related configuration is performed in
a ``[program:x]``, ``[fcgi-program:x]``, or ``[eventlistener:x]``
section in the configuration file, the following is true:

- :program:`supervisord` will capture the child process' stdout and
  stderr output into temporary files.  Each stream is captured to a
  separate file.  This is known as ``AUTO`` log mode.

- ``AUTO`` log files are named automatically and placed in the
  directory configured as the ``childlogdir`` of the ``[supervisord]``
  section of the config file.

- The size of each ``AUTO`` log file is bounded by the
  ``{streamname}_logfile_maxbytes`` value of the program section
  (where ``{streamname}`` is "stdout" or "stderr").  When it reaches
  that size, it is rotated (like the activity log), based on the
  ``{streamname}_logfile_backups`` value.
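As a sketch, a hypothetical ``[program:x]`` section that bounds its
``AUTO`` logs as described above could set the per-stream limits
explicitly (the program name and values here are illustrative only):

.. code-block:: ini

   [program:mycat]
   command = /bin/cat
   stdout_logfile_maxbytes = 1MB
   stdout_logfile_backups = 10
   stderr_logfile_maxbytes = 1MB
   stderr_logfile_backups = 10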

The configuration keys that influence child process logging in
``[program:x]`` and ``[fcgi-program:x]`` sections are these:

``redirect_stderr``, ``stdout_logfile``, ``stdout_logfile_maxbytes``,
``stdout_logfile_backups``, ``stdout_capture_maxbytes``, ``stdout_syslog``,
``stderr_logfile``, ``stderr_logfile_maxbytes``,
``stderr_logfile_backups``, ``stderr_capture_maxbytes``, and
``stderr_syslog``.

``[eventlistener:x]`` sections may not specify
``redirect_stderr``, ``stdout_capture_maxbytes``, or
``stderr_capture_maxbytes``, but otherwise they accept the same values.

The configuration keys that influence child process logging in the
``[supervisord]`` config file section are these:
``childlogdir`` and ``nocleanup``.

.. _capture_mode:

Capture Mode
~~~~~~~~~~~~

Capture mode is an advanced feature of Supervisor.  You needn't
understand capture mode unless you want to take actions based on data
parsed from subprocess output.

If a ``[program:x]`` section in the configuration file defines a
non-zero ``stdout_capture_maxbytes`` or ``stderr_capture_maxbytes``
parameter, each process represented by the program section may emit
special tokens on its stdout or stderr stream (respectively) which
will effectively cause supervisor to emit a ``PROCESS_COMMUNICATION``
event (see :ref:`events` for a description of events).

The process communications protocol relies on two tags, one which
commands supervisor to enter "capture mode" for the stream and one
which commands it to exit.  When a process stream enters "capture
mode", data sent to the stream will be sent to a separate buffer in
memory, the "capture buffer", which is allowed to contain a maximum of
``capture_maxbytes`` bytes.  During capture mode, when the buffer's
length exceeds ``capture_maxbytes`` bytes, the earliest data in the
buffer is discarded to make room for new data.  When a process stream
exits capture mode, a ``PROCESS_COMMUNICATION`` event subtype is
emitted by supervisor, which may be intercepted by event listeners.

The tag to begin "capture mode" in a process stream is
``<!--XSUPERVISOR:BEGIN-->``.  The tag to exit capture mode is
``<!--XSUPERVISOR:END-->``.  The data between these tags may be
arbitrary, and forms the payload of the ``PROCESS_COMMUNICATION``
event.  For example, if a program is set up with a
``stdout_capture_maxbytes`` of "1MB", and it emits the following on
its stdout stream:

.. code-block:: text

   <!--XSUPERVISOR:BEGIN-->Hello!<!--XSUPERVISOR:END-->

In this circumstance, :program:`supervisord` will emit a
``PROCESS_COMMUNICATION_STDOUT`` event with data in the payload of
"Hello!".

The output of processes specified as "event listeners"
(``[eventlistener:x]`` sections) is not processed this way.
Output from these processes cannot enter capture mode.
--------------------------------------------------------------------------------
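As an illustration of the capture-mode protocol described above, a
child program can wrap a payload in the begin/end tokens itself.  The
``write_pcomm`` helper below is a hypothetical sketch, not part of
supervisor's API:

```python
import sys

# Capture-mode begin/end tokens recognized by supervisord.
BEGIN_TOKEN = '<!--XSUPERVISOR:BEGIN-->'
END_TOKEN = '<!--XSUPERVISOR:END-->'

def write_pcomm(payload, stream=sys.stdout):
    """Emit payload wrapped in the capture-mode tokens and flush the
    stream so supervisord sees the complete token sequence at once."""
    stream.write(BEGIN_TOKEN + payload + END_TOKEN)
    stream.flush()

if __name__ == '__main__':
    # Run under supervisord with a non-zero stdout_capture_maxbytes,
    # this would produce a PROCESS_COMMUNICATION_STDOUT event whose
    # payload is "Hello!".
    write_pcomm('Hello!')
```

The flush matters: without it, buffered stdio could split the token
sequence across writes that supervisord reads at different times.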