├── .gitignore ├── .travis.yml ├── AUTHORS.rst ├── CONTRIBUTING.rst ├── HISTORY.rst ├── LICENSE ├── MANIFEST.in ├── Makefile ├── README.rst ├── docs ├── Makefile ├── authors.rst ├── conf.py ├── contributing.rst ├── history.rst ├── index.rst ├── installation.rst ├── make.bat ├── mysql-statsd.conf ├── mysql_statsd ├── readme.rst └── usage.rst ├── mysql_statsd ├── __init__.py ├── mysql_statsd.py ├── preprocessors │ ├── __init__.py │ ├── columns_preprocessor.py │ ├── innodb_preprocessor.py │ ├── interface.py │ └── mysql_preprocessor.py ├── thread_base.py ├── thread_manager.py ├── thread_mysql.py └── thread_statsd.py ├── requirements.txt ├── setup.py ├── tests ├── __init__.py └── fixtures │ └── show-innodb-status-5.5-vanilla └── tox.ini /.gitignore: -------------------------------------------------------------------------------- 1 | *.py[cod] 2 | 3 | # C extensions 4 | *.so 5 | 6 | # Packages 7 | *.egg 8 | *.egg-info 9 | dist 10 | build 11 | eggs 12 | parts 13 | bin 14 | var 15 | sdist 16 | develop-eggs 17 | .installed.cfg 18 | lib 19 | lib64 20 | 21 | # Installer logs 22 | pip-log.txt 23 | 24 | # Unit test / coverage reports 25 | .coverage 26 | .tox 27 | nosetests.xml 28 | 29 | # Translations 30 | *.mo 31 | 32 | # Mr Developer 33 | .mr.developer.cfg 34 | .project 35 | .pydevproject 36 | 37 | # Complexity 38 | output/*.html 39 | output/*/index.html 40 | 41 | # Sphinx 42 | docs/_build -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | # Config file for automatic testing at travis-ci.org 2 | 3 | language: python 4 | 5 | python: 6 | - "2.7" 7 | # - "2.6" we're using argparse which isn't compatible with py2.6 8 | # - "pypy" also not there for pypy apparently.. 9 | 10 | # command to install dependencies, e.g. pip install -r requirements.txt --use-mirrors 11 | install: pip install -r requirements.txt 12 | 13 | # command to run tests, e.g. 
python setup.py test 14 | script: python setup.py test 15 | -------------------------------------------------------------------------------- /AUTHORS.rst: -------------------------------------------------------------------------------- 1 | ======= 2 | Credits 3 | ======= 4 | 5 | Development Lead 6 | ---------------- 7 | 8 | * Jasper Capel 9 | * Thijs de Zoete 10 | 11 | Contributors 12 | ------------ 13 | 14 | * Art van Scheppingen (idea and rough first implementation) 15 | -------------------------------------------------------------------------------- /CONTRIBUTING.rst: -------------------------------------------------------------------------------- 1 | ============ 2 | Contributing 3 | ============ 4 | 5 | Contributions are welcome, and they are greatly appreciated! Every 6 | little bit helps, and credit will always be given. 7 | 8 | You can contribute in many ways: 9 | 10 | Types of Contributions 11 | ---------------------- 12 | 13 | Report Bugs 14 | ~~~~~~~~~~~ 15 | 16 | Report bugs at https://github.com/spilgames/mysql-statsd/issues. 17 | 18 | If you are reporting a bug, please include: 19 | 20 | * Your operating system name and version. 21 | * Any details about your local setup that might be helpful in troubleshooting. 22 | * Detailed steps to reproduce the bug. 23 | 24 | Fix Bugs 25 | ~~~~~~~~ 26 | 27 | Look through the GitHub issues for bugs. Anything tagged with "bug" 28 | is open to whoever wants to implement it. 29 | 30 | Implement Features 31 | ~~~~~~~~~~~~~~~~~~ 32 | 33 | Look through the GitHub issues for features. Anything tagged with "feature" 34 | is open to whoever wants to implement it. 35 | 36 | Write Documentation 37 | ~~~~~~~~~~~~~~~~~~~ 38 | 39 | mysql-statsd could always use more documentation, whether as part of the 40 | official mysql-statsd docs, in docstrings, or even on the web in blog posts, 41 | articles, and such. 
42 | 43 | Submit Feedback 44 | ~~~~~~~~~~~~~~~ 45 | 46 | The best way to send feedback is to file an issue at https://github.com/spilgames/mysql-statsd/issues. 47 | 48 | If you are proposing a feature: 49 | 50 | * Explain in detail how it would work. 51 | * Keep the scope as narrow as possible, to make it easier to implement. 52 | * Remember that this is a volunteer-driven project, and that contributions 53 | are welcome :) 54 | 55 | Get Started! 56 | ------------ 57 | 58 | Ready to contribute? Here's how to set up `mysql-statsd` for local development. 59 | 60 | 1. Fork the `mysql-statsd` repo on GitHub. 61 | 2. Clone your fork locally:: 62 | 63 | $ git clone git@github.com:your_name_here/mysql-statsd.git 64 | 65 | 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:: 66 | 67 | $ mkvirtualenv mysql-statsd 68 | $ cd mysql-statsd/ 69 | $ python setup.py develop 70 | 71 | 4. Create a branch for local development:: 72 | 73 | $ git checkout -b name-of-your-bugfix-or-feature 74 | 75 | Now you can make your changes locally. 76 | 77 | 5. When you're done making changes, check that your changes pass flake8 and the 78 | tests, including testing other Python versions with tox:: 79 | 80 | $ flake8 mysql-statsd tests 81 | $ python setup.py test 82 | $ tox 83 | 84 | To get flake8 and tox, just pip install them into your virtualenv. 85 | 86 | 6. Commit your changes and push your branch to GitHub:: 87 | 88 | $ git add . 89 | $ git commit -m "Your detailed description of your changes." 90 | $ git push origin name-of-your-bugfix-or-feature 91 | 92 | 7. Submit a pull request through the GitHub website. 93 | 94 | Pull Request Guidelines 95 | ----------------------- 96 | 97 | Before you submit a pull request, check that it meets these guidelines: 98 | 99 | 1. The pull request should include tests. 100 | 2. If the pull request adds functionality, the docs should be updated. 
Put 101 | your new functionality into a function with a docstring, and add the 102 | feature to the list in README.rst. 103 | 3. The pull request should work for Python 2.6, 2.7, and 3.3, and for PyPy. Check 104 | https://travis-ci.org/spilgames/mysql-statsd/pull_requests 105 | and make sure that the tests pass for all supported Python versions. 106 | 107 | Tips 108 | ---- 109 | 110 | To run a subset of tests:: 111 | 112 | $ python -m unittest tests.test_mysql-statsd 113 | -------------------------------------------------------------------------------- /HISTORY.rst: -------------------------------------------------------------------------------- 1 | .. :changelog: 2 | 3 | History 4 | ------- 5 | 6 | 0.1.5 (2013-08-30) 7 | ++++++++++++++++++ 8 | 9 | * Support socket config 10 | * Add innodb preprocessor update 11 | 12 | 13 | 0.1.1 (2013-08-30) 14 | ++++++++++++++++++ 15 | 16 | * Preparing package for sdist releases 17 | 18 | 0.1.0 (2013-08-30) 19 | ++++++++++++++++++ 20 | 21 | * First release on PyPI. 22 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2013, Jasper Capel 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 5 | 6 | * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 7 | 8 | * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 9 | 10 | * Neither the name of mysql-statsd nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
11 | 12 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include AUTHORS.rst 2 | include CONTRIBUTING.rst 3 | include HISTORY.rst 4 | include LICENSE 5 | include README.rst 6 | include mysql_statsd/preprocessors/* 7 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: clean-pyc clean-build docs 2 | 3 | help: 4 | @echo "clean-build - remove build artifacts" 5 | @echo "clean-pyc - remove Python file artifacts" 6 | @echo "lint - check style with flake8" 7 | @echo "test - run tests quickly with the default Python" 8 | @echo "testall - run tests on every Python version with tox" 9 | @echo "coverage - check code coverage quickly with the default Python" 10 | @echo "docs - generate Sphinx HTML documentation, including API docs" 11 | @echo "release - package and upload a release" 12 | @echo "sdist - package" 13 | 14 | clean: clean-build clean-pyc 15 | 16 | clean-build: 17 | rm -fr build/ 18 | rm -fr dist/ 19 | rm -fr *.egg-info 20 | 21 | clean-pyc: 22 | find . 
-name '*.pyc' -exec rm -f {} + 23 | find . -name '*.pyo' -exec rm -f {} + 24 | find . -name '*~' -exec rm -f {} + 25 | 26 | lint: 27 | flake8 mysql-statsd tests 28 | 29 | test: 30 | python setup.py test 31 | 32 | test-all: 33 | tox 34 | 35 | coverage: 36 | coverage run --source mysql-statsd setup.py test 37 | coverage report -m 38 | coverage html 39 | open htmlcov/index.html 40 | 41 | docs: 42 | rm -f docs/mysql-statsd.rst 43 | rm -f docs/modules.rst 44 | sphinx-apidoc -o docs/ mysql-statsd 45 | $(MAKE) -C docs clean 46 | $(MAKE) -C docs html 47 | open docs/_build/html/index.html 48 | 49 | release: clean 50 | python setup.py sdist upload 51 | 52 | sdist: clean 53 | python setup.py sdist 54 | ls -l dist -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | =============================== 2 | 3 | Deprecation 4 | =========== 5 | This project is no longer supported by Spil Games and has been adopted by `DB-Art `_. 6 | 7 | The new repository can be found here: 8 | `MySQL-StatsD @ DB-Art `_ 9 | 10 | 11 | mysql-statsd 12 | =============================== 13 | 14 | Daemon that gathers statistics from MySQL and sends them to statsd. 15 | 16 | - Free software: BSD license 17 | - Documentation: http://mysql-statsd.rtfd.org. 18 | 19 | 20 | Usage / Installation 21 | ==================== 22 | 23 | Install mysql\_statsd through pip (pip is a Python package manager; 24 | please don't use sudo!): 25 | 26 | :: 27 | 28 | pip install mysql_statsd 29 | 30 | If all went well, you'll now have a new executable called mysql\_statsd 31 | in your path.
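If the install went well, a quick hedged way to double-check is to ask Python whether the package can be found at all. This sketch is illustrative only (the helper function is ours, not part of mysql_statsd, and it assumes a Python 3 interpreter):

```python
# Post-install sanity check: can the interpreter that pip installed into
# actually find the mysql_statsd package?
# This helper is illustrative and not part of mysql_statsd itself.
import importlib.util


def is_installed(module_name):
    """Return True if a top-level module or package is importable."""
    return importlib.util.find_spec(module_name) is not None


if is_installed("mysql_statsd"):
    print("mysql_statsd is importable")
else:
    print("mysql_statsd not found - check which pip/python you used")
```

If this reports the package as missing, you most likely installed it with a different pip than the Python interpreter you are running.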
32 | 33 | Running mysql\_statsd 34 | --------------------- 35 | 36 | :: 37 | 38 | $ mysql_statsd --config /etc/mysql-statsd.conf 39 | 40 | This assumes you placed a config file named mysql-statsd.conf in /etc/. 41 | 42 | See our example 43 | `configuration `__ 44 | or read the Configuration section below. 45 | 46 | Running the above command will start mysql\_statsd in daemon mode. If 47 | you wish to see its output, run the command with -f / --foreground 48 | 49 | 50 | Usage 51 | ----- 52 | 53 | :: 54 | 55 | $ mysql_statsd --help 56 | usage: mysql_statsd.py [-h] [-c FILE] [-d] [-f] 57 | 58 | optional arguments: 59 | -h, --help show this help message and exit 60 | -c FILE, --config FILE 61 | Configuration file 62 | -d, --debug Prints statsd metrics next to sending them 63 | --dry-run Print the output that would be sent to statsd without 64 | actually sending data somewhere 65 | -f, --foreground Dont fork main program 66 | 67 | At the moment there is also a `daemon 68 | script `_ 69 | for this package. 70 | 71 | You're more than welcome to help us improve it! 72 | 73 | 74 | Platforms 75 | --------- 76 | 77 | We would love to support many other kinds of database servers, but 78 | currently we're supporting these: 79 | 80 | - MySQL 5.1 81 | - MySQL 5.5 82 | - Galera 83 | 84 | Both MySQL versions are supported in the Percona flavour as well as vanilla.
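Whichever of these platforms you run, the flags from the usage output above make for a convenient first smoke test. A hedged sketch (the config path is an assumption, and the `command -v` guard only exists so the snippet degrades gracefully when the package is not installed yet):

```shell
# Smoke-test sketch: --foreground keeps the process attached and --dry-run
# prints metrics instead of sending them (both flags per the usage output).
# /etc/mysql-statsd.conf is an assumed location; adjust to your setup.
CONF=/etc/mysql-statsd.conf

if command -v mysql_statsd >/dev/null 2>&1; then
    STATUS=installed
    mysql_statsd --config "$CONF" --foreground --dry-run
else
    STATUS=missing
    echo "mysql_statsd not on PATH; see the installation steps above"
fi
```

Once the printed metric names and types look right, drop --dry-run and point the daemon at your real StatsD instance.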
85 | 86 | Todo: 87 | ~~~~~ 88 | 89 | Support for the following platforms: 90 | 91 | - MySQL 5.6 92 | - MariaDB 93 | 94 | We're looking forward to your pull requests for other platforms. 95 | 96 | Development installation 97 | ------------------------ 98 | 99 | To install the package, set up a `Python virtual 100 | environment `_. 101 | 102 | Install the requirements (once the virtual environment is active): 103 | 104 | :: 105 | 106 | pip install -r requirements.txt 107 | 108 | *NOTE: the MySQL-Python package needs the mysql\_config command to be in your 109 | path.* 110 | 111 | There are future plans to replace the mysql-python package with 112 | `PyMySQL `_. 113 | 114 | After that you can run the script with 115 | 116 | :: 117 | 118 | $ python mysql_statsd/mysql_statsd.py 119 | 120 | Coding standards 121 | ---------------- 122 | 123 | We like to stick with the standard Python way of working: 124 | `PEP-8 `_ 125 | 126 | 127 | 128 | Configuration 129 | ============= 130 | 131 | The configuration consists of four sections: 132 | 133 | - daemon specific (log/pid files) 134 | - statsd (host, port, prefixes) 135 | - mysql (connections, queries, etc.) 136 | - metrics (metrics to be stored, including their type) 137 | 138 | Daemon 139 | ------ 140 | The daemon section allows you to set the paths to your log and pid files. 141 | 142 | Statsd 143 | ------ 144 | The Statsd section allows you to configure the prefix and hostname of the 145 | metrics. In our example the prefix has been set to mysql and the hostname 146 | is included. This will log the status.com_select metric to: 147 | mysql..status.com_select 148 | 149 | You can use any prefix that is necessary in your environment. 150 | 151 | MySQL 152 | ----- 153 | The MySQL section allows you to configure the credentials of your MySQL host 154 | (preferably on localhost) and the queries + timings for the metrics.
155 | The queries and timings are configured through the stats_types setting, 156 | so take for instance the following example: 157 | :: 158 | stats_types = status, innodb 159 | This will execute both the query_status and query_innodb on the MySQL server. 160 | The frequency can then be controlled through the time (in milliseconds) set in 161 | the interval_status and interval_innodb. 162 | The complete configuration would be: 163 | :: 164 | stats_types = status, innodb 165 | query_status = SHOW GLOBAL STATUS 166 | interval_status = 1000 167 | query_innodb = SHOW ENGINE INNODB STATUS 168 | interval_innodb = 10000 169 | 170 | A special case is the query_commit: since the connection opened by mysql_statsd 171 | is kept open and autocommit is turned off by default, the status 172 | variables are not updated if your server is set to REPEATABLE READ transaction 173 | isolation. Also, your history list will most probably skyrocket and your 174 | ibdata files will grow fast enough to drain all available disk space. So when 175 | in doubt about your transaction isolation: do include the query_commit! 176 | 177 | Now here is the interesting part of mysql_statsd: if you wish to keep track 178 | of your own application data inside your application database, you *could* 179 | create your own custom query this way. For example: 180 | :: 181 | stats_types = myapp 182 | query_myapp = SELECT some_metric_name, some_metric_value FROM myapp.metric_table WHERE metric_ts >= DATE_SUB(NOW(), interval 1 MINUTE) 183 | interval_myapp = 60000 184 | 185 | This will query your application database every 60 seconds, fetch all the 186 | metrics that changed during the last minute and send them through StatsD. 187 | Obviously you need to whitelist them via the metrics section below. 188 | 189 | Metrics 190 | ------- 191 | The metrics section is basically a whitelist of all metrics you wish to 192 | send to Graphite via StatsD.
Currently there is no possibility to whitelist all 193 | possible metrics, but there is a special case where we do allow wildcarding: 194 | for bufferpool\_* we whitelist all bufferpools with that specific metric. 195 | Don't worry if you haven't configured multiple bufferpools: InnoDB will omit 196 | that output and the preprocessor will not parse it. 197 | 198 | Important to know about the metrics is that you have to specify what type 199 | they are. By default Graphite stores all metrics equally but treats them 200 | differently per type: 201 | 202 | - Gauge (g for gauge) 203 | - Rate (r for raw, d for delta) 204 | - Timer (t for timer) 205 | 206 | Gauges are sticky values (like the speedometer in your car). Rates are the 207 | number of units that need to be translated to units per second. Timers are 208 | the time it took to perform a certain task. 209 | 210 | An ever increasing value like com\_select can be sent various ways. If you 211 | wish to retain the absolute value of com_select, it is advised to configure 212 | it as a gauge. However, if you are going to use it as a rate (queries per 213 | second), it is no use storing it as a gauge first and then later 214 | on calculating the derivative of that gauge to get the rate. It would be far more 215 | accurate to store it as a rate in the first place. 216 | 217 | Keep in mind that sending the com\_select value as a raw value is in this case 218 | a bad habit: StatsD will average out the collected metrics per second, so 219 | sending a value of 1,000,000 ten times within a 10-second timeframe will average 220 | out to the expected 1,000,000. However, as the processing of metrics also takes 221 | a bit of time, the chance of missing one beat is relatively high and you end up 222 | sending the value only 9 times, hence averaging out to 900,000 once in a while. 223 | 224 | The best way to configure com_select as a rate is by defining it as a delta.
225 | The delta metric will remember the metric as it was during the previous run and 226 | will only send the difference of the two values. 227 | 228 | 229 | 230 | Media: 231 | ====== 232 | 233 | Art gave a talk about this tool at Percona London 2013: 234 | http://www.percona.com/live/mysql-conference-2013/sessions/mysql-performance-monitoring-using-statsd-and-graphite 235 | 236 | Contributors 237 | ------------ 238 | 239 | spil-jasper 240 | 241 | thijsdezoete 242 | 243 | art-spilgames 244 | 245 | bnkr 246 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " devhelp to make HTML files and a Devhelp project" 34 | @echo " epub to make an epub" 35 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 36 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 37 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 38 | @echo " text to make text files" 39 | @echo " man to make manual pages" 40 | @echo " texinfo to make Texinfo files" 41 | @echo " info to make Texinfo files and run them through makeinfo" 42 | @echo " gettext to make PO message catalogs" 43 | @echo " changes to make an overview of all changed/added/deprecated items" 44 | @echo " xml to make Docutils-native XML files" 45 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 46 | @echo " linkcheck to check all external links for integrity" 47 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 48 | 49 | clean: 50 | rm -rf $(BUILDDIR)/* 51 | 52 | html: 53 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 54 | @echo 55 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 56 | 57 | dirhtml: 58 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 59 | @echo 60 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 
61 | 62 | singlehtml: 63 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 64 | @echo 65 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 66 | 67 | pickle: 68 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 69 | @echo 70 | @echo "Build finished; now you can process the pickle files." 71 | 72 | json: 73 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 74 | @echo 75 | @echo "Build finished; now you can process the JSON files." 76 | 77 | htmlhelp: 78 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 79 | @echo 80 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 81 | ".hhp project file in $(BUILDDIR)/htmlhelp." 82 | 83 | qthelp: 84 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 85 | @echo 86 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 87 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 88 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/complexity.qhcp" 89 | @echo "To view the help file:" 90 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/complexity.qhc" 91 | 92 | devhelp: 93 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 94 | @echo 95 | @echo "Build finished." 96 | @echo "To view the help file:" 97 | @echo "# mkdir -p $$HOME/.local/share/devhelp/complexity" 98 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/complexity" 99 | @echo "# devhelp" 100 | 101 | epub: 102 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 103 | @echo 104 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 105 | 106 | latex: 107 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 108 | @echo 109 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 110 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 111 | "(use \`make latexpdf' here to do that automatically)." 
112 | 113 | latexpdf: 114 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 115 | @echo "Running LaTeX files through pdflatex..." 116 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 117 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 118 | 119 | latexpdfja: 120 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 121 | @echo "Running LaTeX files through platex and dvipdfmx..." 122 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 123 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 124 | 125 | text: 126 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 127 | @echo 128 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 129 | 130 | man: 131 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 132 | @echo 133 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 134 | 135 | texinfo: 136 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 137 | @echo 138 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 139 | @echo "Run \`make' in that directory to run these through makeinfo" \ 140 | "(use \`make info' here to do that automatically)." 141 | 142 | info: 143 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 144 | @echo "Running Texinfo files through makeinfo..." 145 | make -C $(BUILDDIR)/texinfo info 146 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 147 | 148 | gettext: 149 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 150 | @echo 151 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 152 | 153 | changes: 154 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 155 | @echo 156 | @echo "The overview file is in $(BUILDDIR)/changes." 157 | 158 | linkcheck: 159 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 160 | @echo 161 | @echo "Link check complete; look for any errors in the above output " \ 162 | "or in $(BUILDDIR)/linkcheck/output.txt." 
163 | 164 | doctest: 165 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 166 | @echo "Testing of doctests in the sources finished, look at the " \ 167 | "results in $(BUILDDIR)/doctest/output.txt." 168 | 169 | xml: 170 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 171 | @echo 172 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 173 | 174 | pseudoxml: 175 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 176 | @echo 177 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." -------------------------------------------------------------------------------- /docs/authors.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../AUTHORS.rst -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # complexity documentation build configuration file, created by 5 | # sphinx-quickstart on Tue Jul 9 22:26:36 2013. 6 | # 7 | # This file is execfile()d with the current directory set to its containing dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 14 | 15 | import sys, os 16 | 17 | # If extensions (or modules to document with autodoc) are in another directory, 18 | # add these directories to sys.path here. If the directory is relative to the 19 | # documentation root, use os.path.abspath to make it absolute, like shown here. 20 | #sys.path.insert(0, os.path.abspath('.')) 21 | 22 | # Get the project root dir, which is the parent dir of this 23 | cwd = os.getcwd() 24 | project_root = os.path.dirname(cwd) 25 | 26 | # Insert the project root dir as the first element in the PYTHONPATH. 
27 | # This lets us ensure that the source package is imported, and that its 28 | # version is used. 29 | sys.path.insert(0, project_root) 30 | 31 | import mysql_statsd 32 | 33 | # -- General configuration ----------------------------------------------------- 34 | 35 | # If your documentation needs a minimal Sphinx version, state it here. 36 | #needs_sphinx = '1.0' 37 | 38 | # Add any Sphinx extension module names here, as strings. They can be extensions 39 | # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 40 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode'] 41 | 42 | # Add any paths that contain templates here, relative to this directory. 43 | templates_path = ['_templates'] 44 | 45 | # The suffix of source filenames. 46 | source_suffix = '.rst' 47 | 48 | # The encoding of source files. 49 | #source_encoding = 'utf-8-sig' 50 | 51 | # The master toctree document. 52 | master_doc = 'index' 53 | 54 | # General information about the project. 55 | project = u'mysql_statsd' 56 | copyright = u'2013, Jasper Capel' 57 | 58 | # The version info for the project you're documenting, acts as replacement for 59 | # |version| and |release|, also used in various other places throughout the 60 | # built documents. 61 | # 62 | # The short X.Y version. 63 | version = mysql_statsd.__version__ 64 | # The full version, including alpha/beta/rc tags. 65 | release = mysql_statsd.__version__ 66 | 67 | # The language for content autogenerated by Sphinx. Refer to documentation 68 | # for a list of supported languages. 69 | #language = None 70 | 71 | # There are two options for replacing |today|: either, you set today to some 72 | # non-false value, then it is used: 73 | #today = '' 74 | # Else, today_fmt is used as the format for a strftime call. 75 | #today_fmt = '%B %d, %Y' 76 | 77 | # List of patterns, relative to source directory, that match files and 78 | # directories to ignore when looking for source files. 
79 | exclude_patterns = ['_build'] 80 | 81 | # The reST default role (used for this markup: `text`) to use for all documents. 82 | #default_role = None 83 | 84 | # If true, '()' will be appended to :func: etc. cross-reference text. 85 | #add_function_parentheses = True 86 | 87 | # If true, the current module name will be prepended to all description 88 | # unit titles (such as .. function::). 89 | #add_module_names = True 90 | 91 | # If true, sectionauthor and moduleauthor directives will be shown in the 92 | # output. They are ignored by default. 93 | #show_authors = False 94 | 95 | # The name of the Pygments (syntax highlighting) style to use. 96 | pygments_style = 'sphinx' 97 | 98 | # A list of ignored prefixes for module index sorting. 99 | #modindex_common_prefix = [] 100 | 101 | # If true, keep warnings as "system message" paragraphs in the built documents. 102 | #keep_warnings = False 103 | 104 | 105 | # -- Options for HTML output --------------------------------------------------- 106 | 107 | # The theme to use for HTML and HTML Help pages. See the documentation for 108 | # a list of builtin themes. 109 | html_theme = 'default' 110 | 111 | # Theme options are theme-specific and customize the look and feel of a theme 112 | # further. For a list of options available for each theme, see the 113 | # documentation. 114 | #html_theme_options = {} 115 | 116 | # Add any paths that contain custom themes here, relative to this directory. 117 | #html_theme_path = [] 118 | 119 | # The name for this set of Sphinx documents. If None, it defaults to 120 | # " v documentation". 121 | #html_title = None 122 | 123 | # A shorter title for the navigation bar. Default is the same as html_title. 124 | #html_short_title = None 125 | 126 | # The name of an image file (relative to this directory) to place at the top 127 | # of the sidebar. 128 | #html_logo = None 129 | 130 | # The name of an image file (within the static path) to use as favicon of the 131 | # docs. 
This file should be a Windows icon file (.ico) being 16x16 or 32x32 132 | # pixels large. 133 | #html_favicon = None 134 | 135 | # Add any paths that contain custom static files (such as style sheets) here, 136 | # relative to this directory. They are copied after the builtin static files, 137 | # so a file named "default.css" will overwrite the builtin "default.css". 138 | html_static_path = ['_static'] 139 | 140 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 141 | # using the given strftime format. 142 | #html_last_updated_fmt = '%b %d, %Y' 143 | 144 | # If true, SmartyPants will be used to convert quotes and dashes to 145 | # typographically correct entities. 146 | #html_use_smartypants = True 147 | 148 | # Custom sidebar templates, maps document names to template names. 149 | #html_sidebars = {} 150 | 151 | # Additional templates that should be rendered to pages, maps page names to 152 | # template names. 153 | #html_additional_pages = {} 154 | 155 | # If false, no module index is generated. 156 | #html_domain_indices = True 157 | 158 | # If false, no index is generated. 159 | #html_use_index = True 160 | 161 | # If true, the index is split into individual pages for each letter. 162 | #html_split_index = False 163 | 164 | # If true, links to the reST sources are added to the pages. 165 | #html_show_sourcelink = True 166 | 167 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 168 | #html_show_sphinx = True 169 | 170 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 171 | #html_show_copyright = True 172 | 173 | # If true, an OpenSearch description file will be output, and all pages will 174 | # contain a tag referring to it. The value of this option must be the 175 | # base URL from which the finished HTML is served. 176 | #html_use_opensearch = '' 177 | 178 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 
179 | #html_file_suffix = None 180 | 181 | # Output file base name for HTML help builder. 182 | htmlhelp_basename = 'mysql-statsddoc' 183 | 184 | 185 | # -- Options for LaTeX output -------------------------------------------------- 186 | 187 | latex_elements = { 188 | # The paper size ('letterpaper' or 'a4paper'). 189 | #'papersize': 'letterpaper', 190 | 191 | # The font size ('10pt', '11pt' or '12pt'). 192 | #'pointsize': '10pt', 193 | 194 | # Additional stuff for the LaTeX preamble. 195 | #'preamble': '', 196 | } 197 | 198 | # Grouping the document tree into LaTeX files. List of tuples 199 | # (source start file, target name, title, author, documentclass [howto/manual]). 200 | latex_documents = [ 201 | ('index', 'mysql-statsd.tex', u'mysql-statsd Documentation', 202 | u'Jasper Capel', 'manual'), 203 | ] 204 | 205 | # The name of an image file (relative to this directory) to place at the top of 206 | # the title page. 207 | #latex_logo = None 208 | 209 | # For "manual" documents, if this is true, then toplevel headings are parts, 210 | # not chapters. 211 | #latex_use_parts = False 212 | 213 | # If true, show page references after internal links. 214 | #latex_show_pagerefs = False 215 | 216 | # If true, show URL addresses after external links. 217 | #latex_show_urls = False 218 | 219 | # Documents to append as an appendix to all manuals. 220 | #latex_appendices = [] 221 | 222 | # If false, no module index is generated. 223 | #latex_domain_indices = True 224 | 225 | 226 | # -- Options for manual page output -------------------------------------------- 227 | 228 | # One entry per manual page. List of tuples 229 | # (source start file, name, description, authors, manual section). 230 | man_pages = [ 231 | ('index', 'mysql-statsd', u'mysql-statsd Documentation', 232 | [u'Jasper Capel'], 1) 233 | ] 234 | 235 | # If true, show URL addresses after external links. 
236 | #man_show_urls = False
237 | 
238 | 
239 | # -- Options for Texinfo output ------------------------------------------------
240 | 
241 | # Grouping the document tree into Texinfo files. List of tuples
242 | # (source start file, target name, title, author,
243 | #  dir menu entry, description, category)
244 | texinfo_documents = [
245 |     ('index', 'mysql-statsd', u'mysql-statsd Documentation',
246 |      u'Jasper Capel', 'mysql-statsd', 'Daemon to monitor MySQL through statsd.',
247 |      'Miscellaneous'),
248 | ]
249 | 
250 | # Documents to append as an appendix to all manuals.
251 | #texinfo_appendices = []
252 | 
253 | # If false, no module index is generated.
254 | #texinfo_domain_indices = True
255 | 
256 | # How to display URL addresses: 'footnote', 'no', or 'inline'.
257 | #texinfo_show_urls = 'footnote'
258 | 
259 | # If true, do not generate a @detailmenu in the "Top" node's menu.
260 | #texinfo_no_detailmenu = False
261 | 
-------------------------------------------------------------------------------- /docs/contributing.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../CONTRIBUTING.rst -------------------------------------------------------------------------------- /docs/history.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../HISTORY.rst -------------------------------------------------------------------------------- /docs/index.rst: --------------------------------------------------------------------------------
1 | .. mysql-statsd documentation master file, created by
2 |    sphinx-quickstart on Tue Jul 9 22:26:36 2013.
3 |    You can adapt this file completely to your liking, but it should at least
4 |    contain the root `toctree` directive.
5 | 
6 | Welcome to mysql-statsd's documentation!
7 | ========================================
8 | 
9 | Contents:
10 | 
11 | ..
toctree:: 12 | :maxdepth: 2 13 | 14 | readme 15 | installation 16 | usage 17 | contributing 18 | authors 19 | history 20 | 21 | Indices and tables 22 | ================== 23 | 24 | * :ref:`genindex` 25 | * :ref:`modindex` 26 | * :ref:`search` 27 | -------------------------------------------------------------------------------- /docs/installation.rst: -------------------------------------------------------------------------------- 1 | ============ 2 | Installation 3 | ============ 4 | 5 | At the command line:: 6 | 7 | $ easy_install mysql-statsd 8 | 9 | Or, if you have virtualenvwrapper installed:: 10 | 11 | $ mkvirtualenv mysql-statsd 12 | $ pip install mysql-statsd -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^` where ^ is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 31 | echo. text to make text files 32 | echo. 
man to make manual pages 33 | echo. texinfo to make Texinfo files 34 | echo. gettext to make PO message catalogs 35 | echo. changes to make an overview over all changed/added/deprecated items 36 | echo. xml to make Docutils-native XML files 37 | echo. pseudoxml to make pseudoxml-XML files for display purposes 38 | echo. linkcheck to check all external links for integrity 39 | echo. doctest to run all doctests embedded in the documentation if enabled 40 | goto end 41 | ) 42 | 43 | if "%1" == "clean" ( 44 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 45 | del /q /s %BUILDDIR%\* 46 | goto end 47 | ) 48 | 49 | 50 | %SPHINXBUILD% 2> nul 51 | if errorlevel 9009 ( 52 | echo. 53 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 54 | echo.installed, then set the SPHINXBUILD environment variable to point 55 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 56 | echo.may add the Sphinx directory to PATH. 57 | echo. 58 | echo.If you don't have Sphinx installed, grab it from 59 | echo.http://sphinx-doc.org/ 60 | exit /b 1 61 | ) 62 | 63 | if "%1" == "html" ( 64 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 65 | if errorlevel 1 exit /b 1 66 | echo. 67 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 68 | goto end 69 | ) 70 | 71 | if "%1" == "dirhtml" ( 72 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 73 | if errorlevel 1 exit /b 1 74 | echo. 75 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 76 | goto end 77 | ) 78 | 79 | if "%1" == "singlehtml" ( 80 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 81 | if errorlevel 1 exit /b 1 82 | echo. 83 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 84 | goto end 85 | ) 86 | 87 | if "%1" == "pickle" ( 88 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 89 | if errorlevel 1 exit /b 1 90 | echo. 91 | echo.Build finished; now you can process the pickle files. 
92 | goto end 93 | ) 94 | 95 | if "%1" == "json" ( 96 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 97 | if errorlevel 1 exit /b 1 98 | echo. 99 | echo.Build finished; now you can process the JSON files. 100 | goto end 101 | ) 102 | 103 | if "%1" == "htmlhelp" ( 104 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 105 | if errorlevel 1 exit /b 1 106 | echo. 107 | echo.Build finished; now you can run HTML Help Workshop with the ^ 108 | .hhp project file in %BUILDDIR%/htmlhelp. 109 | goto end 110 | ) 111 | 112 | if "%1" == "qthelp" ( 113 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 114 | if errorlevel 1 exit /b 1 115 | echo. 116 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 117 | .qhcp project file in %BUILDDIR%/qthelp, like this: 118 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\complexity.qhcp 119 | echo.To view the help file: 120 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\complexity.ghc 121 | goto end 122 | ) 123 | 124 | if "%1" == "devhelp" ( 125 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 126 | if errorlevel 1 exit /b 1 127 | echo. 128 | echo.Build finished. 129 | goto end 130 | ) 131 | 132 | if "%1" == "epub" ( 133 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 134 | if errorlevel 1 exit /b 1 135 | echo. 136 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 137 | goto end 138 | ) 139 | 140 | if "%1" == "latex" ( 141 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 142 | if errorlevel 1 exit /b 1 143 | echo. 144 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 145 | goto end 146 | ) 147 | 148 | if "%1" == "latexpdf" ( 149 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 150 | cd %BUILDDIR%/latex 151 | make all-pdf 152 | cd %BUILDDIR%/.. 153 | echo. 154 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 
155 | goto end 156 | ) 157 | 158 | if "%1" == "latexpdfja" ( 159 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 160 | cd %BUILDDIR%/latex 161 | make all-pdf-ja 162 | cd %BUILDDIR%/.. 163 | echo. 164 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 165 | goto end 166 | ) 167 | 168 | if "%1" == "text" ( 169 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 170 | if errorlevel 1 exit /b 1 171 | echo. 172 | echo.Build finished. The text files are in %BUILDDIR%/text. 173 | goto end 174 | ) 175 | 176 | if "%1" == "man" ( 177 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 178 | if errorlevel 1 exit /b 1 179 | echo. 180 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 181 | goto end 182 | ) 183 | 184 | if "%1" == "texinfo" ( 185 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 186 | if errorlevel 1 exit /b 1 187 | echo. 188 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 189 | goto end 190 | ) 191 | 192 | if "%1" == "gettext" ( 193 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 194 | if errorlevel 1 exit /b 1 195 | echo. 196 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 197 | goto end 198 | ) 199 | 200 | if "%1" == "changes" ( 201 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 202 | if errorlevel 1 exit /b 1 203 | echo. 204 | echo.The overview file is in %BUILDDIR%/changes. 205 | goto end 206 | ) 207 | 208 | if "%1" == "linkcheck" ( 209 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 210 | if errorlevel 1 exit /b 1 211 | echo. 212 | echo.Link check complete; look for any errors in the above output ^ 213 | or in %BUILDDIR%/linkcheck/output.txt. 214 | goto end 215 | ) 216 | 217 | if "%1" == "doctest" ( 218 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 219 | if errorlevel 1 exit /b 1 220 | echo. 
221 | echo.Testing of doctests in the sources finished, look at the ^ 222 | results in %BUILDDIR%/doctest/output.txt. 223 | goto end 224 | ) 225 | 226 | if "%1" == "xml" ( 227 | %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml 228 | if errorlevel 1 exit /b 1 229 | echo. 230 | echo.Build finished. The XML files are in %BUILDDIR%/xml. 231 | goto end 232 | ) 233 | 234 | if "%1" == "pseudoxml" ( 235 | %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml 236 | if errorlevel 1 exit /b 1 237 | echo. 238 | echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. 239 | goto end 240 | ) 241 | 242 | :end -------------------------------------------------------------------------------- /docs/mysql-statsd.conf: -------------------------------------------------------------------------------- 1 | [daemon] 2 | logfile = /var/log/mysql_statsd/daemon.log 3 | pidfile = /var/run/mysql_statsd.pid 4 | 5 | [statsd] 6 | host = localhost 7 | port = 8125 8 | prefix = mysql 9 | include_hostname = true 10 | 11 | [mysql] 12 | ; specify 0 for infinite connection retries 13 | max_reconnect = 5 14 | host = localhost 15 | username = root 16 | password = 17 | socket = 18 | stats_types = status,variables,innodb,slave 19 | query_variables = SHOW GLOBAL VARIABLES 20 | interval_variables = 10000 21 | query_status = SHOW GLOBAL STATUS 22 | interval_status = 1000 23 | query_innodb = SHOW ENGINE INNODB STATUS 24 | interval_innodb = 10000 25 | query_slave = SHOW SLAVE STATUS 26 | interval_slave = 10000 27 | query_commit = COMMIT 28 | interval_commit = 5000 29 | sleep_interval = 500 30 | 31 | 32 | [metrics] 33 | ; g = gauge, c = counter (increment), t = timer, r = raw value, d = delta 34 | variables.max_connections = g 35 | status.max_used_connections = g 36 | status.connections = d 37 | status.aborted_connects = d 38 | status.open_tables = g 39 | status.open_files = g 40 | status.open_streams = g 41 | status.opened_tables = d 42 | status.slow_queries = d 43 | status.questions = d 
44 | status.com_select = d
45 | status.com_insert = d
46 | status.com_update = d
47 | status.com_delete = d
48 | status.com_insert_select = d
49 | status.qcache_queries_in_cache = g
50 | status.qcache_inserts = d
51 | status.qcache_hits = d
52 | status.qcache_prunes = d
53 | status.qcache_not_cached = d
54 | status.qcache_free_memory = g
55 | status.qcache_free_blocks = g
56 | status.qcache_total_blocks = g
57 | status.flush_commands = g
58 | status.created_tmp_disk_tables = d
59 | status.created_tmp_tables = d
60 | status.threads_running = g
61 | status.threads_created = d
62 | status.threads_connected = g
63 | status.threads_cached = g
64 | status.wsrep_flow_control_sent = g
65 | status.wsrep_flow_control_recv = g
66 | status.wsrep_local_send_queue = g
67 | status.wsrep_local_recv_queue = g
68 | status.wsrep_cert_deps_distance = g
69 | status.wsrep_local_cert_failures = d
70 | status.wsrep_local_bf_aborts = d
71 | status.wsrep_last_committed = d
72 | status.wsrep_flow_control_paused = g
73 | innodb.spin_waits = d
74 | innodb.spin_rounds = d
75 | innodb.os_waits = d
76 | ; innodb.spin_rounds = d
77 | ; innodb.os_waits = d
78 | innodb.pending_normal_aio_reads = g
79 | innodb.pending_normal_aio_writes = g
80 | innodb.pending_ibuf_aio_reads = g
81 | innodb.pending_aio_log_ios = g
82 | innodb.pending_aio_sync_ios = g
83 | innodb.pending_log_flushes = g
84 | innodb.pending_buf_pool_flushes = g
85 | innodb.pending_log_writes = g
86 | innodb.pending_chkp_writes = g
87 | innodb.file_reads = d
88 | innodb.file_writes = d
89 | innodb.file_fsyncs = d
90 | innodb.ibuf_inserts = d
91 | innodb.ibuf_merged = d
92 | innodb.ibuf_merges = d
93 | innodb.log_bytes_written = d
94 | innodb.unflushed_log = g
95 | innodb.log_bytes_flushed = d
96 | innodb.log_writes = d
97 | innodb.pool_size = g
98 | innodb.free_pages = g
99 | innodb.database_pages = g
100 | innodb.modified_pages = g
101 | innodb.pages_read = d
102 | innodb.pages_created = d
103 | innodb.pages_written = d
104 | 
innodb.queries_inside = d 105 | innodb.queries_queued = d 106 | innodb.read_views = d 107 | innodb.rows_inserted = d 108 | innodb.rows_updated = d 109 | innodb.rows_deleted = d 110 | innodb.rows_read = d 111 | innodb.innodb_transactions = d 112 | innodb.unpurged_txns = d 113 | innodb.history_list = g 114 | innodb.current_transactions = g 115 | innodb.active_transactions = g 116 | innodb.locked_transactions = g 117 | innodb.innodb_locked_tables = g 118 | innodb.innodb_tables_in_use = g 119 | innodb.read_views = g 120 | innodb.hash_index_cells_total = g 121 | innodb.hash_index_cells_used = g 122 | innodb.total_mem_alloc = d 123 | innodb.additional_pool_alloc = d 124 | innodb.last_checkpoint = d 125 | innodb.uncheckpointed_bytes = g 126 | innodb.ibuf_used_cells = g 127 | innodb.ibuf_free_cells = g 128 | innodb.ibuf_cell_count = g 129 | innodb.adaptive_hash_memory = g 130 | 131 | slave.seconds_behind_master = g 132 | 133 | ; innodb.bufferpool_*. will whitelist these metrics for all bufferpool instances 134 | ; If you don't have multiple bufferpools it won't do anything 135 | innodb.bufferpool_*.pool_size = g 136 | innodb.bufferpool_*.pool_size_bytes = g 137 | innodb.bufferpool_*.free_pages = g 138 | innodb.bufferpool_*.database_pages = g 139 | innodb.bufferpool_*.old_database_pages = g 140 | innodb.bufferpool_*.modified_pages = g 141 | innodb.bufferpool_*.pending_reads = g 142 | innodb.bufferpool_*.pending_writes_lru = g 143 | innodb.bufferpool_*.pending_writes_flush_list = g 144 | innodb.bufferpool_*.pending_writes_single_page = g 145 | innodb.bufferpool_*.pages_made_young = d 146 | innodb.bufferpool_*.pages_not_young = d 147 | innodb.bufferpool_*.pages_made_young_ps = g 148 | innodb.bufferpool_*.pages_not_young_ps = g 149 | innodb.bufferpool_*.pages_read = d 150 | innodb.bufferpool_*.pages_created = d 151 | innodb.bufferpool_*.pages_written = d 152 | innodb.bufferpool_*.pages_read_ps = g 153 | innodb.bufferpool_*.pages_created_ps = g 154 | 
innodb.bufferpool_*.pages_written_ps = g 155 | innodb.bufferpool_*.buffer_pool_hit_total = g 156 | innodb.bufferpool_*.buffer_pool_hits = g 157 | innodb.bufferpool_*.buffer_pool_young = g 158 | innodb.bufferpool_*.buffer_pool_not_young = g 159 | innodb.bufferpool_*.pages_read_ahead = g 160 | innodb.bufferpool_*.pages_read_evicted = g 161 | innodb.bufferpool_*.pages_read_random = g 162 | innodb.bufferpool_*.lru_len = g 163 | innodb.bufferpool_*.lru_unzip = g 164 | innodb.bufferpool_*.io_sum = d 165 | innodb.bufferpool_*.io_sum_cur = g 166 | innodb.bufferpool_*.io_unzip = d 167 | innodb.bufferpool_*.io_unzip_cur = g 168 | 169 | 170 | -------------------------------------------------------------------------------- /docs/mysql_statsd: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # 4 | # 5 | # chkconfig: 2345 95 20 6 | # description: Daemon to monitor MySQL through statsd 7 | 8 | ### BEGIN INIT INFO 9 | # Provides: 10 | # Required-Start: 11 | # Required-Stop: 12 | # Should-Start: 13 | # Should-Stop: 14 | # Default-Start: 15 | # Default-Stop: 16 | # Short-Description: 17 | # Description: 18 | ### END INIT INFO 19 | 20 | # Source function library. 21 | . /etc/rc.d/init.d/functions 22 | 23 | exec="/opt/mysql_statsd/mysql_statsd.py" 24 | prog="mysql_statsd" 25 | config="/etc/mysql-statsd.conf" 26 | pidfile="/var/run/${prog}.pid" 27 | 28 | [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog 29 | 30 | lockfile=/var/lock/subsys/$prog 31 | 32 | start() { 33 | [ -x $exec ] || exit 5 34 | [ -f $config ] || exit 6 35 | echo -n $"Starting $prog: " 36 | $exec --config $config 37 | retval=$? 38 | echo 39 | [ $retval -eq 0 ] && touch $lockfile 40 | return $retval 41 | } 42 | 43 | stop() { 44 | echo -n $"Stopping $prog: " 45 | # stop it here, often "killproc $prog" 46 | kill `cat $pidfile` 47 | retval=$? 
48 | echo 49 | [ $retval -eq 0 ] && rm -f $lockfile && rm -f $pidfile 50 | return $retval 51 | } 52 | 53 | restart() { 54 | stop 55 | start 56 | } 57 | 58 | reload() { 59 | restart 60 | } 61 | 62 | force_reload() { 63 | restart 64 | } 65 | 66 | rh_status() { 67 | # run checks to determine if the service is running or use generic status 68 | status $prog 69 | } 70 | 71 | rh_status_q() { 72 | rh_status >/dev/null 2>&1 73 | } 74 | 75 | 76 | case "$1" in 77 | start) 78 | rh_status_q && exit 0 79 | $1 80 | ;; 81 | stop) 82 | rh_status_q || exit 0 83 | $1 84 | ;; 85 | restart) 86 | $1 87 | ;; 88 | reload) 89 | rh_status_q || exit 7 90 | $1 91 | ;; 92 | force-reload) 93 | force_reload 94 | ;; 95 | status) 96 | rh_status 97 | ;; 98 | condrestart|try-restart) 99 | rh_status_q || exit 0 100 | restart 101 | ;; 102 | *) 103 | echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" 104 | exit 2 105 | esac 106 | exit $? 107 | -------------------------------------------------------------------------------- /docs/readme.rst: -------------------------------------------------------------------------------- 1 | .. 
include:: ../README.rst -------------------------------------------------------------------------------- /docs/usage.rst: --------------------------------------------------------------------------------
1 | ========
2 | Usage
3 | ========
4 | 
5 | To use mysql-statsd in a project::
6 | 
7 |     import mysql_statsd
-------------------------------------------------------------------------------- /mysql_statsd/__init__.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | 
4 | __author__ = 'Jasper Capel'
5 | __email__ = 'jasper.capel@spilgames.com'
6 | __version__ = '0.1.1'
7 | from mysql_statsd import MysqlStatsd
-------------------------------------------------------------------------------- /mysql_statsd/mysql_statsd.py: --------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | 
4 | import argparse
5 | import Queue
6 | import signal
7 | import sys
8 | import os
9 | import threading
10 | import time
11 | from ConfigParser import ConfigParser
12 | 
13 | from thread_manager import ThreadManager
14 | from thread_mysql import ThreadMySQL
15 | from thread_statsd import ThreadStatsd, ThreadFakeStatsd
16 | 
17 | 
18 | class MysqlStatsd():
19 |     """Main program class"""
20 |     opt = None
21 |     config = None
22 | 
23 |     def __init__(self):
24 |         """Program entry point"""
25 |         op = argparse.ArgumentParser()
26 |         op.add_argument("-c", "--config", dest="cfile",
27 |                         default="/etc/mysql-statsd.conf",
28 |                         help="Configuration file"
29 |                         )
30 |         op.add_argument("-d", "--debug", dest="debug",
31 |                         help="Print each statsd metric in addition to sending it",
32 |                         default=False, action="store_true"
33 |                         )
34 |         op.add_argument("--dry-run", dest="dry_run",
35 |                         default=False,
36 |                         action="store_true",
37 |                         help="Print the output that would be sent to statsd without actually sending it"
38 |                         )
39 | 
40 |         # TODO switch the default to
True, and make it fork by default in init script.
41 |         op.add_argument("-f", "--foreground", dest="foreground", help="Don't fork the main program", default=False, action="store_true")
42 | 
43 |         opt = op.parse_args()
44 |         self.get_config(opt.cfile)
45 | 
46 |         if not self.config:
47 |             sys.exit(op.print_help())
48 | 
49 |         try:
50 |             logfile = self.config.get('daemon').get('logfile', '/tmp/daemon.log')
51 |         except AttributeError:
52 |             logfile = sys.stdout
53 | 
54 | 
55 |         if not opt.foreground:
56 |             self.daemonize(stdin='/dev/null', stdout=logfile, stderr=logfile)
57 | 
58 |         # Set up queue
59 |         self.queue = Queue.Queue()
60 | 
61 |         # Split off config for each thread
62 |         mysql_config = dict(mysql=self.config['mysql'])
63 |         mysql_config['metrics'] = self.config['metrics']
64 | 
65 |         statsd_config = self.config['statsd']
66 | 
67 |         # Spawn MySQL polling thread
68 |         mysql_thread = ThreadMySQL(queue=self.queue, **mysql_config)
69 |         # t1 = ThreadMySQL(config=self.config, queue=self.queue)
70 | 
71 |         # Spawn Statsd flushing thread
72 |         statsd_thread = ThreadStatsd(queue=self.queue, **statsd_config)
73 | 
74 |         if opt.dry_run:
75 |             statsd_thread = ThreadFakeStatsd(queue=self.queue, **statsd_config)
76 | 
77 |         if opt.debug:
78 |             """ All debug settings go here """
79 |             statsd_thread.debug = True
80 | 
81 |         # Get thread manager
82 |         tm = ThreadManager(threads=[mysql_thread, statsd_thread])
83 | 
84 |         try:
85 |             tm.run()
86 |         except:
87 |             # Protects somewhat from needing to kill -9 if there is an exception
88 |             # within the thread manager, by asking the threads to quit and joining them.
89 |             try:
90 |                 tm.stop_threads()
91 |             except:
92 |                 pass
93 | 
94 |             raise
95 | 
96 |     def get_config(self, config_file):
97 |         cnf = ConfigParser()
98 |         try:
99 |             cnf.read(config_file)[0]
100 |         except IndexError:
101 |             # Return None so we can display help...
102 |             self.config = None  # Just to be safe..
103 | return None 104 | 105 | self.config = {} 106 | for section in cnf.sections(): 107 | self.config[section] = {} 108 | for key, value in cnf.items(section): 109 | self.config[section][key] = value 110 | 111 | return self.config 112 | 113 | def daemonize(self, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'): 114 | '''This forks the current process into a daemon. The stdin, stdout, and 115 | stderr arguments are file names that will be opened and be used to replace 116 | the standard file descriptors in sys.stdin, sys.stdout, and sys.stderr. 117 | These arguments are optional and default to /dev/null. Note that stderr is 118 | opened unbuffered, so if it shares a file with stdout then interleaved 119 | output may not appear in the order that you expect. ''' 120 | 121 | # Do first fork. 122 | try: 123 | pid = os.fork() 124 | if pid > 0: 125 | sys.exit(0) # Exit first parent. 126 | except OSError, e: 127 | sys.stderr.write("fork #1 failed: (%d) %s\n" % (e.errno, e.strerror)) 128 | sys.exit(1) 129 | 130 | # Decouple from parent environment. 131 | # TODO: do we need to change to '/' or can we chdir to wherever __file__ is? 132 | os.chdir("/") 133 | os.umask(0) 134 | os.setsid() 135 | 136 | # Do second fork. 137 | try: 138 | pid = os.fork() 139 | if pid > 0: 140 | f = open(self.config.get('daemon').get('pidfile', '/var/run/mysql_statsd.pid'), 'w') 141 | f.write(str(pid)) 142 | f.close() 143 | sys.exit(0) # Exit second parent. 144 | except OSError, e: 145 | sys.stderr.write("fork #2 failed: (%d) %s\n" % (e.errno, e.strerror)) 146 | sys.exit(1) 147 | 148 | # Now I am a daemon! 149 | 150 | # Redirect standard file descriptors. 
151 | si = open(stdin, 'r') 152 | so = open(stdout, 'a+') 153 | se = open(stderr, 'a+', 0) 154 | os.dup2(si.fileno(), sys.stdin.fileno()) 155 | os.dup2(so.fileno(), sys.stdout.fileno()) 156 | os.dup2(se.fileno(), sys.stderr.fileno()) 157 | 158 | if __name__ == "__main__": 159 | program = MysqlStatsd() 160 | -------------------------------------------------------------------------------- /mysql_statsd/preprocessors/__init__.py: -------------------------------------------------------------------------------- 1 | from innodb_preprocessor import InnoDBPreprocessor 2 | from mysql_preprocessor import MysqlPreprocessor 3 | from columns_preprocessor import ColumnsPreprocessor 4 | -------------------------------------------------------------------------------- /mysql_statsd/preprocessors/columns_preprocessor.py: -------------------------------------------------------------------------------- 1 | from interface import Preprocessor 2 | 3 | 4 | class ColumnsPreprocessor(Preprocessor): 5 | """Preprocessor for data returned in single row/multiple columns format (f.e.: SHOW SLAVE STATUS)""" 6 | def __init__(self, *args, **kwargs): 7 | super(ColumnsPreprocessor, self).__init__(*args, **kwargs) 8 | 9 | def process(self, rows, column_names): 10 | if not rows: 11 | return [] 12 | return zip(column_names, rows[0]) 13 | -------------------------------------------------------------------------------- /mysql_statsd/preprocessors/innodb_preprocessor.py: -------------------------------------------------------------------------------- 1 | from interface import Preprocessor 2 | import re 3 | 4 | 5 | class InnoDBPreprocessor(Preprocessor): 6 | _INNO_LINE = re.compile(r'\s+') 7 | _DIGIT_LINE = re.compile(r'\d+\.*\d*') 8 | tmp_stats = {} 9 | txn_seen = 0 10 | prev_line = '' 11 | 12 | @staticmethod 13 | def increment(stats, value, increment): 14 | if value in stats: 15 | stats[value] += increment 16 | else: 17 | stats[value] = increment 18 | return stats 19 | 20 | @staticmethod 21 | def 
make_bigint(hi, lo=None):
22 |         # When called with only one argument, hi holds a single
23 |         # hexadecimal value rather than the high 32-bit word
24 |         if lo is None:
25 |             return int(hi, 16)
26 | 
27 |         if hi is None:
28 |             hi = 0
29 | 
30 |         return (int(hi) * 4294967296) + int(lo)
31 | 
32 |     def clear_variables(self):
33 |         self.tmp_stats = {}
34 |         self.txn_seen = 0
35 |         self.prev_line = ''
36 | 
37 |     def process(self, rows):
38 |         # SHOW ENGINE INNODB STATUS output is a series of sections, so we first split it into chunks
39 |         chunks = {'junk': []}
40 |         current_chunk = 'junk'
41 |         next_chunk = False
42 |         oldest_view = False
43 | 
44 |         self.clear_variables()
45 |         for row in rows:
46 |             innoblob = row[2].replace(',', '').replace(';', '').replace('/s', '').split('\n')
47 |             for line in innoblob:
48 |                 # All section headers start with more than three dashes; only the individual innodb bufferpools use exactly three
49 |                 if line.startswith('---OLDEST VIEW---'):
50 |                     oldest_view = True
51 |                 if line.startswith('----'):
52 |                     # The first time we see four or more dashes, the next line holds the new chunk's name
53 |                     if not next_chunk and not oldest_view:
54 |                         next_chunk = True
55 |                     else:
56 |                         # The second time we see them, the chunk has already been recorded
57 |                         next_chunk = False
58 |                         oldest_view = False
59 |                 elif next_chunk:
60 |                     # Record the chunk name and initialize its line list
61 |                     current_chunk = line
62 |                     chunks[current_chunk] = []
63 |                 else:
64 |                     # Otherwise just append the line to the current chunk
65 |                     chunks[current_chunk].append(line)
66 |         for chunk in chunks:
67 |             # Skip individual buffer pool info here so it doesn't skew the aggregate stats when enabled
68 |             if chunk != 'INDIVIDUAL BUFFER POOL INFO':
69 |                 for line in chunks[chunk]:
70 |                     self.process_line(line)
71 | 
72 |         # Process the individual buffer pools
73 |         bufferpool = 'bufferpool_0.'
74 |         for line in chunks.get('INDIVIDUAL BUFFER POOL INFO', []):
75 |             # Buffer pool stats are preceded by:
76 |             # ---BUFFER POOL X
77 |             if line.startswith('---'):
78 |                 innorow = self._INNO_LINE.split(line)
79 |                 bufferpool = 'bufferpool_' + innorow[2] + '.'
80 |             else:
81 |                 self.process_individual_bufferpools(line, bufferpool)
82 | 
83 |         return self.tmp_stats.items()
84 | 
85 |     def process_individual_bufferpools(self, line, bufferpool):
86 |         innorow = self._INNO_LINE.split(line)
87 |         if line.startswith("Buffer pool size ") and not line.startswith("Buffer pool size bytes"):
88 |             # The " " after size is necessary to avoid matching the wrong line:
89 |             # Buffer pool size 1769471
90 |             # Buffer pool size, bytes 28991012864
91 |             self.tmp_stats[bufferpool + 'pool_size'] = innorow[3]
92 |         elif line.startswith("Buffer pool size bytes"):
93 |             self.tmp_stats[bufferpool + 'pool_size_bytes'] = innorow[4]
94 |         elif line.startswith("Free buffers"):
95 |             # Free buffers 0
96 |             self.tmp_stats[bufferpool + 'free_pages'] = innorow[2]
97 |         elif line.startswith("Database pages"):
98 |             # Database pages 1696503
99 |             self.tmp_stats[bufferpool + 'database_pages'] = innorow[2]
100 |         elif line.startswith("Old database pages"):
101 |             # Old database pages 1696503
102 |             self.tmp_stats[bufferpool + 'old_database_pages'] = innorow[3]
103 |         elif line.startswith("Modified db pages"):
104 |             # Modified db pages 160602
105 |             self.tmp_stats[bufferpool + 'modified_pages'] = innorow[3]
106 |         elif line.startswith("Pending reads"):
107 |             # Pending reads 0
108 |             self.tmp_stats[bufferpool + 'pending_reads'] = innorow[2]
109 |         elif line.startswith("Pending writes"):
110 |             # Pending writes: LRU 0, flush list 0, single page 0
111 |             self.tmp_stats[bufferpool + 'pending_writes_lru'] = self._DIGIT_LINE.findall(innorow[3])[0]
112 |             self.tmp_stats[bufferpool + 'pending_writes_flush_list'] = self._DIGIT_LINE.findall(innorow[6])[0]
113 |             self.tmp_stats[bufferpool + 'pending_writes_single_page'] = innorow[9]
114 |         elif
line.startswith("Pages made young"): 115 | # Pages made young 290, not young 0 116 | self.tmp_stats[bufferpool + 'pages_made_young'] = innorow[3] 117 | self.tmp_stats[bufferpool + 'pages_not_young'] = innorow[6] 118 | elif 'youngs/s' in line: 119 | # 0.50 youngs/s, 0.00 non-youngs/s 120 | self.tmp_stats[bufferpool + 'pages_made_young_ps'] = innorow[0] 121 | self.tmp_stats[bufferpool + 'pages_not_young_ps'] = innorow[2] 122 | elif line.startswith("Pages read ahead"): 123 | # Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s 124 | self.tmp_stats[bufferpool + 'pages_read_ahead'] = self._DIGIT_LINE.findall(innorow[3])[0] 125 | self.tmp_stats[bufferpool + 'pages_read_evicted'] = self._DIGIT_LINE.findall(innorow[7])[0] 126 | self.tmp_stats[bufferpool + 'pages_read_random'] = self._DIGIT_LINE.findall(innorow[11])[0] 127 | elif line.startswith("Pages read"): 128 | # Pages read 88, created 66596, written 221669 129 | self.tmp_stats[bufferpool + 'pages_read'] = innorow[2] 130 | self.tmp_stats[bufferpool + 'pages_created'] = innorow[4] 131 | self.tmp_stats[bufferpool + 'pages_written'] = innorow[6] 132 | elif 'reads' in line and 'creates' in line: 133 | # 0.00 reads/s, 40.76 creates/s, 137.97 writes/s 134 | self.tmp_stats[bufferpool + 'pages_read_ps'] = innorow[0] 135 | self.tmp_stats[bufferpool + 'pages_created_ps'] = innorow[2] 136 | self.tmp_stats[bufferpool + 'pages_written_ps'] = innorow[4] 137 | elif line.startswith("Buffer pool hit rate"): 138 | # Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 139 | self.tmp_stats[bufferpool + 'buffer_pool_hit_total'] = self._DIGIT_LINE.findall(innorow[6])[0] 140 | self.tmp_stats[bufferpool + 'buffer_pool_hits'] = innorow[4] 141 | self.tmp_stats[bufferpool + 'buffer_pool_young'] = innorow[9] 142 | self.tmp_stats[bufferpool + 'buffer_pool_not_young'] = innorow[13] 143 | elif line.startswith("LRU len:"): 144 | # LRU len: 21176, unzip_LRU len: 0 145 | self.tmp_stats[bufferpool + 
'lru_len'] = self._DIGIT_LINE.findall(innorow[2])[0] 146 | self.tmp_stats[bufferpool + 'lru_unzip'] = innorow[5] 147 | elif line.startswith("I/O sum"): 148 | # I/O sum[29174]:cur[285], unzip sum[0]:cur[0] 149 | self.tmp_stats[bufferpool + 'io_sum'] = self._DIGIT_LINE.findall(innorow[1])[0] 150 | self.tmp_stats[bufferpool + 'io_sum_cur'] = self._DIGIT_LINE.findall(innorow[1])[1] 151 | self.tmp_stats[bufferpool + 'io_unzip'] = self._DIGIT_LINE.findall(innorow[3])[0] 152 | self.tmp_stats[bufferpool + 'io_unzip_cur'] = self._DIGIT_LINE.findall(innorow[3])[1] 153 | 154 | 155 | 156 | 157 | 158 | 159 | def process_line(self, line): 160 | innorow = self._INNO_LINE.split(line) 161 | if line.startswith('Mutex spin waits'): 162 | # Mutex spin waits 79626940, rounds 157459864, OS waits 698719 163 | # Mutex spin waits 0, rounds 247280272495, OS waits 316513438 164 | self.tmp_stats['spin_waits'] = innorow[3] 165 | self.tmp_stats['spin_rounds'] = innorow[5] 166 | self.tmp_stats['os_waits'] = innorow[8] 167 | 168 | elif line.startswith('RW-shared spins') and 'RW-excl spins' in line: 169 | # Pre 5.5.17: both lock types on one line (semicolons and commas were stripped above) 170 | # RW-shared spins 3859028, OS waits 2100750; RW-excl spins 4641946, OS waits 1530310 171 | self.tmp_stats['spin_waits'] = int(innorow[2]) + int(innorow[8]) 172 | self.tmp_stats['os_waits'] = int(innorow[5]) + int(innorow[11]) 173 | 174 | 175 | elif line.startswith('RW-shared spins'): 176 | # Post 5.5.17 SHOW ENGINE INNODB STATUS syntax 177 | # RW-shared spins 604733, rounds 8107431, OS waits 241268 178 | self.tmp_stats['spin_waits'] = innorow[2] 179 | self.tmp_stats['os_waits'] = innorow[7] 180 | 181 | elif line.startswith('RW-excl spins'): 182 | # Post 5.5.17 SHOW ENGINE INNODB STATUS syntax 183 | # RW-excl spins 604733, rounds 8107431, OS waits 241268 184 | self.tmp_stats['spin_waits'] = innorow[2] 185 | self.tmp_stats['os_waits'] = innorow[7] 186 | 187 | elif 'seconds the semaphore:' in line: 188 | # --Thread 907205 has waited at
handler/ha_innodb.cc line 7156 for 1.00 seconds the semaphore: 189 | self.tmp_stats = self.increment(self.tmp_stats, 'innodb_sem_waits', 1) 190 | if 'innodb_sem_wait_time_ms' in self.tmp_stats: 191 | self.tmp_stats['innodb_sem_wait_time_ms'] = float(self.tmp_stats['innodb_sem_wait_time_ms']) + float(innorow[9]) * 1000 192 | else: 193 | self.tmp_stats['innodb_sem_wait_time_ms'] = float(innorow[9]) * 1000 194 | 195 | # TRANSACTIONS 196 | elif line.startswith('Trx id counter'): 197 | # The beginning of the TRANSACTIONS section: start counting 198 | # transactions 199 | # Trx id counter 0 1170664159 200 | # Trx id counter 861B144C 201 | if len(innorow) == 4: 202 | innorow.append(0) 203 | self.tmp_stats['innodb_transactions'] = self.make_bigint(innorow[3], innorow[4]) 204 | self.txn_seen = 1 205 | 206 | elif line.startswith('Purge done for trx'): 207 | # Purge done for trx's n:o < 0 1170663853 undo n:o < 0 0 208 | # Purge done for trx's n:o < 861B135D undo n:o < 0 209 | if innorow[7] == 'undo': 210 | innorow[7] = 0 211 | self.tmp_stats['unpurged_txns'] = int(self.tmp_stats['innodb_transactions']) - self.make_bigint(innorow[6], innorow[7]) 212 | 213 | elif line.startswith('History list length'): 214 | # History list length 132 215 | self.tmp_stats['history_list'] = innorow[3] 216 | 217 | elif self.txn_seen == 1 and line.startswith('---TRANSACTION'): 218 | # ---TRANSACTION 0, not started, process no 13510, OS thread id 1170446656 219 | self.tmp_stats = self.increment(self.tmp_stats, 'current_transactions', 1) 220 | if 'ACTIVE' in line: 221 | self.tmp_stats = self.increment(self.tmp_stats, 'active_transactions', 1) 222 | 223 | elif self.txn_seen == 1 and line.startswith('------- TRX HAS BEEN'): 224 | # ------- TRX HAS BEEN WAITING 32 SEC FOR THIS LOCK TO BE GRANTED: 225 | self.tmp_stats = self.increment(self.tmp_stats, 'innodb_lock_wait_secs', innorow[5]) 226 | 227 | elif 'read views open inside InnoDB' in line: 228 | # 1 read views open inside InnoDB 229 | 
self.tmp_stats['read_views'] = innorow[0] 230 | 231 | elif line.startswith('mysql tables in use'): 232 | # mysql tables in use 2, locked 2 233 | self.tmp_stats = self.increment(self.tmp_stats, 'innodb_tables_in_use', innorow[4]) 234 | self.tmp_stats = self.increment(self.tmp_stats, 'innodb_locked_tables', innorow[6]) 235 | 236 | elif self.txn_seen == 1 and 'lock struct(s)' in line: 237 | # 23 lock struct(s), heap size 3024, undo log entries 27 238 | # LOCK WAIT 12 lock struct(s), heap size 3024, undo log entries 5 239 | # LOCK WAIT 2 lock struct(s), heap size 368 240 | if line.startswith('LOCK WAIT'): 241 | self.tmp_stats = self.increment(self.tmp_stats, 'innodb_lock_structs', innorow[2]) 242 | self.tmp_stats = self.increment(self.tmp_stats, 'locked_transactions', 1) 243 | else: 244 | self.tmp_stats = self.increment(self.tmp_stats, 'innodb_lock_structs', innorow[0]) 245 | 246 | # FILE I/O 247 | elif ' OS file reads, ' in line: 248 | # 8782182 OS file reads, 15635445 OS file writes, 947800 OS fsyncs 249 | self.tmp_stats['file_reads'] = innorow[0] 250 | self.tmp_stats['file_writes'] = innorow[4] 251 | self.tmp_stats['file_fsyncs'] = innorow[8] 252 | 253 | elif line.startswith('Pending normal aio reads:'): 254 | # Pending normal aio reads: 0, aio writes: 0, 255 | self.tmp_stats['pending_normal_aio_reads'] = innorow[4] 256 | self.tmp_stats['pending_normal_aio_writes'] = innorow[7] 257 | 258 | elif line.startswith('ibuf aio reads'): 259 | # ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 260 | self.tmp_stats['pending_ibuf_aio_reads'] = innorow[3] 261 | self.tmp_stats['pending_aio_log_ios'] = innorow[6] 262 | self.tmp_stats['pending_aio_sync_ios'] = innorow[9] 263 | 264 | elif line.startswith('Pending flushes (fsync)'): 265 | # Pending flushes (fsync) log: 0; buffer pool: 0 266 | self.tmp_stats['pending_log_flushes'] = innorow[4] 267 | self.tmp_stats['pending_buf_pool_flushes'] = innorow[7] 268 | 269 | elif line.startswith('Ibuf for space 0: size '): 270 | # Older InnoDB 
code seemed to be ready for an ibuf per tablespace. It 271 | # had two lines in the output. Newer has just one line, see below. 272 | # Ibuf for space 0: size 1, free list len 887, seg size 889, is not empty 273 | # Ibuf for space 0: size 1, free list len 887, seg size 889, 274 | self.tmp_stats['ibuf_used_cells'] = innorow[5] 275 | self.tmp_stats['ibuf_free_cells'] = innorow[9] 276 | self.tmp_stats['ibuf_cell_count'] = innorow[12] 277 | 278 | elif line.startswith('Ibuf: size '): 279 | # Ibuf: size 1, free list len 4634, seg size 4636, 280 | self.tmp_stats['ibuf_used_cells'] = innorow[2] 281 | self.tmp_stats['ibuf_free_cells'] = innorow[6] 282 | self.tmp_stats['ibuf_cell_count'] = innorow[9] 283 | if 'merges' in line: 284 | self.tmp_stats['ibuf_merges'] = innorow[10] 285 | 286 | elif ' delete mark ' in line and self.prev_line.startswith('merged operations:'): 287 | # Output of show engine innodb status has changed in 5.5 (commas were stripped above) 288 | # merged operations: 289 | # insert 593983, delete mark 387006, delete 73092 290 | self.tmp_stats['ibuf_inserts'] = innorow[1] 291 | self.tmp_stats['ibuf_merged'] = int(innorow[1]) + int(innorow[4]) + int(innorow[6]) 292 | 293 | elif ' merged recs ' in line: 294 | # 19817685 inserts, 19817684 merged recs, 3552620 merges 295 | self.tmp_stats['ibuf_inserts'] = innorow[0] 296 | self.tmp_stats['ibuf_merged'] = innorow[2] 297 | self.tmp_stats['ibuf_merges'] = innorow[5] 298 | 299 | elif line.startswith('Hash table size '): 300 | # In some versions of InnoDB, the used cells is omitted. 301 | # Hash table size 4425293, used cells 4229064, ....
302 | # Hash table size 57374437, node heap has 72964 buffer(s) <-- no used cells 303 | self.tmp_stats['hash_index_cells_total'] = innorow[3] 304 | if 'used cells' in line: 305 | self.tmp_stats['hash_index_cells_used'] = innorow[6] 306 | else: 307 | self.tmp_stats['hash_index_cells_used'] = 0 308 | 309 | # LOG 310 | elif " log i/o's done, " in line: 311 | # 3430041 log i/o's done, 17.44 log i/o's/second 312 | # 520835887 log i/o's done, 17.28 log i/o's/second, 518724686 syncs, 2980893 checkpoints 313 | # TODO: graph syncs and checkpoints 314 | self.tmp_stats['log_writes'] = innorow[0] 315 | 316 | elif " pending log writes, " in line: 317 | # 0 pending log writes, 0 pending chkp writes 318 | self.tmp_stats['pending_log_writes'] = innorow[0] 319 | self.tmp_stats['pending_chkp_writes'] = innorow[4] 320 | 321 | elif line.startswith("Log sequence number"): 322 | # This number is NOT printed in hex in InnoDB plugin. 323 | # Log sequence number 13093949495856 //plugin 324 | # Log sequence number 125 3934414864 //normal 325 | if len(innorow) > 4: 326 | self.tmp_stats['log_bytes_written'] = self.make_bigint(innorow[3], innorow[4]) 327 | else: 328 | self.tmp_stats['log_bytes_written'] = innorow[3] 329 | 330 | elif line.startswith("Log flushed up to"): 331 | # This number is NOT printed in hex in InnoDB plugin. 
332 | # Log flushed up to 13093948219327 333 | # Log flushed up to 125 3934414864 334 | if len(innorow) > 5: 335 | self.tmp_stats['log_bytes_flushed'] = self.make_bigint(innorow[4], innorow[5]) 336 | else: 337 | self.tmp_stats['log_bytes_flushed'] = innorow[4] 338 | 339 | elif line.startswith("Last checkpoint at"): 340 | # Last checkpoint at 125 3934293461 341 | if len(innorow) > 4: 342 | self.tmp_stats['last_checkpoint'] = self.make_bigint(innorow[3], innorow[4]) 343 | else: 344 | self.tmp_stats['last_checkpoint'] = innorow[3] 345 | 346 | # BUFFER POOL AND MEMORY 347 | elif line.startswith("Total memory allocated") and 'in additional pool' in line: 348 | # Total memory allocated 29642194944; in additional pool allocated 0 349 | self.tmp_stats['total_mem_alloc'] = innorow[3] 350 | self.tmp_stats['additional_pool_alloc'] = innorow[8] 351 | 352 | elif line.startswith('Adaptive hash index '): 353 | # Adaptive hash index 1538240664 (186998824 + 1351241840) 354 | self.tmp_stats['adaptive_hash_memory'] = innorow[3] 355 | 356 | elif line.startswith('Page hash '): 357 | # Page hash 11688584 358 | self.tmp_stats['page_hash_memory'] = innorow[2] 359 | 360 | elif line.startswith('Dictionary cache '): 361 | # Dictionary cache 145525560 (140250984 + 5274576) 362 | self.tmp_stats['dictionary_cache_memory'] = innorow[2] 363 | 364 | elif line.startswith('File system '): 365 | # File system 313848 (82672 + 231176) 366 | self.tmp_stats['file_system_memory'] = innorow[2] 367 | 368 | elif line.startswith('Lock system '): 369 | # Lock system 29232616 (29219368 + 13248) 370 | self.tmp_stats['lock_system_memory'] = innorow[2] 371 | 372 | elif line.startswith('Recovery system '): 373 | # Recovery system 0 (0 + 0) 374 | self.tmp_stats['recovery_system_memory'] = innorow[2] 375 | 376 | elif line.startswith('Threads '): 377 | # Threads 409336 (406936 + 2400) 378 | self.tmp_stats['thread_hash_memory'] = innorow[1] 379 | 380 | elif line.startswith('innodb_io_pattern '): 381 | # 
innodb_io_pattern 0 (0 + 0) 382 | self.tmp_stats['innodb_io_pattern_memory'] = innorow[1] 383 | 384 | elif line.startswith("Buffer pool size ") and not line.startswith("Buffer pool size bytes"): 385 | # The " " after size is necessary to avoid matching the wrong line: 386 | # Buffer pool size 1769471 387 | # Buffer pool size, bytes 28991012864 388 | self.tmp_stats['pool_size'] = innorow[3] 389 | 390 | elif line.startswith("Free buffers"): 391 | # Free buffers 0 392 | self.tmp_stats['free_pages'] = innorow[2] 393 | 394 | elif line.startswith("Database pages"): 395 | # Database pages 1696503 396 | self.tmp_stats['database_pages'] = innorow[2] 397 | 398 | elif line.startswith("Modified db pages"): 399 | # Modified db pages 160602 400 | self.tmp_stats['modified_pages'] = innorow[3] 401 | 402 | elif line.startswith("Pages read ahead"): 403 | # Must do this BEFORE the next test, otherwise it'll get fooled by this 404 | # line from the new plugin (see samples/innodb-015.txt): 405 | # Pages read ahead 0.00/s, evicted without access 0.06/s 406 | # TODO: No-op for now, see issue 134. 
407 | self.tmp_stats['empty'] = '' 408 | 409 | elif line.startswith("Pages read"): 410 | # Pages read 15240822, created 1770238, written 21705836 411 | self.tmp_stats['pages_read'] = innorow[2] 412 | self.tmp_stats['pages_created'] = innorow[4] 413 | self.tmp_stats['pages_written'] = innorow[6] 414 | # ROW OPERATIONS 415 | 416 | elif line.startswith('Number of rows inserted'): 417 | # Number of rows inserted 50678311, updated 66425915, deleted 20605903, read 454561562 418 | self.tmp_stats['rows_inserted'] = innorow[4] 419 | self.tmp_stats['rows_updated'] = innorow[6] 420 | self.tmp_stats['rows_deleted'] = innorow[8] 421 | self.tmp_stats['rows_read'] = innorow[10] 422 | elif " queries inside InnoDB, " in line: 423 | # 0 queries inside InnoDB, 0 queries in queue 424 | self.tmp_stats['queries_inside'] = innorow[0] 425 | self.tmp_stats['queries_queued'] = innorow[4] 426 | -------------------------------------------------------------------------------- /mysql_statsd/preprocessors/interface.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | class Preprocessor(object): 4 | def process(self, rows): 5 | """ Can do preprocessing on rows if needed for different 6 | database types """ 7 | return rows 8 | -------------------------------------------------------------------------------- /mysql_statsd/preprocessors/mysql_preprocessor.py: -------------------------------------------------------------------------------- 1 | from interface import Preprocessor 2 | 3 | 4 | class MysqlPreprocessor(Preprocessor): 5 | def __init__(self, *args, **kwargs): 6 | super(MysqlPreprocessor, self).__init__(*args, **kwargs) 7 | 8 | def process(self, rows): 9 | return list(rows) 10 | -------------------------------------------------------------------------------- /mysql_statsd/thread_base.py: -------------------------------------------------------------------------------- 1 | import threading 2 | 3 | 4 | class ThreadBase(threading.Thread): 5 | run = True 
6 | 7 | def __init__(self, queue, **kwargs): 8 | threading.Thread.__init__(self) 9 | self.queue = queue 10 | self.data = {} 11 | if getattr(self, 'configure', None): 12 | self.configure(kwargs) 13 | 14 | def stop(self): 15 | self.run = False 16 | -------------------------------------------------------------------------------- /mysql_statsd/thread_manager.py: -------------------------------------------------------------------------------- 1 | import Queue 2 | import signal 3 | import threading 4 | import time 5 | 6 | 7 | class ThreadManager(object): 8 | """Knows how to manage dem threads""" 9 | quit = False 10 | quitting = False 11 | threads = [] 12 | 13 | def __init__(self, threads=None): 14 | """Remember the threads to manage and register signal handlers""" 15 | self.threads = threads or [] 16 | self.register_signal_handlers() 17 | 18 | def register_signal_handlers(self): 19 | # Register signal handler 20 | signal.signal(signal.SIGINT, self.signal_handler) 21 | signal.signal(signal.SIGTERM, self.signal_handler) 22 | 23 | def run(self): 24 | """Main loop.""" 25 | self.start_threads() 26 | while not self.quit: 27 | time.sleep(1) 28 | 29 | dead = [thread for thread in self.threads if not thread.is_alive()] 30 | if dead and not self.quitting: 31 | print("Thread {0!r} has stopped unexpectedly.".format(dead[0])) 32 | self.stop_threads() 33 | return 34 | 35 | def start_threads(self): 36 | for t in self.threads: 37 | t.start() 38 | 39 | def signal_handler(self, signal, frame): 40 | """ Handle signals """ 41 | print("Caught CTRL+C / SIGTERM") 42 | if not self.quitting: 43 | self.quitting = True 44 | self.stop_threads() 45 | self.quit = True 46 | else: 47 | print("BE PATIENT!@#~!#!@#$~!`1111") 48 | 49 | def stop_threads(self): 50 | """Stops all threads and waits for them to quit""" 51 | print("Stopping threads") 52 | for thread in self.threads: 53 | thread.stop() 54 | while threading.activeCount() > 1: 55 | print("Waiting for %s threads" % threading.activeCount()) 56 | time.sleep(1) 57 | print("All threads stopped") 58 |
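The `ThreadMySQL` poller in the next file only forwards metrics that are whitelisted in the config, and it rewrites per-bufferpool keys such as `innodb.bufferpool_3.pages_read` to a wildcard form so a single whitelist entry covers every buffer pool. A minimal sketch of that rewrite (the whitelist entry shown is a hypothetical config example, not taken from the shipped config):

```python
import re

def whitelist_key(check_type, key):
    """Rewrite bufferpool_<N> to bufferpool_* so one whitelist entry
    covers every buffer pool (mirrors the rewrite in ThreadMySQL._run)."""
    if key.startswith('bufferpool_'):
        key = re.sub(r'(.*bufferpool_)\d+(\..+)', r'\1*\2', key)
    return "{0}.{1}".format(check_type, key)

# Hypothetical whitelist, as it would come out of the config parser:
metrics = {'innodb.bufferpool_*.pages_read': 'd'}

assert whitelist_key('innodb', 'bufferpool_3.pages_read') == 'innodb.bufferpool_*.pages_read'
assert whitelist_key('innodb', 'history_list') == 'innodb.history_list'
```

Note that the original per-pool key is still what gets queued for statsd; only the whitelist lookup uses the wildcard form.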
-------------------------------------------------------------------------------- /mysql_statsd/thread_mysql.py: -------------------------------------------------------------------------------- 1 | import time 2 | import re 3 | import MySQLdb as mdb 4 | import traceback 5 | from thread_base import ThreadBase 6 | from preprocessors import (MysqlPreprocessor, InnoDBPreprocessor, ColumnsPreprocessor) 7 | 8 | 9 | class ThreadMySQLMaxReconnectException(Exception): 10 | pass 11 | 12 | 13 | class ThreadMySQL(ThreadBase): 14 | """ Polls mysql and inserts data into queue """ 15 | is_running = True 16 | connection = None 17 | recovery_attempt = 0 18 | reconnect_delay = 5 19 | stats_checks = {} 20 | check_lastrun = {} 21 | 22 | def __init__(self, *args, **kwargs): 23 | super(ThreadMySQL, self).__init__(*args, **kwargs) 24 | self.processor_class_mysql = MysqlPreprocessor() 25 | self.processor_class_inno = InnoDBPreprocessor() 26 | self.processor_class_columns = ColumnsPreprocessor() 27 | 28 | def configure(self, config_dict): 29 | self.host = config_dict.get('mysql').get('host', 'localhost') 30 | self.port = config_dict.get('mysql').get('port', 3306) 31 | self.socket = config_dict.get('mysql').get('socket', None) 32 | 33 | self.username = config_dict.get('mysql').get('username', 'root') 34 | self.password = config_dict.get('mysql').get('password', '') 35 | 36 | self.max_reconnect = int(config_dict.get('mysql').get('max_reconnect', 5)) 37 | self.max_recovery = int(config_dict.get('mysql').get('max_recovery', 10)) 38 | 39 | #Set the stats checks for MySQL 40 | for stats_type in config_dict.get('mysql').get('stats_types').split(','): 41 | if config_dict.get('mysql').get('query_'+stats_type) and \ 42 | config_dict.get('mysql').get('interval_'+stats_type): 43 | 44 | self.stats_checks[stats_type] = { 45 | 'query': config_dict.get('mysql').get('query_'+stats_type), 46 | 'interval': config_dict.get('mysql').get('interval_'+stats_type) 47 | } 48 | self.check_lastrun[stats_type] = 
(time.time()*1000) 49 | 50 | self.sleep_interval = int(config_dict.get('mysql').get('sleep_interval', 500))/1000.0 51 | 52 | #Which metrics do we allow to be sent to the backend? 53 | self.metrics = config_dict.get('metrics') 54 | 55 | return self.host, self.port, self.sleep_interval 56 | 57 | def setup_connection(self): 58 | connection_attempt = 0 59 | 60 | while self.max_reconnect == 0 or connection_attempt < self.max_reconnect: 61 | try: 62 | if self.socket: 63 | self.connection = mdb.connect(user=self.username, unix_socket=self.socket, passwd=self.password) 64 | else: 65 | self.connection = mdb.connect(host=self.host, user=self.username, port=self.port, passwd=self.password) 66 | 67 | return self.connection 68 | except Exception: 69 | pass 70 | 71 | # If we got here, connection failed 72 | connection_attempt += 1 73 | print('Attempting reconnect #{0}...'.format(connection_attempt)) 74 | time.sleep(self.reconnect_delay) 75 | 76 | # If we get out of the while loop, we've passed max_reconnect 77 | raise ThreadMySQLMaxReconnectException 78 | 79 | 80 | def stop(self): 81 | """ Stop running this thread and close connection """ 82 | self.is_running = False 83 | try: 84 | if self.connection: 85 | self.connection.close() 86 | except Exception: 87 | """ Ignore exceptions thrown during closing connection """ 88 | pass 89 | 90 | def _run(self): 91 | for check_type in self.stats_checks: 92 | """ 93 | Only run a check if we exceeded the query threshold. 
94 | This is especially important for SHOW ENGINE INNODB STATUS, 95 | which locks the engine for a short period of time 96 | """ 97 | time_now = time.time()*1000 98 | check_threshold = float(self.stats_checks[check_type]['interval']) 99 | check_lastrun = self.check_lastrun[check_type] 100 | if (time_now - check_lastrun) > check_threshold: 101 | cursor = self.connection.cursor() 102 | cursor.execute(self.stats_checks[check_type]['query']) 103 | column_names = [i[0] for i in cursor.description] 104 | 105 | """ 106 | Preprocess rows. 107 | This transforms the innodb status blob into a row-like structure 108 | and allows pluggable modules; 109 | preprocessors should return a list of key/value tuples, e.g.: 110 | [('my_key', '1'), ('my_counter', '2'), ('another_metric', '666')] 111 | """ 112 | rows = self._preprocess(check_type, column_names, cursor.fetchall()) 113 | for key, value in rows: 114 | key = key.lower() 115 | metric_key = check_type + "." + key 116 | 117 | # Support multiple bufferpools in metrics (or rather: ignore them) 118 | # Basically bufferpool_* whitelists metrics for *all* bufferpools 119 | if key.startswith('bufferpool_'): 120 | # Rewrite the metric key to the whitelisted wildcard key 121 | whitelist_key = "{0}.{1}".format(check_type, re.sub(r'(.*bufferpool_)\d+(\..+)', r'\1*\2', key)) 122 | 123 | # Only allow the whitelisted metrics to be sent off to Statsd 124 | if whitelist_key in self.metrics: 125 | metric_type = self.metrics.get(whitelist_key) 126 | self.queue.put((metric_key, value, metric_type)) 127 | else: 128 | # Only allow the whitelisted metrics to be sent off to Statsd 129 | if metric_key in self.metrics: 130 | metric_type = self.metrics.get(metric_key) 131 | self.queue.put((metric_key, value, metric_type)) 132 | self.check_lastrun[check_type] = time_now 133 | 134 | def _preprocess(self, check_type, column_names, rows): 135 | """ 136 | Return rows unchanged when the check type is not innodb. 
137 | This is done to make it transparent for future transformation types 138 | """ 139 | extra_args = () 140 | 141 | executing_class = self.processor_class_mysql 142 | if check_type == 'innodb': 143 | executing_class = self.processor_class_inno 144 | if check_type == 'slave': 145 | executing_class = self.processor_class_columns 146 | extra_args = (column_names,) 147 | 148 | return executing_class.process(rows, *extra_args) 149 | 150 | def recover_errors(self, ex): 151 | """Decide whether we should continue.""" 152 | if self.max_recovery > 0 and self.recovery_attempt >= self.max_recovery: 153 | print("Giving up after {} consecutive errors".format(self.recovery_attempt)) 154 | raise 155 | 156 | self.recovery_attempt += 1 157 | print("Ignoring database error:") 158 | traceback.print_exc() 159 | 160 | # Server gone away requires we reset the connection. 161 | if ex.args[0] == 2006: 162 | self.connection.close() 163 | 164 | def run(self): 165 | """ Run forever """ 166 | if not self.connection: 167 | # Initial connection setup 168 | self.setup_connection() 169 | 170 | while self.is_running: 171 | if not self.connection.open: 172 | self.setup_connection() 173 | 174 | try: 175 | self._run() 176 | self.recovery_attempt = 0 177 | except mdb.DatabaseError as ex: 178 | self.recover_errors(ex) 179 | 180 | time.sleep(self.sleep_interval) 181 | -------------------------------------------------------------------------------- /mysql_statsd/thread_statsd.py: -------------------------------------------------------------------------------- 1 | import Queue 2 | import random 3 | import string 4 | import time 5 | import socket 6 | import distutils.util 7 | from pystatsd import statsd 8 | from thread_base import ThreadBase 9 | 10 | 11 | class ThreadGenerateGarbage(ThreadBase): 12 | """ 13 | Generates random (stat, value, type) tuples, 14 | where type is c = counter, t = timer, g = gauge, 15 | e.g. ('abc123de', 42, 'c') 16 | """ 17 | def gen_key(self): 18 | chars = string.ascii_lowercase + string.digits 19 | return
''.join(random.choice(chars) for x in range(8)) 20 | 21 | def run(self): 22 | while self.run: 23 | time.sleep(1) 24 | self.queue.put((self.gen_key(), random.randint(0, 1000), 'c')) 25 | 26 | 27 | class ThreadStatsd(ThreadBase): 28 | debug = False 29 | 30 | def configure(self, config): 31 | host = config.get('host', 'localhost') 32 | port = int(config.get('port', 8125)) 33 | prefix = config.get('prefix', 'mysql_statsd') 34 | if distutils.util.strtobool(config.get('include_hostname', 'false')):  # default was 'mysql_statsd', which strtobool() rejects 35 | prefix += "." + socket.gethostname().replace('.', '_') 36 | self.client = statsd.Client(host, port, prefix=prefix) 37 | 38 | def get_sender(self, t): 39 | if t == 'g': 40 | return self.client.gauge 41 | elif t in ['r', 'd']: 42 | return self.client.update_stats 43 | elif t == 'c': 44 | return self.client.incr 45 | elif t == 't': 46 | return self.client.timing 47 | 48 | def send_stat(self, item): 49 | (k, v, t) = item 50 | 51 | # Don't proceed if we don't have data 52 | if v is None: 53 | return False 54 | 55 | if t == 'd': 56 | delta = self.get_delta(k, v) 57 | if delta > 0: 58 | sender = self.get_sender(t) 59 | sender(k, float(delta)) 60 | else: 61 | sender = self.get_sender(t) 62 | sender(k, float(v)) 63 | 64 | def get_delta(self, k, v): 65 | if k in self.data: 66 | delta = float(v) - self.data[k] 67 | self.data[k] = float(v) 68 | return delta 69 | else: 70 | self.data[k] = float(v) 71 | return -1 72 | 73 | def run(self): 74 | while self.run: 75 | try: 76 | # Timeout after 1 second so we can respond to quit events 77 | item = self.queue.get(True, 1) 78 | if self.debug: 79 | print(item) 80 | self.send_stat(item) 81 | except Queue.Empty: 82 | continue 83 | 84 | 85 | class ThreadFakeStatsd(ThreadStatsd): 86 | """Prints metrics instead of sending them to statsd.""" 87 | def send_stat(self, item): 88 | print item 89 | 90 | 91 | if __name__ == '__main__': 92 | # Run standalone to test this module, it will generate garbage 93 | from thread_manager import ThreadManager
94 | q = Queue.Queue() 95 | 96 | threads = [ThreadGenerateGarbage(q), ThreadStatsd(q)] 97 | tm = ThreadManager(threads=threads) 98 | tm.run() 99 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | Jinja2==2.7.1 2 | MarkupSafe==0.18 3 | mysqlclient==1.3.6 4 | Pygments==1.6 5 | Sphinx==1.2b1 6 | argparse==1.2.1 7 | docutils==0.11 8 | py==1.4.15 9 | pystatsd==0.1.10 10 | tox==1.6.0 11 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import os 5 | import sys 6 | 7 | 8 | try: 9 | from setuptools import setup 10 | except ImportError: 11 | from distutils.core import setup 12 | 13 | if sys.argv[-1] == 'publish': 14 | os.system('python setup.py sdist upload') 15 | sys.exit() 16 | 17 | readme = open('README.rst').read() 18 | history = open('HISTORY.rst').read().replace('.. 
:changelog:', '') 19 | 20 | setup( 21 | name='mysql-statsd', 22 | version='0.1.5', 23 | description='Daemon that gathers statistics from MySQL and sends them to statsd.', 24 | long_description=readme + '\n\n' + history, 25 | author='Jasper Capel, Thijs de Zoete', 26 | author_email='jasper.capel@spilgames.com', 27 | url='https://github.com/spilgames/mysql-statsd', 28 | packages=[ 29 | 'mysql_statsd', 30 | ], 31 | package_dir={'mysql_statsd': 'mysql_statsd'}, 32 | entry_points={ 33 | 'console_scripts': [ 34 | 'mysql_statsd = mysql_statsd:mysql_statsd.MysqlStatsd' 35 | ] 36 | }, 37 | include_package_data=True, 38 | install_requires=[ 39 | 'MySQL-python==1.2.5', 40 | 'pystatsd==0.1.10', 41 | ], 42 | license="BSD", 43 | zip_safe=False, 44 | keywords='mysql_statsd', 45 | classifiers=[ 46 | 'Development Status :: 2 - Pre-Alpha', 47 | 'Intended Audience :: Developers', 48 | 'License :: OSI Approved :: BSD License', 49 | 'Natural Language :: English', 50 | "Programming Language :: Python :: 2", 51 | 'Programming Language :: Python :: 2.6', 52 | 'Programming Language :: Python :: 2.7', 53 | 'Programming Language :: Python :: 3', 54 | 'Programming Language :: Python :: 3.3', 55 | ], 56 | test_suite='tests', 57 | ) 58 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | import unittest 4 | import os 5 | from mysql_statsd.preprocessors import InnoDBPreprocessor 6 | 7 | class InnoDBPreprocessorTest(unittest.TestCase): 8 | def test_values_read_from_vanilla_install(self): 9 | """ 10 | For this mysqld version in default setup:: 11 | 12 | $ /usr/sbin/mysqld --version 13 | /usr/sbin/mysqld Ver 5.5.41-0+wheezy1 for debian-linux-gnu on x86_64 ((Debian)) 14 | 15 | A basic coverage test. 
16 | """ 17 | fixture = os.path.join(os.path.dirname(__file__), 'fixtures', 18 | 'show-innodb-status-5.5-vanilla') 19 | row = open(fixture, 'rb').read() 20 | 21 | processor = InnoDBPreprocessor() 22 | processed = processor.process([('InnoDB', '', row)]) 23 | expected = { 24 | 'additional_pool_alloc': '0', 25 | 'current_transactions': 1, 26 | 'database_pages': '142', 27 | 'empty': '', 28 | 'free_pages': '8049', 29 | 'hash_index_cells_total': '276671', 30 | 'hash_index_cells_used': 0, 31 | 'history_list': '0', 32 | 'ibuf_cell_count': '2', 33 | 'ibuf_free_cells': '0', 34 | 'ibuf_merges': '0', 35 | 'ibuf_used_cells': '1', 36 | 'innodb_transactions': 1282, 37 | 'last_checkpoint': '1595685', 38 | 'log_bytes_flushed': '1595685', 39 | 'log_bytes_written': '1595685', 40 | 'modified_pages': '0', 41 | 'os_waits': '0', 42 | 'pages_created': '0', 43 | 'pages_read': '142', 44 | 'pages_written': '1', 45 | 'pending_buf_pool_flushes': '0', 46 | 'pending_log_flushes': '0', 47 | 'pending_normal_aio_reads': '0', 48 | 'pending_normal_aio_writes': '0', 49 | 'pool_size': '8191', 50 | 'read_views': '1', 51 | 'rows_deleted': '0', 52 | 'rows_inserted': '0', 53 | 'rows_read': '0', 54 | 'rows_updated': '0', 55 | 'spin_rounds': '0', 56 | 'spin_waits': '0', 57 | 'total_mem_alloc': '137363456', 58 | 'unpurged_txns': 1282, } 59 | self.assertEquals(expected, dict(processed)) 60 | 61 | if __name__ == "__main__": 62 | unittest.main() 63 | -------------------------------------------------------------------------------- /tests/fixtures/show-innodb-status-5.5-vanilla: -------------------------------------------------------------------------------- 1 | ===================================== 2 | 150221 12:34:19 INNODB MONITOR OUTPUT 3 | ===================================== 4 | Per second averages calculated from the last 16 seconds 5 | ----------------- 6 | BACKGROUND THREAD 7 | ----------------- 8 | srv_master_thread loops: 2 1_second, 2 sleeps, 0 10_second, 3 background, 3 flush 9 | srv_master_thread 
log flush and writes: 3 10 | ---------- 11 | SEMAPHORES 12 | ---------- 13 | OS WAIT ARRAY INFO: reservation count 3, signal count 3 14 | Mutex spin waits 0, rounds 0, OS waits 0 15 | RW-shared spins 3, rounds 90, OS waits 3 16 | RW-excl spins 0, rounds 0, OS waits 0 17 | Spin rounds per wait: 0.00 mutex, 30.00 RW-shared, 0.00 RW-excl 18 | ------------ 19 | TRANSACTIONS 20 | ------------ 21 | Trx id counter 502 22 | Purge done for trx's n:o < 0 undo n:o < 0 23 | History list length 0 24 | LIST OF TRANSACTIONS FOR EACH SESSION: 25 | ---TRANSACTION 0, not started 26 | MySQL thread id 48, OS thread handle 0x7fc0ebabd700, query id 694 localhost root 27 | show engine innodb status 28 | -------- 29 | FILE I/O 30 | -------- 31 | I/O thread 0 state: waiting for completed aio requests (insert buffer thread) 32 | I/O thread 1 state: waiting for completed aio requests (log thread) 33 | I/O thread 2 state: waiting for completed aio requests (read thread) 34 | I/O thread 3 state: waiting for completed aio requests (read thread) 35 | I/O thread 4 state: waiting for completed aio requests (read thread) 36 | I/O thread 5 state: waiting for completed aio requests (read thread) 37 | I/O thread 6 state: waiting for completed aio requests (write thread) 38 | I/O thread 7 state: waiting for completed aio requests (write thread) 39 | I/O thread 8 state: waiting for completed aio requests (write thread) 40 | I/O thread 9 state: waiting for completed aio requests (write thread) 41 | Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] , 42 | ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 43 | Pending flushes (fsync) log: 0; buffer pool: 0 44 | 153 OS file reads, 7 OS file writes, 7 OS fsyncs 45 | 0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s 46 | ------------------------------------- 47 | INSERT BUFFER AND ADAPTIVE HASH INDEX 48 | ------------------------------------- 49 | Ibuf: size 1, free list len 0, seg size 2, 0 merges 50 | merged operations: 51 | 
insert 0, delete mark 0, delete 0 52 | discarded operations: 53 | insert 0, delete mark 0, delete 0 54 | Hash table size 276671, node heap has 0 buffer(s) 55 | 0.00 hash searches/s, 0.00 non-hash searches/s 56 | --- 57 | LOG 58 | --- 59 | Log sequence number 1595685 60 | Log flushed up to 1595685 61 | Last checkpoint at 1595685 62 | 0 pending log writes, 0 pending chkp writes 63 | 10 log i/o's done, 0.00 log i/o's/second 64 | ---------------------- 65 | BUFFER POOL AND MEMORY 66 | ---------------------- 67 | Total memory allocated 137363456; in additional pool allocated 0 68 | Dictionary memory allocated 33650 69 | Buffer pool size 8191 70 | Free buffers 8049 71 | Database pages 142 72 | Old database pages 0 73 | Modified db pages 0 74 | Pending reads 0 75 | Pending writes: LRU 0, flush list 0, single page 0 76 | Pages made young 0, not young 0 77 | 0.00 youngs/s, 0.00 non-youngs/s 78 | Pages read 142, created 0, written 1 79 | 0.00 reads/s, 0.00 creates/s, 0.00 writes/s 80 | No buffer pool page gets since the last printout 81 | Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s 82 | LRU len: 142, unzip_LRU len: 0 83 | I/O sum[0]:cur[0], unzip sum[0]:cur[0] 84 | -------------- 85 | ROW OPERATIONS 86 | -------------- 87 | 0 queries inside InnoDB, 0 queries in queue 88 | 1 read views open inside InnoDB 89 | Main thread process no. 
8651, id 140466251474688, state: waiting for server activity 90 | Number of rows inserted 0, updated 0, deleted 0, read 0 91 | 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s 92 | ---------------------------- 93 | END OF INNODB MONITOR OUTPUT 94 | ============================ 95 | 96 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py26, py27 3 | 4 | [testenv] 5 | setenv = 6 | PYTHONPATH = {toxinidir}:{toxinidir}/mysql_statsd 7 | commands = python setup.py test 8 | deps = 9 | -r{toxinidir}/requirements.txt --------------------------------------------------------------------------------
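The test in tests/__init__.py drives the preprocessor with the captured ``SHOW ENGINE INNODB STATUS`` fixture above and compares the result against a flat metric dict. As a rough illustration of that transformation, the sketch below reduces a handful of the fixture's lines to metrics. This is an assumption-laden sketch, not the project's actual ``InnoDBPreprocessor`` (whose implementation lives in mysql_statsd/preprocessors/innodb_preprocessor.py and is not shown here); the metric names are taken from the expected dict in the test, and the regexes are inferred from the fixture's line formats.

```python
# Illustrative sketch only -- NOT the project's InnoDBPreprocessor.
# Raw SHOW ENGINE INNODB STATUS text in, a flat {metric: value} dict out.
import re

# Metric names come from the expected dict in tests/__init__.py;
# the patterns are assumptions based on the fixture's line formats.
PATTERNS = {
    'pool_size': re.compile(r'^Buffer pool size\s+(\d+)', re.MULTILINE),
    'free_pages': re.compile(r'^Free buffers\s+(\d+)', re.MULTILINE),
    'database_pages': re.compile(r'^Database pages\s+(\d+)', re.MULTILINE),
    'last_checkpoint': re.compile(r'^Last checkpoint at\s+(\d+)', re.MULTILINE),
}


def parse_innodb_status(text):
    """Return a flat dict of every metric whose pattern matches."""
    metrics = {}
    for name, regex in PATTERNS.items():
        match = regex.search(text)
        if match:
            # Values are kept as strings, matching the expected dict in the test.
            metrics[name] = match.group(1)
    return metrics


# A few lines lifted from the fixture file:
sample = (
    "Buffer pool size        8191\n"
    "Free buffers            8049\n"
    "Database pages          142\n"
    "Last checkpoint at  1595685\n"
)
print(parse_innodb_status(sample))
```

Note how this matches the expected values in the test (``'pool_size': '8191'``, ``'free_pages': '8049'``, ``'database_pages': '142'``, ``'last_checkpoint': '1595685'``); the real preprocessor additionally normalizes a few counters to integers (e.g. ``innodb_transactions``).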