├── .coveragerc ├── .github └── workflows │ └── test.yml ├── .gitignore ├── COPYING ├── README.rst ├── fluent ├── __about__.py ├── __init__.py ├── asynchandler.py ├── asyncsender.py ├── event.py ├── handler.py └── sender.py ├── pyproject.toml ├── requirements-dev.txt ├── tests ├── __init__.py ├── mockserver.py ├── test_asynchandler.py ├── test_asyncsender.py ├── test_event.py ├── test_handler.py └── test_sender.py └── tox.ini /.coveragerc: -------------------------------------------------------------------------------- 1 | # http://nedbatchelder.com/code/coverage/config.html#config 2 | 3 | [run] 4 | branch = True 5 | omit = 6 | */tests/* 7 | fluent/__about__.py 8 | 9 | [report] 10 | omit = */tests/* 11 | -------------------------------------------------------------------------------- /.github/workflows/test.yml: -------------------------------------------------------------------------------- 1 | name: Run test 2 | 3 | on: 4 | push: 5 | branches: 6 | - master 7 | pull_request: 8 | 9 | jobs: 10 | lint: 11 | runs-on: ubuntu-latest 12 | steps: 13 | - uses: actions/checkout@v4 14 | - name: Install Ruff 15 | run: pipx install ruff 16 | - name: Ruff check 17 | run: ruff check 18 | - name: Ruff format 19 | run: ruff format --diff 20 | 21 | test: 22 | runs-on: ubuntu-latest 23 | strategy: 24 | matrix: 25 | python-version: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12", "pypy3.9", "pypy3.10"] 26 | steps: 27 | - uses: actions/checkout@v4 28 | - name: Set up Python 29 | uses: actions/setup-python@v5 30 | with: 31 | python-version: ${{ matrix.python-version }} 32 | cache: "pip" 33 | cache-dependency-path: requirements-dev.txt 34 | - name: Install dependencies 35 | run: python -m pip install -r requirements-dev.txt 36 | - name: Run tests 37 | run: pytest --cov=fluent 38 | 39 | build: 40 | needs: test 41 | runs-on: ubuntu-latest 42 | steps: 43 | - uses: actions/checkout@v4 44 | - run: pipx run build 45 | - uses: actions/upload-artifact@v4 46 | with: 47 | name: dist 48 | path: dist/ 49 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .Python 2 | *.swp 3 | *.swo 4 | *.pyc 5 | *.pyo 6 | /*.egg-info 7 | /.coverage 8 | /.eggs 9 | /.tox 10 | /build 11 | /dist 12 | .idea/ 13 | -------------------------------------------------------------------------------- /COPYING: -------------------------------------------------------------------------------- 1 | Copyright (C) 2011 FURUHASHI Sadayuki 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 14 | 15 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | A Python structured logger for Fluentd/Fluent Bit 2 | ================================================= 3 | 4 | Many web/mobile applications generate huge amount of event logs (c,f. 5 | login, logout, purchase, follow, etc). 
Analyzing these event logs can 6 | be really valuable for improving the service. However, the challenge is 7 | collecting these logs easily and reliably. 8 |
9 | `Fluentd `__ and `Fluent Bit `__ solve that problem by
10 | providing easy installation, a small footprint, plugins, reliable buffering,
11 | log forwarding, and more.
12 |
13 | **fluent-logger-python** is a Python library for recording events from
14 | Python applications.
15 |
16 | Requirements
17 | ------------
18 |
19 | - Python 3.7+
20 | - ``msgpack``
21 | - **IMPORTANT**: Version 0.8.0 is the last version supporting Python 2.6, 3.2 and 3.3
22 | - **IMPORTANT**: Version 0.9.6 is the last version supporting Python 2.7 and 3.4
23 | - **IMPORTANT**: Version 0.10.0 is the last version supporting Python 3.5 and 3.6
24 |
25 | Installation
26 | ------------
27 |
28 | This library is distributed as the 'fluent-logger' Python package. Install
29 | it with the following command:
30 |
31 | .. code:: sh
32 |
33 |     $ pip install fluent-logger
34 |
35 | Configuration
36 | -------------
37 |
38 | The Fluentd daemon must be launched with a ``forward`` (TCP) source configuration:
39 |
40 | ::
41 |
42 |     <source>
43 |       type forward
44 |       port 24224
45 |     </source>
46 |
47 | To quickly test your setup, add a matcher that logs to stdout:
48 |
49 | ::
50 |
51 |     <match **>
52 |       type stdout
53 |     </match>
54 |
55 | Usage
56 | -----
57 |
58 | FluentSender Interface
59 | ~~~~~~~~~~~~~~~~~~~~~~
60 |
61 | `sender.FluentSender` is a structured event logger for Fluentd.
62 |
63 | By default, the logger assumes the fluentd daemon is launched locally. You
64 | can also specify a remote logger by passing the host and port options.
65 |
66 | .. code:: python
67 |
68 |     from fluent import sender
69 |
70 |     # for local fluent
71 |     logger = sender.FluentSender('app')
72 |
73 |     # for remote fluent
74 |     logger = sender.FluentSender('app', host='host', port=24224)
75 |
76 | To send an event, call the `emit` method with your event. The following example sends an event to
77 | fluentd with the tag 'app.follow' and the attributes 'from' and 'to'.
78 |
79 | .. code:: python
80 |
81 |     # Use current time
82 |     logger.emit('follow', {'from': 'userA', 'to': 'userB'})
83 |
84 |     # Specify optional time
85 |     cur_time = int(time.time())
86 |     logger.emit_with_time('follow', cur_time, {'from': 'userA', 'to': 'userB'})
87 |
88 | To send events with nanosecond-precision timestamps (Fluentd 0.14 and up),
89 | specify `nanosecond_precision` on `FluentSender`.
90 |
91 | .. code:: python
92 |
93 |     # Use nanosecond
94 |     logger = sender.FluentSender('app', nanosecond_precision=True)
95 |     logger.emit('follow', {'from': 'userA', 'to': 'userB'})
96 |     logger.emit_with_time('follow', time.time(), {'from': 'userA', 'to': 'userB'})
97 |
98 | You can detect errors via the return value of `emit`. If an error occurs, `emit` returns `False`, and you can retrieve the error object from the `last_error` property.
99 |
100 | .. code:: python
101 |
102 |     if not logger.emit('follow', {'from': 'userA', 'to': 'userB'}):
103 |         print(logger.last_error)
104 |         logger.clear_last_error()  # clear the stored error once it has been handled
105 |
106 | To shut down the client, call the `close()` method.
107 |
108 | .. code:: python
109 |
110 |     logger.close()
111 |
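``FluentSender`` also implements the context-manager protocol (``__enter__``/``__exit__`` in ``fluent/sender.py``), so instead of calling ``close()`` explicitly you can wrap it in a ``with`` block. The sketch below is not from the original documentation, just a usage pattern built on that protocol:

.. code:: python

    from fluent import sender

    with sender.FluentSender('app', host='host', port=24224) as logger:
        logger.emit('follow', {'from': 'userA', 'to': 'userB'})
    # close() runs automatically here, flushing any pending data
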
112 | Event-Based Interface
113 | ~~~~~~~~~~~~~~~~~~~~~
114 |
115 | This API is a wrapper for `sender.FluentSender`.
116 |
117 | First, call ``sender.setup()`` to create a global `sender.FluentSender` logger
118 | instance. This needs to be done only once, for example at the beginning of
119 | the application.
120 |
121 | The initialization code for the event-based API is below:
122 |
123 | .. code:: python
124 |
125 |     from fluent import sender
126 |
127 |     # for local fluent
128 |     sender.setup('app')
129 |
130 |     # for remote fluent
131 |     sender.setup('app', host='host', port=24224)
132 |
133 | Then, create events like this. The following will send an event to
134 | fluentd with the tag 'app.follow' and the attributes 'from' and 'to'.
135 |
136 | .. code:: python
137 |
138 |     from fluent import event
139 |
140 |     # send event to fluentd, with 'app.follow' tag
141 |     event.Event('follow', {
142 |         'from': 'userA',
143 |         'to': 'userB'
144 |     })
145 |
146 | `event.Event` has one limitation: it cannot return a success/failure result.
147 |
148 | Other methods for the event-based interface:
149 |
150 | .. code:: python
151 |
152 |     sender.get_global_sender  # get the instance of the global sender
153 |     sender.close  # call FluentSender#close
154 |
155 | Handler for buffer overflow
156 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~
157 |
158 | You can inject your own custom proc to handle buffer overflow in the event of connection failure. This mitigates data loss instead of simply throwing the data away.
159 |
160 | .. code:: python
161 |
162 |     import msgpack
163 |     from io import BytesIO
164 |
165 |     def overflow_handler(pendings):
166 |         unpacker = msgpack.Unpacker(BytesIO(pendings))
167 |         for unpacked in unpacker:
168 |             print(unpacked)
169 |
170 |     logger = sender.FluentSender('app', host='host', port=24224, buffer_overflow_handler=overflow_handler)
171 |
172 | Your handler should catch any exceptions itself: fluent-logger ignores exceptions raised from ``buffer_overflow_handler``.
173 |
174 | This handler is also called when pending events exist during `close()`.
175 |
176 | Python logging.Handler interface
177 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
178 |
179 | This library also provides a ``FluentHandler`` class for the Python logging
180 | module.
181 |
182 | .. code:: python
183 |
184 |     import logging
185 |     from fluent import handler
186 |
187 |     custom_format = {
188 |         'host': '%(hostname)s',
189 |         'where': '%(module)s.%(funcName)s',
190 |         'type': '%(levelname)s',
191 |         'stack_trace': '%(exc_text)s'
192 |     }
193 |
194 |     logging.basicConfig(level=logging.INFO)
195 |     l = logging.getLogger('fluent.test')
196 |     h = handler.FluentHandler('app.follow', host='host', port=24224, buffer_overflow_handler=overflow_handler)
197 |     formatter = handler.FluentRecordFormatter(custom_format)
198 |     h.setFormatter(formatter)
199 |     l.addHandler(h)
200 |     l.info({
201 |         'from': 'userA',
202 |         'to': 'userB'
203 |     })
204 |     l.info('{"from": "userC", "to": "userD"}')
205 |     l.info("This log entry will be logged with the additional key: 'message'.")
206 |
207 | You can also customize the formatter via ``logging.config.dictConfig``:
208 |
209 | .. code:: python
210 |
211 |     import logging.config
212 |     import yaml
213 |
214 |     with open('logging.yaml') as fd:
215 |         conf = yaml.safe_load(fd)
216 |
217 |     logging.config.dictConfig(conf['logging'])
218 |
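``FluentRecordFormatter`` also accepts an ``exclude_attrs`` argument (see ``fluent/handler.py``): instead of mapping a ``fmt`` dict, it forwards every attribute of the ``LogRecord`` except the ones listed. A minimal sketch, not from the original documentation (the excluded attribute names are just examples):

.. code:: python

    import logging
    from fluent import handler

    h = handler.FluentHandler('app.follow', host='host', port=24224)
    # forward every LogRecord attribute except these
    h.setFormatter(handler.FluentRecordFormatter(exclude_attrs=('args', 'exc_info', 'msg')))
    logging.getLogger('fluent.test').addHandler(h)
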
219 | As before, you can provide a custom proc to handle buffer overflow on connection failure; the ``overflow_handler`` referenced in the sample configuration below could be defined like this:
220 |
221 | .. code:: python
222 |
223 |     import msgpack
224 |     from io import BytesIO
225 |
226 |     def overflow_handler(pendings):
227 |         unpacker = msgpack.Unpacker(BytesIO(pendings))
228 |         for unpacked in unpacker:
229 |             print(unpacked)
230 |
231 | A sample configuration ``logging.yaml`` would be:
232 |
233 | .. code:: yaml
234 |
235 |     logging:
236 |         version: 1
237 |
238 |         formatters:
239 |             brief:
240 |                 format: '%(message)s'
241 |             default:
242 |                 format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
243 |                 datefmt: '%Y-%m-%d %H:%M:%S'
244 |             fluent_fmt:
245 |                 '()': fluent.handler.FluentRecordFormatter
246 |                 format:
247 |                     level: '%(levelname)s'
248 |                     hostname: '%(hostname)s'
249 |                     where: '%(module)s.%(funcName)s'
250 |
251 |         handlers:
252 |             console:
253 |                 class : logging.StreamHandler
254 |                 level: DEBUG
255 |                 formatter: default
256 |                 stream: ext://sys.stdout
257 |             fluent:
258 |                 class: fluent.handler.FluentHandler
259 |                 host: localhost
260 |                 port: 24224
261 |                 tag: test.logging
262 |                 buffer_overflow_handler: overflow_handler
263 |                 formatter: fluent_fmt
264 |                 level: DEBUG
265 |             none:
266 |                 class: logging.NullHandler
267 |
268 |         loggers:
269 |             amqp:
270 |                 handlers: [none]
271 |                 propagate: False
272 |             conf:
273 |                 handlers: [none]
274 |                 propagate: False
275 |             '': # root logger
276 |                 handlers: [console, fluent]
277 |                 level: DEBUG
278 |                 propagate: False
279 |
280 | Asynchronous Communication
281 | ~~~~~~~~~~~~~~~~~~~~~~~~~~
282 |
283 | Besides the regular interfaces - the event-based one provided by ``sender.FluentSender`` and the Python logging one
284 | provided by ``handler.FluentHandler`` - there are also corresponding asynchronous versions in ``asyncsender`` and
285 | ``asynchandler`` respectively. These versions use a separate thread to handle the communication with the remote fluentd
286 | server. This way the client of the library is not blocked while logging events, does not risk a
287 | timeout if the fluentd server becomes unreachable, and is not slowed down by network overhead.
288 |
289 | The interfaces in ``asyncsender`` and ``asynchandler`` are exactly the same as those in ``sender`` and ``handler``, so it's
290 | just a matter of importing from a different module.
291 |
292 | For instance, for the event-based interface:
293 |
294 | .. code:: python
295 |
296 |     from fluent import asyncsender as sender
297 |
298 |     # for local fluent
299 |     sender.setup('app')
300 |
301 |     # for remote fluent
302 |     sender.setup('app', host='host', port=24224)
303 |
304 |     # do your work
305 |     ...
306 |
307 |     # IMPORTANT: before program termination, close the sender
308 |     sender.close()
309 |
310 | or for the Python logging interface:
311 |
312 | .. code:: python
313 |
314 |     import logging
315 |     from fluent import asynchandler as handler
316 |
317 |     custom_format = {
318 |         'host': '%(hostname)s',
319 |         'where': '%(module)s.%(funcName)s',
320 |         'type': '%(levelname)s',
321 |         'stack_trace': '%(exc_text)s'
322 |     }
323 |
324 |     logging.basicConfig(level=logging.INFO)
325 |     l = logging.getLogger('fluent.test')
326 |     h = handler.FluentHandler('app.follow', host='host', port=24224, buffer_overflow_handler=overflow_handler)
327 |     formatter = handler.FluentRecordFormatter(custom_format)
328 |     h.setFormatter(formatter)
329 |     l.addHandler(h)
330 |     l.info({
331 |         'from': 'userA',
332 |         'to': 'userB'
333 |     })
334 |     l.info('{"from": "userC", "to": "userD"}')
335 |     l.info("This log entry will be logged with the additional key: 'message'.")
336 |
337 |     ...
338 |
339 |     # IMPORTANT: before program termination, close the handler
340 |     h.close()
341 |
342 | **NOTE**: it is important to close the sender or the handler at program termination. This makes
343 | sure the communication thread terminates and is joined correctly. Otherwise the program won't exit: it will wait for
344 | the thread unless it is forcibly killed.
345 |
346 | Circular queue mode
347 | +++++++++++++++++++
348 |
349 | In some applications it can be especially important to guarantee that the logging process won't block under *any*
350 | circumstance, even when it's logging faster than the sending thread can handle (*backpressure*). In this case it's
351 | possible to enable the `circular queue` mode by passing `True` in the `queue_circular` parameter of
352 | ``asyncsender.FluentSender`` or ``asynchandler.FluentHandler``. This way the thread doing the logging won't block
353 | even when the queue is full: the new event is added to the queue by discarding the oldest one.
354 |
355 | **WARNING**: setting `queue_circular` to `True` will cause loss of events if the queue fills up completely! Make sure
356 | that this doesn't happen, or that it is acceptable for your application.
357 |
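As a concrete illustration, here is a sketch (not from the original documentation) of enabling circular-queue mode on the asynchronous sender, using the ``queue_maxsize`` and ``queue_circular`` parameters defined in ``fluent/asyncsender.py``:

.. code:: python

    from fluent import asyncsender as sender

    # keep at most 1000 pending events; when the queue is full,
    # the oldest event is discarded instead of blocking the caller
    logger = sender.FluentSender('app', host='host', port=24224,
                                 queue_maxsize=1000, queue_circular=True)
    logger.emit('follow', {'from': 'userA', 'to': 'userB'})
    logger.close()  # remember to close the sender before exiting
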
358 |
359 | Testing
360 | -------
361 |
362 | Testing can be done using `pytest `__.
363 |
364 | .. code:: sh
365 |
366 |     $ pytest tests
367 |
368 |
369 | Release
370 | -------
371 |
372 | .. code:: sh
373 |
374 |     $ # Download dist.zip for the release from the GitHub Actions artifacts.
375 |     $ unzip -d dist dist.zip
376 |     $ pipx run twine upload dist/*
377 |
378 |
379 | Contributors
380 | ------------
381 |
382 | Patches contributed by `those
383 | people `__.
384 |
385 | License
386 | -------
387 |
388 | Apache License, Version 2.0
389 |
--------------------------------------------------------------------------------
/fluent/__about__.py:
--------------------------------------------------------------------------------
1 | __version__ = "0.11.1"
2 |
--------------------------------------------------------------------------------
/fluent/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fluent/fluent-logger-python/8dc9a4312e45548ef0c3726cc3c5395191112ddb/fluent/__init__.py
--------------------------------------------------------------------------------
/fluent/asynchandler.py:
--------------------------------------------------------------------------------
1 | from fluent import asyncsender, handler
2 |
3 |
4 | class FluentHandler(handler.FluentHandler):
5 |     """
6 |     Asynchronous Logging Handler for fluent.
7 | """ 8 | 9 | def getSenderClass(self): 10 | return asyncsender.FluentSender 11 | -------------------------------------------------------------------------------- /fluent/asyncsender.py: -------------------------------------------------------------------------------- 1 | import threading 2 | from queue import Empty, Full, Queue 3 | 4 | from fluent import sender 5 | from fluent.sender import EventTime 6 | 7 | __all__ = ["EventTime", "FluentSender"] 8 | 9 | DEFAULT_QUEUE_MAXSIZE = 100 10 | DEFAULT_QUEUE_CIRCULAR = False 11 | 12 | _TOMBSTONE = object() 13 | 14 | _global_sender = None 15 | 16 | 17 | def _set_global_sender(sender): # pragma: no cover 18 | """[For testing] Function to set global sender directly""" 19 | global _global_sender 20 | _global_sender = sender 21 | 22 | 23 | def setup(tag, **kwargs): # pragma: no cover 24 | global _global_sender 25 | _global_sender = FluentSender(tag, **kwargs) 26 | 27 | 28 | def get_global_sender(): # pragma: no cover 29 | return _global_sender 30 | 31 | 32 | def close(): # pragma: no cover 33 | get_global_sender().close() 34 | 35 | 36 | class FluentSender(sender.FluentSender): 37 | def __init__( 38 | self, 39 | tag, 40 | host="localhost", 41 | port=24224, 42 | bufmax=1 * 1024 * 1024, 43 | timeout=3.0, 44 | verbose=False, 45 | buffer_overflow_handler=None, 46 | nanosecond_precision=False, 47 | msgpack_kwargs=None, 48 | queue_maxsize=DEFAULT_QUEUE_MAXSIZE, 49 | queue_circular=DEFAULT_QUEUE_CIRCULAR, 50 | queue_overflow_handler=None, 51 | **kwargs, 52 | ): 53 | """ 54 | :param kwargs: This kwargs argument is not used in __init__. This will be removed in the next major version. 55 | """ 56 | super().__init__( 57 | tag=tag, 58 | host=host, 59 | port=port, 60 | bufmax=bufmax, 61 | timeout=timeout, 62 | verbose=verbose, 63 | buffer_overflow_handler=buffer_overflow_handler, 64 | nanosecond_precision=nanosecond_precision, 65 | msgpack_kwargs=msgpack_kwargs, 66 | **kwargs, 67 | ) 68 | self._queue_maxsize = queue_maxsize 69 | self._queue_circular = queue_circular 70 | if queue_circular and queue_overflow_handler: 71 | self._queue_overflow_handler = queue_overflow_handler 72 | else: 73 | self._queue_overflow_handler = self._queue_overflow_handler_default 74 | 75 | self._thread_guard = ( 76 | threading.Event() 77 | ) # This ensures visibility across all variables 78 | self._closed = False 79 | 80 | self._queue = Queue(maxsize=queue_maxsize) 81 | self._send_thread = threading.Thread( 82 | target=self._send_loop, name="AsyncFluentSender %d" % id(self) 83 | ) 84 | self._send_thread.daemon = True 85 | self._send_thread.start() 86 | 87 | def close(self, flush=True): 88 | with self.lock: 89 | if self._closed: 90 | return 91 | self._closed = True 92 | if not flush: 93 | while True: 94 | try: 95 | self._queue.get(block=False) 96 | except Empty: 97 | break 98 | self._queue.put(_TOMBSTONE) 99 | self._send_thread.join() 100 | 101 | @property 102 | def queue_maxsize(self): 103 | return self._queue_maxsize 104 | 105 | @property 106 | def queue_blocking(self): 107 | return not self._queue_circular 108 | 109 | @property 110 | def queue_circular(self): 111 | return self._queue_circular 112 | 113 | def _send(self, bytes_): 114 | with self.lock: 115 | if self._closed: 116 | return False 117 | if self._queue_circular and self._queue.full(): 118 | # discard oldest 119 | try: 120 | discarded_bytes = self._queue.get(block=False) 121 | except Empty: # pragma: no cover 122 | pass 123 | else: 124 | self._queue_overflow_handler(discarded_bytes) 125 | try: 126 | self._queue.put(bytes_, 
block=(not self._queue_circular)) 127 | except Full: # pragma: no cover 128 | return False # this actually can't happen 129 | 130 | return True 131 | 132 | def _send_loop(self): 133 | send_internal = super()._send_internal 134 | 135 | try: 136 | while True: 137 | bytes_ = self._queue.get(block=True) 138 | if bytes_ is _TOMBSTONE: 139 | break 140 | 141 | send_internal(bytes_) 142 | finally: 143 | self._close() 144 | 145 | def _queue_overflow_handler_default(self, discarded_bytes): 146 | pass 147 | 148 | def __exit__(self, exc_type, exc_val, exc_tb): 149 | self.close() 150 | -------------------------------------------------------------------------------- /fluent/event.py: -------------------------------------------------------------------------------- 1 | from fluent import sender 2 | 3 | 4 | class Event: 5 | def __init__(self, label, data, **kwargs): 6 | assert isinstance(data, dict), "data must be a dict" 7 | sender_ = kwargs.get("sender", sender.get_global_sender()) 8 | timestamp = kwargs.get("time", None) 9 | if timestamp is not None: 10 | sender_.emit_with_time(label, timestamp, data) 11 | else: 12 | sender_.emit(label, data) 13 | -------------------------------------------------------------------------------- /fluent/handler.py: -------------------------------------------------------------------------------- 1 | import json 2 | import logging 3 | import socket 4 | 5 | from fluent import sender 6 | 7 | 8 | class FluentRecordFormatter(logging.Formatter): 9 | """A structured formatter for Fluent. 10 | 11 | Best used with server storing data in an ElasticSearch cluster for example. 12 | 13 | :param fmt: a dict or a callable with format string as values to map to provided keys. 14 | If callable, should accept a single argument `LogRecord` and return a dict, 15 | and have a field `usesTime` that is callable and return a bool as would 16 | `FluentRecordFormatter.usesTime` 17 | :param datefmt: strftime()-compatible date/time format string. 18 | :param style: '%', '{' or '$' (used only with Python 3.2 or above) 19 | :param fill_missing_fmt_key: if True, do not raise a KeyError if the format 20 | key is not found. Put None if not found. 21 | :param format_json: if True, will attempt to parse message as json. If not, 22 | will use message as-is. Defaults to True 23 | :param exclude_attrs: switches this formatter into a mode where all attributes 24 | except the ones specified by `exclude_attrs` are logged with the record as is. 25 | If `None`, operates as before, otherwise `fmt` is ignored. 26 | Can be an iterable. 
27 | """ 28 | 29 | def __init__( 30 | self, 31 | fmt=None, 32 | datefmt=None, 33 | style="%", 34 | fill_missing_fmt_key=False, 35 | format_json=True, 36 | exclude_attrs=None, 37 | ): 38 | super().__init__(None, datefmt) 39 | 40 | if style != "%": 41 | self.__style, basic_fmt_dict = { 42 | "{": ( 43 | logging.StrFormatStyle, 44 | { 45 | "sys_host": "{hostname}", 46 | "sys_name": "{name}", 47 | "sys_module": "{module}", 48 | }, 49 | ), 50 | "$": ( 51 | logging.StringTemplateStyle, 52 | { 53 | "sys_host": "${hostname}", 54 | "sys_name": "${name}", 55 | "sys_module": "${module}", 56 | }, 57 | ), 58 | }[style] 59 | else: 60 | self.__style = None 61 | basic_fmt_dict = { 62 | "sys_host": "%(hostname)s", 63 | "sys_name": "%(name)s", 64 | "sys_module": "%(module)s", 65 | } 66 | 67 | if exclude_attrs is not None: 68 | self._exc_attrs = set(exclude_attrs) 69 | self._fmt_dict = None 70 | self._formatter = self._format_by_exclusion 71 | self.usesTime = super().usesTime 72 | else: 73 | self._exc_attrs = None 74 | if not fmt: 75 | self._fmt_dict = basic_fmt_dict 76 | self._formatter = self._format_by_dict 77 | self.usesTime = self._format_by_dict_uses_time 78 | else: 79 | if callable(fmt): 80 | self._formatter = fmt 81 | self.usesTime = fmt.usesTime 82 | else: 83 | self._fmt_dict = fmt 84 | self._formatter = self._format_by_dict 85 | self.usesTime = self._format_by_dict_uses_time 86 | 87 | if format_json: 88 | self._format_msg = self._format_msg_json 89 | else: 90 | self._format_msg = self._format_msg_default 91 | 92 | self.hostname = socket.gethostname() 93 | 94 | self.fill_missing_fmt_key = fill_missing_fmt_key 95 | 96 | def format(self, record): 97 | # Compute attributes handled by parent class. 98 | super().format(record) 99 | # Add ours 100 | record.hostname = self.hostname 101 | 102 | # Apply format 103 | data = self._formatter(record) 104 | 105 | self._structuring(data, record) 106 | return data 107 | 108 | def usesTime(self): 109 | """This method is substituted on construction based on settings for performance reasons""" 110 | 111 | def _structuring(self, data, record): 112 | """Melds `msg` into `data`. 113 | 114 | :param data: dictionary to be sent to fluent server 115 | :param msg: :class:`LogRecord`'s message to add to `data`. 116 | `msg` can be a simple string for backward compatibility with 117 | :mod:`logging` framework, a JSON encoded string or a dictionary 118 | that will be merged into dictionary generated in :meth:`format. 
119 | """ 120 | msg = record.msg 121 | 122 | if isinstance(msg, dict): 123 | self._add_dic(data, msg) 124 | elif isinstance(msg, str): 125 | self._add_dic(data, self._format_msg(record, msg)) 126 | else: 127 | self._add_dic(data, {"message": msg}) 128 | 129 | def _format_msg_json(self, record, msg): 130 | try: 131 | json_msg = json.loads(str(msg)) 132 | if isinstance(json_msg, dict): 133 | return json_msg 134 | else: 135 | return self._format_msg_default(record, msg) 136 | except ValueError: 137 | return self._format_msg_default(record, msg) 138 | 139 | def _format_msg_default(self, record, msg): 140 | return {"message": super().format(record)} 141 | 142 | def _format_by_exclusion(self, record): 143 | data = {} 144 | for key, value in record.__dict__.items(): 145 | if key not in self._exc_attrs: 146 | data[key] = value 147 | return data 148 | 149 | def _format_by_dict(self, record): 150 | data = {} 151 | for key, value in self._fmt_dict.items(): 152 | try: 153 | if self.__style: 154 | value = self.__style(value).format(record) 155 | else: 156 | value = value % record.__dict__ 157 | except KeyError as exc: 158 | value = None 159 | if not self.fill_missing_fmt_key: 160 | raise exc 161 | 162 | data[key] = value 163 | return data 164 | 165 | def _format_by_dict_uses_time(self): 166 | if self.__style: 167 | search = self.__style.asctime_search 168 | else: 169 | search = "%(asctime)" 170 | return any([value.find(search) >= 0 for value in self._fmt_dict.values()]) 171 | 172 | @staticmethod 173 | def _add_dic(data, dic): 174 | for key, value in dic.items(): 175 | if isinstance(key, str): 176 | data[key] = value 177 | 178 | 179 | class FluentHandler(logging.Handler): 180 | """ 181 | Logging Handler for fluent. 182 | """ 183 | 184 | def __init__( 185 | self, 186 | tag, 187 | host="localhost", 188 | port=24224, 189 | timeout=3.0, 190 | verbose=False, 191 | buffer_overflow_handler=None, 192 | msgpack_kwargs=None, 193 | nanosecond_precision=False, 194 | **kwargs, 195 | ): 196 | self.tag = tag 197 | self._host = host 198 | self._port = port 199 | self._timeout = timeout 200 | self._verbose = verbose 201 | self._buffer_overflow_handler = buffer_overflow_handler 202 | self._msgpack_kwargs = msgpack_kwargs 203 | self._nanosecond_precision = nanosecond_precision 204 | self._kwargs = kwargs 205 | self._sender = None 206 | logging.Handler.__init__(self) 207 | 208 | def getSenderClass(self): 209 | return sender.FluentSender 210 | 211 | @property 212 | def sender(self): 213 | if self._sender is None: 214 | self._sender = self.getSenderInstance( 215 | tag=self.tag, 216 | host=self._host, 217 | port=self._port, 218 | timeout=self._timeout, 219 | verbose=self._verbose, 220 | buffer_overflow_handler=self._buffer_overflow_handler, 221 | msgpack_kwargs=self._msgpack_kwargs, 222 | nanosecond_precision=self._nanosecond_precision, 223 | **self._kwargs, 224 | ) 225 | return self._sender 226 | 227 | def getSenderInstance( 228 | self, 229 | tag, 230 | host, 231 | port, 232 | timeout, 233 | verbose, 234 | buffer_overflow_handler, 235 | msgpack_kwargs, 236 | nanosecond_precision, 237 | **kwargs, 238 | ): 239 | sender_class = self.getSenderClass() 240 | return sender_class( 241 | tag, 242 | host=host, 243 | port=port, 244 | timeout=timeout, 245 | verbose=verbose, 246 | buffer_overflow_handler=buffer_overflow_handler, 247 | msgpack_kwargs=msgpack_kwargs, 248 | nanosecond_precision=nanosecond_precision, 249 | **kwargs, 250 | ) 251 | 252 | def emit(self, record): 253 | data = self.format(record) 254 | _sender = self.sender 255 | 
return _sender.emit_with_time( 256 | None, 257 | sender.EventTime(record.created) 258 | if _sender.nanosecond_precision 259 | else int(record.created), 260 | data, 261 | ) 262 | 263 | def close(self): 264 | self.acquire() 265 | try: 266 | try: 267 | if self._sender is not None: 268 | self._sender.close() 269 | self._sender = None 270 | finally: 271 | super().close() 272 | finally: 273 | self.release() 274 | 275 | def __enter__(self): 276 | return self 277 | 278 | def __exit__(self, exc_type, exc_val, exc_tb): 279 | self.close() 280 | -------------------------------------------------------------------------------- /fluent/sender.py: -------------------------------------------------------------------------------- 1 | import errno 2 | import socket 3 | import struct 4 | import threading 5 | import time 6 | import traceback 7 | 8 | import msgpack 9 | 10 | _global_sender = None 11 | 12 | 13 | def _set_global_sender(sender): # pragma: no cover 14 | """[For testing] Function to set global sender directly""" 15 | global _global_sender 16 | _global_sender = sender 17 | 18 | 19 | def setup(tag, **kwargs): # pragma: no cover 20 | global _global_sender 21 | _global_sender = FluentSender(tag, **kwargs) 22 | 23 | 24 | def get_global_sender(): # pragma: no cover 25 | return _global_sender 26 | 27 | 28 | def close(): # pragma: no cover 29 | get_global_sender().close() 30 | 31 | 32 | class EventTime(msgpack.ExtType): 33 | def __new__(cls, timestamp, nanoseconds=None): 34 | seconds = int(timestamp) 35 | if nanoseconds is None: 36 | nanoseconds = int(timestamp % 1 * 10**9) 37 | return super().__new__( 38 | cls, 39 | code=0, 40 | data=struct.pack(">II", seconds, nanoseconds), 41 | ) 42 | 43 | @classmethod 44 | def from_unix_nano(cls, unix_nano): 45 | seconds, nanos = divmod(unix_nano, 10**9) 46 | return cls(seconds, nanos) 47 | 48 | 49 | class FluentSender: 50 | def __init__( 51 | self, 52 | tag, 53 | host="localhost", 54 | port=24224, 55 | bufmax=1 * 1024 * 1024, 56 | timeout=3.0, 57 | verbose=False, 58 | buffer_overflow_handler=None, 59 | nanosecond_precision=False, 60 | msgpack_kwargs=None, 61 | *, 62 | forward_packet_error=True, 63 | **kwargs, 64 | ): 65 | """ 66 | :param kwargs: This kwargs argument is not used in __init__. This will be removed in the next major version. 
67 | """ 68 | self.tag = tag 69 | self.host = host 70 | self.port = port 71 | self.bufmax = bufmax 72 | self.timeout = timeout 73 | self.verbose = verbose 74 | self.buffer_overflow_handler = buffer_overflow_handler 75 | self.nanosecond_precision = nanosecond_precision 76 | self.forward_packet_error = forward_packet_error 77 | self.msgpack_kwargs = {} if msgpack_kwargs is None else msgpack_kwargs 78 | 79 | self.socket = None 80 | self.pendings = None 81 | self.lock = threading.Lock() 82 | self._closed = False 83 | self._last_error_threadlocal = threading.local() 84 | 85 | def emit(self, label, data): 86 | if self.nanosecond_precision: 87 | cur_time = EventTime.from_unix_nano(time.time_ns()) 88 | else: 89 | cur_time = int(time.time()) 90 | return self.emit_with_time(label, cur_time, data) 91 | 92 | def emit_with_time(self, label, timestamp, data): 93 | try: 94 | bytes_ = self._make_packet(label, timestamp, data) 95 | except Exception as e: 96 | if not self.forward_packet_error: 97 | raise 98 | self.last_error = e 99 | bytes_ = self._make_packet( 100 | label, 101 | timestamp, 102 | { 103 | "level": "CRITICAL", 104 | "message": "Can't output to log", 105 | "traceback": traceback.format_exc(), 106 | }, 107 | ) 108 | return self._send(bytes_) 109 | 110 | @property 111 | def last_error(self): 112 | return getattr(self._last_error_threadlocal, "exception", None) 113 | 114 | @last_error.setter 115 | def last_error(self, err): 116 | self._last_error_threadlocal.exception = err 117 | 118 | def clear_last_error(self, _thread_id=None): 119 | if hasattr(self._last_error_threadlocal, "exception"): 120 | delattr(self._last_error_threadlocal, "exception") 121 | 122 | def close(self): 123 | with self.lock: 124 | if self._closed: 125 | return 126 | self._closed = True 127 | if self.pendings: 128 | try: 129 | self._send_data(self.pendings) 130 | except Exception: 131 | self._call_buffer_overflow_handler(self.pendings) 132 | 133 | self._close() 134 | self.pendings = None 135 | 136 | def _make_packet(self, label, timestamp, data): 137 | if label: 138 | tag = f"{self.tag}.{label}" if self.tag else label 139 | else: 140 | tag = self.tag 141 | if self.nanosecond_precision and isinstance(timestamp, float): 142 | timestamp = EventTime(timestamp) 143 | packet = (tag, timestamp, data) 144 | if self.verbose: 145 | print(packet) 146 | return msgpack.packb(packet, **self.msgpack_kwargs) 147 | 148 | def _send(self, bytes_): 149 | with self.lock: 150 | if self._closed: 151 | return False 152 | return self._send_internal(bytes_) 153 | 154 | def _send_internal(self, bytes_): 155 | # buffering 156 | if self.pendings: 157 | self.pendings += bytes_ 158 | bytes_ = self.pendings 159 | 160 | try: 161 | self._send_data(bytes_) 162 | 163 | # send finished 164 | self.pendings = None 165 | 166 | return True 167 | except OSError as e: 168 | self.last_error = e 169 | 170 | # close socket 171 | self._close() 172 | 173 | # clear buffer if it exceeds max buffer size 174 | if self.pendings and (len(self.pendings) > self.bufmax): 175 | self._call_buffer_overflow_handler(self.pendings) 176 | self.pendings = None 177 | else: 178 | self.pendings = bytes_ 179 | 180 | return False 181 | 182 | def _check_recv_side(self): 183 | try: 184 | self.socket.settimeout(0.0) 185 | try: 186 | recvd = self.socket.recv(4096) 187 | except OSError as recv_e: 188 | if recv_e.errno != errno.EWOULDBLOCK: 189 | raise 190 | return 191 | 192 | if recvd == b"": 193 | raise OSError(errno.EPIPE, "Broken pipe") 194 | finally: 195 | self.socket.settimeout(self.timeout) 196 
| 197 | def _send_data(self, bytes_): 198 | # reconnect if possible 199 | self._reconnect() 200 | # send message 201 | bytes_to_send = len(bytes_) 202 | bytes_sent = 0 203 | self._check_recv_side() 204 | while bytes_sent < bytes_to_send: 205 | sent = self.socket.send(bytes_[bytes_sent:]) 206 | if sent == 0: 207 | raise OSError(errno.EPIPE, "Broken pipe") 208 | bytes_sent += sent 209 | self._check_recv_side() 210 | 211 | def _reconnect(self): 212 | if not self.socket: 213 | try: 214 | if self.host.startswith("unix://"): 215 | sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 216 | sock.settimeout(self.timeout) 217 | sock.connect(self.host[len("unix://") :]) 218 | else: 219 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 220 | sock.settimeout(self.timeout) 221 | # This might be controversial and may need to be removed 222 | sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) 223 | sock.connect((self.host, self.port)) 224 | except Exception as e: 225 | try: 226 | sock.close() 227 | except Exception: # pragma: no cover 228 | pass 229 | raise e 230 | else: 231 | self.socket = sock 232 | 233 | def _call_buffer_overflow_handler(self, pending_events): 234 | try: 235 | if self.buffer_overflow_handler: 236 | self.buffer_overflow_handler(pending_events) 237 | except Exception: 238 | # User should care any exception in handler 239 | pass 240 | 241 | def _close(self): 242 | try: 243 | sock = self.socket 244 | if sock: 245 | try: 246 | try: 247 | sock.shutdown(socket.SHUT_RDWR) 248 | except OSError: # pragma: no cover 249 | pass 250 | finally: 251 | try: 252 | sock.close() 253 | except OSError: # pragma: no cover 254 | pass 255 | finally: 256 | self.socket = None 257 | 258 | def __enter__(self): 259 | return self 260 | 261 | def __exit__(self, typ, value, traceback): 262 | try: 263 | self.close() 264 | except Exception as e: # pragma: no cover 265 | self.last_error = e 266 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["hatchling"] 3 | build-backend = "hatchling.build" 4 | 5 | [project] 6 | name = "fluent-logger" 7 | dynamic = ["version"] 8 | description = "A Python logging handler for Fluentd event collector" 9 | readme = "README.rst" 10 | license = { file = "COPYING" } 11 | requires-python = ">=3.7" 12 | authors = [ 13 | { name = "Kazuki Ohta", email = "kazuki.ohta@gmail.com" }, 14 | ] 15 | maintainers = [ 16 | { name = "Arcadiy Ivanov", email = "arcadiy@ivanov.biz" }, 17 | { name = "Inada Naoki", email = "songofacandy@gmail.com" }, 18 | ] 19 | classifiers = [ 20 | "Development Status :: 5 - Production/Stable", 21 | "Intended Audience :: Developers", 22 | "Programming Language :: Python :: 3", 23 | "Programming Language :: Python :: 3.7", 24 | "Programming Language :: Python :: 3.8", 25 | "Programming Language :: Python :: 3.9", 26 | "Programming Language :: Python :: 3.10", 27 | "Programming Language :: Python :: 3.11", 28 | "Programming Language :: Python :: 3.12", 29 | "Programming Language :: Python :: Implementation :: CPython", 30 | "Programming Language :: Python :: Implementation :: PyPy", 31 | "Topic :: System :: Logging", 32 | ] 33 | dependencies = [ 34 | "msgpack>=1.0", 35 | ] 36 | 37 | [project.urls] 38 | Download = "https://pypi.org/project/fluent-logger/" 39 | Homepage = "https://github.com/fluent/fluent-logger-python" 40 | 41 | [tool.hatch.version] 42 | path = "fluent/__about__.py" 43 | 44 | 
[tool.hatch.build.targets.sdist] 45 | exclude = [ 46 | "/.github", 47 | "/.tox", 48 | "/.venv", 49 | ] 50 | 51 | [tool.hatch.build.targets.wheel] 52 | include = [ 53 | "/fluent", 54 | ] 55 | -------------------------------------------------------------------------------- /requirements-dev.txt: -------------------------------------------------------------------------------- 1 | pytest 2 | pytest-cov 3 | msgpack 4 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fluent/fluent-logger-python/8dc9a4312e45548ef0c3726cc3c5395191112ddb/tests/__init__.py -------------------------------------------------------------------------------- /tests/mockserver.py: -------------------------------------------------------------------------------- 1 | try: 2 | from cStringIO import StringIO as BytesIO 3 | except ImportError: 4 | from io import BytesIO 5 | 6 | import socket 7 | import threading 8 | 9 | from msgpack import Unpacker 10 | 11 | 12 | class MockRecvServer(threading.Thread): 13 | """ 14 | Single threaded server accepts one connection and recv until EOF. 15 | """ 16 | 17 | def __init__(self, host="localhost", port=0): 18 | super().__init__() 19 | 20 | if host.startswith("unix://"): 21 | self.socket_proto = socket.AF_UNIX 22 | self.socket_type = socket.SOCK_STREAM 23 | self.socket_addr = host[len("unix://") :] 24 | else: 25 | self.socket_proto = socket.AF_INET 26 | self.socket_type = socket.SOCK_STREAM 27 | self.socket_addr = (host, port) 28 | 29 | self._sock = socket.socket(self.socket_proto, self.socket_type) 30 | self._sock.bind(self.socket_addr) 31 | if self.socket_proto == socket.AF_INET: 32 | self.port = self._sock.getsockname()[1] 33 | 34 | self._sock.listen(1) 35 | self._buf = BytesIO() 36 | self._con = None 37 | 38 | self.start() 39 | 40 | def run(self): 41 | sock = self._sock 42 | 43 | try: 44 | try: 45 | con, _ = sock.accept() 46 | except Exception: 47 | return 48 | self._con = con 49 | try: 50 | while True: 51 | try: 52 | data = con.recv(16384) 53 | if not data: 54 | break 55 | self._buf.write(data) 56 | except OSError as e: 57 | print("MockServer error: %s" % e) 58 | break 59 | finally: 60 | con.close() 61 | finally: 62 | sock.close() 63 | 64 | def get_received(self): 65 | self.join() 66 | self._buf.seek(0) 67 | return list(Unpacker(self._buf)) 68 | 69 | def close(self): 70 | try: 71 | self._sock.close() 72 | except Exception: 73 | pass 74 | 75 | try: 76 | conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 77 | try: 78 | conn.connect((self.socket_addr[0], self.port)) 79 | finally: 80 | conn.close() 81 | except Exception: 82 | pass 83 | 84 | if self._con: 85 | try: 86 | self._con.close() 87 | except Exception: 88 | pass 89 | 90 | self.join() 91 | -------------------------------------------------------------------------------- /tests/test_asynchandler.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import unittest 3 | 4 | try: 5 | from unittest import mock 6 | except ImportError: 7 | from unittest import mock 8 | try: 9 | from unittest.mock import patch 10 | except ImportError: 11 | from unittest.mock import patch 12 | 13 | 14 | import fluent.asynchandler 15 | import fluent.handler 16 | from tests import mockserver 17 | 18 | 19 | def get_logger(name, level=logging.INFO): 20 | logger = logging.getLogger(name) 21 | logger.setLevel(level) 22 | return logger 23 | 24 
| 25 | class TestHandler(unittest.TestCase): 26 | def setUp(self): 27 | super().setUp() 28 | self._server = mockserver.MockRecvServer("localhost") 29 | self._port = self._server.port 30 | 31 | def tearDown(self): 32 | self._server.close() 33 | 34 | def get_handler_class(self): 35 | # return fluent.handler.FluentHandler 36 | return fluent.asynchandler.FluentHandler 37 | 38 | def get_data(self): 39 | return self._server.get_received() 40 | 41 | def test_simple(self): 42 | handler = self.get_handler_class()("app.follow", port=self._port) 43 | 44 | with handler: 45 | log = get_logger("fluent.test") 46 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 47 | log.addHandler(handler) 48 | log.info({"from": "userA", "to": "userB"}) 49 | 50 | data = self.get_data() 51 | eq = self.assertEqual 52 | eq(1, len(data)) 53 | eq(3, len(data[0])) 54 | eq("app.follow", data[0][0]) 55 | eq("userA", data[0][2]["from"]) 56 | eq("userB", data[0][2]["to"]) 57 | self.assertTrue(data[0][1]) 58 | self.assertTrue(isinstance(data[0][1], int)) 59 | 60 | def test_custom_fmt(self): 61 | handler = self.get_handler_class()("app.follow", port=self._port) 62 | 63 | with handler: 64 | log = get_logger("fluent.test") 65 | handler.setFormatter( 66 | fluent.handler.FluentRecordFormatter( 67 | fmt={ 68 | "name": "%(name)s", 69 | "lineno": "%(lineno)d", 70 | "emitted_at": "%(asctime)s", 71 | } 72 | ) 73 | ) 74 | log.addHandler(handler) 75 | log.info({"sample": "value"}) 76 | 77 | data = self.get_data() 78 | self.assertTrue("name" in data[0][2]) 79 | self.assertEqual("fluent.test", data[0][2]["name"]) 80 | self.assertTrue("lineno" in data[0][2]) 81 | self.assertTrue("emitted_at" in data[0][2]) 82 | 83 | def test_custom_fmt_with_format_style(self): 84 | handler = self.get_handler_class()("app.follow", port=self._port) 85 | 86 | with handler: 87 | log = get_logger("fluent.test") 88 | handler.setFormatter( 89 | fluent.handler.FluentRecordFormatter( 90 | fmt={ 91 | "name": "{name}", 92 | "lineno": "{lineno}", 93 | "emitted_at": "{asctime}", 94 | }, 95 | style="{", 96 | ) 97 | ) 98 | log.addHandler(handler) 99 | log.info({"sample": "value"}) 100 | 101 | data = self.get_data() 102 | self.assertTrue("name" in data[0][2]) 103 | self.assertEqual("fluent.test", data[0][2]["name"]) 104 | self.assertTrue("lineno" in data[0][2]) 105 | self.assertTrue("emitted_at" in data[0][2]) 106 | 107 | def test_custom_fmt_with_template_style(self): 108 | handler = self.get_handler_class()("app.follow", port=self._port) 109 | 110 | with handler: 111 | log = get_logger("fluent.test") 112 | handler.setFormatter( 113 | fluent.handler.FluentRecordFormatter( 114 | fmt={ 115 | "name": "${name}", 116 | "lineno": "${lineno}", 117 | "emitted_at": "${asctime}", 118 | }, 119 | style="$", 120 | ) 121 | ) 122 | log.addHandler(handler) 123 | log.info({"sample": "value"}) 124 | 125 | data = self.get_data() 126 | self.assertTrue("name" in data[0][2]) 127 | self.assertEqual("fluent.test", data[0][2]["name"]) 128 | self.assertTrue("lineno" in data[0][2]) 129 | self.assertTrue("emitted_at" in data[0][2]) 130 | 131 | def test_custom_field_raise_exception(self): 132 | handler = self.get_handler_class()("app.follow", port=self._port) 133 | 134 | with handler: 135 | log = get_logger("fluent.test") 136 | handler.setFormatter( 137 | fluent.handler.FluentRecordFormatter( 138 | fmt={"name": "%(name)s", "custom_field": "%(custom_field)s"} 139 | ) 140 | ) 141 | log.addHandler(handler) 142 | with self.assertRaises(KeyError): 143 | log.info({"sample": "value"}) 144 | 
log.removeHandler(handler) 145 | 146 | def test_custom_field_fill_missing_fmt_key_is_true(self): 147 | handler = self.get_handler_class()("app.follow", port=self._port) 148 | with handler: 149 | log = get_logger("fluent.test") 150 | handler.setFormatter( 151 | fluent.handler.FluentRecordFormatter( 152 | fmt={"name": "%(name)s", "custom_field": "%(custom_field)s"}, 153 | fill_missing_fmt_key=True, 154 | ) 155 | ) 156 | log.addHandler(handler) 157 | log.info({"sample": "value"}) 158 | log.removeHandler(handler) 159 | 160 | data = self.get_data() 161 | self.assertTrue("name" in data[0][2]) 162 | self.assertEqual("fluent.test", data[0][2]["name"]) 163 | self.assertTrue("custom_field" in data[0][2]) 164 | # field defaults to none if not in log record 165 | self.assertIsNone(data[0][2]["custom_field"]) 166 | 167 | def test_json_encoded_message(self): 168 | handler = self.get_handler_class()("app.follow", port=self._port) 169 | 170 | with handler: 171 | log = get_logger("fluent.test") 172 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 173 | log.addHandler(handler) 174 | log.info('{"key": "hello world!", "param": "value"}') 175 | 176 | data = self.get_data() 177 | self.assertTrue("key" in data[0][2]) 178 | self.assertEqual("hello world!", data[0][2]["key"]) 179 | 180 | def test_unstructured_message(self): 181 | handler = self.get_handler_class()("app.follow", port=self._port) 182 | 183 | with handler: 184 | log = get_logger("fluent.test") 185 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 186 | log.addHandler(handler) 187 | log.info("hello %s", "world") 188 | 189 | data = self.get_data() 190 | self.assertTrue("message" in data[0][2]) 191 | self.assertEqual("hello world", data[0][2]["message"]) 192 | 193 | def test_unstructured_formatted_message(self): 194 | handler = self.get_handler_class()("app.follow", port=self._port) 195 | 196 | with handler: 197 | log = get_logger("fluent.test") 198 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 199 | log.addHandler(handler) 200 | log.info("hello world, %s", "you!") 201 | 202 | data = self.get_data() 203 | self.assertTrue("message" in data[0][2]) 204 | self.assertEqual("hello world, you!", data[0][2]["message"]) 205 | 206 | def test_number_string_simple_message(self): 207 | handler = self.get_handler_class()("app.follow", port=self._port) 208 | 209 | with handler: 210 | log = get_logger("fluent.test") 211 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 212 | log.addHandler(handler) 213 | log.info("1") 214 | 215 | data = self.get_data() 216 | self.assertTrue("message" in data[0][2]) 217 | 218 | def test_non_string_simple_message(self): 219 | handler = self.get_handler_class()("app.follow", port=self._port) 220 | 221 | with handler: 222 | log = get_logger("fluent.test") 223 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 224 | log.addHandler(handler) 225 | log.info(42) 226 | 227 | data = self.get_data() 228 | self.assertTrue("message" in data[0][2]) 229 | 230 | def test_non_string_dict_message(self): 231 | handler = self.get_handler_class()("app.follow", port=self._port) 232 | 233 | with handler: 234 | log = get_logger("fluent.test") 235 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 236 | log.addHandler(handler) 237 | log.info({42: "root"}) 238 | 239 | data = self.get_data() 240 | # For some reason, non-string keys are ignored 241 | self.assertFalse(42 in data[0][2]) 242 | 243 | def test_exception_message(self): 244 | handler = self.get_handler_class()("app.follow", 
port=self._port) 245 | 246 | with handler: 247 | log = get_logger("fluent.test") 248 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 249 | log.addHandler(handler) 250 | try: 251 | raise Exception("sample exception") 252 | except Exception: 253 | log.exception("it failed") 254 | 255 | data = self.get_data() 256 | message = data[0][2]["message"] 257 | # Includes the logged message, as well as the stack trace. 258 | self.assertTrue("it failed" in message) 259 | self.assertTrue('tests/test_asynchandler.py", line' in message) 260 | self.assertTrue("Exception: sample exception" in message) 261 | 262 | 263 | class TestHandlerWithCircularQueue(unittest.TestCase): 264 | Q_SIZE = 3 265 | 266 | def setUp(self): 267 | super().setUp() 268 | self._server = mockserver.MockRecvServer("localhost") 269 | self._port = self._server.port 270 | 271 | def tearDown(self): 272 | self._server.close() 273 | 274 | def get_handler_class(self): 275 | # return fluent.handler.FluentHandler 276 | return fluent.asynchandler.FluentHandler 277 | 278 | def get_data(self): 279 | return self._server.get_received() 280 | 281 | def test_simple(self): 282 | handler = self.get_handler_class()( 283 | "app.follow", 284 | port=self._port, 285 | queue_maxsize=self.Q_SIZE, 286 | queue_circular=True, 287 | ) 288 | with handler: 289 | self.assertEqual(handler.sender.queue_circular, True) 290 | self.assertEqual(handler.sender.queue_maxsize, self.Q_SIZE) 291 | 292 | log = get_logger("fluent.test") 293 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 294 | log.addHandler(handler) 295 | log.info({"cnt": 1, "from": "userA", "to": "userB"}) 296 | log.info({"cnt": 2, "from": "userA", "to": "userB"}) 297 | log.info({"cnt": 3, "from": "userA", "to": "userB"}) 298 | log.info({"cnt": 4, "from": "userA", "to": "userB"}) 299 | log.info({"cnt": 5, "from": "userA", "to": "userB"}) 300 | 301 | data = self.get_data() 302 | eq = self.assertEqual 303 | # with the logging interface, we can't be sure to have filled up the queue, so we can 304 | # test only for a cautelative condition here 305 | self.assertTrue(len(data) >= self.Q_SIZE) 306 | 307 | el = data[0] 308 | eq(3, len(el)) 309 | eq("app.follow", el[0]) 310 | eq("userA", el[2]["from"]) 311 | eq("userB", el[2]["to"]) 312 | self.assertTrue(el[1]) 313 | self.assertTrue(isinstance(el[1], int)) 314 | 315 | 316 | class QueueOverflowException(BaseException): 317 | pass 318 | 319 | 320 | def queue_overflow_handler(discarded_bytes): 321 | raise QueueOverflowException(discarded_bytes) 322 | 323 | 324 | class TestHandlerWithCircularQueueHandler(unittest.TestCase): 325 | Q_SIZE = 1 326 | 327 | def setUp(self): 328 | super().setUp() 329 | self._server = mockserver.MockRecvServer("localhost") 330 | self._port = self._server.port 331 | 332 | def tearDown(self): 333 | self._server.close() 334 | 335 | def get_handler_class(self): 336 | # return fluent.handler.FluentHandler 337 | return fluent.asynchandler.FluentHandler 338 | 339 | def test_simple(self): 340 | handler = self.get_handler_class()( 341 | "app.follow", 342 | port=self._port, 343 | queue_maxsize=self.Q_SIZE, 344 | queue_circular=True, 345 | queue_overflow_handler=queue_overflow_handler, 346 | ) 347 | with handler: 348 | 349 | def custom_full_queue(): 350 | handler.sender._queue.put(b"Mock", block=True) 351 | return True 352 | 353 | with patch.object( 354 | fluent.asynchandler.asyncsender.Queue, 355 | "full", 356 | mock.Mock(side_effect=custom_full_queue), 357 | ): 358 | self.assertEqual(handler.sender.queue_circular, True) 359 | 
self.assertEqual(handler.sender.queue_maxsize, self.Q_SIZE) 360 | 361 | log = get_logger("fluent.test") 362 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 363 | log.addHandler(handler) 364 | 365 | exc_counter = 0 366 | 367 | try: 368 | log.info({"cnt": 1, "from": "userA", "to": "userB"}) 369 | except QueueOverflowException: 370 | exc_counter += 1 371 | 372 | try: 373 | log.info({"cnt": 2, "from": "userA", "to": "userB"}) 374 | except QueueOverflowException: 375 | exc_counter += 1 376 | 377 | try: 378 | log.info({"cnt": 3, "from": "userA", "to": "userB"}) 379 | except QueueOverflowException: 380 | exc_counter += 1 381 | 382 | # we can't be sure to have exception in every case due to multithreading, 383 | # so we can test only for a cautelative condition here 384 | print(f"Exception raised: {exc_counter} (expected 3)") 385 | assert exc_counter >= 0 386 | -------------------------------------------------------------------------------- /tests/test_asyncsender.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | 3 | import msgpack 4 | 5 | import fluent.asyncsender 6 | from tests import mockserver 7 | 8 | 9 | class TestSetup(unittest.TestCase): 10 | def tearDown(self): 11 | from fluent.asyncsender import _set_global_sender 12 | 13 | _set_global_sender(None) 14 | 15 | def test_no_kwargs(self): 16 | fluent.asyncsender.setup("tag") 17 | actual = fluent.asyncsender.get_global_sender() 18 | self.assertEqual(actual.tag, "tag") 19 | self.assertEqual(actual.host, "localhost") 20 | self.assertEqual(actual.port, 24224) 21 | self.assertEqual(actual.timeout, 3.0) 22 | actual.close() 23 | 24 | def test_host_and_port(self): 25 | fluent.asyncsender.setup("tag", host="myhost", port=24225) 26 | actual = fluent.asyncsender.get_global_sender() 27 | self.assertEqual(actual.tag, "tag") 28 | self.assertEqual(actual.host, "myhost") 29 | self.assertEqual(actual.port, 24225) 30 | self.assertEqual(actual.timeout, 3.0) 31 | actual.close() 32 | 33 | def test_tolerant(self): 34 | fluent.asyncsender.setup("tag", host="myhost", port=24225, timeout=1.0) 35 | actual = fluent.asyncsender.get_global_sender() 36 | self.assertEqual(actual.tag, "tag") 37 | self.assertEqual(actual.host, "myhost") 38 | self.assertEqual(actual.port, 24225) 39 | self.assertEqual(actual.timeout, 1.0) 40 | actual.close() 41 | 42 | 43 | class TestSender(unittest.TestCase): 44 | def setUp(self): 45 | super().setUp() 46 | self._server = mockserver.MockRecvServer("localhost") 47 | self._sender = fluent.asyncsender.FluentSender( 48 | tag="test", port=self._server.port 49 | ) 50 | 51 | def tearDown(self): 52 | try: 53 | self._sender.close() 54 | finally: 55 | self._server.close() 56 | 57 | def get_data(self): 58 | return self._server.get_received() 59 | 60 | def test_simple(self): 61 | with self._sender as sender: 62 | sender.emit("foo", {"bar": "baz"}) 63 | 64 | data = self.get_data() 65 | eq = self.assertEqual 66 | eq(1, len(data)) 67 | eq(3, len(data[0])) 68 | eq("test.foo", data[0][0]) 69 | eq({"bar": "baz"}, data[0][2]) 70 | self.assertTrue(data[0][1]) 71 | self.assertTrue(isinstance(data[0][1], int)) 72 | 73 | def test_decorator_simple(self): 74 | with self._sender as sender: 75 | sender.emit("foo", {"bar": "baz"}) 76 | 77 | data = self.get_data() 78 | eq = self.assertEqual 79 | eq(1, len(data)) 80 | eq(3, len(data[0])) 81 | eq("test.foo", data[0][0]) 82 | eq({"bar": "baz"}, data[0][2]) 83 | self.assertTrue(data[0][1]) 84 | self.assertTrue(isinstance(data[0][1], int)) 85 | 86 | def 
test_nanosecond(self): 87 | with self._sender as sender: 88 | sender.nanosecond_precision = True 89 | sender.emit("foo", {"bar": "baz"}) 90 | 91 | data = self.get_data() 92 | eq = self.assertEqual 93 | eq(1, len(data)) 94 | eq(3, len(data[0])) 95 | eq("test.foo", data[0][0]) 96 | eq({"bar": "baz"}, data[0][2]) 97 | self.assertTrue(isinstance(data[0][1], msgpack.ExtType)) 98 | eq(data[0][1].code, 0) 99 | 100 | def test_nanosecond_coerce_float(self): 101 | time_ = 1490061367.8616468906402588 102 | with self._sender as sender: 103 | sender.nanosecond_precision = True 104 | sender.emit_with_time("foo", time_, {"bar": "baz"}) 105 | 106 | data = self.get_data() 107 | eq = self.assertEqual 108 | eq(1, len(data)) 109 | eq(3, len(data[0])) 110 | eq("test.foo", data[0][0]) 111 | eq({"bar": "baz"}, data[0][2]) 112 | self.assertTrue(isinstance(data[0][1], msgpack.ExtType)) 113 | eq(data[0][1].code, 0) 114 | eq(data[0][1].data, b"X\xd0\x8873[\xb0*") 115 | 116 | def test_no_last_error_on_successful_emit(self): 117 | with self._sender as sender: 118 | sender.emit("foo", {"bar": "baz"}) 119 | 120 | self.assertEqual(sender.last_error, None) 121 | 122 | def test_last_error_property(self): 123 | EXCEPTION_MSG = "custom exception for testing last_error property" 124 | self._sender.last_error = OSError(EXCEPTION_MSG) 125 | 126 | self.assertEqual(self._sender.last_error.args[0], EXCEPTION_MSG) 127 | 128 | def test_clear_last_error(self): 129 | EXCEPTION_MSG = "custom exception for testing clear_last_error" 130 | self._sender.last_error = OSError(EXCEPTION_MSG) 131 | self._sender.clear_last_error() 132 | 133 | self.assertEqual(self._sender.last_error, None) 134 | 135 | @unittest.skip( 136 | "This test failed with 'TypeError: catching classes that do not " 137 | "inherit from BaseException is not allowed' so skipped" 138 | ) 139 | def test_connect_exception_during_sender_init(self, mock_socket): 140 | # Make the socket.socket().connect() call raise a custom exception 141 | mock_connect = mock_socket.socket.return_value.connect 142 | EXCEPTION_MSG = "a sender init socket connect() exception" 143 | mock_connect.side_effect = OSError(EXCEPTION_MSG) 144 | 145 | self.assertEqual(self._sender.last_error.args[0], EXCEPTION_MSG) 146 | 147 | def test_sender_without_flush(self): 148 | with self._sender as sender: 149 | sender._queue.put( 150 | fluent.asyncsender._TOMBSTONE 151 | ) # This closes without closing 152 | sender._send_thread.join() 153 | for x in range(1, 10): 154 | sender._queue.put(x) 155 | sender.close(False) 156 | self.assertIs(sender._queue.get(False), fluent.asyncsender._TOMBSTONE) 157 | 158 | 159 | class TestSenderDefaultProperties(unittest.TestCase): 160 | def setUp(self): 161 | super().setUp() 162 | self._server = mockserver.MockRecvServer("localhost") 163 | self._sender = fluent.asyncsender.FluentSender( 164 | tag="test", port=self._server.port 165 | ) 166 | 167 | def tearDown(self): 168 | try: 169 | self._sender.close() 170 | finally: 171 | self._server.close() 172 | 173 | def test_default_properties(self): 174 | with self._sender as sender: 175 | self.assertTrue(sender.queue_blocking) 176 | self.assertFalse(sender.queue_circular) 177 | self.assertTrue(isinstance(sender.queue_maxsize, int)) 178 | self.assertTrue(sender.queue_maxsize > 0) 179 | 180 | 181 | class TestSenderWithTimeout(unittest.TestCase): 182 | def setUp(self): 183 | super().setUp() 184 | self._server = mockserver.MockRecvServer("localhost") 185 | self._sender = fluent.asyncsender.FluentSender( 186 | tag="test", port=self._server.port, 
queue_timeout=0.04 187 | ) 188 | 189 | def tearDown(self): 190 | try: 191 | self._sender.close() 192 | finally: 193 | self._server.close() 194 | 195 | def get_data(self): 196 | return self._server.get_received() 197 | 198 | def test_simple(self): 199 | with self._sender as sender: 200 | sender.emit("foo", {"bar": "baz"}) 201 | 202 | data = self.get_data() 203 | eq = self.assertEqual 204 | eq(1, len(data)) 205 | eq(3, len(data[0])) 206 | eq("test.foo", data[0][0]) 207 | eq({"bar": "baz"}, data[0][2]) 208 | self.assertTrue(data[0][1]) 209 | self.assertTrue(isinstance(data[0][1], int)) 210 | 211 | def test_simple_with_timeout_props(self): 212 | with self._sender as sender: 213 | sender.emit("foo", {"bar": "baz"}) 214 | 215 | data = self.get_data() 216 | eq = self.assertEqual 217 | eq(1, len(data)) 218 | eq(3, len(data[0])) 219 | eq("test.foo", data[0][0]) 220 | eq({"bar": "baz"}, data[0][2]) 221 | self.assertTrue(data[0][1]) 222 | self.assertTrue(isinstance(data[0][1], int)) 223 | 224 | 225 | class TestEventTime(unittest.TestCase): 226 | def test_event_time(self): 227 | time = fluent.asyncsender.EventTime(1490061367.8616468906402588) 228 | self.assertEqual(time.code, 0) 229 | self.assertEqual(time.data, b"X\xd0\x8873[\xb0*") 230 | 231 | 232 | class TestSenderWithTimeoutAndCircular(unittest.TestCase): 233 | Q_SIZE = 3 234 | 235 | def setUp(self): 236 | super().setUp() 237 | self._server = mockserver.MockRecvServer("localhost") 238 | self._sender = fluent.asyncsender.FluentSender( 239 | tag="test", 240 | port=self._server.port, 241 | queue_maxsize=self.Q_SIZE, 242 | queue_circular=True, 243 | ) 244 | 245 | def tearDown(self): 246 | try: 247 | self._sender.close() 248 | finally: 249 | self._server.close() 250 | 251 | def get_data(self): 252 | return self._server.get_received() 253 | 254 | def test_simple(self): 255 | with self._sender as sender: 256 | self.assertEqual(self._sender.queue_maxsize, self.Q_SIZE) 257 | self.assertEqual(self._sender.queue_circular, True) 258 | self.assertEqual(self._sender.queue_blocking, False) 259 | 260 | ok = sender.emit("foo1", {"bar": "baz1"}) 261 | self.assertTrue(ok) 262 | ok = sender.emit("foo2", {"bar": "baz2"}) 263 | self.assertTrue(ok) 264 | ok = sender.emit("foo3", {"bar": "baz3"}) 265 | self.assertTrue(ok) 266 | ok = sender.emit("foo4", {"bar": "baz4"}) 267 | self.assertTrue(ok) 268 | ok = sender.emit("foo5", {"bar": "baz5"}) 269 | self.assertTrue(ok) 270 | 271 | data = self.get_data() 272 | eq = self.assertEqual 273 | # with the logging interface, we can't be sure to have filled up the queue, so we can 274 | # test only for a conservative condition here 275 | self.assertTrue(len(data) >= self.Q_SIZE) 276 | eq(3, len(data[0])) 277 | self.assertTrue(data[0][1]) 278 | self.assertTrue(isinstance(data[0][1], int)) 279 | 280 | eq(3, len(data[2])) 281 | self.assertTrue(data[2][1]) 282 | self.assertTrue(isinstance(data[2][1], int)) 283 | 284 | 285 | class TestSenderWithTimeoutMaxSizeNonCircular(unittest.TestCase): 286 | Q_SIZE = 3 287 | 288 | def setUp(self): 289 | super().setUp() 290 | self._server = mockserver.MockRecvServer("localhost") 291 | self._sender = fluent.asyncsender.FluentSender( 292 | tag="test", port=self._server.port, queue_maxsize=self.Q_SIZE 293 | ) 294 | 295 | def tearDown(self): 296 | try: 297 | self._sender.close() 298 | finally: 299 | self._server.close() 300 | 301 | def get_data(self): 302 | return self._server.get_received() 303 | 304 | def test_simple(self): 305 | with self._sender as sender: 306 | 
self.assertEqual(self._sender.queue_maxsize, self.Q_SIZE) 307 | self.assertEqual(self._sender.queue_blocking, True) 308 | self.assertEqual(self._sender.queue_circular, False) 309 | 310 | ok = sender.emit("foo1", {"bar": "baz1"}) 311 | self.assertTrue(ok) 312 | ok = sender.emit("foo2", {"bar": "baz2"}) 313 | self.assertTrue(ok) 314 | ok = sender.emit("foo3", {"bar": "baz3"}) 315 | self.assertTrue(ok) 316 | ok = sender.emit("foo4", {"bar": "baz4"}) 317 | self.assertTrue(ok) 318 | ok = sender.emit("foo5", {"bar": "baz5"}) 319 | self.assertTrue(ok) 320 | 321 | data = self.get_data() 322 | eq = self.assertEqual 323 | print(data) 324 | eq(5, len(data)) 325 | eq(3, len(data[0])) 326 | eq("test.foo1", data[0][0]) 327 | eq({"bar": "baz1"}, data[0][2]) 328 | self.assertTrue(data[0][1]) 329 | self.assertTrue(isinstance(data[0][1], int)) 330 | 331 | eq(3, len(data[2])) 332 | eq("test.foo3", data[2][0]) 333 | eq({"bar": "baz3"}, data[2][2]) 334 | 335 | 336 | class TestSenderUnlimitedSize(unittest.TestCase): 337 | Q_SIZE = 3 338 | 339 | def setUp(self): 340 | super().setUp() 341 | self._server = mockserver.MockRecvServer("localhost") 342 | self._sender = fluent.asyncsender.FluentSender( 343 | tag="test", port=self._server.port, queue_timeout=0.04, queue_maxsize=0 344 | ) 345 | 346 | def tearDown(self): 347 | try: 348 | self._sender.close() 349 | finally: 350 | self._server.close() 351 | 352 | def get_data(self): 353 | return self._server.get_received() 354 | 355 | def test_simple(self): 356 | with self._sender as sender: 357 | self.assertEqual(self._sender.queue_maxsize, 0) 358 | self.assertEqual(self._sender.queue_blocking, True) 359 | self.assertEqual(self._sender.queue_circular, False) 360 | 361 | NUM = 1000 362 | for i in range(1, NUM + 1): 363 | ok = sender.emit(f"foo{i}", {"bar": f"baz{i}"}) 364 | self.assertTrue(ok) 365 | 366 | data = self.get_data() 367 | eq = self.assertEqual 368 | eq(NUM, len(data)) 369 | el = data[0] 370 | eq(3, len(el)) 371 | eq("test.foo1", el[0]) 372 | eq({"bar": "baz1"}, el[2]) 373 | self.assertTrue(el[1]) 374 | self.assertTrue(isinstance(el[1], int)) 375 | 376 | el = data[NUM - 1] 377 | eq(3, len(el)) 378 | eq(f"test.foo{NUM}", el[0]) 379 | eq({"bar": f"baz{NUM}"}, el[2]) 380 | -------------------------------------------------------------------------------- /tests/test_event.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | 3 | from fluent import event, sender 4 | from tests import mockserver 5 | 6 | 7 | class TestException(BaseException): 8 | __test__ = False # teach pytest this is not test class. 
9 | 10 | 11 | class TestEvent(unittest.TestCase): 12 | def setUp(self): 13 | self._server = mockserver.MockRecvServer("localhost") 14 | sender.setup("app", port=self._server.port) 15 | 16 | def tearDown(self): 17 | from fluent.sender import _set_global_sender 18 | 19 | sender.close() 20 | _set_global_sender(None) 21 | 22 | def test_logging(self): 23 | # XXX: This test succeeds even if the fluentd connection fails 24 | # send event with tag app.follow 25 | event.Event("follow", {"from": "userA", "to": "userB"}) 26 | 27 | def test_logging_with_timestamp(self): 28 | # XXX: This test succeeds even if the fluentd connection fails 29 | 30 | # send event with tag app.follow, with timestamp 31 | event.Event("follow", {"from": "userA", "to": "userB"}, time=0) 32 | 33 | def test_no_last_error_on_successful_event(self): 34 | global_sender = sender.get_global_sender() 35 | event.Event("unfollow", {"from": "userC", "to": "userD"}) 36 | 37 | self.assertEqual(global_sender.last_error, None) 38 | sender.close() 39 | 40 | @unittest.skip( 41 | "This test failed with 'TypeError: catching classes that do not " 42 | "inherit from BaseException is not allowed' so skipped" 43 | ) 44 | def test_connect_exception_during_event_send(self, mock_socket): 45 | # Make the socket.socket().connect() call raise a custom exception 46 | mock_connect = mock_socket.socket.return_value.connect 47 | EXCEPTION_MSG = "an event send socket connect() exception" 48 | mock_connect.side_effect = TestException(EXCEPTION_MSG) 49 | 50 | # Force the socket to reconnect while trying to emit the event 51 | global_sender = sender.get_global_sender() 52 | global_sender._close() 53 | 54 | event.Event("unfollow", {"from": "userE", "to": "userF"}) 55 | 56 | ex = global_sender.last_error 57 | self.assertEqual(ex.args[0], EXCEPTION_MSG) 58 | global_sender.clear_last_error() 59 | -------------------------------------------------------------------------------- /tests/test_handler.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import unittest 3 | 4 | import fluent.handler 5 | from tests import mockserver 6 | 7 | 8 | def get_logger(name, level=logging.INFO): 9 | logger = logging.getLogger(name) 10 | logger.setLevel(level) 11 | return logger 12 | 13 | 14 | class TestHandler(unittest.TestCase): 15 | def setUp(self): 16 | super().setUp() 17 | self._server = mockserver.MockRecvServer("localhost") 18 | self._port = self._server.port 19 | 20 | def tearDown(self): 21 | self._server.close() 22 | 23 | def get_data(self): 24 | return self._server.get_received() 25 | 26 | def test_simple(self): 27 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 28 | 29 | with handler: 30 | log = get_logger("fluent.test") 31 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 32 | log.addHandler(handler) 33 | 34 | log.info({"from": "userA", "to": "userB"}) 35 | 36 | log.removeHandler(handler) 37 | 38 | data = self.get_data() 39 | eq = self.assertEqual 40 | eq(1, len(data)) 41 | eq(3, len(data[0])) 42 | eq("app.follow", data[0][0]) 43 | eq("userA", data[0][2]["from"]) 44 | eq("userB", data[0][2]["to"]) 45 | self.assertTrue(data[0][1]) 46 | self.assertTrue(isinstance(data[0][1], int)) 47 | 48 | def test_custom_fmt(self): 49 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 50 | 51 | with handler: 52 | log = get_logger("fluent.test") 53 | handler.setFormatter( 54 | fluent.handler.FluentRecordFormatter( 55 | fmt={ 56 | "name": "%(name)s", 57 | "lineno": "%(lineno)d", 
58 | "emitted_at": "%(asctime)s", 59 | } 60 | ) 61 | ) 62 | log.addHandler(handler) 63 | log.info({"sample": "value"}) 64 | log.removeHandler(handler) 65 | 66 | data = self.get_data() 67 | self.assertTrue("name" in data[0][2]) 68 | self.assertEqual("fluent.test", data[0][2]["name"]) 69 | self.assertTrue("lineno" in data[0][2]) 70 | self.assertTrue("emitted_at" in data[0][2]) 71 | 72 | def test_exclude_attrs(self): 73 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 74 | 75 | with handler: 76 | log = get_logger("fluent.test") 77 | handler.setFormatter(fluent.handler.FluentRecordFormatter(exclude_attrs=[])) 78 | log.addHandler(handler) 79 | log.info({"sample": "value"}) 80 | log.removeHandler(handler) 81 | 82 | data = self.get_data() 83 | self.assertTrue("name" in data[0][2]) 84 | self.assertEqual("fluent.test", data[0][2]["name"]) 85 | self.assertTrue("lineno" in data[0][2]) 86 | 87 | def test_exclude_attrs_with_exclusion(self): 88 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 89 | 90 | with handler: 91 | log = get_logger("fluent.test") 92 | handler.setFormatter( 93 | fluent.handler.FluentRecordFormatter(exclude_attrs=["funcName"]) 94 | ) 95 | log.addHandler(handler) 96 | log.info({"sample": "value"}) 97 | log.removeHandler(handler) 98 | 99 | data = self.get_data() 100 | self.assertTrue("name" in data[0][2]) 101 | self.assertEqual("fluent.test", data[0][2]["name"]) 102 | self.assertTrue("lineno" in data[0][2]) 103 | 104 | def test_exclude_attrs_with_extra(self): 105 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 106 | 107 | with handler: 108 | log = get_logger("fluent.test") 109 | handler.setFormatter(fluent.handler.FluentRecordFormatter(exclude_attrs=[])) 110 | log.addHandler(handler) 111 | log.info("Test with value '%s'", "test value", extra={"x": 1234}) 112 | log.removeHandler(handler) 113 | 114 | data = self.get_data() 115 | self.assertTrue("name" in data[0][2]) 116 | self.assertEqual("fluent.test", data[0][2]["name"]) 117 | self.assertTrue("lineno" in data[0][2]) 118 | self.assertEqual("Test with value 'test value'", data[0][2]["message"]) 119 | self.assertEqual(1234, data[0][2]["x"]) 120 | 121 | def test_format_dynamic(self): 122 | def formatter(record): 123 | return {"message": record.message, "x": record.x, "custom_value": 1} 124 | 125 | formatter.usesTime = lambda: True 126 | 127 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 128 | 129 | with handler: 130 | log = get_logger("fluent.test") 131 | handler.setFormatter(fluent.handler.FluentRecordFormatter(fmt=formatter)) 132 | log.addHandler(handler) 133 | log.info("Test with value '%s'", "test value", extra={"x": 1234}) 134 | log.removeHandler(handler) 135 | 136 | data = self.get_data() 137 | self.assertTrue("x" in data[0][2]) 138 | self.assertEqual(1234, data[0][2]["x"]) 139 | self.assertEqual(1, data[0][2]["custom_value"]) 140 | 141 | def test_custom_fmt_with_format_style(self): 142 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 143 | 144 | with handler: 145 | log = get_logger("fluent.test") 146 | handler.setFormatter( 147 | fluent.handler.FluentRecordFormatter( 148 | fmt={ 149 | "name": "{name}", 150 | "lineno": "{lineno}", 151 | "emitted_at": "{asctime}", 152 | }, 153 | style="{", 154 | ) 155 | ) 156 | log.addHandler(handler) 157 | log.info({"sample": "value"}) 158 | log.removeHandler(handler) 159 | 160 | data = self.get_data() 161 | self.assertTrue("name" in data[0][2]) 162 | self.assertEqual("fluent.test", 
data[0][2]["name"]) 163 | self.assertTrue("lineno" in data[0][2]) 164 | self.assertTrue("emitted_at" in data[0][2]) 165 | 166 | def test_custom_fmt_with_template_style(self): 167 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 168 | 169 | with handler: 170 | log = get_logger("fluent.test") 171 | handler.setFormatter( 172 | fluent.handler.FluentRecordFormatter( 173 | fmt={ 174 | "name": "${name}", 175 | "lineno": "${lineno}", 176 | "emitted_at": "${asctime}", 177 | }, 178 | style="$", 179 | ) 180 | ) 181 | log.addHandler(handler) 182 | log.info({"sample": "value"}) 183 | log.removeHandler(handler) 184 | 185 | data = self.get_data() 186 | self.assertTrue("name" in data[0][2]) 187 | self.assertEqual("fluent.test", data[0][2]["name"]) 188 | self.assertTrue("lineno" in data[0][2]) 189 | self.assertTrue("emitted_at" in data[0][2]) 190 | 191 | def test_custom_field_raise_exception(self): 192 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 193 | 194 | with handler: 195 | log = get_logger("fluent.test") 196 | handler.setFormatter( 197 | fluent.handler.FluentRecordFormatter( 198 | fmt={"name": "%(name)s", "custom_field": "%(custom_field)s"} 199 | ) 200 | ) 201 | log.addHandler(handler) 202 | 203 | with self.assertRaises(KeyError): 204 | log.info({"sample": "value"}) 205 | 206 | log.removeHandler(handler) 207 | 208 | def test_custom_field_fill_missing_fmt_key_is_true(self): 209 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 210 | 211 | with handler: 212 | log = get_logger("fluent.test") 213 | handler.setFormatter( 214 | fluent.handler.FluentRecordFormatter( 215 | fmt={"name": "%(name)s", "custom_field": "%(custom_field)s"}, 216 | fill_missing_fmt_key=True, 217 | ) 218 | ) 219 | log.addHandler(handler) 220 | log.info({"sample": "value"}) 221 | log.removeHandler(handler) 222 | 223 | data = self.get_data() 224 | self.assertTrue("name" in data[0][2]) 225 | self.assertEqual("fluent.test", data[0][2]["name"]) 226 | self.assertTrue("custom_field" in data[0][2]) 227 | # field defaults to none if not in log record 228 | self.assertIsNone(data[0][2]["custom_field"]) 229 | 230 | def test_json_encoded_message(self): 231 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 232 | 233 | with handler: 234 | log = get_logger("fluent.test") 235 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 236 | log.addHandler(handler) 237 | 238 | log.info('{"key": "hello world!", "param": "value"}') 239 | 240 | log.removeHandler(handler) 241 | 242 | data = self.get_data() 243 | self.assertTrue("key" in data[0][2]) 244 | self.assertEqual("hello world!", data[0][2]["key"]) 245 | 246 | def test_json_encoded_message_without_json(self): 247 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 248 | 249 | with handler: 250 | log = get_logger("fluent.test") 251 | handler.setFormatter( 252 | fluent.handler.FluentRecordFormatter(format_json=False) 253 | ) 254 | log.addHandler(handler) 255 | 256 | log.info('{"key": "hello world!", "param": "value"}') 257 | 258 | log.removeHandler(handler) 259 | 260 | data = self.get_data() 261 | self.assertTrue("key" not in data[0][2]) 262 | self.assertEqual( 263 | '{"key": "hello world!", "param": "value"}', data[0][2]["message"] 264 | ) 265 | 266 | def test_unstructured_message(self): 267 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 268 | 269 | with handler: 270 | log = get_logger("fluent.test") 271 | 
handler.setFormatter(fluent.handler.FluentRecordFormatter()) 272 | log.addHandler(handler) 273 | log.info("hello %s", "world") 274 | log.removeHandler(handler) 275 | 276 | data = self.get_data() 277 | self.assertTrue("message" in data[0][2]) 278 | self.assertEqual("hello world", data[0][2]["message"]) 279 | 280 | def test_unstructured_formatted_message(self): 281 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 282 | 283 | with handler: 284 | log = get_logger("fluent.test") 285 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 286 | log.addHandler(handler) 287 | log.info("hello world, %s", "you!") 288 | log.removeHandler(handler) 289 | 290 | data = self.get_data() 291 | self.assertTrue("message" in data[0][2]) 292 | self.assertEqual("hello world, you!", data[0][2]["message"]) 293 | 294 | def test_number_string_simple_message(self): 295 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 296 | 297 | with handler: 298 | log = get_logger("fluent.test") 299 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 300 | log.addHandler(handler) 301 | log.info("1") 302 | log.removeHandler(handler) 303 | 304 | data = self.get_data() 305 | self.assertTrue("message" in data[0][2]) 306 | 307 | def test_non_string_simple_message(self): 308 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 309 | 310 | with handler: 311 | log = get_logger("fluent.test") 312 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 313 | log.addHandler(handler) 314 | log.info(42) 315 | log.removeHandler(handler) 316 | 317 | data = self.get_data() 318 | self.assertTrue("message" in data[0][2]) 319 | 320 | def test_non_string_dict_message(self): 321 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 322 | 323 | with handler: 324 | log = get_logger("fluent.test") 325 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 326 | log.addHandler(handler) 327 | log.info({42: "root"}) 328 | log.removeHandler(handler) 329 | 330 | data = self.get_data() 331 | # For some reason, non-string keys are ignored 332 | self.assertFalse(42 in data[0][2]) 333 | 334 | def test_exception_message(self): 335 | handler = fluent.handler.FluentHandler("app.follow", port=self._port) 336 | 337 | with handler: 338 | log = get_logger("fluent.test") 339 | handler.setFormatter(fluent.handler.FluentRecordFormatter()) 340 | log.addHandler(handler) 341 | try: 342 | raise Exception("sample exception") 343 | except Exception: 344 | log.exception("it failed") 345 | log.removeHandler(handler) 346 | 347 | data = self.get_data() 348 | message = data[0][2]["message"] 349 | # Includes the logged message, as well as the stack trace. 
350 | self.assertTrue("it failed" in message) 351 | self.assertTrue('tests/test_handler.py", line' in message) 352 | self.assertTrue("Exception: sample exception" in message) 353 | -------------------------------------------------------------------------------- /tests/test_sender.py: -------------------------------------------------------------------------------- 1 | import errno 2 | import sys 3 | import unittest 4 | from shutil import rmtree 5 | from tempfile import mkdtemp 6 | 7 | import msgpack 8 | 9 | import fluent.sender 10 | from tests import mockserver 11 | 12 | 13 | class TestSetup(unittest.TestCase): 14 | def tearDown(self): 15 | from fluent.sender import _set_global_sender 16 | 17 | _set_global_sender(None) 18 | 19 | def test_no_kwargs(self): 20 | fluent.sender.setup("tag") 21 | actual = fluent.sender.get_global_sender() 22 | self.assertEqual(actual.tag, "tag") 23 | self.assertEqual(actual.host, "localhost") 24 | self.assertEqual(actual.port, 24224) 25 | self.assertEqual(actual.timeout, 3.0) 26 | 27 | def test_host_and_port(self): 28 | fluent.sender.setup("tag", host="myhost", port=24225) 29 | actual = fluent.sender.get_global_sender() 30 | self.assertEqual(actual.tag, "tag") 31 | self.assertEqual(actual.host, "myhost") 32 | self.assertEqual(actual.port, 24225) 33 | self.assertEqual(actual.timeout, 3.0) 34 | 35 | def test_tolerant(self): 36 | fluent.sender.setup("tag", host="myhost", port=24225, timeout=1.0) 37 | actual = fluent.sender.get_global_sender() 38 | self.assertEqual(actual.tag, "tag") 39 | self.assertEqual(actual.host, "myhost") 40 | self.assertEqual(actual.port, 24225) 41 | self.assertEqual(actual.timeout, 1.0) 42 | 43 | 44 | class TestSender(unittest.TestCase): 45 | def setUp(self): 46 | super().setUp() 47 | self._server = mockserver.MockRecvServer("localhost") 48 | self._sender = fluent.sender.FluentSender(tag="test", port=self._server.port) 49 | 50 | def tearDown(self): 51 | try: 52 | self._sender.close() 53 | finally: 54 | self._server.close() 55 | 56 | def get_data(self): 57 | return self._server.get_received() 58 | 59 | def test_simple(self): 60 | sender = self._sender 61 | sender.emit("foo", {"bar": "baz"}) 62 | sender._close() 63 | data = self.get_data() 64 | eq = self.assertEqual 65 | eq(1, len(data)) 66 | eq(3, len(data[0])) 67 | eq("test.foo", data[0][0]) 68 | eq({"bar": "baz"}, data[0][2]) 69 | self.assertTrue(data[0][1]) 70 | self.assertTrue(isinstance(data[0][1], int)) 71 | 72 | def test_decorator_simple(self): 73 | with self._sender as sender: 74 | sender.emit("foo", {"bar": "baz"}) 75 | data = self.get_data() 76 | eq = self.assertEqual 77 | eq(1, len(data)) 78 | eq(3, len(data[0])) 79 | eq("test.foo", data[0][0]) 80 | eq({"bar": "baz"}, data[0][2]) 81 | self.assertTrue(data[0][1]) 82 | self.assertTrue(isinstance(data[0][1], int)) 83 | 84 | def test_nanosecond(self): 85 | sender = self._sender 86 | sender.nanosecond_precision = True 87 | sender.emit("foo", {"bar": "baz"}) 88 | sender._close() 89 | data = self.get_data() 90 | eq = self.assertEqual 91 | eq(1, len(data)) 92 | eq(3, len(data[0])) 93 | eq("test.foo", data[0][0]) 94 | eq({"bar": "baz"}, data[0][2]) 95 | self.assertTrue(isinstance(data[0][1], msgpack.ExtType)) 96 | eq(data[0][1].code, 0) 97 | 98 | def test_nanosecond_coerce_float(self): 99 | time = 1490061367.8616468906402588 100 | sender = self._sender 101 | sender.nanosecond_precision = True 102 | sender.emit_with_time("foo", time, {"bar": "baz"}) 103 | sender._close() 104 | data = self.get_data() 105 | eq = self.assertEqual 106 | eq(1, 
len(data)) 107 | eq(3, len(data[0])) 108 | eq("test.foo", data[0][0]) 109 | eq({"bar": "baz"}, data[0][2]) 110 | self.assertTrue(isinstance(data[0][1], msgpack.ExtType)) 111 | eq(data[0][1].code, 0) 112 | eq(data[0][1].data, b"X\xd0\x8873[\xb0*") 113 | 114 | def test_no_last_error_on_successful_emit(self): 115 | sender = self._sender 116 | sender.emit("foo", {"bar": "baz"}) 117 | sender._close() 118 | 119 | self.assertEqual(sender.last_error, None) 120 | 121 | def test_last_error_property(self): 122 | EXCEPTION_MSG = "custom exception for testing last_error property" 123 | self._sender.last_error = OSError(EXCEPTION_MSG) 124 | 125 | self.assertEqual(self._sender.last_error.args[0], EXCEPTION_MSG) 126 | 127 | def test_clear_last_error(self): 128 | EXCEPTION_MSG = "custom exception for testing clear_last_error" 129 | self._sender.last_error = OSError(EXCEPTION_MSG) 130 | self._sender.clear_last_error() 131 | 132 | self.assertEqual(self._sender.last_error, None) 133 | self._sender.clear_last_error() 134 | self.assertEqual(self._sender.last_error, None) 135 | 136 | def test_emit_error(self): 137 | with self._sender as sender: 138 | sender.emit("blah", {"a": object()}) 139 | 140 | data = self._server.get_received() 141 | self.assertEqual(len(data), 1) 142 | self.assertEqual(data[0][2]["message"], "Can't output to log") 143 | 144 | def test_emit_error_no_forward(self): 145 | with self._sender as sender: 146 | sender.forward_packet_error = False 147 | with self.assertRaises(TypeError): 148 | sender.emit("blah", {"a": object()}) 149 | 150 | def test_emit_after_close(self): 151 | with self._sender as sender: 152 | self.assertTrue(sender.emit("blah", {"a": "123"})) 153 | sender.close() 154 | self.assertFalse(sender.emit("blah", {"a": "456"})) 155 | 156 | data = self._server.get_received() 157 | self.assertEqual(len(data), 1) 158 | self.assertEqual(data[0][2]["a"], "123") 159 | 160 | def test_verbose(self): 161 | with self._sender as sender: 162 | sender.verbose = True 163 | sender.emit("foo", {"bar": "baz"}) 164 | # No assertions here, just making sure there are no exceptions 165 | 166 | def test_failure_to_connect(self): 167 | self._server.close() 168 | 169 | with self._sender as sender: 170 | sender._send_internal(b"123") 171 | self.assertEqual(sender.pendings, b"123") 172 | self.assertIsNone(sender.socket) 173 | 174 | sender._send_internal(b"456") 175 | self.assertEqual(sender.pendings, b"123456") 176 | self.assertIsNone(sender.socket) 177 | 178 | sender.pendings = None 179 | overflows = [] 180 | 181 | def boh(buf): 182 | overflows.append(buf) 183 | 184 | def boh_with_error(buf): 185 | raise RuntimeError 186 | 187 | sender.buffer_overflow_handler = boh 188 | 189 | sender._send_internal(b"0" * sender.bufmax) 190 | self.assertFalse(overflows) # No overflow 191 | 192 | sender._send_internal(b"1") 193 | self.assertTrue(overflows) 194 | self.assertEqual(overflows.pop(0), b"0" * sender.bufmax + b"1") 195 | 196 | sender.buffer_overflow_handler = None 197 | sender._send_internal(b"0" * sender.bufmax) 198 | sender._send_internal(b"1") 199 | self.assertIsNone(sender.pendings) 200 | 201 | sender.buffer_overflow_handler = boh_with_error 202 | sender._send_internal(b"0" * sender.bufmax) 203 | sender._send_internal(b"1") 204 | self.assertIsNone(sender.pendings) 205 | 206 | sender._send_internal(b"1") 207 | self.assertFalse(overflows) # No overflow 208 | self.assertEqual(sender.pendings, b"1") 209 | self.assertIsNone(sender.socket) 210 | 211 | sender.buffer_overflow_handler = boh 212 | sender.close() 213 | 
self.assertEqual(overflows.pop(0), b"1") 214 | 215 | def test_broken_conn(self): 216 | with self._sender as sender: 217 | sender._send_internal(b"123") 218 | self.assertIsNone(sender.pendings, b"123") 219 | self.assertTrue(sender.socket) 220 | 221 | class FakeSocket: 222 | def __init__(self): 223 | self.to = 123 224 | self.send_side_effects = [3, 0, 9] 225 | self.send_idx = 0 226 | self.recv_side_effects = [ 227 | OSError(errno.EWOULDBLOCK, "Blah"), 228 | b"this data is going to be ignored", 229 | b"", 230 | OSError(errno.EWOULDBLOCK, "Blah"), 231 | OSError(errno.EWOULDBLOCK, "Blah"), 232 | OSError(errno.EACCES, "This error will never happen"), 233 | ] 234 | self.recv_idx = 0 235 | 236 | def send(self, bytes_): 237 | try: 238 | v = self.send_side_effects[self.send_idx] 239 | if isinstance(v, Exception): 240 | raise v 241 | if isinstance(v, type) and issubclass(v, Exception): 242 | raise v() 243 | return v 244 | finally: 245 | self.send_idx += 1 246 | 247 | def shutdown(self, mode): 248 | pass 249 | 250 | def close(self): 251 | pass 252 | 253 | def settimeout(self, to): 254 | self.to = to 255 | 256 | def gettimeout(self): 257 | return self.to 258 | 259 | def recv(self, bufsize, flags=0): 260 | try: 261 | v = self.recv_side_effects[self.recv_idx] 262 | if isinstance(v, Exception): 263 | raise v 264 | if isinstance(v, type) and issubclass(v, Exception): 265 | raise v() 266 | return v 267 | finally: 268 | self.recv_idx += 1 269 | 270 | old_sock = self._sender.socket 271 | sock = FakeSocket() 272 | 273 | try: 274 | self._sender.socket = sock 275 | sender.last_error = None 276 | self.assertTrue(sender._send_internal(b"456")) 277 | self.assertFalse(sender.last_error) 278 | 279 | self._sender.socket = sock 280 | sender.last_error = None 281 | self.assertFalse(sender._send_internal(b"456")) 282 | self.assertEqual(sender.last_error.errno, errno.EPIPE) 283 | 284 | self._sender.socket = sock 285 | sender.last_error = None 286 | self.assertFalse(sender._send_internal(b"456")) 287 | self.assertEqual(sender.last_error.errno, errno.EPIPE) 288 | 289 | self._sender.socket = sock 290 | sender.last_error = None 291 | self.assertFalse(sender._send_internal(b"456")) 292 | self.assertEqual(sender.last_error.errno, errno.EACCES) 293 | finally: 294 | self._sender.socket = old_sock 295 | 296 | @unittest.skipIf(sys.platform == "win32", "Unix socket not supported") 297 | def test_unix_socket(self): 298 | self.tearDown() 299 | tmp_dir = mkdtemp() 300 | try: 301 | server_file = "unix://" + tmp_dir + "/tmp.unix" 302 | self._server = mockserver.MockRecvServer(server_file) 303 | self._sender = fluent.sender.FluentSender(tag="test", host=server_file) 304 | with self._sender as sender: 305 | self.assertTrue(sender.emit("foo", {"bar": "baz"})) 306 | 307 | data = self._server.get_received() 308 | self.assertEqual(len(data), 1) 309 | self.assertEqual(data[0][2], {"bar": "baz"}) 310 | 311 | finally: 312 | rmtree(tmp_dir, True) 313 | 314 | 315 | class TestEventTime(unittest.TestCase): 316 | def test_event_time(self): 317 | time = fluent.sender.EventTime(1490061367.8616468906402588) 318 | self.assertEqual(time.code, 0) 319 | self.assertEqual(time.data, b"X\xd0\x8873[\xb0*") 320 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | minversion = 1.7.2 3 | envlist = py27, py32, py33, py34, py35, py36, py37, py38 4 | skip_missing_interpreters = True 5 | 6 | [testenv] 7 | deps = 8 | pytest 9 | 
pytest-cov 10 | msgpack 11 | commands = pytest --cov=fluent 12 | --------------------------------------------------------------------------------