├── .gitignore
├── .idea
│   ├── inspectionProfiles
│   │   └── profiles_settings.xml
│   └── modules.xml
├── README.md
├── git_kuai_log.py
├── img.png
├── img_1.png
├── img_2.png
├── kuai_log
│   ├── __init__.py
│   ├── _datetime.py
│   ├── kuai_log_record_logging_namespace.py
│   ├── logger.py
│   ├── rotate_file_writter.py
│   └── stream.py
├── nb_log_config.py
├── pycharm.env
├── setup.py
└── tests
    ├── compare_spped
    │   ├── nb_log_config.py
    │   ├── test_kuai_log.py
    │   ├── test_loguru.py
    │   ├── test_nb_log.py
    │   └── test_raw_log.py
    ├── find_log_test.py
    ├── kuai_log_example.py
    ├── kuai_log_test_multi_process.py
    ├── test_format.py
    ├── test_get_logging_name.py
    ├── test_logging_name.py
    ├── test_namespace.py
    ├── test_raw_log_pysnooper.py
    └── test_timestr.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Created by .ignore support plugin (hsz.mobi)
2 | .idea/
3 | env_hotels/
4 | henv/
5 | venv/
6 | *.pyc
7 | app/apis/logs/
8 | app/logs/
9 | *.log.*
10 | *.log
11 | *.lock
12 | *.pytest_cache*
13 | nohup.out
14 | apidoc/
15 | node_modules/
16 | hotelApi/
17 | my_patch_frame_config0000.py
18 | my_patch_frame_config_beifen.py
19 | test_frame/my_patch_frame_config.py
20 | function_result_web/
21 | test_frame/my/
22 | redis_queue_web/
23 | not_up_git/
24 | dist/
25 | *.egg-info/
--------------------------------------------------------------------------------
/.idea/inspectionProfiles/profiles_settings.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
--------------------------------------------------------------------------------
/.idea/modules.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## 1 kuai_log
2 |
3 | Install: pip install kuai_log
4 |
5 | kuai_log is the fastest Python logging library, about 30x faster than loguru and logging.
6 |
7 | kuai_log console logs are automatically colorized.
8 |
9 | kuai_log file logging is multiprocess-safe under rotation.
10 |
11 | kuai_log supports friendlier JSON file output, which suits ELK collection better. (For example, logger.info can log a dict directly and it is converted automatically.)
12 |
13 | Every method name and parameter of the kuai_log KuaiLogger object also exists on logging.Logger with the same signature; although kuai_log is not built on logging, it keeps good compatibility.
14 |
15 | ### 1.2 Why use kuai_log
16 |
17 | ```
18 | Users are tired of the logging package's way of creating a logger, creating a handler, setting a formatter on the handler, and adding the handler to the logger; logging feels complex, which pushes users toward loguru.
19 |
20 | But loguru is incompatible with the logging package and has its own complexity. For example, having loguru capture the existing logs of the Flask framework is awkward: adding a handler to an existing logging namespace is hard with loguru.
21 | For instance, capturing flask werkzeug request logs or tornado.access logs with loguru is troublesome.
22 |
23 | ```
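
For comparison, the multi-step logging-package ceremony described above looks like this; a minimal sketch with the standard library (the logger name is illustrative):

```python
import logging

logger = logging.getLogger('my_app')              # 1. create a logger
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()                 # 2. create a handler
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)                   # 3. set a formatter on the handler
logger.addHandler(handler)                        # 4. add the handler to the logger

logger.debug('hello')
```

kuai_log's get_logger collapses these four steps into a single call.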
24 |
25 | ## 2 Performance comparison
26 |
27 | On win11 + r7 4800h,
28 |
29 | a single process writes 100,000 log entries in a for loop to both the console and a file (the exact benchmark code in tests/compare_spped can be run directly to verify).
30 |
31 | ```
32 | loguru   writing to file and console, 100,000 logs took 48 seconds
33 |
34 | logging  writing to file and console, 100,000 logs took 45 seconds
35 |
36 | nb_log   writing to file and console, 100,000 logs took 7 seconds
37 |
38 | kuai_log writing to file and console, 100,000 logs took 1.7 seconds
39 | ```
40 |
41 | ## 3 kuai_log code that writes 100,000 logs to a file and the console:
42 |
43 | ```python
44 | import logging
45 | import time
46 |
47 | from kuai_log import get_logger
48 |
49 | logger = get_logger(name='test', level=logging.DEBUG, log_filename='t5.log',
50 | log_path='/pythonlogs', is_add_file_handler=True,
51 | )
52 |
53 | t_start = time.time()
54 | for i in range(100000):
55 | logger.debug(i)
56 | print(time.time() - t_start)
57 | ```
58 |
59 | ### 3.2 kuai_log uses the same log levels as common loggers: debug info warning error critical
60 |
61 | ```python
62 | import logging
63 | import time
64 |
65 | from kuai_log import get_logger
66 |
67 | logger = get_logger(name='test77', level=logging.DEBUG, log_filename='t777.log',
68 | log_path='/pythonlogs', is_add_file_handler=True,
69 | json_log_path='/python_logs_json', is_add_json_file_handler=True,
70 | )
71 |
72 | t_start = time.time()
73 | for i in range(1):
74 | logger.debug(i)
75 | logger.info(i)
76 | logger.warning(i)
77 | logger.error(i)
78 | logger.critical(i)
79 |
80 | print(time.time() - t_start)
81 | ```
82 |
83 | ## 4 kuai_log is simpler to use than logging and loguru.
84 |
85 | ### 4.1 Some ask: is instantiating a logger not less convenient than loguru's `from loguru import logger`?
86 |
87 | ```
88 | Because logging behavior is independent per name: in a large project users have many loggers, each behaving differently.
89 | For example, some functions should only emit logs at a high level, while other modules need a low level to emit;
90 | some modules' logs need to be written to a file, while other modules' logs do not.
91 | So distinguishing loggers by different names is best.
92 | ```
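
The per-name independence described above can be illustrated with the standard logging package (the names are illustrative); kuai_log's get_logger follows the same idea:

```python
import logging

noisy = logging.getLogger('my_app.noisy_module')
noisy.setLevel(logging.WARNING)    # this module only emits high-level logs

verbose = logging.getLogger('my_app.verbose_module')
verbose.setLevel(logging.DEBUG)    # this module emits everything

# each name keeps its own, independent behavior
print(noisy.isEnabledFor(logging.DEBUG))    # False
print(verbose.isEnabledFor(logging.DEBUG))  # True
```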
93 |
94 | ### 4.2 For users who want the lazy loguru style: import and use directly, no manual logger instantiation
95 |
96 | kuai_log ships a pre-instantiated k_logger object. Could it be any easier?
97 |
98 | ```python
99 | from kuai_log import k_logger
100 |
101 | k_logger.debug('hello')
102 | ```
103 |
104 | ## 5 Why is kuai_log fast?
105 |
106 | Because the logging system is feature-rich, powerful, and extensible, its code is relatively complex. kuai_log is not a wrapper around the logging package; it is written completely from scratch, and the simpler code naturally performs better.
107 |
108 | kuai_log's multiprocess-safe file rotation uses a special approach, so it both avoids rotation errors and preserves performance.
109 |
110 | nb_log is built on logging; kuai_log is handwritten.
111 |
112 | ## 6 kuai_log screenshots
113 |
114 |
115 | 
--------------------------------------------------------------------------------
/git_kuai_log.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | import subprocess
3 | import os
4 | import time
5 |
6 | # os.environ["path"] = r'C:\Program Files\Git\mingw64\libexec\git-core'
7 | def getstatusoutput(cmd):
8 | try:
9 | data = subprocess.check_output(cmd, shell=True, universal_newlines=True,
10 | stderr=subprocess.STDOUT, encoding='utf8')
11 | exitcode = 0
12 | except subprocess.CalledProcessError as ex:
13 | data = ex.output
14 | exitcode = ex.returncode
15 | if data[-1:] == '\n':
16 | data = data[:-1]
17 | return exitcode, data
18 |
19 |
20 | def do_cmd(cmd_strx):
21 | print(f'running {cmd_strx}')
22 | retx = getstatusoutput(cmd_strx)
23 | print(retx[0])
24 | # if retx[0] != 0:
25 | # raise ValueError('check the git commit')
26 | print(retx[1], '\n')
27 | return retx
28 |
29 |
30 | t0 = time.time()
31 |
32 | do_cmd('git pull')
33 |
34 | do_cmd('git diff')
35 |
36 | do_cmd('git add .')
37 |
38 | do_cmd('git commit -m commit')
39 |
40 | do_cmd('git push origin')
41 |
42 | # do_cmd('git push github')
43 |
44 | # print(subprocess.getstatusoutput('git push github'))
45 | print(f'{time.strftime("%H:%M:%S")} spend_time {time.time() - t0}')
46 | time.sleep(100000)
47 |
--------------------------------------------------------------------------------
/img.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ydf0509/kuai_log/161ec9bac6afa18436e24a8b894762b46f77afc5/img.png
--------------------------------------------------------------------------------
/img_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ydf0509/kuai_log/161ec9bac6afa18436e24a8b894762b46f77afc5/img_1.png
--------------------------------------------------------------------------------
/img_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ydf0509/kuai_log/161ec9bac6afa18436e24a8b894762b46f77afc5/img_2.png
--------------------------------------------------------------------------------
/kuai_log/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import logging
4 | from .logger import get_logger
5 |
6 | k_logger = get_logger('kuai_logger')
--------------------------------------------------------------------------------
/kuai_log/_datetime.py:
--------------------------------------------------------------------------------
1 |
2 | import re
3 | import sys
4 | from calendar import day_abbr, day_name, month_abbr, month_name
5 | from datetime import datetime as datetime_
6 | from datetime import timedelta, timezone
7 | from time import localtime, strftime
8 |
9 | tokens = r"H{1,2}|h{1,2}|m{1,2}|s{1,2}|S{1,6}|YYYY|YY|M{1,4}|D{1,4}|Z{1,2}|zz|A|X|x|E|Q|dddd|ddd|d"
10 |
11 | pattern = re.compile(r"(?:{0})|\[(?:{0}|!UTC)\]".format(tokens))
12 |
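
The alternation above deliberately lists longer tokens before shorter ones (`HH` before `H`, `YYYY` before `YY`) so the regex engine matches the longest spec first. A minimal, self-contained sketch of the same substitution technique (the token table here is illustrative):

```python
import re

rep = {'YYYY': '2023', 'YY': '23', 'MM': '07', 'DD': '15'}
# longer alternatives must come first, or 'YYYY' would match as two 'YY's
token_pattern = re.compile(r'YYYY|YY|MM|DD')

def render(spec):
    return token_pattern.sub(lambda m: rep[m.group(0)], spec)

print(render('YYYY-MM-DD'))  # 2023-07-15
print(render('YY/MM'))       # 23/07
```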
13 |
14 | class datetime(datetime_): # noqa: N801
15 | def __format__(self, spec):
16 | if spec.endswith("!UTC"):
17 | dt = self.astimezone(timezone.utc)
18 | spec = spec[:-4]
19 | else:
20 | dt = self
21 |
22 | if not spec:
23 | spec = "%Y-%m-%dT%H:%M:%S.%f%z"
24 |
25 | if "%" in spec:
26 | return datetime_.__format__(dt, spec)
27 |
28 | year, month, day, hour, minute, second, weekday, yearday, _ = dt.timetuple()
29 | microsecond = dt.microsecond
30 | timestamp = dt.timestamp()
31 | tzinfo = dt.tzinfo or timezone(timedelta(seconds=0))
32 | offset = tzinfo.utcoffset(dt).total_seconds()
33 | sign = ("-", "+")[offset >= 0]
34 | h, m = divmod(abs(offset // 60), 60)
35 |
36 | rep = {
37 | "YYYY": "%04d" % year,
38 | "YY": "%02d" % (year % 100),
39 | "Q": "%d" % ((month - 1) // 3 + 1),
40 | "MMMM": month_name[month],
41 | "MMM": month_abbr[month],
42 | "MM": "%02d" % month,
43 | "M": "%d" % month,
44 | "DDDD": "%03d" % yearday,
45 | "DDD": "%d" % yearday,
46 | "DD": "%02d" % day,
47 | "D": "%d" % day,
48 | "dddd": day_name[weekday],
49 | "ddd": day_abbr[weekday],
50 | "d": "%d" % weekday,
51 | "E": "%d" % (weekday + 1),
52 | "HH": "%02d" % hour,
53 | "H": "%d" % hour,
54 | "hh": "%02d" % ((hour - 1) % 12 + 1),
55 | "h": "%d" % ((hour - 1) % 12 + 1),
56 | "mm": "%02d" % minute,
57 | "m": "%d" % minute,
58 | "ss": "%02d" % second,
59 | "s": "%d" % second,
60 | "S": "%d" % (microsecond // 100000),
61 | "SS": "%02d" % (microsecond // 10000),
62 | "SSS": "%03d" % (microsecond // 1000),
63 | "SSSS": "%04d" % (microsecond // 100),
64 | "SSSSS": "%05d" % (microsecond // 10),
65 | "SSSSSS": "%06d" % microsecond,
66 | "A": ("AM", "PM")[hour // 12],
67 | "Z": "%s%02d:%02d" % (sign, h, m),
68 | "ZZ": "%s%02d%02d" % (sign, h, m),
69 | "zz": tzinfo.tzname(dt) or "",
70 | "X": "%d" % timestamp,
71 | "x": "%d" % (int(timestamp) * 1000000 + microsecond),
72 | }
73 | def get(m):
74 | try:
75 | return rep[m.group(0)]
76 | except KeyError:
77 | return m.group(0)[1:-1]
78 | return pattern.sub(get, spec)
79 |
80 |
81 | def aware_now():
82 | now = datetime_.now()
83 | timestamp = now.timestamp()
84 | local = localtime(timestamp)
85 |
86 | try:
87 | seconds = local.tm_gmtoff
88 | zone = local.tm_zone
89 | except AttributeError:
90 | offset = datetime_.fromtimestamp(timestamp) - datetime_.utcfromtimestamp(timestamp)
91 | seconds = offset.total_seconds()
92 | zone = strftime("%Z")
93 |
94 | tzinfo = timezone(timedelta(seconds=seconds), zone)
95 |
96 | return datetime.combine(now.date(), now.time().replace(tzinfo=tzinfo))
97 |
--------------------------------------------------------------------------------
/kuai_log/kuai_log_record_logging_namespace.py:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | pass
--------------------------------------------------------------------------------
/kuai_log/logger.py:
--------------------------------------------------------------------------------
1 | import json
2 | import time
3 | import typing
4 | import copy
5 | import datetime
6 | # from tzlocal import get_localzone
7 | import os
8 | import socket
9 | import sys
10 | import threading
11 | import traceback
12 | from enum import Enum
13 | import logging
14 | from functools import partial
15 | from kuai_log._datetime import aware_now
16 | from kuai_log.stream import OsStream
17 | from kuai_log.rotate_file_writter import OsFileWritter
18 |
19 |
20 | class FormatterFieldEnum(Enum):
21 | host = 'host'
22 |
23 | asctime = 'asctime'
24 | name = 'name'
25 | levelname = 'levelname'
26 | message = 'message'
27 |
28 | process = 'process'
29 | thread = 'thread'
30 |
31 | pathname = 'pathname'
32 | filename = 'filename'
33 | lineno = 'lineno'
34 | funcName = 'funcName'
35 | # click_line = 'click_line'
36 |
37 |
38 | logger_name__logger_obj_map = {} # type: typing.Dict[str,KuaiLogger]
39 |
40 |
41 | # noinspection PyPep8
42 | class KuaiLogger:
43 | """
44 | Do not call this class manually; just use the get_logger function.
45 | """
46 |
47 | # get the current timezone
48 | # current_timezone = get_localzone().zone
49 | host = socket.gethostname()
50 |
51 | def __init__(self, name, level=logging.DEBUG,
52 | is_add_stream_handler=True, is_add_file_handler=False, is_add_json_file_handler=False,
53 | log_path=None, json_log_path=None, log_filename=None,
54 | max_bytes=1000 * 1000 * 1000, back_count=10,
55 | formatter_template='{asctime} - {host} - "{pathname}:{lineno}" - {funcName} - {name} - {levelname} - {message}', ):
56 |
57 | self.name = name
58 | self.level = level
59 |
60 | self._is_add_stream_handler = is_add_stream_handler
61 | self._is_add_file_handler = is_add_file_handler
62 | self._is_add_json_file_handler = is_add_json_file_handler
63 |
64 | self._log_path = log_path
65 | self._log_filename = log_filename
66 |
67 | if self._is_add_file_handler:
68 | self._fw = OsFileWritter(log_filename, log_path, max_bytes=max_bytes, back_count=back_count)
69 | if self._is_add_json_file_handler:
70 | self._fw_json = OsFileWritter(log_filename, json_log_path, max_bytes=max_bytes, back_count=back_count)
71 |
72 | self._formatter_template = formatter_template
73 | self._need_fields = self._parse_need_filed()
74 | # print(self._need_fields)
75 |
76 | def setLevel(self, level):
77 | """
78 | Set the specified level on the underlying logger.
79 | """
80 | self.level = level
81 |
82 | def _parse_need_filed(self):
83 | need_fields = set()
84 | for field in FormatterFieldEnum:
85 | if '{' + field.value + '}' in self._formatter_template:
86 | need_fields.add(field.value)
87 | return need_fields
88 |
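
The template scan above checks a fixed enum of fields; the same extraction can be sketched with the standard library's `string.Formatter`, which parses any `{field}` placeholders (the template here is illustrative):

```python
from string import Formatter

template = '{asctime} - {name} - {levelname} - {message}'
# Formatter().parse yields (literal_text, field_name, format_spec, conversion)
need_fields = {field for _, field, _, _ in Formatter().parse(template) if field}
print(sorted(need_fields))  # ['asctime', 'levelname', 'message', 'name']
```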
89 | def _build_format_kwargs(self, level, msg, stacklevel):
90 | format_kwargs = {}
91 | if FormatterFieldEnum.name.value in self._need_fields:
92 | format_kwargs[FormatterFieldEnum.name.value] = self.name
93 | if FormatterFieldEnum.levelname.value in self._need_fields:
94 | format_kwargs[FormatterFieldEnum.levelname.value] = logging._levelToName[level] # noqa
95 | if FormatterFieldEnum.message.value in self._need_fields:
96 | format_kwargs[FormatterFieldEnum.message.value] = msg
97 |
98 | if (
99 | FormatterFieldEnum.pathname.value in self._need_fields or
100 | FormatterFieldEnum.filename.value in self._need_fields or
101 | FormatterFieldEnum.funcName.value in self._need_fields):
102 | fra = sys._getframe(stacklevel) # noqa
103 | lineno = fra.f_lineno
104 | pathname = fra.f_code.co_filename # type :str
105 | filename = pathname.split('/')[-1].split('\\')[-1]
106 | # noinspection PyPep8Naming
107 | funcName = fra.f_code.co_name
108 | if FormatterFieldEnum.pathname.value in self._need_fields:
109 | format_kwargs[FormatterFieldEnum.pathname.value] = pathname
110 | if FormatterFieldEnum.filename.value in self._need_fields:
111 | format_kwargs[FormatterFieldEnum.filename.value] = filename
112 | if FormatterFieldEnum.lineno.value in self._need_fields:
113 | format_kwargs[FormatterFieldEnum.lineno.value] = lineno
114 | if FormatterFieldEnum.funcName.value in self._need_fields:
115 | format_kwargs[FormatterFieldEnum.funcName.value] = funcName
116 |
117 | if f'{FormatterFieldEnum.process.value}' in self._need_fields:
118 | format_kwargs[FormatterFieldEnum.process.value] = os.getpid()  # process id (os.getpid, not os.getgid: that is the group id and is absent on Windows)
119 | if f'{FormatterFieldEnum.thread.value}' in self._need_fields:
120 | format_kwargs[FormatterFieldEnum.thread.value] = threading.get_ident()
121 | if FormatterFieldEnum.asctime.value in self._need_fields:
122 | # format_kwargs[FormatterFieldEnum.asctime.value] = datetime.datetime.now().strftime(
123 | # f"%Y-%m-%d %H:%M:%S.%f {self.current_timezone}")
124 | format_kwargs[FormatterFieldEnum.asctime.value] = aware_now()
125 | if FormatterFieldEnum.host.value in self._need_fields:
126 | format_kwargs[FormatterFieldEnum.host.value] = self.host
127 | return format_kwargs
128 |
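
The caller information above comes from walking the interpreter stack with `sys._getframe(stacklevel)`; a minimal sketch of the technique:

```python
import sys

def where_called():
    frame = sys._getframe(1)  # one frame up: the caller of this function
    return frame.f_code.co_name, frame.f_lineno

def my_func():
    return where_called()

name, lineno = my_func()
print(name)  # my_func
```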
129 | def log(self, level, msg, args=None, exc_info=None, extra=None, stack_info=False, stacklevel=3):
130 | # def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
131 |
132 | if self.level > level:
133 | return
134 | msg = msg % args if args else msg  # only %-format when args are given (msg may be a dict)
135 | format_kwargs = self._build_format_kwargs(level, msg, stacklevel)
136 | # print(self._formatter_template)
137 | # print(format_kwargs)
138 | format_kwargs_json = {}
139 | if self._is_add_json_file_handler:
140 | format_kwargs_json = copy.copy(format_kwargs)
141 | format_kwargs_json['asctime'] = str(format_kwargs_json['asctime'])
142 | format_kwargs_json['msg'] = {}
143 | if isinstance(msg, dict):
144 | format_kwargs_json['msg'].update(msg)
145 | format_kwargs_json['message'] = ''
146 | if extra:
147 | format_kwargs_json['msg'].update(extra)
148 | if exc_info:
149 | format_kwargs_json['msg'].update({'traceback': traceback.format_exc()})
150 | if extra:
151 | format_kwargs.update(extra)
152 | msg_format = self._formatter_template.format(**format_kwargs)
153 | if exc_info:
154 | msg_format += f'\n {traceback.format_exc()}'
155 | msg_color = self._add_color(msg_format, level)
156 | # print(msg_format)
157 | # print(msg)
158 | if self._is_add_stream_handler:
159 | OsStream.stdout(msg_color + '\n')
160 | if self._is_add_file_handler:
161 | self._fw.write_2_file(msg_format +'\n')
162 | if self._is_add_json_file_handler:
163 | self._fw_json.write_2_file(json.dumps(format_kwargs_json, ensure_ascii=False) + '\n')
164 |
165 | @staticmethod
166 | def _add_color(complete_msg, record_level):
167 | if record_level == logging.DEBUG:
168 | # msg_color = ('\033[0;32m%s\033[0m' % msg) # green
169 | # print(msg1)
170 | msg_color = f'\033[0;32m{complete_msg}\033[0m' # green
171 | elif record_level == logging.INFO:
172 | # msg_color = ('\033[%s;%sm%s\033[0m' % (self._display_method, self.bule, msg)) # cyan 36 96
173 | msg_color = f'\033[0;36m{complete_msg}\033[0m'
174 | elif record_level == logging.WARNING:
175 | # msg_color = ('\033[%s;%sm%s\033[0m' % (self._display_method, self.yellow, msg))
176 | msg_color = f'\033[0;33m{complete_msg}\033[0m'
177 | elif record_level == logging.ERROR:
178 | # msg_color = ('\033[%s;35m%s\033[0m' % (self._display_method, msg)) # magenta
179 | msg_color = f'\033[0;35m{complete_msg}\033[0m'
180 | elif record_level == logging.CRITICAL:
181 | # msg_color = ('\033[%s;31m%s\033[0m' % (self._display_method, msg)) # red
182 | msg_color = f'\033[0;31m{complete_msg}\033[0m'
183 | else:
184 | msg_color = f'{complete_msg}'
185 | return msg_color
186 |
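
The ANSI sequences used above all follow the `\033[0;{code}m ... \033[0m` pattern; the level-to-color mapping can be sketched as a table:

```python
import logging

LEVEL_COLOR_CODE = {
    logging.DEBUG: 32,     # green
    logging.INFO: 36,      # cyan
    logging.WARNING: 33,   # yellow
    logging.ERROR: 35,     # magenta
    logging.CRITICAL: 31,  # red
}

def colorize(msg, level):
    # wrap msg in an ANSI color escape; unknown levels stay uncolored
    code = LEVEL_COLOR_CODE.get(level)
    return f'\033[0;{code}m{msg}\033[0m' if code else msg

print(colorize('hello', logging.WARNING))
```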
187 | def debug(self, msg, *args, **kwargs):
188 | self.log(logging.DEBUG, msg, *args, **kwargs)
189 |
190 | def info(self, msg, *args, **kwargs):
191 | self.log(logging.INFO, msg, *args, **kwargs)
192 |
193 | def warning(self, msg, *args, **kwargs):
194 | self.log(logging.WARNING, msg, *args, **kwargs)
195 |
196 | def error(self, msg, *args, **kwargs):
197 | self.log(logging.ERROR, msg, *args, **kwargs)
198 |
199 | def exception(self, msg, *args, **kwargs):
200 | self.log(logging.ERROR, msg, *args, **kwargs, exc_info=True)
201 |
202 | def critical(self, msg, *args, **kwargs):
203 | self.log(logging.CRITICAL, msg, *args, **kwargs)
204 |
205 |
206 | # noinspection PyPep8
207 | def get_logger(name, level=logging.DEBUG,
208 | is_add_stream_handler=True, is_add_file_handler=False, is_add_json_file_handler=False,
209 | log_path=None, json_log_path=None, log_filename=None,
210 | max_bytes=1000 * 1000 * 1000, back_count=10,
211 | formatter_template='{asctime} - {host} - "{pathname}:{lineno}" - {funcName} - {name} - {levelname} - {message}'):
212 | local_params = copy.copy(locals())
213 | if name in logger_name__logger_obj_map:
214 | print(f'a logger for the namespace {name} already exists')
215 | logger = logger_name__logger_obj_map[name]
216 | logger.level = level
217 | else:
218 | logger = KuaiLogger(**local_params)
219 | logger_name__logger_obj_map[name] = logger
220 | raw_logger = logging.getLogger(name)
221 | raw_logger.setLevel(level)
222 | raw_logger.handlers = []
223 | raw_logger._log = logger.log  # redirect records from the logging package under this name to kuai_log.
224 | raw_logger.log = partial(logger.log, stacklevel=2)
225 | return logger
226 |
227 |
228 | if __name__ == '__main__':
229 | pass
230 |
--------------------------------------------------------------------------------
/kuai_log/rotate_file_writter.py:
--------------------------------------------------------------------------------
1 | import atexit
2 | import multiprocessing
3 | import queue
4 | import threading
5 | import typing
6 | from pathlib import Path
7 | import time
8 | import os
9 |
10 |
11 | # from nb_log.simple_print import sprint as print  # cannot use print in this module: print writes to the file, and writing to the file prints again -- an infinite loop.
12 |
13 |
14 | def build_current_date_str():
15 | return time.strftime('%Y-%m-%d')
16 |
17 |
18 | class FileWritter:
19 | _lock = threading.RLock()
20 |
21 | def __init__(self, file_name: str, log_path='/pythonlogs', max_bytes=1000 * 1000 * 1000, back_count=10):
22 | self._max_bytes = max_bytes
23 | self._back_count = back_count
24 | self.need_write_2_file = True if file_name else False
25 | if self.need_write_2_file:
26 | self._file_name = file_name
27 | self.log_path = log_path
28 | if not Path(self.log_path).exists():
29 | print(f'automatically creating log directory {log_path}')
30 | Path(self.log_path).mkdir(parents=True, exist_ok=True)
31 | self._open_file()
32 | self._last_write_ts = 0
33 | self._last_del_old_files_ts = 0
34 |
35 | @property
36 | def file_path(self):
37 | f_list = []
38 | for f in Path(self.log_path).glob(f'????-??-??.????.{self._file_name}'):
39 | f_list.append(f)
40 | sn_list = []
41 | for f in f_list:
42 | if f'{build_current_date_str()}.' in f.name:
43 | sn = f.name.split('.')[1]
44 | sn_list.append(sn)
45 | if not sn_list:
46 | return Path(self.log_path) / Path(f'{build_current_date_str()}.0001.{self._file_name}')
47 | else:
48 | sn_max = max(sn_list)
49 | if (Path(self.log_path) / Path(f'{build_current_date_str()}.{sn_max}.{self._file_name}')).stat().st_size > self._max_bytes:
50 | new_sn_int = int(sn_max) + 1
51 | new_sn_str = str(new_sn_int).zfill(4)
52 | return Path(self.log_path) / Path(f'{build_current_date_str()}.{new_sn_str}.{self._file_name}')
53 | else:
54 | return Path(self.log_path) / Path(f'{build_current_date_str()}.{sn_max}.{self._file_name}')
55 |
56 | def _open_file(self):
57 | self._f = open(self.file_path, encoding='utf8', mode='a')
58 |
59 | def _close_file(self):
60 | self._f.close()
61 |
62 | def write_2_file(self, msg):
63 | if self.need_write_2_file:
64 | with self._lock:
65 | now_ts = time.time()
66 | if now_ts - self._last_write_ts > 10:
67 | self._last_write_ts = time.time()
68 | self._close_file()
69 | self._open_file()
70 | self._f.write(msg)
71 | self._f.flush()
72 | if now_ts - self._last_del_old_files_ts > 30:
73 | self._last_del_old_files_ts = time.time()
74 | self._delete_old_files()
75 |
76 | def _delete_old_files(self):
77 | f_list = []
78 | for f in Path(self.log_path).glob(f'????-??-??.????.{self._file_name}'):
79 | f_list.append(f)
80 | # f_list.sort(key=lambda f:f.stat().st_mtime,reverse=True)
81 | f_list.sort(key=lambda f: f.name, reverse=True)
82 | for f in f_list[self._back_count:]:
83 | try:
84 | # print(f'deleting {f}')  # cannot print here: stdout may be redirected into the file, causing an infinite loop
85 | f.unlink()
86 | except (FileNotFoundError, PermissionError):
87 | pass
88 |
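
The file_path property above implements a `date.serial.filename` naming scheme (e.g. `2023-07-15.0001.app.log`): today's highest serial is reused until that file exceeds max_bytes, then the serial is bumped. A standalone sketch of that decision logic (function and argument names are illustrative; no real files are touched):

```python
def next_log_name(existing, today, file_name, sizes, max_bytes):
    """Pick today's log file name under the date.serial.name scheme.

    existing: list of log file names already on disk
    sizes:    mapping of file name -> size in bytes
    """
    sns = [f.split('.')[1] for f in existing if f.startswith(today + '.')]
    if not sns:
        return f'{today}.0001.{file_name}'          # first file of the day
    sn_max = max(sns)
    current = f'{today}.{sn_max}.{file_name}'
    if sizes.get(current, 0) > max_bytes:           # rotate: bump the serial
        return f'{today}.{str(int(sn_max) + 1).zfill(4)}.{file_name}'
    return current                                  # keep appending to the current file

print(next_log_name([], '2023-07-15', 'app.log', {}, 100))
# 2023-07-15.0001.app.log
```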
89 |
90 | class BulkFileWritter:
91 | _lock = threading.Lock()
92 |
93 | filename__queue_map = {}
94 | filename__options_map = {}
95 | filename__file_writter_map = {}
96 |
97 | _get_queue_lock = threading.Lock()
98 |
99 | _has_start_bulk_write = False
100 |
101 | @classmethod
102 | def _get_queue(cls, file_name):
103 | if file_name not in cls.filename__queue_map:
104 | cls.filename__queue_map[file_name] = queue.SimpleQueue()
105 | return cls.filename__queue_map[file_name]
106 |
107 | @classmethod
108 | def _get_file_writter(cls, file_name):
109 | if file_name not in cls.filename__file_writter_map:
110 | fw = FileWritter(**cls.filename__options_map[file_name])
111 | cls.filename__file_writter_map[file_name] = fw
112 | return cls.filename__file_writter_map[file_name]
113 |
114 | def __init__(self, file_name: typing.Optional[str], log_path='/pythonlogs', max_bytes=1000 * 1000 * 1000, back_count=10):
115 | self.need_write_2_file = True if file_name else False
116 | self._file_name_key = (log_path,file_name)
117 | if file_name:
118 | self.__class__.filename__options_map[self._file_name_key] = {
119 | 'file_name': file_name,
120 | 'log_path': log_path,
121 | 'max_bytes': max_bytes,
122 | 'back_count': back_count,
123 | }
124 | self.start_bulk_write()
125 |
126 | def write_2_file(self, msg):
127 | if self.need_write_2_file:
128 | with self._lock:
129 | self._get_queue(self._file_name_key).put(msg)
130 |
131 | @classmethod
132 | def _bulk_real_write(cls):
133 | with cls._lock:
134 | for _file_name, msg_queue in cls.filename__queue_map.items():  # msg_queue, to avoid shadowing the queue module
135 | msg_str_all = ''
136 | while not msg_queue.empty():
137 | msg_str_all += str(msg_queue.get())
138 | if msg_str_all:
139 | cls._get_file_writter(_file_name).write_2_file(msg_str_all)
140 |
141 | @classmethod
142 | def _when_exit(cls):
143 | # print('exiting')
144 | return cls._bulk_real_write()
145 |
146 | @classmethod
147 | def start_bulk_write(cls):
148 | def _bulk_write():
149 | while 1:
150 | cls._bulk_real_write()
151 | time.sleep(0.1)
152 |
153 | if not cls._has_start_bulk_write:
154 | cls._has_start_bulk_write = True
155 | threading.Thread(target=_bulk_write, daemon=True).start()
156 |
157 |
158 | atexit.register(BulkFileWritter._when_exit)
159 |
160 | OsFileWritter = FileWritter if os.name == 'posix' else BulkFileWritter
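
BulkFileWritter above batches writes: producers enqueue messages, and a daemon thread periodically drains the queue and performs one combined write. The core drain step can be sketched with the standard library (an io.StringIO stands in for the log file):

```python
import io
import queue

out = io.StringIO()        # stands in for the log file
q = queue.SimpleQueue()

def drain():
    # collect everything queued so far and write it in one call
    chunk = ''
    while not q.empty():
        chunk += q.get()
    if chunk:
        out.write(chunk)

for i in range(3):
    q.put(f'line {i}\n')

drain()  # in the real class a daemon thread calls this every 0.1 seconds
print(out.getvalue())
```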
161 |
162 |
163 | def tt():
164 | fw = OsFileWritter('test_file6.log', '/test_dir2', max_bytes=1000 * 100)
165 | t1 = time.time()
166 | for i in range(10000):
167 | # time.sleep(0.001)
168 | msg = f'yyy{str(i).zfill(5)}' * 4
169 | print(msg)
170 | fw.write_2_file(msg + '\n')
171 | print(time.time() - t1)
172 |
173 |
174 | if __name__ == '__main__':
175 | multiprocessing.Process(target=tt).start()
176 | multiprocessing.Process(target=tt).start()
177 | # tt()
178 |
--------------------------------------------------------------------------------
/kuai_log/stream.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import os
3 | import queue
4 | import sys
5 | import threading
6 | import time
7 | import atexit
8 |
9 | from kuai_log._datetime import aware_now
10 |
11 |
12 | class Stream:
13 | @classmethod
14 | def stdout(cls, msg):
15 | sys.stdout.write(msg)  # msg already ends with '\n'; print() would add a second newline
16 |
17 | @classmethod
18 | def print(cls, *args, sep=' ', end='\n', file=None, flush=True, sys_getframe_n=2):
19 | args = (str(arg) for arg in args)  # REMIND: cast to str so non-string args (e.g. numbers) can be joined
20 | msg = sep.join(args) + end
21 | sys.stdout.write(msg)  # msg already carries `end`; calling print() here would append another newline (and recurse if print is patched)
22 |
23 | class BulkStream:
24 | q = queue.SimpleQueue()
25 | _lock = threading.Lock()
26 | _has_start_bulk_stdout = False
27 |
28 | @classmethod
29 | def _bulk_real_stdout(cls):
30 | with cls._lock:
31 | msg_str_all = ''
32 | while not cls.q.empty():
33 | msg_str_all += str(cls.q.get())
34 | if msg_str_all:
35 | sys.stdout.write(msg_str_all)
36 |
37 | @classmethod
38 | def stdout(cls, msg):
39 | with cls._lock:
40 | cls.q.put(msg)
41 |
42 | @classmethod
43 | def print(cls, *args, sep=' ', end='\n', file=None, flush=True, sys_getframe_n=2):
44 | args = (str(arg) for arg in args)  # REMIND: cast to str so non-string args (e.g. numbers) can be joined
45 | msg = sep.join(args) + end
46 | fra = sys._getframe(1) # noqa
47 | line = fra.f_lineno
48 | full_path = fra.f_code.co_filename # type :str
49 | # file_name = full_path.split('/')[-1].split('\\')[-1]
50 | full_path_and_line = f'"{full_path}:{line}"'
51 | fun = fra.f_code.co_name
52 | full_msg = f'{aware_now()} - {full_path_and_line} - {fun} - [print] - {msg}'
53 | color_msg = f'\033[0;34m{full_msg}\033[0m'
54 | with cls._lock:
55 | cls.q.put(color_msg)
56 |
57 | @classmethod
58 | def _when_exit(cls):
59 | # stdout_raw('结束 stdout_raw')
60 | return cls._bulk_real_stdout()
61 |
62 | @classmethod
63 | def start_bulk_stdout(cls):
64 | def _bulk_stdout():
65 | while 1:
66 | cls._bulk_real_stdout()
67 | time.sleep(0.1)
68 |
69 | if not cls._has_start_bulk_stdout:
70 | cls._has_start_bulk_stdout = True  # set the flag that is actually checked above, so the thread starts only once
71 | threading.Thread(target=_bulk_stdout, daemon=True).start()
72 |
73 | @classmethod
74 | def patch_print(cls):
75 | """
76 | Python has several namespaces:
77 |
78 | locals
79 |
80 | globals
81 |
82 | builtin
83 |
84 | Variables declared inside a function belong to locals, while functions defined at module level belong to globals.
85 |
86 |
87 | https://codeday.me/bug/20180929/266673.html python - why __builtins__ is both a module and a dict
88 |
89 | :return:
90 | """
91 | try:
92 | __builtins__.print = cls.print
93 | except AttributeError:
94 | """
95 |
96 | 'dict' object has no attribute 'print'
97 | """
98 | # noinspection PyUnresolvedReferences
99 | __builtins__['print'] = cls.print
100 | # traceback.print_exception = print_exception  # its file arg can be an <_io.StringIO object>, which needs separate handling; already solved, do not add this.
101 |
102 | OsStream = Stream
103 |
104 |
105 | if os.name == 'nt':  # Windows console IO is slow
106 | OsStream = BulkStream
107 | BulkStream.start_bulk_stdout()
108 | BulkStream.patch_print()
109 | atexit.register(BulkStream._when_exit)
110 |
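
The patch_print technique above swaps the builtin print for a custom function. A self-contained sketch of the same idea (the tag and capture buffer are illustrative); using the `builtins` module avoids the module-vs-dict ambiguity of `__builtins__` that the docstring mentions, and restoring the original afterwards is the safe pattern:

```python
import builtins
import io

captured = io.StringIO()
original_print = builtins.print

def tagged_print(*args, **kwargs):
    # prepend a tag, then delegate to the real print
    original_print('[print]', *args, file=captured, **kwargs)

builtins.print = tagged_print
print('hello')                    # resolves to tagged_print at call time
builtins.print = original_print   # always restore the original

print(captured.getvalue())
```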
--------------------------------------------------------------------------------
/nb_log_config.py:
--------------------------------------------------------------------------------
1 | # coding=utf8
2 | """
3 | This nb_log_config.py file is auto-generated into the root of the Python project, because it is generated into sys.path[1].
4 | Variables written here override the values in nb_log_config_default and configure the nb_log package's defaults. Users never need to modify the config file inside the installed nb_log package itself.
5 |
6 | The final configuration is still decided by the arguments passed to the get_logger_and_add_handlers method; when an argument is None, the value configured here is used.
7 | """
8 |
9 | """
10 | If you dislike colorized logs, set DEFAULUT_USE_COLOR_HANDLER = False
11 | If you dislike block-background colors in logs, set DISPLAY_BACKGROUD_COLOR_IN_CONSOLE = False
12 | To suppress nb_log's hint about how to configure PyCharm colors, set WARNING_PYCHARM_COLOR_SETINGS = False
13 | To change the log template, set the FORMATTER_KIND parameter; 7 templates are built in, and you can add your own favorites
14 | LOG_PATH configures the directory where log files are saved.
15 | """
16 | import sys
17 | # noinspection PyUnresolvedReferences
18 | import logging
19 | import os
20 | # noinspection PyUnresolvedReferences
21 | from pathlib import Path # noqa
22 | import socket
23 | from pythonjsonlogger.jsonlogger import JsonFormatter
24 |
25 |
26 | def get_host_ip():
27 | ip = ''
28 | host_name = ''
29 | # noinspection PyBroadException
30 | try:
31 | sc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
32 | sc.connect(('8.8.8.8', 80))
33 | ip = sc.getsockname()[0]
34 | host_name = socket.gethostname()
35 | sc.close()
36 | except Exception:
37 | pass
38 | return ip, host_name
39 |
40 |
41 | computer_ip, computer_name = get_host_ip()
42 |
43 |
44 | class JsonFormatterJumpAble(JsonFormatter):
45 | def add_fields(self, log_record, record, message_dict):
46 | # log_record['jump_click'] = f"""File '{record.__dict__.get('pathname')}', line {record.__dict__.get('lineno')}"""
47 | log_record[f"{record.__dict__.get('pathname')}:{record.__dict__.get('lineno')}"] = ''  # add a field that can be clicked to jump to the source.
48 | log_record['ip'] = computer_ip
49 | log_record['host_name'] = computer_name
50 | super().add_fields(log_record, record, message_dict)
51 | if 'for_segmentation_color' in log_record:
52 | del log_record['for_segmentation_color']
53 |
54 |
55 | DING_TALK_TOKEN = '3dd0eexxxxxadab014bd604XXXXXXXXXXXX'  # DingTalk alert bot
56 |
57 | EMAIL_HOST = ('smtp.sohu.com', 465)
58 | EMAIL_FROMADDR = 'aaa0509@sohu.com' # 'matafyhotel-techl@matafy.com',
59 | EMAIL_TOADDRS = ('cccc.cheng@silknets.com', 'yan@dingtalk.com',)
60 | EMAIL_CREDENTIALS = ('aaa0509@sohu.com', 'abcdefg')
61 |
62 | ELASTIC_HOST = '127.0.0.1'
63 | ELASTIC_PORT = 9200
64 |
65 | KAFKA_BOOTSTRAP_SERVERS = ['192.168.199.202:9092']
66 | ALWAYS_ADD_KAFKA_HANDLER_IN_TEST_ENVIRONENT = False
67 |
68 | MONGO_URL = 'mongodb://myUserAdmin:mimamiama@127.0.0.1:27016/admin'
69 |
70 | # Whether print calls in the project are automatically written to a file. None disables redirecting print to a file. One file per day, e.g. 2023-06-30.my_proj.print, created under the configured LOG_PATH
71 | # If you set the environment variable, export PRINT_WRTIE_FILE_NAME="my_proj.print" (Linux temporary env-var syntax; look up the Windows syntax yourself), the file name from the environment variable takes priority over the name set in nb_log_config.py
72 | PRINT_WRTIE_FILE_NAME = Path(sys.path[1]).name + '.print'
73 |
74 | # All standard output in the project (not only print, but also streamHandler logs) is written to this file; None disables redirecting stdout to a file. One file per day, e.g. 2023-06-30.my_proj.std, created under the configured LOG_PATH
75 | # If you set the environment variable, export SYS_STD_FILE_NAME="my_proj.std" (Linux temporary env-var syntax; look up the Windows syntax yourself), the file name from the environment variable takes priority over the name set in nb_log_config.py
76 | SYS_STD_FILE_NAME = Path(sys.path[1]).name + '.std'
77 |
78 | USE_BULK_STDOUT_ON_WINDOWS = False # 在win上是否每隔0.1秒批量stdout,win的io太差了
79 |
80 | DEFAULUT_USE_COLOR_HANDLER = True  # whether to use colored logs by default.
81 | DISPLAY_BACKGROUD_COLOR_IN_CONSOLE = True  # whether to show block background colors for logs in the console. False disables the large background color blocks.
82 | AUTO_PATCH_PRINT = True  # whether to monkey-patch print automatically; with the patch, print output is colored and click-to-jump.
83 | SHOW_PYCHARM_COLOR_SETINGS = True  # some people dislike the startup hint about tuning PyCharm console colors; set this to False to suppress it.
84 |
85 | DEFAULT_ADD_MULTIPROCESSING_SAFE_ROATING_FILE_HANDLER = False  # whether to also write logs to a .log file by default; when the user does not pass log_filename, logs are written to <logger namespace>.log automatically.
86 | LOG_FILE_SIZE = 100  # rotation size per file in MB; a file is rotated once it exceeds this size.
87 | LOG_FILE_BACKUP_COUNT = 10  # maximum number of backups kept per log file; older ones are deleted.
88 |
89 | LOG_PATH = '/pythonlogs'  # default log directory; without a drive letter this is /pythonlogs at the root of the drive where the project code lives.
90 | # LOG_PATH = Path(__file__).absolute().parent / Path("pythonlogs")  # with this setting, a pythonlogs folder is created under your project root and logs are written there.
91 | if os.name == 'posix':  # non-root Linux users and mac users have no permission to write to /pythonlogs, so the default is changed to home/[username]. E.g. for Linux user xiaomin, logs are written to /home/xiaomin/pythonlogs.
92 | home_path = os.environ.get("HOME", '/')  # home directory of the current Linux user; no need to set this yourself.
93 | LOG_PATH = Path(home_path) / Path('pythonlogs')  # Linux/mac permissions are strict; non-root users cannot write to /pythonlogs, so adjust the default.
94 | # print('LOG_PATH:',LOG_PATH)
95 |
96 | LOG_FILE_HANDLER_TYPE = 6 # 1 2 3 4 5 6
97 | """
98 | LOG_FILE_HANDLER_TYPE can be set to one of the six values 1 2 3 4 5 6:
99 | 1: multiprocess-safe file log rotated by file size. This is the author's batched-write implementation, which reduces file-lock operations; in a 10-process fast-write test it is about 100x faster than type 5 on Windows and 5x faster on Linux.
100 | 2: multiprocess-safe file log rotated daily; for the same file, a new log file is generated each day with the date appended to the file name.
101 | 3: a single log file with no rotation (without rotation there is no so-called process-safety problem).
102 | 4: WatchedFileHandler, usable only on Linux; rotation must be done externally with logrotate; multiprocess-safe.
103 | 5: the third-party concurrent_log_handler.ConcurrentRotatingFileHandler, rotated by file size.
104 | It uses file locks for multiprocess-safe rotation; the fcntl lock performs acceptably on Linux, but win32con on Windows performs terribly. For size-based rotation, prefer type 1 over type 5.
105 | 6: BothDayAndSizeRotatingFileHandler, developed entirely by the author; rotates by both time and size, splitting whenever either the size or the time condition is reached.
106 | """
107 |
108 | LOG_LEVEL_FILTER = logging.DEBUG  # default log level; logs below this level are not recorded. E.g. with INFO, logger.debug calls are dropped and only logger.info and above are recorded.
109 | # Raising this global level to INFO is strongly discouraged; logs have namespaces, so raise the level only for the chatty logger namespaces instead of globally.
110 | # https://nb-log-doc.readthedocs.io/zh_CN/latest/articles/c9.html#id2  Section 9.5 of the docs explains the role of python logging namespaces at length; some people still do not know what the logger name is for.
111 |
112 | # Suppressed substrings, implemented with an `if in {message}` check: if the printed message contains any string in the FILTER_WORDS_PRINT list, the message is not printed.
113 | # This applies to both print and logger console output. It can filter chatty print messages, annoying logs of the same level, or console output from third-party packages. Do not configure too many entries, or performance suffers slightly.
114 | FILTER_WORDS_PRINT = []  # e.g. to suppress messages containing 阿弥陀佛 or 善哉善哉, set FILTER_WORDS_PRINT = ['阿弥陀佛','善哉善哉']
115 |
116 | RUN_ENV = 'test'
117 |
118 | FORMATTER_DICT = {
119 | 1: logging.Formatter(
120 | '日志时间【%(asctime)s】 - 日志名称【%(name)s】 - 文件【%(filename)s】 - 第【%(lineno)d】行 - 日志等级【%(levelname)s】 - 日志信息【%(message)s】',
121 | "%Y-%m-%d %H:%M:%S"),
122 | 2: logging.Formatter(
123 | '%(asctime)s - %(name)s - %(filename)s - %(funcName)s - %(lineno)d - %(levelname)s - %(message)s',
124 | "%Y-%m-%d %H:%M:%S"),
125 | 3: logging.Formatter(
126 | '%(asctime)s - %(name)s - 【 File "%(pathname)s", line %(lineno)d, in %(funcName)s 】 - %(levelname)s - %(message)s',
127 | "%Y-%m-%d %H:%M:%S"),  # a template mimicking a traceback, click-jumpable to the place where the log was emitted
128 | 4: logging.Formatter(
129 | '%(asctime)s - %(name)s - "%(filename)s" - %(funcName)s - %(lineno)d - %(levelname)s - %(message)s - File "%(pathname)s", line %(lineno)d ',
130 | "%Y-%m-%d %H:%M:%S"),  # this one also supports click-jump
131 | 5: logging.Formatter(
132 | '%(asctime)s - %(name)s - "%(pathname)s:%(lineno)d" - %(funcName)s - %(levelname)s - %(message)s',
133 | "%Y-%m-%d %H:%M:%S"),  # in my opinion the best template; recommended
134 | 6: logging.Formatter('%(name)s - %(asctime)-15s - %(filename)s - %(lineno)d - %(levelname)s: %(message)s',
135 | "%Y-%m-%d %H:%M:%S"),
136 | 7: logging.Formatter('%(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S"),  # a template showing only the short file name and the line number
137 |
138 | 8: JsonFormatterJumpAble('%(asctime)s - %(name)s - %(levelname)s - %(message)s - %(filename)s %(lineno)d %(process)d %(thread)d', "%Y-%m-%d %H:%M:%S.%f",
139 | json_ensure_ascii=False),  # JSON logs, convenient for ELK collection and analysis.
140 |
141 | 9: logging.Formatter(
142 | '[p%(process)d_t%(thread)d] %(asctime)s - %(name)s - "%(pathname)s:%(lineno)d" - %(funcName)s - %(levelname)s - %(message)s',
143 | "%Y-%m-%d %H:%M:%S"),  # template 5 improved with process and thread ids.
144 | 10: logging.Formatter(
145 | '[p%(process)d_t%(thread)d] %(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S"),  # template 7 improved with process and thread ids.
146 | 11: logging.Formatter(
147 | f'({computer_ip},{computer_name})-[p%(process)d_t%(thread)d] %(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(funcName)s - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S"),  # template 7 improved with process and thread ids plus ip and host name.
148 | }
149 |
150 | FORMATTER_KIND = 5  # which template is used by default when get_logger does not specify one
151 |
--------------------------------------------------------------------------------
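The FILTER_WORDS_PRINT mechanism described in nb_log_config.py above is a plain substring check. A minimal stand-alone sketch of that idea (the `should_print` helper is a hypothetical name, not part of nb_log's actual code):

```python
# Sketch of the FILTER_WORDS_PRINT idea: a console message is suppressed
# when it contains any of the configured substrings.
# `should_print` is a hypothetical helper, not nb_log's real implementation.
FILTER_WORDS_PRINT = ['阿弥陀佛', '善哉善哉']

def should_print(message: str) -> bool:
    """Return False when the message contains any filtered substring."""
    return not any(word in message for word in FILTER_WORDS_PRINT)

print(should_print('ordinary log line'))  # True, message is printed
print(should_print('xx 阿弥陀佛 yy'))      # False, message is suppressed
```

As the config comment warns, keeping the list short keeps this per-message scan negligible.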
/pycharm.env:
--------------------------------------------------------------------------------
1 |
2 |
3 | PYTEST_ADDOPTS=--ignore=test*.py
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | from setuptools import setup, find_packages
3 |
4 | filepath = 'README.md'
5 | print(filepath)
6 |
7 | setup(
8 | name='kuai_log', #
9 | version="0.8",
10 | description=(
"kuai_log is the fastest python logger"
12 | ),
13 | keywords=['logging', 'log', 'logger', 'loguru', 'nb_log', 'rotate file'],
14 | # long_description=open('README.md', 'r',encoding='utf8').read(),
15 | long_description_content_type="text/markdown",
16 | long_description=open(filepath, 'r', encoding='utf8').read(),
17 | # data_files=[filepath],
18 | author='bfzs',
19 | author_email='ydf0509@sohu.com',
20 | maintainer='ydf',
21 | maintainer_email='ydf0509@sohu.com',
22 | license='BSD License',
23 | packages=find_packages(),
24 | include_package_data=True,
25 | platforms=["all"],
26 | url='https://github.com/ydf0509/kuai_log',
27 | classifiers=[
28 | 'Development Status :: 4 - Beta',
29 | 'Operating System :: OS Independent',
30 | 'Intended Audience :: Developers',
31 | 'License :: OSI Approved :: BSD License',
32 | 'Programming Language :: Python',
33 | 'Programming Language :: Python :: Implementation',
34 | 'Programming Language :: Python :: 3.6',
35 | 'Programming Language :: Python :: 3.7',
36 | 'Programming Language :: Python :: 3.8',
37 | 'Programming Language :: Python :: 3.9',
38 | 'Programming Language :: Python :: 3.10',
39 | 'Programming Language :: Python :: 3.11',
40 | 'Programming Language :: Python :: 3.12',
41 | 'Topic :: Software Development :: Libraries'
42 | ],
43 | install_requires=['tzlocal', ]
44 | )
45 | """
46 | Package and upload:
47 | python setup.py sdist upload -r pypi
48 |
49 |
50 | python setup.py sdist & python -m twine upload dist/kuai_log-0.8.tar.gz
51 |
52 |
53 | python -m pip install kuai_log --upgrade -i https://pypi.org/simple
54 | """
55 |
--------------------------------------------------------------------------------
/tests/compare_spped/nb_log_config.py:
--------------------------------------------------------------------------------
1 | # coding=utf8
2 | """
3 | This nb_log_config.py file is generated automatically into the root of the python project, because it is written to sys.path[1].
4 | Variables written here override the values in nb_log_config_default and provide the default configuration for the nb_log package. Users never need to modify the config file inside the installed nb_log package itself.
5 |
6 | The final configuration is still decided by the arguments passed to get_logger_and_add_handlers; when a given argument is None, the value from this file is used.
7 | """
8 |
9 | """
10 | If you object to colored logs, set DEFAULUT_USE_COLOR_HANDLER = False
11 | If you object to block background colors in logs, set DISPLAY_BACKGROUD_COLOR_IN_CONSOLE = False
12 | To suppress nb_log's hint about how to configure PyCharm console colors, set WARNING_PYCHARM_COLOR_SETINGS = False
13 | To change the log template, set the FORMATTER_KIND parameter; 7 templates are built in, and you can add your own favorites
14 | LOG_PATH configures the folder where log files are saved.
15 | """
16 | import sys
17 | # noinspection PyUnresolvedReferences
18 | import logging
19 | import os
20 | # noinspection PyUnresolvedReferences
21 | from pathlib import Path # noqa
22 | import socket
23 | from pythonjsonlogger.jsonlogger import JsonFormatter
24 |
25 |
26 | def get_host_ip():
27 | ip = ''
28 | host_name = ''
29 | # noinspection PyBroadException
30 | try:
31 | sc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
32 | sc.connect(('8.8.8.8', 80))
33 | ip = sc.getsockname()[0]
34 | host_name = socket.gethostname()
35 | sc.close()
36 | except Exception:
37 | pass
38 | return ip, host_name
39 |
40 |
41 | computer_ip, computer_name = get_host_ip()
42 |
43 |
44 | class JsonFormatterJumpAble(JsonFormatter):
45 | def add_fields(self, log_record, record, message_dict):
46 | # log_record['jump_click'] = f"""File '{record.__dict__.get('pathname')}', line {record.__dict__.get('lineno')}"""
47 | log_record[f"{record.__dict__.get('pathname')}:{record.__dict__.get('lineno')}"] = ''  # add a clickable jump-to-source field.
48 | log_record['ip'] = computer_ip
49 | log_record['host_name'] = computer_name
50 | super().add_fields(log_record, record, message_dict)
51 | if 'for_segmentation_color' in log_record:
52 | del log_record['for_segmentation_color']
53 |
54 |
55 | DING_TALK_TOKEN = '3dd0eexxxxxadab014bd604XXXXXXXXXXXX'  # DingTalk alert bot token
56 |
57 | EMAIL_HOST = ('smtp.sohu.com', 465)
58 | EMAIL_FROMADDR = 'aaa0509@sohu.com' # 'matafyhotel-techl@matafy.com',
59 | EMAIL_TOADDRS = ('cccc.cheng@silknets.com', 'yan@dingtalk.com',)
60 | EMAIL_CREDENTIALS = ('aaa0509@sohu.com', 'abcdefg')
61 |
62 | ELASTIC_HOST = '127.0.0.1'
63 | ELASTIC_PORT = 9200
64 |
65 | KAFKA_BOOTSTRAP_SERVERS = ['192.168.199.202:9092']
66 | ALWAYS_ADD_KAFKA_HANDLER_IN_TEST_ENVIRONENT = False
67 |
68 | MONGO_URL = 'mongodb://myUserAdmin:mimamiama@127.0.0.1:27016/admin'
69 |
70 | # Whether print calls in the project are automatically written to a file. None disables redirecting print to a file. One file per day, e.g. 2023-06-30.my_proj.print, created under the configured LOG_PATH.
71 | # If you set the environment variable, e.g. export PRINT_WRTIE_FILE_NAME="my_proj.print" (Linux syntax for a temporary env var; look up the Windows equivalent yourself), the file name from the environment variable takes precedence over the one set in nb_log_config.py.
72 | PRINT_WRTIE_FILE_NAME = Path(sys.path[1]).name + '.print'
73 |
74 | # All standard output in the project (not only print, but also StreamHandler logs) is written to this file; None disables redirecting stdout to a file. One file per day, e.g. 2023-06-30.my_proj.std, created under the configured LOG_PATH.
75 | # If you set the environment variable, e.g. export SYS_STD_FILE_NAME="my_proj.std" (Linux syntax for a temporary env var; look up the Windows equivalent yourself), the file name from the environment variable takes precedence over the one set in nb_log_config.py.
76 | SYS_STD_FILE_NAME = Path(sys.path[1]).name + '.std'
77 |
78 | USE_BULK_STDOUT_ON_WINDOWS = True  # whether to flush stdout in 0.1 s batches on Windows, whose console io is very slow
79 |
80 | DEFAULUT_USE_COLOR_HANDLER = True  # whether to use colored logs by default.
81 | DISPLAY_BACKGROUD_COLOR_IN_CONSOLE = True  # whether to show block background colors for logs in the console. False disables the large background color blocks.
82 | AUTO_PATCH_PRINT = True  # whether to monkey-patch print automatically; with the patch, print output is colored and click-to-jump.
83 | SHOW_PYCHARM_COLOR_SETINGS = True  # some people dislike the startup hint about tuning PyCharm console colors; set this to False to suppress it.
84 |
85 | DEFAULT_ADD_MULTIPROCESSING_SAFE_ROATING_FILE_HANDLER = False  # whether to also write logs to a .log file by default; when the user does not pass log_filename, logs are written to <logger namespace>.log automatically.
86 | LOG_FILE_SIZE = 100  # rotation size per file in MB; a file is rotated once it exceeds this size.
87 | LOG_FILE_BACKUP_COUNT = 10  # maximum number of backups kept per log file; older ones are deleted.
88 |
89 | LOG_PATH = '/pythonlogs'  # default log directory; without a drive letter this is /pythonlogs at the root of the drive where the project code lives.
90 | # LOG_PATH = Path(__file__).absolute().parent / Path("pythonlogs")  # with this setting, a pythonlogs folder is created under your project root and logs are written there.
91 | if os.name == 'posix':  # non-root Linux users and mac users have no permission to write to /pythonlogs, so the default is changed to home/[username]. E.g. for Linux user xiaomin, logs are written to /home/xiaomin/pythonlogs.
92 | home_path = os.environ.get("HOME", '/')  # home directory of the current Linux user; no need to set this yourself.
93 | LOG_PATH = Path(home_path) / Path('pythonlogs')  # Linux/mac permissions are strict; non-root users cannot write to /pythonlogs, so adjust the default.
94 | # print('LOG_PATH:',LOG_PATH)
95 |
96 | LOG_FILE_HANDLER_TYPE = 6 # 1 2 3 4 5 6
97 | """
98 | LOG_FILE_HANDLER_TYPE can be set to one of the six values 1 2 3 4 5 6:
99 | 1: multiprocess-safe file log rotated by file size. This is the author's batched-write implementation, which reduces file-lock operations; in a 10-process fast-write test it is about 100x faster than type 5 on Windows and 5x faster on Linux.
100 | 2: multiprocess-safe file log rotated daily; for the same file, a new log file is generated each day with the date appended to the file name.
101 | 3: a single log file with no rotation (without rotation there is no so-called process-safety problem).
102 | 4: WatchedFileHandler, usable only on Linux; rotation must be done externally with logrotate; multiprocess-safe.
103 | 5: the third-party concurrent_log_handler.ConcurrentRotatingFileHandler, rotated by file size.
104 | It uses file locks for multiprocess-safe rotation; the fcntl lock performs acceptably on Linux, but win32con on Windows performs terribly. For size-based rotation, prefer type 1 over type 5.
105 | 6: BothDayAndSizeRotatingFileHandler, developed entirely by the author; rotates by both time and size, splitting whenever either the size or the time condition is reached.
106 | """
107 |
108 | LOG_LEVEL_FILTER = logging.DEBUG  # default log level; logs below this level are not recorded. E.g. with INFO, logger.debug calls are dropped and only logger.info and above are recorded.
109 | # Raising this global level to INFO is strongly discouraged; logs have namespaces, so raise the level only for the chatty logger namespaces instead of globally.
110 | # https://nb-log-doc.readthedocs.io/zh_CN/latest/articles/c9.html#id2  Section 9.5 of the docs explains the role of python logging namespaces at length; some people still do not know what the logger name is for.
111 |
112 | # Suppressed substrings, implemented with an `if in {message}` check: if the printed message contains any string in the FILTER_WORDS_PRINT list, the message is not printed.
113 | # This applies to both print and logger console output. It can filter chatty print messages, annoying logs of the same level, or console output from third-party packages. Do not configure too many entries, or performance suffers slightly.
114 | FILTER_WORDS_PRINT = []  # e.g. to suppress messages containing 阿弥陀佛 or 善哉善哉, set FILTER_WORDS_PRINT = ['阿弥陀佛','善哉善哉']
115 |
116 | RUN_ENV = 'test'
117 |
118 | FORMATTER_DICT = {
119 | 1: logging.Formatter(
120 | '日志时间【%(asctime)s】 - 日志名称【%(name)s】 - 文件【%(filename)s】 - 第【%(lineno)d】行 - 日志等级【%(levelname)s】 - 日志信息【%(message)s】',
121 | "%Y-%m-%d %H:%M:%S"),
122 | 2: logging.Formatter(
123 | '%(asctime)s - %(name)s - %(filename)s - %(funcName)s - %(lineno)d - %(levelname)s - %(message)s',
124 | "%Y-%m-%d %H:%M:%S"),
125 | 3: logging.Formatter(
126 | '%(asctime)s - %(name)s - 【 File "%(pathname)s", line %(lineno)d, in %(funcName)s 】 - %(levelname)s - %(message)s',
127 | "%Y-%m-%d %H:%M:%S"),  # a template mimicking a traceback, click-jumpable to the place where the log was emitted
128 | 4: logging.Formatter(
129 | '%(asctime)s - %(name)s - "%(filename)s" - %(funcName)s - %(lineno)d - %(levelname)s - %(message)s - File "%(pathname)s", line %(lineno)d ',
130 | "%Y-%m-%d %H:%M:%S"),  # this one also supports click-jump
131 | 5: logging.Formatter(
132 | '%(asctime)s - %(name)s - "%(pathname)s:%(lineno)d" - %(funcName)s - %(levelname)s - %(message)s',
133 | "%Y-%m-%d %H:%M:%S"),  # in my opinion the best template; recommended
134 | 6: logging.Formatter('%(name)s - %(asctime)-15s - %(filename)s - %(lineno)d - %(levelname)s: %(message)s',
135 | "%Y-%m-%d %H:%M:%S"),
136 | 7: logging.Formatter('%(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S"),  # a template showing only the short file name and the line number
137 |
138 | 8: JsonFormatterJumpAble('%(asctime)s - %(name)s - %(levelname)s - %(message)s - %(filename)s %(lineno)d %(process)d %(thread)d', "%Y-%m-%d %H:%M:%S.%f",
139 | json_ensure_ascii=False),  # JSON logs, convenient for ELK collection and analysis.
140 |
141 | 9: logging.Formatter(
142 | '[p%(process)d_t%(thread)d] %(asctime)s - %(name)s - "%(pathname)s:%(lineno)d" - %(funcName)s - %(levelname)s - %(message)s',
143 | "%Y-%m-%d %H:%M:%S"),  # template 5 improved with process and thread ids.
144 | 10: logging.Formatter(
145 | '[p%(process)d_t%(thread)d] %(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S"),  # template 7 improved with process and thread ids.
146 | 11: logging.Formatter(
147 | f'({computer_ip},{computer_name})-[p%(process)d_t%(thread)d] %(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(funcName)s - %(levelname)s - %(message)s', "%Y-%m-%d %H:%M:%S"),  # template 7 improved with process and thread ids plus ip and host name.
148 | }
149 |
150 | FORMATTER_KIND = 5  # which template is used by default when get_logger does not specify one
151 |
--------------------------------------------------------------------------------
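Handler type 6 described above (BothDayAndSizeRotatingFileHandler) rolls the file over whenever either the calendar day changes or the size cap is exceeded. That decision can be sketched as a stand-alone function (a simplified, hypothetical sketch, not the handler's actual implementation):

```python
import datetime
import os

LOG_FILE_SIZE_MB = 100  # mirrors LOG_FILE_SIZE in the config above

def needs_rollover(path, opened_on, today=None):
    """Rotate when the day changed OR the file grew past the size cap."""
    today = today or datetime.date.today()
    if today != opened_on:
        return True  # time condition reached
    # size condition: only meaningful if the file already exists
    return os.path.exists(path) and os.path.getsize(path) >= LOG_FILE_SIZE_MB * 1024 * 1024
```

Whichever condition fires first triggers the split, which is exactly the "time or size, whichever comes first" behavior the docstring describes.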
/tests/compare_spped/test_kuai_log.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import time
3 |
4 | from kuai_log import get_logger
5 |
6 | logger = get_logger(name='test', level=logging.DEBUG, log_filename='t5.log',
7 | log_path='/pythonlogs', is_add_file_handler=True,
8 | # formatter_template='{file_name} - {function} - {name} - {levelname} - {message}',
9 | # formatter_template='{levelname} - {message}'
10 | )
11 |
12 | t_start = time.time()
13 | for i in range(100000):
14 | logger.debug(i)
15 | # logger.debug({'b':i},extra={'aaa':i*2})
16 | print(time.time() - t_start)
17 |
--------------------------------------------------------------------------------
/tests/compare_spped/test_loguru.py:
--------------------------------------------------------------------------------
1 |
2 | import time
3 | import loguru
4 | from loguru import logger # type: loguru._logger.Logger
5 |
6 | print(type(logger))
7 | logger.add('/pythonlogs/logurutest_{time}.log',rotation="00:00")
8 |
9 |
10 | t_start = time.time()
11 | for i in range(100000):
12 | logger.debug(i)
13 |
14 |
15 | print(time.time() - t_start)
--------------------------------------------------------------------------------
/tests/compare_spped/test_nb_log.py:
--------------------------------------------------------------------------------
1 |
2 | import time
3 |
4 | from nb_log import get_logger
5 |
6 |
7 | logger = get_logger(name='test', log_level_int=10, log_filename='t6abcnb.log', log_path='/pythonlogs'
8 | # formatter_template='{file_name} - {function} - {name} - {levelname} - {message}',
9 | # formatter_template='{levelname} - {message}'
10 | )
11 | t_start = time.time()
12 | for i in range(100000):
13 | logger.debug(111)
14 | # logger.info(111)
15 | # logger.warning(111)
16 | # logger.error(111)
17 | # logger.critical(111)
18 |
19 | print(time.time() - t_start)
--------------------------------------------------------------------------------
/tests/compare_spped/test_raw_log.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import time
4 | import logging
5 |
6 |
7 | import logging
8 | from logging import handlers
9 |
10 | logger_raw = logging.getLogger("test_logging")
11 | logger_raw.setLevel(logging.DEBUG)
12 |
13 | logger_raw.warning('the log is too plain and ugly, and it is not recorded to a file')
14 |
15 | # add a console handler
16 | handler_console = logging.StreamHandler()
17 | formatter1 = logging.Formatter('%(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(levelname)s - %(message)s',"%Y-%m-%d %H:%M:%S")
18 | handler_console.setFormatter(formatter1)
19 | logger_raw.addHandler(handler_console)
20 |
21 | # add a file handler
22 | handler_file = logging.handlers.RotatingFileHandler('test_logging.log',mode='a')
23 | formatter2 = logging.Formatter('%(asctime)s - %(name)s - %(funcName)s - %(levelname)s - %(message)s',
24 | "%Y-%m-%d %H:%M:%S")
25 | handler_file.setFormatter(formatter2)
26 | logger_raw.addHandler(handler_file)
27 |
28 |
29 | t_start = time.time()
30 | for i in range(100000):
31 | logger_raw.error("the log format is better now, and it is recorded to a file")
32 | # logger.info(111)
33 | # logger.warning(111)
34 | # logger.error(111)
35 | # logger.critical(111)
36 |
37 | print(time.time() - t_start)
--------------------------------------------------------------------------------
/tests/find_log_test.py:
--------------------------------------------------------------------------------
1 |
2 | from pathlib import Path
3 | import re
4 |
5 | print(len(re.findall('hi',(Path('/pythonlogs')/Path('2023-10-07.0001.f1d.log')).read_text())))
--------------------------------------------------------------------------------
/tests/kuai_log_example.py:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | import logging
5 | import time
6 |
7 | from kuai_log import get_logger
8 |
9 | logger = get_logger(name='test_example', level=logging.DEBUG, log_filename='kuailog_ex.log',
10 | log_path='/pythonlogs', is_add_file_handler=True,
11 | json_log_path='/pythonlogs_json', is_add_json_file_handler=True,
12 | )
13 |
14 | t_start = time.time()
15 | for i in range(1):
16 | logger.debug(i)
17 | logger.info(i)
18 | logger.warning(i)
19 | logger.error(i)
20 | logger.critical(i)
21 |
22 | print(time.time() - t_start)
23 |
24 | time.sleep(10000)
25 |
--------------------------------------------------------------------------------
/tests/kuai_log_test_multi_process.py:
--------------------------------------------------------------------------------
1 | import time
2 |
3 | from kuai_log import get_logger
4 | from multiprocessing import Process
5 |
6 | logger1 = get_logger('f1',log_filename='f1c.log',log_path='/pythonlogs',is_add_file_handler=True)
7 | logger2 = get_logger('f2',log_filename='f2c.log',log_path='/pythonlogs',is_add_file_handler=True)
8 |
9 |
10 | def fun():
11 | for i in range(10000):
12 | logger1.warning(f'hi {i}')
13 | logger2.debug(f'hello {i}')
14 |
15 |
16 | if __name__ == '__main__':
17 | Process(target=fun).start()
18 | Process(target=fun).start()
19 |
20 | time.sleep(10000)
--------------------------------------------------------------------------------
/tests/test_format.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | s1 = 'ab{c}'.format(c=1,d=2)
4 |
5 | print(s1)
--------------------------------------------------------------------------------
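What tests/test_format.py above demonstrates is a `str.format` detail that brace-style log templates rely on: unused keyword arguments are silently ignored, so a template can be fed a record dict that carries extra fields:

```python
# str.format ignores keyword arguments that do not appear in the template,
# so a '{levelname} - {message}' style template tolerates extra record fields.
record = {'levelname': 'INFO', 'message': 'hi', 'lineno': 42, 'name': 'demo'}
line = '{levelname} - {message}'.format(**record)
print(line)  # INFO - hi
```

This is why a formatter_template like '{levelname} - {message}' (seen in the compare_spped tests) works even though the full record has many more fields.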
/tests/test_get_logging_name.py:
--------------------------------------------------------------------------------
1 | import flask
2 | import logging
3 | import requests
4 | # get the names of all registered loggers
5 | logger_names = logging.Logger.manager.loggerDict.keys()
6 |
7 | # print the logger names
8 | for name in logger_names:
9 | print(name)
--------------------------------------------------------------------------------
/tests/test_logging_name.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import time
3 |
4 | from kuai_log import get_logger
5 |
6 |
7 |
8 | l1 = logging.getLogger('l1.c')
9 | import tornado
10 | import fastapi
11 | print(logging.getLogger('l1') is l1)
12 | l1.info('xixi1')
13 |
14 |
15 | logger = get_logger('l1')
16 |
17 | # l1._log = logger.log
18 |
19 | print(l1._log)
20 |
21 | l1.info('xixi2')
22 |
23 |
24 |
25 | # class InterceptHandler(logging.Handler):
26 | # def emit(self, record: logging.LogRecord) -> None:
27 | # print(record)
28 | # print(record.getMessage())
29 | # logger.log(record.levelno,record.getMessage(),record.args,record.exc_info,stack_info=2)
30 | #
31 | # l1.addHandler(InterceptHandler())
32 |
33 | # l1._log = logger.log
34 |
35 | l1.error('xixi3')
36 |
37 | l1.log(10,'logggg')
38 | print('over')
39 | time.sleep(1000000)
--------------------------------------------------------------------------------
/tests/test_namespace.py:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | import requests
5 | from kuai_log import get_logger
6 | import logging
7 |
8 | get_logger('urllib3.connectionpool')
9 | import nb_log
10 | # logger = logging.getLogger(None)
11 | # logger.setLevel(10)
12 |
13 | # nb_log.get_logger(None)
14 | requests.get('https://www.baidu.com')
--------------------------------------------------------------------------------
/tests/test_raw_log_pysnooper.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import time
4 | import logging
5 |
6 |
7 | import logging
8 | from logging import handlers
9 |
10 | logger_raw = logging.getLogger("test_logging")
11 | logger_raw.setLevel(logging.DEBUG)
12 |
13 | logger_raw.warning('the log is too plain and ugly, and it is not recorded to a file')
14 |
15 | # add a console handler
16 | handler_console = logging.StreamHandler()
17 | formatter1 = logging.Formatter('%(asctime)s - %(name)s - "%(filename)s:%(lineno)d" - %(levelname)s - %(message)s',"%Y-%m-%d %H:%M:%S")
18 | handler_console.setFormatter(formatter1)
19 | logger_raw.addHandler(handler_console)
20 |
21 | # add a file handler
22 | handler_file = logging.handlers.RotatingFileHandler('test_logging.log',mode='a')
23 | formatter2 = logging.Formatter('%(asctime)s - %(name)s - %(funcName)s - %(levelname)s - %(message)s',
24 | "%Y-%m-%d %H:%M:%S")
25 | handler_file.setFormatter(formatter2)
26 | logger_raw.addHandler(handler_file)
27 |
28 |
29 | t_start = time.time()
30 |
31 | import pysnooper
32 | import pysnooper_click_able
33 |
34 | @pysnooper_click_able.snoop(depth=1000)
35 | def f():
36 | import gevent
37 | # for i in range(1):
38 | # logger_raw.error("the log format is better now, and it is recorded to a file")
39 | # logger.info(111)
40 | # logger.warning(111)
41 | # logger.error(111)
42 | # logger.critical(111)
43 |
44 | f()
45 | print(time.time() - t_start)
--------------------------------------------------------------------------------
/tests/test_timestr.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import time
4 |
5 | # print(time.gmtime())
6 |
7 | from kuai_log._datetime import datetime,aware_now
8 | import pysnooper_click_able
9 | # datetime.strftime('%Y-%m-%dT%H:%M:%S.%f%z')
10 |
11 | @pysnooper_click_able.snoop(depth=10000)
12 | def f():
13 | for i in range(1):
14 | t = time.strftime('%Y-%m-%dT%H:%M:%S%z')
15 | # t = aware_now()
16 | if i %1000 == 0:
17 | # print(aware_now())
18 | print(t)
19 |
20 |
21 |
22 | f()
23 |
24 |
25 | time.sleep(100)
--------------------------------------------------------------------------------