├── .gitignore
├── LICENSE
├── NanoLog.hpp
├── README.md
├── main.cpp
├── nanolog.hpp
├── nanolog.vcxproj
└── nanolog.vcxproj.filters


/.gitignore:
--------------------------------------------------------------------------------
# Prerequisites
*.d

# Compiled Object files
*.slo
*.lo
*.o
*.obj

# Precompiled Headers
*.gch
*.pch

# Compiled Dynamic libraries
*.so
*.dylib
*.dll

# Fortran module files
*.mod
*.smod

# Compiled Static libraries
*.lai
*.la
*.a
*.lib

# Executables
*.exe
*.out
*.app
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information.
(Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
--------------------------------------------------------------------------------
/NanoLog.hpp:
--------------------------------------------------------------------------------
#pragma once

// Header names below are reconstructed from what this file uses; the
// original <...> tokens were lost in extraction.
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <fstream>
#include <functional>
#include <iostream>
#include <memory>
#include <new>
#include <queue>
#include <set>
#include <sstream>
#include <string>
#include <thread>
#include <tuple>
#include <type_traits>

#if defined (_MSC_VER)
#include <filesystem>
namespace nanolog_fs = std::filesystem;
#if defined max
#undef max
#endif
#elif defined (__GNUC__)
#if __GNUC__ < 8
#include <experimental/filesystem>
namespace nanolog_fs = std::experimental::filesystem;
#else
#include <filesystem>
namespace nanolog_fs = std::filesystem;
#endif
#else
#include <filesystem>
namespace nanolog_fs = std::filesystem;
#endif

#define YEAR (1900)
#define MONTH (1)
#define CCT (+8)

#if 1
namespace waf
{
#endif

namespace nanolog
{
class NanologBase
{
public:
    /* Returns milliseconds since epoch */
    static uint64_t timestamp_now()
    {
        // return std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now().time_since_epoch()).count();
        return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
    }

    static std::string get_datetime(uint64_t timestamp)
    {
        char milliseconds[4];
        sprintf(milliseconds, "%03llu", static_cast<unsigned long long>(timestamp % 1000));
        std::time_t time_t = timestamp / 1000;
        tm * gmtime = std::localtime(&time_t);

        std::ostringstream ostr;
        if (nullptr == gmtime)
        {
            ostr << milliseconds;
        }
        else
        {
            char datetime[32];
            //sprintf(datetime, "%d%02d%02d%02d%02d%02d", gmtime->tm_year + YEAR, gmtime->tm_mon + MONTH,
            //    gmtime->tm_mday, gmtime->tm_hour + CCT, gmtime->tm_min, gmtime->tm_sec);
            sprintf(datetime, "%d%02d%02d", gmtime->tm_year + YEAR, gmtime->tm_mon + MONTH, gmtime->tm_mday);
            ostr << datetime;
        }

        return ostr.str();
    }

    /* I want [2016-10-13 00:01:23.528514] */
    static void format_timestamp(std::ostream & os, uint64_t timestamp)
    {
        // The next 3 lines do not work on MSVC!
        // auto duration = std::chrono::microseconds(timestamp);
        // std::chrono::high_resolution_clock::time_point time_point(duration);
        // std::time_t time_t = std::chrono::high_resolution_clock::to_time_t(time_point);
        char milliseconds[7];
        sprintf(milliseconds, "%03llu", static_cast<unsigned long long>(timestamp % 1000));
        std::time_t time_t = timestamp / 1000;
        tm * gmtime = std::localtime(&time_t);

        if (nullptr == gmtime)
        {
            os << '[' << milliseconds << ']';
        }
        else
        {
            char datetime[32];
            sprintf(datetime, "%d-%02d-%02d %02d:%02d:%02d", gmtime->tm_year + YEAR, gmtime->tm_mon + MONTH,
                gmtime->tm_mday, gmtime->tm_hour, gmtime->tm_min, gmtime->tm_sec);
            os << '[' << datetime << "." << milliseconds << ']';
        }
    }

    static std::thread::id this_thread_id()
    {
        static thread_local const std::thread::id id = std::this_thread::get_id();
        return id;
    }
};

enum class LogLevel : uint8_t { INFO, WARN, CRIT };

class NanoLogLine
{
public:
    template < typename T, typename Tuple >
    struct TupleIndex;

    template < typename T, typename ... Types >
    struct TupleIndex < T, std::tuple < T, Types... > >
    {
        static constexpr const std::size_t value = 0;
    };

    template < typename T, typename U, typename ... Types >
    struct TupleIndex < T, std::tuple < U, Types... > >
    {
        static constexpr const std::size_t value = 1 + TupleIndex < T, std::tuple < Types... > >::value;
    };

    struct string_literal_t
    {
        explicit string_literal_t(char const * s) : m_s(s) {}
        char const * m_s;
    };

    typedef std::tuple < char, uint32_t, uint64_t, int32_t, int64_t, double, NanoLogLine::string_literal_t, char * > SupportedTypes;

    NanoLogLine(LogLevel level, char const * file, char const * function, uint32_t line)
        : m_bytes_used(0)
        , m_buffer_size(sizeof(m_stack_buffer))
    {
        encode < uint64_t >(NanologBase::timestamp_now());
        encode < std::thread::id >(NanologBase::this_thread_id());
        encode < string_literal_t >(string_literal_t(file));
        encode < string_literal_t >(string_literal_t(function));
        encode < uint32_t >(line);
        encode < LogLevel >(level);
    }
    ~NanoLogLine() = default;

    NanoLogLine(NanoLogLine &&) = default;
    NanoLogLine& operator=(NanoLogLine &&) = default;

    void stringify(std::ostream & os)
    {
        char * b = !m_heap_buffer ? m_stack_buffer : m_heap_buffer.get();
        char const * const end = b + m_bytes_used;
        uint64_t timestamp = *reinterpret_cast < uint64_t * >(b); b += sizeof(uint64_t);
        std::thread::id threadid = *reinterpret_cast < std::thread::id * >(b); b += sizeof(std::thread::id);
        string_literal_t file = *reinterpret_cast < string_literal_t * >(b); b += sizeof(string_literal_t);
        string_literal_t function = *reinterpret_cast < string_literal_t * >(b); b += sizeof(string_literal_t);
        uint32_t line = *reinterpret_cast < uint32_t * >(b); b += sizeof(uint32_t);
        LogLevel loglevel = *reinterpret_cast < LogLevel * >(b); b += sizeof(LogLevel);

        NanologBase::format_timestamp(os, timestamp);

        os << '[' << to_string(loglevel) << ']'
           << '[' << threadid << ']'
           << '[' << file.m_s << ':' << function.m_s << ':' << line << "] ";

        stringify(os, b, end);

        os << std::endl;

        if (loglevel >= LogLevel::CRIT)
            os.flush();
    }

    NanoLogLine& operator<<(char arg)
    {
        encode < char >(arg, TupleIndex < char, SupportedTypes >::value);
        return *this;
    }
    NanoLogLine& operator<<(int32_t arg)
    {
        encode < int32_t >(arg, TupleIndex < int32_t, SupportedTypes >::value);
        return *this;
    }
    NanoLogLine& operator<<(uint32_t arg)
    {
        encode < uint32_t >(arg, TupleIndex < uint32_t, SupportedTypes >::value);
        return *this;
    }
    NanoLogLine& operator<<(int64_t arg)
    {
        encode < int64_t >(arg, TupleIndex < int64_t, SupportedTypes >::value);
        return *this;
    }
    NanoLogLine& operator<<(uint64_t arg)
    {
        encode < uint64_t >(arg, TupleIndex < uint64_t, SupportedTypes >::value);
        return *this;
    }
    NanoLogLine& operator<<(double arg)
    {
        encode < double >(arg, TupleIndex < double, SupportedTypes >::value);
        return *this;
    }
    NanoLogLine& operator<<(std::string const & arg)
    {
        encode_c_string(arg.c_str(), arg.length());
        return *this;
    }

    template < size_t N >
    NanoLogLine& operator<<(const char (&arg)[N])
    {
        encode(string_literal_t(arg));
        return *this;
    }

    template < typename Arg >
    typename std::enable_if < std::is_same < Arg, char const * >::value, NanoLogLine& >::type
    operator<<(Arg const & arg)
    {
        encode(arg);
        return *this;
    }

    template < typename Arg >
    typename std::enable_if < std::is_same < Arg, char * >::value, NanoLogLine& >::type
    operator<<(Arg const & arg)
    {
        encode(arg);
        return *this;
    }

private:
    char const * to_string(LogLevel loglevel)
    {
        switch (loglevel)
        {
        case LogLevel::INFO:
            return "INFO";
        case LogLevel::WARN:
            return "WARN";
        case LogLevel::CRIT:
            return "CRIT";
        }
        return "XXXX";
    }

    char * buffer()
    {
        return !m_heap_buffer ? &m_stack_buffer[m_bytes_used] : &(m_heap_buffer.get())[m_bytes_used];
    }

    template < typename Arg >
    void encode(Arg arg)
    {
        *reinterpret_cast < Arg * >(buffer()) = arg;
        m_bytes_used += sizeof(Arg);
    }

    template < typename Arg >
    void encode(Arg arg, uint8_t type_id)
    {
        resize_buffer_if_needed(sizeof(Arg) + sizeof(uint8_t));
        encode < uint8_t >(type_id);
        encode < Arg >(arg);
    }

    void encode(char * arg)
    {
        if (arg != nullptr)
            encode_c_string(arg, strlen(arg));
    }
    void encode(char const * arg)
    {
        if (arg != nullptr)
            encode_c_string(arg, strlen(arg));
    }
    void encode(string_literal_t arg)
    {
        encode < string_literal_t >(arg, TupleIndex < string_literal_t, SupportedTypes >::value);
    }
    void encode_c_string(char const * arg, size_t length)
    {
        if (length == 0)
            return;

        resize_buffer_if_needed(1 + length + 1);
        char * b = buffer();
        auto type_id = TupleIndex < char *, SupportedTypes >::value;
        *reinterpret_cast < uint8_t * >(b++) = static_cast < uint8_t >(type_id);
        memcpy(b, arg, length + 1);
        m_bytes_used += 1 + length + 1;
    }

    void resize_buffer_if_needed(size_t additional_bytes)
    {
        size_t const required_size = m_bytes_used + additional_bytes;

        if (required_size <= m_buffer_size)
            return;

        if (!m_heap_buffer)
        {
            m_buffer_size = std::max(static_cast < size_t >(512), required_size);
            m_heap_buffer.reset(new char[m_buffer_size]);
            memcpy(m_heap_buffer.get(), m_stack_buffer, m_bytes_used);
            return;
        }
        else
        {
            m_buffer_size = std::max(static_cast < size_t >(2 * m_buffer_size), required_size);
            std::unique_ptr < char [] > new_heap_buffer(new char[m_buffer_size]);
            memcpy(new_heap_buffer.get(), m_heap_buffer.get(), m_bytes_used);
            m_heap_buffer.swap(new_heap_buffer);
        }
    }

    void stringify(std::ostream & os, char * start, char const * const end)
    {
        if (start == end)
            return;

        int type_id = static_cast < int >(*start); start++;

        switch (type_id)
        {
        case 0:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 0, SupportedTypes >::type * >(nullptr)), end);
            return;
        case 1:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 1, SupportedTypes >::type * >(nullptr)), end);
            return;
        case 2:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 2, SupportedTypes >::type * >(nullptr)), end);
            return;
        case 3:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 3, SupportedTypes >::type * >(nullptr)), end);
            return;
        case 4:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 4, SupportedTypes >::type * >(nullptr)), end);
            return;
        case 5:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 5, SupportedTypes >::type * >(nullptr)), end);
            return;
        case 6:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 6, SupportedTypes >::type * >(nullptr)), end);
            return;
        case 7:
            stringify(os, decode(os, start, static_cast < std::tuple_element < 7, SupportedTypes >::type * >(nullptr)), end);
            return;
        }
    }

    template < typename Arg >
    char * decode(std::ostream & os, char * b, Arg * dummy)
    {
        Arg arg = *reinterpret_cast < Arg * >(b);
        os << arg;
        return b + sizeof(Arg);
    }

    char * decode(std::ostream & os, char * b, NanoLogLine::string_literal_t * dummy)
    {
        NanoLogLine::string_literal_t s = *reinterpret_cast < NanoLogLine::string_literal_t * >(b);
        os << s.m_s;
        return b + sizeof(NanoLogLine::string_literal_t);
    }

    char * decode(std::ostream & os, char * b, char ** dummy)
    {
        while (*b != '\0')
        {
            os << *b;
            ++b;
        }
        return ++b;
    }

private:
    size_t m_bytes_used;
    size_t m_buffer_size;
    std::unique_ptr < char [] > m_heap_buffer;
    char m_stack_buffer[256 - 2 * sizeof(size_t) - sizeof(decltype(m_heap_buffer)) - 8 /* Reserved */];
};

struct BufferBase
{
    virtual ~BufferBase() = default;
    virtual void push(NanoLogLine && logline) = 0;
    virtual bool try_pop(NanoLogLine & logline) = 0;
};

class SpinLock
{
public:
    SpinLock(std::atomic_flag & flag) : m_flag(flag)
    {
        while (m_flag.test_and_set(std::memory_order_acquire));
    }

    ~SpinLock()
    {
        m_flag.clear(std::memory_order_release);
    }

private:
    std::atomic_flag & m_flag;
};

/* Multi Producer Single Consumer Ring Buffer */
class RingBuffer : public BufferBase
{
public:
    struct alignas(64) Item
    {
        Item()
            : written(0)
            , logline(LogLevel::INFO, nullptr, nullptr, 0)
        {
        }

        std::atomic_flag flag = ATOMIC_FLAG_INIT;
        char written;
        char padding[256 - sizeof(std::atomic_flag) - sizeof(char) - sizeof(NanoLogLine)];
        NanoLogLine logline;
    };

    RingBuffer(size_t const size)
        : m_size(size)
        , m_ring(static_cast < Item * >(std::malloc(size * sizeof(Item))))
        , m_write_index(0)
        , m_read_index(0)
    {
        for (size_t i = 0; i < m_size; ++i)
        {
            new (&m_ring[i]) Item();
        }
        static_assert(sizeof(Item) == 256, "Unexpected size != 256");
    }

    ~RingBuffer()
    {
        for (size_t i = 0; i < m_size; ++i)
        {
            m_ring[i].~Item();
        }
        std::free(m_ring);
    }

    void push(NanoLogLine && logline) override
    {
        unsigned int write_index = m_write_index.fetch_add(1, std::memory_order_relaxed) % m_size;
        Item & item = m_ring[write_index];
        SpinLock spinlock(item.flag);
        item.logline = std::move(logline);
        item.written = 1;
    }

    bool try_pop(NanoLogLine & logline) override
    {
        Item & item = m_ring[m_read_index % m_size];
        SpinLock spinlock(item.flag);
        if (item.written == 1)
        {
            logline = std::move(item.logline);
            item.written = 0;
            ++m_read_index;
            return true;
        }
        return false;
    }

    RingBuffer(RingBuffer const &) = delete;
    RingBuffer& operator=(RingBuffer const &) = delete;

private:
    size_t const m_size;
    Item * m_ring;
    std::atomic < unsigned int > m_write_index;
    char pad[64];
    unsigned int m_read_index;
};

class Buffer
{
public:
    struct Item
    {
        Item(NanoLogLine && nanologline) : logline(std::move(nanologline)) {}
        char padding[256 - sizeof(NanoLogLine)];
        NanoLogLine logline;
    };

    static constexpr const size_t size = 32768; // 8MB. Helps reduce memory fragmentation

    Buffer() : m_buffer(static_cast < Item * >(std::malloc(size * sizeof(Item))))
    {
        for (size_t i = 0; i <= size; ++i)
        {
            m_write_state[i].store(0, std::memory_order_relaxed);
        }
        static_assert(sizeof(Item) == 256, "Unexpected size != 256");
    }

    ~Buffer()
    {
        unsigned int write_count = m_write_state[size].load();
        for (size_t i = 0; i < write_count; ++i)
        {
            m_buffer[i].~Item();
        }
        std::free(m_buffer);
    }

    // Returns true if we need to switch to next buffer
    bool push(NanoLogLine && logline, unsigned int const write_index)
    {
        new (&m_buffer[write_index]) Item(std::move(logline));
        m_write_state[write_index].store(1, std::memory_order_release);
        return m_write_state[size].fetch_add(1, std::memory_order_acquire) + 1 == size;
    }

    bool try_pop(NanoLogLine & logline, unsigned int const read_index)
    {
        if (m_write_state[read_index].load(std::memory_order_acquire))
        {
            Item & item = m_buffer[read_index];
            logline = std::move(item.logline);
            return true;
        }
        return false;
    }

    Buffer(Buffer const &) = delete;
    Buffer& operator=(Buffer const &) = delete;

private:
    Item * m_buffer;
    std::atomic < unsigned int > m_write_state[size + 1];
};

class QueueBuffer : public BufferBase
{
public:
    QueueBuffer(QueueBuffer const &) = delete;
    QueueBuffer& operator=(QueueBuffer const &) = delete;

    QueueBuffer() : m_current_read_buffer{nullptr}
        , m_write_index(0)
        , m_read_index(0)
    {
        setup_next_write_buffer();
    }

    void push(NanoLogLine && logline) override
    {
        unsigned int write_index = m_write_index.fetch_add(1, std::memory_order_relaxed);
        if (write_index < Buffer::size)
        {
            if (m_current_write_buffer.load(std::memory_order_acquire)->push(std::move(logline), write_index))
            {
                setup_next_write_buffer();
            }
        }
        else
        {
            while (m_write_index.load(std::memory_order_acquire) >= Buffer::size);
            push(std::move(logline));
        }
    }

    bool try_pop(NanoLogLine & logline) override
    {
        if (m_current_read_buffer == nullptr)
            m_current_read_buffer = get_next_read_buffer();

        Buffer * read_buffer = m_current_read_buffer;

        if (read_buffer == nullptr)
            return false;

        if (bool success = read_buffer->try_pop(logline, m_read_index))
        {
            m_read_index++;
            if (m_read_index == Buffer::size)
            {
                m_read_index = 0;
                m_current_read_buffer = nullptr;
                SpinLock spinlock(m_flag);
                m_buffers.pop();
            }
            return true;
        }

        return false;
    }

private:
    void setup_next_write_buffer()
    {
        std::unique_ptr < Buffer > next_write_buffer(new Buffer());
        m_current_write_buffer.store(next_write_buffer.get(), std::memory_order_release);
        SpinLock spinlock(m_flag);
        m_buffers.push(std::move(next_write_buffer));
        m_write_index.store(0, std::memory_order_relaxed);
    }

    Buffer * get_next_read_buffer()
    {
        SpinLock spinlock(m_flag);
        return m_buffers.empty() ? nullptr : m_buffers.front().get();
    }

private:
    std::queue < std::unique_ptr < Buffer > > m_buffers;
    std::atomic < Buffer * > m_current_write_buffer;
    Buffer * m_current_read_buffer;
    std::atomic < unsigned int > m_write_index;
    unsigned int m_read_index;
    std::atomic_flag m_flag = ATOMIC_FLAG_INIT;
};

class FileWriter
{
public:
    struct sfile_data
    {
        int64_t date = 0LL;
        int64_t index = 0LL;

        bool operator > (const sfile_data& right) const
        {
            if (date > right.date)
            {
                return true;
            }
            else if (date < right.date)
            {
                return false;
            }
            else
            {
                return index > right.index;
            }
        }

        bool operator == (const sfile_data& right) const
        {
            return (date == right.date) && (index == right.index);
        }
    };

    FileWriter(std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb, int file_num)
        : m_log_file_roll_size_bytes(log_file_roll_size_mb * 1024 * 1024)
        , m_log_directory(log_directory)
        , m_name(log_directory + "/" + log_file_name)
        , m_filenum(file_num)
        , m_first_create(true)
    {
        //roll_file();
        init_fileset();
        create_file();
    }

    void write(NanoLogLine & logline)
    {
        auto pos = m_os->tellp();
        logline.stringify(*m_os);
        m_bytes_written += m_os->tellp() - pos;
        if (m_bytes_written > m_log_file_roll_size_bytes)
        {
            roll_file();
        }
    }

private:
    void init_fileset()
    {
        for (auto& p : nanolog_fs::directory_iterator(m_log_directory))
        {
            const nanolog_fs::path& path = p.path();
            insert(path.filename().string());
        }
    }

    void insert(sfile_data file_data)
    {
        if (file_data.date <= 0LL || file_data.index < 0LL)
        {
            return;
        }
        m_file_value_set.insert(file_data);
    }

    void insert(const std::string& filename)
    {
        sfile_data file_data = filename_to_file_data(filename);
        if (file_data.date <= 0LL || file_data.index < 0LL)
        {
            return;
        }
        m_file_value_set.insert(file_data);
    }

    sfile_data filename_to_file_data(const std::string& filename)
    {
        sfile_data file_data;
        size_t pos1 = filename.find_first_of("_");
        if (pos1 == std::string::npos)
        {
            return file_data;
        }
        size_t pos2 = filename.find_last_of("_");
        if (pos2 == std::string::npos || pos2 <= pos1)
        {
            return file_data;
        }
        // The date is the run of characters strictly between the two underscores.
        std::string str_date = filename.substr(pos1 + 1, pos2 - pos1 - 1);
        file_data.date = atoll(str_date.c_str());
        std::string str_index = filename.substr(pos2 + 1);
        file_data.index = atoll(str_index.c_str());
        return file_data;
    }

    std::string get_logfilename()
    {
        std::string log_file_name = "_";
        std::string date_time = NanologBase::get_datetime(NanologBase::timestamp_now());
        log_file_name.append(date_time);
        log_file_name.append("_");
        log_file_name.append(std::to_string(++m_file_number));
        return log_file_name;
    }

    void create_file()
    {
        if (m_first_create)
        {
            sfile_data file_data_max;
            if (!m_file_value_set.empty())
            {
                file_data_max = *(m_file_value_set.begin());
            }
            sfile_data cur_file_data = filename_to_file_data(get_logfilename());
            if (file_data_max > cur_file_data ||
                file_data_max == cur_file_data)
            {
                cur_file_data.index = file_data_max.index;
            }
            std::string file_path = m_name + file_data_to_filename(cur_file_data);
            m_os.reset(new std::ofstream());
            m_os->open(file_path, std::ofstream::out | std::ofstream::app);
            insert(cur_file_data);
            init_file_number(cur_file_data);
m_first_create = false; 762 | } 763 | else 764 | { 765 | std::string logfile_suffix = get_logfilename(); 766 | std::string file_path = m_name + logfile_suffix; 767 | m_os.reset(new std::ofstream()); 768 | m_os->open(file_path, std::ofstream::out | std::ofstream::app); 769 | insert(logfile_suffix); 770 | } 771 | 772 | 773 | delete_exceed_file(); 774 | } 775 | 776 | void init_file_number(const sfile_data& cur_file_data){ 777 | m_file_number = cur_file_data.index; 778 | } 779 | 780 | void delete_exceed_file(){ 781 | if(m_filenum > 0 && m_file_value_set.size() > m_filenum) 782 | { 783 | int num = 0; 784 | auto it_set = m_file_value_set.begin(); 785 | while(it_set != m_file_value_set.end()) 786 | { 787 | if(num >= m_filenum) 788 | { 789 | std::string filename = m_name + file_data_to_filename(*it_set); 790 | try{ 791 | nanolog_fs::remove(filename); 792 | } 793 | catch(const std::exception& excep) 794 | { 795 | 796 | } 797 | m_file_value_set.erase(it_set++); 798 | } 799 | else 800 | { 801 | ++num; 802 | ++it_set; 803 | } 804 | } 805 | } 806 | } 807 | 808 | void roll_file() 809 | { 810 | if (m_os) 811 | { 812 | m_os->flush(); 813 | m_os->close(); 814 | } 815 | 816 | m_bytes_written = 0; 817 | //m_file_number = 0; 818 | // m_os.reset(new std::ofstream()); 819 | // TODO Optimize this part. Does it even matter ? 
820 | // std::string log_file_name = m_name;
821 | // log_file_name.append(".");
822 | // log_file_name.append(std::to_string(++m_file_number));
823 | // log_file_name.append(".txt");
824 | // m_os->open(log_file_name, std::ofstream::out | std::ofstream::trunc);
825 | create_file();
826 | }
827 |
828 | std::string file_data_to_filename(const sfile_data& cur_file_data)
829 | {
830 | std::string filename = "_";
831 | filename += std::to_string(cur_file_data.date);
832 | filename += "_";
833 | filename += std::to_string(cur_file_data.index);
834 | return filename;
835 | }
836 |
837 | private:
838 | int64_t m_file_number = 0;
839 | std::streamoff m_bytes_written = 0;
840 | uint32_t const m_log_file_roll_size_bytes;
841 | std::string const m_log_directory;
842 | std::string const m_name;
843 | int m_filenum;
844 | bool m_first_create;
845 | std::unique_ptr < std::ofstream > m_os;
846 | std::set < sfile_data, std::greater < sfile_data > > m_file_value_set;
847 | };
848 |
849 | /*
850 | * Non guaranteed logging. Uses a ring buffer to hold log lines.
851 | * When the ring gets full, the previous log line in the slot will be dropped.
852 | * Does not block producer even if the ring buffer is full.
853 | * ring_buffer_size_mb - LogLines are pushed into a mpsc ring buffer whose size
854 | * is determined by this parameter. Since each LogLine is 256 bytes,
855 | * ring_buffer_size = ring_buffer_size_mb * 1024 * 1024 / 256
856 | */
857 | struct NonGuaranteedLogger
858 | {
859 | NonGuaranteedLogger(uint32_t ring_buffer_size_mb_) : ring_buffer_size_mb(ring_buffer_size_mb_) {}
860 | uint32_t ring_buffer_size_mb;
861 | };
862 |
863 | /*
864 | * Provides a guarantee log lines will not be dropped.
865 | */ 866 | struct GuaranteedLogger 867 | { 868 | }; 869 | 870 | class NanoLogger 871 | { 872 | public: 873 | NanoLogger(NonGuaranteedLogger ngl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb, int file_num) 874 | : m_state(State::INIT) 875 | , m_buffer_base(new RingBuffer(std::max(1u, ngl.ring_buffer_size_mb) * 1024 * 4)) 876 | , m_file_writer(log_directory, log_file_name, std::max(1u, log_file_roll_size_mb), file_num) 877 | , m_thread(&NanoLogger::pop, this) 878 | { 879 | m_state.store(State::READY, std::memory_order_release); 880 | } 881 | 882 | NanoLogger(GuaranteedLogger gl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb, int file_num) 883 | : m_state(State::INIT) 884 | , m_buffer_base(new QueueBuffer()) 885 | , m_file_writer(log_directory, log_file_name, std::max(1u, log_file_roll_size_mb), file_num) 886 | , m_thread(&NanoLogger::pop, this) 887 | { 888 | m_state.store(State::READY, std::memory_order_release); 889 | } 890 | 891 | ~NanoLogger() 892 | { 893 | m_state.store(State::SHUTDOWN); 894 | m_thread.join(); 895 | } 896 | 897 | void add(NanoLogLine && logline) 898 | { 899 | m_buffer_base->push(std::move(logline)); 900 | } 901 | 902 | void pop() 903 | { 904 | // Wait for constructor to complete and pull all stores done there to this thread / core. 
905 | while (m_state.load(std::memory_order_acquire) == State::INIT)
906 | std::this_thread::sleep_for(std::chrono::microseconds(50));
907 |
908 | NanoLogLine logline(LogLevel::INFO, nullptr, nullptr, 0);
909 |
910 | while (m_state.load() == State::READY)
911 | {
912 | if (m_buffer_base->try_pop(logline))
913 | m_file_writer.write(logline);
914 | else
915 | std::this_thread::sleep_for(std::chrono::microseconds(50));
916 | }
917 |
918 | // Pop and log all remaining entries
919 | while (m_buffer_base->try_pop(logline))
920 | {
921 | m_file_writer.write(logline);
922 | }
923 | }
924 |
925 | private:
926 | enum class State
927 | {
928 | INIT,
929 | READY,
930 | SHUTDOWN
931 | };
932 |
933 | std::atomic < State > m_state;
934 | std::unique_ptr < BufferBase > m_buffer_base;
935 | FileWriter m_file_writer;
936 | std::thread m_thread;
937 | };
938 |
939 | inline std::atomic < unsigned int > loglevel = { 0 };
940 | inline std::unique_ptr < NanoLogger > nanologger;
941 | inline std::atomic < NanoLogger * > atomic_nanologger;
942 |
943 | inline std::atomic < bool > flush_to_console = { false };
944 |
945 | class Logger
946 | {
947 | public:
948 | /*
949 | * Ensure initialize() is called prior to any log statements.
950 | * log_directory - where to create the logs. For example - "/tmp/"
951 | * log_file_name - root of the file name. For example - "nanolog"
952 | * This will create log files of the form -
953 | * /tmp/nanolog.1.txt
954 | * /tmp/nanolog.2.txt
955 | * etc.
956 | * log_file_roll_size_mb - mega bytes after which we roll to next log file.
957 | */
958 | static void initialize(GuaranteedLogger gl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb, int file_num)
959 | {
960 | nanolog_fs::create_directories(log_directory);
961 | nanologger.reset(new NanoLogger(gl, log_directory, log_file_name, log_file_roll_size_mb, file_num));
962 | atomic_nanologger.store(nanologger.get(), std::memory_order_seq_cst);
963 | }
964 |
965 | static void initialize(NonGuaranteedLogger ngl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb, int file_num)
966 | {
967 | nanologger.reset(new NanoLogger(ngl, log_directory, log_file_name, log_file_roll_size_mb, file_num));
968 | atomic_nanologger.store(nanologger.get(), std::memory_order_seq_cst);
969 | }
970 |
971 | static void set_log_level(LogLevel level)
972 | {
973 | loglevel.store(static_cast<unsigned int>(level), std::memory_order_release);
974 | }
975 |
976 | static void set_flush_to_console(bool v)
977 | {
978 | flush_to_console.store(v);
979 | }
980 |
981 | public:
982 | static bool is_logged(LogLevel level)
983 | {
984 | return static_cast<unsigned int>(level) >= loglevel.load(std::memory_order_relaxed);
985 | }
986 |
987 | bool operator==(NanoLogLine & logline)
988 | {
989 | #ifdef NDEBUG
990 | // nothing to do
991 | #else
992 | if (flush_to_console) {
993 | std::ostringstream ostm{};
994 | logline.stringify(ostm);
995 | std::cout << ostm.str();
996 | }
997 | #endif
998 | atomic_nanologger.load(std::memory_order_acquire)->add(std::move(logline));
999 | return true;
1000 | }
1001 | };
1002 |
1003 | } // namespace nanolog
1004 |
1005 | #if 1
1006 | } // namespace waf
1007 | #endif
1008 |
1009 | #ifdef _WIN32
1010 | #define FILE_NAME(x) (strrchr(x,'\\') ? strrchr(x,'\\') + 1 : x)
1011 | #else
1012 | #define FILE_NAME(x) x
1013 | #endif
1014 |
1015 | #if defined (NDEBUG)
1016 | #define NANO_LOG(LEVEL) ::waf::nanolog::Logger() == ::waf::nanolog::NanoLogLine(LEVEL, "", __func__, __LINE__)
1017 | #else
1018 |
#define NANO_LOG(LEVEL) ::waf::nanolog::Logger() == ::waf::nanolog::NanoLogLine(LEVEL, FILE_NAME(__FILE__), __func__, __LINE__)
1019 | #endif
1020 |
1021 | #define LOG_INFO ::waf::nanolog::Logger::is_logged(::waf::nanolog::LogLevel::INFO) && NANO_LOG(::waf::nanolog::LogLevel::INFO)
1022 | #define LOG_WARN ::waf::nanolog::Logger::is_logged(::waf::nanolog::LogLevel::WARN) && NANO_LOG(::waf::nanolog::LogLevel::WARN)
1023 | #define LOG_CRIT ::waf::nanolog::Logger::is_logged(::waf::nanolog::LogLevel::CRIT) && NANO_LOG(::waf::nanolog::LogLevel::CRIT)
1024 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # A header-only C++ logging library
2 |
3 | # Background
4 | [NanoLog](https://github.com/Iyengar111/NanoLog) is a very compact logging library: the code is short (under a thousand lines), it is faster than spdlog, and it covers typical application needs, so I like it a lot. It still has some shortcomings, though: there is no limit on the number of log files, and after every restart it starts writing from the beginning again, so it needs further polishing. I therefore created a new project, [nanolog](https://github.com/qicosmos/nanolog), which derives from NanoLog, turns the original into a header-only library, and uses some of the newest language features to simplify the original code.
5 |
6 | # Quick example
7 |
8 | #include "nanolog.hpp"
9 |
10 | nanolog::initialize(nanolog::GuaranteedLogger(), "/tmp/", "nanolog", 1);
11 | LOG_INFO << "Sample NanoLog: " << 1 << 2.5 << 'c';
12 |
13 | # How to build
14 |
15 | Because C++17 features are used, a compiler with C++17 support is required: gcc 7.2, or VS2017 15.5.
16 |
17 | # roadmap
18 |
19 | 1. Add an upper limit on the number of log files
20 | 2. On restart, continue writing the log from the previous position
--------------------------------------------------------------------------------
/main.cpp:
--------------------------------------------------------------------------------
1 | #include <iostream>
2 | #include "nanolog.hpp"
3 |
4 | int main() {
5 | nanolog::initialize(nanolog::GuaranteedLogger(), "/tmp/", "nanolog", 1);
6 | LOG_INFO << "Sample NanoLog: " << 1 << 2.5 << 'c';
7 | // Or if you want to use the non guaranteed logger -
8 | // ring_buffer_size_mb - LogLines are pushed into a mpsc ring buffer whose size
9 | // is determined by this parameter.
Since each LogLine is 256 bytes,
10 | // ring_buffer_size = ring_buffer_size_mb * 1024 * 1024 / 256
11 | // In this example ring_buffer_size_mb = 3.
12 | // nanolog::initialize(nanolog::NonGuaranteedLogger(3), "/tmp/", "nanolog", 1);
13 |
14 | //for (int i = 0; i < 50000; ++i)
15 | //{
16 | // LOG_INFO << "Sample NanoLog: " << i;
17 | //}
18 |
19 | return 0;
20 | }
--------------------------------------------------------------------------------
/nanolog.hpp:
--------------------------------------------------------------------------------
1 | #pragma once
2 | #include <cstdint>
3 | #include <cstring>
4 | #include <cstdlib>
5 | #include <memory>
6 | #include <string>
7 | #include <algorithm>
8 | #include <chrono>
9 | #include <ctime>
10 | #include <iomanip>
11 | #include <thread>
12 | #include <tuple>
13 | #include <type_traits>
14 | #include <atomic>
15 | #include <queue>
16 | #include <fstream>
17 |
18 | namespace nanolog
19 | {
20 | enum class LogLevel : uint8_t { INFO, WARN, CRIT };
21 |
22 | /*
23 | * Non guaranteed logging. Uses a ring buffer to hold log lines.
24 | * When the ring gets full, the previous log line in the slot will be dropped.
25 | * Does not block producer even if the ring buffer is full.
26 | * ring_buffer_size_mb - LogLines are pushed into a mpsc ring buffer whose size
27 | * is determined by this parameter. Since each LogLine is 256 bytes,
28 | * ring_buffer_size = ring_buffer_size_mb * 1024 * 1024 / 256
29 | */
30 | struct NonGuaranteedLogger
31 | {
32 | NonGuaranteedLogger(uint32_t ring_buffer_size_mb_) : ring_buffer_size_mb(ring_buffer_size_mb_) {}
33 | uint32_t ring_buffer_size_mb;
34 | };
35 |
36 | /*
37 | * Provides a guarantee log lines will not be dropped.
38 | */
39 | struct GuaranteedLogger
40 | {
41 | };
42 |
43 | namespace
44 | {
45 |
46 | /* Returns microseconds since epoch */
47 | uint64_t timestamp_now()
48 | {
49 | return std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now().time_since_epoch()).count();
50 | }
51 |
52 | /* I want [2016-10-13 00:01:23.528514] */
53 | void format_timestamp(std::ostream & os, uint64_t timestamp)
54 | {
55 | auto n = std::chrono::system_clock::now();
56 | auto m = n.time_since_epoch();
57 | auto diff = std::chrono::duration_cast<std::chrono::microseconds>(m).count();
58 | auto const usecs = diff % 1000000;
59 |
60 | std::time_t t = std::chrono::system_clock::to_time_t(n);
61 | os << '[' << std::put_time(std::localtime(&t), "%Y-%m-%d %H:%M:%S") << "." << std::setw(6) << std::setfill('0') << usecs << ']';
62 | }
63 |
64 | std::thread::id this_thread_id()
65 | {
66 | static thread_local const std::thread::id id = std::this_thread::get_id();
67 | return id;
68 | }
69 |
70 | template < typename T, typename Tuple >
71 | struct TupleIndex;
72 |
73 | template < typename T, typename ... Types >
74 | struct TupleIndex < T, std::tuple < T, Types... > >
75 | {
76 | static constexpr const std::size_t value = 0;
77 | };
78 |
79 | template < typename T, typename U, typename ... Types >
80 | struct TupleIndex < T, std::tuple < U, Types... > >
81 | {
82 | static constexpr const std::size_t value = 1 + TupleIndex < T, std::tuple < Types...
> >::value;
83 | };
84 |
85 | template < typename T >
86 | struct is_c_string;
87 |
88 | template < typename T >
89 | struct is_c_string : std::integral_constant < bool, std::is_same_v<char *, std::decay_t<T>> || std::is_same_v<const char *, std::decay_t<T>> >
90 | {
91 | };
92 |
93 | template < typename T >
94 | constexpr bool is_c_string_v = is_c_string<T>::value;
95 | } // anonymous namespace
96 |
97 | inline char const * to_string(LogLevel loglevel)
98 | {
99 | switch (loglevel)
100 | {
101 | case LogLevel::INFO:
102 | return "INFO";
103 | case LogLevel::WARN:
104 | return "WARN";
105 | case LogLevel::CRIT:
106 | return "CRIT";
107 | }
108 | return "XXXX";
109 | }
110 |
111 | class NanoLogLine
112 | {
113 | public:
114 | typedef std::tuple < char, uint32_t, uint64_t, int32_t, int64_t, double, const char*, char * > SupportedTypes;
115 | NanoLogLine(LogLevel level, char const * file, char const * function, uint32_t line) : m_bytes_used(0)
116 | , m_buffer_size(sizeof(m_stack_buffer))
117 | {
118 | encode0(timestamp_now(), this_thread_id(), file, function, line, level);
119 | }
120 |
121 | ~NanoLogLine() = default;
122 |
123 | NanoLogLine(NanoLogLine &&) = default;
124 | NanoLogLine& operator=(NanoLogLine &&) = default;
125 |
126 | void stringify(std::ostream & os)
127 | {
128 | char * b = !m_heap_buffer ?
m_stack_buffer : m_heap_buffer.get();
129 | char const * const end = b + m_bytes_used;
130 | uint64_t timestamp = *reinterpret_cast<uint64_t *>(b); b += sizeof(uint64_t);
131 | std::thread::id threadid = *reinterpret_cast<std::thread::id *>(b); b += sizeof(std::thread::id);
132 | const char* file = *reinterpret_cast<const char **>(b); b += sizeof(const char*);
133 | const char* function = *reinterpret_cast<const char **>(b); b += sizeof(const char*);
134 | uint32_t line = *reinterpret_cast<uint32_t *>(b); b += sizeof(uint32_t);
135 | LogLevel loglevel = *reinterpret_cast<LogLevel *>(b); b += sizeof(LogLevel);
136 |
137 | format_timestamp(os, timestamp);
138 |
139 | os << '[' << to_string(loglevel) << ']'
140 | << '[' << threadid << ']'
141 | << '[' << file << ':' << function << ':' << line << "] ";
142 |
143 | stringify(os, b, end);
144 |
145 | os << std::endl;
146 |
147 | if (loglevel >= LogLevel::CRIT)
148 | os.flush();
149 | }
150 |
151 | template < typename Arg >
152 | NanoLogLine& operator<<(Arg arg)
153 | {
154 | if constexpr(std::is_arithmetic_v<Arg>) {
155 | encode(arg, TupleIndex < Arg, SupportedTypes >::value);
156 | }
157 | else if constexpr(is_c_string_v<Arg>) {
158 | encode_c_string(arg);
159 | }
160 |
161 | return *this;
162 | }
163 |
164 | private:
165 |
166 | char * buffer()
167 | {
168 | return !m_heap_buffer ?
&m_stack_buffer[m_bytes_used] : &(m_heap_buffer.get())[m_bytes_used];
169 | }
170 |
171 | void resize_buffer_if_needed(size_t additional_bytes)
172 | {
173 | size_t const required_size = m_bytes_used + additional_bytes;
174 |
175 | if (required_size <= m_buffer_size)
176 | return;
177 |
178 | if (!m_heap_buffer)
179 | {
180 | m_buffer_size = std::max(static_cast<size_t>(512), required_size);
181 | m_heap_buffer.reset(new char[m_buffer_size]);
182 | memcpy(m_heap_buffer.get(), m_stack_buffer, m_bytes_used);
183 | return;
184 | }
185 | else
186 | {
187 | m_buffer_size = std::max(static_cast<size_t>(2 * m_buffer_size), required_size);
188 | std::unique_ptr < char[] > new_heap_buffer(new char[m_buffer_size]);
189 | memcpy(new_heap_buffer.get(), m_heap_buffer.get(), m_bytes_used);
190 | m_heap_buffer.swap(new_heap_buffer);
191 | }
192 | }
193 |
194 | template < typename Arg >
195 | void encode_c_string(Arg arg)
196 | {
197 | if (arg != nullptr)
198 | encode_c_string(arg, strlen(arg));
199 | }
200 |
201 | void encode_c_string(char const * arg, size_t length)
202 | {
203 | if (length == 0)
204 | return;
205 |
206 | resize_buffer_if_needed(1 + length + 1);
207 | char * b = buffer();
208 | auto type_id = TupleIndex < char *, SupportedTypes >::value;
209 | *reinterpret_cast<uint8_t *>(b++) = static_cast<uint8_t>(type_id);
210 | memcpy(b, arg, length + 1);
211 | m_bytes_used += 1 + length + 1;
212 | }
213 |
214 | template < typename... Arg >
215 | void encode0(Arg...
arg)
216 | {
217 | ((*reinterpret_cast<Arg *>(buffer()) = arg, m_bytes_used += sizeof(Arg)), ...);
218 | }
219 |
220 | template < typename Arg >
221 | void encode(Arg arg, uint8_t type_id)
222 | {
223 | resize_buffer_if_needed(sizeof(Arg) + sizeof(uint8_t));
224 | encode0(type_id, arg);
225 | }
226 |
227 | template < typename Arg >
228 | char * decode(std::ostream & os, char * b, Arg * dummy)
229 | {
230 | if constexpr(std::is_arithmetic_v<Arg>) {
231 | Arg arg = *reinterpret_cast<Arg *>(b);
232 | os << arg;
233 | return b + sizeof(Arg);
234 | }
235 | else if constexpr(std::is_same_v<Arg, const char *>) {
236 | const char* s = *reinterpret_cast<const char **>(b);
237 | os << s;
238 | return b + sizeof(const char*);
239 | }
240 | else if constexpr(std::is_same_v<Arg, char *>) {
241 | while (*b != '\0')
242 | {
243 | os << *b;
244 | ++b;
245 | }
246 | return ++b;
247 | }
248 | }
249 |
250 | template < size_t N >
251 | using ele_type_p = std::tuple_element_t < N, SupportedTypes > *;
252 |
253 | void stringify(std::ostream & os, char * start, char const * const end)
254 | {
255 | if (start == end)
256 | return;
257 |
258 | int type_id = static_cast<int>(*start); start++;
259 |
260 | switch (type_id)
261 | {
262 | case 0:
263 | stringify(os, decode(os, start, static_cast<ele_type_p<0>>(nullptr)), end);
264 | return;
265 | case 1:
266 | stringify(os, decode(os, start, static_cast<ele_type_p<1>>(nullptr)), end);
267 | return;
268 | case 2:
269 | stringify(os, decode(os, start, static_cast<ele_type_p<2>>(nullptr)), end);
270 | return;
271 | case 3:
272 | stringify(os, decode(os, start, static_cast<ele_type_p<3>>(nullptr)), end);
273 | return;
274 | case 4:
275 | stringify(os, decode(os, start, static_cast<ele_type_p<4>>(nullptr)), end);
276 | return;
277 | case 5:
278 | stringify(os, decode(os, start, static_cast<ele_type_p<5>>(nullptr)), end);
279 | return;
280 | case 6:
281 | stringify(os, decode(os, start, static_cast<ele_type_p<6>>(nullptr)), end);
282 | return;
283 | case 7:
284 | stringify(os, decode(os, start, static_cast<ele_type_p<7>>(nullptr)), end);
285 | return;
286 | }
287 | }
288 |
289 | private:
290 | size_t m_bytes_used;
291 | size_t m_buffer_size;
292 |
std::unique_ptr < char[] > m_heap_buffer;
293 | char m_stack_buffer[256 - 2 * sizeof(size_t) - sizeof(decltype(m_heap_buffer)) - 8 /* Reserved */];
294 | };
295 |
296 | struct BufferBase
297 | {
298 | virtual ~BufferBase() = default;
299 | virtual void push(NanoLogLine && logline) = 0;
300 | virtual bool try_pop(NanoLogLine & logline) = 0;
301 | };
302 |
303 | struct SpinLock
304 | {
305 | SpinLock(std::atomic_flag & flag) : m_flag(flag)
306 | {
307 | while (m_flag.test_and_set(std::memory_order_acquire));
308 | }
309 |
310 | ~SpinLock()
311 | {
312 | m_flag.clear(std::memory_order_release);
313 | }
314 |
315 | private:
316 | std::atomic_flag & m_flag;
317 | };
318 |
319 | /* Multi Producer Single Consumer Ring Buffer */
320 | class RingBuffer : public BufferBase
321 | {
322 | public:
323 | struct alignas(64) Item
324 | {
325 | Item()
326 | : flag{ ATOMIC_FLAG_INIT }
327 | , written(0)
328 | , logline(LogLevel::INFO, nullptr, nullptr, 0)
329 | {
330 | }
331 |
332 | std::atomic_flag flag;
333 | char written;
334 | char padding[256 - sizeof(std::atomic_flag) - sizeof(char) - sizeof(NanoLogLine)];
335 | NanoLogLine logline;
336 | };
337 |
338 | RingBuffer(size_t const size)
339 | : m_size(size)
340 | , m_ring(static_cast<Item *>(std::malloc(size * sizeof(Item))))
341 | , m_write_index(0)
342 | , m_read_index(0)
343 | {
344 | for (size_t i = 0; i < m_size; ++i)
345 | {
346 | new (&m_ring[i]) Item();
347 | }
348 | static_assert(sizeof(Item) == 256, "Unexpected size != 256");
349 | }
350 |
351 | ~RingBuffer()
352 | {
353 | for (size_t i = 0; i < m_size; ++i)
354 | {
355 | m_ring[i].~Item();
356 | }
357 | std::free(m_ring);
358 | }
359 |
360 | void push(NanoLogLine && logline) override
361 | {
362 | unsigned int write_index = m_write_index.fetch_add(1, std::memory_order_relaxed) % m_size;
363 | Item & item = m_ring[write_index];
364 | SpinLock spinlock(item.flag);
365 | item.logline = std::move(logline);
366 | item.written = 1;
367 | }
368 |
369 | bool try_pop(NanoLogLine &
logline) override
370 | {
371 | Item & item = m_ring[m_read_index % m_size];
372 | SpinLock spinlock(item.flag);
373 | if (item.written == 1)
374 | {
375 | logline = std::move(item.logline);
376 | item.written = 0;
377 | ++m_read_index;
378 | return true;
379 | }
380 | return false;
381 | }
382 |
383 | RingBuffer(RingBuffer const &) = delete;
384 | RingBuffer& operator=(RingBuffer const &) = delete;
385 |
386 | private:
387 | size_t const m_size;
388 | Item * m_ring;
389 | std::atomic < unsigned int > m_write_index;
390 | char pad[64];
391 | unsigned int m_read_index;
392 | };
393 |
394 |
395 | class Buffer
396 | {
397 | public:
398 | struct Item
399 | {
400 | Item(NanoLogLine && nanologline) : logline(std::move(nanologline)) {}
401 | char padding[256 - sizeof(NanoLogLine)];
402 | NanoLogLine logline;
403 | };
404 |
405 | static constexpr const size_t size = 32768; // 8MB. Helps reduce memory fragmentation
406 |
407 | Buffer() : m_buffer(static_cast<Item *>(std::malloc(size * sizeof(Item))))
408 | {
409 | for (size_t i = 0; i <= size; ++i)
410 | {
411 | m_write_state[i].store(0, std::memory_order_relaxed);
412 | }
413 | static_assert(sizeof(Item) == 256, "Unexpected size != 256");
414 | }
415 |
416 | ~Buffer()
417 | {
418 | unsigned int write_count = m_write_state[size].load();
419 | for (size_t i = 0; i < write_count; ++i)
420 | {
421 | m_buffer[i].~Item();
422 | }
423 | std::free(m_buffer);
424 | }
425 |
426 | // Returns true if we need to switch to next buffer
427 | bool push(NanoLogLine && logline, unsigned int const write_index)
428 | {
429 | new (&m_buffer[write_index]) Item(std::move(logline));
430 | m_write_state[write_index].store(1, std::memory_order_release);
431 | return m_write_state[size].fetch_add(1, std::memory_order_acquire) + 1 == size;
432 | }
433 |
434 | bool try_pop(NanoLogLine & logline, unsigned int const read_index)
435 | {
436 | if (m_write_state[read_index].load(std::memory_order_acquire))
437 | {
438 | Item & item = m_buffer[read_index];
439 |
logline = std::move(item.logline); 440 | return true; 441 | } 442 | return false; 443 | } 444 | 445 | Buffer(Buffer const &) = delete; 446 | Buffer& operator=(Buffer const &) = delete; 447 | 448 | private: 449 | Item * m_buffer; 450 | std::atomic < unsigned int > m_write_state[size + 1]; 451 | }; 452 | 453 | class QueueBuffer : public BufferBase 454 | { 455 | public: 456 | QueueBuffer(QueueBuffer const &) = delete; 457 | QueueBuffer& operator=(QueueBuffer const &) = delete; 458 | 459 | QueueBuffer() : m_current_read_buffer{ nullptr } 460 | , m_write_index(0) 461 | , m_flag{ ATOMIC_FLAG_INIT } 462 | , m_read_index(0) 463 | { 464 | setup_next_write_buffer(); 465 | } 466 | 467 | void push(NanoLogLine && logline) override 468 | { 469 | unsigned int write_index = m_write_index.fetch_add(1, std::memory_order_relaxed); 470 | if (write_index < Buffer::size) 471 | { 472 | if (m_current_write_buffer.load(std::memory_order_acquire)->push(std::move(logline), write_index)) 473 | { 474 | setup_next_write_buffer(); 475 | } 476 | } 477 | else 478 | { 479 | while (m_write_index.load(std::memory_order_acquire) >= Buffer::size); 480 | push(std::move(logline)); 481 | } 482 | } 483 | 484 | bool try_pop(NanoLogLine & logline) override 485 | { 486 | if (m_current_read_buffer == nullptr) 487 | m_current_read_buffer = get_next_read_buffer(); 488 | 489 | Buffer * read_buffer = m_current_read_buffer; 490 | 491 | if (read_buffer == nullptr) 492 | return false; 493 | 494 | if (bool success = read_buffer->try_pop(logline, m_read_index)) 495 | { 496 | m_read_index++; 497 | if (m_read_index == Buffer::size) 498 | { 499 | m_read_index = 0; 500 | m_current_read_buffer = nullptr; 501 | SpinLock spinlock(m_flag); 502 | m_buffers.pop(); 503 | } 504 | return true; 505 | } 506 | 507 | return false; 508 | } 509 | 510 | private: 511 | void setup_next_write_buffer() 512 | { 513 | std::unique_ptr < Buffer > next_write_buffer(new Buffer()); 514 | m_current_write_buffer.store(next_write_buffer.get(), 
std::memory_order_release); 515 | SpinLock spinlock(m_flag); 516 | m_buffers.push(std::move(next_write_buffer)); 517 | m_write_index.store(0, std::memory_order_relaxed); 518 | } 519 | 520 | Buffer * get_next_read_buffer() 521 | { 522 | SpinLock spinlock(m_flag); 523 | return m_buffers.empty() ? nullptr : m_buffers.front().get(); 524 | } 525 | 526 | private: 527 | std::queue < std::unique_ptr < Buffer > > m_buffers; 528 | std::atomic < Buffer * > m_current_write_buffer; 529 | Buffer * m_current_read_buffer; 530 | std::atomic < unsigned int > m_write_index; 531 | std::atomic_flag m_flag; 532 | unsigned int m_read_index; 533 | }; 534 | 535 | class FileWriter 536 | { 537 | public: 538 | FileWriter(std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb) 539 | : m_log_file_roll_size_bytes(log_file_roll_size_mb * 1024 * 1024) 540 | , m_name(log_directory + log_file_name) 541 | { 542 | roll_file(); 543 | } 544 | 545 | void write(NanoLogLine & logline) 546 | { 547 | auto pos = m_os->tellp(); 548 | logline.stringify(*m_os); 549 | m_bytes_written += m_os->tellp() - pos; 550 | if (m_bytes_written > m_log_file_roll_size_bytes) 551 | { 552 | roll_file(); 553 | } 554 | } 555 | 556 | private: 557 | void roll_file() 558 | { 559 | if (m_os) 560 | { 561 | m_os->flush(); 562 | m_os->close(); 563 | } 564 | 565 | m_bytes_written = 0; 566 | m_os.reset(new std::ofstream()); 567 | // TODO Optimize this part. Does it even matter ? 
568 | std::string log_file_name = m_name; 569 | log_file_name.append("."); 570 | log_file_name.append(std::to_string(++m_file_number)); 571 | log_file_name.append(".txt"); 572 | m_os->open(log_file_name, std::ofstream::out | std::ofstream::trunc); 573 | } 574 | 575 | private: 576 | uint32_t m_file_number = 0; 577 | std::streamoff m_bytes_written = 0; 578 | uint32_t const m_log_file_roll_size_bytes; 579 | std::string const m_name; 580 | std::unique_ptr < std::ofstream > m_os; 581 | }; 582 | 583 | class NanoLogger 584 | { 585 | public: 586 | NanoLogger(NonGuaranteedLogger ngl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb) 587 | : m_state(State::INIT) 588 | , m_buffer_base(new RingBuffer(std::max(1u, ngl.ring_buffer_size_mb) * 1024 * 4)) 589 | , m_file_writer(log_directory, log_file_name, std::max(1u, log_file_roll_size_mb)) 590 | , m_thread(&NanoLogger::pop, this) 591 | { 592 | m_state.store(State::READY, std::memory_order_release); 593 | } 594 | 595 | NanoLogger(GuaranteedLogger gl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb) 596 | : m_state(State::INIT) 597 | , m_buffer_base(new QueueBuffer()) 598 | , m_file_writer(log_directory, log_file_name, std::max(1u, log_file_roll_size_mb)) 599 | , m_thread(&NanoLogger::pop, this) 600 | { 601 | m_state.store(State::READY, std::memory_order_release); 602 | } 603 | 604 | ~NanoLogger() 605 | { 606 | m_state.store(State::SHUTDOWN); 607 | m_thread.join(); 608 | } 609 | 610 | void add(NanoLogLine && logline) 611 | { 612 | m_buffer_base->push(std::move(logline)); 613 | } 614 | 615 | void pop() 616 | { 617 | // Wait for constructor to complete and pull all stores done there to this thread / core. 
618 | while (m_state.load(std::memory_order_acquire) == State::INIT)
619 | std::this_thread::sleep_for(std::chrono::microseconds(50));
620 |
621 | NanoLogLine logline(LogLevel::INFO, nullptr, nullptr, 0);
622 |
623 | while (m_state.load() == State::READY)
624 | {
625 | if (m_buffer_base->try_pop(logline))
626 | m_file_writer.write(logline);
627 | else
628 | std::this_thread::sleep_for(std::chrono::microseconds(50));
629 | }
630 |
631 | // Pop and log all remaining entries
632 | while (m_buffer_base->try_pop(logline))
633 | {
634 | m_file_writer.write(logline);
635 | }
636 | }
637 |
638 | private:
639 | enum class State
640 | {
641 | INIT,
642 | READY,
643 | SHUTDOWN
644 | };
645 |
646 | std::atomic < State > m_state;
647 | std::unique_ptr < BufferBase > m_buffer_base;
648 | FileWriter m_file_writer;
649 | std::thread m_thread;
650 | };
651 |
652 | inline std::unique_ptr < NanoLogger > nanologger;
653 |
654 | inline std::atomic < NanoLogger * > atomic_nanologger;
655 |
656 | struct NanoLog
657 | {
658 | /*
659 | * Ideally this should have been operator+=
660 | * Could not get that to compile, so here we are...
661 | */
662 | bool operator==(NanoLogLine & logline)
663 | {
664 | atomic_nanologger.load(std::memory_order_acquire)->add(std::move(logline));
665 | return true;
666 | }
667 | };
668 |
669 | inline std::atomic < unsigned int > loglevel = { 0 };
670 |
671 | inline void set_log_level(LogLevel level)
672 | {
673 | loglevel.store(static_cast<unsigned int>(level), std::memory_order_release);
674 | }
675 |
676 | inline bool is_logged(LogLevel level)
677 | {
678 | return static_cast<unsigned int>(level) >= loglevel.load(std::memory_order_relaxed);
679 | }
680 |
681 | //void set_log_level(LogLevel level);
682 | //
683 | //bool is_logged(LogLevel level);
684 |
685 | /*
686 | * Ensure initialize() is called prior to any log statements.
687 | * log_directory - where to create the logs. For example - "/tmp/"
688 | * log_file_name - root of the file name.
For example - "nanolog" 689 | * This will create log files of the form - 690 | * /tmp/nanolog.1.txt 691 | * /tmp/nanolog.2.txt 692 | * etc. 693 | * log_file_roll_size_mb - mega bytes after which we roll to next log file. 694 | */ 695 | // void initialize(GuaranteedLogger gl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb); 696 | // void initialize(NonGuaranteedLogger ngl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb); 697 | inline void initialize(NonGuaranteedLogger ngl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb) 698 | { 699 | nanologger.reset(new NanoLogger(ngl, log_directory, log_file_name, log_file_roll_size_mb)); 700 | atomic_nanologger.store(nanologger.get(), std::memory_order_seq_cst); 701 | } 702 | 703 | inline void initialize(GuaranteedLogger gl, std::string const & log_directory, std::string const & log_file_name, uint32_t log_file_roll_size_mb) 704 | { 705 | nanologger.reset(new NanoLogger(gl, log_directory, log_file_name, log_file_roll_size_mb)); 706 | atomic_nanologger.store(nanologger.get(), std::memory_order_seq_cst); 707 | } 708 | } // namespace nanolog 709 | 710 | #define NANO_LOG(LEVEL) nanolog::NanoLog() == nanolog::NanoLogLine(LEVEL, __FILE__, __func__, __LINE__) 711 | #define LOG_INFO nanolog::is_logged(nanolog::LogLevel::INFO) && NANO_LOG(nanolog::LogLevel::INFO) 712 | #define LOG_WARN nanolog::is_logged(nanolog::LogLevel::WARN) && NANO_LOG(nanolog::LogLevel::WARN) 713 | #define LOG_CRIT nanolog::is_logged(nanolog::LogLevel::CRIT) && NANO_LOG(nanolog::LogLevel::CRIT) 714 | -------------------------------------------------------------------------------- /nanolog.vcxproj: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Debug 6 | Win32 7 | 8 | 9 | Release 10 | Win32 11 | 12 | 13 | Debug 14 | x64 15 | 16 | 17 | Release 18 | x64 19 
| 20 | 21 | 22 | 15.0 23 | {FF5079D5-99E7-492D-A341-61E32084294D} 24 | nanolog 25 | 10.0.16299.0 26 | 27 | 28 | 29 | Application 30 | true 31 | v141 32 | MultiByte 33 | 34 | 35 | Application 36 | false 37 | v141 38 | true 39 | MultiByte 40 | 41 | 42 | Application 43 | true 44 | v141 45 | MultiByte 46 | 47 | 48 | Application 49 | false 50 | v141 51 | true 52 | MultiByte 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | Level3 76 | Disabled 77 | true 78 | true 79 | stdcpp17 80 | 4996 81 | 82 | 83 | 84 | 85 | Level3 86 | Disabled 87 | true 88 | true 89 | 90 | 91 | 92 | 93 | Level3 94 | MaxSpeed 95 | true 96 | true 97 | true 98 | true 99 | 100 | 101 | true 102 | true 103 | 104 | 105 | 106 | 107 | Level3 108 | MaxSpeed 109 | true 110 | true 111 | true 112 | true 113 | 114 | 115 | true 116 | true 117 | 118 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | -------------------------------------------------------------------------------- /nanolog.vcxproj.filters: -------------------------------------------------------------------------------- 1 |  2 | 3 | 4 | 5 | {4FC737F1-C7A5-4376-A066-2A32D752A2FF} 6 | cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx 7 | 8 | 9 | {93995380-89BD-4b04-88EB-625FBE52EBFB} 10 | h;hh;hpp;hxx;hm;inl;inc;xsd 11 | 12 | 13 | {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} 14 | rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms 15 | 16 | 17 | 18 | 19 | 源文件 20 | 21 | 22 | 23 | 24 | 头文件 25 | 26 | 27 | --------------------------------------------------------------------------------