├── LICENSE
├── README.md
├── _config.yml
├── bin
│   ├── pdns-cof-server.py
│   ├── pdns-import-cof.py
│   ├── pdns-import.py
│   └── pdns-ingestion.py
├── etc
│   ├── analyzer.conf.sample
│   ├── kvrocks.conf
│   ├── records-type.json
│   └── redis.conf
├── install_server.sh
├── install_server_kvrocks.sh
├── launch_server.sh
└── requirements
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU AFFERO GENERAL PUBLIC LICENSE
2 | Version 3, 19 November 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc.
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU Affero General Public License is a free, copyleft license for
11 | software and other kinds of works, specifically designed to ensure
12 | cooperation with the community in the case of network server software.
13 |
14 | The licenses for most software and other practical works are designed
15 | to take away your freedom to share and change the works. By contrast,
16 | our General Public Licenses are intended to guarantee your freedom to
17 | share and change all versions of a program--to make sure it remains free
18 | software for all its users.
19 |
20 | When we speak of free software, we are referring to freedom, not
21 | price. Our General Public Licenses are designed to make sure that you
22 | have the freedom to distribute copies of free software (and charge for
23 | them if you wish), that you receive source code or can get it if you
24 | want it, that you can change the software or use pieces of it in new
25 | free programs, and that you know you can do these things.
26 |
27 | Developers that use our General Public Licenses protect your rights
28 | with two steps: (1) assert copyright on the software, and (2) offer
29 | you this License which gives you legal permission to copy, distribute
30 | and/or modify the software.
31 |
32 | A secondary benefit of defending all users' freedom is that
33 | improvements made in alternate versions of the program, if they
34 | receive widespread use, become available for other developers to
35 | incorporate. Many developers of free software are heartened and
36 | encouraged by the resulting cooperation. However, in the case of
37 | software used on network servers, this result may fail to come about.
38 | The GNU General Public License permits making a modified version and
39 | letting the public access it on a server without ever releasing its
40 | source code to the public.
41 |
42 | The GNU Affero General Public License is designed specifically to
43 | ensure that, in such cases, the modified source code becomes available
44 | to the community. It requires the operator of a network server to
45 | provide the source code of the modified version running there to the
46 | users of that server. Therefore, public use of a modified version, on
47 | a publicly accessible server, gives the public access to the source
48 | code of the modified version.
49 |
50 | An older license, called the Affero General Public License and
51 | published by Affero, was designed to accomplish similar goals. This is
52 | a different license, not a version of the Affero GPL, but Affero has
53 | released a new version of the Affero GPL which permits relicensing under
54 | this license.
55 |
56 | The precise terms and conditions for copying, distribution and
57 | modification follow.
58 |
59 | TERMS AND CONDITIONS
60 |
61 | 0. Definitions.
62 |
63 | "This License" refers to version 3 of the GNU Affero General Public License.
64 |
65 | "Copyright" also means copyright-like laws that apply to other kinds of
66 | works, such as semiconductor masks.
67 |
68 | "The Program" refers to any copyrightable work licensed under this
69 | License. Each licensee is addressed as "you". "Licensees" and
70 | "recipients" may be individuals or organizations.
71 |
72 | To "modify" a work means to copy from or adapt all or part of the work
73 | in a fashion requiring copyright permission, other than the making of an
74 | exact copy. The resulting work is called a "modified version" of the
75 | earlier work or a work "based on" the earlier work.
76 |
77 | A "covered work" means either the unmodified Program or a work based
78 | on the Program.
79 |
80 | To "propagate" a work means to do anything with it that, without
81 | permission, would make you directly or secondarily liable for
82 | infringement under applicable copyright law, except executing it on a
83 | computer or modifying a private copy. Propagation includes copying,
84 | distribution (with or without modification), making available to the
85 | public, and in some countries other activities as well.
86 |
87 | To "convey" a work means any kind of propagation that enables other
88 | parties to make or receive copies. Mere interaction with a user through
89 | a computer network, with no transfer of a copy, is not conveying.
90 |
91 | An interactive user interface displays "Appropriate Legal Notices"
92 | to the extent that it includes a convenient and prominently visible
93 | feature that (1) displays an appropriate copyright notice, and (2)
94 | tells the user that there is no warranty for the work (except to the
95 | extent that warranties are provided), that licensees may convey the
96 | work under this License, and how to view a copy of this License. If
97 | the interface presents a list of user commands or options, such as a
98 | menu, a prominent item in the list meets this criterion.
99 |
100 | 1. Source Code.
101 |
102 | The "source code" for a work means the preferred form of the work
103 | for making modifications to it. "Object code" means any non-source
104 | form of a work.
105 |
106 | A "Standard Interface" means an interface that either is an official
107 | standard defined by a recognized standards body, or, in the case of
108 | interfaces specified for a particular programming language, one that
109 | is widely used among developers working in that language.
110 |
111 | The "System Libraries" of an executable work include anything, other
112 | than the work as a whole, that (a) is included in the normal form of
113 | packaging a Major Component, but which is not part of that Major
114 | Component, and (b) serves only to enable use of the work with that
115 | Major Component, or to implement a Standard Interface for which an
116 | implementation is available to the public in source code form. A
117 | "Major Component", in this context, means a major essential component
118 | (kernel, window system, and so on) of the specific operating system
119 | (if any) on which the executable work runs, or a compiler used to
120 | produce the work, or an object code interpreter used to run it.
121 |
122 | The "Corresponding Source" for a work in object code form means all
123 | the source code needed to generate, install, and (for an executable
124 | work) run the object code and to modify the work, including scripts to
125 | control those activities. However, it does not include the work's
126 | System Libraries, or general-purpose tools or generally available free
127 | programs which are used unmodified in performing those activities but
128 | which are not part of the work. For example, Corresponding Source
129 | includes interface definition files associated with source files for
130 | the work, and the source code for shared libraries and dynamically
131 | linked subprograms that the work is specifically designed to require,
132 | such as by intimate data communication or control flow between those
133 | subprograms and other parts of the work.
134 |
135 | The Corresponding Source need not include anything that users
136 | can regenerate automatically from other parts of the Corresponding
137 | Source.
138 |
139 | The Corresponding Source for a work in source code form is that
140 | same work.
141 |
142 | 2. Basic Permissions.
143 |
144 | All rights granted under this License are granted for the term of
145 | copyright on the Program, and are irrevocable provided the stated
146 | conditions are met. This License explicitly affirms your unlimited
147 | permission to run the unmodified Program. The output from running a
148 | covered work is covered by this License only if the output, given its
149 | content, constitutes a covered work. This License acknowledges your
150 | rights of fair use or other equivalent, as provided by copyright law.
151 |
152 | You may make, run and propagate covered works that you do not
153 | convey, without conditions so long as your license otherwise remains
154 | in force. You may convey covered works to others for the sole purpose
155 | of having them make modifications exclusively for you, or provide you
156 | with facilities for running those works, provided that you comply with
157 | the terms of this License in conveying all material for which you do
158 | not control copyright. Those thus making or running the covered works
159 | for you must do so exclusively on your behalf, under your direction
160 | and control, on terms that prohibit them from making any copies of
161 | your copyrighted material outside their relationship with you.
162 |
163 | Conveying under any other circumstances is permitted solely under
164 | the conditions stated below. Sublicensing is not allowed; section 10
165 | makes it unnecessary.
166 |
167 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
168 |
169 | No covered work shall be deemed part of an effective technological
170 | measure under any applicable law fulfilling obligations under article
171 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
172 | similar laws prohibiting or restricting circumvention of such
173 | measures.
174 |
175 | When you convey a covered work, you waive any legal power to forbid
176 | circumvention of technological measures to the extent such circumvention
177 | is effected by exercising rights under this License with respect to
178 | the covered work, and you disclaim any intention to limit operation or
179 | modification of the work as a means of enforcing, against the work's
180 | users, your or third parties' legal rights to forbid circumvention of
181 | technological measures.
182 |
183 | 4. Conveying Verbatim Copies.
184 |
185 | You may convey verbatim copies of the Program's source code as you
186 | receive it, in any medium, provided that you conspicuously and
187 | appropriately publish on each copy an appropriate copyright notice;
188 | keep intact all notices stating that this License and any
189 | non-permissive terms added in accord with section 7 apply to the code;
190 | keep intact all notices of the absence of any warranty; and give all
191 | recipients a copy of this License along with the Program.
192 |
193 | You may charge any price or no price for each copy that you convey,
194 | and you may offer support or warranty protection for a fee.
195 |
196 | 5. Conveying Modified Source Versions.
197 |
198 | You may convey a work based on the Program, or the modifications to
199 | produce it from the Program, in the form of source code under the
200 | terms of section 4, provided that you also meet all of these conditions:
201 |
202 | a) The work must carry prominent notices stating that you modified
203 | it, and giving a relevant date.
204 |
205 | b) The work must carry prominent notices stating that it is
206 | released under this License and any conditions added under section
207 | 7. This requirement modifies the requirement in section 4 to
208 | "keep intact all notices".
209 |
210 | c) You must license the entire work, as a whole, under this
211 | License to anyone who comes into possession of a copy. This
212 | License will therefore apply, along with any applicable section 7
213 | additional terms, to the whole of the work, and all its parts,
214 | regardless of how they are packaged. This License gives no
215 | permission to license the work in any other way, but it does not
216 | invalidate such permission if you have separately received it.
217 |
218 | d) If the work has interactive user interfaces, each must display
219 | Appropriate Legal Notices; however, if the Program has interactive
220 | interfaces that do not display Appropriate Legal Notices, your
221 | work need not make them do so.
222 |
223 | A compilation of a covered work with other separate and independent
224 | works, which are not by their nature extensions of the covered work,
225 | and which are not combined with it such as to form a larger program,
226 | in or on a volume of a storage or distribution medium, is called an
227 | "aggregate" if the compilation and its resulting copyright are not
228 | used to limit the access or legal rights of the compilation's users
229 | beyond what the individual works permit. Inclusion of a covered work
230 | in an aggregate does not cause this License to apply to the other
231 | parts of the aggregate.
232 |
233 | 6. Conveying Non-Source Forms.
234 |
235 | You may convey a covered work in object code form under the terms
236 | of sections 4 and 5, provided that you also convey the
237 | machine-readable Corresponding Source under the terms of this License,
238 | in one of these ways:
239 |
240 | a) Convey the object code in, or embodied in, a physical product
241 | (including a physical distribution medium), accompanied by the
242 | Corresponding Source fixed on a durable physical medium
243 | customarily used for software interchange.
244 |
245 | b) Convey the object code in, or embodied in, a physical product
246 | (including a physical distribution medium), accompanied by a
247 | written offer, valid for at least three years and valid for as
248 | long as you offer spare parts or customer support for that product
249 | model, to give anyone who possesses the object code either (1) a
250 | copy of the Corresponding Source for all the software in the
251 | product that is covered by this License, on a durable physical
252 | medium customarily used for software interchange, for a price no
253 | more than your reasonable cost of physically performing this
254 | conveying of source, or (2) access to copy the
255 | Corresponding Source from a network server at no charge.
256 |
257 | c) Convey individual copies of the object code with a copy of the
258 | written offer to provide the Corresponding Source. This
259 | alternative is allowed only occasionally and noncommercially, and
260 | only if you received the object code with such an offer, in accord
261 | with subsection 6b.
262 |
263 | d) Convey the object code by offering access from a designated
264 | place (gratis or for a charge), and offer equivalent access to the
265 | Corresponding Source in the same way through the same place at no
266 | further charge. You need not require recipients to copy the
267 | Corresponding Source along with the object code. If the place to
268 | copy the object code is a network server, the Corresponding Source
269 | may be on a different server (operated by you or a third party)
270 | that supports equivalent copying facilities, provided you maintain
271 | clear directions next to the object code saying where to find the
272 | Corresponding Source. Regardless of what server hosts the
273 | Corresponding Source, you remain obligated to ensure that it is
274 | available for as long as needed to satisfy these requirements.
275 |
276 | e) Convey the object code using peer-to-peer transmission, provided
277 | you inform other peers where the object code and Corresponding
278 | Source of the work are being offered to the general public at no
279 | charge under subsection 6d.
280 |
281 | A separable portion of the object code, whose source code is excluded
282 | from the Corresponding Source as a System Library, need not be
283 | included in conveying the object code work.
284 |
285 | A "User Product" is either (1) a "consumer product", which means any
286 | tangible personal property which is normally used for personal, family,
287 | or household purposes, or (2) anything designed or sold for incorporation
288 | into a dwelling. In determining whether a product is a consumer product,
289 | doubtful cases shall be resolved in favor of coverage. For a particular
290 | product received by a particular user, "normally used" refers to a
291 | typical or common use of that class of product, regardless of the status
292 | of the particular user or of the way in which the particular user
293 | actually uses, or expects or is expected to use, the product. A product
294 | is a consumer product regardless of whether the product has substantial
295 | commercial, industrial or non-consumer uses, unless such uses represent
296 | the only significant mode of use of the product.
297 |
298 | "Installation Information" for a User Product means any methods,
299 | procedures, authorization keys, or other information required to install
300 | and execute modified versions of a covered work in that User Product from
301 | a modified version of its Corresponding Source. The information must
302 | suffice to ensure that the continued functioning of the modified object
303 | code is in no case prevented or interfered with solely because
304 | modification has been made.
305 |
306 | If you convey an object code work under this section in, or with, or
307 | specifically for use in, a User Product, and the conveying occurs as
308 | part of a transaction in which the right of possession and use of the
309 | User Product is transferred to the recipient in perpetuity or for a
310 | fixed term (regardless of how the transaction is characterized), the
311 | Corresponding Source conveyed under this section must be accompanied
312 | by the Installation Information. But this requirement does not apply
313 | if neither you nor any third party retains the ability to install
314 | modified object code on the User Product (for example, the work has
315 | been installed in ROM).
316 |
317 | The requirement to provide Installation Information does not include a
318 | requirement to continue to provide support service, warranty, or updates
319 | for a work that has been modified or installed by the recipient, or for
320 | the User Product in which it has been modified or installed. Access to a
321 | network may be denied when the modification itself materially and
322 | adversely affects the operation of the network or violates the rules and
323 | protocols for communication across the network.
324 |
325 | Corresponding Source conveyed, and Installation Information provided,
326 | in accord with this section must be in a format that is publicly
327 | documented (and with an implementation available to the public in
328 | source code form), and must require no special password or key for
329 | unpacking, reading or copying.
330 |
331 | 7. Additional Terms.
332 |
333 | "Additional permissions" are terms that supplement the terms of this
334 | License by making exceptions from one or more of its conditions.
335 | Additional permissions that are applicable to the entire Program shall
336 | be treated as though they were included in this License, to the extent
337 | that they are valid under applicable law. If additional permissions
338 | apply only to part of the Program, that part may be used separately
339 | under those permissions, but the entire Program remains governed by
340 | this License without regard to the additional permissions.
341 |
342 | When you convey a copy of a covered work, you may at your option
343 | remove any additional permissions from that copy, or from any part of
344 | it. (Additional permissions may be written to require their own
345 | removal in certain cases when you modify the work.) You may place
346 | additional permissions on material, added by you to a covered work,
347 | for which you have or can give appropriate copyright permission.
348 |
349 | Notwithstanding any other provision of this License, for material you
350 | add to a covered work, you may (if authorized by the copyright holders of
351 | that material) supplement the terms of this License with terms:
352 |
353 | a) Disclaiming warranty or limiting liability differently from the
354 | terms of sections 15 and 16 of this License; or
355 |
356 | b) Requiring preservation of specified reasonable legal notices or
357 | author attributions in that material or in the Appropriate Legal
358 | Notices displayed by works containing it; or
359 |
360 | c) Prohibiting misrepresentation of the origin of that material, or
361 | requiring that modified versions of such material be marked in
362 | reasonable ways as different from the original version; or
363 |
364 | d) Limiting the use for publicity purposes of names of licensors or
365 | authors of the material; or
366 |
367 | e) Declining to grant rights under trademark law for use of some
368 | trade names, trademarks, or service marks; or
369 |
370 | f) Requiring indemnification of licensors and authors of that
371 | material by anyone who conveys the material (or modified versions of
372 | it) with contractual assumptions of liability to the recipient, for
373 | any liability that these contractual assumptions directly impose on
374 | those licensors and authors.
375 |
376 | All other non-permissive additional terms are considered "further
377 | restrictions" within the meaning of section 10. If the Program as you
378 | received it, or any part of it, contains a notice stating that it is
379 | governed by this License along with a term that is a further
380 | restriction, you may remove that term. If a license document contains
381 | a further restriction but permits relicensing or conveying under this
382 | License, you may add to a covered work material governed by the terms
383 | of that license document, provided that the further restriction does
384 | not survive such relicensing or conveying.
385 |
386 | If you add terms to a covered work in accord with this section, you
387 | must place, in the relevant source files, a statement of the
388 | additional terms that apply to those files, or a notice indicating
389 | where to find the applicable terms.
390 |
391 | Additional terms, permissive or non-permissive, may be stated in the
392 | form of a separately written license, or stated as exceptions;
393 | the above requirements apply either way.
394 |
395 | 8. Termination.
396 |
397 | You may not propagate or modify a covered work except as expressly
398 | provided under this License. Any attempt otherwise to propagate or
399 | modify it is void, and will automatically terminate your rights under
400 | this License (including any patent licenses granted under the third
401 | paragraph of section 11).
402 |
403 | However, if you cease all violation of this License, then your
404 | license from a particular copyright holder is reinstated (a)
405 | provisionally, unless and until the copyright holder explicitly and
406 | finally terminates your license, and (b) permanently, if the copyright
407 | holder fails to notify you of the violation by some reasonable means
408 | prior to 60 days after the cessation.
409 |
410 | Moreover, your license from a particular copyright holder is
411 | reinstated permanently if the copyright holder notifies you of the
412 | violation by some reasonable means, this is the first time you have
413 | received notice of violation of this License (for any work) from that
414 | copyright holder, and you cure the violation prior to 30 days after
415 | your receipt of the notice.
416 |
417 | Termination of your rights under this section does not terminate the
418 | licenses of parties who have received copies or rights from you under
419 | this License. If your rights have been terminated and not permanently
420 | reinstated, you do not qualify to receive new licenses for the same
421 | material under section 10.
422 |
423 | 9. Acceptance Not Required for Having Copies.
424 |
425 | You are not required to accept this License in order to receive or
426 | run a copy of the Program. Ancillary propagation of a covered work
427 | occurring solely as a consequence of using peer-to-peer transmission
428 | to receive a copy likewise does not require acceptance. However,
429 | nothing other than this License grants you permission to propagate or
430 | modify any covered work. These actions infringe copyright if you do
431 | not accept this License. Therefore, by modifying or propagating a
432 | covered work, you indicate your acceptance of this License to do so.
433 |
434 | 10. Automatic Licensing of Downstream Recipients.
435 |
436 | Each time you convey a covered work, the recipient automatically
437 | receives a license from the original licensors, to run, modify and
438 | propagate that work, subject to this License. You are not responsible
439 | for enforcing compliance by third parties with this License.
440 |
441 | An "entity transaction" is a transaction transferring control of an
442 | organization, or substantially all assets of one, or subdividing an
443 | organization, or merging organizations. If propagation of a covered
444 | work results from an entity transaction, each party to that
445 | transaction who receives a copy of the work also receives whatever
446 | licenses to the work the party's predecessor in interest had or could
447 | give under the previous paragraph, plus a right to possession of the
448 | Corresponding Source of the work from the predecessor in interest, if
449 | the predecessor has it or can get it with reasonable efforts.
450 |
451 | You may not impose any further restrictions on the exercise of the
452 | rights granted or affirmed under this License. For example, you may
453 | not impose a license fee, royalty, or other charge for exercise of
454 | rights granted under this License, and you may not initiate litigation
455 | (including a cross-claim or counterclaim in a lawsuit) alleging that
456 | any patent claim is infringed by making, using, selling, offering for
457 | sale, or importing the Program or any portion of it.
458 |
459 | 11. Patents.
460 |
461 | A "contributor" is a copyright holder who authorizes use under this
462 | License of the Program or a work on which the Program is based. The
463 | work thus licensed is called the contributor's "contributor version".
464 |
465 | A contributor's "essential patent claims" are all patent claims
466 | owned or controlled by the contributor, whether already acquired or
467 | hereafter acquired, that would be infringed by some manner, permitted
468 | by this License, of making, using, or selling its contributor version,
469 | but do not include claims that would be infringed only as a
470 | consequence of further modification of the contributor version. For
471 | purposes of this definition, "control" includes the right to grant
472 | patent sublicenses in a manner consistent with the requirements of
473 | this License.
474 |
475 | Each contributor grants you a non-exclusive, worldwide, royalty-free
476 | patent license under the contributor's essential patent claims, to
477 | make, use, sell, offer for sale, import and otherwise run, modify and
478 | propagate the contents of its contributor version.
479 |
480 | In the following three paragraphs, a "patent license" is any express
481 | agreement or commitment, however denominated, not to enforce a patent
482 | (such as an express permission to practice a patent or covenant not to
483 | sue for patent infringement). To "grant" such a patent license to a
484 | party means to make such an agreement or commitment not to enforce a
485 | patent against the party.
486 |
487 | If you convey a covered work, knowingly relying on a patent license,
488 | and the Corresponding Source of the work is not available for anyone
489 | to copy, free of charge and under the terms of this License, through a
490 | publicly available network server or other readily accessible means,
491 | then you must either (1) cause the Corresponding Source to be so
492 | available, or (2) arrange to deprive yourself of the benefit of the
493 | patent license for this particular work, or (3) arrange, in a manner
494 | consistent with the requirements of this License, to extend the patent
495 | license to downstream recipients. "Knowingly relying" means you have
496 | actual knowledge that, but for the patent license, your conveying the
497 | covered work in a country, or your recipient's use of the covered work
498 | in a country, would infringe one or more identifiable patents in that
499 | country that you have reason to believe are valid.
500 |
501 | If, pursuant to or in connection with a single transaction or
502 | arrangement, you convey, or propagate by procuring conveyance of, a
503 | covered work, and grant a patent license to some of the parties
504 | receiving the covered work authorizing them to use, propagate, modify
505 | or convey a specific copy of the covered work, then the patent license
506 | you grant is automatically extended to all recipients of the covered
507 | work and works based on it.
508 |
509 | A patent license is "discriminatory" if it does not include within
510 | the scope of its coverage, prohibits the exercise of, or is
511 | conditioned on the non-exercise of one or more of the rights that are
512 | specifically granted under this License. You may not convey a covered
513 | work if you are a party to an arrangement with a third party that is
514 | in the business of distributing software, under which you make payment
515 | to the third party based on the extent of your activity of conveying
516 | the work, and under which the third party grants, to any of the
517 | parties who would receive the covered work from you, a discriminatory
518 | patent license (a) in connection with copies of the covered work
519 | conveyed by you (or copies made from those copies), or (b) primarily
520 | for and in connection with specific products or compilations that
521 | contain the covered work, unless you entered into that arrangement,
522 | or that patent license was granted, prior to 28 March 2007.
523 |
524 | Nothing in this License shall be construed as excluding or limiting
525 | any implied license or other defenses to infringement that may
526 | otherwise be available to you under applicable patent law.
527 |
528 | 12. No Surrender of Others' Freedom.
529 |
530 | If conditions are imposed on you (whether by court order, agreement or
531 | otherwise) that contradict the conditions of this License, they do not
532 | excuse you from the conditions of this License. If you cannot convey a
533 | covered work so as to satisfy simultaneously your obligations under this
534 | License and any other pertinent obligations, then as a consequence you may
535 | not convey it at all. For example, if you agree to terms that obligate you
536 | to collect a royalty for further conveying from those to whom you convey
537 | the Program, the only way you could satisfy both those terms and this
538 | License would be to refrain entirely from conveying the Program.
539 |
540 | 13. Remote Network Interaction; Use with the GNU General Public License.
541 |
542 | Notwithstanding any other provision of this License, if you modify the
543 | Program, your modified version must prominently offer all users
544 | interacting with it remotely through a computer network (if your version
545 | supports such interaction) an opportunity to receive the Corresponding
546 | Source of your version by providing access to the Corresponding Source
547 | from a network server at no charge, through some standard or customary
548 | means of facilitating copying of software. This Corresponding Source
549 | shall include the Corresponding Source for any work covered by version 3
550 | of the GNU General Public License that is incorporated pursuant to the
551 | following paragraph.
552 |
553 | Notwithstanding any other provision of this License, you have
554 | permission to link or combine any covered work with a work licensed
555 | under version 3 of the GNU General Public License into a single
556 | combined work, and to convey the resulting work. The terms of this
557 | License will continue to apply to the part which is the covered work,
558 | but the work with which it is combined will remain governed by version
559 | 3 of the GNU General Public License.
560 |
561 | 14. Revised Versions of this License.
562 |
563 | The Free Software Foundation may publish revised and/or new versions of
564 | the GNU Affero General Public License from time to time. Such new versions
565 | will be similar in spirit to the present version, but may differ in detail to
566 | address new problems or concerns.
567 |
568 | Each version is given a distinguishing version number. If the
569 | Program specifies that a certain numbered version of the GNU Affero General
570 | Public License "or any later version" applies to it, you have the
571 | option of following the terms and conditions either of that numbered
572 | version or of any later version published by the Free Software
573 | Foundation. If the Program does not specify a version number of the
574 | GNU Affero General Public License, you may choose any version ever published
575 | by the Free Software Foundation.
576 |
577 | If the Program specifies that a proxy can decide which future
578 | versions of the GNU Affero General Public License can be used, that proxy's
579 | public statement of acceptance of a version permanently authorizes you
580 | to choose that version for the Program.
581 |
582 | Later license versions may give you additional or different
583 | permissions. However, no additional obligations are imposed on any
584 | author or copyright holder as a result of your choosing to follow a
585 | later version.
586 |
587 | 15. Disclaimer of Warranty.
588 |
589 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
590 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
591 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
592 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
593 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
594 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
595 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
596 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
597 |
598 | 16. Limitation of Liability.
599 |
600 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
601 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
602 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
603 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
604 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
605 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
606 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
607 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
608 | SUCH DAMAGES.
609 |
610 | 17. Interpretation of Sections 15 and 16.
611 |
612 | If the disclaimer of warranty and limitation of liability provided
613 | above cannot be given local legal effect according to their terms,
614 | reviewing courts shall apply local law that most closely approximates
615 | an absolute waiver of all civil liability in connection with the
616 | Program, unless a warranty or assumption of liability accompanies a
617 | copy of the Program in return for a fee.
618 |
619 | END OF TERMS AND CONDITIONS
620 |
621 | How to Apply These Terms to Your New Programs
622 |
623 | If you develop a new program, and you want it to be of the greatest
624 | possible use to the public, the best way to achieve this is to make it
625 | free software which everyone can redistribute and change under these terms.
626 |
627 | To do so, attach the following notices to the program. It is safest
628 | to attach them to the start of each source file to most effectively
629 | state the exclusion of warranty; and each file should have at least
630 | the "copyright" line and a pointer to where the full notice is found.
631 |
632 |
633 | Copyright (C)
634 |
635 | This program is free software: you can redistribute it and/or modify
636 | it under the terms of the GNU Affero General Public License as published by
637 | the Free Software Foundation, either version 3 of the License, or
638 | (at your option) any later version.
639 |
640 | This program is distributed in the hope that it will be useful,
641 | but WITHOUT ANY WARRANTY; without even the implied warranty of
642 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
643 | GNU Affero General Public License for more details.
644 |
645 | You should have received a copy of the GNU Affero General Public License
646 | along with this program. If not, see .
647 |
648 | Also add information on how to contact you by electronic and paper mail.
649 |
650 | If your software can interact with users remotely through a computer
651 | network, you should also make sure that it provides a way for users to
652 | get its source. For example, if your program is a web application, its
653 | interface could display a "Source" link that leads users to an archive
654 | of the code. There are many ways you could offer source, and different
655 | solutions will be better for different programs; see section 13 for the
656 | specific requirements.
657 |
658 | You should also get your employer (if you work as a programmer) or school,
659 | if any, to sign a "copyright disclaimer" for the program, if necessary.
660 | For more information on this, and how to apply and follow the GNU AGPL, see
661 | .
662 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # analyzer-d4-passivedns
2 |
3 | analyzer-d4-passivedns is an analyzer for a D4 network sensor that includes a complete Passive DNS server. The analyzer can process data produced by D4 sensors (in [passivedns](https://github.com/gamelinux/passivedns) CSV format, with more formats to come) or, independently from D4, [COF websocket](https://datatracker.ietf.org/doc/html/draft-dulaunoy-dnsop-passive-dns-cof) streams.
4 |
5 | The package includes a Passive DNS server which can be queried later to search the collected Passive DNS records.
6 |
7 | # Features
8 |
9 | - [Input stream] - A D4 analyzer which can be plugged into one or more [D4 servers](https://github.com/D4-project/d4-core) to get a stream of DNS records
10 | - [Input stream] - A websocket stream (or a file stream) in NDJSON [COF format](https://datatracker.ietf.org/doc/html/draft-dulaunoy-dnsop-passive-dns-cof)
11 | - [Output API] - A Passive DNS REST server compliant with the [Common Output Format](https://tools.ietf.org/html/draft-dulaunoy-dnsop-passive-dns-cof)
12 | - A flexible and simple analyzer which can be configured to collect the required records from the DNS stream
13 |
14 | # Overview
15 |
16 | ## Requirements
17 |
18 | - Python 3.8
19 | - Redis >5.0 or [kvrocks](https://github.com/apache/incubator-kvrocks)
20 | - Tornado
21 | - iptools
22 |
23 | ## Install
24 |
25 | ### Redis
26 |
27 | ~~~~
28 | ./install_server.sh
29 | ~~~~
30 |
31 | All the Python 3 code will be installed in a virtualenv (PDNSENV).
32 |
33 | ### Kvrocks
34 |
35 | ~~~
36 | ./install_server_kvrocks.sh
37 | ~~~
38 |
39 | All the Python 3 code will be installed in a virtualenv (PDNSENV).
40 |
41 | ## Running
42 |
43 | ### Start the redis server or kvrocks server
44 |
45 | Don't forget to set the DB directory in the redis.conf configuration. By default, the Redis instance for Passive DNS listens on TCP port 6400.
46 |
47 | ~~~~
48 | ./redis/src/redis-server ./etc/redis.conf
49 | ~~~~
50 |
51 | or
52 |
53 | ~~~~
54 | ./kvrocks/src/kvrocks -c ./etc/kvrocks.conf
55 | ~~~~
56 |
57 | ### Start the Passive DNS COF server
58 |
59 | ~~~~
60 | . ./PDNSENV/bin/activate
61 | cd ./bin/
62 | python3 ./pdns-cof-server.py
63 | ~~~~
64 |
65 | ## Feeding the Passive DNS server
66 |
67 | You have two ways to feed the Passive DNS server. You can combine multiple streams. A sample public COF stream is available from CIRCL with the newly seen IPv6 addresses and DNS records.
68 |
69 | ### (via COF websocket stream) Start the importer
70 |
71 | ~~~~
72 | python3 pdns-import-cof.py --websocket ws://crh.circl.lu:8888
73 | ~~~~
74 |
75 | ### (via D4) Configure and start the D4 analyzer
76 |
77 | ~~~~
78 | cd ./etc
79 | cp analyzer.conf.sample analyzer.conf
80 | ~~~~
81 |
82 | Edit the analyzer.conf to match the UUID of the analyzer queue from your D4 server.
83 |
84 | ~~~~
85 | [global]
86 | my-uuid = 6072e072-bfaa-4395-9bb1-cdb3b470d715
87 | d4-server = 127.0.0.1:6380
88 | # INFO|DEBUG
89 | logging-level = INFO
90 | ~~~~
91 |
92 | then you can start the analyzer, which will fetch the data from the analyzer queue, parse it and
93 | populate the Passive DNS database.
94 |
95 | ~~~~
96 | . ./PDNSENV/bin/activate
97 | cd ./bin/
98 | python3 pdns-ingestion.py
99 | ~~~~
100 |
101 | ## Usage
102 |
103 | ### Querying the server
104 |
105 | ~~~~shell
106 | adulau@kolmogorov ~/git/analyzer-d4-passivedns (master)$ curl -s http://127.0.0.1:8400/query/xn--ihuvudetpevap-xfb.se | jq .
107 | {
108 | "time_first": 1657878272,
109 | "time_last": 1657878272,
110 | "count": 1,
111 | "rrtype": "AAAA",
112 | "rrname": "xn--ihuvudetpevap-xfb.se",
113 | "rdata": "2a02:250:0:8::53",
114 | "origin": "origin not configured"
115 | }
116 | ~~~~
117 |
118 | ~~~~shell
119 | curl -s http://127.0.0.1:8400/query/2a02:250:0:8::53
120 | {"time_first": 1657878141, "time_last": 1657878141, "count": 1, "rrtype": "AAAA", "rrname": "media.vastporten.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
121 | {"time_first": 1657878929, "time_last": 1657878929, "count": 1, "rrtype": "AAAA", "rrname": "www.folkinitiativetarjeplog.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
122 | {"time_first": 1657878272, "time_last": 1657878272, "count": 1, "rrtype": "AAAA", "rrname": "xn--ihuvudetpevap-xfb.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
123 | {"time_first": 1657878189, "time_last": 1657878189, "count": 1, "rrtype": "AAAA", "rrname": "media.primesteps.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
124 | {"time_first": 1657878986, "time_last": 1657878986, "count": 1, "rrtype": "AAAA", "rrname": "media.skellefteaadventurepark.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
125 | {"time_first": 1657874940, "time_last": 1657874940, "count": 1, "rrtype": "AAAA", "rrname": "galleri.torsaspaintball.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
126 | {"time_first": 1657874205, "time_last": 1657874205, "count": 1, "rrtype": "AAAA", "rrname": "www.media1.harlaut.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
127 | {"time_first": 1657878165, "time_last": 1657878165, "count": 1, "rrtype": "AAAA", "rrname": "www.sd-nekretnine.rs", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
128 | {"time_first": 1657878678, "time_last": 1657878678, "count": 1, "rrtype": "AAAA", "rrname": "www.www2.resultat-balans.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
129 | {"time_first": 1657874288, "time_last": 1657874288, "count": 1, "rrtype": "AAAA", "rrname": "www.assistanshemtjanst.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
130 | {"time_first": 1657878943, "time_last": 1657878943, "count": 1, "rrtype": "AAAA", "rrname": "kafekultur.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
131 | {"time_first": 1657878141, "time_last": 1657878141, "count": 1, "rrtype": "AAAA", "rrname": "media1.rlab.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
132 | {"time_first": 1657878997, "time_last": 1657878997, "count": 1, "rrtype": "AAAA", "rrname": "serbiagreenbuildingexpo.com", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
133 | {"time_first": 1657879064, "time_last": 1657879064, "count": 1, "rrtype": "AAAA", "rrname": "www.framtro.nu", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
134 | {"time_first": 1657874285, "time_last": 1657874285, "count": 1, "rrtype": "AAAA", "rrname": "www.twotheartist.com", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
135 | {"time_first": 1657878774, "time_last": 1657878774, "count": 1, "rrtype": "AAAA", "rrname": "media.narkesten.se", "rdata": "2a02:250:0:8::53", "origin": "origin not configured"}
136 | ~~~~
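The NDJSON responses above (one JSON record per line) can also be consumed programmatically. A minimal sketch, assuming the server runs on the default 127.0.0.1:8400; `parse_cof_lines` and `query_pdns` are illustrative helper names, not part of this package:

```python
import json
import urllib.request


def parse_cof_lines(body):
    """Parse an NDJSON COF response: one JSON object per non-empty line."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]


def query_pdns(value, host="127.0.0.1", port=8400):
    """Query the Passive DNS COF server for a name or IP and return the decoded records."""
    url = f"http://{host}:{port}/query/{value}"
    with urllib.request.urlopen(url) as resp:
        return parse_cof_lines(resp.read().decode("utf-8"))


# Example (requires a running server):
# for rec in query_pdns("2a02:250:0:8::53"):
#     print(rec["rrname"], rec["rrtype"], rec["rdata"])
```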
137 |
138 | # License
139 |
140 | The software is free software/open source released under the GNU Affero General Public License version 3.
141 |
142 |
--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-cayman
--------------------------------------------------------------------------------
/bin/pdns-cof-server.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # -*- coding: utf-8 -*-
3 | #
4 | # A Passive DNS COF compliant passive DNS server for the analyzer-d4-passivedns
5 | #
6 | # The output format is compliant with Passive DNS - Common Output Format
7 | #
8 | # https://tools.ietf.org/html/draft-dulaunoy-dnsop-passive-dns-cof
9 | #
10 | # This software is part of the D4 project.
11 | #
12 | # The software is released under the GNU Affero General Public version 3.
13 | #
14 | # Copyright (c) 2013-2022 Alexandre Dulaunoy - a@foo.be
15 | # Copyright (c) 2019-2022 Computer Incident Response Center Luxembourg (CIRCL)
16 |
17 |
18 | from datetime import date
19 | import tornado.escape
20 | import tornado.ioloop
21 | import tornado.web
22 |
23 | import iptools
24 | import redis
25 | import json
26 | import os
27 |
28 | rrset = [
29 | {
30 | "Reference": "[RFC1035]",
31 | "Type": "A",
32 | "Value": "1",
33 | "Meaning": "a host address",
34 | "Template": "",
35 | "Registration Date": "",
36 | },
37 | {
38 | "Reference": "[RFC1035]",
39 | "Type": "NS",
40 | "Value": "2",
41 | "Meaning": "an authoritative name server",
42 | "Template": "",
43 | "Registration Date": "",
44 | },
45 | {
46 | "Reference": "[RFC1035]",
47 | "Type": "MD",
48 | "Value": "3",
49 | "Meaning": "a mail destination (OBSOLETE - use MX)",
50 | "Template": "",
51 | "Registration Date": "",
52 | },
53 | {
54 | "Reference": "[RFC1035]",
55 | "Type": "MF",
56 | "Value": "4",
57 | "Meaning": "a mail forwarder (OBSOLETE - use MX)",
58 | "Template": "",
59 | "Registration Date": "",
60 | },
61 | {
62 | "Reference": "[RFC1035]",
63 | "Type": "CNAME",
64 | "Value": "5",
65 | "Meaning": "the canonical name for an alias",
66 | "Template": "",
67 | "Registration Date": "",
68 | },
69 | {
70 | "Reference": "[RFC1035]",
71 | "Type": "SOA",
72 | "Value": "6",
73 | "Meaning": "marks the start of a zone of authority",
74 | "Template": "",
75 | "Registration Date": "",
76 | },
77 | {
78 | "Reference": "[RFC1035]",
79 | "Type": "MB",
80 | "Value": "7",
81 | "Meaning": "a mailbox domain name (EXPERIMENTAL)",
82 | "Template": "",
83 | "Registration Date": "",
84 | },
85 | {
86 | "Reference": "[RFC1035]",
87 | "Type": "MG",
88 | "Value": "8",
89 | "Meaning": "a mail group member (EXPERIMENTAL)",
90 | "Template": "",
91 | "Registration Date": "",
92 | },
93 | {
94 | "Reference": "[RFC1035]",
95 | "Type": "MR",
96 | "Value": "9",
97 | "Meaning": "a mail rename domain name (EXPERIMENTAL)",
98 | "Template": "",
99 | "Registration Date": "",
100 | },
101 | {
102 | "Reference": "[RFC1035]",
103 | "Type": "NULL",
104 | "Value": "10",
105 | "Meaning": "a null RR (EXPERIMENTAL)",
106 | "Template": "",
107 | "Registration Date": "",
108 | },
109 | {
110 | "Reference": "[RFC1035]",
111 | "Type": "WKS",
112 | "Value": "11",
113 | "Meaning": "a well known service description",
114 | "Template": "",
115 | "Registration Date": "",
116 | },
117 | {
118 | "Reference": "[RFC1035]",
119 | "Type": "PTR",
120 | "Value": "12",
121 | "Meaning": "a domain name pointer",
122 | "Template": "",
123 | "Registration Date": "",
124 | },
125 | {
126 | "Reference": "[RFC1035]",
127 | "Type": "HINFO",
128 | "Value": "13",
129 | "Meaning": "host information",
130 | "Template": "",
131 | "Registration Date": "",
132 | },
133 | {
134 | "Reference": "[RFC1035]",
135 | "Type": "MINFO",
136 | "Value": "14",
137 | "Meaning": "mailbox or mail list information",
138 | "Template": "",
139 | "Registration Date": "",
140 | },
141 | {
142 | "Reference": "[RFC1035]",
143 | "Type": "MX",
144 | "Value": "15",
145 | "Meaning": "mail exchange",
146 | "Template": "",
147 | "Registration Date": "",
148 | },
149 | {
150 | "Reference": "[RFC1035]",
151 | "Type": "TXT",
152 | "Value": "16",
153 | "Meaning": "text strings",
154 | "Template": "",
155 | "Registration Date": "",
156 | },
157 | {
158 | "Reference": "[RFC1183]",
159 | "Type": "RP",
160 | "Value": "17",
161 | "Meaning": "for Responsible Person",
162 | "Template": "",
163 | "Registration Date": "",
164 | },
165 | {
166 | "Reference": "[RFC1183][RFC5864]",
167 | "Type": "AFSDB",
168 | "Value": "18",
169 | "Meaning": "for AFS Data Base location",
170 | "Template": "",
171 | "Registration Date": "",
172 | },
173 | {
174 | "Reference": "[RFC1183]",
175 | "Type": "X25",
176 | "Value": "19",
177 | "Meaning": "for X.25 PSDN address",
178 | "Template": "",
179 | "Registration Date": "",
180 | },
181 | {
182 | "Reference": "[RFC1183]",
183 | "Type": "ISDN",
184 | "Value": "20",
185 | "Meaning": "for ISDN address",
186 | "Template": "",
187 | "Registration Date": "",
188 | },
189 | {
190 | "Reference": "[RFC1183]",
191 | "Type": "RT",
192 | "Value": "21",
193 | "Meaning": "for Route Through",
194 | "Template": "",
195 | "Registration Date": "",
196 | },
197 | {
198 | "Reference": "[RFC1706]",
199 | "Type": "NSAP",
200 | "Value": "22",
201 | "Meaning": "for NSAP address, NSAP style A record",
202 | "Template": "",
203 | "Registration Date": "",
204 | },
205 | {
206 | "Reference": "[RFC1348][RFC1637][RFC1706]",
207 | "Type": "NSAP-PTR",
208 | "Value": "23",
209 | "Meaning": "for domain name pointer, NSAP style",
210 | "Template": "",
211 | "Registration Date": "",
212 | },
213 | {
214 | "Reference": "[RFC4034][RFC3755][RFC2535][RFC2536][RFC2537][RFC2931][RFC3110][RFC3008]",
215 | "Type": "SIG",
216 | "Value": "24",
217 | "Meaning": "for security signature",
218 | "Template": "",
219 | "Registration Date": "",
220 | },
221 | {
222 | "Reference": "[RFC4034][RFC3755][RFC2535][RFC2536][RFC2537][RFC2539][RFC3008][RFC3110]",
223 | "Type": "KEY",
224 | "Value": "25",
225 | "Meaning": "for security key",
226 | "Template": "",
227 | "Registration Date": "",
228 | },
229 | {
230 | "Reference": "[RFC2163]",
231 | "Type": "PX",
232 | "Value": "26",
233 | "Meaning": "X.400 mail mapping information",
234 | "Template": "",
235 | "Registration Date": "",
236 | },
237 | {
238 | "Reference": "[RFC1712]",
239 | "Type": "GPOS",
240 | "Value": "27",
241 | "Meaning": "Geographical Position",
242 | "Template": "",
243 | "Registration Date": "",
244 | },
245 | {
246 | "Reference": "[RFC3596]",
247 | "Type": "AAAA",
248 | "Value": "28",
249 | "Meaning": "IP6 Address",
250 | "Template": "",
251 | "Registration Date": "",
252 | },
253 | {
254 | "Reference": "[RFC1876]",
255 | "Type": "LOC",
256 | "Value": "29",
257 | "Meaning": "Location Information",
258 | "Template": "",
259 | "Registration Date": "",
260 | },
261 | {
262 | "Reference": "[RFC3755][RFC2535]",
263 | "Type": "NXT",
264 | "Value": "30",
265 | "Meaning": "Next Domain (OBSOLETE)",
266 | "Template": "",
267 | "Registration Date": "",
268 | },
269 | {
270 | "Reference": "[Michael_Patton][http://ana-3.lcs.mit.edu/~jnc/nimrod/dns.txt]",
271 | "Type": "EID",
272 | "Value": "31",
273 | "Meaning": "Endpoint Identifier",
274 | "Template": "",
275 | "Registration Date": "1995-06",
276 | },
277 | {
278 | "Reference": "[1][Michael_Patton][http://ana-3.lcs.mit.edu/~jnc/nimrod/dns.txt]",
279 | "Type": "NIMLOC",
280 | "Value": "32",
281 | "Meaning": "Nimrod Locator",
282 | "Template": "",
283 | "Registration Date": "1995-06",
284 | },
285 | {
286 | "Reference": "[1][RFC2782]",
287 | "Type": "SRV",
288 | "Value": "33",
289 | "Meaning": "Server Selection",
290 | "Template": "",
291 | "Registration Date": "",
292 | },
293 | {
294 | "Reference": "[\n ATM Forum Technical Committee, \"ATM Name System, V2.0\", Doc ID: AF-DANS-0152.000, July 2000. Available from and held in escrow by IANA.]",
295 | "Type": "ATMA",
296 | "Value": "34",
297 | "Meaning": "ATM Address",
298 | "Template": "",
299 | "Registration Date": "",
300 | },
301 | {
302 | "Reference": "[RFC2915][RFC2168][RFC3403]",
303 | "Type": "NAPTR",
304 | "Value": "35",
305 | "Meaning": "Naming Authority Pointer",
306 | "Template": "",
307 | "Registration Date": "",
308 | },
309 | {
310 | "Reference": "[RFC2230]",
311 | "Type": "KX",
312 | "Value": "36",
313 | "Meaning": "Key Exchanger",
314 | "Template": "",
315 | "Registration Date": "",
316 | },
317 | {
318 | "Reference": "[RFC4398]",
319 | "Type": "CERT",
320 | "Value": "37",
321 | "Meaning": "CERT",
322 | "Template": "",
323 | "Registration Date": "",
324 | },
325 | {
326 | "Reference": "[RFC3226][RFC2874][RFC6563]",
327 | "Type": "A6",
328 | "Value": "38",
329 | "Meaning": "A6 (OBSOLETE - use AAAA)",
330 | "Template": "",
331 | "Registration Date": "",
332 | },
333 | {
334 | "Reference": "[RFC6672]",
335 | "Type": "DNAME",
336 | "Value": "39",
337 | "Meaning": "DNAME",
338 | "Template": "",
339 | "Registration Date": "",
340 | },
341 | {
342 | "Reference": "[Donald_E_Eastlake][http://tools.ietf.org/html/draft-eastlake-kitchen-sink]",
343 | "Type": "SINK",
344 | "Value": "40",
345 | "Meaning": "SINK",
346 | "Template": "",
347 | "Registration Date": "1997-11",
348 | },
349 | {
350 | "Reference": "[RFC6891][RFC3225]",
351 | "Type": "OPT",
352 | "Value": "41",
353 | "Meaning": "OPT",
354 | "Template": "",
355 | "Registration Date": "",
356 | },
357 | {
358 | "Reference": "[RFC3123]",
359 | "Type": "APL",
360 | "Value": "42",
361 | "Meaning": "APL",
362 | "Template": "",
363 | "Registration Date": "",
364 | },
365 | {
366 | "Reference": "[RFC4034][RFC3658]",
367 | "Type": "DS",
368 | "Value": "43",
369 | "Meaning": "Delegation Signer",
370 | "Template": "",
371 | "Registration Date": "",
372 | },
373 | {
374 | "Reference": "[RFC4255]",
375 | "Type": "SSHFP",
376 | "Value": "44",
377 | "Meaning": "SSH Key Fingerprint",
378 | "Template": "",
379 | "Registration Date": "",
380 | },
381 | {
382 | "Reference": "[RFC4025]",
383 | "Type": "IPSECKEY",
384 | "Value": "45",
385 | "Meaning": "IPSECKEY",
386 | "Template": "",
387 | "Registration Date": "",
388 | },
389 | {
390 | "Reference": "[RFC4034][RFC3755]",
391 | "Type": "RRSIG",
392 | "Value": "46",
393 | "Meaning": "RRSIG",
394 | "Template": "",
395 | "Registration Date": "",
396 | },
397 | {
398 | "Reference": "[RFC4034][RFC3755]",
399 | "Type": "NSEC",
400 | "Value": "47",
401 | "Meaning": "NSEC",
402 | "Template": "",
403 | "Registration Date": "",
404 | },
405 | {
406 | "Reference": "[RFC4034][RFC3755]",
407 | "Type": "DNSKEY",
408 | "Value": "48",
409 | "Meaning": "DNSKEY",
410 | "Template": "",
411 | "Registration Date": "",
412 | },
413 | {
414 | "Reference": "[RFC4701]",
415 | "Type": "DHCID",
416 | "Value": "49",
417 | "Meaning": "DHCID",
418 | "Template": "",
419 | "Registration Date": "",
420 | },
421 | {
422 | "Reference": "[RFC5155]",
423 | "Type": "NSEC3",
424 | "Value": "50",
425 | "Meaning": "NSEC3",
426 | "Template": "",
427 | "Registration Date": "",
428 | },
429 | {
430 | "Reference": "[RFC5155]",
431 | "Type": "NSEC3PARAM",
432 | "Value": "51",
433 | "Meaning": "NSEC3PARAM",
434 | "Template": "",
435 | "Registration Date": "",
436 | },
437 | {
438 | "Reference": "[RFC6698]",
439 | "Type": "TLSA",
440 | "Value": "52",
441 | "Meaning": "TLSA",
442 | "Template": "",
443 | "Registration Date": "",
444 | },
445 | {
446 | "Reference": "[RFC5205]",
447 | "Type": "HIP",
448 | "Value": "55",
449 | "Meaning": "Host Identity Protocol",
450 | "Template": "",
451 | "Registration Date": "",
452 | },
453 | {
454 | "Reference": "[Jim_Reid]",
455 | "Type": "NINFO",
456 | "Value": "56",
457 | "Meaning": "NINFO",
458 | "Template": "NINFO/ninfo-completed-template",
459 | "Registration Date": "2008-01-21",
460 | },
461 | {
462 | "Reference": "[Jim_Reid]",
463 | "Type": "RKEY",
464 | "Value": "57",
465 | "Meaning": "RKEY",
466 | "Template": "RKEY/rkey-completed-template",
467 | "Registration Date": "2008-01-21",
468 | },
469 | {
470 | "Reference": "[Wouter_Wijngaards]",
471 | "Type": "TALINK",
472 | "Value": "58",
473 | "Meaning": "Trust Anchor LINK",
474 | "Template": "TALINK/talink-completed-template",
475 | "Registration Date": "2010-02-17",
476 | },
477 | {
478 | "Reference": "[George_Barwood]",
479 | "Type": "CDS",
480 | "Value": "59",
481 | "Meaning": "Child DS",
482 | "Template": "CDS/cds-completed-template",
483 | "Registration Date": "2011-06-06",
484 | },
485 | {
486 | "Reference": "[RFC4408]",
487 | "Type": "SPF",
488 | "Value": "99",
489 | "Meaning": "",
490 | "Template": "",
491 | "Registration Date": "",
492 | },
493 | {
494 | "Reference": "[IANA-Reserved]",
495 | "Type": "UINFO",
496 | "Value": "100",
497 | "Meaning": "",
498 | "Template": "",
499 | "Registration Date": "",
500 | },
501 | {
502 | "Reference": "[IANA-Reserved]",
503 | "Type": "UID",
504 | "Value": "101",
505 | "Meaning": "",
506 | "Template": "",
507 | "Registration Date": "",
508 | },
509 | {
510 | "Reference": "[IANA-Reserved]",
511 | "Type": "GID",
512 | "Value": "102",
513 | "Meaning": "",
514 | "Template": "",
515 | "Registration Date": "",
516 | },
517 | {
518 | "Reference": "[IANA-Reserved]",
519 | "Type": "UNSPEC",
520 | "Value": "103",
521 | "Meaning": "",
522 | "Template": "",
523 | "Registration Date": "",
524 | },
525 | {
526 | "Reference": "[RFC6742]",
527 | "Type": "NID",
528 | "Value": "104",
529 | "Meaning": "",
530 | "Template": "ILNP/nid-completed-template",
531 | "Registration Date": "",
532 | },
533 | {
534 | "Reference": "[RFC6742]",
535 | "Type": "L32",
536 | "Value": "105",
537 | "Meaning": "",
538 | "Template": "ILNP/l32-completed-template",
539 | "Registration Date": "",
540 | },
541 | {
542 | "Reference": "[RFC6742]",
543 | "Type": "L64",
544 | "Value": "106",
545 | "Meaning": "",
546 | "Template": "ILNP/l64-completed-template",
547 | "Registration Date": "",
548 | },
549 | {
550 | "Reference": "[RFC6742]",
551 | "Type": "LP",
552 | "Value": "107",
553 | "Meaning": "",
554 | "Template": "ILNP/lp-completed-template",
555 | "Registration Date": "",
556 | },
557 | {
558 | "Reference": "[RFC7043]",
559 | "Type": "EUI48",
560 | "Value": "108",
561 | "Meaning": "an EUI-48 address",
562 | "Template": "EUI48/eui48-completed-template",
563 | "Registration Date": "2013-03-27",
564 | },
565 | {
566 | "Reference": "[RFC7043]",
567 | "Type": "EUI64",
568 | "Value": "109",
569 | "Meaning": "an EUI-64 address",
570 | "Template": "EUI64/eui64-completed-template",
571 | "Registration Date": "2013-03-27",
572 | },
573 | {
574 | "Reference": "[RFC2930]",
575 | "Type": "TKEY",
576 | "Value": "249",
577 | "Meaning": "Transaction Key",
578 | "Template": "",
579 | "Registration Date": "",
580 | },
581 | {
582 | "Reference": "[RFC2845]",
583 | "Type": "TSIG",
584 | "Value": "250",
585 | "Meaning": "Transaction Signature",
586 | "Template": "",
587 | "Registration Date": "",
588 | },
589 | {
590 | "Reference": "[RFC1995]",
591 | "Type": "IXFR",
592 | "Value": "251",
593 | "Meaning": "incremental transfer",
594 | "Template": "",
595 | "Registration Date": "",
596 | },
597 | {
598 | "Reference": "[RFC1035][RFC5936]",
599 | "Type": "AXFR",
600 | "Value": "252",
601 | "Meaning": "transfer of an entire zone",
602 | "Template": "",
603 | "Registration Date": "",
604 | },
605 | {
606 | "Reference": "[RFC1035]",
607 | "Type": "MAILB",
608 | "Value": "253",
609 | "Meaning": "mailbox-related RRs (MB, MG or MR)",
610 | "Template": "",
611 | "Registration Date": "",
612 | },
613 | {
614 | "Reference": "[RFC1035]",
615 | "Type": "MAILA",
616 | "Value": "254",
617 | "Meaning": "mail agent RRs (OBSOLETE - see MX)",
618 | "Template": "",
619 | "Registration Date": "",
620 | },
621 | {
622 | "Reference": "[RFC1035][RFC6895]",
623 | "Type": "*",
624 | "Value": "255",
625 | "Meaning": "A request for all records the server/cache has available",
626 | "Template": "",
627 | "Registration Date": "",
628 | },
629 | {
630 | "Reference": "[Patrik_Faltstrom]",
631 | "Type": "URI",
632 | "Value": "256",
633 | "Meaning": "URI",
634 | "Template": "URI/uri-completed-template",
635 | "Registration Date": "2011-02-22",
636 | },
637 | {
638 | "Reference": "[RFC6844]",
639 | "Type": "CAA",
640 | "Value": "257",
641 | "Meaning": "Certification Authority Restriction",
642 | "Template": "CAA/caa-completed-template",
643 | "Registration Date": "2011-04-07",
644 | },
645 | {
646 | "Reference": "[Sam_Weiler][http://cameo.library.cmu.edu/][\n Deploying DNSSEC Without a Signed Root. Technical Report 1999-19,\nInformation Networking Institute, Carnegie Mellon University, April 2004.]",
647 | "Type": "TA",
648 | "Value": "32768",
649 | "Meaning": "DNSSEC Trust Authorities",
650 | "Template": "",
651 | "Registration Date": "2005-12-13",
652 | },
653 | {
654 | "Reference": "[RFC4431]",
655 | "Type": "DLV",
656 | "Value": "32769",
657 | "Meaning": "DNSSEC Lookaside Validation",
658 | "Template": "",
659 | "Registration Date": "",
660 | },
661 | {
662 | "Reference": "",
663 | "Type": "Reserved",
664 | "Value": "65535",
665 | "Meaning": "",
666 | "Template": "",
667 | "Registration Date": "",
668 | },
669 | ]
670 |
671 | analyzer_redis_host = os.getenv('D4_ANALYZER_REDIS_HOST', '127.0.0.1')
672 | analyzer_redis_port = int(os.getenv('D4_ANALYZER_REDIS_PORT', 6400))
673 |
674 | r = redis.StrictRedis(host=analyzer_redis_host, port=analyzer_redis_port, db=0)
675 |
676 | rrset_supported = ['1', '2', '5', '15', '16', '28', '33', '46']
677 | expiring_type = ['16']
678 |
679 |
680 | origin = "origin not configured"
681 |
682 |
683 | def getFirstSeen(t1=None, t2=None):
684 | if t1 is None or t2 is None:
685 | return False
686 | rec = f's:{t1.lower()}:{t2.lower()}'
687 | for rr in rrset:
688 | if (rr['Value']) is not None and rr['Value'] in rrset_supported:
689 | qrec = f'{rec}:{rr["Value"]}'
690 | recget = r.get(qrec)
691 | if recget is not None:
692 | return int(recget.decode(encoding='UTF-8'))
693 |
694 |
695 | def getLastSeen(t1=None, t2=None):
696 | if t1 is None or t2 is None:
697 | return False
698 | rec = f'l:{t1.lower()}:{t2.lower()}'
699 | for rr in rrset:
700 | if (rr['Value']) is not None and rr['Value'] in rrset_supported:
701 | qrec = f'{rec}:{rr["Value"]}'
702 | recget = r.get(qrec)
703 | if recget is not None:
704 | return int(recget.decode(encoding='UTF-8'))
705 |
706 |
707 | def getCount(t1=None, t2=None):
708 | if t1 is None or t2 is None:
709 | return False
710 | rec = f'o:{t1.lower()}:{t2.lower()}'
711 | for rr in rrset:
712 | if (rr['Value']) is not None and rr['Value'] in rrset_supported:
713 | qrec = f'{rec}:{rr["Value"]}'
714 | recget = r.get(qrec)
715 | if recget is not None:
716 | return int(recget.decode(encoding='UTF-8'))
717 |
718 |
719 | def getRecord(t=None):
720 | if t is None:
721 | return False
722 | rrfound = []
723 | for rr in rrset:
724 | if (rr['Value']) is not None and rr['Value'] in rrset_supported:
725 | rec = f'r:{t}:{rr["Value"]}'
726 | setsize = r.scard(rec)
727 | if setsize < 200:
728 | rs = r.smembers(rec)
729 | else:
730 | # TODO: improve with a new API end-point with SSCAN
731 | # rs = r.srandmember(rec, number=300)
732 | rs = False
733 |
734 | if rs:
735 | for v in rs:
736 | rrval = {}
737 | rdata = v.decode(encoding='UTF-8').strip()
738 | rrval['time_first'] = getFirstSeen(t1=t, t2=rdata)
739 | rrval['time_last'] = getLastSeen(t1=t, t2=rdata)
740 |                     if rrval['time_first'] is None:
741 |                         continue  # skip members without a first-seen entry
742 | rrval['count'] = getCount(t1=t, t2=rdata)
743 | rrval['rrtype'] = rr['Type']
744 | rrval['rrname'] = t
745 | rrval['rdata'] = rdata
746 | if origin:
747 | rrval['origin'] = origin
748 | rrfound.append(rrval)
749 | return rrfound
750 |
751 |
752 | def getAssociatedRecords(rdata=None):
753 | if rdata is None:
754 | return False
755 | rec = f'v:{rdata.lower()}'
756 | records = []
757 | for rr in rrset:
758 | if (rr['Value']) is not None and rr['Value'] in rrset_supported:
759 | qrec = f'{rec}:{rr["Value"]}'
760 | if r.smembers(qrec):
761 | for v in r.smembers(qrec):
762 | records.append(v.decode(encoding='UTF-8'))
763 | return records
764 |
765 |
766 | def RemDuplicate(d=None):
767 | if d is None:
768 | return False
769 |     outd = [dict(t) for t in {tuple(o.items()) for o in d}]
770 | return outd
771 |
772 |
773 | def JsonQOF(rrfound=None, RemoveDuplicate=True):
774 | if rrfound is None:
775 | return False
776 | rrqof = ""
777 |
778 | if RemoveDuplicate:
779 | rrfound = RemDuplicate(d=rrfound)
780 |
781 | for rr in rrfound:
782 | rrqof = rrqof + json.dumps(rr) + "\n"
783 | return rrqof
784 |
785 |
786 | class InfoHandler(tornado.web.RequestHandler):
787 | def get(self):
788 |         stats = int(r.get("stats:processed") or 0)
789 | response = {'version': 'git', 'software': 'analyzer-d4-passivedns'}
790 | response['stats'] = stats
791 | sensors = r.zrevrange('stats:sensors', 0, -1, withscores=True)
792 | rsensors = []
793 | for x in sensors:
794 | d = dict()
795 | d['sensor_id'] = x[0].decode()
796 | d['count'] = int(float(x[1]))
797 | rsensors.append(d)
798 | response['sensors'] = rsensors
799 | self.write(response)
800 |
801 |
802 | class QueryHandler(tornado.web.RequestHandler):
803 | def get(self, q):
804 | print(f'query: {q}')
805 | if iptools.ipv4.validate_ip(q) or iptools.ipv6.validate_ip(q):
806 | for x in getAssociatedRecords(q):
807 | self.write(JsonQOF(getRecord(x)))
808 | else:
809 | self.write(JsonQOF(getRecord(t=q.strip())))
810 |
811 |
812 | class FullQueryHandler(tornado.web.RequestHandler):
813 | def get(self, q):
814 | print(f'fquery: {q}')
815 | if iptools.ipv4.validate_ip(q) or iptools.ipv6.validate_ip(q):
816 | for x in getAssociatedRecords(q):
817 | self.write(JsonQOF(getRecord(x)))
818 | else:
819 | for x in getAssociatedRecords(q):
820 | self.write(JsonQOF(getRecord(t=x.strip())))
821 |
822 |
823 | application = tornado.web.Application(
824 | [
825 | (r"/query/(.*)", QueryHandler),
826 | (r"/fquery/(.*)", FullQueryHandler),
827 | (r"/info", InfoHandler),
828 | ]
829 | )
830 |
831 | if __name__ == "test":
832 |
833 | qq = ["foo.be", "8.8.8.8"]
834 |
835 | for q in qq:
836 | if iptools.ipv4.validate_ip(q) or iptools.ipv6.validate_ip(q):
837 | for x in getAssociatedRecords(q):
838 | print(JsonQOF(getRecord(x)))
839 | else:
840 | print(JsonQOF(getRecord(t=q)))
841 | else:
842 | application.listen(8400)
843 | tornado.ioloop.IOLoop.instance().start()
844 |
--------------------------------------------------------------------------------
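The handlers above reply with newline-delimited JSON in COF format. A minimal client sketch (assumptions: the server listens on 127.0.0.1:8400 as in `application.listen(8400)`; `pdns_query` and `parse_cof_lines` are illustrative helper names, not part of the project):

```python
# Minimal client sketch for the HTTP API exposed by the server above.
# Assumption: server on 127.0.0.1:8400; helper names are illustrative.
import json
import urllib.request


def parse_cof_lines(body):
    """Parse a newline-delimited JSON (COF) reply into a list of dicts."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]


def pdns_query(q, host='127.0.0.1', port=8400):
    """Query /query/<q> and return the parsed COF records."""
    with urllib.request.urlopen(f'http://{host}:{port}/query/{q}') as resp:
        return parse_cof_lines(resp.read().decode('utf-8'))
```

For example, `pdns_query('foo.be')` would return a list of records carrying the `rrname`, `rrtype`, `rdata`, `time_first`, `time_last` and `count` fields produced by `getRecord()`.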
/bin/pdns-import-cof.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | #
3 | # pdns-import-cof is a simple importer for the Passive DNS COF format (NDJSON)
4 | # which imports the records into a Passive DNS backend
5 | #
6 | # This software is part of the D4 project.
7 | #
8 | # The software is released under the GNU Affero General Public version 3.
9 | #
10 | # Copyright (c) 2019-2022 Alexandre Dulaunoy - a@foo.be
11 | # Copyright (c) 2019 Computer Incident Response Center Luxembourg (CIRCL)
12 |
13 |
14 | import redis
15 | import json
16 | import logging
17 | import sys
18 | import argparse
19 | import os
20 | import ndjson
21 |
22 | # note: this requires the 'websocket-client' package, not 'websocket'
23 | import websocket
24 |
25 | parser = argparse.ArgumentParser(
26 | description='Import array of standard Passive DNS cof format into your Passive DNS server'
27 | )
28 | parser.add_argument('--file', dest='filetoimport', help='JSON file to import')
29 | parser.add_argument(
30 | '--websocket', dest='websocket', help='Import from a websocket stream'
31 | )
32 | args = parser.parse_args()
33 |
34 |
35 | logger = logging.getLogger('pdns ingestor')
36 | ch = logging.StreamHandler()
37 | logger.setLevel(logging.DEBUG)
38 | formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
39 | ch.setFormatter(formatter)
40 | logger.addHandler(ch)
41 |
42 | logger.info("Starting COF ingestor")
43 |
44 | analyzer_redis_host = os.getenv('D4_ANALYZER_REDIS_HOST', '127.0.0.1')
45 | analyzer_redis_port = int(os.getenv('D4_ANALYZER_REDIS_PORT', 6400))
46 |
47 | r = redis.Redis(host=analyzer_redis_host, port=analyzer_redis_port)
48 |
49 | excludesubstrings = ['spamhaus.org', 'asn.cymru.com']
50 | with open('../etc/records-type.json') as rtypefile:
51 | rtype = json.load(rtypefile)
52 |
53 | dnstype = {}
54 |
55 | stats = True
56 |
57 | for v in rtype:
58 | dnstype[(v['type'])] = v['value']
59 |
60 | expiration = None
61 | if not args.filetoimport and not args.websocket:
62 | parser.print_help()
63 | sys.exit(0)
64 |
65 |
66 | def add_record(rdns=None):
67 | if rdns is None:
68 | return False
69 | logger.debug("parsed record: {}".format(rdns))
70 | if 'rrname' not in rdns:
71 | logger.debug(
72 | 'Parsing of passive DNS line is incomplete: {}'.format(rdns.strip())
73 | )
74 | return False
75 | if rdns['rrname'] and rdns['rrtype']:
76 | rdns['type'] = dnstype[rdns['rrtype']]
77 | rdns['v'] = rdns['rdata']
78 | excludeflag = False
79 | for exclude in excludesubstrings:
80 | if exclude in rdns['rrname']:
81 | excludeflag = True
82 | if excludeflag:
83 | logger.debug('Excluded {}'.format(rdns['rrname']))
84 | return False
85 | if rdns['type'] == '16':
86 | rdns['v'] = rdns['v'].replace("\"", "", 1)
87 | query = "r:{}:{}".format(rdns['rrname'], rdns['type'])
88 | logger.debug('redis sadd: {} -> {}'.format(query, rdns['v']))
89 | r.sadd(query, rdns['v'])
90 | res = "v:{}:{}".format(rdns['v'], rdns['type'])
91 | logger.debug('redis sadd: {} -> {}'.format(res, rdns['rrname']))
92 | r.sadd(res, rdns['rrname'])
93 |
94 | firstseen = "s:{}:{}:{}".format(rdns['rrname'], rdns['v'], rdns['type'])
95 | if not r.exists(firstseen):
96 | r.set(firstseen, int(float(rdns['time_first'])))
97 | logger.debug('redis set: {} -> {}'.format(firstseen, rdns['time_first']))
98 |
99 | lastseen = "l:{}:{}:{}".format(rdns['rrname'], rdns['v'], rdns['type'])
100 | last = r.get(lastseen)
101 | if last is None or int(float(last)) < int(float(rdns['time_last'])):
102 | r.set(lastseen, int(float(rdns['time_last'])))
103 | logger.debug('redis set: {} -> {}'.format(lastseen, rdns['time_last']))
104 |
105 | occ = "o:{}:{}:{}".format(rdns['rrname'], rdns['v'], rdns['type'])
106 | if 'count' in rdns:
107 | r.set(occ, rdns['count'])
108 | else:
109 | r.incrby(occ, amount=1)
110 |
111 | if stats:
112 | r.incrby('stats:processed', amount=1)
113 | r.sadd('sensors:seen', rdns["sensor_id"])
114 | r.zincrby('stats:sensors', 1, rdns["sensor_id"])
115 |     else:
116 |         logger.info('empty passive dns record')
117 |         return False
118 |
119 |
120 | def on_open(ws):
121 | logger.debug('[websocket] connection open')
122 |
123 |
124 | def on_close(ws):
125 | logger.debug('[websocket] connection closed')
126 |
127 |
128 | def on_message(ws, message):
129 | logger.debug('Message received via websocket')
130 | add_record(rdns=json.loads(message))
131 |
132 |
133 | if args.filetoimport:
134 | with open(args.filetoimport, "r") as dnsimport:
135 | reader = ndjson.load(dnsimport)
136 | for rdns in reader:
137 | add_record(rdns=rdns)
138 | elif args.websocket:
139 | ws = websocket.WebSocketApp(
140 | args.websocket, on_open=on_open, on_close=on_close, on_message=on_message
141 | )
142 | ws.run_forever()
143 |
--------------------------------------------------------------------------------
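`add_record()` above spreads one COF record over five kinds of Redis keys. A pure-Python sketch of that layout (the helper `cof_to_keys()` and its flattened return shape are illustrative; the `r:`, `v:`, `s:`, `l:`, `o:` prefixes come from the importer itself):

```python
# Sketch of the Redis key layout written by add_record() above.
# cof_to_keys() is an illustrative helper; prefixes mirror the importer:
#   r:<rrname>:<type>                 set of rdata values
#   v:<rdata>:<type>                  reverse set of rrnames
#   s:/l:/o:<rrname>:<rdata>:<type>   first-seen / last-seen / count
def cof_to_keys(rdns, dnstype):
    t = dnstype[rdns['rrtype']]
    name, value = rdns['rrname'], rdns['rdata']
    return {
        f'r:{name}:{t}': value,
        f'v:{value}:{t}': name,
        f's:{name}:{value}:{t}': int(float(rdns['time_first'])),
        f'l:{name}:{value}:{t}': int(float(rdns['time_last'])),
        f'o:{name}:{value}:{t}': rdns.get('count', 1),
    }


example = {'rrname': 'foo.be', 'rrtype': 'A', 'rdata': '192.0.2.1',
           'time_first': 1546300800, 'time_last': 1650000000, 'count': 42}
keys = cof_to_keys(example, {'A': '1'})
```

In the real importer `r:` and `v:` are Redis sets (`SADD`) while `s:`, `l:` and `o:` are plain string keys (`SET`/`INCRBY`).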
/bin/pdns-import.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | #
3 | # pdns-import is a simple importer for the Passive DNS COF format (in an array)
4 | # which imports the records into a Passive DNS backend
5 | #
6 | # This software is part of the D4 project.
7 | #
8 | # The software is released under the GNU Affero General Public version 3.
9 | #
10 | # Copyright (c) 2019 Alexandre Dulaunoy - a@foo.be
11 | # Copyright (c) Computer Incident Response Center Luxembourg (CIRCL)
12 |
13 |
14 | import re
15 | import redis
16 | import fileinput
17 | import json
18 | import configparser
19 | import time
20 | import logging
21 | import sys
22 | import argparse
23 | import os
24 |
25 | parser = argparse.ArgumentParser(description='Import array of standard Passive DNS cof format into your Passive DNS server')
26 | parser.add_argument('--file', dest='filetoimport', help='JSON file to import')
27 | args = parser.parse_args()
28 |
29 | config = configparser.RawConfigParser()
30 | config.read('../etc/analyzer.conf')
31 |
32 | expirations = config.items('expiration')
33 | excludesubstrings = config.get('exclude', 'substring').split(',')
34 | myuuid = config.get('global', 'my-uuid')
35 | myqueue = "analyzer:8:{}".format(myuuid)
36 | mylogginglevel = config.get('global', 'logging-level')
37 | logger = logging.getLogger('pdns ingestor')
38 | ch = logging.StreamHandler()
39 | if mylogginglevel == 'DEBUG':
40 | logger.setLevel(logging.DEBUG)
41 | ch.setLevel(logging.DEBUG)
42 | elif mylogginglevel == 'INFO':
43 | logger.setLevel(logging.INFO)
44 | ch.setLevel(logging.INFO)
45 | formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
46 | ch.setFormatter(formatter)
47 | logger.addHandler(ch)
48 |
49 | logger.info("Starting and using FIFO {} from D4 server".format(myqueue))
50 |
51 | analyzer_redis_host = os.getenv('D4_ANALYZER_REDIS_HOST', '127.0.0.1')
52 | analyzer_redis_port = int(os.getenv('D4_ANALYZER_REDIS_PORT', 6400))
53 |
54 | d4_server, d4_port = config.get('global', 'd4-server').split(':')
55 | host_redis_metadata = os.getenv('D4_REDIS_METADATA_HOST', d4_server)
56 | port_redis_metadata = int(os.getenv('D4_REDIS_METADATA_PORT', d4_port))
57 |
58 | r = redis.Redis(host=analyzer_redis_host, port=analyzer_redis_port)
59 | r_d4 = redis.Redis(host=host_redis_metadata, port=port_redis_metadata, db=2)
60 |
61 | with open('../etc/records-type.json') as rtypefile:
62 | rtype = json.load(rtypefile)
63 |
64 | dnstype = {}
65 |
66 | stats = True
67 |
68 | for v in rtype:
69 | dnstype[(v['type'])] = v['value']
70 |
71 | expiration = None
72 | if not args.filetoimport:
73 | parser.print_help()
74 | sys.exit(0)
75 | with open(args.filetoimport) as dnsimport:
76 | records = json.load(dnsimport)
77 |
78 | print(records)
79 | for rdns in records:
80 |     logger.debug("parsed record: {}".format(rdns))
81 | if 'rrname' not in rdns:
82 |         logger.debug('Parsing of passive DNS line is incomplete: {}'.format(rdns))
83 | continue
84 | if rdns['rrname'] and rdns['rrtype']:
85 | rdns['type'] = dnstype[rdns['rrtype']]
86 | rdns['v'] = rdns['rdata']
87 | excludeflag = False
88 | for exclude in excludesubstrings:
89 | if exclude in rdns['rrname']:
90 | excludeflag = True
91 | if excludeflag:
92 | logger.debug('Excluded {}'.format(rdns['rrname']))
93 | continue
94 | if rdns['type'] == '16':
95 | rdns['v'] = rdns['v'].replace("\"", "", 1)
96 |         query = "r:{}:{}".format(rdns['rrname'], rdns['type'])
97 |         logger.debug('redis sadd: {} -> {}'.format(query, rdns['v']))
98 | r.sadd(query, rdns['v'])
99 | res = "v:{}:{}".format(rdns['v'], rdns['type'])
100 |         logger.debug('redis sadd: {} -> {}'.format(res, rdns['rrname']))
101 | r.sadd(res, rdns['rrname'])
102 |
103 | firstseen = "s:{}:{}:{}".format(rdns['rrname'], rdns['v'], rdns['type'])
104 | if not r.exists(firstseen):
105 |             r.set(firstseen, int(float(rdns['time_first'])))
106 | logger.debug('redis set: {} -> {}'.format(firstseen, rdns['time_first']))
107 |
108 |
109 | lastseen = "l:{}:{}:{}".format(rdns['rrname'], rdns['v'], rdns['type'])
110 | last = r.get(lastseen)
111 |         if last is None or int(float(last)) < int(float(rdns['time_last'])):
112 |             r.set(lastseen, int(float(rdns['time_last'])))
113 | logger.debug('redis set: {} -> {}'.format(lastseen, rdns['time_last']))
114 |
115 | occ = "o:{}:{}:{}".format(rdns['rrname'], rdns['v'], rdns['type'])
116 | r.set(occ, rdns['count'])
117 |
118 |
119 | if stats:
120 | r.incrby('stats:processed', amount=1)
121 |         if not rdns:
122 | logger.info('empty passive dns record')
123 | continue
124 |
--------------------------------------------------------------------------------
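All three scripts build the same `dnstype` lookup from `etc/records-type.json`. A reduced inline equivalent (the file's shape is assumed to be a list of `{"type": ..., "value": ...}` entries, as the loading loop implies; the values follow the IANA DNS record type numbers):

```python
# Reduced inline equivalent of the dnstype lookup built from
# ../etc/records-type.json (subset shown; shape assumed from the scripts).
rtype = [
    {"type": "A", "value": "1"},
    {"type": "NS", "value": "2"},
    {"type": "CNAME", "value": "5"},
    {"type": "TXT", "value": "16"},
    {"type": "AAAA", "value": "28"},
]
dnstype = {v['type']: v['value'] for v in rtype}
```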
/bin/pdns-ingestion.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | #
3 | # pdns-ingestion is the D4 analyzer for the Passive DNS backend.
4 | #
5 | # This software parses input (via a Redis list) from a D4 server and
6 | # ingests it into a Redis-compliant server to serve the records for
7 | # the passive DNS at a later stage.
8 | #
9 | # This software is part of the D4 project.
10 | #
11 | # The software is released under the GNU Affero General Public version 3.
12 | #
13 | # Copyright (c) 2019 Alexandre Dulaunoy - a@foo.be
14 | # Copyright (c) Computer Incident Response Center Luxembourg (CIRCL)
15 |
16 |
17 | import re
18 | import redis
19 | import fileinput
20 | import json
21 | import configparser
22 | import time
23 | import logging
24 | import sys
25 | import os
26 |
27 | config = configparser.RawConfigParser()
28 | config.read('../etc/analyzer.conf')
29 |
30 | expirations = config.items('expiration')
31 | excludesubstrings = config.get('exclude', 'substring').split(',')
32 | myuuid = config.get('global', 'my-uuid')
33 | myqueue = "analyzer:8:{}".format(myuuid)
34 | mylogginglevel = config.get('global', 'logging-level')
35 | logger = logging.getLogger('pdns ingestor')
36 | ch = logging.StreamHandler()
37 | if mylogginglevel == 'DEBUG':
38 | logger.setLevel(logging.DEBUG)
39 | ch.setLevel(logging.DEBUG)
40 | elif mylogginglevel == 'INFO':
41 | logger.setLevel(logging.INFO)
42 | ch.setLevel(logging.INFO)
43 | formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
44 | ch.setFormatter(formatter)
45 | logger.addHandler(ch)
46 |
47 | logger.info("Starting and using FIFO {} from D4 server".format(myqueue))
48 |
49 | analyzer_redis_host = os.getenv('D4_ANALYZER_REDIS_HOST', '127.0.0.1')
50 | analyzer_redis_port = int(os.getenv('D4_ANALYZER_REDIS_PORT', 6400))
51 |
52 | d4_server, d4_port = config.get('global', 'd4-server').split(':')
53 | host_redis_metadata = os.getenv('D4_REDIS_METADATA_HOST', d4_server)
54 | port_redis_metadata = int(os.getenv('D4_REDIS_METADATA_PORT', d4_port))
55 |
56 | r = redis.Redis(host=analyzer_redis_host, port=analyzer_redis_port)
57 | r_d4 = redis.Redis(host=host_redis_metadata, port=port_redis_metadata, db=2)
58 |
59 |
60 | with open('../etc/records-type.json') as rtypefile:
61 | rtype = json.load(rtypefile)
62 |
63 | dnstype = {}
64 |
65 | stats = True
66 |
67 | for v in rtype:
68 | dnstype[(v['type'])] = v['value']
69 |
70 |
71 | def process_format_passivedns(line=None):
72 | # log line example
73 | # timestamp||ip-src||ip-dst||class||q||type||v||ttl||count
74 | # 1548624738.280922||192.168.1.12||8.8.8.8||IN||www-google-analytics.l.google.com.||AAAA||2a00:1450:400e:801::200e||299||12
75 |     vkey = ['timestamp', 'ip-src', 'ip-dst', 'class', 'q', 'type', 'v', 'ttl', 'count']
76 | record = {}
77 | if line is None or line == '':
78 | return False
79 | v = line.split("||")
80 | i = 0
81 | for r in v:
82 | # trailing dot is removed and avoid case sensitivity
83 | if i == 4 or i == 6:
84 | r = r.lower().strip('.')
85 | # timestamp is just epoch - second precision is only required
86 | if i == 0:
87 | r = r.split('.')[0]
88 | record[vkey[i]] = r
89 | # replace DNS type with the known DNS record type value
90 | if i == 5:
91 | record[vkey[i]] = dnstype[r]
92 | i = i + 1
93 | return record
94 |
95 |
96 | while True:
97 | expiration = None
98 | d4_record_line = r_d4.rpop(myqueue)
99 | if d4_record_line is None:
100 |         time.sleep(1)
101 | continue
102 | l = d4_record_line.decode('utf-8')
103 | rdns = process_format_passivedns(line=l.strip())
104 | logger.debug("parsed record: {}".format(rdns))
105 | if rdns is False:
106 | logger.debug('Parsing of passive DNS line failed: {}'.format(l.strip()))
107 | continue
108 | if 'q' not in rdns:
109 | logger.debug('Parsing of passive DNS line is incomplete: {}'.format(l.strip()))
110 | continue
111 | if rdns['q'] and rdns['type']:
112 | excludeflag = False
113 | for exclude in excludesubstrings:
114 | if exclude in rdns['q']:
115 | excludeflag = True
116 | if excludeflag:
117 | logger.debug('Excluded {}'.format(rdns['q']))
118 | continue
119 | for y in expirations:
120 | if y[0] == rdns['type']:
121 |             expiration = y[1]
122 | if rdns['type'] == '16':
123 | rdns['v'] = rdns['v'].replace("\"", "", 1)
124 |         query = "r:{}:{}".format(rdns['q'], rdns['type'])
125 |         logger.debug('redis sadd: {} -> {}'.format(query, rdns['v']))
126 | r.sadd(query, rdns['v'])
127 | if expiration:
128 | logger.debug("Expiration {} {}".format(expiration, query))
129 | r.expire(query, expiration)
130 | res = "v:{}:{}".format(rdns['v'], rdns['type'])
131 |         logger.debug('redis sadd: {} -> {}'.format(res, rdns['q']))
132 | r.sadd(res, rdns['q'])
133 | if expiration:
134 |             logger.debug("Expiration {} {}".format(expiration, res))
135 | r.expire(res, expiration)
136 |
137 | firstseen = "s:{}:{}:{}".format(rdns['q'], rdns['v'], rdns['type'])
138 | if not r.exists(firstseen):
139 | r.set(firstseen, rdns['timestamp'])
140 | logger.debug('redis set: {} -> {}'.format(firstseen, rdns['timestamp']))
141 |
142 | if expiration:
143 |             logger.debug("Expiration {} {}".format(expiration, firstseen))
144 | r.expire(firstseen, expiration)
145 |
146 | lastseen = "l:{}:{}:{}".format(rdns['q'], rdns['v'], rdns['type'])
147 | last = r.get(lastseen)
148 | if last is None or int(last) < int(rdns['timestamp']):
149 | r.set(lastseen, rdns['timestamp'])
150 | logger.debug('redis set: {} -> {}'.format(lastseen, rdns['timestamp']))
151 | if expiration:
152 |             logger.debug("Expiration {} {}".format(expiration, lastseen))
153 |             r.expire(lastseen, expiration)
154 |
155 | occ = "o:{}:{}:{}".format(rdns['q'], rdns['v'], rdns['type'])
156 | r.incr(occ, amount=1)
157 | if expiration:
158 |             logger.debug("Expiration {} {}".format(expiration, occ))
159 | r.expire(occ, expiration)
160 |
161 |
162 |
163 | # TTL, Class, DNS Type distribution stats
164 | if 'ttl' in rdns:
165 | r.hincrby('dist:ttl', rdns['ttl'], amount=1)
166 | if 'class' in rdns:
167 | r.hincrby('dist:class', rdns['class'], amount=1)
168 | if 'type' in rdns:
169 | r.hincrby('dist:type', rdns['type'], amount=1)
170 | if stats:
171 | r.incrby('stats:processed', amount=1)
172 |         if not rdns:
173 | logger.info('empty passive dns record')
174 | continue
175 |
--------------------------------------------------------------------------------
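`process_format_passivedns()` above parses the D4 `||`-separated log line documented in its comment. A standalone sketch of the same format, fed with the sample line from that comment (the `dnstype` mapping is reduced to the single type the sample uses):

```python
# Standalone sketch of the '||'-separated record format handled by
# process_format_passivedns() above; dnstype reduced to the sample's type.
VKEY = ['timestamp', 'ip-src', 'ip-dst', 'class', 'q', 'type', 'v', 'ttl', 'count']


def parse_pdns_line(line, dnstype):
    if not line:
        return False
    record = {}
    for i, field in enumerate(line.split('||')):
        if i in (4, 6):            # query/value: lowercase, drop trailing dot
            field = field.lower().strip('.')
        if i == 0:                 # epoch timestamp: keep second precision only
            field = field.split('.')[0]
        # replace the DNS type mnemonic with its numeric record type value
        record[VKEY[i]] = dnstype[field] if i == 5 else field
    return record


sample = ('1548624738.280922||192.168.1.12||8.8.8.8||IN||'
          'www-google-analytics.l.google.com.||AAAA||'
          '2a00:1450:400e:801::200e||299||12')
rec = parse_pdns_line(sample, {'AAAA': '28'})
```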
/etc/analyzer.conf.sample:
--------------------------------------------------------------------------------
1 | [global]
2 | my-uuid = 6a2461ce-c29d-44fc-b4fa-947d68826639
3 | d4-server = 127.0.0.1:6380
4 | # INFO|DEBUG
5 | logging-level = INFO
6 | [expiration]
7 | 16 = 24000
8 | 99 = 26000
9 | [exclude]
10 | substring = spamhaus.org,asn.cymru.com
11 |
--------------------------------------------------------------------------------
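The importer and ingestor scripts consume this file with `configparser`. A minimal sketch of how the sample above is read (`read_string()` stands in for the scripts' `config.read('../etc/analyzer.conf')`):

```python
# Sketch of how the analyzer scripts consume analyzer.conf;
# read_string() stands in for config.read('../etc/analyzer.conf').
import configparser

sample = """\
[global]
my-uuid = 6a2461ce-c29d-44fc-b4fa-947d68826639
d4-server = 127.0.0.1:6380
logging-level = INFO
[expiration]
16 = 24000
99 = 26000
[exclude]
substring = spamhaus.org,asn.cymru.com
"""

config = configparser.RawConfigParser()
config.read_string(sample)

excludesubstrings = config.get('exclude', 'substring').split(',')
d4_server, d4_port = config.get('global', 'd4-server').split(':')
expirations = config.items('expiration')  # [('16', '24000'), ('99', '26000')]
```

The `[expiration]` keys are DNS record type values (e.g. `16` is TXT) mapped to a key TTL in seconds; `[exclude] substring` is a comma-separated list of substrings that cause a record to be skipped.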
/etc/kvrocks.conf:
--------------------------------------------------------------------------------
1 | ################################ GENERAL #####################################
2 |
3 | # By default kvrocks listens for connections from all the network interfaces
4 | # available on the server. It is possible to listen to just one or multiple
5 | # interfaces using the "bind" configuration directive, followed by one or
6 | # more IP addresses.
7 | #
8 | # Examples:
9 | #
10 | # bind 192.168.1.100 10.0.0.1
11 | # bind 127.0.0.1
12 | bind 127.0.0.1
13 |
14 | # Accept connections on the specified port, default is 6666.
15 | port 6400
16 |
17 | # Close the connection after a client is idle for N seconds (0 to disable)
18 | timeout 0
19 |
20 | # The number of worker threads; increasing or decreasing it will affect performance.
21 | workers 8
22 |
23 | # By default kvrocks does not run as a daemon. Use 'yes' if you need it.
24 | # Note that kvrocks will write a pid file in /var/run/kvrocks.pid when daemonized.
25 | daemonize no
26 |
27 | # Kvrocks implements a cluster solution similar to the Redis cluster solution.
28 | # You can get cluster information with the CLUSTER NODES|SLOTS|INFO commands, and it is
29 | # also compatible with redis-cli, redis-benchmark, Redis cluster SDKs and Redis cluster proxies.
30 | # But kvrocks nodes don't communicate with each other, so you must set the
31 | # cluster topology with the CLUSTER SETNODES|SETNODEID commands, more details: #219.
32 | #
33 | # PLEASE NOTE:
34 | # If you enable cluster mode, kvrocks will encode each key with its slot id calculated by
35 | # CRC16 modulo 16384; encoding keys with their slot id makes it efficient to
36 | # migrate keys based on slots. So once cluster mode is enabled, it must
37 | # not be disabled after restarting, and vice versa. That is to say, data is not
38 | # compatible between standalone mode and cluster mode; you must migrate the data
39 | # if you want to change mode, otherwise kvrocks will corrupt the data.
40 | #
41 | # Default: no
42 | cluster-enabled no
43 |
44 | # Set the max number of connected clients at the same time. By default
45 | # this limit is set to 10000 clients, however if the server is not
46 | # able to configure the process file limit to allow for the specified limit
47 | # the max number of allowed clients is set to the current file limit
48 | #
49 | # Once the limit is reached the server will close all the new connections sending
50 | # an error 'max number of clients reached'.
51 | #
52 | maxclients 10000
53 |
54 | # Require clients to issue AUTH before processing any other
55 | # commands. This might be useful in environments in which you do not trust
56 | # others with access to the host running kvrocks.
57 | #
58 | # This should stay commented out for backward compatibility and because most
59 | # people do not need auth (e.g. they run their own servers).
60 | #
61 | # Warning: since kvrocks is pretty fast an outside user can try up to
62 | # 150k passwords per second against a good box. This means that you should
63 | # use a very strong password otherwise it will be very easy to break.
64 | #
65 | # requirepass foobared
66 |
67 | # If the master is password protected (using the "masterauth" configuration
68 | # directive below) it is possible to tell the slave to authenticate before
69 | # starting the replication synchronization process, otherwise the master will
70 | # refuse the slave request.
71 | #
72 | # masterauth foobared
73 |
74 | # Master-Slave replication checks that the db name matches; if not, the slave
75 | # refuses to sync the db from the master. Don't use the default value; set the db-name
76 | # to identify the cluster.
77 | db-name d4-pdns.db
78 |
79 | # The working directory
80 | #
81 | # The DB will be written inside this directory
82 | # Note that you must specify a directory here, not a file name.
83 | #dir /tmp/kvrocks
84 |
85 | # The logs of the server will be stored in this directory. If you don't specify
86 | # a directory, by default, we store logs in the working directory that is set
87 | # by 'dir' above.
88 | # log-dir /tmp/kvrocks
89 |
90 | # When running daemonized, kvrocks writes a pid file in ${CONFIG_DIR}/kvrocks.pid by
91 | # default. You can specify a custom pid file location here.
92 | # pidfile /var/run/kvrocks.pid
93 | pidfile ""
94 |
95 | # You can configure a slave instance to accept writes or not. Writing against
96 | # a slave instance may be useful to store some ephemeral data (because data
97 | # written on a slave will be easily deleted after resync with the master) but
98 | # may also cause problems if clients are writing to it because of a
99 | # misconfiguration.
100 | slave-read-only yes
101 |
102 | # The slave priority is an integer number published by Kvrocks in the INFO output.
103 | # It is used by Redis Sentinel in order to select a slave to promote into a
104 | # master if the master is no longer working correctly.
105 | #
106 | # A slave with a low priority number is considered better for promotion, so
107 | # for instance if there are three slaves with priority 10, 100, 25, Sentinel will
108 | # pick the one with priority 10, that is the lowest.
109 | #
110 | # However a special priority of 0 marks the replica as not able to perform the
111 | # role of master, so a slave with priority of 0 will never be selected by
112 | # Redis Sentinel for promotion.
113 | #
114 | # By default the priority is 100.
115 | slave-priority 100
116 |
117 | # TCP listen() backlog.
118 | #
119 | # In high requests-per-second environments you need a high backlog in order
120 | # to avoid slow client connection issues. Note that the Linux kernel
121 | # will silently truncate it to the value of /proc/sys/net/core/somaxconn, so
122 | # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
123 | # in order to get the desired effect.
124 | tcp-backlog 511
125 |
126 | # If the master is an old version, it may have specified replication threads
127 | # that use 'port + 1' as listening port, but in new versions, we don't use
128 | # extra port to implement replication. In order to allow the new replicas to
129 | # copy old masters, you should indicate that the master uses replication port
130 | # or not.
131 | # If yes, that indicates the master uses a replication port and replicas will connect
132 | # to 'master's listening port + 1' when synchronizing.
133 | # If no, that indicates the master doesn't use a replication port and replicas will
134 | # connect to 'master's listening port' when synchronizing.
135 | master-use-repl-port no
136 |
137 | # Master-Slave replication. Use slaveof to make a kvrocks instance a copy of
138 | # another kvrocks server. A few things to understand ASAP about kvrocks replication.
139 | #
140 | # 1) Kvrocks replication is asynchronous, but you can configure a master to
141 | # stop accepting writes if it appears to be not connected with at least
142 | # a given number of slaves.
143 | # 2) Kvrocks slaves are able to perform a partial resynchronization with the
144 | # master if the replication link is lost for a relatively small amount of
145 | # time. You may want to configure the replication backlog size (see the next
146 | # sections of this file) with a sensible value depending on your needs.
147 | # 3) Replication is automatic and does not need user intervention. After a
148 | # network partition slaves automatically try to reconnect to masters
149 | # and resynchronize with them.
150 | #
151 | # slaveof
152 | # slaveof 127.0.0.1 6379
153 |
154 | # When a slave loses its connection with the master, or when the replication
155 | # is still in progress, the slave can act in two different ways:
156 | #
157 | # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
158 | # still reply to client requests, possibly with out of date data, or the
159 | # data set may just be empty if this is the first synchronization.
160 | #
161 | # 2) if slave-serve-stale-data is set to 'no' the slave will reply with
162 | # an error "SYNC with master in progress" to all the kind of commands
163 | # but to INFO and SLAVEOF.
164 | #
165 | slave-serve-stale-data yes
166 |
167 | # To keep a slave's data safe and able to serve while it is in the full synchronization
168 | # state, the slave keeps its own data. But this occupies a lot of disk
169 | # space, so we provide a way to reduce disk usage: the slave deletes its
170 | # entire database before fetching files from the master during full synchronization.
171 | # If you want to enable this behaviour, you can set 'slave-empty-db-before-fullsync'
172 | # to yes, but be aware that the database will be lost if the master goes down during
173 | # full synchronization, unless you have a backup of the database.
174 | #
175 | # This option is similar to the Redis replica RDB diskless-load option:
176 | # repl-diskless-load on-empty-db
177 | #
178 | # Default: no
179 | slave-empty-db-before-fullsync no
180 |
181 | # If replicas need a full synchronization with the master, the master needs to create
182 | # a checkpoint for feeding the replicas, and the replicas also stage a checkpoint of
183 | # the master. If we also keep the backup, it may occupy extra disk space.
184 | # You can enable 'purge-backup-on-fullsync' if disk space is insufficient, but
185 | # that may cause the remote backup copy to fail.
186 | #
187 | # Default: no
188 | purge-backup-on-fullsync no
189 |
190 | # The maximum allowed rate (in MB/s) that should be used by Replication.
191 | # If the rate exceeds max-replication-mb, replication will slow down.
192 | # Default: 0 (i.e. no limit)
193 | max-replication-mb 0
194 |
195 | # The maximum allowed aggregated write rate of flush and compaction (in MB/s).
196 | # If the rate exceeds max-io-mb, io will slow down.
197 | # 0 is no limit
198 | # Default: 500
199 | max-io-mb 500
200 |
201 | # The maximum allowed space (in GB) that should be used by RocksDB.
202 | # If the total size of the SST files exceeds max_allowed_space, writes to RocksDB will fail.
203 | # Please see: https://github.com/facebook/rocksdb/wiki/Managing-Disk-Space-Utilization
204 | # Default: 0 (i.e. no limit)
205 | max-db-size 0
206 |
207 | # The maximum number of backups to keep; the server cron runs every minute to check the
208 | # number of current backups, and purges old backups if they exceed the max number to keep.
209 | # If max-backup-to-keep is 0, no backup will be kept. But for now, we only support 0 or 1.
210 | max-backup-to-keep 1
211 |
212 | # The maximum hours to keep the backup. If max-backup-keep-hours is 0, no backup will be purged.
213 | # default: 1 day
214 | max-backup-keep-hours 24
215 |
216 | # max-bitmap-to-string-mb is used to limit the max size of the bitmap-to-string transformation (MB).
217 | #
218 | # Default: 16
219 | max-bitmap-to-string-mb 16
220 |
221 | ################################## SLOW LOG ###################################
222 |
223 | # The Kvrocks Slow Log is a mechanism to log queries that exceeded a specified
224 | # execution time. The execution time does not include the I/O operations
225 | # like talking with the client, sending the reply and so forth,
226 | # but just the time needed to actually execute the command (this is the only
227 | # stage of command execution where the thread is blocked and can not serve
228 | # other requests in the meantime).
229 | #
230 | # You can configure the slow log with two parameters: one tells Kvrocks
231 | # what is the execution time, in microseconds, to exceed in order for the
232 | # command to get logged, and the other parameter is the length of the
233 | # slow log. When a new command is logged the oldest one is removed from the
234 | # queue of logged commands.
235 |
236 | # The following time is expressed in microseconds, so 1000000 is equivalent
237 | # to one second. Note that -1 value disables the slow log, while
238 | # a value of zero forces the logging of every command.
239 | slowlog-log-slower-than 100000
240 |
241 | # There is no limit to this length. Just be aware that it will consume memory.
242 | # You can reclaim memory used by the slow log with SLOWLOG RESET.
243 | slowlog-max-len 128
244 |
245 | # If you run kvrocks from upstart or systemd, kvrocks can interact with your
246 | # supervision tree. Options:
247 | # supervised no - no supervision interaction
248 | # supervised upstart - signal upstart by putting kvrocks into SIGSTOP mode
249 | # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
250 | # supervised auto - detect upstart or systemd method based on
251 | # UPSTART_JOB or NOTIFY_SOCKET environment variables
252 | # Note: these supervision methods only signal "process is ready."
253 | # They do not enable continuous liveness pings back to your supervisor.
254 | supervised no
255 |
256 | ################################## PERF LOG ###################################
257 |
258 | # The Kvrocks Perf Log is a mechanism to log queries' performance context that
259 | # exceeded a specified execution time. This mechanism uses rocksdb's
260 | # Perf Context and IO Stats Context. Please see:
261 | # https://github.com/facebook/rocksdb/wiki/Perf-Context-and-IO-Stats-Context
262 | #
263 | # This mechanism is enabled when profiling-sample-commands is not empty and
264 | # profiling-sample-ratio greater than 0.
265 | # It is important to note that this mechanism affects performance, but it is
266 | # useful for troubleshooting performance bottlenecks, so it should only be
267 | # enabled when performance problems occur.
268 |
269 | # The names of the commands you want to record. They must be original names of
270 | # commands supported by Kvrocks. Use ',' to separate multiple commands and
271 | # use '*' to record all commands supported by Kvrocks.
272 | # Example:
273 | # - Single command: profiling-sample-commands get
274 | # - Multiple commands: profiling-sample-commands get,mget,hget
275 | #
276 | # Default: empty
277 | # profiling-sample-commands ""
278 |
279 | # Ratio of samples that would be recorded. We simply use rand to determine
280 | # whether to record a sample or not.
281 | #
282 | # Default: 0
283 | profiling-sample-ratio 0
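The ratio-based sampling mentioned above can be sketched as a simple random draw. This assumes the ratio is a percentage in [0, 100]; the `should_record` helper is illustrative, not the actual Kvrocks code:

```python
import random

def should_record(sample_ratio):
    """Decide whether to record one command's perf context.

    Assumes sample_ratio is a percentage in [0, 100]: a ratio of 0
    never records, and 100 records every matching command.
    """
    return random.randint(1, 100) <= sample_ratio

# Edge cases: 0 disables sampling, 100 samples everything.
assert not any(should_record(0) for _ in range(1000))
assert all(should_record(100) for _ in range(1000))
```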
284 |
285 | # There is no limit to this length. Just be aware that it will consume memory.
286 | # You can reclaim memory used by the perf log with PERFLOG RESET.
287 | #
288 | # Default: 256
289 | profiling-sample-record-max-len 256
290 |
291 | # profiling-sample-record-threshold-ms is used to tell kvrocks when to record.
292 | #
293 | # Default: 100 milliseconds
294 | profiling-sample-record-threshold-ms 100
295 |
296 | ################################## CRON ###################################
297 |
298 | # Compaction scheduler: automatically compact the DB at the scheduled time.
299 | # The time expression format is the same as crontab (currently only * and integers are supported)
300 | # e.g. compact-cron 0 3 * * * 0 4 * * *
301 | # would compact the db at 3am and 4am every day
302 | # compact-cron 0 3 * * *
303 |
304 | # The hour range in which the compaction checker is active
305 | # e.g. compaction-checker-range 0-7 means the compaction checker works between
306 | # 0am and 7am every day.
307 | compaction-checker-range 0-7
308 |
309 | # Bgsave scheduler: automatically bgsave the DB at the scheduled time.
310 | # The time expression format is the same as crontab (currently only * and integers are supported)
311 | # e.g. bgsave-cron 0 3 * * * 0 4 * * *
312 | # would bgsave the db at 3am and 4am every day
313 |
314 | # Command renaming.
315 | #
316 | # It is possible to change the name of dangerous commands in a shared
317 | # environment. For instance the KEYS command may be renamed into something
318 | # hard to guess so that it will still be available for internal-use tools
319 | # but not available for general clients.
320 | #
321 | # Example:
322 | #
323 | # rename-command KEYS b840fc02d524045429941cc15f59e41cb7be6c52
324 | #
325 | # It is also possible to completely kill a command by renaming it into
326 | # an empty string:
327 | #
328 | # rename-command KEYS ""
329 |
330 | # Key-value sizes can differ widely across workloads, and using 256MiB as the SST file size
331 | # may make data loading ineffective (large index/filter blocks) when key-values are small.
332 | # kvrocks supports a user-defined SST file size in config (rocksdb.target_file_size_base),
333 | # but it is still tedious and inconvenient to tune different sizes for different instances,
334 | # so kvrocks can periodically auto-adjust the SST size in-flight based on the average key-value size.
335 | #
336 | # If enabled, kvrocks will auto resize rocksdb.target_file_size_base
337 | # and rocksdb.write_buffer_size in-flight with user avg key-value size.
338 | # Please see #118.
339 | #
340 | # Default: yes
341 | auto-resize-block-and-sst yes
342 |
343 | ################################ MIGRATE #####################################
344 | # If the network bandwidth is completely consumed by the migration task,
345 | # it will affect the availability of kvrocks. To avoid this situation,
346 | # migrate-speed is adopted to limit the migration speed.
347 | # The migration speed is limited by controlling the duration between sending data;
348 | # the duration is calculated by: 1000000 * migrate-pipeline-size / migrate-speed (us).
349 | # Value: [0,INT_MAX], 0 means no limit
350 | #
351 | # Default: 4096
352 | migrate-speed 4096
353 |
354 | # In order to reduce data transmission times and improve the efficiency of data migration,
355 | # pipeline is adopted to send multiple data at once. Pipeline size can be set by this option.
356 | # Value: [1, INT_MAX], it can't be 0
357 | #
358 | # Default: 16
359 | migrate-pipeline-size 16
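The pause between pipeline sends follows directly from the formula quoted above. A quick sketch (the `migrate_send_interval_us` helper name is mine, and the zero-speed behavior is an assumption based on "0 means no limit"):

```python
def migrate_send_interval_us(pipeline_size, speed):
    """Interval between pipeline batches, per the documented formula:
    duration_us = 1000000 * migrate-pipeline-size / migrate-speed
    """
    if speed == 0:
        return 0  # 0 means no limit, so no pause between sends
    return 1_000_000 * pipeline_size // speed

# With the defaults (pipeline 16, speed 4096), migration pauses roughly
# 3906 microseconds (~3.9 ms) between pipeline batches.
print(migrate_send_interval_us(16, 4096))
```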
360 |
361 | # In order to reduce the write-forbidden time while migrating a slot, the incremental
362 | # data is migrated several times to reduce the amount of remaining incremental data.
363 | # Writes to the slot are only forbidden once the quantity of incremental data drops
364 | # below a certain threshold. The threshold is set by this option.
365 | # Value: [1, INT_MAX], it can't be 0
366 | #
367 | # Default: 10000
368 | migrate-sequence-gap 10000
369 |
370 | ################################ ROCKSDB #####################################
371 |
372 | # Specify the capacity of the metadata column family block cache. A larger block cache
373 | # may make requests faster since more keys can be cached. Max size is 200*1024.
374 | # Default: 2048MB
375 | rocksdb.metadata_block_cache_size 2048
376 |
377 | # Specify the capacity of the subkey column family block cache. A larger block cache
378 | # may make requests faster since more keys can be cached. Max size is 200*1024.
379 | # Default: 2048MB
380 | rocksdb.subkey_block_cache_size 2048
381 |
382 | # Metadata column family and subkey column family will share a single block cache
383 | # if set to 'yes'. The capacity of the shared block cache is
384 | # metadata_block_cache_size + subkey_block_cache_size
385 | #
386 | # Default: yes
387 | rocksdb.share_metadata_and_subkey_block_cache yes
388 |
389 | # A global cache for table-level rows in RocksDB. If the workload is almost always
390 | # point lookups, enlarging the row cache may improve read performance.
391 | # Otherwise, if you enlarge this value, you can lessen the metadata/subkey block cache sizes.
392 | #
393 | # Default: 0 (disabled)
394 | rocksdb.row_cache_size 0
395 |
396 | # Number of open files that can be used by the DB. You may need to
397 | # increase this if your database has a large working set. Value -1 means
398 | # files opened are always kept open. You can estimate number of files based
399 | # on target_file_size_base and target_file_size_multiplier for level-based
400 | # compaction. For universal-style compaction, you can usually set it to -1.
401 | # Default: 4096
402 | rocksdb.max_open_files 8096
403 |
404 | # Amount of data to build up in memory (backed by an unsorted log
405 | # on disk) before converting to a sorted on-disk file.
406 | #
407 | # Larger values increase performance, especially during bulk loads.
408 | # Up to max_write_buffer_number write buffers may be held in memory
409 | # at the same time,
410 | # so you may wish to adjust this parameter to control memory usage.
411 | # Also, a larger write buffer will result in a longer recovery time
412 | # the next time the database is opened.
413 | #
414 | # Note that write_buffer_size is enforced per column family.
415 | # See db_write_buffer_size for sharing memory across column families.
416 |
417 | # default is 64MB
418 | rocksdb.write_buffer_size 64
419 |
420 | # Target file size for compaction; the target file size for Level N can be calculated
421 | # as target_file_size_base * (target_file_size_multiplier ^ (N-1))
422 | #
423 | # Default: 128MB
424 | rocksdb.target_file_size_base 128
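The per-level formula above can be worked through with this file's base of 128 MB. A minimal sketch (the helper name is mine; the multiplier of 1 used in the note matches RocksDB's default `target_file_size_multiplier`):

```python
def target_file_size_mb(base_mb, multiplier, level):
    # Target size for Level N = base * multiplier^(N - 1),
    # per the formula in the comment above.
    return base_mb * multiplier ** (level - 1)

# With base 128 and a multiplier of 1 (the RocksDB default), every level
# targets 128 MB files; a multiplier of 2 would double the target per level:
print([target_file_size_mb(128, 2, level) for level in (1, 2, 3)])
```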
425 |
426 | # The maximum number of write buffers that are built up in memory.
427 | # The default and the minimum number is 2, so that when 1 write buffer
428 | # is being flushed to storage, new writes can continue to the other
429 | # write buffer.
430 | # If max_write_buffer_number > 3, writing will be slowed down to
431 | # options.delayed_write_rate if we are writing to the last write buffer
432 | # allowed.
433 | rocksdb.max_write_buffer_number 4
434 |
435 | # Maximum number of concurrent background compaction jobs, submitted to
436 | # the default LOW priority thread pool.
437 | rocksdb.max_background_compactions 4
438 |
439 | # Maximum number of concurrent background memtable flush jobs, submitted by
440 | # default to the HIGH priority thread pool. If the HIGH priority thread pool
441 | # is configured to have zero threads, flush jobs will share the LOW priority
442 | # thread pool with compaction jobs.
443 | rocksdb.max_background_flushes 4
444 |
445 | # This value represents the maximum number of threads that will
446 | # concurrently perform a compaction job by breaking it into multiple,
447 | # smaller ones that are run simultaneously.
448 | # Default: 2 (i.e. no subcompactions)
449 | rocksdb.max_sub_compactions 2
450 |
451 | # In order to limit the size of WALs, RocksDB uses DBOptions::max_total_wal_size
452 | # as the trigger of column family flush. Once WALs exceed this size, RocksDB
453 | # will start forcing the flush of column families to allow deletion of some
454 | # oldest WALs. This config can be useful when column families are updated at
455 | # non-uniform frequencies. If there's no size limit, users may need to keep
456 | # really old WALs when the infrequently-updated column families haven't flushed
457 | # for a while.
458 | #
459 | # In kvrocks, we use multiple column families to store metadata, subkeys, etc.
460 | # If users always use string type, but use list, hash and other complex data types
461 | # infrequently, there will be a lot of old WALs if we don't set size limit
462 | # (0 by default in rocksdb), because rocksdb will dynamically choose the WAL size
463 | # limit to be [sum of all write_buffer_size * max_write_buffer_number] * 4 if set to 0.
464 | #
465 | # Moreover, you should increase this value if you already set rocksdb.write_buffer_size
466 | # to a big value, to avoid influencing the effect of rocksdb.write_buffer_size and
467 | # rocksdb.max_write_buffer_number.
468 | #
469 | # default is 512MB
470 | rocksdb.max_total_wal_size 512
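The dynamic limit described above, chosen by RocksDB when this option is left at 0, can be computed from this file's write-buffer settings. A rough sketch assuming a single column family (the helper name is mine):

```python
def dynamic_wal_limit_mb(write_buffer_sizes_mb, max_write_buffer_number):
    # When max_total_wal_size is 0, RocksDB derives the limit as
    # [sum of all write_buffer_size * max_write_buffer_number] * 4,
    # per the comment above.
    return sum(write_buffer_sizes_mb) * max_write_buffer_number * 4

# With rocksdb.write_buffer_size 64 and rocksdb.max_write_buffer_number 4,
# even one column family alone would allow up to 1024 MB of WALs, which is
# why this file pins the limit at 512 MB instead.
print(dynamic_wal_limit_mb([64], 4))
```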
471 |
472 | # Replication is implemented with the rocksdb WAL; a full sync is triggered when the requested sequence number is out of range.
473 | # wal_ttl_seconds and wal_size_limit_mb affect how archived logs will be deleted.
474 | # If WAL_ttl_seconds is not 0, then WAL files will be checked every WAL_ttl_seconds / 2 and those that
475 | # are older than WAL_ttl_seconds will be deleted.
476 | #
477 | # Default: 3 Hours
478 | rocksdb.wal_ttl_seconds 10800
479 |
480 | # If WAL_ttl_seconds is 0 and WAL_size_limit_MB is not 0,
481 | # WAL files will be checked every 10 min and if total size is greater
482 | # than WAL_size_limit_MB, they will be deleted starting with the
483 | # earliest until size_limit is met. All empty files will be deleted
484 | # Default: 16GB
485 | rocksdb.wal_size_limit_mb 16384
486 |
487 | # Approximate size of user data packed per block. Note that the
488 | # block size specified here corresponds to uncompressed data. The
489 | # actual size of the unit read from disk may be smaller if
490 | # compression is enabled.
491 | #
492 | # Default: 4KB
493 | rocksdb.block_size 16384
494 |
495 | # Indicating if we'd put index/filter blocks to the block cache
496 | #
497 | # Default: no
498 | rocksdb.cache_index_and_filter_blocks yes
499 |
500 | # Specify the compression to use. Only levels greater than 2 are compressed,
501 | # in order to improve performance.
502 | # Accept value: "no", "snappy"
503 | # default snappy
504 | #rocksdb.compression snappy
505 |
506 | # If non-zero, we perform bigger reads when doing compaction. If you're
507 | # running RocksDB on spinning disks, you should set this to at least 2MB.
508 | # That way RocksDB's compaction is doing sequential instead of random reads.
509 | # When non-zero, we also force new_table_reader_for_compaction_inputs to
510 | # true.
511 | #
512 | # Default: 2 MB
513 | rocksdb.compaction_readahead_size 2097152
514 |
515 | # The limited write rate to the DB if soft_pending_compaction_bytes_limit or
516 | # level0_slowdown_writes_trigger is triggered.
517 | #
518 | # If the value is 0, we will infer a value from the `rate_limiter` value
519 | # if it is not empty, or 16MB if `rate_limiter` is empty. Note that
520 | # if users change the rate in `rate_limiter` after DB is opened,
521 | # `delayed_write_rate` won't be adjusted.
522 | #
523 | rocksdb.delayed_write_rate 0
524 | # If enable_pipelined_write is true, separate write thread queue is
525 | # maintained for WAL write and memtable write.
526 | #
527 | # Default: no
528 | rocksdb.enable_pipelined_write no
529 |
530 | # Soft limit on number of level-0 files. We start slowing down writes at this
531 | # point. A value <0 means that no writing slow down will be triggered by
532 | # number of files in level-0.
533 | #
534 | # Default: 20
535 | rocksdb.level0_slowdown_writes_trigger 20
536 |
537 | # Maximum number of level-0 files. We stop writes at this point.
538 | #
539 | # Default: 40
540 | rocksdb.level0_stop_writes_trigger 40
541 |
542 | # Number of files to trigger level-0 compaction.
543 | #
544 | # Default: 4
545 | rocksdb.level0_file_num_compaction_trigger 4
546 |
547 | # if not zero, dump rocksdb.stats to LOG every stats_dump_period_sec
548 | #
549 | # Default: 0
550 | rocksdb.stats_dump_period_sec 0
551 |
552 | # if yes, auto compaction is disabled, but manual compaction still works
553 | #
554 | # Default: no
555 | rocksdb.disable_auto_compactions no
556 |
557 | # BlobDB(key-value separation) is essentially RocksDB for large-value use cases.
558 | # Since RocksDB 6.18.0, the new implementation is integrated into the RocksDB core.
559 | # When set, large values (blobs) are written to separate blob files, and only
560 | # pointers to them are stored in SST files. This can reduce write amplification
561 | # for large-value use cases at the cost of introducing a level of indirection
562 | # for reads. Please see: https://github.com/facebook/rocksdb/wiki/BlobDB.
563 | #
564 | # Note that when enable_blob_files is set to yes, BlobDB-related configuration
565 | # items will take effect.
566 | #
567 | # Default: no
568 | rocksdb.enable_blob_files no
569 |
570 | # The size of the smallest value to be stored separately in a blob file. Values
571 | # which have an uncompressed size smaller than this threshold are stored alongside
572 | # the keys in SST files in the usual fashion.
573 | #
574 | # Default: 4096 bytes; 0 means that all values are stored in blob files
575 | rocksdb.min_blob_size 4096
576 |
577 | # The size limit for blob files. When writing blob files, a new file is
578 | # opened once this limit is reached.
579 | #
580 | # Default: 128 MB
581 | rocksdb.blob_file_size 128
582 |
583 | # Enables garbage collection of blobs. Valid blobs residing in blob files
584 | # older than a cutoff get relocated to new files as they are encountered
585 | # during compaction, which makes it possible to clean up blob files once
586 | # they contain nothing but obsolete/garbage blobs.
587 | # See also rocksdb.blob_garbage_collection_age_cutoff below.
588 | #
589 | # Default: yes
590 | rocksdb.enable_blob_garbage_collection yes
591 |
592 | # The percentage cutoff in terms of blob file age for garbage collection.
593 | # Blobs in the oldest N blob files will be relocated when encountered during
594 | # compaction, where N = (garbage_collection_cutoff/100) * number_of_blob_files.
595 | # Note that this value must belong to [0, 100].
596 | #
597 | # Default: 25
598 | rocksdb.blob_garbage_collection_age_cutoff 25
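The cutoff formula above is easy to evaluate concretely. A small sketch (the helper name and the blob-file count of 40 are illustrative, not from Kvrocks):

```python
def blob_files_eligible_for_gc(cutoff_percent, number_of_blob_files):
    # N = (garbage_collection_cutoff / 100) * number_of_blob_files:
    # blobs in the N oldest blob files get relocated during compaction.
    return int(cutoff_percent / 100 * number_of_blob_files)

# With the default cutoff of 25 and, say, 40 blob files on disk,
# the 10 oldest files are candidates for relocation.
print(blob_files_eligible_for_gc(25, 40))
```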
599 |
600 | ################################ NAMESPACE #####################################
601 | # namespace.test change.me
602 |
603 |
--------------------------------------------------------------------------------
/etc/records-type.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "type": "A",
4 | "value": "1",
5 | "description": "a host address",
6 | "ref": "[RFC1035]"
7 | },
8 | {
9 | "type": "NS",
10 | "value": "2",
11 | "description": "an authoritative name server",
12 | "ref": "[RFC1035]"
13 | },
14 | {
15 | "type": "MD",
16 | "value": "3",
17 | "description": "a mail destination (OBSOLETE - use MX)",
18 | "ref": "[RFC1035]"
19 | },
20 | {
21 | "type": "MF",
22 | "value": "4",
23 | "description": "a mail forwarder (OBSOLETE - use MX)",
24 | "ref": "[RFC1035]"
25 | },
26 | {
27 | "type": "CNAME",
28 | "value": "5",
29 | "description": "the canonical name for an alias",
30 | "ref": "[RFC1035]"
31 | },
32 | {
33 | "type": "SOA",
34 | "value": "6",
35 | "description": "marks the start of a zone of authority",
36 | "ref": "[RFC1035]"
37 | },
38 | {
39 | "type": "MB",
40 | "value": "7",
41 | "description": "a mailbox domain name (EXPERIMENTAL)",
42 | "ref": "[RFC1035]"
43 | },
44 | {
45 | "type": "MG",
46 | "value": "8",
47 | "description": "a mail group member (EXPERIMENTAL)",
48 | "ref": "[RFC1035]"
49 | },
50 | {
51 | "type": "MR",
52 | "value": "9",
53 | "description": "a mail rename domain name (EXPERIMENTAL)",
54 | "ref": "[RFC1035]"
55 | },
56 | {
57 | "type": "NULL",
58 | "value": "10",
59 | "description": "a null RR (EXPERIMENTAL)",
60 | "ref": "[RFC1035]"
61 | },
62 | {
63 | "type": "WKS",
64 | "value": "11",
65 | "description": "a well known service description",
66 | "ref": "[RFC1035]"
67 | },
68 | {
69 | "type": "PTR",
70 | "value": "12",
71 | "description": "a domain name pointer",
72 | "ref": "[RFC1035]"
73 | },
74 | {
75 | "type": "HINFO",
76 | "value": "13",
77 | "description": "host information",
78 | "ref": "[RFC1035]"
79 | },
80 | {
81 | "type": "MINFO",
82 | "value": "14",
83 | "description": "mailbox or mail list information",
84 | "ref": "[RFC1035]"
85 | },
86 | {
87 | "type": "MX",
88 | "value": "15",
89 | "description": "mail exchange",
90 | "ref": "[RFC1035]"
91 | },
92 | {
93 | "type": "TXT",
94 | "value": "16",
95 | "description": "text strings",
96 | "ref": "[RFC1035]"
97 | },
98 | {
99 | "type": "RP",
100 | "value": "17",
101 | "description": "for Responsible Person",
102 | "ref": "[RFC1183]"
103 | },
104 | {
105 | "type": "AFSDB",
106 | "value": "18",
107 | "description": "for AFS Data Base location",
108 | "ref": "[RFC1183][RFC5864]"
109 | },
110 | {
111 | "type": "X25",
112 | "value": "19",
113 | "description": "for X.25 PSDN address",
114 | "ref": "[RFC1183]"
115 | },
116 | {
117 | "type": "ISDN",
118 | "value": "20",
119 | "description": "for ISDN address",
120 | "ref": "[RFC1183]"
121 | },
122 | {
123 | "type": "RT",
124 | "value": "21",
125 | "description": "for Route Through",
126 | "ref": "[RFC1183]"
127 | },
128 | {
129 | "type": "NSAP",
130 | "value": "22",
131 | "description": "for NSAP address, NSAP style A record",
132 | "ref": "[RFC1706]"
133 | },
134 | {
135 | "type": "NSAP-PTR",
136 | "value": "23",
137 | "description": "for domain name pointer, NSAP style",
138 | "ref": "[RFC1706]"
139 | },
140 | {
141 | "type": "SIG",
142 | "value": "24",
143 | "description": "for security signature",
144 | "ref": "[RFC4034][RFC3755][RFC2535][RFC2536][RFC2537][RFC2931][RFC3110][RFC3008]"
145 | },
146 | {
147 | "type": "KEY",
148 | "value": "25",
149 | "description": "for security key",
150 | "ref": "[RFC4034][RFC3755][RFC2535][RFC2536][RFC2537][RFC2539][RFC3008][RFC3110]"
151 | },
152 | {
153 | "type": "PX",
154 | "value": "26",
155 | "description": "X.400 mail mapping information",
156 | "ref": "[RFC2163]"
157 | },
158 | {
159 | "type": "GPOS",
160 | "value": "27",
161 | "description": "Geographical Position",
162 | "ref": "[RFC1712]"
163 | },
164 | {
165 | "type": "AAAA",
166 | "value": "28",
167 | "description": "IP6 Address",
168 | "ref": "[RFC3596]"
169 | },
170 | {
171 | "type": "LOC",
172 | "value": "29",
173 | "description": "Location Information",
174 | "ref": "[RFC1876]"
175 | },
176 | {
177 | "type": "NXT",
178 | "value": "30",
179 | "description": "Next Domain (OBSOLETE)",
180 | "ref": "[RFC3755][RFC2535]"
181 | },
182 | {
183 | "type": "EID",
184 | "value": "31",
185 | "description": "Endpoint Identifier",
186 | "ref": "[Michael_Patton][http://ana-3.lcs.mit.edu/~jnc/nimrod/dns.txt]"
187 | },
188 | {
189 | "type": "NIMLOC",
190 | "value": "32",
191 | "description": "Nimrod Locator",
192 | "ref": "[1][Michael_Patton][http://ana-3.lcs.mit.edu/~jnc/nimrod/dns.txt]"
193 | },
194 | {
195 | "type": "SRV",
196 | "value": "33",
197 | "description": "Server Selection",
198 | "ref": "[1][RFC2782]"
199 | },
200 | {
201 | "type": "ATMA",
202 | "value": "34",
203 | "description": "ATM Address",
204 | "ref": "[ATM Forum Technical Committee]"
205 | },
206 | {
207 | "type": "NAPTR",
208 | "value": "35",
209 | "description": "Naming Authority Pointer",
210 | "ref": "[RFC2915][RFC2168][RFC3403]"
211 | },
212 | {
213 | "type": "KX",
214 | "value": "36",
215 | "description": "Key Exchanger",
216 | "ref": "[RFC2230]"
217 | },
218 | {
219 | "type": "CERT",
220 | "value": "37",
221 | "description": "CERT",
222 | "ref": "[RFC4398]"
223 | },
224 | {
225 | "type": "A6",
226 | "value": "38",
227 | "description": "A6 (OBSOLETE - use AAAA)",
228 | "ref": "[RFC3226][RFC2874][RFC6563]"
229 | },
230 | {
231 | "type": "DNAME",
232 | "value": "39",
233 | "description": "DNAME",
234 | "ref": "[RFC6672]"
235 | },
236 | {
237 | "type": "SINK",
238 | "value": "40",
239 | "description": "SINK",
240 | "ref": "[Donald_E_Eastlake][http://tools.ietf.org/html/draft-eastlake-kitchen-sink]"
241 | },
242 | {
243 | "type": "OPT",
244 | "value": "41",
245 | "description": "OPT",
246 | "ref": "[RFC6891][RFC3225]"
247 | },
248 | {
249 | "type": "APL",
250 | "value": "42",
251 | "description": "APL",
252 | "ref": "[RFC3123]"
253 | },
254 | {
255 | "type": "DS",
256 | "value": "43",
257 | "description": "Delegation Signer",
258 | "ref": "[RFC4034][RFC3658]"
259 | },
260 | {
261 | "type": "SSHFP",
262 | "value": "44",
263 | "description": "SSH Key Fingerprint",
264 | "ref": "[RFC4255]"
265 | },
266 | {
267 | "type": "IPSECKEY",
268 | "value": "45",
269 | "description": "IPSECKEY",
270 | "ref": "[RFC4025]"
271 | },
272 | {
273 | "type": "RRSIG",
274 | "value": "46",
275 | "description": "RRSIG",
276 | "ref": "[RFC4034][RFC3755]"
277 | },
278 | {
279 | "type": "NSEC",
280 | "value": "47",
281 | "description": "NSEC",
282 | "ref": "[RFC4034][RFC3755]"
283 | },
284 | {
285 | "type": "DNSKEY",
286 | "value": "48",
287 | "description": "DNSKEY",
288 | "ref": "[RFC4034][RFC3755]"
289 | },
290 | {
291 | "type": "DHCID",
292 | "value": "49",
293 | "description": "DHCID",
294 | "ref": "[RFC4701]"
295 | },
296 | {
297 | "type": "NSEC3",
298 | "value": "50",
299 | "description": "NSEC3",
300 | "ref": "[RFC5155]"
301 | },
302 | {
303 | "type": "NSEC3PARAM",
304 | "value": "51",
305 | "description": "NSEC3PARAM",
306 | "ref": "[RFC5155]"
307 | },
308 | {
309 | "type": "TLSA",
310 | "value": "52",
311 | "description": "TLSA",
312 | "ref": "[RFC6698]"
313 | },
314 | {
315 | "type": "SMIMEA",
316 | "value": "53",
317 | "description": "S/MIME cert association",
318 | "ref": "[RFC8162]"
319 | },
320 | {
321 | "type": "Unassigned",
322 | "value": "54",
323 | "description": "",
324 | "ref": ""
325 | },
326 | {
327 | "type": "HIP",
328 | "value": "55",
329 | "description": "Host Identity Protocol",
330 | "ref": "[RFC8005]"
331 | },
332 | {
333 | "type": "NINFO",
334 | "value": "56",
335 | "description": "NINFO",
336 | "ref": "[Jim_Reid]"
337 | },
338 | {
339 | "type": "RKEY",
340 | "value": "57",
341 | "description": "RKEY",
342 | "ref": "[Jim_Reid]"
343 | },
344 | {
345 | "type": "TALINK",
346 | "value": "58",
347 | "description": "Trust Anchor LINK",
348 | "ref": "[Wouter_Wijngaards]"
349 | },
350 | {
351 | "type": "CDS",
352 | "value": "59",
353 | "description": "Child DS",
354 | "ref": "[RFC7344]"
355 | },
356 | {
357 | "type": "CDNSKEY",
358 | "value": "60",
359 | "description": "DNSKEY(s) the Child wants reflected in DS",
360 | "ref": "[RFC7344]"
361 | },
362 | {
363 | "type": "OPENPGPKEY",
364 | "value": "61",
365 | "description": "OpenPGP Key",
366 | "ref": "[RFC7929]"
367 | },
368 | {
369 | "type": "CSYNC",
370 | "value": "62",
371 | "description": "Child-To-Parent Synchronization",
372 | "ref": "[RFC7477]"
373 | },
374 | {
375 | "type": "ZONEMD",
376 | "value": "63",
377 | "description": "message digest for DNS zone",
378 | "ref": "[draft-wessels-dns-zone-digest]"
379 | },
380 | {
381 | "type": "SPF",
382 | "value": "99",
383 | "description": "",
384 | "ref": "[RFC7208]"
385 | },
386 | {
387 | "type": "UINFO",
388 | "value": "100",
389 | "description": "",
390 | "ref": "[IANA-Reserved]"
391 | },
392 | {
393 | "type": "UID",
394 | "value": "101",
395 | "description": "",
396 | "ref": "[IANA-Reserved]"
397 | },
398 | {
399 | "type": "GID",
400 | "value": "102",
401 | "description": "",
402 | "ref": "[IANA-Reserved]"
403 | },
404 | {
405 | "type": "UNSPEC",
406 | "value": "103",
407 | "description": "",
408 | "ref": "[IANA-Reserved]"
409 | },
410 | {
411 | "type": "NID",
412 | "value": "104",
413 | "description": "",
414 | "ref": "[RFC6742]"
415 | },
416 | {
417 | "type": "L32",
418 | "value": "105",
419 | "description": "",
420 | "ref": "[RFC6742]"
421 | },
422 | {
423 | "type": "L64",
424 | "value": "106",
425 | "description": "",
426 | "ref": "[RFC6742]"
427 | },
428 | {
429 | "type": "LP",
430 | "value": "107",
431 | "description": "",
432 | "ref": "[RFC6742]"
433 | },
434 | {
435 | "type": "EUI48",
436 | "value": "108",
437 | "description": "an EUI-48 address",
438 | "ref": "[RFC7043]"
439 | },
440 | {
441 | "type": "EUI64",
442 | "value": "109",
443 | "description": "an EUI-64 address",
444 | "ref": "[RFC7043]"
445 | },
446 | {
447 | "type": "TKEY",
448 | "value": "249",
449 | "description": "Transaction Key",
450 | "ref": "[RFC2930]"
451 | },
452 | {
453 | "type": "TSIG",
454 | "value": "250",
455 | "description": "Transaction Signature",
456 | "ref": "[RFC2845]"
457 | },
458 | {
459 | "type": "IXFR",
460 | "value": "251",
461 | "description": "incremental transfer",
462 | "ref": "[RFC1995]"
463 | },
464 | {
465 | "type": "AXFR",
466 | "value": "252",
467 | "description": "transfer of an entire zone",
468 | "ref": "[RFC1035][RFC5936]"
469 | },
470 | {
471 | "type": "MAILB",
472 | "value": "253",
473 | "description": "mailbox-related RRs (MB, MG or MR)",
474 | "ref": "[RFC1035]"
475 | },
476 | {
477 | "type": "MAILA",
478 | "value": "254",
479 | "description": "mail agent RRs (OBSOLETE - see MX)",
480 | "ref": "[RFC1035]"
481 | },
482 | {
483 | "type": "*",
484 | "value": "255",
485 | "description": "A request for some or all records the server has available",
486 | "ref": "[RFC1035][RFC6895][RFC8482]"
487 | },
488 | {
489 | "type": "URI",
490 | "value": "256",
491 | "description": "URI",
492 | "ref": "[RFC7553]"
493 | },
494 | {
495 | "type": "CAA",
496 | "value": "257",
497 | "description": "Certification Authority Restriction",
498 | "ref": "[RFC6844]"
499 | },
500 | {
501 | "type": "AVC",
502 | "value": "258",
503 | "description": "Application Visibility and Control",
504 | "ref": "[Wolfgang_Riedel]"
505 | },
506 | {
507 | "type": "DOA",
508 | "value": "259",
509 | "description": "Digital Object Architecture",
510 | "ref": "[draft-durand-doa-over-dns]"
511 | },
512 | {
513 | "type": "TA",
514 | "value": "32768",
515 | "description": "DNSSEC Trust Authorities",
516 | "ref": "[Sam_Weiler][http://cameo.library.cmu.edu/][Deploying DNSSEC Without a Signed Root. Technical Report 1999-19]"
517 | },
518 | {
519 | "type": "DLV",
520 | "value": "32769",
521 | "description": "DNSSEC Lookaside Validation",
522 | "ref": "[RFC4431]"
523 | },
524 | {
525 | "type": "Reserved",
526 | "value": "65535",
527 | "description": "",
528 | "ref": ""
529 | }
530 | ]
531 |
--------------------------------------------------------------------------------
/etc/redis.conf:
--------------------------------------------------------------------------------
1 | # Redis configuration file example.
2 | #
3 | # Note that in order to read the configuration file, Redis must be
4 | # started with the file path as first argument:
5 | #
6 | # ./redis-server /path/to/redis.conf
7 |
8 | # Note on units: when memory size is needed, it is possible to specify
9 | # it in the usual form of 1k 5GB 4M and so forth:
10 | #
11 | # 1k => 1000 bytes
12 | # 1kb => 1024 bytes
13 | # 1m => 1000000 bytes
14 | # 1mb => 1024*1024 bytes
15 | # 1g => 1000000000 bytes
16 | # 1gb => 1024*1024*1024 bytes
17 | #
18 | # units are case insensitive so 1GB 1Gb 1gB are all the same.
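The unit table above distinguishes decimal suffixes (k, m, g) from binary ones (kb, mb, gb), case-insensitively. A minimal Python sketch of that convention (this mirrors the comment's table, not Redis's actual C parser):

```python
def parse_memory(value):
    """Parse a Redis-style memory size string into bytes.

    Plain suffixes are decimal (1k = 1000) and *b suffixes are
    binary (1kb = 1024); units are case insensitive.
    """
    units = {
        "k": 10**3, "kb": 2**10,
        "m": 10**6, "mb": 2**20,
        "g": 10**9, "gb": 2**30,
    }
    s = value.strip().lower()
    # Try longer suffixes first so "kb" is not misread as plain "b"-less "k".
    for suffix in sorted(units, key=len, reverse=True):
        if s.endswith(suffix):
            return int(s[: -len(suffix)]) * units[suffix]
    return int(s)  # bare number: already bytes

print(parse_memory("1gb"), parse_memory("1G"))  # binary vs decimal gigabyte
```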
19 |
20 | ################################## INCLUDES ###################################
21 |
22 | # Include one or more other config files here. This is useful if you
23 | # have a standard template that goes to all Redis servers but also need
24 | # to customize a few per-server settings. Include files can include
25 | # other files, so use this wisely.
26 | #
27 | # Notice option "include" won't be rewritten by command "CONFIG REWRITE"
28 | # from admin or Redis Sentinel. Since Redis always uses the last processed
29 | # line as the value of a configuration directive, you'd better put includes
30 | # at the beginning of this file to avoid overwriting config changes at runtime.
31 | #
32 | # If instead you are interested in using includes to override configuration
33 | # options, it is better to use include as the last line.
34 | #
35 | # include /path/to/local.conf
36 | # include /path/to/other.conf
37 |
38 | ################################## MODULES #####################################
39 |
40 | # Load modules at startup. If the server is not able to load modules
41 | # it will abort. It is possible to use multiple loadmodule directives.
42 | #
43 | # loadmodule /path/to/my_module.so
44 | # loadmodule /path/to/other_module.so
45 |
46 | ################################## NETWORK #####################################
47 |
48 | # By default, if no "bind" configuration directive is specified, Redis listens
49 | # for connections from all the network interfaces available on the server.
50 | # It is possible to listen to just one or multiple selected interfaces using
51 | # the "bind" configuration directive, followed by one or more IP addresses.
52 | #
53 | # Examples:
54 | #
55 | # bind 192.168.1.100 10.0.0.1
56 | # bind 127.0.0.1 ::1
57 | #
58 | # ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
59 | # internet, binding to all the interfaces is dangerous and will expose the
60 | # instance to everybody on the internet. So by default we uncomment the
61 | # following bind directive, which forces Redis to listen only on the
62 | # IPv4 loopback interface address (this means Redis will be able to
63 | # accept connections only from clients running on the same computer it
64 | # is running on).
65 | #
66 | # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
67 | # JUST COMMENT THE FOLLOWING LINE.
68 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
69 | bind 127.0.0.1
70 |
71 | # Protected mode is a layer of security protection, in order to avoid
72 | # Redis instances left open on the internet being accessed and exploited.
73 | #
74 | # When protected mode is on and if:
75 | #
76 | # 1) The server is not binding explicitly to a set of addresses using the
77 | # "bind" directive.
78 | # 2) No password is configured.
79 | #
80 | # The server only accepts connections from clients connecting from the
81 | # IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
82 | # sockets.
83 | #
84 | # By default protected mode is enabled. You should disable it only if
85 | # you are sure you want clients from other hosts to connect to Redis
86 | # even if no authentication is configured, nor a specific set of interfaces
87 | # are explicitly listed using the "bind" directive.
88 | protected-mode yes
89 |
90 | # Accept connections on the specified port, default is 6379 (IANA #815344).
91 | # If port 0 is specified Redis will not listen on a TCP socket.
92 | port 6400
93 |
94 | # TCP listen() backlog.
95 | #
96 | # In high requests-per-second environments you need a high backlog in order
97 | # to avoid slow client connection issues. Note that the Linux kernel
98 | # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
99 | # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
100 | # in order to get the desired effect.
101 | tcp-backlog 511
102 |
103 | # Unix socket.
104 | #
105 | # Specify the path for the Unix socket that will be used to listen for
106 | # incoming connections. There is no default, so Redis will not listen
107 | # on a unix socket when not specified.
108 | #
109 | # unixsocket /tmp/redis.sock
110 | # unixsocketperm 700
111 |
112 | # Close the connection after a client is idle for N seconds (0 to disable)
113 | timeout 0
114 |
115 | # TCP keepalive.
116 | #
117 | # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
118 | # of communication. This is useful for two reasons:
119 | #
120 | # 1) Detect dead peers.
121 | # 2) Keep the connection alive from the point of view of network
122 | # equipment in the middle.
123 | #
124 | # On Linux, the specified value (in seconds) is the period used to send ACKs.
125 | # Note that closing the connection takes twice that time.
126 | # On other kernels the period depends on the kernel configuration.
127 | #
128 | # A reasonable value for this option is 300 seconds, which is the new
129 | # Redis default starting with Redis 3.2.1.
130 | tcp-keepalive 300
131 |
132 | ################################# GENERAL #####################################
133 |
134 | # By default Redis does not run as a daemon. Use 'yes' if you need it.
135 | # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
136 | daemonize no
137 |
138 | # If you run Redis from upstart or systemd, Redis can interact with your
139 | # supervision tree. Options:
140 | # supervised no - no supervision interaction
141 | # supervised upstart - signal upstart by putting Redis into SIGSTOP mode
142 | # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
143 | # supervised auto - detect upstart or systemd method based on
144 | # UPSTART_JOB or NOTIFY_SOCKET environment variables
145 | # Note: these supervision methods only signal "process is ready."
146 | # They do not enable continuous liveness pings back to your supervisor.
147 | supervised no
148 |
149 | # If a pid file is specified, Redis writes it where specified at startup
150 | # and removes it at exit.
151 | #
152 | # When the server runs non-daemonized, no pid file is created if none is
153 | # specified in the configuration. When the server is daemonized, the pid file
154 | # is used even if not specified, defaulting to "/var/run/redis.pid".
155 | #
156 | # Creating a pid file is best effort: if Redis is not able to create it
157 | # nothing bad happens, the server will start and run normally.
158 | pidfile /var/run/redis_6379.pid
159 |
160 | # Specify the server verbosity level.
161 | # This can be one of:
162 | # debug (a lot of information, useful for development/testing)
163 | # verbose (lots of rarely useful info, but not a mess like the debug level)
164 | # notice (moderately verbose, what you want in production probably)
165 | # warning (only very important / critical messages are logged)
166 | loglevel notice
167 |
168 | # Specify the log file name. The empty string can also be used to force
169 | # Redis to log to the standard output. Note that if you use standard
170 | # output for logging but daemonize, logs will be sent to /dev/null.
171 | logfile ""
172 |
173 | # To enable logging to the system logger, just set 'syslog-enabled' to yes,
174 | # and optionally update the other syslog parameters to suit your needs.
175 | # syslog-enabled no
176 |
177 | # Specify the syslog identity.
178 | # syslog-ident redis
179 |
180 | # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
181 | # syslog-facility local0
182 |
183 | # Set the number of databases. The default database is DB 0, you can select
184 | # a different one on a per-connection basis using SELECT <dbid> where
185 | # dbid is a number between 0 and 'databases'-1
186 | databases 16
187 |
188 | # By default Redis shows an ASCII art logo only when started to log to the
189 | # standard output and if the standard output is a TTY. Basically this means
190 | # that normally a logo is displayed only in interactive sessions.
191 | #
192 | # However it is possible to force the pre-4.0 behavior and always show an
193 | # ASCII art logo in startup logs by setting the following option to yes.
194 | always-show-logo yes
195 |
196 | ################################ SNAPSHOTTING ################################
197 | #
198 | # Save the DB on disk:
199 | #
200 | # save <seconds> <changes>
201 | #
202 | # Will save the DB if both the given number of seconds and the given
203 | # number of write operations against the DB occurred.
204 | #
205 | # In the example below the behaviour will be to save:
206 | # after 900 sec (15 min) if at least 1 key changed
207 | # after 300 sec (5 min) if at least 10 keys changed
208 | # after 60 sec if at least 10000 keys changed
209 | #
210 | # Note: you can disable saving completely by commenting out all "save" lines.
211 | #
212 | # It is also possible to remove all the previously configured save
213 | # points by adding a save directive with a single empty string argument
214 | # like in the following example:
215 | #
216 | # save ""
217 |
218 | save 900 1
219 | save 300 10
220 | save 60 10000
221 |
222 | # By default Redis will stop accepting writes if RDB snapshots are enabled
223 | # (at least one save point) and the latest background save failed.
224 | # This will make the user aware (in a hard way) that data is not persisting
225 | # on disk properly, otherwise chances are that no one will notice and some
226 | # disaster will happen.
227 | #
228 | # If the background saving process starts working again, Redis will
229 | # automatically allow writes again.
230 | #
231 | # However if you have set up proper monitoring of the Redis server
232 | # and persistence, you may want to disable this feature so that Redis will
233 | # continue to work as usual even if there are problems with disk,
234 | # permissions, and so forth.
235 | stop-writes-on-bgsave-error yes
236 |
237 | # Compress string objects using LZF when dumping .rdb databases?
238 | # By default that's set to 'yes', as it's almost always a win.
239 | # If you want to save some CPU in the saving child set it to 'no', but
240 | # the dataset will likely be bigger if you have compressible values or keys.
241 | rdbcompression yes
242 |
243 | # Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
244 | # This makes the format more resistant to corruption but there is a performance
245 | # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
246 | # for maximum performance.
247 | #
248 | # RDB files created with checksum disabled have a checksum of zero that will
249 | # tell the loading code to skip the check.
250 | rdbchecksum yes
251 |
252 | # The filename where to dump the DB
253 | dbfilename dump.rdb
254 |
255 | # The working directory.
256 | #
257 | # The DB will be written inside this directory, with the filename specified
258 | # above using the 'dbfilename' configuration directive.
259 | #
260 | # The Append Only File will also be created inside this directory.
261 | #
262 | # Note that you must specify a directory here, not a file name.
263 | dir ./db
264 |
265 | ################################# REPLICATION #################################
266 |
267 | # Master-Replica replication. Use replicaof to make a Redis instance a copy of
268 | # another Redis server. A few things to understand ASAP about Redis replication.
269 | #
270 | # +------------------+ +---------------+
271 | # | Master | ---> | Replica |
272 | # | (receive writes) | | (exact copy) |
273 | # +------------------+ +---------------+
274 | #
275 | # 1) Redis replication is asynchronous, but you can configure a master to
276 | # stop accepting writes if it appears to be not connected with at least
277 | # a given number of replicas.
278 | # 2) Redis replicas are able to perform a partial resynchronization with the
279 | # master if the replication link is lost for a relatively small amount of
280 | # time. You may want to configure the replication backlog size (see the next
281 | # sections of this file) with a sensible value depending on your needs.
282 | # 3) Replication is automatic and does not need user intervention. After a
283 | # network partition replicas automatically try to reconnect to masters
284 | # and resynchronize with them.
285 | #
286 | # replicaof <masterip> <masterport>
287 |
288 | # If the master is password protected (using the "requirepass" configuration
289 | # directive below) it is possible to tell the replica to authenticate before
290 | # starting the replication synchronization process, otherwise the master will
291 | # refuse the replica request.
292 | #
293 | # masterauth <master-password>
294 |
295 | # When a replica loses its connection with the master, or when the replication
296 | # is still in progress, the replica can act in two different ways:
297 | #
298 | # 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
299 | # still reply to client requests, possibly with out of date data, or the
300 | # data set may just be empty if this is the first synchronization.
301 | #
302 | # 2) if replica-serve-stale-data is set to 'no' the replica will reply with
303 | # an error "SYNC with master in progress" to all kinds of commands
304 | # except INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
305 | # SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
306 | # COMMAND, POST, HOST: and LATENCY.
307 | #
308 | replica-serve-stale-data yes
309 |
310 | # You can configure a replica instance to accept writes or not. Writing against
311 | # a replica instance may be useful to store some ephemeral data (because data
312 | # written on a replica will be easily deleted after resync with the master) but
313 | # may also cause problems if clients are writing to it because of a
314 | # misconfiguration.
315 | #
316 | # Since Redis 2.6 by default replicas are read-only.
317 | #
318 | # Note: read only replicas are not designed to be exposed to untrusted clients
319 | # on the internet. It's just a protection layer against misuse of the instance.
320 | # Still a read only replica exports by default all the administrative commands
321 | # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
322 | # security of read only replicas using 'rename-command' to shadow all the
323 | # administrative / dangerous commands.
324 | replica-read-only yes
325 |
326 | # Replication SYNC strategy: disk or socket.
327 | #
328 | # -------------------------------------------------------
329 | # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
330 | # -------------------------------------------------------
331 | #
332 | # New replicas and reconnecting replicas that are not able to continue the
333 | # replication process just by receiving differences need to do what is called
334 | # a "full synchronization". An RDB file is transmitted from the master to the replicas.
335 | # The transmission can happen in two different ways:
336 | #
337 | # 1) Disk-backed: The Redis master creates a new process that writes the RDB
338 | # file on disk. Later the file is transferred by the parent
339 | # process to the replicas incrementally.
340 | # 2) Diskless: The Redis master creates a new process that directly writes the
341 | # RDB file to replica sockets, without touching the disk at all.
342 | #
343 | # With disk-backed replication, while the RDB file is generated, more replicas
344 | # can be queued and served with the RDB file as soon as the current child producing
345 | # the RDB file finishes its work. With diskless replication, instead, once
346 | # the transfer starts, new replicas that arrive will be queued and a new transfer
347 | # will start when the current one terminates.
348 | #
349 | # When diskless replication is used, the master waits a configurable amount of
350 | # time (in seconds) before starting the transfer in the hope that multiple replicas
351 | # will arrive and the transfer can be parallelized.
352 | #
353 | # With slow disks and fast (large bandwidth) networks, diskless replication
354 | # works better.
355 | repl-diskless-sync no
356 |
357 | # When diskless replication is enabled, it is possible to configure the delay
358 | # the server waits in order to spawn the child that transfers the RDB via socket
359 | # to the replicas.
360 | #
361 | # This is important since once the transfer starts, it is not possible to serve
362 | # new replicas that arrive, which will be queued for the next RDB transfer, so
363 | # the server waits for a delay in order to let more replicas arrive.
364 | #
365 | # The delay is specified in seconds, and by default is 5 seconds. To disable
366 | # it entirely just set it to 0 seconds and the transfer will start ASAP.
367 | repl-diskless-sync-delay 5
368 |
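# As an illustrative sketch only (NOT this project's defaults): a master with
# slow disks and a fast, high-bandwidth network might enable diskless sync
# and wait a little longer so that more replicas can share one transfer:
#
# repl-diskless-sync yes
# repl-diskless-sync-delay 10
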
369 | # Replicas send PINGs to the server at a predefined interval. It's possible to change
370 | # this interval with the repl_ping_replica_period option. The default value is 10
371 | # seconds.
372 | #
373 | # repl-ping-replica-period 10
374 |
375 | # The following option sets the replication timeout for:
376 | #
377 | # 1) Bulk transfer I/O during SYNC, from the point of view of replica.
378 | # 2) Master timeout from the point of view of replicas (data, pings).
379 | # 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
380 | #
381 | # It is important to make sure that this value is greater than the value
382 | # specified for repl-ping-replica-period otherwise a timeout will be detected
383 | # every time there is low traffic between the master and the replica.
384 | #
385 | # repl-timeout 60
386 |
387 | # Disable TCP_NODELAY on the replica socket after SYNC?
388 | #
389 | # If you select "yes" Redis will use a smaller number of TCP packets and
390 | # less bandwidth to send data to replicas. But this can add a delay for
391 | # the data to appear on the replica side, up to 40 milliseconds with
392 | # Linux kernels using a default configuration.
393 | #
394 | # If you select "no" the delay for data to appear on the replica side will
395 | # be reduced but more bandwidth will be used for replication.
396 | #
397 | # By default we optimize for low latency, but in very high traffic conditions
398 | # or when the master and replicas are many hops away, turning this to "yes" may
399 | # be a good idea.
400 | repl-disable-tcp-nodelay no
401 |
402 | # Set the replication backlog size. The backlog is a buffer that accumulates
403 | # replica data when replicas are disconnected for some time, so that when a replica
404 | # wants to reconnect again, often a full resync is not needed, but a partial
405 | # resync is enough, just passing the portion of data the replica missed while
406 | # disconnected.
407 | #
408 | # The bigger the replication backlog, the longer the time the replica can be
409 | # disconnected and later be able to perform a partial resynchronization.
410 | #
411 | # The backlog is only allocated once there is at least a replica connected.
412 | #
413 | # repl-backlog-size 1mb
414 |
415 | # After a master has no longer connected replicas for some time, the backlog
416 | # will be freed. The following option configures the number of seconds that
417 | # need to elapse, starting from the time the last replica disconnected, for
418 | # the backlog buffer to be freed.
419 | #
420 | # Note that replicas never free the backlog due to a timeout, since they may be
421 | # promoted to masters later, and should be able to correctly "partially
422 | # resynchronize" with the replicas: hence they should always accumulate backlog.
423 | #
424 | # A value of 0 means to never release the backlog.
425 | #
426 | # repl-backlog-ttl 3600
427 |
428 | # The replica priority is an integer number published by Redis in the INFO output.
429 | # It is used by Redis Sentinel in order to select a replica to promote into a
430 | # master if the master is no longer working correctly.
431 | #
432 | # A replica with a low priority number is considered better for promotion, so
433 | # for instance if there are three replicas with priority 10, 100, 25 Sentinel will
434 | # pick the one with priority 10, that is the lowest.
435 | #
436 | # However a special priority of 0 marks the replica as not able to perform the
437 | # role of master, so a replica with priority of 0 will never be selected by
438 | # Redis Sentinel for promotion.
439 | #
440 | # By default the priority is 100.
441 | replica-priority 100
442 |
443 | # It is possible for a master to stop accepting writes if there are fewer than
444 | # N replicas connected, with a lag less than or equal to M seconds.
445 | #
446 | # The N replicas need to be in "online" state.
447 | #
448 | # The lag in seconds, that must be <= the specified value, is calculated from
449 | # the last ping received from the replica, that is usually sent every second.
450 | #
451 | # This option does not GUARANTEE that N replicas will accept the write, but
452 | # will limit the window of exposure for lost writes in case not enough replicas
453 | # are available, to the specified number of seconds.
454 | #
455 | # For example to require at least 3 replicas with a lag <= 10 seconds use:
456 | #
457 | # min-replicas-to-write 3
458 | # min-replicas-max-lag 10
459 | #
460 | # Setting one or the other to 0 disables the feature.
461 | #
462 | # By default min-replicas-to-write is set to 0 (feature disabled) and
463 | # min-replicas-max-lag is set to 10.
464 |
465 | # A Redis master is able to list the address and port of the attached
466 | # replicas in different ways. For example the "INFO replication" section
467 | # offers this information, which is used, among other tools, by
468 | # Redis Sentinel in order to discover replica instances.
469 | # Another place where this info is available is in the output of the
470 | # "ROLE" command of a master.
471 | #
472 | # The listed IP address and port normally reported by a replica are obtained
473 | # in the following way:
474 | #
475 | # IP: The address is auto detected by checking the peer address
476 | # of the socket used by the replica to connect with the master.
477 | #
478 | # Port: The port is communicated by the replica during the replication
479 | # handshake, and is normally the port that the replica is using to
480 | # listen for connections.
481 | #
482 | # However when port forwarding or Network Address Translation (NAT) is
483 | # used, the replica may be actually reachable via different IP and port
484 | # pairs. The following two options can be used by a replica in order to
485 | # report to its master a specific set of IP and port, so that both INFO
486 | # and ROLE will report those values.
487 | #
488 | # There is no need to use both the options if you need to override just
489 | # the port or the IP address.
490 | #
491 | # replica-announce-ip 5.5.5.5
492 | # replica-announce-port 1234
493 |
494 | ################################## SECURITY ###################################
495 |
496 | # Require clients to issue AUTH before processing any other
497 | # commands. This might be useful in environments in which you do not trust
498 | # others with access to the host running redis-server.
499 | #
500 | # This should stay commented out for backward compatibility and because most
501 | # people do not need auth (e.g. they run their own servers).
502 | #
503 | # Warning: since Redis is pretty fast an outside user can try up to
504 | # 150k passwords per second against a good box. This means that you should
505 | # use a very strong password otherwise it will be very easy to break.
506 | #
507 | # requirepass foobared
508 |
509 | # Command renaming.
510 | #
511 | # It is possible to change the name of dangerous commands in a shared
512 | # environment. For instance the CONFIG command may be renamed into something
513 | # hard to guess so that it will still be available for internal-use tools
514 | # but not available for general clients.
515 | #
516 | # Example:
517 | #
518 | # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
519 | #
520 | # It is also possible to completely kill a command by renaming it into
521 | # an empty string:
522 | #
523 | # rename-command CONFIG ""
524 | #
525 | # Please note that changing the name of commands that are logged into the
526 | # AOF file or transmitted to replicas may cause problems.
527 |
528 | ################################### CLIENTS ####################################
529 |
530 | # Set the max number of connected clients at the same time. By default
531 | # this limit is set to 10000 clients, however if the Redis server is not
532 | # able to configure the process file limit to allow for the specified limit
533 | # the max number of allowed clients is set to the current file limit
534 | # minus 32 (as Redis reserves a few file descriptors for internal uses).
535 | #
536 | # Once the limit is reached Redis will close all the new connections sending
537 | # an error 'max number of clients reached'.
538 | #
539 | # maxclients 10000
540 |
541 | ############################## MEMORY MANAGEMENT ################################
542 |
543 | # Set a memory usage limit to the specified amount of bytes.
544 | # When the memory limit is reached Redis will try to remove keys
545 | # according to the eviction policy selected (see maxmemory-policy).
546 | #
547 | # If Redis can't remove keys according to the policy, or if the policy is
548 | # set to 'noeviction', Redis will start to reply with errors to commands
549 | # that would use more memory, like SET, LPUSH, and so on, and will continue
550 | # to reply to read-only commands like GET.
551 | #
552 | # This option is usually useful when using Redis as an LRU or LFU cache, or to
553 | # set a hard memory limit for an instance (using the 'noeviction' policy).
554 | #
555 | # WARNING: If you have replicas attached to an instance with maxmemory on,
556 | # the size of the output buffers needed to feed the replicas are subtracted
557 | # from the used memory count, so that network problems / resyncs will
558 | # not trigger a loop where keys are evicted, and in turn the output
559 | # buffer of replicas is full with DELs of keys evicted triggering the deletion
560 | # of more keys, and so forth until the database is completely emptied.
561 | #
562 | # In short... if you have replicas attached it is suggested that you set a lower
563 | # limit for maxmemory so that there is some free RAM on the system for replica
564 | # output buffers (but this is not needed if the policy is 'noeviction').
565 | #
566 | # maxmemory <bytes>
567 |
568 | # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
569 | # is reached. You can select among the following eight behaviors:
570 | #
571 | # volatile-lru -> Evict using approximated LRU among the keys with an expire set.
572 | # allkeys-lru -> Evict any key using approximated LRU.
573 | # volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
574 | # allkeys-lfu -> Evict any key using approximated LFU.
575 | # volatile-random -> Remove a random key among the ones with an expire set.
576 | # allkeys-random -> Remove a random key, any key.
577 | # volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
578 | # noeviction -> Don't evict anything, just return an error on write operations.
579 | #
580 | # LRU means Least Recently Used
581 | # LFU means Least Frequently Used
582 | #
583 | # LRU, LFU and volatile-ttl are all implemented using approximated
584 | # randomized algorithms.
585 | #
586 | # Note: with any of the above policies, Redis will return an error on write
587 | # operations, when there are no suitable keys for eviction.
588 | #
589 | # At the date of writing these commands are: set setnx setex append
590 | # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
591 | # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
592 | # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
593 | # getset mset msetnx exec sort
594 | #
595 | # The default is:
596 | #
597 | # maxmemory-policy noeviction
598 |
599 | # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
600 | # algorithms (in order to save memory), so you can tune them for speed or
601 | # accuracy. By default Redis will check five keys and pick the one that was
602 | # used least recently; you can change the sample size using the following
603 | # configuration directive.
604 | #
605 | # The default of 5 produces good enough results. 10 approximates true LRU
606 | # very closely but costs more CPU. 3 is faster but not very accurate.
607 | #
608 | # maxmemory-samples 5
609 |
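# As an illustrative sketch only (NOT this project's defaults): a pure
# cache-style deployment might combine a hard memory limit, an any-key LRU
# eviction policy, and a larger sample size for more accurate eviction:
#
# maxmemory 2gb
# maxmemory-policy allkeys-lru
# maxmemory-samples 10
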
610 | # Starting from Redis 5, by default a replica will ignore its maxmemory setting
611 | # (unless it is promoted to master after a failover or manually). It means
612 | # that the eviction of keys will be just handled by the master, sending the
613 | # DEL commands to the replica as keys are evicted on the master side.
614 | #
615 | # This behavior ensures that masters and replicas stay consistent, and is usually
616 | # what you want, however if your replica is writable, or you want the replica to have
617 | # a different memory setting, and you are sure all the writes performed to the
618 | # replica are idempotent, then you may change this default (but be sure to understand
619 | # what you are doing).
620 | #
621 | # Note that since the replica by default does not evict, it may end up using
622 | # more memory than the amount set via maxmemory (there are certain buffers that may
623 | # be larger on the replica, or data structures may sometimes take more memory and so
624 | # forth). So make sure you monitor your replicas and make sure they have enough
625 | # memory to never hit a real out-of-memory condition before the master hits
626 | # the configured maxmemory setting.
627 | #
628 | # replica-ignore-maxmemory yes
629 |
630 | ############################# LAZY FREEING ####################################
631 |
632 | # Redis has two primitives to delete keys. One is called DEL and is a blocking
633 | # deletion of the object. It means that the server stops processing new commands
634 | # in order to reclaim all the memory associated with an object in a synchronous
635 | # way. If the key deleted is associated with a small object, the time needed
636 | # in order to execute the DEL command is very small and comparable to most other
637 | # O(1) or O(log_N) commands in Redis. However if the key is associated with an
638 | # aggregated value containing millions of elements, the server can block for
639 | # a long time (even seconds) in order to complete the operation.
640 | #
641 | # For the above reasons Redis also offers non blocking deletion primitives
642 | # such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
643 | # FLUSHDB commands, in order to reclaim memory in background. Those commands
644 | # are executed in constant time. Another thread will incrementally free the
645 | # object in the background as fast as possible.
646 | #
647 | # DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
648 | # It's up to the design of the application to understand when it is a good
649 | # idea to use one or the other. However the Redis server sometimes has to
650 | # delete keys or flush the whole database as a side effect of other operations.
651 | # Specifically Redis deletes objects independently of a user call in the
652 | # following scenarios:
653 | #
654 | # 1) On eviction, because of the maxmemory and maxmemory policy configurations,
655 | # in order to make room for new data, without going over the specified
656 | # memory limit.
657 | # 2) Because of expire: when a key with an associated time to live (see the
658 | # EXPIRE command) must be deleted from memory.
659 | # 3) Because of a side effect of a command that stores data on a key that may
660 | # already exist. For example the RENAME command may delete the old key
661 | # content when it is replaced with another one. Similarly SUNIONSTORE
662 | # or SORT with STORE option may delete existing keys. The SET command
663 | # itself removes any old content of the specified key in order to replace
664 | # it with the specified string.
665 | # 4) During replication, when a replica performs a full resynchronization with
666 | # its master, the content of the whole database is removed in order to
667 | # load the RDB file just transferred.
668 | #
669 | # In all the above cases the default is to delete objects in a blocking way,
670 | # as if DEL was called. However you can configure each case specifically
671 | # in order to instead release memory in a non-blocking way, as if UNLINK
672 | # was called, using the following configuration directives:
673 |
674 | lazyfree-lazy-eviction no
675 | lazyfree-lazy-expire no
676 | lazyfree-lazy-server-del no
677 | replica-lazy-flush no
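
# As an illustrative sketch only (NOT this project's defaults): a workload
# that lets very large keys (e.g. huge sets or hashes) expire might free
# expired keys in the background while keeping the other cases blocking:
#
# lazyfree-lazy-expire yes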
678 |
679 | ############################## APPEND ONLY MODE ###############################
680 |
681 | # By default Redis asynchronously dumps the dataset on disk. This mode is
682 | # good enough in many applications, but an issue with the Redis process or
683 | # a power outage may result in a few minutes of lost writes (depending on
684 | # the configured save points).
685 | #
686 | # The Append Only File is an alternative persistence mode that provides
687 | # much better durability. For instance using the default data fsync policy
688 | # (see later in the config file) Redis can lose just one second of writes in a
689 | # dramatic event like a server power outage, or a single write if something
690 | # wrong with the Redis process itself happens, but the operating system is
691 | # still running correctly.
692 | #
693 | # AOF and RDB persistence can be enabled at the same time without problems.
694 | # If the AOF is enabled on startup Redis will load the AOF, that is the file
695 | # with the better durability guarantees.
696 | #
697 | # Please check http://redis.io/topics/persistence for more information.
698 |
699 | appendonly no
700 |
701 | # The name of the append only file (default: "appendonly.aof")
702 |
703 | appendfilename "appendonly.aof"
704 |
705 | # The fsync() call tells the Operating System to actually write data on disk
706 | # instead of waiting for more data in the output buffer. Some OSes will really
707 | # flush data to disk, while others will just try to do it ASAP.
708 | #
709 | # Redis supports three different modes:
710 | #
711 | # no: don't fsync, just let the OS flush the data when it wants. Faster.
712 | # always: fsync after every write to the append only log. Slow, safest.
713 | # everysec: fsync only one time every second. Compromise.
714 | #
715 | # The default is "everysec", as that's usually the right compromise between
716 | # speed and data safety. It's up to you to understand if you can relax this to
717 | # "no", which will let the operating system flush the output buffer when
718 | # it wants, for better performance (but if you can live with the idea of
719 | # some data loss consider the default persistence mode that's snapshotting),
720 | # or on the contrary, use "always", which is very slow but a bit safer than
721 | # everysec.
722 | #
723 | # For more details, please check the following article:
724 | # http://antirez.com/post/redis-persistence-demystified.html
725 | #
726 | # If unsure, use "everysec".
727 |
728 | # appendfsync always
729 | appendfsync everysec
730 | # appendfsync no
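#
# As an illustrative aside (not part of the stock configuration), the fsync
# policy can also be inspected and changed at runtime with redis-cli, without
# restarting the server:
#
#   redis-cli CONFIG GET appendfsync
#   redis-cli CONFIG SET appendfsync everysec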
731 |
732 | # When the AOF fsync policy is set to always or everysec, and a background
733 | # saving process (a background save or AOF log background rewriting) is
734 | # performing a lot of I/O against the disk, in some Linux configurations
735 | # Redis may block too long on the fsync() call. Note that there is no fix for
736 | # this currently, as even performing fsync in a different thread will block
737 | # our synchronous write(2) call.
738 | #
739 | # In order to mitigate this problem it's possible to use the following option
740 | # that will prevent fsync() from being called in the main process while a
741 | # BGSAVE or BGREWRITEAOF is in progress.
742 | #
743 | # This means that while another child is saving, the durability of Redis is
744 | # the same as "appendfsync none". In practical terms, this means that it is
745 | # possible to lose up to 30 seconds of log in the worst scenario (with the
746 | # default Linux settings).
747 | #
748 | # If you have latency problems turn this to "yes". Otherwise leave it as
749 | # "no" that is the safest pick from the point of view of durability.
750 |
751 | no-appendfsync-on-rewrite no
752 |
753 | # Automatic rewrite of the append only file.
754 | # Redis is able to automatically rewrite the log file implicitly calling
755 | # BGREWRITEAOF when the AOF log size grows by the specified percentage.
756 | #
757 | # This is how it works: Redis remembers the size of the AOF file after the
758 | # latest rewrite (if no rewrite has happened since the restart, the size of
759 | # the AOF at startup is used).
760 | #
761 | # This base size is compared to the current size. If the current size is
762 | # bigger than the base size by the specified percentage, the rewrite is
763 | # triggered. Also you need to specify a minimal size for the AOF file to be
764 | # rewritten; this is useful to avoid rewriting the AOF file even if the
765 | # percentage increase is reached but it is still pretty small.
766 | #
767 | # Specify a percentage of zero in order to disable the automatic AOF
768 | # rewrite feature.
769 |
770 | auto-aof-rewrite-percentage 100
771 | auto-aof-rewrite-min-size 64mb
772 |
773 | # An AOF file may be found to be truncated at the end during the Redis
774 | # startup process, when the AOF data gets loaded back into memory.
775 | # This may happen when the system where Redis is running
776 | # crashes, especially when an ext4 filesystem is mounted without the
777 | # data=ordered option (however this can't happen when Redis itself
778 | # crashes or aborts but the operating system still works correctly).
779 | #
780 | # Redis can either exit with an error when this happens, or load as much
781 | # data as possible (the default now) and start if the AOF file is found
782 | # to be truncated at the end. The following option controls this behavior.
783 | #
784 | # If aof-load-truncated is set to yes, a truncated AOF file is loaded and
785 | # the Redis server starts emitting a log to inform the user of the event.
786 | # Otherwise if the option is set to no, the server aborts with an error
787 | # and refuses to start. When the option is set to no, the user is required
788 | # to fix the AOF file using the "redis-check-aof" utility before restarting
789 | # the server.
790 | #
791 | # Note that if the AOF file is found to be corrupted in the middle,
792 | # the server will still exit with an error. This option only applies when
793 | # Redis tries to read more data from the AOF file but not enough bytes
794 | # are found.
795 | aof-load-truncated yes
796 |
797 | # When rewriting the AOF file, Redis is able to use an RDB preamble in the
798 | # AOF file for faster rewrites and recoveries. When this option is turned
799 | # on the rewritten AOF file is composed of two different stanzas:
800 | #
801 | # [RDB file][AOF tail]
802 | #
803 | # When loading Redis recognizes that the AOF file starts with the "REDIS"
804 | # string and loads the prefixed RDB file, and continues loading the AOF
805 | # tail.
806 | aof-use-rdb-preamble yes
807 |
808 | ################################ LUA SCRIPTING ###############################
809 |
810 | # Max execution time of a Lua script in milliseconds.
811 | #
812 | # If the maximum execution time is reached Redis will log that a script is
813 | # still in execution after the maximum allowed time and will start to
814 | # reply to queries with an error.
815 | #
816 | # When a long running script exceeds the maximum execution time only the
817 | # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
818 | # used to stop a script that has not yet called any write commands. The second
819 | # is the only way to shut down the server in case a write command was
820 | # already issued by the script but the user doesn't want to wait for the natural
821 | # termination of the script.
822 | #
823 | # Set it to 0 or a negative value for unlimited execution without warnings.
824 | lua-time-limit 5000
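#
# For example, a runaway script that has not yet written anything can be
# stopped from another connection with:
#
#   redis-cli SCRIPT KILL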
825 |
826 | ################################ REDIS CLUSTER ###############################
827 | #
828 | # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
829 | # WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
830 | # in order to mark it as "mature" we need to wait for a non-trivial percentage
831 | # of users to deploy it in production.
832 | # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
833 | #
834 | # Normal Redis instances can't be part of a Redis Cluster; only nodes that are
835 | # started as cluster nodes can. In order to start a Redis instance as a
836 | # cluster node, enable cluster support by uncommenting the following:
837 | #
838 | # cluster-enabled yes
839 |
840 | # Every cluster node has a cluster configuration file. This file is not
841 | # intended to be edited by hand. It is created and updated by Redis nodes.
842 | # Every Redis Cluster node requires a different cluster configuration file.
843 | # Make sure that instances running in the same system do not have
844 | # overlapping cluster configuration file names.
845 | #
846 | # cluster-config-file nodes-6379.conf
847 |
848 | # Cluster node timeout is the number of milliseconds a node must be unreachable
849 | # for it to be considered in a failure state.
850 | # Most other internal time limits are multiple of the node timeout.
851 | #
852 | # cluster-node-timeout 15000
853 |
854 | # A replica of a failing master will avoid starting a failover if its data
855 | # looks too old.
856 | #
857 | # There is no simple way for a replica to actually have an exact measure of
858 | # its "data age", so the following two checks are performed:
859 | #
860 | # 1) If there are multiple replicas able to failover, they exchange messages
861 | # in order to try to give an advantage to the replica with the best
862 | # replication offset (more data from the master processed).
863 | # Replicas will try to get their rank by offset, and apply to the start
864 | # of the failover a delay proportional to their rank.
865 | #
866 | # 2) Every single replica computes the time of the last interaction with
867 | # its master. This can be the last ping or command received (if the master
868 | # is still in the "connected" state), or the time that elapsed since the
869 | # disconnection with the master (if the replication link is currently down).
870 | # If the last interaction is too old, the replica will not try to failover
871 | # at all.
872 | #
873 | # Point "2" can be tuned by the user. Specifically, a replica will not perform
874 | # the failover if, since the last interaction with the master, the time
875 | # elapsed is greater than:
876 | #
877 | # (node-timeout * replica-validity-factor) + repl-ping-replica-period
878 | #
879 | # So for example if node-timeout is 30 seconds, and the replica-validity-factor
880 | # is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
881 | # replica will not try to failover if it was not able to talk with the master
882 | # for longer than 310 seconds.
883 | #
884 | # A large replica-validity-factor may allow replicas with too old data to fail
885 | # over a master, while a too small value may prevent the cluster from being
886 | # able to elect a replica at all.
887 | #
888 | # For maximum availability, it is possible to set the replica-validity-factor
889 | # to a value of 0, which means, that replicas will always try to failover the
890 | # master regardless of the last time they interacted with the master.
891 | # (However they'll always try to apply a delay proportional to their
892 | # offset rank).
893 | #
894 | # Zero is the only value able to guarantee that when all the partitions heal
895 | # the cluster will always be able to continue.
896 | #
897 | # cluster-replica-validity-factor 10
898 |
899 | # Cluster replicas are able to migrate to orphaned masters, that is, masters
900 | # that are left without working replicas. This improves the cluster's ability
901 | # to resist failures, as otherwise an orphaned master can't be failed over
902 | # in case of failure if it has no working replicas.
903 | #
904 | # Replicas migrate to orphaned masters only if there are still at least a
905 | # given number of other working replicas for their old master. This number
906 | # is the "migration barrier". A migration barrier of 1 means that a replica
907 | # will migrate only if there is at least 1 other working replica for its master
908 | # and so forth. It usually reflects the number of replicas you want for every
909 | # master in your cluster.
910 | #
911 | # Default is 1 (replicas migrate only if their masters remain with at least
912 | # one replica). To disable migration just set it to a very large value.
913 | # A value of 0 can be set but is useful only for debugging and dangerous
914 | # in production.
915 | #
916 | # cluster-migration-barrier 1
917 |
918 | # By default Redis Cluster nodes stop accepting queries if they detect there
919 | # is at least one hash slot uncovered (no available node is serving it).
920 | # This way if the cluster is partially down (for example a range of hash slots
921 | # is no longer covered) the whole cluster eventually becomes unavailable.
922 | # It automatically becomes available again as soon as all the slots are covered.
923 | #
924 | # However sometimes you want the subset of the cluster which is working,
925 | # to continue to accept queries for the part of the key space that is still
926 | # covered. In order to do so, just set the cluster-require-full-coverage
927 | # option to no.
928 | #
929 | # cluster-require-full-coverage yes
930 |
931 | # This option, when set to yes, prevents replicas from trying to fail over
932 | # their master during master failures. However the master can still perform a
933 | # manual failover, if forced to do so.
934 | #
935 | # This is useful in different scenarios, especially in the case of multiple
936 | # data center operations, where we want one side to never be promoted except
937 | # in the case of a total DC failure.
938 | #
939 | # cluster-replica-no-failover no
940 |
941 | # In order to setup your cluster make sure to read the documentation
942 | # available at http://redis.io web site.
943 |
944 | ########################## CLUSTER DOCKER/NAT support ########################
945 |
946 | # In certain deployments, Redis Cluster nodes address discovery fails, because
947 | # addresses are NAT-ted or because ports are forwarded (the typical case is
948 | # Docker and other containers).
949 | #
950 | # In order to make Redis Cluster work in such environments, a static
951 | # configuration where each node knows its public address is needed. The
952 | # following three options are used for this purpose, and are:
953 | #
954 | # * cluster-announce-ip
955 | # * cluster-announce-port
956 | # * cluster-announce-bus-port
957 | #
958 | # Each instructs the node about its address, client port, and cluster message
959 | # bus port. The information is then published in the header of the bus packets
960 | # so that other nodes will be able to correctly map the address of the node
961 | # publishing the information.
962 | #
963 | # If the above options are not used, the normal Redis Cluster auto-detection
964 | # will be used instead.
965 | #
966 | # Note that when remapped, the bus port may not be at the fixed offset of
967 | # client port + 10000, so you can specify any port and bus-port depending
968 | # on how they get remapped. If the bus-port is not set, a fixed offset of
969 | # 10000 will be used as usual.
970 | #
971 | # Example:
972 | #
973 | # cluster-announce-ip 10.1.1.5
974 | # cluster-announce-port 6379
975 | # cluster-announce-bus-port 6380
976 |
977 | ################################## SLOW LOG ###################################
978 |
979 | # The Redis Slow Log is a system to log queries that exceeded a specified
980 | # execution time. The execution time does not include the I/O operations
981 | # like talking with the client, sending the reply and so forth,
982 | # but just the time needed to actually execute the command (this is the only
983 | # stage of command execution where the thread is blocked and can not serve
984 | # other requests in the meantime).
985 | #
986 | # You can configure the slow log with two parameters: one tells Redis
987 | # what is the execution time, in microseconds, to exceed in order for the
988 | # command to get logged, and the other parameter is the length of the
989 | # slow log. When a new command is logged the oldest one is removed from the
990 | # queue of logged commands.
991 |
992 | # The following time is expressed in microseconds, so 1000000 is equivalent
993 | # to one second. Note that a negative number disables the slow log, while
994 | # a value of zero forces the logging of every command.
995 | slowlog-log-slower-than 10000
996 |
997 | # There is no limit to this length. Just be aware that it will consume memory.
998 | # You can reclaim memory used by the slow log with SLOWLOG RESET.
999 | slowlog-max-len 128
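#
# For example, the slow log can be inspected and cleared at runtime with
# redis-cli:
#
#   redis-cli SLOWLOG GET 10     # show the 10 most recent slow entries
#   redis-cli SLOWLOG LEN        # number of entries currently logged
#   redis-cli SLOWLOG RESET      # discard all entries and reclaim memory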
1000 |
1001 | ################################ LATENCY MONITOR ##############################
1002 |
1003 | # The Redis latency monitoring subsystem samples different operations
1004 | # at runtime in order to collect data related to possible sources of
1005 | # latency of a Redis instance.
1006 | #
1007 | # Via the LATENCY command this information is available to the user that can
1008 | # print graphs and obtain reports.
1009 | #
1010 | # The system only logs operations that were performed in a time equal or
1011 | # greater than the amount of milliseconds specified via the
1012 | # latency-monitor-threshold configuration directive. When its value is set
1013 | # to zero, the latency monitor is turned off.
1014 | #
1015 | # By default latency monitoring is disabled since it is mostly not needed
1016 | # if you don't have latency issues, and collecting data has a performance
1017 | # impact, that while very small, can be measured under big load. Latency
1018 | # monitoring can easily be enabled at runtime using the command
1019 | # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
1020 | latency-monitor-threshold 0
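#
# For example, to start monitoring events slower than 100 milliseconds at
# runtime and inspect the collected samples:
#
#   redis-cli CONFIG SET latency-monitor-threshold 100
#   redis-cli LATENCY LATEST
#   redis-cli LATENCY DOCTOR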
1021 |
1022 | ############################# EVENT NOTIFICATION ##############################
1023 |
1024 | # Redis can notify Pub/Sub clients about events happening in the key space.
1025 | # This feature is documented at http://redis.io/topics/notifications
1026 | #
1027 | # For instance if keyspace events notification is enabled, and a client
1028 | # performs a DEL operation on key "foo" stored in the Database 0, two
1029 | # messages will be published via Pub/Sub:
1030 | #
1031 | # PUBLISH __keyspace@0__:foo del
1032 | # PUBLISH __keyevent@0__:del foo
1033 | #
1034 | # It is possible to select the events that Redis will notify among a set
1035 | # of classes. Every class is identified by a single character:
1036 | #
1037 | # K Keyspace events, published with __keyspace@<db>__ prefix.
1038 | # E Keyevent events, published with __keyevent@<db>__ prefix.
1039 | # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
1040 | # $ String commands
1041 | # l List commands
1042 | # s Set commands
1043 | # h Hash commands
1044 | # z Sorted set commands
1045 | # x Expired events (events generated every time a key expires)
1046 | # e Evicted events (events generated when a key is evicted for maxmemory)
1047 | # A Alias for g$lshzxe, so that the "AKE" string means all the events.
1048 | #
1049 | # The "notify-keyspace-events" takes as argument a string that is composed
1050 | # of zero or multiple characters. The empty string means that notifications
1051 | # are disabled.
1052 | #
1053 | # Example: to enable list and generic events, from the point of view of the
1054 | # event name, use:
1055 | #
1056 | # notify-keyspace-events Elg
1057 | #
1058 | # Example 2: to get the stream of the expired keys subscribing to channel
1059 | # name __keyevent@0__:expired use:
1060 | #
1061 | # notify-keyspace-events Ex
1062 | #
1063 | # By default all notifications are disabled because most users don't need
1064 | # this feature and the feature has some overhead. Note that if you don't
1065 | # specify at least one of K or E, no events will be delivered.
1066 | notify-keyspace-events ""
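#
# As an illustrative sketch (database 0 assumed), expired-key events can be
# enabled and observed at runtime with:
#
#   redis-cli CONFIG SET notify-keyspace-events Ex
#   redis-cli PSUBSCRIBE '__keyevent@0__:expired'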
1067 |
1068 | ############################### ADVANCED CONFIG ###############################
1069 |
1070 | # Hashes are encoded using a memory efficient data structure when they have a
1071 | # small number of entries, and the biggest entry does not exceed a given
1072 | # threshold. These thresholds can be configured using the following directives.
1073 | hash-max-ziplist-entries 512
1074 | hash-max-ziplist-value 64
1075 |
1076 | # Lists are also encoded in a special way to save a lot of space.
1077 | # The number of entries allowed per internal list node can be specified
1078 | # as a fixed maximum size or a maximum number of elements.
1079 | # For a fixed maximum size, use -5 through -1, meaning:
1080 | # -5: max size: 64 Kb <-- not recommended for normal workloads
1081 | # -4: max size: 32 Kb <-- not recommended
1082 | # -3: max size: 16 Kb <-- probably not recommended
1083 | # -2: max size: 8 Kb <-- good
1084 | # -1: max size: 4 Kb <-- good
1085 | # Positive numbers mean store up to _exactly_ that number of elements
1086 | # per list node.
1087 | # The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
1088 | # but if your use case is unique, adjust the settings as necessary.
1089 | list-max-ziplist-size -2
1090 |
1091 | # Lists may also be compressed.
1092 | # Compress depth is the number of quicklist ziplist nodes from *each* side of
1093 | # the list to *exclude* from compression. The head and tail of the list
1094 | # are always uncompressed for fast push/pop operations. Settings are:
1095 | # 0: disable all list compression
1096 | # 1: depth 1 means "don't start compressing until after 1 node into the list,
1097 | # going from either the head or tail"
1098 | # So: [head]->node->node->...->node->[tail]
1099 | # [head], [tail] will always be uncompressed; inner nodes will compress.
1100 | # 2: [head]->[next]->node->node->...->node->[prev]->[tail]
1101 | # 2 here means: don't compress head or head->next or tail->prev or tail,
1102 | # but compress all nodes between them.
1103 | # 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
1104 | # etc.
1105 | list-compress-depth 0
1106 |
1107 | # Sets have a special encoding in just one case: when a set is composed
1108 | # of just strings that happen to be integers in radix 10 in the range
1109 | # of 64 bit signed integers.
1110 | # The following configuration setting sets the limit in the size of the
1111 | # set in order to use this special memory saving encoding.
1112 | set-max-intset-entries 512
1113 |
1114 | # Similarly to hashes and lists, sorted sets are also specially encoded in
1115 | # order to save a lot of space. This encoding is only used when the length and
1116 | # elements of a sorted set are below the following limits:
1117 | zset-max-ziplist-entries 128
1118 | zset-max-ziplist-value 64
1119 |
1120 | # HyperLogLog sparse representation bytes limit. The limit includes the
1121 | # 16 bytes header. When a HyperLogLog using the sparse representation crosses
1122 | # this limit, it is converted into the dense representation.
1123 | #
1124 | # A value greater than 16000 is totally useless, since at that point the
1125 | # dense representation is more memory efficient.
1126 | #
1127 | # The suggested value is ~ 3000 in order to have the benefits of
1128 | # the space efficient encoding without slowing down PFADD too much,
1129 | # which is O(N) with the sparse encoding. The value can be raised to
1130 | # ~ 10000 when CPU is not a concern, but space is, and the data set is
1131 | # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
1132 | hll-sparse-max-bytes 3000
1133 |
1134 | # Streams macro node max size / items. The stream data structure is a radix
1135 | # tree of big nodes that encode multiple items inside. Using this configuration
1136 | # it is possible to configure how big a single node can be in bytes, and the
1137 | # maximum number of items it may contain before switching to a new node when
1138 | # appending new stream entries. If any of the following settings are set to
1139 | # zero, the limit is ignored, so for instance it is possible to set just a
1140 | # max entries limit by setting max-bytes to 0 and max-entries to the desired
1141 | # value.
1142 | stream-node-max-bytes 4096
1143 | stream-node-max-entries 100
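#
# For example, to limit stream nodes only by number of entries, ignoring the
# byte size limit (illustrative, not the stock default):
#
# stream-node-max-bytes 0
# stream-node-max-entries 100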
1144 |
1145 | # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
1146 | # order to help rehashing the main Redis hash table (the one mapping top-level
1147 | # keys to values). The hash table implementation Redis uses (see dict.c)
1148 | # performs a lazy rehashing: the more operations you run against a hash table
1149 | # that is rehashing, the more rehashing "steps" are performed, so if the
1150 | # server is idle the rehashing is never complete and some more memory is used
1151 | # by the hash table.
1152 | #
1153 | # The default is to spend this millisecond 10 times every second in order to
1154 | # actively rehash the main dictionaries, freeing memory when possible.
1155 | #
1156 | # If unsure:
1157 | # use "activerehashing no" if you have hard latency requirements and it is
1158 | # not a good thing in your environment that Redis can reply from time to time
1159 | # to queries with a 2 millisecond delay.
1160 | #
1161 | # use "activerehashing yes" if you don't have such hard requirements but
1162 | # want to free memory asap when possible.
1163 | activerehashing yes
1164 |
1165 | # The client output buffer limits can be used to force disconnection of clients
1166 | # that are not reading data from the server fast enough for some reason (a
1167 | # common reason is that a Pub/Sub client can't consume messages as fast as the
1168 | # publisher can produce them).
1169 | #
1170 | # The limit can be set differently for the three different classes of clients:
1171 | #
1172 | # normal -> normal clients including MONITOR clients
1173 | # replica -> replica clients
1174 | # pubsub -> clients subscribed to at least one pubsub channel or pattern
1175 | #
1176 | # The syntax of every client-output-buffer-limit directive is the following:
1177 | #
1178 | # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
1179 | #
1180 | # A client is immediately disconnected once the hard limit is reached, or if
1181 | # the soft limit is reached and remains reached for the specified number of
1182 | # seconds (continuously).
1183 | # So for instance if the hard limit is 32 megabytes and the soft limit is
1184 | # 16 megabytes / 10 seconds, the client will get disconnected immediately
1185 | # if the size of the output buffers reaches 32 megabytes, but will also get
1186 | # disconnected if the client reaches 16 megabytes and continuously stays over
1187 | # the limit for 10 seconds.
1188 | #
1189 | # By default normal clients are not limited because they don't receive data
1190 | # without asking (in a push way), but just after a request, so only
1191 | # asynchronous clients may create a scenario where data is requested faster
1192 | # than it can be read.
1193 | #
1194 | # Instead there is a default limit for pubsub and replica clients, since
1195 | # subscribers and replicas receive data in a push fashion.
1196 | #
1197 | # Both the hard or the soft limit can be disabled by setting them to zero.
1198 | client-output-buffer-limit normal 0 0 0
1199 | client-output-buffer-limit replica 256mb 64mb 60
1200 | client-output-buffer-limit pubsub 32mb 8mb 60
1201 |
1202 | # Client query buffers accumulate new commands. They are limited to a fixed
1203 | # amount by default in order to avoid that a protocol desynchronization (for
1204 | # instance due to a bug in the client) will lead to unbound memory usage in
1205 | # the query buffer. However you can configure it here if you have very special
1206 | # needs, such as huge multi/exec requests or the like.
1207 | #
1208 | # client-query-buffer-limit 1gb
1209 |
1210 | # In the Redis protocol, bulk requests, that is, elements representing single
1211 | # strings, are normally limited to 512 mb. However you can change this limit
1212 | # here.
1213 | #
1214 | # proto-max-bulk-len 512mb
1215 |
1216 | # Redis calls an internal function to perform many background tasks, like
1217 | # closing connections of clients in timeout, purging expired keys that are
1218 | # never requested, and so forth.
1219 | #
1220 | # Not all tasks are performed with the same frequency, but Redis checks for
1221 | # tasks to perform according to the specified "hz" value.
1222 | #
1223 | # By default "hz" is set to 10. Raising the value will use more CPU when
1224 | # Redis is idle, but at the same time will make Redis more responsive when
1225 | # there are many keys expiring at the same time, and timeouts may be
1226 | # handled with more precision.
1227 | #
1228 | # The range is between 1 and 500, however a value over 100 is usually not
1229 | # a good idea. Most users should use the default of 10 and raise this up to
1230 | # 100 only in environments where very low latency is required.
1231 | hz 10
1232 |
1233 | # Normally it is useful to have an HZ value which is proportional to the
1234 | # number of clients connected. This is useful, for instance, to avoid
1235 | # processing too many clients for each background task invocation, which
1236 | # would cause latency spikes.
1237 | #
1238 | # Since the default HZ value is conservatively set to 10, Redis
1239 | # offers, and enables by default, the ability to use an adaptive HZ value
1240 | # which will temporarily rise when there are many connected clients.
1241 | #
1242 | # When dynamic HZ is enabled, the actual configured HZ will be used
1243 | # as a baseline, but multiples of the configured HZ value will be actually
1244 | # used as needed once more clients are connected. In this way an idle
1245 | # instance will use very little CPU time while a busy instance will be
1246 | # more responsive.
1247 | dynamic-hz yes
1248 |
1249 | # When a child rewrites the AOF file, if the following option is enabled
1250 | # the file will be fsync-ed every 32 MB of data generated. This is useful
1251 | # in order to commit the file to the disk more incrementally and avoid
1252 | # big latency spikes.
1253 | aof-rewrite-incremental-fsync yes
1254 |
1255 | # When redis saves RDB file, if the following option is enabled
1256 | # the file will be fsync-ed every 32 MB of data generated. This is useful
1257 | # in order to commit the file to the disk more incrementally and avoid
1258 | # big latency spikes.
1259 | rdb-save-incremental-fsync yes
1260 |
1261 | # Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
1262 | # idea to start with the default settings and only change them after investigating
1263 | # how to improve the performances and how the keys LFU change over time, which
1264 | # is possible to inspect via the OBJECT FREQ command.
1265 | #
1266 | # There are two tunable parameters in the Redis LFU implementation: the
1267 | # counter logarithm factor and the counter decay time. It is important to
1268 | # understand what the two parameters mean before changing them.
1269 | #
1270 | # The LFU counter is just 8 bits per key; its maximum value is 255, so Redis
1271 | # uses a probabilistic increment with logarithmic behavior. Given the value
1272 | # of the old counter, when a key is accessed, the counter is incremented in
1273 | # this way:
1274 | #
1275 | # 1. A random number R between 0 and 1 is extracted.
1276 | # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
1277 | # 3. The counter is incremented only if R < P.
1278 | #
1279 | # The default lfu-log-factor is 10. This is a table of how the frequency
1280 | # counter changes with a different number of accesses with different
1281 | # logarithmic factors:
1282 | #
1283 | # +--------+------------+------------+------------+------------+------------+
1284 | # | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
1285 | # +--------+------------+------------+------------+------------+------------+
1286 | # | 0 | 104 | 255 | 255 | 255 | 255 |
1287 | # +--------+------------+------------+------------+------------+------------+
1288 | # | 1 | 18 | 49 | 255 | 255 | 255 |
1289 | # +--------+------------+------------+------------+------------+------------+
1290 | # | 10 | 10 | 18 | 142 | 255 | 255 |
1291 | # +--------+------------+------------+------------+------------+------------+
1292 | # | 100 | 8 | 11 | 49 | 143 | 255 |
1293 | # +--------+------------+------------+------------+------------+------------+
1294 | #
1295 | # NOTE: The above table was obtained by running the following commands:
1296 | #
1297 | # redis-benchmark -n 1000000 incr foo
1298 | # redis-cli object freq foo
1299 | #
1300 | # NOTE 2: The counter initial value is 5 in order to give new objects a chance
1301 | # to accumulate hits.
1302 | #
1303 | # The counter decay time is the time, in minutes, that must elapse in order
1304 | # for the key counter to be divided by two (or decremented if it has a value
1305 | # <= 10).
1306 | #
1307 | # The default value for the lfu-decay-time is 1. A special value of 0 means to
1308 | # decay the counter every time it happens to be scanned.
1309 | #
1310 | # lfu-log-factor 10
1311 | # lfu-decay-time 1
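#
# As a worked example of the increment rule above: with the default
# lfu-log-factor of 10, a key whose counter is currently 100 is incremented
# with probability P = 1/(100*10+1), roughly 0.001, i.e. about once per
# thousand accesses. This is why high counters saturate so slowly in the
# table above.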
1312 |
1313 | ########################### ACTIVE DEFRAGMENTATION #######################
1314 | #
1315 | # WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
1316 | # even in production and manually tested by multiple engineers for some
1317 | # time.
1318 | #
1319 | # What is active defragmentation?
1320 | # -------------------------------
1321 | #
1322 | # Active (online) defragmentation allows a Redis server to compact the
1323 | # spaces left between small allocations and deallocations of data in memory,
1324 | # thus allowing memory to be reclaimed.
1325 | #
1326 | # Fragmentation is a natural process that happens with every allocator (but
1327 | # less so with Jemalloc, fortunately) and certain workloads. Normally a server
1328 | # restart is needed in order to lower the fragmentation, or at least to flush
1329 | # away all the data and create it again. However thanks to this feature
1330 | # implemented by Oran Agra for Redis 4.0 this process can happen at runtime
1331 | # in a "hot" way, while the server is running.
1332 | #
1333 | # Basically, when the fragmentation is over a certain level (see the
1334 | # configuration options below) Redis will start to create new copies of the
1335 | # values in contiguous memory regions by exploiting certain specific Jemalloc
1336 | # features (in order to understand if an allocation is causing fragmentation
1337 | # and to allocate it in a better place), and at the same time, will release the
1338 | # old copies of the data. This process, repeated incrementally for all the
1339 | # keys, will cause the fragmentation to drop back to normal values.
1340 | #
1341 | # Important things to understand:
1342 | #
1343 | # 1. This feature is disabled by default, and only works if you compiled Redis
1344 | # to use the copy of Jemalloc we ship with the source code of Redis.
1345 | # This is the default with Linux builds.
1346 | #
1347 | # 2. You never need to enable this feature if you don't have fragmentation
1348 | # issues.
1349 | #
1350 | # 3. Once you experience fragmentation, you can enable this feature when
1351 | # needed with the command "CONFIG SET activedefrag yes".
1352 | #
1353 | # The configuration parameters are able to fine tune the behavior of the
1354 | # defragmentation process. If you are not sure about what they mean it is
1355 | # a good idea to leave the defaults untouched.
1356 |
1357 | # Enable active defragmentation
1358 | # activedefrag yes
1359 |
1360 | # Minimum amount of fragmentation waste to start active defrag
1361 | # active-defrag-ignore-bytes 100mb
1362 |
1363 | # Minimum percentage of fragmentation to start active defrag
1364 | # active-defrag-threshold-lower 10
1365 |
1366 | # Maximum percentage of fragmentation at which we use maximum effort
1367 | # active-defrag-threshold-upper 100
1368 |
1369 | # Minimal effort for defrag in CPU percentage
1370 | # active-defrag-cycle-min 5
1371 |
1372 | # Maximal effort for defrag in CPU percentage
1373 | # active-defrag-cycle-max 75
1374 |
1375 | # Maximum number of set/hash/zset/list fields that will be processed from
1376 | # the main dictionary scan
1377 | # active-defrag-max-scan-fields 1000
1378 |
1379 |
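The four effort parameters above combine roughly like this: below `active-defrag-threshold-lower` nothing happens, at or above `active-defrag-threshold-upper` Redis spends up to `active-defrag-cycle-max` CPU, and in between the effort ramps up from `active-defrag-cycle-min`. A hypothetical Python sketch of that mapping, using simple linear interpolation for illustration (Redis's real scaling logic differs in detail):

```python
def defrag_effort(frag_pct: float,
                  lower: float = 10, upper: float = 100,
                  cycle_min: int = 5, cycle_max: int = 75) -> int:
    """Map the current fragmentation percentage to a CPU-effort percentage,
    using the defaults from the config options above."""
    if frag_pct <= lower:
        return 0              # below the lower threshold: defrag stays idle
    if frag_pct >= upper:
        return cycle_max      # at/above the upper threshold: maximum effort
    # linear interpolation between the two thresholds
    span = (frag_pct - lower) / (upper - lower)
    return int(cycle_min + span * (cycle_max - cycle_min))
```

With the defaults, 55% fragmentation (halfway between the thresholds) would map to 40% CPU effort, halfway between cycle-min and cycle-max.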
--------------------------------------------------------------------------------
/install_server.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 | set -x
5 |
6 | sudo apt-get install python3-pip virtualenv screen -y
7 |
8 | if [ -z "$VIRTUAL_ENV" ]; then
9 | virtualenv -p python3 PDNSENV
10 |     echo "export PDNS_HOME=\"$(pwd)\"" >> ./PDNSENV/bin/activate
11 | . ./PDNSENV/bin/activate
12 | fi
13 |
14 | python3 -m pip install -r requirements
15 |
16 | # REDIS #
17 | mkdir -p db
18 | test -d redis/ || git clone https://github.com/antirez/redis.git
19 | pushd redis/
20 | git checkout 5.0
21 | make
22 | popd
23 |
--------------------------------------------------------------------------------
/install_server_kvrocks.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 | set -x
5 |
6 | sudo apt-get install python3-pip virtualenv screen -y
7 |
8 | if [ -z "$VIRTUAL_ENV" ]; then
9 | virtualenv -p python3 PDNSENV
10 |     echo "export PDNS_HOME=\"$(pwd)\"" >> ./PDNSENV/bin/activate
11 | . ./PDNSENV/bin/activate
12 | fi
13 |
14 | python3 -m pip install -r requirements
15 |
16 | # REDIS #
17 | mkdir -p db
18 | test -d kvrocks/ || git clone https://github.com/apache/incubator-kvrocks.git kvrocks
19 | pushd kvrocks/
20 | git checkout 2.0
21 | make -j4
22 | popd
23 |
--------------------------------------------------------------------------------
/launch_server.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
4 |
5 | if [ -e "${DIR}/PDNSENV/bin/python" ]; then
6 | ENV_PY="${DIR}/PDNSENV/bin/python"
7 | else
8 |     echo "Please make sure you ran install_server.sh first."
9 | exit 1
10 | fi
11 |
12 | screen -dmS "pdns"
13 | sleep 0.1
14 |
15 | if [ -e "${DIR}/redis" ]; then
16 | screen -S "pdns" -X screen -t "pdns-lookup-redis" bash -c "(${DIR}/redis/src/redis-server ${DIR}/etc/redis.conf); read x;"
17 | fi
18 |
19 | if [ -e "${DIR}/kvrocks" ]; then
20 | screen -S "pdns" -X screen -t "pdns-lookup-kvrocks" bash -c "(${DIR}/kvrocks/src/kvrocks -c ${DIR}/etc/kvrocks.conf); read x;"
21 | fi
22 |
23 | screen -S "pdns" -X screen -t "pdns-cof" bash -c "(cd bin; ${ENV_PY} ./pdns-cof-server.py; read x;)"
24 | screen -S "pdns" -X screen -t "pdns-ingester" bash -c "(cd bin; ${ENV_PY} ./pdns-ingestion.py; read x;)"
25 |
26 | exit 0
27 |
--------------------------------------------------------------------------------
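The `pdns-cof-server.py` process started by `launch_server.sh` serves records in the Passive DNS Common Output Format (COF), one JSON object per line. A minimal client-side parse might look like this; the sample record and its field values are illustrative, not real data:

```python
import json

# One NDJSON line as a COF-style passive DNS record (illustrative values)
line = ('{"rrname": "example.com", "rrtype": "A", "rdata": "192.0.2.1", '
        '"time_first": 1404838400, "time_last": 1406950400, "count": 42}')

record = json.loads(line)
print(f'{record["rrname"]} {record["rrtype"]} {record["rdata"]} '
      f'(seen {record["count"]} times)')
```

For a streamed response, each line would be parsed the same way with `json.loads` in a loop, which is what the `ndjson` dependency in `requirements` wraps.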
/requirements:
--------------------------------------------------------------------------------
1 | redis
2 | iptools
3 | tornado
4 | ndjson
5 | websocket-client
6 |
--------------------------------------------------------------------------------