├── .gitignore ├── config.ini ├── .travis.yml ├── setup.py ├── LICENSE ├── README.rst └── coucharchive /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | *.py[cod] 3 | /dist/ 4 | /*.egg-info 5 | -------------------------------------------------------------------------------- /config.ini: -------------------------------------------------------------------------------- 1 | [source] 2 | url = http://root:password@localhost:5984 3 | 4 | [target] 5 | url = 6 | 7 | [replication] 8 | ignore_dbs = db_to_ignore, other_useless_db 9 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | --- 2 | language: python 3 | python: 4 | - 2.7 5 | - 3.5 6 | - 3.6 7 | - nightly 8 | install: 9 | - pip install flake8 10 | - pip install . 11 | script: 12 | - flake8 . 13 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | # Copyright 2016 Adrien Vergé 2 | # All rights reserved 3 | 4 | from setuptools import setup 5 | 6 | 7 | setup( 8 | name='coucharchive', 9 | version='2.4.0', 10 | author='Adrien Vergé', 11 | url='https://github.com/adrienverge/coucharchive', 12 | license='MIT', 13 | description=('Create and restore backups of a whole CouchDB server, with ' 14 | 'simple tar.gz files.'), 15 | 16 | scripts=['coucharchive'], 17 | install_requires=[ 18 | 'CouchDB >=1.2.0', 19 | ], 20 | ) 21 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2016 Adrien Vergé 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so, 8 | subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in all 11 | copies or substantial portions of the Software. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 15 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 16 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 17 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 18 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 19 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | coucharchive 2 | ============ 3 | 4 | Use coucharchive to create and restore backups of a whole CouchDB server / 5 | cluster of servers, or to simply replicate it to another server / cluster. 6 | 7 | Backup archives are saved as single tar.gz file, and can be imported later to 8 | any other CouchDB server. 9 | 10 | Useful for making backups, or replicating your database to a secondary test 11 | environment. 12 | 13 | Installation 14 | ------------ 15 | 16 | Using pip: 17 | 18 | .. code:: bash 19 | 20 | $ pip3 install --user coucharchive 21 | 22 | Examples 23 | -------- 24 | 25 | Make a backup archive from a running server: 26 | 27 | .. code:: bash 28 | 29 | $ coucharchive create --from http://root:password@server.com:5984 \ 30 | --output test.tar.gz 31 | 32 | Restore this archive to another server: 33 | 34 | .. code:: bash 35 | 36 | $ coucharchive restore --to http://other-server.com:5984 \ 37 | --input test.tar.gz 38 | 39 | Simply load a backup in a temporary CouchDB server: 40 | 41 | .. code:: bash 42 | 43 | $ coucharchive load -i test.tar.gz 44 | Launched CouchDB instance at http://root:L76mqQE5fE@localhost:38095 45 | 46 | Replicate from a CouchDB installation to another one: 47 | 48 | .. code:: bash 49 | 50 | $ coucharchive replicate --from http://root@server.com:5984 \ 51 | --to http://admin@other-server.com:5984 52 | 53 | Slow down replication (to decrease servers load) so that it runs in one hour: 54 | 55 | .. code:: bash 56 | 57 | $ coucharchive replicate --from http://root@server.com:5984 \ 58 | --to http://admin@other-server.com:5984 \ 59 | --ideal-duration 3600 60 | 61 | Don't pass credentials on the command line: 62 | 63 | .. code:: bash 64 | 65 | $ coucharchive create --from private.server.com:5984 -o test.tar.gz 66 | CouchDB admin for private.server.com: root 67 | CouchDB password for root@private.server.com: p4ss 68 | 69 | How do archives work? 70 | --------------------- 71 | 72 | coucharchive spawns a local CouchDB instance locally, using a temporary directory 73 | for storing data and configuration. 74 | 75 | When dumping, it replicates your source CouchDB server (i.e. replicates all dbs, 76 | including `_users`) to the fresh local one, then saves and compresses its data 77 | to a tar.gz archive. 78 | 79 | When loading, it uncompresses the archive, has the local CouchDB instance use it 80 | as its data, then replicates to your remote CouchDB server. 81 | 82 | Go further 83 | ---------- 84 | 85 | For repetitive backups, you can use a config file: 86 | 87 | .. code:: bash 88 | 89 | $ cat config.ini 90 | [source] 91 | url = http://root:password@my-server:5984 92 | 93 | [replication] 94 | ignore_dbs = db_to_ignore, other_useless_db 95 | 96 | $ coucharchive -c config.ini create -o test.tar.gz 97 | 98 | To save a backup on AWS S3 and notify somebody via email: 99 | 100 | .. code:: bash 101 | 102 | $ aws s3 cp /tmp/archive.tar.gz s3://my-backups/archive.tar.gz 103 | $ cat >/tmp/email.txt < ten_min_ago] 296 | self._last_errors = [e for e in self._last_errors 297 | if e[0] > ten_min_ago] 298 | 299 | def recent_errors(self): 300 | self._clean_and_read_events() 301 | return len(self._last_errors) 302 | 303 | def _ideal_number_of_replications_for_ideal_duration(self): 304 | if not len(self._last_successes): 305 | # We do not have enough data yet to return anything useful. 306 | # Start with 4 concurrent replications: 307 | return 4 308 | else: 309 | # Compute the ideal number of concurrent replications, to finish in 310 | # ideal_duration: 311 | databases_left = (self.total_replications - 312 | self.completed_replications.value) 313 | time_left = max( 314 | 1, self._start_time + self.ideal_duration - time.time()) 315 | self.ideal_speed = databases_left / time_left 316 | self.current_avge_speed = ( 317 | len(self._last_successes) / 318 | min(time.time() - self._start_time, 5 * 60)) 319 | current_avge_replications = ( 320 | sum(e[1] for e in self._last_successes) / 321 | len(self._last_successes)) 322 | ideal_number = round(current_avge_replications * 323 | self.ideal_speed / self.current_avge_speed) 324 | 325 | # But do not increase too rapidly: never go above 2 × the best 326 | # known successful value: 327 | best_successful_number = max(e[1] for e in self._last_successes) 328 | return min(ideal_number, 2 * best_successful_number) 329 | 330 | def ideal_number_of_replications(self): 331 | self._clean_and_read_events() 332 | 333 | # Choose the value to achieve replication in ideal_duration: 334 | ideal_number = self._ideal_number_of_replications_for_ideal_duration() 335 | 336 | # ... but if there were errors, compute an age-weighted average 337 | # between past errors and ideal_number: 338 | # 339 | # ideal 340 | # target 341 | # last errors, 342 | # age-weighted + 343 | # | 344 | # + | new target = weighted average 345 | # | | 346 | # + | | 347 | # + + + + + 348 | # ----------------------> t 349 | # 10 minutes 350 | now = time.time() 351 | weights = [] 352 | weighted_values = [] 353 | target = [(now, ideal_number)] 354 | # Lower last errors values a bit (× 0.9) so we stay just below the 355 | # "error zone": 356 | last_errors_lowered = [(i[0], i[1] * 0.9) for i in self._last_errors] 357 | for i in last_errors_lowered + target: 358 | age = now - i[0] 359 | # Weight for an old event is 0, weight for a recent event is 1, 360 | # between the two it is an exponential curve. 361 | weight = math.exp(- age / 120) 362 | weights.append(weight) 363 | weighted_values.append(weight * i[1]) 364 | ideal_number = sum(weighted_values) / sum(weights) 365 | 366 | # ... and stay within [1, self.max_workers] 367 | ideal_number = max(1, min(self.max_workers, ideal_number)) 368 | 369 | return ideal_number 370 | 371 | def ideal_sleep_value(self): 372 | self._clean_and_read_events() 373 | 374 | # If there was 1 error within last 5 minutes, pause for 5 seconds, 375 | # if there were 2 errors within last 5 minutes, pause for 10 seconds, 376 | # if there were 3 errors within last 5 minutes, pause for 15 seconds... 377 | wait = 5 * len(self._last_errors) 378 | return min(60, wait) # stay within [5 s, 60 s] 379 | 380 | 381 | def replicate_couchdb_server(source_url, target_url, max_workers, 382 | reuse_db_if_exist=False, 383 | ignore_dbs=[], 384 | ideal_duration=None): 385 | while source_url.endswith('/'): 386 | source_url = source_url[:-1] 387 | while target_url.endswith('/'): 388 | target_url = target_url[:-1] 389 | 390 | ignore_dbs += ('_global_changes', '_metadata', '_replicator') 391 | dbs = [db for db in list(couchdb.Server(source_url)) 392 | if db not in ignore_dbs] 393 | 394 | in_queue = multiprocessing.Queue() 395 | out_queue = multiprocessing.Queue() 396 | control = ReplicationControl(len(dbs), ideal_duration, max_workers) 397 | pool = multiprocessing.Pool(max_workers, replicate_databases, 398 | (in_queue, out_queue, control, source_url, 399 | target_url, reuse_db_if_exist)) 400 | error = None 401 | last_log = time.time() 402 | 403 | while len(dbs) and not error: 404 | ideal = control.ideal_number_of_replications() 405 | while len(dbs) and control.running_replications.value < ideal: 406 | in_queue.put(dbs.pop(0)) 407 | control.running_replications.value += 1 408 | 409 | # No more than one log per second: 410 | if time.time() > last_log + 1: 411 | last_log = time.time() 412 | logging.debug('Currently running %d replication workers' 413 | % control.running_replications.value) 414 | logging.debug('Ideal speed = ' + 415 | ('%.1f rep/s' % control.ideal_speed 416 | if ideal_duration else 'fastest') + 417 | '; current average speed = %.1f rep/s' % 418 | control.current_avge_speed) 419 | n = control.recent_errors() 420 | if n: 421 | logging.debug(('There were %d CouchDB errors encountered in ' 422 | 'the last 5 minutes') % n) 423 | 424 | time.sleep(.1) 425 | 426 | while True: 427 | try: 428 | result = out_queue.get_nowait() 429 | except queue.Empty: 430 | break 431 | 432 | if result is True: 433 | control.running_replications.value -= 1 434 | control.completed_replications.value += 1 435 | else: 436 | logging.error('A replication failed, stopping...') 437 | error = result 438 | break 439 | 440 | for i in range(max_workers): 441 | in_queue.put(None) # message to workers to say "it's over" 442 | in_queue.close() 443 | pool.close() 444 | pool.join() 445 | 446 | if error: 447 | raise error 448 | 449 | 450 | def replicate_databases(in_queue, out_queue, control, source_url, target_url, 451 | reuse_db_if_exist): 452 | # Create one TCP connection per worker process: 453 | source = couchdb.Server(source_url) 454 | target = couchdb.Server(target_url) 455 | source_host = (urlparse(source_url).netloc 456 | .rsplit('@', 1)[-1].rsplit(':', 1)[0]) 457 | source_is_local = source_host in ('localhost', '127.0.0.1', '::1') 458 | 459 | while True: 460 | try: 461 | item = in_queue.get(True, 1) 462 | except queue.Empty: 463 | continue 464 | 465 | if item is None: # message saying "it's over" 466 | break 467 | 468 | try: 469 | replicate_one_database(control, source, source_url, 470 | source_is_local, target, target_url, 471 | reuse_db_if_exist, item) 472 | out_queue.put(True) # message to report success to the parent 473 | except Exception as e: 474 | out_queue.put(e) # message to report failure to the parent 475 | raise 476 | 477 | 478 | def replicate_one_database(control, source, source_url, source_is_local, 479 | target, target_url, reuse_db_if_exist, db): 480 | retries = 10 481 | 482 | try: 483 | target.create(db) 484 | except couchdb.http.PreconditionFailed as e: 485 | if e.args[0][0] == 'file_exists' and db in ('_users',): 486 | pass 487 | elif e.args[0][0] == 'file_exists' and reuse_db_if_exist: 488 | pass 489 | else: 490 | logging.error('Failed to create database %s' % db) 491 | raise 492 | 493 | server = source if source_is_local else target 494 | server.replicate(source_url + '/' + db, target_url + '/' + db) 495 | 496 | source_db = couchdb.Database(source_url + '/' + db) 497 | target_db = couchdb.Database(target_url + '/' + db) 498 | 499 | while True: 500 | try: 501 | target_db.security = source_db.security 502 | break 503 | except socket.gaierror as e: 504 | if retries == 0: 505 | raise e 506 | control.report_error() 507 | time.sleep(control.ideal_sleep_value()) 508 | retries -= 1 509 | except couchdb.http.ServerError as e: 510 | if retries == 0: 511 | if e.args[0][1][1] in ('no_majority', 'no_ring'): 512 | logging.error('Retry with a greater ulimit (e.g. ' 513 | f'`ulimit -n {2*RECOMMENDED_MAX_NOFILE}`)') 514 | raise 515 | control.report_error() 516 | time.sleep(control.ideal_sleep_value()) 517 | retries -= 1 518 | 519 | # Check if source and target have the same number of documents. If not, it 520 | # can be explained by two reasons: source db had a new document created 521 | # during replication, or bug #1418 happened. 522 | while True: 523 | try: 524 | source_len, target_len = len(source_db), len(target_db) 525 | if source_len == target_len: 526 | break 527 | elif source_len > target_len: 528 | # Overcome bug https://github.com/apache/couchdb/issues/1418 529 | bug_1418_create_missing_documents(source_db, target_db) 530 | server.replicate(source_url + '/' + db, target_url + '/' + db) 531 | except couchdb.http.ServerError: 532 | pass 533 | 534 | if retries == 0: 535 | raise Exception( 536 | '%s: replicated database has %d docs, source has %d' 537 | % (db, target_len, source_len)) 538 | control.report_error() 539 | time.sleep(control.ideal_sleep_value()) 540 | retries -= 1 541 | 542 | logging.info('%s: done' % db) 543 | 544 | control.report_success() 545 | 546 | 547 | def bug_1418_create_missing_documents(source_db, target_db): 548 | # Temporary function to be deleted when CouchDB team fixes the bug. 549 | # It manually creates document on the target database that where 550 | # previously deleted and re-created on the source. 551 | # 552 | # If the bug #1418 occured for a document, the situation should be as 553 | # followed: on target, doc is deleted (its last revision is a tombstone 554 | # with `_deleted: true`). On source, this tombstone revision is followed by 555 | # one or more non-tombstone revisions. 556 | # 557 | # tombstone rev 558 | # first rev (last common ancestor) latest rev on source 559 | # ↘ ↓ ↙ 560 | # SOURCE o----o----o----o----o----o----o----o 561 | # TARGET o----o----o----o 562 | # ↖ 563 | # tombstone rev (latest rev on target) 564 | 565 | missing_documents = [doc for doc in source_db if doc not in target_db] 566 | 567 | for doc_id in missing_documents: 568 | source_info = source_db.get(doc_id, revs_info=True)['_revs_info'] 569 | target_doc = target_db.get(doc_id, revs=True, open_revs='all') 570 | if not target_doc: 571 | continue # not bug #1418 572 | target_doc = target_doc[0]['ok'] 573 | if not target_doc['_deleted']: 574 | continue # not bug #1418 575 | common_ancestor = target_doc['_rev'] # e.g. '6-0ebc2b61b51eb4ed' 576 | if common_ancestor not in [i['rev'] for i in source_info]: 577 | continue # not bug #1418 578 | 579 | # e.g. ['7-f031bf11190de325', '8-a50a40b72aaea825'] 580 | revisions_to_catch_up = [] 581 | for info in source_info: 582 | if info['rev'] == common_ancestor: 583 | break 584 | if info['status'] not in ('available', 'deleted'): 585 | raise Exception('bug_1418_create_missing_documents: a source ' 586 | 'rev is not available anymore! ' 587 | '%s/%s, source %s target %s' 588 | % (source_db.name, doc_id, info['rev'], 589 | common_ancestor)) 590 | revisions_to_catch_up.insert(0, info['rev']) 591 | 592 | prev_target_rev = None 593 | for rev in revisions_to_catch_up: 594 | doc = source_db.get(doc_id, rev=rev) 595 | if prev_target_rev: 596 | doc['_rev'] = prev_target_rev 597 | else: 598 | del doc['_rev'] 599 | _, target_rev = target_db.save(doc) 600 | if target_rev != rev: 601 | raise Exception('bug_1418_create_missing_documents: bug in the' 602 | 'bug-solving code! %s/%s, source %s target %s' 603 | % (source_db.name, doc_id, rev, target_rev)) 604 | if doc.get('_deleted', False): 605 | prev_target_rev = None 606 | else: 607 | prev_target_rev = rev 608 | 609 | 610 | def create(source, filename, max_workers, ignore_dbs=[], ideal_duration=None): 611 | erlang_node = 'coucharchive-%s@localhost' % ''.join( 612 | random.choice(string.ascii_letters + string.digits) for _ in range(10)) 613 | 614 | with CouchDBInstance(erlang_node) as local_couchdb: 615 | local_couchdb.start() 616 | logging.info('Launched CouchDB instance at %s' % local_couchdb.url) 617 | 618 | replicate_couchdb_server(source, local_couchdb.url, max_workers, 619 | ignore_dbs=ignore_dbs, 620 | ideal_duration=ideal_duration) 621 | 622 | local_couchdb.stop() 623 | 624 | logging.info('Creating backup archive at %s' % filename) 625 | with tarfile.open(filename, 'w:gz') as tar: 626 | tar.add(local_couchdb.confdir, arcname='etc') 627 | tar.add(local_couchdb.datadir, arcname='data') 628 | 629 | file = tarfile.TarInfo('erlang_node_name') 630 | file.size = len(erlang_node) 631 | tar.addfile(file, BytesIO(erlang_node.encode('utf-8'))) 632 | 633 | info = ( 634 | 'CouchDB backup made on %s\n' % datetime.now().isoformat() + 635 | 'with CouchDB version %s, ' % local_couchdb.version + 636 | 'info: %s\n' % local_couchdb.version_info 637 | ).encode('utf-8') 638 | file = tarfile.TarInfo('info') 639 | file.size = len(info) 640 | tar.addfile(file, BytesIO(info)) 641 | 642 | 643 | def _load_archive(filename, callback): 644 | if not os.path.isfile(filename): 645 | raise Exception('File "%s" does not exist' % filename) 646 | 647 | with tarfile.open(filename) as tar, \ 648 | tempfile.TemporaryDirectory(prefix='coucharchive-') as tmp: 649 | logging.info('Extracting backup archive from %s' % filename) 650 | tar.extractall(path=tmp) 651 | 652 | if os.path.isfile(tmp + '/erlang_node_name'): 653 | with open(tmp + '/erlang_node_name', 'r') as f: 654 | erlang_node = f.read().strip() 655 | else: # for archives made before coucharchive 1.2.1 656 | erlang_node = 'coucharchive@localhost' 657 | 658 | with CouchDBInstance(erlang_node) as local_couchdb: 659 | os.rmdir(local_couchdb.datadir) 660 | os.rename(tmp + '/data', local_couchdb.datadir) 661 | 662 | local_couchdb.start() 663 | logging.info('Launched CouchDB instance at %s' % local_couchdb.url) 664 | 665 | callback(local_couchdb.url) 666 | 667 | 668 | def load(filename): 669 | def callback(local_couch_server_url): 670 | logging.info('Ready!') 671 | try: 672 | time.sleep(365 * 24 * 3600) 673 | except KeyboardInterrupt: 674 | pass 675 | 676 | _load_archive(filename, callback) 677 | 678 | 679 | def restore(target, filename, max_workers, reuse_db_if_exist=False, 680 | ignore_dbs=[], ideal_duration=None): 681 | def callback(local_couch_server_url): 682 | replicate_couchdb_server(local_couch_server_url, target, max_workers, 683 | reuse_db_if_exist=reuse_db_if_exist, 684 | ignore_dbs=ignore_dbs, 685 | ideal_duration=ideal_duration) 686 | 687 | _load_archive(filename, callback) 688 | 689 | 690 | def main(): 691 | # Get action and archive file from command line 692 | parser = argparse.ArgumentParser() 693 | parser.add_argument('-v', '--verbose', action='count', 694 | help='be more verbose (can be used multiple times)') 695 | parser.add_argument('-q', '--quiet', action='count', 696 | help='be more quiet (can be used multiple times)') 697 | parser.add_argument('-c', '--config', dest='config_file', 698 | action='store', help='path to config file') 699 | subparsers = parser.add_subparsers(dest='action') 700 | sub = {} 701 | 702 | def check_ideal_duration(value): 703 | if value == 'fastest': 704 | return 0 705 | if value.isdigit() and int(value) >= 0: 706 | return int(value) 707 | raise argparse.ArgumentTypeError( 708 | 'must be a positive integer or "fastest"') 709 | 710 | sub['create'] = subparsers.add_parser('create') 711 | sub['create'].add_argument( 712 | '--from', dest='source_server', action='store', 713 | help='source CouchDB server to create archive from') 714 | sub['create'].add_argument( 715 | '-o', '--output', dest='output', action='store', required=True, 716 | help='path to archive to create') 717 | sub['create'].add_argument( 718 | '--ideal-duration', dest='ideal_duration', action='store', 719 | help='optimize concurrent replications and server load to finish in N ' 720 | 'seconds', default='fastest', type=check_ideal_duration) 721 | sub['create'].add_argument( 722 | '-w', '--max-workers', dest='max_workers', action='store', default=64, 723 | help='the maximum number of concurrent replications', type=int) 724 | 725 | sub['restore'] = subparsers.add_parser('restore') 726 | sub['restore'].add_argument( 727 | '--to', dest='target_server', action='store', 728 | help='target CouchDB server to restore archive to') 729 | sub['restore'].add_argument( 730 | '-i', '--input', dest='input', action='store', required=True, 731 | help='path to archive to restore') 732 | sub['restore'].add_argument( 733 | '--reuse-db-if-exist', dest='reuse_db_if_exist', 734 | action='store_true', default=False, 735 | help='continue restoration even if database exists on target') 736 | sub['restore'].add_argument( 737 | '--ideal-duration', dest='ideal_duration', action='store', 738 | help='optimize concurrent replications and server load to finish in N ' 739 | 'seconds', default='fastest', type=check_ideal_duration) 740 | sub['restore'].add_argument( 741 | '-w', '--max-workers', dest='max_workers', action='store', default=64, 742 | help='the maximum number of concurrent replications', type=int) 743 | 744 | sub['load'] = subparsers.add_parser('load') 745 | sub['load'].add_argument( 746 | '-i', '--input', dest='input', action='store', required=True, 747 | help='path to archive to load') 748 | 749 | sub['replicate'] = subparsers.add_parser('replicate') 750 | sub['replicate'].add_argument( 751 | '--from', dest='source_server', action='store', 752 | help='source CouchDB server to replicate from') 753 | sub['replicate'].add_argument( 754 | '--to', dest='target_server', action='store', 755 | help='target CouchDB server to replicate to') 756 | sub['replicate'].add_argument( 757 | '--reuse-db-if-exist', dest='reuse_db_if_exist', 758 | action='store_true', default=False, 759 | help='continue replication even if database exists on target') 760 | sub['replicate'].add_argument( 761 | '--ideal-duration', dest='ideal_duration', action='store', 762 | help='optimize concurrent replications and server load to finish in N ' 763 | 'seconds', default='fastest', type=check_ideal_duration) 764 | sub['replicate'].add_argument( 765 | '-w', '--max-workers', dest='max_workers', action='store', default=64, 766 | help='the maximum number of concurrent replications', type=int) 767 | 768 | args = parser.parse_args() 769 | 770 | # https://docs.python.org/fr/3/library/logging.html#levels 771 | loglevel = 10 * (2 + (args.quiet or 0) - (args.verbose or 0)) 772 | logging.basicConfig(format='%(asctime)s %(levelname)s: %(message)s', 773 | datefmt="%Y-%m-%dT%H:%M:%S%z", 774 | level=loglevel) 775 | 776 | config = configparser.ConfigParser() 777 | if args.config_file: 778 | config.read(args.config_file) 779 | 780 | if args.action in ('create', 'replicate'): 781 | if not args.source_server and 'source' in config.sections(): 782 | args.source_server = config['source'].get('url', '') 783 | if not args.source_server: 784 | sub[args.action].print_help() 785 | parser.exit(1) 786 | if args.action in ('restore', 'replicate'): 787 | if not args.target_server and 'target' in config.sections(): 788 | args.target_server = config['target'].get('url', '') 789 | if not args.target_server: 790 | sub[args.action].print_help() 791 | parser.exit(1) 792 | 793 | ignore_dbs = [] 794 | if 'replication' in config.sections(): 795 | ignore_dbs = config['replication'].get('ignore_dbs', '').split(',') 796 | ignore_dbs = [db.strip() for db in ignore_dbs if db.strip()] 797 | 798 | if getattr(args, 'source_server', None): 799 | args.source_server = _canonical_couchdb_url(args.source_server) 800 | _check_couchdb_connection(args.source_server) 801 | if getattr(args, 'target_server', None): 802 | args.target_server = _canonical_couchdb_url(args.target_server) 803 | _check_couchdb_connection(args.target_server) 804 | 805 | if args.action == 'create': 806 | create(args.source_server, args.output, args.max_workers, 807 | ignore_dbs=ignore_dbs, ideal_duration=args.ideal_duration) 808 | elif args.action == 'restore': 809 | restore(args.target_server, args.input, args.max_workers, 810 | reuse_db_if_exist=args.reuse_db_if_exist, 811 | ignore_dbs=ignore_dbs, ideal_duration=args.ideal_duration) 812 | elif args.action == 'load': 813 | load(args.input) 814 | elif args.action == 'replicate': 815 | replicate_couchdb_server(args.source_server, args.target_server, 816 | args.max_workers, 817 | reuse_db_if_exist=args.reuse_db_if_exist, 818 | ignore_dbs=ignore_dbs, 819 | ideal_duration=args.ideal_duration) 820 | else: 821 | parser.print_help() 822 | parser.exit(1) 823 | 824 | 825 | if __name__ == '__main__': 826 | main() 827 | --------------------------------------------------------------------------------