├── debian ├── compat ├── source │ └── format ├── rules ├── tests │ ├── control │ └── smoke ├── gbp.conf ├── changelog ├── upstream │ └── metadata ├── autobuild.sh ├── copyright └── control ├── src ├── rdiff_backup │ ├── __init__.py │ ├── hash.py │ └── Rdiff.py ├── rdiffbackup │ ├── __init__.py │ ├── utils │ │ └── __init__.py │ ├── actions │ │ ├── info.py │ │ ├── calculate.py │ │ ├── test.py │ │ ├── server.py │ │ └── verify.py │ └── actions_mgr.py └── rdiff-backup ├── .dockerignore ├── CONTRIBUTING.adoc ├── tools ├── crossversion │ ├── host_vars │ │ └── oldrdiffbackup │ │ │ └── version.yml │ ├── .gitignore │ ├── ansible.cfg │ ├── playbook-provision.yml │ ├── README.adoc │ ├── Vagrantfile │ └── playbook-smoke-test.yml ├── windows │ ├── .gitignore │ ├── group_vars │ │ ├── samba_servers │ │ │ ├── rhbase.yml │ │ │ └── samba.yml │ │ ├── windows_builders │ │ │ ├── rdiff-backup-test.yml │ │ │ ├── rdiff-backup.yml │ │ │ └── librsync.yml │ │ └── windows_hosts │ │ │ └── generic.yml │ ├── roles │ │ ├── rh-base │ │ │ ├── meta │ │ │ │ ├── .galaxy_install_info │ │ │ │ └── main.yml │ │ │ ├── templates │ │ │ │ ├── etc_profile.d_localtime.j2 │ │ │ │ ├── etc_dnf.conf.j2 │ │ │ │ ├── etc_yum.conf.j2 │ │ │ │ ├── etc_dnf_automatic.conf.j2 │ │ │ │ ├── etc_yum_yum-cron.conf.j2 │ │ │ │ └── etc_yum_yum-cron-hourly.conf.j2 │ │ │ ├── .gitignore │ │ │ ├── handlers │ │ │ │ └── main.yml │ │ │ ├── vars │ │ │ │ ├── RedHat.yml │ │ │ │ ├── Fedora.yml │ │ │ │ └── RedHat8.yml │ │ │ ├── tasks │ │ │ │ ├── main.yml │ │ │ │ ├── auto-updates.yml │ │ │ │ ├── admin.yml │ │ │ │ ├── services.yml │ │ │ │ ├── users.yml │ │ │ │ ├── security.yml │ │ │ │ └── install.yml │ │ │ ├── LICENSE.md │ │ │ ├── .yamllint │ │ │ ├── defaults │ │ │ │ └── main.yml │ │ │ ├── files │ │ │ │ └── dynamic-motd.sh │ │ │ └── CHANGELOG.md │ │ ├── samba │ │ │ ├── meta │ │ │ │ ├── .galaxy_install_info │ │ │ │ └── main.yml │ │ │ ├── templates │ │ │ │ └── smbusers.j2 │ │ │ ├── handlers │ │ │ │ └── main.yml │ │ │ ├── .gitignore │ │ │ ├── vars 
│ │ │ │ ├── os_Archlinux.yml │ │ │ │ ├── os_Debian.yml │ │ │ │ └── os_RedHat.yml │ │ │ ├── defaults │ │ │ │ └── main.yml │ │ │ ├── .travis.yml │ │ │ └── LICENSE.md │ │ └── requirements.yml │ ├── ansible.cfg │ ├── tasks │ │ ├── get-librsync-git.yml │ │ └── get-librsync-tarball.yml │ ├── playbook-test-rdiff-backup.yml │ ├── playbook-build-librsync.yml │ ├── Vagrantfile │ ├── playbook-provision.yml │ └── playbook-build-rdiff-backup.yml ├── hook-rdiffbackup.actions_mgr.py ├── rdiff-backup.bat ├── misc │ ├── rdiff-many-files.py │ ├── README.adoc │ ├── make-many-data-files.py │ ├── librsync-many-files.py │ ├── remove-comments.py │ ├── find2dirs │ ├── create_fs_as_user.sh │ ├── rdiff-backup-wrap │ ├── setup_dev_archlinux.sh │ ├── init_files.py │ ├── generate-code-overview.sh │ └── python-rdiff ├── win_provision.sh ├── bash-completion │ ├── README.adoc │ └── rdiff-backup ├── get_changelog_since.sh ├── win_build_librsync.sh ├── win_package_rdiffbackup.sh ├── win_test_rdiffbackup.sh ├── build_wheels.sh ├── win_build_rdiffbackup.sh ├── prepare_api_doc.sh └── setup-testfiles.sh ├── docs ├── resources │ ├── logo-32.png │ ├── logo-a.png │ ├── logo-128.png │ └── logo-banner.png ├── arch │ ├── rdiff_backup_classes.sh │ ├── README.adoc │ └── locations.adoc ├── credits.adoc ├── index.adoc ├── api │ ├── v200.adoc │ └── v201.adoc ├── rdiff-backup-statistics.1 └── rdiff-backup-statistics.1.adoc ├── testing ├── makerestoretest3 ├── robusttest.py ├── test_with_profiling.py ├── server.py ├── backuptest.py ├── action_test_test.py ├── ctest.py ├── setconnectionstest.py ├── find-max-ram.py ├── api_test.py ├── regressfailedlongname.sh ├── rdb_arguments.py ├── action_backuprestore_test.py ├── user_grouptest.py ├── errorsrecovertest.py ├── FilenameMappingtest.py └── fs_abilitiestest.py ├── .gitignore ├── .github ├── ISSUE_TEMPLATE │ ├── feature_request.md │ └── bug_report.md └── workflows │ ├── test_windows.yml │ └── test_linux.yml ├── tox_dist.ini ├── tox_slow.ini ├── requirements.txt ├── 
tox_root.ini ├── Dockerfile ├── Makefile ├── tox.ini └── tox_win.ini /debian/compat: -------------------------------------------------------------------------------- 1 | 11 2 | -------------------------------------------------------------------------------- /src/rdiff_backup/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/rdiffbackup/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /.dockerignore: -------------------------------------------------------------------------------- 1 | .tox 2 | build 3 | -------------------------------------------------------------------------------- /CONTRIBUTING.adoc: -------------------------------------------------------------------------------- 1 | docs/DEVELOP.adoc -------------------------------------------------------------------------------- /debian/source/format: -------------------------------------------------------------------------------- 1 | 3.0 (native) 2 | -------------------------------------------------------------------------------- /debian/rules: -------------------------------------------------------------------------------- 1 | #!/usr/bin/make -f 2 | 3 | %: 4 | dh $@ --buildsystem=pybuild --with python3 5 | -------------------------------------------------------------------------------- /debian/tests/control: -------------------------------------------------------------------------------- 1 | Tests: smoke 2 | Depends: rdiff-backup 3 | Restrictions: allow-stderr 4 | -------------------------------------------------------------------------------- /tools/crossversion/host_vars/oldrdiffbackup/version.yml: -------------------------------------------------------------------------------- 1 | --- 2 | rdiff_backup_old_version: "2.0.5" 3 | 
-------------------------------------------------------------------------------- /tools/windows/.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant/ 2 | # ignore roles installed using ansible-galaxy 3 | roles/*.*/ 4 | -------------------------------------------------------------------------------- /tools/crossversion/.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant/ 2 | # ignore roles installed using ansible-galaxy 3 | roles/*.*/ 4 | -------------------------------------------------------------------------------- /tools/windows/group_vars/samba_servers/rhbase.yml: -------------------------------------------------------------------------------- 1 | --- 2 | rhbase_firewall_allow_services: 3 | - samba 4 | -------------------------------------------------------------------------------- /docs/resources/logo-32.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/raptor2101/rdiff-backup/master/docs/resources/logo-32.png -------------------------------------------------------------------------------- /docs/resources/logo-a.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/raptor2101/rdiff-backup/master/docs/resources/logo-a.png -------------------------------------------------------------------------------- /docs/resources/logo-128.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/raptor2101/rdiff-backup/master/docs/resources/logo-128.png -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/meta/.galaxy_install_info: -------------------------------------------------------------------------------- 1 | install_date: Wed Jun 3 06:31:27 2020 2 | version: v3.0.0 3 | 
-------------------------------------------------------------------------------- /tools/windows/roles/samba/meta/.galaxy_install_info: -------------------------------------------------------------------------------- 1 | install_date: Wed Jun 3 14:47:24 2020 2 | version: v2.7.1 3 | -------------------------------------------------------------------------------- /docs/resources/logo-banner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/raptor2101/rdiff-backup/master/docs/resources/logo-banner.png -------------------------------------------------------------------------------- /tools/windows/roles/samba/templates/smbusers.j2: -------------------------------------------------------------------------------- 1 | {% for entry in samba_username_map %} 2 | {{ entry.to }} = {{ entry.from }} 3 | {% endfor %} 4 | -------------------------------------------------------------------------------- /tools/hook-rdiffbackup.actions_mgr.py: -------------------------------------------------------------------------------- 1 | from PyInstaller.utils.hooks import collect_submodules 2 | hiddenimports = collect_submodules('rdiffbackup.actions') 3 | -------------------------------------------------------------------------------- /tools/windows/roles/requirements.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # ansible-galaxy install --roles-path roles -r roles/requirements.yml 3 | - src: bertvv.rh-base 4 | - src: bertvv.samba 5 | -------------------------------------------------------------------------------- /tools/rdiff-backup.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | REM simple wrapper script to call rdiff-backup from the repo 3 | REM d=Disk, p=(dir)path, n=name(without extension), x=extension 4 | python "%~dpn0" %* 5 | -------------------------------------------------------------------------------- 
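The PyInstaller hook `hook-rdiffbackup.actions_mgr.py` above exists because the action plugins are imported dynamically, so PyInstaller's static analysis cannot see them; `collect_submodules` enumerates them at build time instead. The same enumeration can be sketched with only the standard library — here `logging` stands in for `rdiffbackup.actions`, which only exists inside this repository:

```python
import importlib
import pkgutil

def list_submodules(package_name):
    """Enumerate a package's direct submodules, similar in spirit to
    PyInstaller's collect_submodules (non-recursive sketch)."""
    pkg = importlib.import_module(package_name)
    return sorted(
        mod.name for mod in pkgutil.iter_modules(pkg.__path__, package_name + ".")
    )

# stdlib 'logging' used as a stand-in package; the result includes
# 'logging.config' and 'logging.handlers'
print(list_submodules("logging"))
```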
/debian/gbp.conf: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | # Ignore requirement to use branch name 'debian/master' to make it easier 3 | # for contributors to work with feature and bugfix branches 4 | ignore-branch = True 5 | -------------------------------------------------------------------------------- /debian/changelog: -------------------------------------------------------------------------------- 1 | rdiff-backup (1.9.0b0) unstable; urgency=medium 2 | 3 | * Initial changelog entry for native rdiff-backup packaging. 4 | 5 | -- Otto Kekäläinen Fri, 02 Aug 2019 21:01:40 +0100 6 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/handlers/main.yml: -------------------------------------------------------------------------------- 1 | # File: roles/samba/handlers/main.yml 2 | --- 3 | - name: Restart Samba services 4 | service: 5 | name: "{{ item }}" 6 | state: restarted 7 | with_items: "{{ samba_services }}" 8 | -------------------------------------------------------------------------------- /tools/windows/ansible.cfg: -------------------------------------------------------------------------------- 1 | # config file for ansible -- https://ansible.com/ 2 | # =============================================== 3 | 4 | [defaults] 5 | 6 | inventory = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory 7 | 8 | stdout_callback = debug 9 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/templates/etc_profile.d_localtime.j2: -------------------------------------------------------------------------------- 1 | # Sets the TZ variable, which saves system calls 2 | # See: https://blog.packagecloud.io/eng/2017/02/21/set-environment-variable-save-thousands-of-system-calls/ 3 | 4 | TZ="{{ rhbase_tz }}" 5 | 6 | export TZ 7 | 
-------------------------------------------------------------------------------- /tools/crossversion/ansible.cfg: -------------------------------------------------------------------------------- 1 | # config file for ansible -- https://ansible.com/ 2 | # =============================================== 3 | 4 | [defaults] 5 | 6 | inventory = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory 7 | 8 | stdout_callback = debug 9 | -------------------------------------------------------------------------------- /debian/upstream/metadata: -------------------------------------------------------------------------------- 1 | Name: Rdiff-backup 2 | Bug-Database: https://github.com/rdiff-backup/rdiff-backup/issues 3 | Contact: https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users 4 | Repository: git://github.com/rdiff-backup/rdiff-backup.git 5 | Repository-Browse: https://github.com/rdiff-backup/rdiff-backup/ 6 | -------------------------------------------------------------------------------- /tools/windows/group_vars/windows_builders/rdiff-backup-test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # where to get the source code from 3 | rdiffbackup_files_git_repo: "https://github.com/rdiff-backup/rdiff-backup-filesrepo.git" 4 | # where to put the source code of rdiff-backup 5 | rdiffbackup_files_dir: "{{ working_dir }}/rdiff-backup-filesrepo" 6 | -------------------------------------------------------------------------------- /tools/windows/tasks/get-librsync-git.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: clone the librsync sources from Git 3 | win_command: > 4 | git.exe clone 5 | {% if librsync_git_ref is defined %}--branch {{ librsync_git_ref }}{% endif %} 6 | {{ librsync_git_repo }} 7 | "{{ librsync_dir }}" 8 | args: 9 | creates: "{{ librsync_dir }}" 10 | -------------------------------------------------------------------------------- 
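The `get-librsync-git.yml` task above renders a `git.exe clone` command in which `--branch` appears only when `librsync_git_ref` is defined, and the `creates` argument keeps the task idempotent. The command assembly can be sketched in plain Python (the URL and ref below are placeholders, not values from the repository):

```python
def git_clone_command(repo, dest, ref=None):
    """Assemble the clone invocation rendered by the Ansible task:
    --branch is inserted only when a git ref is given."""
    cmd = ["git.exe", "clone"]
    if ref is not None:
        cmd += ["--branch", ref]
    cmd += [repo, dest]
    return cmd

# hypothetical values for illustration only
print(git_clone_command("https://example.org/librsync.git",
                        "C:/Develop/librsync", ref="v2.3.2"))
```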
/src/rdiffbackup/utils/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | Utility libraries for rdiff-backup 3 | 4 | In this package are modules which wrap standard or external libraries 5 | without being really specific to rdiff-backup (they have no backup/restore 6 | context), and could be replaced at any time (i.e. they don't participate 7 | in the API of rdiff-backup). 8 | """ 9 | -------------------------------------------------------------------------------- /debian/autobuild.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Automatically update changelog with new version number 4 | VERSION="$(./setup.py --version)" 5 | dch -b -v "${VERSION}" "Automatic build" 6 | 7 | # Build package ignoring the modified changelog 8 | gbp buildpackage -us -uc --git-ignore-new 9 | 10 | # Reset debian/changelog 11 | git checkout debian/changelog 12 | -------------------------------------------------------------------------------- /tools/windows/group_vars/samba_servers/samba.yml: -------------------------------------------------------------------------------- 1 | --- 2 | samba_users: 3 | - name: vagrant 4 | password: vagrant 5 | 6 | samba_shares: 7 | - name: readonlyshare 8 | browseable: true 9 | - name: readwriteshare 10 | browseable: true 11 | group: vagrant 12 | write_list: +vagrant 13 | 14 | samba_netbios_name: "{{ inventory_hostname_short }}" 15 | -------------------------------------------------------------------------------- /tools/windows/group_vars/windows_hosts/generic.yml: -------------------------------------------------------------------------------- 1 | --- 2 | working_dir: "C:/Users/{{ ansible_user }}/Develop" 3 | # define msbuild_exe if you want to use MSBuild directly instead of CMake 4 | #msbuild_exe: "C:/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/MSBuild/15.0/Bin/MSBuild.exe" 5 | # full path to CMake command 6 | cmake_exe:
"C:/Program Files/CMake/bin/cmake.exe" 7 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/.gitignore: -------------------------------------------------------------------------------- 1 | # .gitignore 2 | 3 | # Hidden Vagrant-directory 4 | .vagrant 5 | 6 | # Backup files (e.g. Vim, Gedit, etc.) 7 | *~ 8 | 9 | # Vagrant base boxes (you never know when someone puts one in the repository) 10 | *.box 11 | 12 | # Compiled Python 13 | *.pyc 14 | 15 | # Test directories. These are Git worktrees to separate test branches 16 | *-tests/ 17 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/.gitignore: -------------------------------------------------------------------------------- 1 | # .gitignore 2 | 3 | # Hidden Vagrant-directory 4 | .vagrant 5 | 6 | # Backup files (e.g. Vim, Gedit, etc.) 7 | *~ 8 | 9 | # Vagrant base boxes (you never know when someone puts one in the repository) 10 | *.box 11 | 12 | # Python artefacts 13 | .ropeproject 14 | *.pyc 15 | 16 | # Ignore test directory 17 | tests/ 18 | vagrant-tests/ 19 | docker-tests/ 20 | -------------------------------------------------------------------------------- /debian/tests/smoke: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | set -ex 3 | 4 | # dep8 smoke test for rdiff-backup 5 | # Author: Otto Kekäläinen 6 | # 7 | # This very simple test just checks that the binary starts and prints out 8 | # the usual complaint: 9 | # Fatal Error: No arguments given 10 | # See the rdiff-backup manual page for more information. 
11 | 12 | rdiff-backup --print-statistics 2>&1 | grep 'No arguments given' 13 | -------------------------------------------------------------------------------- /testing/makerestoretest3: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # This script will create the testing/restoretest3 directory as it 4 | # needs to be for one of the tests in restoretest.py to work. 5 | 6 | OLDTESTDIR=$(dirname $0)/../../rdiff-backup_testfiles 7 | rm -rvf ${OLDTESTDIR}/restoretest3 8 | for i in 1 2 3 4 9 | do 10 | rdiff-backup --current-time $((i * 10000)) \ 11 | ${OLDTESTDIR}/increment${i} ${OLDTESTDIR}/restoretest3 12 | done 13 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/templates/etc_dnf.conf.j2: -------------------------------------------------------------------------------- 1 | [main] 2 | gpgcheck={{ rhbase_repo_gpgcheck }} 3 | installonly_limit={{ rhbase_repo_installonly_limit }} 4 | clean_requirements_on_remove={{ rhbase_repo_remove_dependencies }} 5 | 6 | {% if rhbase_repo_exclude_from_update|length != 0 %} 7 | exclude={% for item in rhbase_repo_exclude_from_update %}{{ item }} {%endfor %} 8 | {% endif %} 9 | 10 | ## vim: ft=dosini 11 | 12 | -------------------------------------------------------------------------------- /tools/misc/rdiff-many-files.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Run rdiff to transform everything in one dir to another""" 3 | 4 | import sys 5 | import os 6 | 7 | dir1, dir2 = sys.argv[1:3] 8 | for i in range(1000): 9 | assert not os.system("rdiff signature %s/%s sig" % (dir1, i)) 10 | assert not os.system("rdiff delta sig %s/%s diff" % (dir2, i)) 11 | assert not os.system( 12 | "rdiff patch %s/%s diff %s/%s.out" % (dir1, i, dir1, i)) 13 | -------------------------------------------------------------------------------- 
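The `rdiff-many-files.py` script above checks each `rdiff` invocation by asserting on the `os.system` return value. An alternative sketch (not part of the repository) does the same with `subprocess`, which raises a descriptive exception instead of a bare assertion failure:

```python
import shlex
import subprocess

def run_checked(command):
    """Run a command string and fail loudly on a non-zero exit status,
    as the asserts around os.system above are meant to."""
    subprocess.run(shlex.split(command), check=True)

# e.g. the signature step for file i would become:
# run_checked("rdiff signature %s/%s sig" % (dir1, i))
```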
/tools/windows/roles/rh-base/templates/etc_yum.conf.j2: -------------------------------------------------------------------------------- 1 | [main] 2 | gpgcheck={% if rhbase_repo_gpgcheck %}1{% else %}0{% endif %} 3 | 4 | installonly_limit={{ rhbase_repo_installonly_limit }} 5 | clean_requirements_on_remove={% if rhbase_repo_remove_dependencies %}1{% else %}0{% endif %} 6 | 7 | {% if rhbase_repo_exclude_from_update|length != 0 %} 8 | exclude={% for item in rhbase_repo_exclude_from_update %}{{ item }} {%endfor %} 9 | {% endif %} 10 | 11 | ## vim: ft=dosini 12 | -------------------------------------------------------------------------------- /tools/misc/README.adoc: -------------------------------------------------------------------------------- 1 | :sectnums: 2 | :toc: 3 | 4 | = Miscellaneous files/scripts 5 | 6 | The files in this directory link:.[tools/misc] are not properly maintained and might be dangerous to use. 7 | Please be careful! 8 | 9 | Each script has a small purpose description at the top. 10 | 11 | ____ 12 | *NOTE:* If you'd like to make a script more "official", make sure that it's properly and automatically tested, and move it to the link:..[tools] directory or one of its sub-directories.
13 | ____ 14 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/handlers/main.yml: -------------------------------------------------------------------------------- 1 | # roles/rh-base/handlers/main.yml 2 | --- 3 | 4 | - name: restart journald 5 | service: 6 | name: systemd-journald 7 | state: restarted 8 | 9 | - name: restart firewalld 10 | service: 11 | name: firewalld 12 | state: restarted 13 | 14 | - name: restart updates 15 | service: 16 | name: "{{ rhbase_updates_service }}" 17 | state: restarted 18 | 19 | - name: restart sshd 20 | service: 21 | name: sshd 22 | state: restarted 23 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/meta/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | galaxy_info: 3 | author: Bert Van Vreckem 4 | description: >- 5 | Ansible role for basic setup of a server with a RedHat-based Linux 6 | distribution (CentOS, Fedora, RHEL, ...) with the systemd init system. 
7 | license: BSD 8 | min_ansible_version: 2.1 9 | platforms: 10 | - name: EL 11 | versions: 12 | - 7 13 | - name: Fedora 14 | versions: 15 | - 29 16 | galaxy_tags: 17 | - system 18 | dependencies: [] 19 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/vars/os_Archlinux.yml: -------------------------------------------------------------------------------- 1 | # roles/samba/vars/os_Archlinux.yml 2 | --- 3 | 4 | samba_packages: 5 | - samba 6 | - smbclient 7 | 8 | samba_vfs_packages: [] 9 | 10 | samba_selinux_packages: [] 11 | samba_selinux_booleans: [] 12 | 13 | samba_configuration_dir: /etc/samba 14 | samba_configuration: "{{ samba_configuration_dir }}/smb.conf" 15 | samba_username_map_file: "{{ samba_configuration_dir }}/smbusers" 16 | 17 | samba_services: 18 | - smbd 19 | - nmbd 20 | 21 | samba_www_documentroot: /var/www 22 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/vars/RedHat.yml: -------------------------------------------------------------------------------- 1 | # roles/rh-base/vars/RedHat.yml 2 | --- 3 | rhbase_systemd_services: 4 | - systemd-journald 5 | - systemd-tmpfiles-setup-dev 6 | - systemd-tmpfiles-setup 7 | 8 | rhbase_dependencies: 9 | - libselinux-python 10 | - libsemanage-python 11 | - firewalld 12 | 13 | rhbase_package_manager: yum 14 | rhbase_package_manager_configuration: /etc/yum.conf 15 | 16 | rhbase_updates_packages: 17 | - yum-cron 18 | rhbase_updates_service: yum-cron 19 | rhbase_updates_config: 20 | - yum-cron.conf 21 | - yum-cron-hourly.conf 22 | -------------------------------------------------------------------------------- /tools/win_provision.sh: -------------------------------------------------------------------------------- 1 | # provision Python for 32 and 64 bits in given version using Chocolatey 2 | 3 | PYTHON_VERSION=$1 4 | 5 | choco install python3 \ 6 | --version ${PYTHON_VERSION} \ 7 | --params 
"/InstallDir:C:\Python64 /InstallDir32:C:\Python32" 8 | 9 | for bits in 32 64 10 | do 11 | C:/Python${bits}/python.exe -VV 12 | C:/Python${bits}/Scripts/pip.exe install --upgrade \ 13 | pywin32 pyinstaller wheel certifi setuptools-scm tox PyYAML 14 | C:/Python${bits}/python.exe -c \ 15 | 'import pywintypes, winnt, win32api, win32security, win32file, win32con' 16 | done 17 | -------------------------------------------------------------------------------- /tools/windows/group_vars/windows_builders/rdiff-backup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # where to get the source code from 3 | rdiffbackup_git_repo: "https://github.com/rdiff-backup/rdiff-backup.git" 4 | # where to put the source code of rdiff-backup 5 | rdiffbackup_dir: "{{ working_dir }}/rdiff-backup" 6 | 7 | # the local dist directory from which to fetch rdiff-backup.exe 8 | rdiffbackup_local_dist_dir: ../../dist 9 | 10 | # Bits to be used for compiling librsync, win32 or win-amd64 11 | python_win_bits: win-amd64 12 | 13 | python_version: 3.9 14 | python_version_full: "{{ python_version }}.1" 15 | -------------------------------------------------------------------------------- /tools/bash-completion/README.adoc: -------------------------------------------------------------------------------- 1 | = Bash completion for rdiff-backup 2 | :sectnums: 3 | :toc: 4 | 5 | Install https://github.com/scop/bash-completion[bash-completion], available in most distros' repositories, and copy `rdiff-backup` to `/usr/share/bash-completion/completions/`. 6 | Be sure to keep the same file name to allow bash-completion to dynamically load it on demand. 7 | 8 | This file was originally taken from https://salsa.debian.org/python-team/applications/rdiff-backup/blob/6f71f7603b2756066e28eedeeb426334734948f9/debian/local/bash-completion[Debian rdiff-backup packaging repository].
9 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/vars/Fedora.yml: -------------------------------------------------------------------------------- 1 | # roles/rh-base/vars/Fedora.yml 2 | --- 3 | rhbase_systemd_services: 4 | - systemd-journald.service 5 | - systemd-tmpfiles-setup-dev.service 6 | - systemd-tmpfiles-setup.service 7 | 8 | rhbase_dependencies: 9 | - firewalld 10 | - python3-firewall 11 | - python3-libsemanage 12 | 13 | rhbase_package_manager: dnf 14 | rhbase_package_manager_configuration: /etc/dnf/dnf.conf 15 | 16 | rhbase_updates_packages: 17 | - dnf-automatic 18 | rhbase_updates_service: dnf-automatic.timer 19 | rhbase_updates_config: 20 | - automatic.conf 21 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/vars/RedHat8.yml: -------------------------------------------------------------------------------- 1 | # roles/rh-base/vars/RedHat8.yml 2 | --- 3 | rhbase_systemd_services: 4 | - systemd-journald.service 5 | - systemd-tmpfiles-setup-dev.service 6 | - systemd-tmpfiles-setup.service 7 | 8 | rhbase_dependencies: 9 | - firewalld 10 | - python3-firewall 11 | - python3-libsemanage 12 | 13 | rhbase_package_manager: dnf 14 | rhbase_package_manager_configuration: /etc/dnf/dnf.conf 15 | 16 | rhbase_updates_packages: 17 | - dnf-automatic 18 | rhbase_updates_service: dnf-automatic.timer 19 | rhbase_updates_config: 20 | - automatic.conf 21 | -------------------------------------------------------------------------------- /tools/misc/make-many-data-files.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Make many files consisting of data 3 | 4 | Syntax: make-many-data-files.py directory_name number_of_files character filelength""" 5 | 6 | import sys 7 | import os 8 | 9 | dirname = sys.argv[1] 10 | num_files = int(sys.argv[2]) 11 | character = sys.argv[3] 12 | filelength =
int(sys.argv[4]) 13 | 14 | os.mkdir(dirname) 15 | for i in range(num_files): 16 | fp = open("%s/%s" % (dirname, i), "w") 17 | fp.write(character * filelength) 18 | fp.close() 19 | 20 | fp = open("%s.big" % dirname, "w") 21 | fp.write(character * (filelength * num_files)) 22 | fp.close() 23 | -------------------------------------------------------------------------------- /tools/windows/tasks/get-librsync-tarball.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: compute the librsync tarball path 3 | set_fact: 4 | librsync_tarfile: "{{ working_dir }}/{{ librsync_src_tarfile | urlsplit('path') | basename }}" 5 | - name: download librsync sources 6 | win_get_url: 7 | dest: "{{ librsync_tarfile }}" 8 | url: "{{ librsync_src_tarfile }}" 9 | # the following option exists only with Ansible >= 2.9 10 | follow_redirects: "{{ ansible_version.full is version('2.9', '>=') | ternary('all', omit) }}" 11 | - name: extract librsync sources 12 | win_unzip: 13 | dest: "{{ working_dir }}" 14 | src: "{{ librsync_tarfile }}" 15 | recurse: true # else the tarfile is only ungzipped 16 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *~ 2 | .*.swp 3 | build/ 4 | dist/ 5 | __pycache__ 6 | 7 | # tox testing stuff 8 | .tox* 9 | MANIFEST 10 | 11 | # just ignore files marked local through their name 12 | *[._]local[._]* 13 | 14 | # setup.py installation files 15 | src/rdiff_backup.egg-info/ 16 | .eggs/ 17 | 18 | # Windows build file 19 | rdiff-backup.spec 20 | 21 | # Pydev project file 22 | /.project 23 | /.pydevproject 24 | /.externalToolBuilders/ 25 | 26 | # Jekyll static sites 27 | _site/ 28 | # generated files 29 | *.html 30 | *.pdf 31 | 32 | # test coverage files 33 | .coverage* 34 | 35 | # Profiling information 36 | mprofile_*.dat 37 | 38 | # Ignore Windows artefacts 39 | *.dll 40 |
-------------------------------------------------------------------------------- /tools/windows/roles/samba/vars/os_Debian.yml: -------------------------------------------------------------------------------- 1 | # roles/samba/vars/os_Debian.yml 2 | --- 3 | 4 | samba_packages: 5 | - samba-common 6 | - samba 7 | - samba-client 8 | 9 | samba_vfs_packages: 10 | - samba-vfs-modules 11 | 12 | samba_selinux_packages: [] 13 | samba_selinux_booleans: [] 14 | 15 | samba_configuration_dir: /etc/samba 16 | samba_configuration: "{{ samba_configuration_dir }}/smb.conf" 17 | samba_username_map_file: "{{ samba_configuration_dir }}/smbusers" 18 | 19 | # The name of the Samba service in older releases (Ubuntu 14.04, 20 | # Debian <8) is "samba". 21 | samba_services: 22 | - smbd 23 | - nmbd 24 | 25 | samba_www_documentroot: /var/www 26 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/meta/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | galaxy_info: 3 | author: Bert Van Vreckem 4 | description: This role installs and configures Samba as a file server. 
5 | license: BSD 6 | min_ansible_version: 2.8 7 | platforms: 8 | - name: EL 9 | versions: 10 | - 7 11 | - name: Fedora 12 | versions: 13 | - 28 14 | - name: Ubuntu 15 | versions: 16 | - xenial 17 | - bionic 18 | - name: Debian 19 | versions: 20 | - jessie 21 | - stretch 22 | - name: ArchLinux 23 | versions: 24 | - all 25 | galaxy_tags: 26 | - system 27 | - networking 28 | dependencies: [] 29 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/vars/os_RedHat.yml: -------------------------------------------------------------------------------- 1 | # roles/samba/vars/os_RedHat.yml 2 | --- 3 | 4 | samba_packages: 5 | - samba-common 6 | - samba 7 | - samba-client 8 | 9 | samba_vfs_packages: [] 10 | 11 | samba_selinux_packages: 12 | - "{{ ( ansible_distribution_major_version | int >= 8 ) | ternary('python3-libsemanage', 'libsemanage-python') }}" 13 | 14 | samba_selinux_booleans: 15 | - samba_enable_home_dirs 16 | - samba_export_all_rw 17 | 18 | samba_configuration_dir: /etc/samba 19 | samba_configuration: "{{ samba_configuration_dir }}/smb.conf" 20 | samba_username_map_file: "{{ samba_configuration_dir }}/smbusers" 21 | 22 | samba_services: 23 | - smb 24 | - nmb 25 | 26 | samba_www_documentroot: /var/www/html 27 | -------------------------------------------------------------------------------- /debian/copyright: -------------------------------------------------------------------------------- 1 | Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ 2 | Upstream-Name: rdiff-backup 3 | Upstream-Contact: rdiff-backup-users@nongnu.org 4 | Source: https://github.com/rdiff-backup/rdiff-backup 5 | 6 | Files: * 7 | Copyright: 2001-2008 Ben Escoto 8 | 2019-2020 Eric Lavarde 9 | License: GPL-2+ 10 | 11 | Files: debian/* 12 | Copyright: 2005-2009 Daniel Baumann , 13 | 2019 Otto Kekäläinen 14 | License: GPL-2+ 15 | 16 | License: GPL-2+ 17 | On Debian based systems the full text of the GNU General Public 
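In `os_RedHat.yml` above, the SELinux Python bindings package is picked with Jinja2's `ternary` filter on the distribution's major version. The same selection expressed in plain Python, as an illustrative sketch rather than repository code:

```python
def selinux_bindings_package(major_version):
    """Mirror the ternary in os_RedHat.yml: EL 8 and later ship
    python3-libsemanage, older releases libsemanage-python."""
    if int(major_version) >= 8:
        return "python3-libsemanage"
    return "libsemanage-python"

print(selinux_bindings_package("8"))  # → python3-libsemanage
print(selinux_bindings_package("7"))  # → libsemanage-python
```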
License version 18 | 2 can be found in the file `/usr/share/common-licenses/GPL-2`. 19 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/defaults/main.yml: -------------------------------------------------------------------------------- 1 | # roles/samba/defaults/main.yml 2 | --- 3 | 4 | samba_workgroup: 'WORKGROUP' 5 | samba_server_string: 'Fileserver %m' 6 | samba_log_size: 5000 7 | samba_log_level: 0 8 | samba_interfaces: [] 9 | samba_security: 'user' 10 | samba_passdb_backend: 'tdbsam' 11 | samba_map_to_guest: 'never' 12 | samba_load_printers: false 13 | samba_printer_type: 'cups' 14 | samba_cups_server: 'localhost:631' 15 | samba_load_homes: false 16 | samba_create_varwww_symlinks: false 17 | samba_shares_root: '/srv/shares' 18 | samba_shares: [] 19 | samba_users: [] 20 | 21 | samba_wins_support: 'yes' 22 | samba_local_master: 'yes' 23 | samba_domain_master: 'yes' 24 | samba_preferred_master: 'yes' 25 | samba_mitigate_cve_2017_7494: true 26 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/tasks/main.yml: -------------------------------------------------------------------------------- 1 | # roles/rhbase/tasks/main.yml 2 | --- 3 | 4 | - include_vars: "{{ item }}" 5 | with_first_found: 6 | - "{{ ansible_distribution }}.yml" 7 | - "{{ ansible_os_family }}{{ ansible_distribution_major_version }}.yml" 8 | - "{{ ansible_os_family }}.yml" 9 | tags: rhbase 10 | 11 | - include_tasks: install.yml # Install repositories and packages 12 | - include_tasks: config.yml # Configuration (/etc/) 13 | - include_tasks: services.yml # Start/stop basic services 14 | - include_tasks: security.yml # Security settings 15 | - include_tasks: users.yml # Create users 16 | - include_tasks: admin.yml # Admin user (a.o. 
SSH key) 17 | - include_tasks: auto-updates.yml # Automatic updates 18 | when: rhbase_automatic_updates 19 | -------------------------------------------------------------------------------- /testing/robusttest.py: -------------------------------------------------------------------------------- 1 | import os 2 | import unittest 3 | from rdiff_backup import robust 4 | 5 | 6 | class RobustTest(unittest.TestCase): 7 | """Test robust module""" 8 | 9 | def test_check_common_error(self): 10 | """Test capturing errors""" 11 | 12 | def cause_catchable_error(a): 13 | os.lstat("aoenuthaoeu/aosutnhcg.4fpr,38p") 14 | 15 | def cause_uncatchable_error(): 16 | ansoethusaotneuhsaotneuhsaontehuaou # noqa: F821 undefined name 17 | 18 | result = robust.check_common_error(None, cause_catchable_error, [1]) 19 | self.assertIsNone(result) 20 | with self.assertRaises(NameError): 21 | robust.check_common_error(None, cause_uncatchable_error) 22 | 23 | 24 | if __name__ == '__main__': 25 | unittest.main() 26 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: "[ENH] " 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | ## Is your feature request related to a problem? Please describe. 11 | 12 | REPLACE THIS LINE - A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 13 | 14 | ## Describe the solution you'd like 15 | 16 | REPLACE THIS LINE - A clear and concise description of what you want to happen. 17 | 18 | ## Describe alternatives you've considered 19 | 20 | REPLACE THIS LINE - A clear and concise description of any alternative solutions or features you've considered. 21 | 22 | ## Additional context 23 | 24 | REPLACE THIS LINE - Add any other context or screenshots about the feature request here. 
25 | -------------------------------------------------------------------------------- /tools/get_changelog_since.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # get a list of changes and authors since a given revision tag in Git 3 | 4 | if [[ -z "${1}" ]] 5 | then 6 | echo "Usage: $0 " >&2 7 | echo " outputs changes marked with 'XYZ:' and a unique list of authors since the tagged release" >&2 8 | exit 1 9 | fi 10 | RELTAG="${1}" 11 | 12 | echo "(make sure the version is the next correct one)" >&2 13 | echo 14 | echo "New in v$($(dirname $0)/../setup.py --version) ($(date -I))" 15 | echo "----------------------------" 16 | 17 | echo -e "\n## Changes\n" 18 | git log ${RELTAG}.. | 19 | sed -n '/^ *[A-Z][A-Z][A-Z]: / s/^ */* /p' | sort \ 20 | | fold -w 72 -s | sed 's/^\([^*]\)/ \1/' 21 | 22 | echo -e "\n## Authors\n" 23 | git log ${RELTAG}.. | 24 | awk -F': *| *<' '$1 == "Author" { print "* " $2 }' | sort -u 25 | 26 | echo 27 | -------------------------------------------------------------------------------- /testing/test_with_profiling.py: -------------------------------------------------------------------------------- 1 | import profile 2 | import pstats 3 | import os 4 | 5 | # if you need to profile a certain test, replace the "metadatatest" placeholder 6 | # with the test you want to profile and all functions will get imported into 7 | # the current namespace. 8 | # I didn't find a way to do it dynamically...
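The comment above notes that the star import could not be made dynamic. One possible workaround — a sketch only, not part of the original script; the `RDIFF_PROFILE_TEST` variable name and the `load_test_module` helper are invented here — is to resolve the module with `importlib` and copy its public names into a mapping (demonstrated below with the stdlib `unittest` module, which certainly exists):

```python
import importlib
import os


def load_test_module(default="unittest"):
    """Import the module named in the (hypothetical) RDIFF_PROFILE_TEST
    environment variable and return its public names as a dict,
    approximating `from <module> import *`."""
    name = os.environ.get("RDIFF_PROFILE_TEST", default)
    module = importlib.import_module(name)
    # honour __all__ when present, otherwise take all non-underscore names
    public = getattr(module, "__all__",
                     [n for n in dir(module) if not n.startswith("_")])
    return {n: getattr(module, n) for n in public}


# demonstration with a module that is guaranteed to be importable
namespace = load_test_module("unittest")
```

The returned mapping could then be merged into `globals()` before calling `profile.run("unittest.main()", ...)`, so the module under test is picked at run time instead of by editing the import line.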
9 | from metadatatest import * # noqa: F403, F401 star import and unused 10 | 11 | # Create a profile output filename (don't forget to adapt the name) 12 | abs_work_dir = os.getenvb( 13 | b'TOX_ENV_DIR', 14 | os.getenvb(b'VIRTUAL_ENV', os.path.join(os.getcwdb(), b'build'))) 15 | profile_output = os.path.join(abs_work_dir, b"profile-metadatatest.out") 16 | 17 | # Run and output the test profile 18 | profile.run("unittest.main()", profile_output) 19 | p = pstats.Stats(profile_output) 20 | p.sort_stats('time') 21 | p.print_stats(40) 22 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/templates/etc_dnf_automatic.conf.j2: -------------------------------------------------------------------------------- 1 | # Dnf automatic updates configuration 2 | # {{ ansible_managed }} 3 | [commands] 4 | upgrade_type = {{ rhbase_updates_type }} 5 | random_sleep = {{ rhbase_updates_random_sleep }} 6 | download_updates = {{ rhbase_updates_download }} 7 | apply_updates = {{ rhbase_updates_apply }} 8 | 9 | [emitters] 10 | system_name = {{ ansible_hostname }} 11 | emit_via = {{ rhbase_updates_emit_via }} 12 | output_width = 80 13 | 14 | [email] 15 | email_from = {{ rhbase_updates_email_from }} 16 | email_to = {{ rhbase_updates_email_to }} 17 | email_host = {{ rhbase_updates_email_host }} 18 | 19 | [command] 20 | 21 | [command_email] 22 | email_from = {{ rhbase_updates_email_from }} 23 | email_to = {{ rhbase_updates_email_to }} 24 | 25 | [base] 26 | debuglevel = {{ rhbase_updates_debuglevel }} 27 | 28 | ## vim: ft=dosini 29 | -------------------------------------------------------------------------------- /tools/win_build_librsync.sh: -------------------------------------------------------------------------------- 1 | BITS=$1 2 | LIBRSYNC_VERSION=$2 # actually the corresponding Git tag 3 | 4 | if [[ ${BITS} == *32 ]] || [[ ${BITS} == *86 ]] 5 | then 6 | bits=32 7 | lib_win_bits=Win32 8 | py_win_bits=win32 9 | elif [[ ${BITS} == 
*64 ]] 10 | then 11 | bits=64 12 | lib_win_bits=x64 13 | py_win_bits=win-amd64 14 | else 15 | echo "ERROR: bits size must be 32 or 64, not '${BITS}'." >&2 16 | exit 1 17 | fi 18 | 19 | LIBRSYNC_GIT_DIR=${HOME}/.librsync${bits} 20 | LIBRSYNC_DIR=${HOME}/librsync${bits} 21 | export LIBRSYNC_DIR 22 | 23 | git clone -b ${LIBRSYNC_VERSION} https://github.com/librsync/librsync.git ${LIBRSYNC_GIT_DIR} 24 | 25 | pushd ${LIBRSYNC_GIT_DIR} 26 | cmake -DCMAKE_INSTALL_PREFIX=${LIBRSYNC_DIR} -A ${lib_win_bits} -DBUILD_SHARED_LIBS=TRUE -DBUILD_RDIFF=OFF . 27 | cmake --build . --config Release 28 | cmake --install . --config Release 29 | popd 30 | -------------------------------------------------------------------------------- /tools/windows/group_vars/windows_builders/librsync.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Always needed, the version of librsync 3 | librsync_version: 2.2.1 4 | # where to grab the librsync tarball from 5 | librsync_src_tarfile: https://github.com/librsync/librsync/releases/download/v{{ librsync_version }}/librsync-{{ librsync_version }}.tar.gz 6 | 7 | # where to put the source code of librsync 8 | librsync_dir: "{{ working_dir }}/librsync-{{ librsync_version }}" 9 | # where to install the compiled librsync 10 | librsync_install_dir: "{{ working_dir }}/librsync.d" 11 | 12 | # Define the git_repo variable if you want to clone the source code from there 13 | #librsync_git_repo: https://github.com/librsync/librsync.git 14 | # Only needed if you want to pull something else than the default branch 15 | #librsync_git_ref: master 16 | 17 | # Bits to be used for compiling librsync, Win32 or x64 18 | librsync_win_bits: x64 19 | -------------------------------------------------------------------------------- /tox_dist.ini: -------------------------------------------------------------------------------- 1 | # tox (https://tox.readthedocs.io/) is a tool for running tests 2 | # in multiple virtualenvs. 
This configuration file will run the 3 | # test suite on all supported python versions. To use it, "pip install tox" 4 | # and then run "tox" from this directory. 5 | 6 | # Configuration file for creating binary packages, esp. wheels 7 | 8 | [tox] 9 | envlist = py36, py37, py38, py39 10 | 11 | [testenv] 12 | # make sure those variables are passed down; you should define 13 | # either explicitly the RDIFF_TEST_* variables or rely on the current 14 | # user being correctly identified (which might not happen in a container) 15 | passenv = RDIFF_TEST_* RDIFF_BACKUP_* 16 | skip_install = true 17 | deps = 18 | setuptools-scm 19 | PyYAML 20 | pyxattr 21 | pylibacl 22 | wheel 23 | commands_pre = 24 | python setup.py clean --all 25 | commands = 26 | python setup.py bdist_wheel 27 | -------------------------------------------------------------------------------- /tox_slow.ini: -------------------------------------------------------------------------------- 1 | # tox (https://tox.readthedocs.io/) is a tool for running tests 2 | # in multiple virtualenvs. This configuration file will run the 3 | # test suite on all supported python versions. To use it, "pip install tox" 4 | # and then run "tox" from this directory. 
5 | 6 | # This file is used for long running tests, keep short tests in tox.ini 7 | # Call with `tox -c tox_slow.ini` 8 | 9 | [tox] 10 | envlist = py36, py37, py38, py39 11 | 12 | [testenv] 13 | passenv = RDIFF_TEST_* RDIFF_BACKUP_* 14 | deps = 15 | importlib-metadata ~= 1.0 ; python_version < "3.8" 16 | PyYAML 17 | pyxattr 18 | pylibacl 19 | # whitelist_externals = 20 | commands_pre = 21 | rdiff-backup --version 22 | # must be the first command to setup the test environment 23 | python testing/commontest.py 24 | commands = 25 | python testing/benchmark.py many 26 | python testing/benchmark.py nested 27 | -------------------------------------------------------------------------------- /tools/win_package_rdiffbackup.sh: -------------------------------------------------------------------------------- 1 | BITS=$1 2 | PYDIRECT=$2 3 | 4 | if [[ ${BITS} == *32 ]] || [[ ${BITS} == *86 ]] 5 | then 6 | bits=32 7 | lib_win_bits=Win32 8 | py_win_bits=win32 9 | elif [[ ${BITS} == *64 ]] 10 | then 11 | bits=64 12 | lib_win_bits=x64 13 | py_win_bits=win-amd64 14 | else 15 | echo "ERROR: bits size must be 32 or 64, not '${BITS}'." 
>&2 16 | exit 1 17 | fi 18 | 19 | PYEXE=python.exe 20 | PYINST=PyInstaller.exe 21 | if [[ -n ${PYDIRECT} ]] 22 | then 23 | py_dir=C:/Python${bits} 24 | PYEXE=${py_dir}/${PYEXE} 25 | PYINST=${py_dir}/Scripts/${PYINST} 26 | fi 27 | 28 | ver_name=rdiff-backup-$(${PYEXE} setup.py --version) 29 | 30 | cp CHANGELOG.adoc COPYING README.adoc \ 31 | docs/FAQ.adoc docs/examples.adoc docs/DEVELOP.adoc \ 32 | docs/Windows-README.adoc docs/Windows-DEVELOP.adoc \ 33 | build/${ver_name}-${bits} 34 | pushd build 35 | 7z a -tzip ../dist/${ver_name}.win${bits}exe.zip ${ver_name}-${bits} 36 | popd 37 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/templates/etc_yum_yum-cron.conf.j2: -------------------------------------------------------------------------------- 1 | # Yum-cron automatic updates configuration 2 | # {{ ansible_managed }} 3 | [commands] 4 | update_cmd = {{ rhbase_updates_type }} 5 | random_sleep = {{ rhbase_updates_random_sleep }} 6 | download_updates = {{ 'yes' if rhbase_updates_download else 'no' }} 7 | apply_updates = {{ 'yes' if rhbase_updates_apply else 'no' }} 8 | update_messages = {{ 'yes' if rhbase_updates_message else 'no' }} 9 | 10 | [emitters] 11 | system_name = {{ ansible_hostname }} 12 | emit_via = {{ rhbase_updates_emit_via }} 13 | output_width = 80 14 | 15 | [email] 16 | email_from = {{ rhbase_updates_email_from }} 17 | email_to = {{ rhbase_updates_email_to }} 18 | email_host = {{ rhbase_updates_email_host }} 19 | 20 | [groups] 21 | group_list = None 22 | group_package_types = mandatory, default 23 | 24 | [base] 25 | debuglevel = -{{ rhbase_updates_debuglevel }} 26 | mdpolicy = group:main 27 | 28 | ## vim: ft=dosini 29 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/tasks/auto-updates.yml: -------------------------------------------------------------------------------- 1 | # roles/rhbase/tasks/yum-cron.yml 2 | # 3 | # Settings 
regarding yum-cron 4 | --- 5 | - name: Updates | Install automatic updates service ({{ rhbase_updates_service }}) 6 | package: 7 | name: "{{ rhbase_updates_packages }}" 8 | state: present 9 | tags: 10 | - rhbase 11 | - updates 12 | 13 | - name: Updates | Install automatic updates configuration 14 | template: 15 | src: "etc_{{ rhbase_package_manager }}_{{ item }}.j2" 16 | dest: "/etc/{{ rhbase_package_manager }}/{{ item }}" 17 | owner: root 18 | group: root 19 | mode: 0644 20 | with_items: "{{ rhbase_updates_config }}" 21 | notify: restart updates 22 | tags: 23 | - rhbase 24 | - updates 25 | 26 | - name: Updates | Ensure automatic updates service is running 27 | service: 28 | name: "{{ rhbase_updates_service }}" 29 | state: started 30 | enabled: true 31 | tags: 32 | - rhbase 33 | - updates 34 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: "[BUG]" 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | ## Bug summary 11 | 12 | REPLACE THIS LINE - A clear and concise description of what the bug is 13 | 14 | ## Version, Python, Operating System 15 | 16 | Call `rdiff-backup info` (version >= 2.1) or `rdiff-backup -v9` and replace the following line with the output, repeat for each environment impacted: 17 | 18 | ```yaml 19 | 20 | ``` 21 | 22 | ## rdiff-backup call 23 | 24 | How did you call rdiff-backup, with which parameters: 25 | 26 | ``` 27 | 28 | ``` 29 | 30 | ## What happened and what did you expect? 31 | 32 | REPLACE THIS LINE - what happened, what did you expect? 33 | 34 | ## More information 35 | 36 | If you can, please repeat the action leading to the error, adding the option `-v9` and attach the output to this bug report. 
37 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | # This file describes the (development) dependencies so that we can be warned 2 | # by GitHub dependency graph if something is compromised, see: 3 | # https://docs.github.com/en/github/visualizing-repository-data-with-graphs/about-the-dependency-graph 4 | 5 | # We don't pin versions unless we see a reason for it. Having developers with 6 | # slightly different versions of the dependencies increases our chance to 7 | # detect non working combinations. 8 | 9 | # You can also use the file to install your environment with the following command: 10 | # pip install -r requirements.txt 11 | 12 | # mandatory 13 | setuptools 14 | setuptools-scm 15 | importlib-metadata ~= 1.0 ; python_version < "3.8" 16 | PyYAML # for rdiff-backup >= 2.1 17 | 18 | # optional 19 | pylibacl 20 | pyxattr 21 | # or xattr under SuSE Linux & Co 22 | 23 | # for Windows 24 | py2exe 25 | pywin32 26 | pyinstaller 27 | certifi 28 | 29 | # purely for development and testing purposes 30 | tox 31 | flake8 32 | coverage 33 | wheel 34 | -------------------------------------------------------------------------------- /testing/server.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import sys 4 | 5 | __doc__ = """ 6 | 7 | This starts an rdiff-backup server using the existing source files. 8 | If not run from the source directory, the only argument should be 9 | the directory the source files are in. 
10 | """ 11 | 12 | 13 | def Test_SetConnGlobals(conn, setting, value): 14 | """This is used in connectiontest.py""" 15 | conn.Globals.set(setting, value) 16 | 17 | 18 | def print_usage(): 19 | print("Usage: server.py [path to source files]", __doc__) 20 | 21 | 22 | if len(sys.argv) > 2: 23 | print_usage() 24 | sys.exit(1) 25 | 26 | try: 27 | if len(sys.argv) == 2: 28 | sys.path.insert(0, sys.argv[1]) 29 | import rdiff_backup.Security 30 | from rdiff_backup.connection import PipeConnection 31 | except (OSError, ImportError): 32 | print_usage() 33 | raise 34 | 35 | rdiff_backup.Security._security_level = "override" 36 | sys.exit(PipeConnection(sys.stdin.buffer, sys.stdout.buffer).Server()) 37 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/tasks/admin.yml: -------------------------------------------------------------------------------- 1 | # roles/rhbase/tasks/admin.yml 2 | # 3 | # Settings regarding the admin user 4 | --- 5 | - name: Admin | Make sure users from the wheel group can use sudo 6 | lineinfile: 7 | dest: /etc/sudoers.d/wheel 8 | state: present 9 | create: true 10 | regexp: '^%wheel' 11 | line: '%wheel ALL=(ALL) ALL' 12 | validate: 'visudo -cf %s' 13 | tags: 14 | - rhbase 15 | - admin 16 | 17 | - name: Admin | Set attributes of sudo configuration file for wheel group 18 | file: 19 | path: /etc/sudoers.d/wheel 20 | owner: root 21 | group: root 22 | mode: 0440 23 | tags: 24 | - rhbase 25 | - admin 26 | 27 | - name: Admin | Make sure only these groups can ssh 28 | lineinfile: 29 | dest: /etc/ssh/sshd_config 30 | state: present 31 | create: true 32 | regexp: '^AllowGroups' 33 | line: "AllowGroups={{ ' '.join(rhbase_ssh_allow_groups) }}" 34 | notify: restart sshd 35 | tags: 36 | - rhbase 37 | - admin 38 | -------------------------------------------------------------------------------- /tools/misc/librsync-many-files.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """Use librsync to transform everything in one dir to another""" 3 | 4 | import sys 5 | import librsync 6 | 7 | dir1, dir2 = sys.argv[1:3] 8 | for i in range(1000): 9 | dir1fn = "%s/%s" % (dir1, i) 10 | dir2fn = "%s/%s" % (dir2, i) 11 | 12 | # Write signature file 13 | f1 = open(dir1fn, "rb") 14 | sigfile = open("sig", "wb") 15 | librsync.filesig(f1, sigfile, 2048) 16 | f1.close() 17 | sigfile.close() 18 | 19 | # Write delta file 20 | f2 = open(dir2fn, "rb") 21 | sigfile = open("sig", "rb") 22 | deltafile = open("delta", "wb") 23 | librsync.filerdelta(sigfile, f2, deltafile) 24 | f2.close() 25 | sigfile.close() 26 | deltafile.close() 27 | 28 | # Write patched file 29 | f1 = open(dir1fn, "rb") 30 | newfile = open("%s/%s.out" % (dir1, i), "wb") 31 | deltafile = open("delta", "rb") 32 | librsync.filepatch(f1, deltafile, newfile) 33 | f1.close() 34 | deltafile.close() 35 | newfile.close() 36 | -------------------------------------------------------------------------------- /tools/misc/remove-comments.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """remove-comments.py 3 | 4 | Given a python program on standard input, spit one out on stdout that 5 | should work the same, but has blank and comment lines removed.
6 | 7 | """ 8 | 9 | import sys 10 | import re 11 | 12 | triple_regex = re.compile('"""') 13 | 14 | 15 | def eattriple(initial_line_stripped): 16 | """Keep reading until end of doc string""" 17 | assert initial_line_stripped.startswith('"""') 18 | if triple_regex.search(initial_line_stripped[3:]): 19 | return 20 | while 1: 21 | line = sys.stdin.readline() 22 | if not line or triple_regex.search(line): 23 | break 24 | 25 | 26 | while 1: 27 | line = sys.stdin.readline() 28 | if not line: 29 | break 30 | stripped = line.strip() 31 | if not stripped: 32 | continue 33 | if stripped[0] == "#": 34 | continue 35 | if stripped.startswith('"""'): 36 | eattriple(stripped) 37 | continue 38 | sys.stdout.write(line) 39 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/tasks/services.yml: -------------------------------------------------------------------------------- 1 | # rhbase/tasks/services.yml 2 | # Make sure the necessary services are enabled 3 | --- 4 | - name: Services | Ensure SSH daemon is running 5 | service: 6 | name: sshd 7 | enabled: true 8 | state: started 9 | tags: 10 | - rhbase 11 | - services 12 | 13 | - name: Services | Ensure `/var/log/journal` exists 14 | file: 15 | path: /var/log/journal 16 | state: directory 17 | notify: 18 | - restart journald 19 | tags: 20 | - rhbase 21 | - services 22 | 23 | - name: Services | Ensure specified services are running 24 | service: 25 | name: "{{ item }}" 26 | enabled: true 27 | state: started 28 | with_items: "{{ rhbase_start_services }}" 29 | tags: 30 | - rhbase 31 | - services 32 | 33 | - name: Services | Ensure specified services are NOT running 34 | service: 35 | name: "{{ item }}" 36 | enabled: false 37 | state: stopped 38 | with_items: "{{ rhbase_stop_services }}" 39 | tags: 40 | - rhbase 41 | - services 42 | -------------------------------------------------------------------------------- /tools/crossversion/playbook-provision.yml: 
-------------------------------------------------------------------------------- 1 | - name: provision with an old version of rdiff-backup for cross-version tests 2 | hosts: all 3 | become: true 4 | gather_facts: false 5 | 6 | tasks: 7 | - name: install software collections (SCL) repository and Ansible dependencies 8 | package: 9 | name: 10 | - centos-release-scl 11 | - libselinux-python 12 | state: present 13 | - name: install python 3 and other build dependencies 14 | package: 15 | name: 16 | - rh-python36 17 | - gcc 18 | - libacl-devel 19 | state: present 20 | - name: install rdiff-backup and dependencies using pip within the SCL environment 21 | command: > 22 | scl enable rh-python36 23 | -- pip install --upgrade rdiff-backup=={{ rdiff_backup_old_version }} pyxattr pylibacl 24 | - name: create rdiff-backup wrapper to always use the SCL version 25 | copy: 26 | dest: /usr/bin/rdiff-backup 27 | content: | 28 | #!/bin/sh 29 | exec scl enable rh-python36 -- rdiff-backup "$@" 30 | mode: a+rx 31 | -------------------------------------------------------------------------------- /tools/win_test_rdiffbackup.sh: -------------------------------------------------------------------------------- 1 | BITS=$1 2 | PYTHON_VERSION=$2 3 | PYDIRECT=$3 4 | 5 | if [[ ${BITS} == *32 ]] || [[ ${BITS} == *86 ]] 6 | then 7 | bits=32 8 | lib_win_bits=Win32 9 | py_win_bits=win32 10 | elif [[ ${BITS} == *64 ]] 11 | then 12 | bits=64 13 | lib_win_bits=x64 14 | py_win_bits=win-amd64 15 | else 16 | echo "ERROR: bits size must be 32 or 64, not '${BITS}'." >&2 17 | exit 1 18 | fi 19 | 20 | PYEXE=python.exe 21 | if [[ -n ${PYDIRECT} ]] 22 | then 23 | py_dir=C:/Python${bits} 24 | PYEXE=${py_dir}/${PYEXE} 25 | fi 26 | 27 | LIBRSYNC_DIR=${HOME}/librsync${bits} 28 | export LIBRSYNC_DIR 29 | 30 | ver_name=rdiff-backup-$(${PYEXE} setup.py --version) 31 | py_ver_brief=${PYTHON_VERSION%.[0-9]} 32 | 33 | # Extract the test files one directory higher 34 | pushd .. 
35 | git clone https://github.com/rdiff-backup/rdiff-backup-filesrepo.git rdiff-backup-filesrepo 36 | # ignore the one "Can not create hard link" error 37 | 7z x rdiff-backup-filesrepo/rdiff-backup_testfiles.tar || true 38 | popd 39 | # Then execute the necessary tests 40 | ${PYEXE} -m tox -c tox_win.ini -e py 41 | -------------------------------------------------------------------------------- /tools/build_wheels.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | set -e -x 3 | 4 | basedir=$1 5 | plat=$2 6 | shift 2 7 | pybindirs="$@" 8 | 9 | build_dir=${basedir}/build 10 | dist_dir=${basedir}/dist 11 | 12 | # Install a system package required by our library 13 | yum install -y librsync-devel rubygems 14 | 15 | # asciidoctor 2.x isn't compatible with Ruby 1.8 16 | ruby_major=$(rpm -qi ruby | awk -F' *[:.] *' '$1=="Version" {print $2}') 17 | if [[ ${ruby_major} -lt 2 ]] 18 | then 19 | gem install asciidoctor -v 1.5.8 20 | else 21 | gem install asciidoctor 22 | fi 23 | 24 | # Compile wheels 25 | for PYBIN in $pybindirs; do 26 | "${PYBIN}/pip" install --user \ 27 | 'importlib-metadata ~= 1.0 ; python_version < "3.8"' 'PyYAML' 28 | "${PYBIN}/pip" wheel ${basedir} -w ${build_dir}/ 29 | done 30 | 31 | # Bundle external shared libraries into the wheels 32 | for whl in ${build_dir}/rdiff_backup*.whl; do 33 | auditwheel repair "$whl" --plat ${plat} -w ${dist_dir}/ 34 | done 35 | 36 | # Install packages 37 | for PYBIN in $pybindirs; do 38 | "${PYBIN}/pip" install rdiff-backup --no-index -f ${dist_dir} 39 | done 40 | -------------------------------------------------------------------------------- /tools/win_build_rdiffbackup.sh: -------------------------------------------------------------------------------- 1 | BITS=$1 2 | PYTHON_VERSION=$2 3 | PYDIRECT=$3 4 | 5 | if [[ ${BITS} == *32 ]] || [[ ${BITS} == *86 ]] 6 | then 7 | bits=32 8 | lib_win_bits=Win32 9 | py_win_bits=win32 10 | elif [[ ${BITS} == *64 ]] 11 | then 12 | 
bits=64 13 | lib_win_bits=x64 14 | py_win_bits=win-amd64 15 | else 16 | echo "ERROR: bits size must be 32 or 64, not '${BITS}'." >&2 17 | exit 1 18 | fi 19 | 20 | PYEXE=python.exe 21 | PYINST=PyInstaller.exe 22 | if [[ -n ${PYDIRECT} ]] 23 | then 24 | py_dir=C:/Python${bits} 25 | PYEXE=${py_dir}/${PYEXE} 26 | PYINST=${py_dir}/Scripts/${PYINST} 27 | fi 28 | 29 | LIBRSYNC_DIR=${HOME}/librsync${bits} 30 | export LIBRSYNC_DIR 31 | 32 | ver_name=rdiff-backup-$(${PYEXE} setup.py --version) 33 | py_ver_brief=${PYTHON_VERSION%.[0-9]} 34 | 35 | ${PYEXE} setup.py bdist_wheel 36 | ${PYINST} --onefile --distpath build/${ver_name}-${bits} \ 37 | --paths=build/lib.${py_win_bits}-${py_ver_brief} \ 38 | --additional-hooks-dir=tools \ 39 | --console build/scripts-${py_ver_brief}/rdiff-backup \ 40 | --add-data=src/rdiff_backup.egg-info/PKG-INFO\;rdiff_backup.egg-info 41 | -------------------------------------------------------------------------------- /testing/backuptest.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | from commontest import MirrorTest, old_inc1_dir, old_inc2_dir, old_inc3_dir, old_inc4_dir 3 | from rdiff_backup import Globals, SetConnections, user_group 4 | 5 | 6 | class RemoteMirrorTest(unittest.TestCase): 7 | """Test mirroring""" 8 | 9 | def setUp(self): 10 | """Start server""" 11 | Globals.change_source_perms = 1 12 | SetConnections.UpdateGlobal('checkpoint_interval', 3) 13 | user_group.init_user_mapping() 14 | user_group.init_group_mapping() 15 | 16 | def testMirror(self): 17 | """Testing simple mirror""" 18 | MirrorTest(None, None, [old_inc1_dir]) 19 | 20 | def testMirror2(self): 21 | """Test mirror with larger data set""" 22 | MirrorTest(1, None, 23 | [old_inc1_dir, old_inc2_dir, old_inc3_dir, old_inc4_dir]) 24 | 25 | def testMirror3(self): 26 | """Local version of testMirror2""" 27 | MirrorTest(1, 1, 28 | [old_inc1_dir, old_inc2_dir, old_inc3_dir, old_inc4_dir]) 29 | 30 | 31 | if __name__ == 
"__main__": 32 | unittest.main() 33 | -------------------------------------------------------------------------------- /tools/misc/find2dirs: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | from __future__ import generators 4 | import sys 5 | import os 6 | import stat 7 | 8 | 9 | def usage(): 10 | print "Usage: find2dirs dir1 dir2" 11 | print 12 | print "Given the name of two directories, list all the files in both, one" 13 | print "per line, but don't repeat a file even if it is in both directories" 14 | sys.exit(1) 15 | 16 | 17 | def getlist(base, ext=""): 18 | """Return iterator yielding filenames from directory""" 19 | if ext: yield ext 20 | else: yield "." 21 | 22 | fullname = os.path.join(base, ext) 23 | if stat.S_ISDIR(stat.S_IFMT(os.lstat(fullname)[stat.ST_MODE])): 24 | for subfile in os.listdir(fullname): 25 | for fn in getlist(base, os.path.join(ext, subfile)): 26 | yield fn 27 | 28 | 29 | def main(dir1, dir2): 30 | d = {} 31 | for fn in getlist(dir1): 32 | d[fn] = 1 33 | for fn in getlist(dir2): 34 | d[fn] = 1 35 | for fn in d.keys(): 36 | print fn 37 | 38 | 39 | if not len(sys.argv) == 3: usage() 40 | else: main(sys.argv[1], sys.argv[2]) 41 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/.travis.yml: -------------------------------------------------------------------------------- 1 | # .travis.yml Execution script for role tests on Travis-CI 2 | --- 3 | sudo: required 4 | 5 | env: 6 | matrix: 7 | - DISTRIBUTION: centos 8 | VERSION: 7 9 | - DISTRIBUTION: ubuntu 10 | VERSION: 18.04 11 | - DISTRIBUTION: debian 12 | VERSION: 9 13 | - DISTRIBUTION: fedora 14 | VERSION: 28 15 | 16 | services: 17 | - docker 18 | 19 | before_install: 20 | # Install latest Git 21 | - sudo apt-get update 22 | - sudo apt-get install --only-upgrade git 23 | - sudo apt-get install smbclient 24 | # Allow fetching other branches than master 25 | - git 
config remote.origin.fetch +refs/heads/*:refs/remotes/origin/* 26 | # Fetch the branch with test code 27 | - git fetch origin docker-tests 28 | - git worktree add docker-tests origin/docker-tests 29 | 30 | script: 31 | # Create container and apply test playbook 32 | - ./docker-tests/docker-tests.sh 33 | 34 | # Run functional tests on the container 35 | - SUT_IP=172.17.0.2 ./docker-tests/functional-tests.sh 36 | 37 | notifications: 38 | webhooks: https://galaxy.ansible.com/api/v1/notifications/ 39 | -------------------------------------------------------------------------------- /tools/windows/playbook-test-rdiff-backup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Test rdiff-backup on a prepared Windows 3 | hosts: windows_builders 4 | gather_facts: false 5 | 6 | tasks: 7 | - name: make sure working directory {{ working_dir }} exists 8 | win_file: 9 | state: directory 10 | path: "{{ working_dir }}" 11 | - name: clone the rdiff-backup testfiles from Git 12 | win_command: > 13 | git.exe clone {{ rdiffbackup_files_git_repo }} 14 | "{{ rdiffbackup_files_dir }}" 15 | args: 16 | creates: "{{ rdiffbackup_files_dir }}" 17 | 18 | - name: unpack the testfiles (one hard link failure expected) 19 | win_command: 7z x "{{ rdiffbackup_files_dir }}/rdiff-backup_testfiles.tar" 20 | args: 21 | chdir: "{{ working_dir }}" 22 | creates: "{{ working_dir }}/rdiff-backup_testfiles" 23 | ignore_errors: true # 7z fails to extract one hard link 24 | 25 | - name: test rdiff-backup using tox 26 | win_command: tox -c tox_win.ini 27 | environment: # path absolutely needs to be Windows-style 28 | LIBRSYNC_DIR: "{{ librsync_install_dir | replace('/', '\\\\') }}" 29 | args: 30 | chdir: "{{ rdiffbackup_dir }}" 31 | -------------------------------------------------------------------------------- /tools/crossversion/README.adoc: -------------------------------------------------------------------------------- 1 | = Vagrant setup for testing 
older versions of rdiff-backup 2 | :sectnums: 3 | :toc: 4 | 5 | Just call `vagrant up` to get a VM provisioned with the default version of rdiff-backup. 6 | 7 | If you want to get a different version of rdiff-backup than the default one in `host_vars`, use the following command: 8 | 9 | ---- 10 | ansible-playbook playbook-provision.yml -e rdiff_backup_old_version=2.0.3 11 | ---- 12 | 13 | To come back to the default version, just leave out the `-e xxx` option (or simply call `vagrant provision`). 14 | 15 | You may run the smoke tests using the following command and validate that nothing goes wrong: 16 | 17 | ---- 18 | ansible-playbook -v playbook-smoke-test.yml 19 | ---- 20 | 21 | You can use the current development version with something like the following, after having built from source using `./setup.py build`: 22 | 23 | ---- 24 | PATH=../../build/scripts-3.8:$PATH PYTHONPATH=../../build/lib.linux-x86_64-3.8 \ 25 | ansible-playbook -v playbook-smoke-test.yml 26 | ---- 27 | 28 | ____ 29 | *CAUTION:* the version shown locally might be DEV or any other version installed elsewhere locally, but the version used is definitely the built development version. 
30 | ____ 31 | -------------------------------------------------------------------------------- /docs/arch/rdiff_backup_classes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh -x 2 | # script to document classes from Python sources 3 | 4 | BASE_DIR=$(dirname $0)/../../src 5 | PUML_FILE=$(realpath ${0%.sh}.puml) 6 | 7 | cd ${BASE_DIR} 8 | grep -r --include \*.py ^class | sort | awk -F'[:()]*' ' 9 | BEGIN { 10 | print "@startuml" 11 | } 12 | { 13 | gsub(".py$", "", $1) 14 | gsub("^.*/", "", $1) 15 | gsub("^class *", "", $2) 16 | if ($1 != package) { 17 | if (package) { print "}" } 18 | package = $1 19 | print "package " package " {" 20 | } 21 | if ($3 == "Exception" || $2 == "Exception" || $2 ~ /Error$/) { 22 | print " class " $2 " << (E,#FF7700) Exception >>" 23 | } else { 24 | print " class " $2 25 | } 26 | # descent from Exception is marked by coloring, not by arrows 27 | if ($3 && $3 != "Exception") print " " $3 " <|-- " $2 28 | } 29 | END { 30 | if (package) print "}" 31 | print "@enduml" 32 | }' > ${PUML_FILE} 33 | 34 | plantuml -tsvg ${PUML_FILE} 35 | -------------------------------------------------------------------------------- /testing/action_test_test.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | import commontest as comtst 3 | 4 | 5 | class ActionTestTest(unittest.TestCase): 6 | """Test the "test" action functionality""" 7 | 8 | def test_action_test(self): 9 | """test the "test" action""" 10 | # the test action works with one or two locations (or more) 11 | self.assertEqual( 12 | comtst.rdiff_backup_action(False, False, b"/dummy1", b"/dummy2", 13 | (), b"test", ()), 14 | 0) 15 | self.assertEqual( 16 | comtst.rdiff_backup_action(False, True, b"/dummy1", None, 17 | (), b"test", ()), 18 | 0) 19 | # but it doesn't work with a local one 20 | self.assertNotEqual( 21 | comtst.rdiff_backup_action(False, True, b"/dummy1", b"/dummy2", 22 | (), b"test", 
()), 23 | 0) 24 | # and it doesn't work without any location 25 | self.assertNotEqual( 26 | comtst.rdiff_backup_action(True, True, None, None, 27 | (), b"test", ()), 28 | 0) 29 | 30 | 31 | if __name__ == "__main__": 32 | unittest.main() 33 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/tasks/users.yml: -------------------------------------------------------------------------------- 1 | # roles/rhbase/tasks/users.yml 2 | # 3 | # Create groups and users on the system 4 | --- 5 | - name: Users | Process groups 6 | set_fact: 7 | rhbase_user_groups: "{{ rhbase_user_groups }} + {{ item.groups|default([]) }}" 8 | with_items: "{{ rhbase_users }}" 9 | tags: 10 | - rhbase 11 | - users 12 | 13 | - name: Users | Add groups 14 | group: 15 | name: "{{ item }}" 16 | state: present 17 | with_items: "{{ rhbase_user_groups | unique }}" 18 | tags: 19 | - rhbase 20 | - users 21 | 22 | - name: Users | Add users 23 | user: 24 | name: "{{ item.name }}" 25 | state: present 26 | comment: "{{ item.comment|default('') }}" 27 | shell: "{{ item.shell|default('/bin/bash') }}" 28 | groups: "{{ item.groups | default([]) | join(',') }}" 29 | password: "{{ item.password|default('!!') }}" 30 | with_items: "{{ rhbase_users }}" 31 | tags: 32 | - rhbase 33 | - users 34 | 35 | - name: "Users | Set up SSH key for user {{ rhbase_ssh_user|default('(not specified)') }}" 36 | authorized_key: 37 | user: "{{ rhbase_ssh_user }}" 38 | key: "{{ rhbase_ssh_key }}" 39 | when: rhbase_ssh_user is defined 40 | tags: 41 | - rhbase 42 | - users 43 | -------------------------------------------------------------------------------- /src/rdiff-backup: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # rdiff-backup -- Mirror files while keeping incremental changes 3 | # Copyright (C) 2001-2005 Ben Escoto 4 | # 5 | # This program is licensed under the GNU General Public License 
(GPL). 6 | # You can redistribute it and/or modify it under the terms of the GNU 7 | # General Public License as published by the Free Software Foundation 8 | # (51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA), 9 | # either version 2 of the License, or (at your option) any later version. 10 | # Distributions of rdiff-backup should include a copy of the GPL in a 11 | # file called COPYING. The GPL is also available online at 12 | # https://www.gnu.org/copyleft/gpl.html. 13 | # 14 | # See https://rdiff-backup.net/ for more information. Please 15 | # send mail to me or the mailing list if you find bugs or have any 16 | # suggestions. 17 | 18 | import sys 19 | import rdiff_backup.Main 20 | 21 | try: 22 | import msvcrt 23 | import os 24 | 25 | msvcrt.setmode(0, os.O_BINARY) 26 | msvcrt.setmode(1, os.O_BINARY) 27 | except ImportError: 28 | pass 29 | 30 | # the __no_execute__ flag is used for tests 31 | if __name__ == "__main__" and "__no_execute__" not in globals(): 32 | rdiff_backup.Main.main_run_and_exit(sys.argv[1:]) 33 | -------------------------------------------------------------------------------- /tox_root.ini: -------------------------------------------------------------------------------- 1 | # tox (https://tox.readthedocs.io/) is a tool for running tests 2 | # in multiple virtualenvs. This configuration file will run the 3 | # test suite on all supported python versions. To use it, "pip install tox" 4 | # and then run "tox" from this directory. 5 | 6 | # Configuration file for tests as root. 7 | # Call with `sudo tox -c tox_root.ini` 8 | # Use tox_slow.ini for longer running tests and tox.ini for normal tests. 
9 | 10 | [tox] 11 | envlist = py36, py37, py38, py39 12 | # some directories to avoid creating files with root rights that 13 | # couldn't be overwritten by the non-root tests: 14 | toxworkdir = {toxinidir}/.tox.root 15 | isolated_build_env = .package.root 16 | 17 | [testenv] 18 | # make sure those variables are passed down; you should define them 19 | # either implicitly through usage of sudo (which defines the SUDO_* variables), 20 | # or explicitly the RDIFF_* ones: 21 | passenv = SUDO_USER SUDO_UID SUDO_GID RDIFF_TEST_* RDIFF_BACKUP_* 22 | deps = 23 | importlib-metadata ~= 1.0 ; python_version < "3.8" 24 | PyYAML 25 | pyxattr 26 | pylibacl 27 | #whitelist_externals = env 28 | commands_pre = 29 | rdiff-backup --version 30 | # must be the first command to setup the test environment 31 | python testing/commontest.py 32 | commands = 33 | python testing/roottest.py 34 | -------------------------------------------------------------------------------- /tools/prepare_api_doc.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # this script gathers all calls going through the client/server connection. 3 | # the result can only be used as a basis or validation for the actual API 4 | # documentation, especially because it can't tell the difference between 5 | # internal and external methods. 6 | 7 | if [ -z "$1" ] 8 | then 9 | echo "Usage: $0 XYY" >&2 10 | exit 1 11 | fi 12 | 13 | API_FILE=docs/api/v${1}.md 14 | 15 | echo -e "\n## Sources\n" > ${API_FILE}.src 16 | 17 | echo -e "\n### Internal\n" >> ${API_FILE}.src 18 | 19 | grep -ro -e 'conn\.[a-zA-Z0-9._]*' src | \ 20 | awk -F:conn. 
'{print "* `" $2 "`"}' | sort -u >> ${API_FILE}.src 21 | 22 | echo -e "\n### External\n" >> ${API_FILE}.src 23 | 24 | 25 | echo -e "\n## Testing\n" > ${API_FILE}.testing 26 | 27 | echo -e "\n### Internal\n" >> ${API_FILE}.testing 28 | echo -e "\n### External\n" >> ${API_FILE}.testing 29 | 30 | grep -ro -e 'conn\.[a-zA-Z0-9._]*' testing | \ 31 | awk -F:conn. '{print "* `" $2 "`"}' | sort -u | \ 32 | grep --fixed-strings --line-regexp --file ${API_FILE}.src \ 33 | --invert-match >> ${API_FILE}.testing 34 | 35 | echo -e "# rdiff-backup API description v${1}\n" > ${API_FILE} 36 | echo -e "\n## Format\n" >> ${API_FILE} 37 | 38 | cat ${API_FILE}.* >> ${API_FILE} 39 | 40 | rm ${API_FILE}.* 41 | 42 | echo "API description prepared at '${API_FILE}'." 43 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | # rdiff-backup build environment (for developers) 2 | FROM debian:sid 3 | 4 | # General Debian build dependencies 5 | RUN DEBIAN_FRONTEND=noninteractive apt-get update -yqq && \ 6 | apt-get install -y --no-install-recommends \ 7 | devscripts \ 8 | equivs \ 9 | curl \ 10 | ccache \ 11 | git \ 12 | git-buildpackage \ 13 | pristine-tar \ 14 | dh-python \ 15 | build-essential 16 | 17 | # Build dependencies specific for rdiff-backup 18 | RUN DEBIAN_FRONTEND=noninteractive apt-get update -yqq && \ 19 | apt-get install -y --no-install-recommends \ 20 | librsync-dev \ 21 | python3-all-dev \ 22 | python3-pylibacl \ 23 | python3-pyxattr \ 24 | asciidoctor 25 | 26 | # Build dependencies specific for rdiff-backup development and testing 27 | RUN DEBIAN_FRONTEND=noninteractive apt-get update -yqq && \ 28 | apt-get install -y --no-install-recommends \ 29 | tox \ 30 | rdiff \ 31 | python3-setuptools-scm \ 32 | # /usr/include/sys/acl.h is required by test builds 33 | libacl1-dev \ 34 | # tox_slow uses rsync for comparative benchmarking 35 | rsync 36 | 37 | # 
Tests require that there is a regular user 38 | ENV RDIFF_TEST_UID 1000 39 | ENV RDIFF_TEST_USER testuser 40 | ENV RDIFF_TEST_GROUP testuser 41 | 42 | RUN useradd -ms /bin/bash --uid ${RDIFF_TEST_UID} ${RDIFF_TEST_USER} 43 | -------------------------------------------------------------------------------- /tools/windows/roles/samba/LICENSE.md: -------------------------------------------------------------------------------- 1 | # BSD License 2 | 3 | Copyright (c) 2014, Bert Van Vreckem, (bert.vanvreckem@gmail.com) 4 | 5 | All rights reserved. 6 | 7 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 8 | 9 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 10 | 11 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 12 | 13 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
14 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/LICENSE.md: -------------------------------------------------------------------------------- 1 | # BSD License 2 | 3 | Copyright (c) 2016, Bert Van Vreckem, (bert.vanvreckem@gmail.com) 4 | 5 | All rights reserved. 6 | 7 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 8 | 9 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 10 | 11 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 12 | 13 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
14 | -------------------------------------------------------------------------------- /testing/ctest.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | from rdiff_backup import C 3 | 4 | 5 | class CTest(unittest.TestCase): 6 | """Test the C module by comparing results to python functions""" 7 | 8 | def test_sync(self): 9 | """Test running C.sync""" 10 | C.sync() 11 | 12 | def test_acl_quoting(self): 13 | """Test the acl_quote and acl_unquote functions""" 14 | self.assertEqual(C.acl_quote(b'foo'), b'foo') 15 | self.assertEqual(C.acl_quote(b'\n'), b'\\012') 16 | self.assertEqual(C.acl_unquote(b'\\012'), b'\n') 17 | s = b'\\\n\t\145\n\01==' 18 | self.assertEqual(C.acl_unquote(C.acl_quote(s)), s) 19 | 20 | def test_acl_quoting2(self): 21 | """This string used to segfault the quoting code, try now""" 22 | s = b'\xd8\xab\xb1Wb\xae\xc5]\x8a\xbb\x15v*\xf4\x0f!\xf9>\xe2Y\x86\xbb\xab\xdbp\xb0\x84\x13k\x1d\xc2\xf1\xf5e\xa5U\x82\x9aUV\xa0\xf4\xdf4\xba\xfdX\x03\x82\x07s\xce\x9e\x8b\xb34\x04\x9f\x17 \xf4\x8f\xa6\xfa\x97\xab\xd8\xac\xda\x85\xdcKvC\xfa#\x94\x92\x9e\xc9\xb7\xc3_\x0f\x84g\x9aB\x11<=^\xdbM\x13\x96c\x8b\xa7|*"\\\'^$@#!(){}?+ ~` ' 23 | quoted = C.acl_quote(s) 24 | self.assertEqual(C.acl_unquote(quoted), s) 25 | 26 | def test_acl_quoting_equals(self): 27 | """Make sure the equals character is quoted""" 28 | self.assertNotEqual(C.acl_quote(b'='), b'=') 29 | 30 | 31 | if __name__ == "__main__": 32 | unittest.main() 33 | -------------------------------------------------------------------------------- /docs/arch/README.adoc: -------------------------------------------------------------------------------- 1 | = RDIFF-BACKUP ARCHITECTURE 2 | :sectnums: 3 | :toc: 4 | 5 | This directory contains higher-level documentation about the rdiff-backup architecture. 6 | 7 | ____ 8 | *NOTE:* it is very much a work in progress as of now, reverse-engineered from the code. 
9 | ____ 10 | 11 | * The script link:rdiff_backup_classes.sh[rdiff_backup_classes.sh] generates a link:rdiff_backup_classes.puml[simple class diagram] in https://plantuml.com/class-diagram[PlantUML format], which is itself rendered into an link:rdiff_backup_classes.svg[SVG class diagram], which most browsers should be able to render properly. 12 | This diagram shows only class inheritance but is properly sorted into rdiff-backup's modules. 13 | * Generated with `pyreverse-3 -k -m y -o svg rdiff_backup` (from the `pylint` package), the link:classes.svg[pyreverse class diagram] looks less readable to me but it also shows composition/aggregation of classes. 14 | The link:packages.svg[packages diagram] generated by the same command is itself utterly useless. 15 | 16 | == Plug-in architecture 17 | 18 | Even if it might not make sense to write plug-ins for rdiff-backup, in the sense of "drop-in" external plug-ins, it definitely makes sense to define plug-in interfaces, which make it easier to disentangle the code and extend it without inadvertently breaking other aspects. 19 | 20 | The following plug-in interfaces are defined, each in its own documentation file: 21 | 22 | * xref:plugins_actions.adoc[actions plug-ins] 23 | -------------------------------------------------------------------------------- /tools/misc/create_fs_as_user.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # this script should mostly be considered an example of how to create 3 | # a file system for specific tests as a normal user. It works for vfat 4 | # (though not always), fails on the last command with exfat (use mount.exfat 5 | # instead), and doesn't seem to work for ntfs, but it is still a good basis 6 | # for tests of "exotic" file systems under Linux. 
7 | 8 | if [ -z "$1" ] || [ "$1" == '-h' ] || [ "$1" == '--help' ] 9 | then 10 | echo "$0 [-d|--delete] <loop_file> <fs_format> <fs_label>" >&2 11 | exit 0 12 | fi 13 | 14 | if [ "$1" == '-d' ] || [ "$1" == '--delete' ] 15 | then 16 | ACTION=delete 17 | shift 18 | else 19 | ACTION=create 20 | fi 21 | 22 | LOOP_FILE=$1 23 | FS_FORMAT=$2 24 | FS_LABEL=$3 25 | 26 | if [ ${ACTION} == "create" ] 27 | then 28 | fallocate -l 1m ${LOOP_FILE} 29 | udisksctl loop-setup --no-user-interaction -f ${LOOP_FILE} 30 | 31 | case ${FS_FORMAT} in 32 | vfat|exfat) 33 | mkfs.${FS_FORMAT} -n ${FS_LABEL} ${LOOP_FILE} 34 | ;; 35 | ntfs) 36 | mkfs.${FS_FORMAT} -L ${FS_LABEL} ${LOOP_FILE} 37 | ;; 38 | esac 39 | 40 | # we assume the last loop is the right one 41 | LOOP_DEV=$(ls -1v /dev/loop[0-9]* | tail -n 1) 42 | 43 | udisksctl mount --no-user-interaction -t ${FS_FORMAT} -b ${LOOP_DEV} 44 | elif [ ${ACTION} == "delete" ] 45 | then 46 | LOOP_DEV=$(ls -1v /dev/loop[0-9]* | tail -n 1) 47 | udisksctl unmount --no-user-interaction -b ${LOOP_DEV} 48 | udisksctl loop-delete --no-user-interaction -b ${LOOP_DEV} 49 | rm -f ${LOOP_FILE} 50 | fi 51 | -------------------------------------------------------------------------------- /testing/setconnectionstest.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | from rdiff_backup import SetConnections 3 | 4 | 5 | class SetConnectionsTest(unittest.TestCase): 6 | """Test the SetConnections class""" 7 | 8 | def testParsing(self): 9 | """Test parsing of various file descriptors""" 10 | 11 | pl = SetConnections.parse_location 12 | 13 | self.assertEqual(pl(b"bescoto@folly.stanford.edu::/usr/bin/ls"), 14 | (b"bescoto@folly.stanford.edu", b"/usr/bin/ls", None)) 15 | self.assertEqual(pl(b"hello there::/goodbye:euoeu"), 16 | (b"hello there", b"/goodbye:euoeu", None)) 17 | self.assertEqual(pl(b"a:b:c:d::e"), (b"a:b:c:d", b"e", None)) 18 | self.assertEqual(pl(b"foobar"), (None, b"foobar", None)) 19 | self.assertEqual(pl(rb"test\\ing\::more::and more\\.."), 20 | 
(b"test\\ing::more", b"and more/..", None)) 21 | self.assertEqual(pl(rb"strangely named\::file"), 22 | (None, b"strangely named::file", None)) 23 | self.assertEqual(pl(rb"foobar\\"), (None, b"foobar/", None)) 24 | self.assertEqual(pl(rb"not\::too::many\\\::paths"), 25 | (b"not::too", b"many/::paths", None)) 26 | 27 | # test missing path and missing host 28 | self.assertIsNotNone(pl(rb"a host without\:path::")[2]) 29 | self.assertIsNotNone(pl(b"::some/path/without/host")[2]) 30 | self.assertIsNotNone(pl(b"too::many::paths")[2]) 31 | 32 | 33 | if __name__ == "__main__": 34 | unittest.main() 35 | -------------------------------------------------------------------------------- /tools/setup-testfiles.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # helper script to download and unpack/prepare the test files required by the automated tox tests 3 | # this script is called from the Makefile in the root directory of the rdiff-backup repo 4 | 5 | # Exit on errors immediately 6 | set -e 7 | 8 | OLDTESTDIR=../rdiff-backup_testfiles 9 | TESTREPODIR=rdiff-backup-filesrepo 10 | TESTREPOURL=https://github.com/rdiff-backup/${TESTREPODIR}.git 11 | TESTTARFILE=rdiff-backup_testfiles.tar 12 | 13 | if [ -d ${OLDTESTDIR}/various_file_types ] 14 | then 15 | echo "Test files found, not re-installing them..." >&2 16 | else 17 | echo "Test files not found, installing them..." >&2 18 | cd .. 19 | if [ ! 
-f ${TESTREPODIR}/${TESTTARFILE} ] 20 | then 21 | rm -fr ${TESTREPODIR} # Clean away potential cruft 22 | git clone ${TESTREPOURL} 23 | else # update the existing Git repo 24 | git -C ${TESTREPODIR} pull --ff-only # fail if things don't look right 25 | fi 26 | 27 | if [ $(id -u) -eq 0 ] 28 | then # we do this because sudo might not be installed 29 | SUDO= 30 | else 31 | SUDO=sudo 32 | fi 33 | 34 | # the following commands must be run as root 35 | ${SUDO} rm -fr ${OLDTESTDIR} # Clean away potential cruft 36 | ${SUDO} tar xf ${TESTREPODIR}/${TESTTARFILE} 37 | ${SUDO} ${TESTREPODIR}/rdiff-backup_testfiles.fix.sh "${RDIFF_TEST_USER}" "${RDIFF_TEST_GROUP}" 38 | 39 | cd rdiff-backup 40 | fi 41 | 42 | echo " 43 | Verify that a normal user for tests exists: 44 | RDIFF_TEST_UID: ${RDIFF_TEST_UID} 45 | RDIFF_TEST_USER: ${RDIFF_TEST_USER} 46 | RDIFF_TEST_GROUP: ${RDIFF_TEST_GROUP} 47 | " 48 | -------------------------------------------------------------------------------- /tools/misc/rdiff-backup-wrap: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # Copyright (C) 2020 Patrik Dufresne 3 | # 4 | # This program is free software: you can redistribute it and/or modify 5 | # it under the terms of the GNU General Public License as published by 6 | # the Free Software Foundation, either version 3 of the License, or 7 | # (at your option) any later version. 8 | # 9 | # This program is distributed in the hope that it will be useful, 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 | # GNU General Public License for more details. 13 | # 14 | # You should have received a copy of the GNU General Public License 15 | # along with this program. If not, see <http://www.gnu.org/licenses/>. 
16 | # 17 | # 18 | # 19 | # rdiff-backup-wrap 20 | # 21 | # This script is used to help the migration from the legacy version (v1.2.8) 22 | # to the new version (v2.0.0) by auto-detecting the remote rdiff-backup version 23 | # and then launching the right version locally. 24 | 25 | # Extract the source and destination from the arguments 26 | for ARG in "$@"; do 27 | SOURCE=$DEST 28 | DEST=$ARG 29 | done 30 | 31 | # Check if the source is remote. 32 | case "$SOURCE" in 33 | *::*) REMOTE_SOURCE=1;; 34 | *) REMOTE_SOURCE=0;; 35 | esac 36 | 37 | # Pick the right version of rdiff-backup 38 | if [ $REMOTE_SOURCE -eq 0 ]; then 39 | CMD="rdiff-backup2" 40 | else 41 | if rdiff-backup --test-server "$SOURCE" > /dev/null 2>&1; then 42 | CMD="rdiff-backup" 43 | else 44 | CMD="rdiff-backup2" 45 | fi 46 | fi 47 | 48 | "$CMD" "$@" -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/.yamllint: -------------------------------------------------------------------------------- 1 | # .yamllint -- Custom rules for linting Yaml code 2 | # Longer line length (130) is allowed 3 | --- 4 | 5 | rules: 6 | braces: 7 | min-spaces-inside: 0 8 | max-spaces-inside: 0 9 | min-spaces-inside-empty: -1 10 | max-spaces-inside-empty: -1 11 | brackets: 12 | min-spaces-inside: 0 13 | max-spaces-inside: 0 14 | min-spaces-inside-empty: -1 15 | max-spaces-inside-empty: -1 16 | colons: 17 | max-spaces-before: 0 18 | max-spaces-after: 1 19 | commas: 20 | max-spaces-before: 0 21 | min-spaces-after: 1 22 | max-spaces-after: 1 23 | comments: 24 | level: warning 25 | require-starting-space: true 26 | min-spaces-from-content: 2 27 | comments-indentation: 28 | level: warning 29 | document-end: disable 30 | document-start: 31 | level: warning 32 | present: true 33 | empty-lines: 34 | max: 2 35 | max-start: 0 36 | max-end: 0 37 | empty-values: 38 | forbid-in-block-mappings: false 39 | forbid-in-flow-mappings: false 40 | hyphens: 41 | max-spaces-after: 1 42 | indentation: 43 
| spaces: consistent 44 | indent-sequences: true 45 | check-multi-line-strings: false 46 | key-duplicates: enable 47 | key-ordering: disable 48 | line-length: 49 | max: 130 # Long enough for a SHA-512 password hash 50 | level: warning 51 | allow-non-breakable-words: true 52 | allow-non-breakable-inline-mappings: false 53 | new-line-at-end-of-file: enable 54 | new-lines: 55 | type: unix 56 | trailing-spaces: enable 57 | truthy: 58 | level: warning 59 | 60 | # vim: ft=yaml 61 | -------------------------------------------------------------------------------- /tools/windows/playbook-build-librsync.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Build librsync on a prepared Windows 3 | hosts: windows_builders 4 | gather_facts: false 5 | tasks: 6 | - name: make sure working directory {{ working_dir }} exists 7 | win_file: 8 | state: directory 9 | path: "{{ working_dir }}" 10 | 11 | - name: include tasks to download and prepare librsync sources 12 | include_tasks: tasks/get-librsync-{{ librsync_git_repo is defined | ternary('git', 'tarball') }}.yml 13 | 14 | - name: use cmake to generate build system for released build sources 15 | win_command: > 16 | "{{ cmake_exe }}" -G "Visual Studio 15 2017" -A {{ librsync_win_bits }} 17 | -D BUILD_RDIFF=OFF -D BUILD_SHARED_LIBS=TRUE 18 | -DCMAKE_INSTALL_PREFIX={{ librsync_install_dir }} 19 | . 20 | args: 21 | chdir: "{{ librsync_dir }}" 22 | # also possible: -S source_dir / -B build_dir 23 | # -D CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=TRUE 24 | 25 | - name: build librsync with MSBuild 26 | win_command: > 27 | "{{ msbuild_exe }}" librsync.sln /target:Build /property:Configuration=Release 28 | args: 29 | chdir: "{{ librsync_dir }}" 30 | when: msbuild_exe is defined 31 | - name: build librsync with CMake 32 | win_command: > 33 | "{{ cmake_exe }}" --build . 
--config Release 34 | args: 35 | chdir: "{{ librsync_dir }}" 36 | when: msbuild_exe is not defined 37 | - name: use CMake to install librsync 38 | win_command: > 39 | "{{ cmake_exe }}" --install . --config Release 40 | args: 41 | chdir: "{{ librsync_dir }}" 42 | -------------------------------------------------------------------------------- /.github/workflows/test_windows.yml: -------------------------------------------------------------------------------- 1 | name: Test-Windows 2 | 3 | on: 4 | push: 5 | branches: ['*_'] 6 | tags: 7 | - '*_' # ending underscore for trying things 8 | - 'v[0-9]+.[0-9]+.[0-9]+' # final version 9 | - 'v[0-9]+.[0-9]+.[0-9]+[abrc]+[0-9]+' # alpha, beta, release candidate (rc) 10 | - 'v[0-9]+.[0-9]+.[0-9]+.dev[0-9]+' # development versions 11 | pull_request: 12 | # paths-ignore: ['docs/**'] # we can't use it and enforce some checks 13 | 14 | # necessary for Windows 15 | defaults: 16 | run: 17 | shell: bash 18 | 19 | env: 20 | WIN_PYTHON_VERSION: 3.9.1 21 | WIN_LIBRSYNC_VERSION: v2.2.1 22 | 23 | jobs: 24 | test-tox-win: 25 | runs-on: windows-latest 26 | strategy: 27 | matrix: 28 | arch: [x86, x64] 29 | steps: 30 | - uses: actions/checkout@v2 31 | with: 32 | fetch-depth: 0 # to have the correct version 33 | - name: Set up Python ${{ env.WIN_PYTHON_VERSION }} 34 | uses: actions/setup-python@v2 35 | with: 36 | python-version: ${{ env.WIN_PYTHON_VERSION }} 37 | architecture: ${{ matrix.arch }} 38 | - name: Install dependencies 39 | run: | 40 | python.exe -VV 41 | pip.exe install --upgrade pywin32 pyinstaller wheel certifi setuptools-scm tox PyYAML 42 | python.exe -c 'import pywintypes, winnt, win32api, win32security, win32file, win32con' 43 | choco install ruby 44 | gem install asciidoctor 45 | - name: Build librsync 46 | run: tools/win_build_librsync.sh ${{ matrix.arch }} ${WIN_LIBRSYNC_VERSION} 47 | - name: Test rdiff-backup 48 | run: tools/win_test_rdiffbackup.sh ${{ matrix.arch }} ${WIN_PYTHON_VERSION} 49 | 
-------------------------------------------------------------------------------- /.github/workflows/test_linux.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: Test-Linux 3 | 4 | on: 5 | push: 6 | branches: ['*_'] 7 | tags: 8 | - '*_' # ending underscore for trying things 9 | - 'v[0-9]+.[0-9]+.[0-9]+' # final version 10 | - 'v[0-9]+.[0-9]+.[0-9]+[abrc]+[0-9]+' # alpha, beta, release candidate (rc) 11 | - 'v[0-9]+.[0-9]+.[0-9]+.dev[0-9]+' # development versions 12 | pull_request: 13 | # paths-ignore: ['docs/**'] # we can't use it and enforce some checks 14 | 15 | jobs: 16 | test-make-test: 17 | runs-on: ubuntu-latest 18 | strategy: 19 | matrix: 20 | python-version: [3.9,3.8,3.7,3.6] 21 | steps: 22 | - uses: actions/checkout@v2 23 | - name: skip workflow if only docs PR 24 | id: skip-docs # id used for referencing step 25 | uses: saulmaldonado/skip-workflow@v1.1.0 26 | with: 27 | phrase: '[DOC]' 28 | search: '["pull_request"]' 29 | github-token: ${{ secrets.GITHUB_TOKEN }} 30 | - name: Set up Python ${{ matrix.python-version }} 31 | if: '!steps.skip-docs.outputs.skip' 32 | uses: actions/setup-python@v2 33 | with: 34 | python-version: ${{ matrix.python-version }} 35 | - name: Install dependencies 36 | if: '!steps.skip-docs.outputs.skip' 37 | run: | 38 | sudo apt install librsync-dev libacl1-dev rdiff asciidoctor 39 | sudo pip3 install --upgrade pip setuptools-scm 40 | sudo pip3 install --upgrade tox pyxattr pylibacl 41 | - name: Execute tests ${{ matrix.test-step }} 42 | if: '!steps.skip-docs.outputs.skip' 43 | run: | 44 | export RUN_COMMAND= 45 | export SUDO=sudo 46 | make test 47 | # the empty RUN_COMMAND avoids using docker 48 | -------------------------------------------------------------------------------- /debian/control: -------------------------------------------------------------------------------- 1 | Source: rdiff-backup 2 | Section: utils 3 | Priority: optional 4 | Maintainer: Otto Kekäläinen 5 | 
Build-Depends: debhelper (>= 11), 6 | dh-python, 7 | librsync-dev, 8 | python3-all-dev, 9 | python3-pylibacl, 10 | python3-pyxattr, 11 | python3-setuptools, 12 | python3-setuptools-scm 13 | Standards-Version: 4.4.0 14 | Homepage: http://rdiff-backup.net/ 15 | Vcs-Git: https://github.com/rdiff-backup/rdiff-backup.git 16 | Vcs-Browser: https://github.com/rdiff-backup/rdiff-backup/ 17 | 18 | Package: rdiff-backup 19 | Architecture: any 20 | Depends: ${misc:Depends}, 21 | ${python3:Depends}, 22 | ${shlibs:Depends} 23 | Recommends: python3-pylibacl, 24 | python3-pyxattr, 25 | python3-setuptools 26 | Description: remote incremental backup 27 | rdiff-backup backs up one directory to another, possibly over a network. The 28 | target directory ends up a copy of the source directory, but extra reverse 29 | diffs are stored in a special subdirectory of that target directory, so you can 30 | still recover files lost some time ago. The idea is to combine the best 31 | features of a mirror and an incremental backup. rdiff-backup also preserves 32 | subdirectories, hard links, dev files, permissions, uid/gid ownership, 33 | modification times, extended attributes, acls, and resource forks. 34 | . 35 | Also, rdiff-backup can operate in a bandwidth efficient manner over a pipe, 36 | like rsync. Thus you can use rdiff-backup and ssh to securely back a hard drive 37 | up to a remote location, and only the differences will be transmitted. Finally, 38 | rdiff-backup is easy to use and settings have sensible defaults. 39 | -------------------------------------------------------------------------------- /src/rdiffbackup/actions/info.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 the rdiff-backup project 2 | # 3 | # This file is part of rdiff-backup. 
4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify it 6 | # under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 | # General Public License for more details. 14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | 20 | """ 21 | A built-in rdiff-backup action plug-in to output info, especially useful 22 | for documenting an issue. 23 | """ 24 | 25 | import yaml 26 | 27 | from rdiffbackup import actions 28 | from rdiff_backup import Globals 29 | 30 | 31 | class InfoAction(actions.BaseAction): 32 | """ 33 | Output information about the current system, so that it can be used 34 | in a bug report, and exit. 35 | """ 36 | name = "info" 37 | security = None 38 | # information has no specific sub-options 39 | 40 | def setup(self): 41 | # there is nothing to set up for the info action 42 | return 0 43 | 44 | def run(self): 45 | runtime_info = Globals.get_runtime_info() 46 | print(yaml.safe_dump(runtime_info, 47 | explicit_start=True, explicit_end=True)) 48 | return 0 49 | 50 | 51 | def get_action_class(): 52 | return InfoAction 53 | -------------------------------------------------------------------------------- /docs/credits.adoc: -------------------------------------------------------------------------------- 1 | :sectnums: 2 | :toc: 3 | 4 | *NOTE!* This list is not complete. 
5 | Please also see the git log itself for authors and committers and the GitHub statistics at: 6 | 7 | * https://github.com/rdiff-backup/rdiff-backup/people 8 | * https://github.com/rdiff-backup/rdiff-backup/graphs/contributors 9 | 10 | Project Lead / Maintainer History: 11 | 12 | * Since August 2019 the main driver of the project has been Eric L., 13 | supported by Seravo. 14 | * Sol1 officially took over stewardship of rdiff-backup in February 2016. 15 | * Edward Ned Harvey, maintainer 2012 to 2016 16 | * Andrew Ferguson, maintainer 2008 to 2012 17 | * Dean Gaudet, maintainer 2006 to 2007 18 | * Ben Escoto, original author, maintainer 2001 to 2005. 19 | 20 | Other code contributors are: 21 | 22 | * Daniel Hazelbaker, who contributed Mac OS X resource fork support. 23 | (July 2003) 24 | * Dean Gaudet, for checking in many patches, and for finding and fixing many bugs. 25 | * Andrew Ferguson, for improving Mac OS X support and fixing many user-reported bugs. 26 | * Josh Nisly, for contributing native Windows support. 27 | (June 2008) 28 | * Fred Gansevles, for contributing Windows ACLs support. 29 | (July 2008) 30 | 31 | Thanks also to: 32 | 33 | * The http://www.fsf.org/[Free Software Foundation], for previously hosting the rdiff-backup project via their Savannah system. 34 | * Andrew Tridgell and Martin Pool for writing rdiff, and also for rsync, which gave Ben Escoto the idea. 35 | * Martin Pool and Donovan Baarda for their work on librsync, which rdiff-backup needs. 36 | * Michael Friedlander for taking an early interest in the idea and providing accounts for testing. 37 | * Lots of people on the mailing list for their helpful comments, advice, and patches, particularly Alberto Accomazzi, Donovan Baarda, Jeb Campbell, Greg Freemyer, Jamie Heilman, Marc Dyksterhouse, and Ralph Lehmann. 
38 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/defaults/main.yml: -------------------------------------------------------------------------------- 1 | # roles/rh-base/defaults/main.yml 2 | --- 3 | 4 | # Package management 5 | rhbase_enable_repos: [] 6 | rhbase_install_packages: [] 7 | rhbase_remove_packages: [] 8 | rhbase_repo_exclude: [] 9 | rhbase_repo_gpgcheck: false 10 | rhbase_repo_installonly_limit: 3 11 | rhbase_repo_remove_dependencies: true 12 | rhbase_repo_exclude_from_update: [] 13 | rhbase_repositories: [] 14 | rhbase_update: false 15 | 16 | # Automatic updates 17 | rhbase_automatic_updates: false 18 | 19 | rhbase_updates_type: default 20 | rhbase_updates_random_sleep: 360 21 | rhbase_updates_download: true 22 | rhbase_updates_apply: true 23 | rhbase_updates_message: true 24 | rhbase_updates_emit_via: stdio 25 | rhbase_updates_email_from: root 26 | rhbase_updates_email_to: root 27 | rhbase_updates_email_host: localhost 28 | rhbase_updates_debuglevel: 0 29 | 30 | # Yum-cron 31 | rhbase_yum_cron_hourly_sleep_time: 15 32 | rhbase_yum_cron_hourly_update_level: minimal 33 | rhbase_yum_cron_hourly_update_messages: true 34 | rhbase_yum_cron_hourly_download_updates: true 35 | rhbase_yum_cron_hourly_install_updates: false 36 | 37 | # Configuration 38 | rhbase_override_firewalld_zones: false 39 | rhbase_hosts_entry: false 40 | rhbase_motd: false 41 | rhbase_tz: :/etc/localtime 42 | rhbase_dynamic_motd: false 43 | rhbase_ssh_hostbasedauthentication: 'no' 44 | rhbase_ssh_ignorerhosts: 'yes' 45 | rhbase_ssh_permitemptypasswords: 'no' 46 | rhbase_ssh_protocol_version: 2 47 | rhbase_ssh_rhostsrsaauthentication: 'no' 48 | 49 | # Services 50 | rhbase_start_services: [] 51 | rhbase_stop_services: [] 52 | 53 | # Security 54 | rhbase_firewall_allow_services: [] 55 | rhbase_firewall_allow_ports: [] 56 | rhbase_firewall_interfaces: [] 57 | rhbase_selinux_state: enforcing 58 | rhbase_selinux_booleans: [] 59 | 
60 | # Users 61 | rhbase_users: [] 62 | rhbase_user_groups: [] 63 | rhbase_ssh_allow_groups: [] 64 | -------------------------------------------------------------------------------- /testing/find-max-ram.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """find-max-ram - Returns the maximum amount of memory used by a program. 3 | 4 | Every half second, run ps with the appropriate commands, getting the 5 | size of the program. Return max value. 6 | 7 | """ 8 | 9 | import os 10 | import sys 11 | import time 12 | from functools import reduce 13 | 14 | 15 | def get_val(cmdstr): 16 | """Runs ps and gets the summed rss for processes matching cmdstr 17 | 18 | Returns None if process not found. 19 | 20 | """ 21 | cmd = ("ps -Ao cmd -o rss | grep '%s' | grep -v grep" % cmdstr) 22 | # print "Running ", cmd 23 | fp = os.popen(cmd) 24 | lines = fp.readlines() 25 | fp.close() 26 | 27 | if not lines: 28 | return None 29 | else: 30 | return reduce(lambda x, y: x + y, list(map(read_ps_line, lines))) 31 | 32 | 33 | def read_ps_line(psline): 34 | """Given a specially formatted line from ps, return its rss value""" 35 | pslist = psline.split() 36 | assert len(pslist) >= 2 # first few are name, last one is rss 37 | return int(pslist[-1]) 38 | 39 | 40 | def main(cmdstr): 41 | while get_val(cmdstr) is None: 42 | time.sleep(0.5) 43 | 44 | current_max = 0 45 | while True: 46 | rss = get_val(cmdstr) 47 | print(rss) 48 | if rss is None: 49 | break 50 | current_max = max(current_max, rss) 51 | time.sleep(0.5) 52 | 53 | print(current_max) 54 | 55 | 56 | if __name__ == "__main__": 57 | 58 | if len(sys.argv) != 2: 59 | print("""Usage: find-max-ram [command string] 60 | 61 | It will then run ps twice a second and keep totalling how much RSS 62 | (resident set size) the process(es) whose ps command name contains the 63 | given string use up. When there are no more processes found, it will 64 | print the number and exit. 
65 | """) 66 | sys.exit(1) 67 | else: 68 | main(sys.argv[1]) 69 | -------------------------------------------------------------------------------- /testing/api_test.py: -------------------------------------------------------------------------------- 1 | import os 2 | import subprocess 3 | import unittest 4 | import yaml 5 | from commontest import RBBin 6 | from rdiff_backup import Globals 7 | 8 | 9 | class ApiVersionTest(unittest.TestCase): 10 | """Test api versioning functionality""" 11 | 12 | def test_runtime_info_calling(self): 13 | """make sure that the info output can be read back as YAML when API is 201""" 14 | output = subprocess.check_output([RBBin, b'--api-version', b'201', b'info']) 15 | out_info = yaml.safe_load(output) 16 | 17 | Globals.api_version['actual'] = 201 18 | info = Globals.get_runtime_info() 19 | 20 | # because the current test will have a different call than rdiff-backup itself 21 | # we can't compare certain keys 22 | self.assertIn('exec', out_info) 23 | self.assertIn('argv', out_info['exec']) 24 | out_info['exec'].pop('argv') 25 | info['exec'].pop('argv') 26 | # info['python']['executable'] could also be different but I think that 27 | # our test environments make sure that it doesn't happen, unless Windows 28 | if os.name == "nt": 29 | info['python']['executable'] = info['python']['executable'].lower() 30 | out_info['python']['executable'] = \ 31 | out_info['python']['executable'].lower() 32 | self.assertEqual(info, out_info) 33 | 34 | def test_default_actual_api(self): 35 | """validate that the default version is the actual one or the one explicitly set""" 36 | output = subprocess.check_output([RBBin, b'info']) 37 | api_version = yaml.safe_load(output)['exec']['api_version'] 38 | self.assertEqual(Globals.get_api_version(), api_version['default']) 39 | api_param = os.fsencode(str(api_version['max'])) 40 | output = subprocess.check_output([RBBin, b'--api-version', api_param, b'info']) 41 | out_info = yaml.safe_load(output) 42 | 
self.assertEqual(out_info['exec']['api_version']['actual'], api_version['max']) 43 | 44 | 45 | if __name__ == "__main__": 46 | unittest.main() 47 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/tasks/security.yml: -------------------------------------------------------------------------------- 1 | # roles/rhbase/tasks/security.yml 2 | # 3 | # Basic security settings 4 | --- 5 | - name: Security | Make sure SELinux has the desired state ({{ rhbase_selinux_state }}) 6 | selinux: 7 | policy: targeted 8 | state: "{{ rhbase_selinux_state }}" 9 | tags: 10 | - rhbase 11 | - security 12 | 13 | - name: Security | Enable SELinux booleans 14 | seboolean: 15 | name: "{{ item }}" 16 | state: true 17 | persistent: true 18 | with_items: "{{ rhbase_selinux_booleans }}" 19 | when: rhbase_selinux_state == 'enforcing' or rhbase_selinux_state == 'permissive' 20 | tags: 21 | - rhbase 22 | - security 23 | 24 | - name: Security | Make sure the firewall is running 25 | service: 26 | name: firewalld 27 | state: started 28 | enabled: true 29 | tags: 30 | - rhbase 31 | - security 32 | 33 | - name: Security | Make sure basic services can pass through firewall 34 | firewalld: 35 | service: "{{ item }}" 36 | permanent: true 37 | state: enabled 38 | with_items: 39 | - dhcpv6-client 40 | - ssh 41 | notify: 42 | - restart firewalld 43 | tags: 44 | - rhbase 45 | - security 46 | 47 | - name: Security | Make sure user specified services can pass through firewall 48 | firewalld: 49 | service: "{{ item }}" 50 | permanent: true 51 | state: enabled 52 | with_items: "{{ rhbase_firewall_allow_services }}" 53 | notify: 54 | - restart firewalld 55 | tags: 56 | - rhbase 57 | - security 58 | 59 | - name: Security | Make sure user specified ports can pass through firewall 60 | firewalld: 61 | port: "{{ item }}" 62 | permanent: true 63 | state: enabled 64 | with_items: "{{ rhbase_firewall_allow_ports }}" 65 | notify: 66 | - restart firewalld 67 | 
tags: 68 | - rhbase 69 | - security 70 | - name: Security | Make sure specified interfaces are added 71 | firewalld: 72 | interface: "{{ item }}" 73 | zone: public 74 | permanent: true 75 | state: enabled 76 | with_items: "{{ rhbase_firewall_interfaces }}" 77 | notify: 78 | - restart firewalld 79 | tags: 80 | - rhbase 81 | - security 82 | -------------------------------------------------------------------------------- /tools/windows/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | 6 | config.vm.define "winbuilder", primary: true do |winbuilder| 7 | winbuilder.vm.box = "jborean93/WindowsServer2019" 8 | winbuilder.vm.provider :libvirt do |libvirt| 9 | libvirt.memory = 8192 # installation of VS fails with too little memory 10 | end 11 | winbuilder.vm.guest = :windows 12 | winbuilder.vm.communicator = "winrm" 13 | winbuilder.vm.boot_timeout = 600 14 | winbuilder.vm.graceful_halt_timeout = 600 15 | winbuilder.winrm.transport = :ssl 16 | winbuilder.winrm.basic_auth_only = true 17 | winbuilder.winrm.ssl_peer_verification = false 18 | end 19 | 20 | config.vm.define "samba", autostart: false do |samba| 21 | samba.vm.box = "centos/8" 22 | end 23 | 24 | # WARNING: if following line is removed, Vagrant seems to act like it would 25 | # be Linux with following error: 26 | # At line:1 char:33 27 | # + ip=$(which ip); ${ip:-/sbin/ip} addr show | grep -i 'inet ' | grep -v ... 28 | # + ~~~~ 29 | # Unexpected token 'addr' in expression or statement. 30 | # + CategoryInfo : ParserError: (:) [Invoke-Expression], ParseException 31 | # + FullyQualifiedErrorId : UnexpectedToken,Microsoft.PowerShell.Commands.InvokeExpressionCommand 32 | config.vm.synced_folder ".", "/vagrant", disabled: true 33 | 34 | # the following parameters can be adapted, the certificate validation must 35 | # be ignored because the box is setup with a self-signed certificate. 
36 | config.vm.provision "ansible" do |ansible| 37 | ansible.verbose = "v" 38 | ansible.groups = { 39 | "windows_builders" => ["winbuilder"], 40 | "windows_hosts:children" => ["windows_builders"], 41 | "windows_hosts:vars" => { 42 | "ansible_winrm_server_cert_validation" => "ignore" 43 | }, 44 | "samba_servers" => ["samba"], 45 | "linux_hosts:children" => ["samba_servers"], 46 | } 47 | ansible.galaxy_role_file = "roles/requirements.yml" 48 | ansible.galaxy_roles_path = "roles" 49 | ansible.playbook = "playbook-provision.yml" 50 | end 51 | end 52 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/templates/etc_yum_yum-cron-hourly.conf.j2: -------------------------------------------------------------------------------- 1 | # Yum-cron hourly task configuration 2 | # {{ ansible_managed }} 3 | [commands] 4 | # What kind of update to use: 5 | # default = yum upgrade 6 | # security = yum --security upgrade 7 | # security-severity:Critical = yum --sec-severity=Critical upgrade 8 | # minimal = yum --bugfix update-minimal 9 | # minimal-security = yum --security update-minimal 10 | # minimal-security-severity:Critical = --sec-severity=Critical update-minimal 11 | update_cmd = {{ rhbase_yum_cron_hourly_update_level }} 12 | 13 | # Whether a message should be emitted when updates are available. 14 | update_messages = {{ 'yes' if rhbase_yum_cron_hourly_update_messages else 'no' }} 15 | 16 | # Whether updates should be downloaded when they are available. Note 17 | # that update_messages must also be yes for updates to be downloaded. 18 | download_updates = {{ 'yes' if rhbase_yum_cron_hourly_download_updates else 'no' }} 19 | 20 | # Whether updates should be applied when they are available. 
Note 21 | # that both update_messages and download_updates must also be yes for 22 | # the update to be applied 23 | apply_updates = {{ 'yes' if rhbase_yum_cron_hourly_install_updates else 'no' }} 24 | 25 | # Maximum amount of time to randomly sleep, in minutes. The program 26 | # will sleep for a random amount of time between 0 and random_sleep 27 | # minutes before running. This is useful for e.g. staggering the 28 | # times that multiple systems will access update servers. If 29 | # random_sleep is 0 or negative, the program will run immediately. 30 | random_sleep = {{ rhbase_yum_cron_hourly_sleep_time }} 31 | 32 | [emitters] 33 | system_name = {{ ansible_hostname }} 34 | emit_via = {{ rhbase_updates_emit_via }} 35 | output_width = 80 36 | 37 | [email] 38 | email_from = {{ rhbase_updates_email_from }} 39 | email_to = {{ rhbase_updates_email_to }} 40 | email_host = {{ rhbase_updates_email_host }} 41 | 42 | [groups] 43 | group_list = None 44 | group_package_types = mandatory, default 45 | 46 | [base] 47 | debuglevel = -{{ rhbase_updates_debuglevel }} 48 | mdpolicy = group:main 49 | 50 | ## vim: ft=dosini 51 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # Makefile to automate rdiff-backup build and install steps 2 | 3 | # Currently all steps are run isolated inside a Docker image, but this could 4 | # be extended to have more options. 
5 | RUN_COMMAND ?= docker run --rm -i -v ${PWD}/..:/build/ -w /build/$(shell basename `pwd`) rdiff-backup-dev:debian-sid 6 | 7 | # Define SUDO=sudo if you don't want to run the whole thing as root 8 | # we set SUDO="sudo -E env PATH=$PATH" if we want to keep the whole environment 9 | SUDO ?= 10 | 11 | all: clean container test build 12 | 13 | test: test-static test-runtime 14 | 15 | test-static: 16 | ${RUN_COMMAND} tox -c tox.ini -e flake8 17 | 18 | test-runtime: test-runtime-base test-runtime-root test-runtime-slow 19 | 20 | test-runtime-files: 21 | @echo "=== Install files required by the tests ===" 22 | ${RUN_COMMAND} ./tools/setup-testfiles.sh # This must run as root or sudo be available 23 | 24 | test-runtime-base: test-runtime-files 25 | @echo "=== Base tests ===" 26 | ${RUN_COMMAND} tox -c tox.ini -e py 27 | 28 | test-runtime-root: test-runtime-files 29 | @echo "=== Tests that require root permissions ===" 30 | ${RUN_COMMAND} ${SUDO} tox -c tox_root.ini -e py # This must run as root 31 | # NOTE! The session will use user=root inside Docker 32 | 33 | test-runtime-slow: test-runtime-files 34 | @echo "=== Long running performance tests ===" 35 | ${RUN_COMMAND} tox -c tox_slow.ini -e py 36 | 37 | test-misc: clean build test-static test-runtime-slow 38 | 39 | build: 40 | # Build rdiff-backup (assumes src/ is in directory 'rdiff-backup' and its 41 | # parent is writeable) 42 | ${RUN_COMMAND} ./setup.py build 43 | 44 | bdist_wheel: 45 | # Prepare wheel for deployment. 46 | # See the notes for target "build" 47 | # auditwheel unfortunately does not work with modern glibc 48 | ${RUN_COMMAND} ./setup.py bdist_wheel 49 | # ${RUN_COMMAND} auditwheel repair dist/*.whl 50 | 51 | sdist: 52 | # Prepare source distribution for deployment. 53 | ${RUN_COMMAND} ./setup.py sdist 54 | 55 | dist_deb: 56 | ${RUN_COMMAND} debian/autobuild.sh 57 | 58 | container: 59 | # Build development image 60 | docker build --pull --tag rdiff-backup-dev:debian-sid . 
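The CI workflow earlier in this listing (test_linux.yml) drives this Makefile without Docker by exporting an empty RUN_COMMAND and SUDO=sudo before running make; since both variables are assigned with `?=`, values from the environment win. A minimal sketch of that expansion (the tox command is copied from the test-static recipe; everything else is illustrative):

```shell
# Sketch: with RUN_COMMAND='' the docker-run prefix disappears and each
# recipe line runs directly on the host, exactly as in the CI workflow.
RUN_COMMAND=''
SUDO='sudo'
# a recipe line such as "${RUN_COMMAND} tox -c tox.ini -e flake8" becomes:
cmd="${RUN_COMMAND}tox -c tox.ini -e flake8"
echo "$cmd"
```

The same effect can be had on the command line with `make test RUN_COMMAND= SUDO=sudo`, since command-line and environment assignments override `?=` defaults in make.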
61 | 62 | clean: 63 | ${RUN_COMMAND} rm -rf .tox/ MANIFEST build/ testing/__pycache__/ dist/ 64 | ${RUN_COMMAND} ${SUDO} rm -rf .tox.root/ 65 | -------------------------------------------------------------------------------- /tools/misc/setup_dev_archlinux.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # small script to prepare ArchLinux for rdiff-backup development 3 | # run it as root; it was developed under the conditions of a container, 4 | # e.g. created with: 5 | # podman run -it docker.io/library/archlinux 6 | # CAUTION: it is meant more as an example of helpful commands for a developer/tester 7 | # not fully familiar with the platform than as a truly and fully tested script. 8 | 9 | DEVUSER=devuser 10 | 11 | # upgrade all packages 12 | #pacman -Syu --noconfirm 13 | 14 | # packages under Arch generally contain what other distros place in -dev/-devel packages 15 | pacman -S --noconfirm librsync libffi 16 | pacman -S --noconfirm python python-pylibacl python-pyxattr 17 | pacman -S --noconfirm python-setuptools python-setuptools-scm tox 18 | pacman -S --noconfirm git openssh 19 | pacman -S --noconfirm base-devel 20 | pacman -S --noconfirm vim # optional if you like vim as editor 21 | 22 | # in order to not always work as root (though it might not be an issue in a container) 23 | useradd -m ${DEVUSER} 24 | cd /home/${DEVUSER} 25 | 26 | # only if you need a specific version of a package (just an example): 27 | #pacman -U https://archive.archlinux.org/packages/l/librsync/librsync-1%3A2.0.2-1-x86_64.pkg.tar.xz 28 | 29 | # cloning via HTTPS is enough for testing, but if you want to really develop, you should use SSH instead of HTTPS 30 | su - ${DEVUSER} -c 'git clone https://github.com/rdiff-backup/rdiff-backup.git' 31 | su - ${DEVUSER} -c 'git clone https://github.com/rdiff-backup/rdiff-backup-filesrepo.git' 32 | 33 | #su - ${DEVUSER} -c 'git clone git@github.com:rdiff-backup/rdiff-backup.git' 34 | #su - ${DEVUSER} 
-c 'git clone git@github.com:rdiff-backup/rdiff-backup-filesrepo.git' 35 | 36 | tar xvf ./rdiff-backup-filesrepo/rdiff-backup_testfiles.tar 37 | # if devuser doesn't have the UID/GID 1000, you will need the following command, assuming 1234 is the UID/GID: 38 | #./rdiff-backup-filesrepo/rdiff-backup_testfiles.fix.sh 1234 1234 39 | 40 | # after that, as `DEVUSER`, the following commands should be possible (for example): 41 | # ./setup.py clean --all 42 | # ./setup.py build 43 | # PATH=$PWD/build/scripts-3.8:$PATH PYTHONPATH=$PWD/build/lib.linux-x86_64-3.8 rdiff-backup --version 44 | # PATH=$PWD/build/scripts-3.8:$PATH PYTHONPATH=$PWD/build/lib.linux-x86_64-3.8 python testing/eas_aclstest.py 45 | 46 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/files/dynamic-motd.sh: -------------------------------------------------------------------------------- 1 | # Dynamic Message of the Day with system information 2 | 3 | main() { 4 | if is_interactive_shell; then 5 | print_pretty_hostname 6 | print_general_info 7 | print_filesystem_info 8 | print_nic_info 9 | fi 10 | } 11 | 12 | is_interactive_shell() { 13 | test -t 0 14 | } 15 | 16 | print_pretty_hostname() { 17 | figlet "$(hostname)" 18 | } 19 | 20 | print_general_info() { 21 | local distro load1 load5 load15 uptime \ 22 | total_memory memory_usage total_swap swap_usage 23 | 24 | distro=$(grep '^PRETTY' /etc/os-release | sed 's/.*"\(.*\)"/\1/') 25 | 26 | # System load 27 | load1=$(awk '{print $1}' /proc/loadavg ) 28 | load5=$(awk '{print $2}' /proc/loadavg ) 29 | load15=$(awk '{print $3}' /proc/loadavg ) 30 | 31 | # System uptime 32 | uptime=$(uptime --pretty | sed 's/up //') 33 | 34 | # Memory/swap usage in % (used/total*100) 35 | memory_usage=$(free | awk '/Mem:/ { printf("%3.2f%%", $3/$2*100) }') 36 | swap_usage=$( free | awk '/Swap:/ { printf("%3.2f%%", $3/$2*100) }') 37 | 38 | # Total memory/swap in MiB 39 | total_memory=$(free -m | awk '/Mem:/ { printf("%s MiB", $2) }') 40 | 
total_swap=$( free -m | awk '/Swap:/ { printf("%s MiB", $2) }') 41 | 42 | cat << _EOF_ 43 | System information as of $(date). 44 | 45 | Distro/Kernel: ${distro} / $(uname --kernel-release) 46 | 47 | System load: ${load1}, ${load5}, ${load15} Memory usage: ${memory_usage} of ${total_memory} 48 | System uptime: ${uptime} Swap usage: ${swap_usage} of ${total_swap} 49 | 50 | _EOF_ 51 | 52 | } 53 | 54 | print_filesystem_info() { 55 | df --si --local --print-type --total \ 56 | --exclude-type=tmpfs --exclude-type=devtmpfs 57 | } 58 | 59 | print_nic_info() { 60 | local interfaces 61 | interfaces=$(ip --brief link | awk '{ print $1}') 62 | local mac 63 | local ip_info 64 | local ip4 65 | local ip6 66 | 67 | printf '\nInterface\tMAC Address\t\tIPv4 Address\t\tIPv6 Address\n' 68 | for nic in ${interfaces}; do 69 | mac=$(ip --brief link show dev "${nic}" | awk '{print $3}') 70 | ip_info=$(ip --brief address show dev "${nic}") 71 | ip4=$(awk '{print $3}' <<< "${ip_info}") 72 | ip6=$(awk '{print $4}' <<< "${ip_info}") 73 | 74 | printf '%s\t\t%s\t%s\t\t%s\t\n' "${nic}" "${mac}" "${ip4}" "${ip6}" 75 | done 76 | } 77 | 78 | main 79 | -------------------------------------------------------------------------------- /src/rdiffbackup/actions/calculate.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 the rdiff-backup project 2 | # 3 | # This file is part of rdiff-backup. 4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify 6 | # under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 | # General Public License for more details. 
14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | 20 | """ 21 | A built-in rdiff-backup action plug-in to calculate average across multiple 22 | statistics files. 23 | """ 24 | 25 | from rdiff_backup import statistics 26 | from rdiffbackup import actions 27 | 28 | 29 | class CalculateAction(actions.BaseAction): 30 | """ 31 | Calculate values (average by default) across multiple statistics files. 32 | """ 33 | name = "calculate" 34 | security = "validate" 35 | 36 | @classmethod 37 | def add_action_subparser(cls, sub_handler): 38 | subparser = super().add_action_subparser(sub_handler) 39 | subparser.add_argument( 40 | "--method", choices=["average"], default="average", 41 | help="what to calculate from the different session statistics") 42 | subparser.add_argument( 43 | "locations", metavar="STATISTIC_FILE", nargs="+", 44 | help="locations of the session statistic files to calculate from") 45 | return subparser 46 | 47 | def run(self): 48 | """ 49 | Print out the calculation of the given statistics files, according 50 | to calculation method. 
51 | """ 52 | statobjs = [ 53 | statistics.StatsObj().read_stats_from_rp(loc) 54 | for loc in self.connected_locations 55 | ] 56 | if self.values.method == "average": 57 | calc_stats = statistics.StatsObj().set_to_average(statobjs) 58 | print(calc_stats.get_stats_logstring( 59 | "Average of %d stat files" % len(self.connected_locations))) 60 | return 0 61 | 62 | 63 | def get_action_class(): 64 | return CalculateAction 65 | -------------------------------------------------------------------------------- /tools/misc/init_files.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """init_files.py 3 | 4 | This program makes a number of files of the given size in the 5 | specified directory. 6 | 7 | """ 8 | 9 | import os 10 | import sys 11 | import math 12 | 13 | 14 | if len(sys.argv) > 5 or len(sys.argv) < 4: 15 | print("Usage: init_files [directory name] [file size] [file count] [base]") 16 | print() 17 | print("Creates file_count files in directory_name of size file_size.") 18 | print("The created directory has a tree-type structure where each level") 19 | print("has at most base files or directories in it. The default base is 50.") 20 | sys.exit(1) 21 | 22 | 23 | dirname = sys.argv[1] 24 | filesize = int(sys.argv[2]) 25 | filecount = int(sys.argv[3]) 26 | block_size = 16384 27 | block = "." * block_size 28 | block_change = "." 
* (filesize % block_size) 29 | if len(sys.argv) == 4: 30 | base = 50 31 | else: 32 | base = int(sys.argv[4]) 33 | 34 | 35 | def make_file(path): 36 | """Make the file at path""" 37 | fp = open(path, "w") 38 | for i in range(int(math.floor(filesize / block_size))): 39 | fp.write(block) 40 | fp.write(block_change) 41 | fp.close() 42 | 43 | 44 | def find_sublevels(count): 45 | """Return number of sublevels required for count files""" 46 | return int(math.ceil(math.log(count) / math.log(base))) 47 | 48 | 49 | def make_dir(dir, count): 50 | """Make count files in the directory, making subdirectories if necessary""" 51 | print("Making directory %s with %d files" % (dir, count)) 52 | os.mkdir(dir) 53 | level = find_sublevels(count) 54 | assert count <= pow(base, level) 55 | if level == 1: 56 | for i in range(count): 57 | make_file(os.path.join(dir, "file%d" % i)) 58 | else: 59 | files_per_subdir = pow(base, level - 1) 60 | full_dirs = int(count / files_per_subdir) 61 | assert full_dirs <= base 62 | for i in range(full_dirs): 63 | make_dir(os.path.join(dir, "subdir%d" % i), files_per_subdir) 64 | 65 | change = count - full_dirs * files_per_subdir 66 | assert change >= 0 67 | if change > 0: 68 | make_dir(os.path.join(dir, "subdir%d" % full_dirs), change) 69 | 70 | 71 | def start(dir): 72 | try: 73 | os.stat(dir) 74 | except os.error: 75 | pass 76 | else: 77 | print("Directory %s already exists, exiting." 
% dir) 78 | sys.exit(1) 79 | 80 | make_dir(dirname, filecount) 81 | 82 | 83 | start(dirname) 84 | -------------------------------------------------------------------------------- /testing/regressfailedlongname.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -v 2 | # Reproducer for issue https://github.com/rdiff-backup/rdiff-backup/issues/9 3 | 4 | BASE_DIR=${TOX_ENV_DIR:-${VIRTUAL_ENV:-build}}/issue9 5 | rm -fr ${BASE_DIR}/* 6 | SRC_DIR=${BASE_DIR}/source 7 | DST_DIR=${BASE_DIR}/dest 8 | 9 | # Create a long file name -- 211 characters. This length is chosen to be less 10 | # than the maximum allowed for ext4 filesystems (255 max.), but long enough for 11 | # rdiff-backup to give it special treatment (see longname.py). 12 | longName=b 13 | a=0123456789 14 | for (( i = 0; i < 21; i++ )); do 15 | longName=$longName$a 16 | done 17 | 18 | # Set up a source directory containing a file with the long name: 19 | mkdir -p ${SRC_DIR} 20 | echo test1 > ${SRC_DIR}/$longName 21 | echo TEST1 > ${SRC_DIR}/dummy 22 | ls -l ${SRC_DIR} 23 | 24 | # Make a backup: 25 | rdiff-backup ${SRC_DIR} ${DST_DIR} 26 | sleep 1 27 | 28 | # Keep a copy of the current_mirror file for use later: 29 | cp ${DST_DIR}/rdiff-backup-data/current_mirror* ${BASE_DIR} 30 | 31 | # Modify the ${SRC_DIR} file: 32 | echo test22 > ${SRC_DIR}/$longName 33 | echo TEST22 > ${SRC_DIR}/dummy 34 | 35 | # Make a 2nd backup: 36 | rdiff-backup ${SRC_DIR} ${DST_DIR} 37 | 38 | # Notice that the increment file is put in 39 | # ${DST_DIR}/rdiff-backup-data/long_filename_data/: 40 | ls -l ${DST_DIR}/rdiff-backup-data/long_filename_data 41 | 42 | 43 | # Copy the saved current_mirror back so as to force rdiff-backup into 44 | # concluding that the last backup failed (a simulated failure for this test): 45 | mv ${BASE_DIR}/current_mirror* ${DST_DIR}/rdiff-backup-data 46 | 47 | # rdiff-backup will now report that there is a problem: 48 | rdiff-backup --list-increments ${DST_DIR} 
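For context on the simulated failure above: rdiff-backup considers the last backup interrupted when it finds more than one current_mirror.<timestamp>.data marker in rdiff-backup-data/, which is the state the `mv` recreates. A standalone sketch of that state (scratch directory and timestamps are invented for illustration):

```shell
# Hedged illustration of a "failed backup" repository state: two
# current_mirror markers left behind in rdiff-backup-data/.
tmp=$(mktemp -d)
mkdir -p "$tmp/rdiff-backup-data"
touch "$tmp/rdiff-backup-data/current_mirror.2021-01-01T10:00:00+00:00.data"
touch "$tmp/rdiff-backup-data/current_mirror.2021-01-02T10:00:00+00:00.data"
# more than one marker means the repository needs --check-destination-dir
markers=$(ls "$tmp/rdiff-backup-data" | grep -c '^current_mirror')
echo "$markers current_mirror markers found"
rm -rf "$tmp"
```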
49 | 50 | # this avoids the error in the next call to rdiff-backup 51 | if [[ "$1" == "nocheck" ]] 52 | then 53 | exit 54 | fi 55 | 56 | # Perform the usual fix for the problem (regress the repository): 57 | rdiff-backup --check-destination-dir ${DST_DIR} 58 | 59 | # See that rdiff-backup appears to be happy: 60 | rdiff-backup --list-increments ${DST_DIR} 61 | 62 | # Here's the problem: regressing the repository failed to remove the increment 63 | # file from the 2nd backup: 64 | ls -l ${DST_DIR}/rdiff-backup-data/long_filename_data 65 | 66 | # this avoids the error in the next call to rdiff-backup 67 | if [[ "$1" == "fakeclean" ]] 68 | then 69 | rm ${DST_DIR}/rdiff-backup-data/long_filename_data/1* 70 | fi 71 | 72 | # Retry the 2nd backup. It fails as long as the old increment file is in the way: 73 | rdiff-backup ${SRC_DIR} ${DST_DIR} 74 | -------------------------------------------------------------------------------- /tools/misc/generate-code-overview.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # A rather stupid script to generate graphics from the rdiff-backup code structure 3 | # you'll need to install plantuml from https://plantuml.com/ to generate them. 4 | # Do NOT be surprised: the graphics are huge and difficult to take in. 
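To give an idea of what the first awk program below emits, here is roughly the PlantUML produced for a hypothetical module foo.py defining `class Base` and `class Bar(Base)` (file and class names are invented for illustration):

```
@startuml
' src/rdiff_backup/foo.py
package foo {
	class Base
	Base <|-- Bar
}
@enduml
```

Exception subclasses would instead appear as `class SomeError <<(E,Red) Exception>>`, per the awk branch handling `Exception`.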
5 | 6 | CODE_DIR=$(dirname $0)/../../src/rdiff_backup 7 | BUILD_DIR=${1:-$(dirname $0)/../../build} 8 | 9 | mkdir -p "${BUILD_DIR}" 10 | 11 | 12 | # generate an overview of class definitions and their hierarchy 13 | 14 | awk -F'[ ():]' ' 15 | BEGIN { print "@startuml" } 16 | $1 == "class" { 17 | if (FILENAME != pkg) { 18 | if (pkg) print "}" 19 | print "'\''", FILENAME 20 | fields = split(FILENAME, pkgs, "/") 21 | sub(".py","",pkgs[fields]) 22 | print "package",pkgs[fields],"{" 23 | pkg = FILENAME 24 | } 25 | if ($3) { 26 | if ($3 == "Exception") { 27 | print "\t" "class",$2,"<<(E,Red) Exception>>" 28 | } else { 29 | print "\t" $3,"<|--",$2 30 | } 31 | } else { 32 | print "\t" "class",$2 33 | } 34 | } 35 | END { 36 | if (pkg) print "}" 37 | print "@enduml" 38 | }' ${CODE_DIR}/*.py > ${BUILD_DIR}/rdiff-backup-classes.plantuml 39 | PLANTUML_LIMIT_SIZE=16384 plantuml ${BUILD_DIR}/rdiff-backup-classes.plantuml 40 | 41 | 42 | # generate an overview of the imports within the src/rdiff_backup modules 43 | 44 | awk -F' *import *|, *' ' 45 | BEGIN { print "@startuml" } 46 | $1 == "from ." && $0 !~ /^#/ { 47 | if (FILENAME != pkg) { 48 | print "'\''", FILENAME 49 | fields = split(FILENAME, pkgs, "/") 50 | module = pkgs[fields] 51 | sub(".py","",module) 52 | pkg = FILENAME 53 | } 54 | sub(" *#.*$","") # remove comments from line 55 | sub("^ *from . 
import *","") # remove import command 56 | eol = $NF 57 | do { 58 | sub("^ *","") # remove beginning blanks 59 | if ( $NF !~ /[a-z]/ ) last = NF - 1 60 | else last = NF 61 | if ( $NF == "(" ) field = last + 1 62 | else field = 1 63 | while (field <= last) { 64 | # both branches are identical for now, but kept separate for flexibility 65 | if ( $field ~ /^[A-Z]/ ) print $field,"o--",module 66 | else print $field,"o--",module 67 | field++ 68 | } 69 | getline 70 | sub(" *#.*$","") # remove comments from line 71 | } while ((eol == "\\" && $NF == "\\") || (eol == "(" && $NF != ")")) 72 | } 73 | END { 74 | print "@enduml" 75 | }' ${CODE_DIR}/*.py > ${BUILD_DIR}/rdiff-backup-imports.plantuml 76 | PLANTUML_LIMIT_SIZE=16384 plantuml ${BUILD_DIR}/rdiff-backup-imports.plantuml 77 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/tasks/install.yml: -------------------------------------------------------------------------------- 1 | # roles/rhbase/tasks/install.yml 2 | # 3 | # Install custom repositories and packages. Repositories should be installed 4 | # using an RPM package. A list of URLs to the RPMs for these repositories 5 | # should be specified in group_vars or host_vars. 6 | --- 7 | - name: Install | Check minimal value of ‘rhbase_repo_installonly_limit’ (>= 2) 8 | debug: 9 | msg: >- 10 | The value of ‘rhbase_repo_installonly_limit’ should be at least 2, 11 | actual value is {{ rhbase_repo_installonly_limit }}. 
12 | failed_when: rhbase_repo_installonly_limit < 2 13 | tags: 14 | - rhbase 15 | - install 16 | 17 | - name: Install | Ensure the machine-ID is available 18 | command: systemd-machine-id-setup 19 | args: 20 | creates: /etc/machine-id 21 | tags: rhbase 22 | 23 | - name: Install | Ensure basic systemd services are running 24 | service: 25 | name: "{{ item }}" 26 | state: started 27 | enabled: true 28 | with_items: "{{ rhbase_systemd_services }}" 29 | tags: rhbase 30 | 31 | - name: Install | Role/Ansible dependencies 32 | package: 33 | name: "{{ rhbase_dependencies }}" 34 | state: installed 35 | tags: 36 | - rhbase 37 | - install 38 | 39 | - name: Install | Package management configuration ({{ rhbase_package_manager }}) 40 | template: 41 | src: "etc_{{ rhbase_package_manager }}.conf.j2" 42 | dest: "{{ rhbase_package_manager_configuration }}" 43 | owner: root 44 | group: root 45 | mode: 0644 46 | tags: 47 | - rhbase 48 | - install 49 | 50 | - name: Install | Ensure specified external repositories are installed 51 | package: 52 | name: "{{ rhbase_repositories }}" 53 | state: installed 54 | tags: 55 | - rhbase 56 | - install 57 | 58 | - name: Install | Ensure specified repositories are enabled 59 | lineinfile: 60 | dest: "/etc/yum.repos.d/{{ item }}.repo" 61 | line: 'enabled=1' 62 | state: present 63 | regexp: '^enabled=' 64 | with_items: "{{ rhbase_enable_repos }}" 65 | tags: 66 | - rhbase 67 | - install 68 | 69 | - name: Install | Ensure specified packages are installed 70 | package: 71 | name: "{{ rhbase_install_packages }}" 72 | state: installed 73 | tags: 74 | - rhbase 75 | - install 76 | 77 | - name: Install | Ensure specified packages are NOT installed 78 | package: 79 | name: "{{ rhbase_remove_packages }}" 80 | state: absent 81 | tags: 82 | - rhbase 83 | - install 84 | 85 | - name: Install | Ensure all updates are installed 86 | package: 87 | name: '*' 88 | state: latest 89 | when: rhbase_update 90 | tags: 91 | - rhbase 92 | - install 93 | 
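For reference, the variables consumed by the tasks above would typically be defined in `group_vars` or `host_vars`, as the file header says. The following sketch uses the variable names from the tasks above, but all values (package and repository names) are purely illustrative and not taken from this repository:

```yaml
# Hypothetical group_vars example -- variable names match tasks/install.yml,
# values are illustrative only.
rhbase_repo_installonly_limit: 3      # must be >= 2, as checked above
rhbase_systemd_services:
  - sshd
rhbase_dependencies:
  - yum-utils
rhbase_repositories:
  - https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
rhbase_enable_repos:
  - epel
rhbase_install_packages:
  - git
  - vim-enhanced
rhbase_remove_packages:
  - telnet
rhbase_update: true
```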
-------------------------------------------------------------------------------- /testing/rdb_arguments.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import unittest 3 | from commontest import RBBin 4 | from rdiffbackup import arguments, actions_mgr 5 | 6 | 7 | class ArgumentsTest(unittest.TestCase): 8 | """ 9 | Test how the function 'parse' is parsing arguments, using the new interface. 10 | """ 11 | 12 | def test_new_help(self): 13 | """ 14 | - make sure that the new help is shown either with --new or by using an action 15 | """ 16 | output = subprocess.check_output([RBBin, b'--new', b'--help']) 17 | self.assertIn(b"possible actions:", output) 18 | 19 | output = subprocess.check_output([RBBin, b'info', b'--help']) 20 | self.assertIn(b"Output information", output) 21 | 22 | def test_parse_function(self): 23 | """ 24 | - verify that the --version option exits and make a few more smoke tests of the parse option 25 | """ 26 | disc_actions = actions_mgr.get_discovered_actions() 27 | 28 | # verify that the --version option exits the program 29 | with self.assertRaises(SystemExit): 30 | values = arguments.parse(["--version"], "testing 0.0.1", 31 | actions_mgr.get_generic_parsers(), 32 | actions_mgr.get_parent_parsers_compat200(), 33 | disc_actions) 34 | 35 | # positive test of the parsing 36 | values = arguments.parse(["list", "increments", "dummy_test_repo"], "testing 0.0.2", 37 | actions_mgr.get_generic_parsers(), 38 | actions_mgr.get_parent_parsers_compat200(), 39 | disc_actions) 40 | self.assertEqual("list", values.action) 41 | self.assertEqual("increments", values.entity) 42 | self.assertIn("dummy_test_repo", values.locations) 43 | 44 | # negative test of the parsing due to too many or wrong arguments 45 | with self.assertRaises(SystemExit): 46 | values = arguments.parse(["backup", "from", "to", "toomuch"], "testing 0.0.3", 47 | actions_mgr.get_generic_parsers(), 48 | 
actions_mgr.get_parent_parsers_compat200(), 49 | disc_actions) 50 | with self.assertRaises(SystemExit): 51 | values = arguments.parse(["restore", "--no-such-thing", "from", "to"], "testing 0.0.4", 52 | actions_mgr.get_generic_parsers(), 53 | actions_mgr.get_parent_parsers_compat200(), 54 | disc_actions) 55 | 56 | 57 | if __name__ == "__main__": 58 | unittest.main() 59 | -------------------------------------------------------------------------------- /docs/index.adoc: -------------------------------------------------------------------------------- 1 | = rdiff-backup 2 | :sectnums: 3 | :toc: 4 | 5 | Rdiff-backup backs up one directory to another, possibly over a network. 6 | The target directory ends up a copy of the source directory, but extra reverse diffs are stored in a special subdirectory of that target directory, so you can still recover files lost some time ago. 7 | The idea is to combine the best features of a mirror and an incremental backup. 8 | Rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership (if it is running as root), modification times, acls, eas, resource forks, etc. 9 | Finally, rdiff-backup can operate in a bandwidth efficient manner over a pipe, like rsync. 10 | Thus you can use rdiff-backup and ssh to securely back a hard drive up to a remote location, and only the differences will be transmitted. 
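The mirror-plus-reverse-diffs model described above can be illustrated with plain `diff` and `patch`. This is only a conceptual sketch: rdiff-backup itself computes binary deltas with librsync rather than textual diffs, and stores them under `rdiff-backup-data/` inside the target directory.

```shell
# Conceptual sketch only: a mirror that stays current, plus a reverse diff
# that can reconstruct the previous version (rdiff-backup uses librsync
# binary deltas instead of textual diffs).
set -e
workdir=$(mktemp -d)
cd "$workdir"

echo "version 1" > source.txt
cp source.txt mirror.txt                    # initial backup: mirror == source

echo "version 2" > source.txt               # the source changes
diff -u source.txt mirror.txt > rev.diff || true   # reverse diff: new -> old
cp source.txt mirror.txt                    # update the mirror to the new state

# The mirror always holds the latest version; applying the reverse diff
# to a copy of it recovers the previous version:
cp mirror.txt restored.txt
patch -s restored.txt < rev.diff
```

After this, `mirror.txt` holds the current `version 2` while `restored.txt` holds the older `version 1`: a current mirror plus the ability to go back in time, which is the trade-off described above.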
11 | 12 | Read further: 13 | 14 | * xref:examples.adoc[Usage examples] 15 | * xref:FAQ.adoc[Frequently asked questions] 16 | * xref:credits.adoc[Authors and credits] 17 | * xref:DEVELOP.adoc[Developer documentation] 18 | * xref:Windows-README.adoc[Windows specific documentation] - possibly outdated 19 | * xref:Windows-DEVELOP.adoc[Windows specific Developer documentation] 20 | 21 | == Support or Contact 22 | 23 | If you have everything installed properly, and it still doesn't work, see the enclosed xref:FAQ.adoc[FAQ], the https://rdiff-backup.net/[rdiff-backup web page] and/or the https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users[rdiff-backup-users mailing list]. 24 | 25 | We're also happy to help if you create an issue in our https://github.com/rdiff-backup/rdiff-backup/issues[GitHub repo]. 26 | The most important thing is to explain what happened, with which version of rdiff-backup, with which command parameters, on which operating system version, and to attach the output of rdiff-backup run with the very verbose option `-v9`. 27 | 28 | This is an open source project and contributions are welcome! 29 | 30 | == History 31 | 32 | Rdiff-backup has been around for almost 20 years now and has proved to be a very solid backup solution. It is still unique in its model of unlimited incrementals with no need for space-consuming regular full backups. 33 | 34 | Current lead developers are Eric Lavarde, Patrik Dufresne and Otto Kekäläinen. 35 | The full list of core developers is available at the https://github.com/rdiff-backup/rdiff-backup/people[rdiff-backup GitHub page] and on the xref:credits.adoc[credits page]. 36 | 37 | The original author and maintainer was *Ben Escoto* from 2001 to 2005. 38 | Key contributors from 2005 to 2016 were Dean Gaudet, Andrew Ferguson and Edward Ned Harvey. 39 | After some hibernation time, Sol1 took over the stewardship of rdiff-backup in February 2016, but there were no new releases. 
40 | In August 2019 https://www.lavar.de/[Eric Lavarde] with the support of Otto Kekäläinen from https://seravo.com/[Seravo] and Patrik Dufresne from http://www.patrikdufresne.com/en/minarca/[Minarca] took over, completed the Python 3 rewrite and finally released rdiff-backup 2.0 in March 2020. 41 | -------------------------------------------------------------------------------- /tools/misc/python-rdiff: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # use librsync module to do simple actions on files: 3 | # - create a signature from a base file 4 | # - create a delta between the signature and a new file 5 | # - re-create the new file by patching the base file with the delta 6 | # this can surely be used to analyse corrupt backups and similar things 7 | 8 | import sys 9 | from rdiff_backup import librsync 10 | 11 | blocksize = 64 * 1024 # just used in copying 12 | librsync_blocksize = None # If not set, use defaults in _librsync module 13 | 14 | 15 | def usage(): 16 | """Print usage and then exit""" 17 | print( 18 | """ 19 | Usage: %(cmd)s "signature" basis_file signature_file 20 | %(cmd)s "delta" signature_file new_file delta_file 21 | %(cmd)s "patch" basis_file delta_file new_file 22 | """ 23 | % {"cmd": sys.argv[0]} 24 | ) 25 | sys.exit(1) 26 | 27 | 28 | def copy_and_close(infp, outfp): 29 | """Copy file streams infp to outfp in blocks, closing when done""" 30 | while 1: 31 | buf = infp.read(blocksize) 32 | if not buf: 33 | break 34 | outfp.write(buf) 35 | assert not infp.close() and not outfp.close() 36 | 37 | 38 | def write_sig(input_path, output_path): 39 | """Open file at input_path, write signature to output_path""" 40 | infp = open(input_path, "rb") 41 | if librsync_blocksize: 42 | sigfp = librsync.SigFile(infp, librsync_blocksize) 43 | else: 44 | sigfp = librsync.SigFile(infp) 45 | copy_and_close(sigfp, open(output_path, "wb")) 46 | 47 | 48 | def write_delta(sig_path, new_path, output_path): 
49 | """Read signature and new file, write delta to output_path""" 50 | deltafp = librsync.DeltaFile(open(sig_path, "rb"), open(new_path, "rb")) 51 | copy_and_close(deltafp, open(output_path, "wb")) 52 | 53 | 54 | def write_patch(basis_path, delta_path, out_path): 55 | """Patch file at basis_path with delta at delta_path, write to out_path""" 56 | patchfp = librsync.PatchedFile(open(basis_path, "rb"), open(delta_path, "rb")) 57 | copy_and_close(patchfp, open(out_path, "wb")) 58 | 59 | 60 | def check_different(filelist): 61 | """Make sure no files are the same""" 62 | d = {} 63 | for file in filelist: 64 | d[file] = file 65 | assert len(d) == len(filelist), "Error, must use all different filenames" 66 | 67 | 68 | def Main(): 69 | """Run program""" 70 | if len(sys.argv) < 4: 71 | usage() 72 | mode = sys.argv[1] 73 | file_args = sys.argv[2:] 74 | check_different(file_args) 75 | if mode == "signature": 76 | if len(file_args) != 2: 77 | usage() 78 | write_sig(*file_args) 79 | elif mode == "delta": 80 | if len(file_args) != 3: 81 | usage() 82 | write_delta(*file_args) 83 | elif mode == "patch": 84 | if len(file_args) != 3: 85 | usage() 86 | write_patch(*file_args) 87 | else: 88 | usage() 89 | 90 | 91 | if __name__ == "__main__": 92 | Main() 93 | -------------------------------------------------------------------------------- /src/rdiffbackup/actions/test.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 the rdiff-backup project 2 | # 3 | # This file is part of rdiff-backup. 4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify 6 | # it under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 
9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 | # General Public License for more details. 14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | 20 | """ 21 | A built-in rdiff-backup action plug-in to test servers. 22 | 23 | This plug-in tests that all remote locations are properly reachable and 24 | usable for a back-up. 25 | """ 26 | 27 | from rdiffbackup import actions 28 | from rdiff_backup import log, SetConnections 29 | 30 | 31 | class TestAction(actions.BaseAction): 32 | """ 33 | Test that servers are properly reachable and usable for back-ups. 34 | """ 35 | name = "test" 36 | security = "validate" 37 | 38 | @classmethod 39 | def add_action_subparser(cls, sub_handler): 40 | subparser = super().add_action_subparser(sub_handler) 41 | subparser.add_argument( 42 | "locations", metavar="[USER@]SERVER::PATH", nargs="+", 43 | help="location of remote repositories to check for connection") 44 | return subparser 45 | 46 | def pre_check(self): 47 | return_code = super().pre_check() 48 | # validate that all locations are remote 49 | for location in self.values.locations: 50 | (file_host, file_path, err) = SetConnections.parse_location(location) 51 | if err: 52 | log.Log(err, log.ERROR) 53 | return_code |= 1 # binary 'or' to always get 1 54 | elif not file_host: 55 | log.Log("Only remote locations can be tested but location " 56 | "'{lo}' isn't remote".format(lo=location), log.ERROR) 57 | return_code |= 1 # binary 'or' to always get 1 58 | 59 | return return_code 60 | 61 | def check(self): 62 | # we call the parent check only to output the failed connections 63 | return_code = super().check() 64 | 65 | # even 
if some connections are bad, we want to validate the remaining 66 | # ones later on. The 'None' filter keeps only trueish values. 67 | self.connected_locations = list(filter(None, self.connected_locations)) 68 | if self.connected_locations: 69 | # at least one location is apparently valid 70 | return 0 71 | else: 72 | return return_code 73 | 74 | def run(self): 75 | result = SetConnections.TestConnections(self.connected_locations) 76 | return result 77 | 78 | 79 | def get_action_class(): 80 | return TestAction 81 | -------------------------------------------------------------------------------- /testing/action_backuprestore_test.py: -------------------------------------------------------------------------------- 1 | """ 2 | Test the basic backup and restore actions with api version >= 201 3 | """ 4 | import os 5 | import unittest 6 | 7 | import commontest as comtst 8 | import fileset 9 | 10 | 11 | class ActionBackupRestoreTest(unittest.TestCase): 12 | """ 13 | Test that rdiff-backup really restores what has been backed-up 14 | """ 15 | 16 | def setUp(self): 17 | self.base_dir = os.path.join(comtst.abs_test_dir, b"backuprestore") 18 | self.from1_struct = { 19 | "from1": {"subs": {"fileA": {"content": "initial"}, "fileB": {}}} 20 | } 21 | self.from1_path = os.path.join(self.base_dir, b"from1") 22 | self.from2_struct = { 23 | "from2": {"subs": {"fileA": {"content": "modified"}, "fileC": {}}} 24 | } 25 | self.from2_path = os.path.join(self.base_dir, b"from2") 26 | fileset.create_fileset(self.base_dir, self.from1_struct) 27 | fileset.create_fileset(self.base_dir, self.from2_struct) 28 | fileset.remove_fileset(self.base_dir, {"bak": {}}) 29 | fileset.remove_fileset(self.base_dir, {"to1": {}}) 30 | fileset.remove_fileset(self.base_dir, {"to2": {}}) 31 | self.bak_path = os.path.join(self.base_dir, b"bak") 32 | self.to1_path = os.path.join(self.base_dir, b"to1") 33 | self.to2_path = os.path.join(self.base_dir, b"to2") 34 | self.success = False 35 | 36 | def 
test_action_backuprestore(self): 37 | """test the "backup" and "restore" actions""" 38 | # we backup twice to the same backup repository at different times 39 | self.assertEqual(comtst.rdiff_backup_action( 40 | False, False, self.from1_path, self.bak_path, 41 | ("--api-version", "201", "--current-time", "10000"), 42 | b"backup", ()), 0) 43 | self.assertEqual(comtst.rdiff_backup_action( 44 | False, True, self.from2_path, self.bak_path, 45 | ("--api-version", "201", "--current-time", "20000"), 46 | b"backup", ()), 0) 47 | 48 | # then we restore the increment and the last mirror to two directories 49 | self.assertEqual(comtst.rdiff_backup_action( 50 | True, False, self.bak_path, self.to1_path, 51 | ("--api-version", "201"), 52 | b"restore", ("--at", "1B")), 0) 53 | self.assertEqual(comtst.rdiff_backup_action( 54 | True, True, self.bak_path, self.to2_path, 55 | ("--api-version", "201"), 56 | b"restore", ()), 0) 57 | 58 | self.assertFalse(fileset.compare_paths(self.from1_path, self.to1_path)) 59 | self.assertFalse(fileset.compare_paths(self.from2_path, self.to2_path)) 60 | 61 | # all tests were successful 62 | self.success = True 63 | 64 | def tearDown(self): 65 | # we clean-up only if the test was successful 66 | if self.success: 67 | fileset.remove_fileset(self.base_dir, self.from1_struct) 68 | fileset.remove_fileset(self.base_dir, self.from2_struct) 69 | fileset.remove_fileset(self.base_dir, {"bak": {}}) 70 | fileset.remove_fileset(self.base_dir, {"to1": {}}) 71 | fileset.remove_fileset(self.base_dir, {"to2": {}}) 72 | 73 | 74 | if __name__ == "__main__": 75 | unittest.main() 76 | -------------------------------------------------------------------------------- /tools/windows/playbook-provision.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Prepare Windows for rdiff-backup development 3 | hosts: windows_hosts 4 | gather_facts: false 5 | tasks: 6 | - name: enable running chocolatey scripts without confirmation 
7 | win_chocolatey_feature: 8 | name: allowGlobalConfirmation 9 | state: enabled 10 | - name: install basic development tools via chocolatey 11 | win_chocolatey: 12 | name: 13 | #- virtio-drivers 14 | - git 15 | - cygwin 16 | - cyg-get # depends on cygwin 17 | - cmake 18 | - 7zip 19 | - vscode # Visual Studio Code (editor) 20 | state: present 21 | - name: install python via chocolatey 22 | win_chocolatey: 23 | name: "python3" 24 | version: "{{ python_version_full }}" 25 | state: present 26 | package_params: /InstallDir:C:\Python64 /InstallDir32:C:\Python32 27 | - name: install dependencies, allows for reboot in-between 28 | win_chocolatey: 29 | name: 30 | - ruby # dependency of asciidoctor, requires reboot 31 | - dotnetfx 32 | - vcredist140 33 | state: present 34 | register: vs_deps 35 | - name: execute reboot if the VS dependencies have changed 36 | win_reboot: 37 | when: vs_deps is changed 38 | - name: install visual studio tools via chocolatey 39 | win_chocolatey: 40 | name: # VC 2017 is the version available on Travis CI 41 | - visualstudio2017buildtools 42 | - visualstudio2017-workload-python 43 | - visualstudio2017-workload-vctools 44 | state: present 45 | register: vs 46 | - name: execute reboot if Visual Studio has changed 47 | win_reboot: 48 | when: vs is changed 49 | 50 | - name: install necessary python libraries 51 | win_command: "\\{{ item }}\\Scripts\\pip.exe install --upgrade pywin32 pyinstaller wheel certifi setuptools-scm tox PyYAML" 52 | register: pipcmd 53 | changed_when: "'Successfully installed ' in pipcmd.stdout" 54 | # pylibacl and pyxattr aren't supported under Windows 55 | loop: 56 | - Python64 57 | - Python32 58 | - name: validate that python seems to work as it should 59 | win_shell: "\\{{ item }}\\python.exe -c 'import pywintypes, winnt, win32api, win32security, win32file, win32con'" 60 | changed_when: false # this command doesn't change anything ever 61 | loop: 62 | - Python64 63 | - Python32 64 | - name: 
Install NuGet to overcome issue 50332 in Ansible 66 | # https://github.com/ansible/ansible/issues/50332 67 | win_shell: Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force 68 | - name: install the Powershell Community Extensions for win_unzip 69 | win_psmodule: 70 | name: Pscx 71 | state: present 72 | allow_clobber: true 73 | - name: refresh the environment variables as we need gem in the PATH 74 | win_command: refreshenv.cmd 75 | - name: install asciidoctor gem 76 | win_command: gem.cmd install asciidoctor 77 | 78 | - name: Prepare Linux to be used as Samba server 79 | hosts: samba_servers 80 | become: true 81 | gather_facts: true 82 | pre_tasks: 83 | - name: set hostname of server 84 | hostname: 85 | name: "{{ inventory_hostname_short }}.vagrant.example.com" 86 | roles: 87 | #- bertvv.rh-base 88 | - rh-base 89 | #- bertvv.samba 90 | - samba 91 | -------------------------------------------------------------------------------- /docs/api/v200.adoc: -------------------------------------------------------------------------------- 1 | = rdiff-backup API description v200 2 | :sectnums: 3 | :toc: 4 | 5 | == Format 6 | 7 | * format changed from Python 2 to Python 3 (incompatible types esp. 8 | bytes). 
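The bytes-versus-str split mentioned above is the crux of the Python 2 to 3 migration. A generic Python 3 illustration of the incompatibility (not rdiff-backup code):

```python
# Python 3 strictly separates str (text) and bytes -- the incompatibility
# behind the v200 format change. Python 2 coerced between the two silently.
import os.path

text_name = "backup.log"

# Mixing bytes and str raises TypeError in Python 3:
try:
    _ = b"/var/backups/" + text_name
    mixed_ok = True
except TypeError:
    mixed_ok = False

# Explicit encoding is required instead (POSIX-style paths assumed here):
joined = os.path.join(b"/var/backups", text_name.encode("utf-8"))

assert mixed_ok is False
assert joined == b"/var/backups/backup.log"
```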
9 | 10 | == Sources 11 | 12 | === Internal 13 | 14 | * `backup.DestinationStruct` 15 | * `backup.SourceStruct` 16 | * `backup.SourceStruct.set_source_select` 17 | * `compare.DataSide` 18 | * `compare.RepoSide` 19 | * `compare.Verify` 20 | * `conn_number` 21 | * `quit` 22 | * `reval` 23 | * `eas_acls.get_acl_lists_from_rp` 24 | * `eas_acls.set_rp_acl` 25 | * `FilenameMapping.set_init_quote_vals` 26 | * `FilenameMapping.set_init_quote_vals_local` 27 | * `fs_abilities.backup_set_globals` 28 | * `fs_abilities.get_readonly_fsa` 29 | * `fs_abilities.restore_set_globals` 30 | * `fs_abilities.single_set_globals` 31 | * `Globals.get` 32 | * `Globals.postset_regexp_local` 33 | * `Globals.set` 34 | * `Globals.set_local` 35 | * `Hardlink.initialize_dictionaries` 36 | * `log.ErrorLog.close` 37 | * `log.ErrorLog.isopen` 38 | * `log.ErrorLog.open` 39 | * `log.ErrorLog.write` 40 | * `log.ErrorLog.write_if_open` 41 | * `log.Log.close_logfile_allconn` 42 | * `log.Log.close_logfile_local` 43 | * `log.Log.log_to_file` 44 | * `log.Log.open_logfile_allconn` 45 | * `log.Log.open_logfile_local` 46 | * `log.Log.setterm_verbosity` 47 | * `log.Log.setverbosity` 48 | * `Main.backup_close_statistics` 49 | * `Main.backup_remove_curmirror_local` 50 | * `Main.backup_touch_curmirror_local` 51 | * `manage.delete_earlier_than_local` 52 | * `regress.check_pids` 53 | * `regress.Regress` 54 | * `restore.ListAtTime` 55 | * `restore.ListChangedSince` 56 | * `restore.MirrorStruct` 57 | * `restore.MirrorStruct.set_mirror_select` 58 | * `restore.TargetStruct` 59 | * `restore.TargetStruct.set_target_select` 60 | * `robust.install_signal_handlers` 61 | * `rpath.copy_reg_file` 62 | * `rpath.delete_dir_no_files` 63 | * `rpath.gzip_open_local_read` 64 | * `rpath.make_file_dict` 65 | * `rpath.make_socket_local` 66 | * `rpath.open_local_read` 67 | * `rpath.RPath.fsync_local` 68 | * `rpath.setdata_local` 69 | * `SetConnections.add_redirected_conn` 70 | * `SetConnections.init_connection_remote` 71 | * 
`statistics.record_error` 72 | * `Time.setcurtime_local` 73 | * `Time.setprevtime_local` 74 | * `user_group.init_group_mapping` 75 | * `user_group.init_user_mapping` 76 | * `user_group.map_rpath` 77 | 78 | === External 79 | 80 | * `gzip.GzipFile` 81 | * `open` 82 | * `os.chmod` 83 | * `os.chown` 84 | * `os.getuid` 85 | * `os.lchown` 86 | * `os.link` 87 | * `os.listdir` 88 | * `os.makedev` 89 | * `os.makedirs` 90 | * `os.mkdir` 91 | * `os.mkfifo` 92 | * `os.mknod` 93 | * `os.name` 94 | * `os.rename` 95 | * `os.rmdir` 96 | * `os.symlink` 97 | * `os.unlink` 98 | * `os.utime` 99 | * `shutil.rmtree` 100 | * `sys.stdout.write` 101 | * `win32security.ConvertSecurityDescriptorToStringSecurityDescriptor` 102 | * `win32security.ConvertStringSecurityDescriptorToSecurityDescriptor` 103 | * `win32security.GetNamedSecurityInfo` 104 | * `win32security.SetNamedSecurityInfo` 105 | * `xattr.get` 106 | * `xattr.list` 107 | * `xattr.remove` 108 | * `xattr.set` 109 | 110 | == Testing 111 | 112 | === Internal 113 | 114 | === External 115 | 116 | * `hasattr` 117 | * `int` 118 | * `ord` 119 | * `os.lstat` 120 | * `os.path.join` 121 | * `os.remove` 122 | * `pow` 123 | * `str` 124 | * `tempfile.mktemp` 125 | -------------------------------------------------------------------------------- /docs/rdiff-backup-statistics.1: -------------------------------------------------------------------------------- 1 | .TH RDIFF-BACKUP-STATISTICS 1 "{{ month_year }}" "Version {{ version }}" "User Manuals" \" -*- nroff -*- 2 | .SH NAME 3 | rdiff-backup-statistics \- summarize rdiff-backup statistics files 4 | .SH SYNOPSIS 5 | .B rdiff-backup-statistics 6 | .BI [\-\-begin-time " time" ] 7 | .BI [\-\-end-time " time" ] 8 | .BI [\-\-minimum-ratio " ratio" ] 9 | .B [\-\-null-separator] 10 | .B [\-\-quiet] 11 | .B [-h|\-\-help] 12 | .B [-V|\-\-version] 13 | .I repository 14 | 15 | .SH DESCRIPTION 16 | .BI rdiff-backup-statistics 17 | reads the matching statistics files in a backup repository made by 18 | .B 
rdiff-backup 19 | and prints some summary statistics to the screen. It does not alter 20 | the repository in any way. 21 | 22 | The required argument is the pathname of the root of an rdiff-backup 23 | repository. For instance, if you ran "rdiff-backup in out", you could 24 | later run "rdiff-backup-statistics out". 25 | 26 | The output has two parts. The first is simply an average of all the 27 | matching session_statistics files. The meaning of these fields is 28 | explained in the FAQ included in the package, and also at 29 | .IR https://rdiff-backup.net/docs/FAQ.html#statistics . 30 | 31 | The second section lists some particularly significant files 32 | (including directories). These files either contain a lot of 33 | data, take up increment space, or contain a lot of changed files. All 34 | the files that are above the minimum ratio (default 5%) will be 35 | listed. 36 | 37 | If a file or directory is listed, its contributions are subtracted 38 | from its parent. That is why the percentage listed after a directory 39 | can be larger than the percentage of its parent. Without this, the 40 | root directory would always be the largest, and the output would be 41 | boring. 42 | 43 | .SH OPTIONS 44 | .TP 45 | .BI \-\-begin-time " time" 46 | Do not read statistics files older than 47 | .IR time . 48 | By default, all statistics files will be read. 49 | .I time 50 | should be in the same format taken by \-\-restore-as-of. (See 51 | .B TIME FORMATS 52 | in the rdiff-backup man page for details.) 53 | .TP 54 | .BI \-\-end-time " time" 55 | Like 56 | .B \-\-begin-time 57 | but exclude statistics files later than 58 | .IR time . 59 | .TP 60 | .B -h, \-\-help 61 | Output a short usage description and exit. 62 | .TP 63 | .BI \-\-minimum-ratio " ratio" 64 | Print all directories contributing more than the given ratio to the 65 | total. The default value is .05, or 5 percent. 
66 | .TP 67 | .B \-\-null-separator 68 | Specify that the lines of the file_statistics file are separated by 69 | nulls (\\0). The default is to assume that newlines separate. Use 70 | this switch if rdiff-backup was run with the \-\-null-separator when 71 | making the given repository. 72 | .TP 73 | .B \-\-quiet 74 | Suppress printing of the "Processing statistics from session..." 75 | output lines. 76 | .TP 77 | .B -V, \-\-version 78 | Output full path to command and version then exit. 79 | 80 | .SH BUGS 81 | When aggregating multiple statistics files, some directories above 82 | (but close to) the minimum ratio may not be displayed. For this 83 | reason, you may want to set the minimum-ratio lower than needed. 84 | 85 | .SH AUTHOR 86 | Ben Escoto , based on original script by Dean Gaudet. 87 | 88 | .SH SEE ALSO 89 | .BR rdiff-backup (1), 90 | .BR python (1). 91 | The rdiff-backup web page is at 92 | .IR https://rdiff-backup.net/ . 93 | -------------------------------------------------------------------------------- /docs/rdiff-backup-statistics.1.adoc: -------------------------------------------------------------------------------- 1 | = RDIFF-BACKUP-STATISTICS(1) 2 | :doctype: manpage 3 | :docdate: {revdate} 4 | :man source: rdiff-backup-statistics 5 | :man version: {revnumber} 6 | :man manual: Rdiff-Backup-Statistics Manual {revnumber} 7 | 8 | == NAME 9 | 10 | rdiff-backup-statistics - summarize rdiff-backup statistics files 11 | 12 | == SYNOPSIS 13 | 14 | *rdiff-backup-statistics* [*--begin-time* _time_] [*--end-time* _time_] [*--minimum-ratio* _ratio_] [*--null-separator*] [*--quiet*] [*-h*|*--help*] [*-V*|*--version*] _repository_ 15 | 16 | == DESCRIPTION 17 | *rdiff-backup-statistics* 18 | reads the matching statistics files in a backup repository made by 19 | *rdiff-backup* 20 | and prints some summary statistics to the screen. It does not alter 21 | the repository in any way. 
22 | 23 | The required argument is the pathname of the root of an rdiff-backup 24 | repository. For instance, if you ran '[.code]``rdiff-backup in out``', 25 | you could later run '[.code]``rdiff-backup-statistics out``'. 26 | 27 | The output has two parts. The first is simply an average of all the 28 | matching session_statistics files. The meaning of these fields is 29 | explained in the FAQ included in the package, and also at 30 | https://rdiff-backup.net/docs/FAQ.html#statistics . 31 | 32 | The second section lists some particularly significant files 33 | (including directories). These files either contain a lot of 34 | data, take up increment space, or contain a lot of changed files. All 35 | the files that are above the minimum ratio (default 5%) will be 36 | listed. 37 | 38 | If a file or directory is listed, its contributions are subtracted 39 | from its parent. That is why the percentage listed after a directory 40 | can be larger than the percentage of its parent. Without this, the 41 | root directory would always be the largest, and the output would be 42 | boring. 43 | 44 | == OPTIONS 45 | 46 | --begin-time _time_:: 47 | Do not read statistics files older than _time_. 48 | By default, all statistics files will be read. 49 | _time_ should be in the same format taken by *--restore-as-of*. (See 50 | *TIME FORMATS* in the rdiff-backup man page for details.) 51 | 52 | --end-time _time_:: 53 | Like *--begin-time* but exclude statistics files later than _time_. 54 | 55 | -h, --help:: 56 | Output a short usage description and exit. 57 | 58 | --minimum-ratio _ratio_:: 59 | Print all directories contributing more than the given ratio to the 60 | total. The default value is .05, or 5 percent. 61 | 62 | --null-separator:: 63 | Specify that the lines of the file_statistics file are separated by 64 | nulls ('\0'). The default is to assume that newlines separate. 
Use 65 | this switch if rdiff-backup was run with the *--null-separator* when 66 | making the given repository. 67 | 68 | --quiet:: 69 | Suppress printing of the '```Processing statistics from session...```' 70 | output lines. 71 | 72 | -V, --version:: 73 | Output full path to command and version then exit. 74 | 75 | == BUGS 76 | When aggregating multiple statistics files, some directories above 77 | (but close to) the minimum ratio may not be displayed. For this 78 | reason, you may want to set the minimum-ratio lower than needed. 79 | 80 | == AUTHOR 81 | Ben Escoto link:mailto:ben@emerose.org[ben@emerose.org], 82 | based on original script by Dean Gaudet. 83 | 84 | == SEE ALSO 85 | *rdiff-backup*(1), *python*(1). 86 | The rdiff-backup web page is at https://rdiff-backup.net/ . 87 | -------------------------------------------------------------------------------- /src/rdiff_backup/hash.py: -------------------------------------------------------------------------------- 1 | # Copyright 2005 Ben Escoto 2 | # 3 | # This file is part of rdiff-backup. 4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify 6 | # under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 | # General Public License for more details. 14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | """Contains a file wrapper that returns a hash on close""" 20 | 21 | import hashlib 22 | from .
import Globals 23 | 24 | 25 | class FileWrapper: 26 | """Wrapper around a file-like object 27 | 28 | Only use this with files that will be read through in a single 29 | pass and then closed. (There is no seek().) When you close it, 30 | return value will be a Report. 31 | 32 | Currently this just calculates a sha1sum of the datastream. 33 | 34 | """ 35 | 36 | def __init__(self, fileobj): 37 | self.fileobj = fileobj 38 | self.sha1 = hashlib.sha1() 39 | self.closed = False 40 | 41 | def read(self, length=-1): 42 | assert not self.closed, "You can't read from an already closed file." 43 | buf = self.fileobj.read(length) 44 | self.sha1.update(buf) 45 | return buf 46 | 47 | def close(self): 48 | self.closed = True 49 | return Report(self.fileobj.close(), self.sha1.hexdigest()) 50 | 51 | 52 | class Report: 53 | """Hold final information about a byte stream""" 54 | 55 | def __init__(self, close_val, sha1_digest): 56 | # FIXME this is a strange construct because it looks like the fileobj 57 | # wrapped in a FileWrapper already returns a Report as closing value, 58 | # which we can't wrap again in a Report, so we only check that the 59 | # hash values do fit. 
60 | if isinstance(close_val, Report): 61 | assert close_val.sha1_digest == sha1_digest, ( 62 | "Hashes from return code {hash1} and given {hash2} " 63 | "don't match".format( 64 | hash1=close_val.sha1_digest, hash2=sha1_digest)) 65 | else: 66 | assert not close_val, ( 67 | "Return code {rc} of type {rctype} isn't null".format( 68 | rc=close_val, rctype=type(close_val))) 69 | self.sha1_digest = sha1_digest 70 | 71 | 72 | def compute_sha1(rp, compressed=0): 73 | """Return the hex sha1 hash of given rpath""" 74 | assert rp.conn is Globals.local_connection, ( 75 | "It's inefficient to calculate hash remotely.") 76 | digest = compute_sha1_fp(rp.open("rb", compressed)) 77 | rp.set_sha1(digest) 78 | return digest 79 | 80 | 81 | def compute_sha1_fp(fp, compressed=0): 82 | """Return hex sha1 hash of given file-like object""" 83 | blocksize = Globals.blocksize 84 | fw = FileWrapper(fp) 85 | while fw.read(blocksize): 86 | pass # we rely on FileWrapper to calculate the checksum 87 | return fw.close().sha1_digest 88 | -------------------------------------------------------------------------------- /tools/windows/roles/rh-base/CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Change log 2 | 3 | This file contains all notable changes to the bertvv.rh-base Ansible role. This file adheres to the guidelines of [http://keepachangelog.com/](http://keepachangelog.com/). Versioning follows [Semantic Versioning](http://semver.org/).
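The `FileWrapper`/`compute_sha1_fp` pattern shown in hash.py above — reading a stream to exhaustion in fixed-size blocks while accumulating a digest — can be sketched standalone. This is a simplified illustration using only `hashlib`; the `BLOCKSIZE` constant is a hypothetical stand-in for `Globals.blocksize`:

```python
import hashlib
import io

BLOCKSIZE = 128 * 1024  # hypothetical stand-in for Globals.blocksize


def sha1_of_stream(fp, blocksize=BLOCKSIZE):
    """Single-pass SHA-1 of a file-like object, as compute_sha1_fp does."""
    digest = hashlib.sha1()
    while True:
        buf = fp.read(blocksize)
        if not buf:  # EOF: read() returned an empty bytes object
            break
        digest.update(buf)
    return digest.hexdigest()


# The digest of a streamed read matches hashing the whole buffer at once.
data = b"some file content\n" * 1000
assert sha1_of_stream(io.BytesIO(data)) == hashlib.sha1(data).hexdigest()
```

Wrapping the stream (as `FileWrapper` does) rather than hashing after the fact lets the caller both consume the data and obtain the digest in a single pass, which matters when the data arrives over a connection and cannot be re-read.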
4 | 5 | ## 3.0.0 - 2019-10-16 6 | 7 | ### Added 8 | 9 | - (GH-12) Dynamic Message of the Day (credit: @TimCaudron) 10 | - (GH-11, GH-13) Support for automatic updates (credit: @BrechtClaeys and @JensVanDeynse1994) 11 | - (GH-16) Set AllowGroups in sshd config 12 | 13 | ### Changed 14 | 15 | - (GH-10) Update vars-CentOS.yml (credit: @MichaelLeeHobbs) 16 | - (GH-14) Ensure all groups specified in `rhbase_users` exist, even if not added to `rhbase_groups` (credit: @T0MASD) 17 | - Set SELinux booleans when appropriate, i.e. when SELinux state is either permissive or enforcing. 18 | - Fix coding style using Yamllint 19 | 20 | ### Removed 21 | 22 | - `rhbase_motd`, superseded by `rhbase_dynamic_motd`. **Breaking change**: old playbooks will still work, but the role's behaviour is changed (no custom MOTD will be generated). 23 | 24 | ## 2.3.0 - 2017-11-14 25 | 26 | ### Added 27 | 28 | - (GH-8) Variable `rhbase_selinux_booleans` (credit: @SebaNuss) 29 | 30 | ### Changed 31 | 32 | - (GH-7) Updated documentation for generating password hashes 33 | - Fix Ansible 2.4 deprecation warnings 34 | 35 | ## 2.2.0 - 2017-09-28 36 | 37 | ### Added 38 | 39 | - (GH-5) Variable `rhbase_tz` sets the TZ environment variable to save system calls 40 | 41 | ### Changed 42 | 43 | - Give keys of the `rhbase_users` variable default values. Only user name is required. See the for details. 44 | - Upgrade base boxes to latest versions: CentOS 7.4 and Fedora 26 45 | - (GH-6) Ensure /var/log/journal/ exists so the system can keep persistent logs 46 | 47 | ## 2.1.0 - 2016-11-24 48 | 49 | ### Added 50 | 51 | - Variable `rhbase_firewall_interfaces` can now be used to add a network interface to the public zone. 52 | 53 | ### Changed 54 | 55 | - The role now ensures essential systemd services (e.g. journald, tmpfiles) are running. The difference with the previous version is that two services were added and they are now enumerated in a variable.
56 | 57 | ## 2.0.0 - 2016-10-30 58 | 59 | ### Added 60 | 61 | - Added ‘exclude=’ option to `/etc/dnf.conf` and `/etc/yum.conf` and the variable `rhbase_repo_exclude_from_updates` 62 | - Added task that creates `/etc/machine-id` if necessary. When this file is not set up correctly (with `systemd-machine-id-setup`), `systemd-journald` cannot run. 63 | 64 | ### Changed 65 | 66 | - (RH-2) Renamed variables `rhbase_admin_user` to `rhbase_ssh_user` and `rhbase_admin_ssh_key` to `rhbase_ssh_key`. The previous names were confusing, as the role did nothing special to make this user an admin (i.e. member of `wheel`). This is a breaking change, hence the major version bump. 67 | - Added `firewalld` to the list of packages to be installed as dependencies 68 | 69 | ## 1.0.2 - 2016-09-20 70 | 71 | Bugfix release 72 | 73 | ### Changed 74 | 75 | - Fixed tag names in tasks 76 | 77 | ## 1.0.1 - 2016-09-21 78 | 79 | Bugfix release 80 | 81 | ### Changed 82 | 83 | - Added handler to restart the firewall daemon 84 | - Added forgotten role tag to a task 85 | 86 | ## 1.0.0 - 2016-06-08 87 | 88 | First release! 89 | 90 | ### Added 91 | 92 | - Functionality from roles [bertvv.el7](https://galaxy.ansible.com/bertvv/el7) and [bertvv.fedora](https://galaxy.ansible.com/bertvv/fedora). 93 | 94 | -------------------------------------------------------------------------------- /testing/user_grouptest.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | import pwd 3 | import code 4 | from rdiff_backup import user_group, Globals 5 | 6 | 7 | class UserGroupTest(unittest.TestCase): 8 | """Test user and group functionality""" 9 | 10 | def test_basic_conversion(self): 11 | """Test basic id2name. 
May need to modify for different systems""" 12 | user_group._uid2uname_dict = {} 13 | user_group._gid2gname_dict = {} 14 | self.assertEqual(user_group.uid2uname(0), "root") 15 | self.assertEqual(user_group.uid2uname(0), "root") 16 | self.assertEqual(user_group.gid2gname(0), "root") 17 | self.assertEqual(user_group.gid2gname(0), "root") 18 | # Assume no user has uid 29378 19 | self.assertIsNone(user_group.gid2gname(29378)) 20 | self.assertIsNone(user_group.gid2gname(29378)) 21 | 22 | def test_basic_reverse(self): 23 | """Test basic name2id. Depends on systems users/groups""" 24 | user_group._uname2uid_dict = {} 25 | user_group._gname2gid_dict = {} 26 | self.assertEqual(user_group._uname2uid("root"), 0) 27 | self.assertEqual(user_group._uname2uid("root"), 0) 28 | self.assertEqual(user_group._gname2gid("root"), 0) 29 | self.assertEqual(user_group._gname2gid("root"), 0) 30 | self.assertIsNone(user_group._uname2uid("aoeuth3t2ug89")) 31 | self.assertIsNone(user_group._uname2uid("aoeuth3t2ug89")) 32 | 33 | def test_default_mapping(self): 34 | """Test the default user mapping""" 35 | Globals.isdest = 1 36 | rootid = 0 37 | binid = pwd.getpwnam('bin')[2] 38 | syncid = pwd.getpwnam('sync')[2] 39 | user_group.init_user_mapping() 40 | self.assertEqual(user_group._user_map(0), rootid) 41 | self.assertEqual(user_group._user_map(0, 'bin'), binid) 42 | self.assertEqual(user_group._user_map(0, 'sync'), syncid) 43 | self.assertIsNone(user_group._user_map.map_acl(0, 'aoeuth3t2ug89')) 44 | 45 | def test_user_mapping(self): 46 | """Test the user mapping file through the _DefinedMap class""" 47 | mapping_string = """ 48 | root:bin 49 | bin:root 50 | 500:501 51 | 0:sync 52 | sync:0""" 53 | Globals.isdest = 1 54 | rootid = 0 55 | binid = pwd.getpwnam('bin')[2] 56 | syncid = pwd.getpwnam('sync')[2] 57 | daemonid = pwd.getpwnam('daemon')[2] 58 | user_group.init_user_mapping(mapping_string) 59 | 60 | self.assertEqual(user_group._user_map(rootid, 'root'), binid) 61 | 
self.assertEqual(user_group._user_map(binid, 'bin'), rootid) 62 | self.assertEqual(user_group._user_map(0), syncid) 63 | self.assertEqual(user_group._user_map(syncid, 'sync'), 0) 64 | self.assertEqual(user_group._user_map(500), 501) 65 | 66 | self.assertEqual(user_group._user_map(501), 501) 67 | self.assertEqual(user_group._user_map(123, 'daemon'), daemonid) 68 | 69 | self.assertIsNone(user_group._user_map.map_acl(29378, 'aoeuth3t2ug89')) 70 | self.assertIs(user_group._user_map.map_acl(0, 'aoeuth3t2ug89'), syncid) 71 | 72 | if 0: 73 | code.InteractiveConsole(globals()).interact() 74 | 75 | def test_overflow(self): 76 | """Make sure querying large uids/gids doesn't raise exception""" 77 | large_num = 4000000000 78 | self.assertIsNone(user_group.uid2uname(large_num)) 79 | self.assertIsNone(user_group.gid2gname(large_num)) 80 | 81 | 82 | if __name__ == "__main__": 83 | unittest.main() 84 | -------------------------------------------------------------------------------- /tools/crossversion/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # All Vagrant configuration is done below. The "2" in Vagrant.configure 5 | # configures the configuration version (we support older styles for 6 | # backwards compatibility). Please don't change it unless you know what 7 | # you're doing. 8 | Vagrant.configure("2") do |config| 9 | # The most common configuration options are documented and commented below. 10 | # For a complete reference, please see the online documentation at 11 | # https://docs.vagrantup.com. 12 | 13 | # Every Vagrant development environment requires a box. You can search for 14 | # boxes at https://vagrantcloud.com/search. 15 | config.vm.define "oldrdiffbackup" do |oldrdiffbackup| 16 | oldrdiffbackup.vm.box = "centos/7" 17 | end 18 | 19 | # Disable automatic box update checking. 
If you disable this, then 20 | # boxes will only be checked for updates when the user runs 21 | # `vagrant box outdated`. This is not recommended. 22 | # config.vm.box_check_update = false 23 | 24 | # Create a forwarded port mapping which allows access to a specific port 25 | # within the machine from a port on the host machine. In the example below, 26 | # accessing "localhost:8080" will access port 80 on the guest machine. 27 | # NOTE: This will enable public access to the opened port 28 | # config.vm.network "forwarded_port", guest: 80, host: 8080 29 | 30 | # Create a forwarded port mapping which allows access to a specific port 31 | # within the machine from a port on the host machine and only allow access 32 | # via 127.0.0.1 to disable public access 33 | # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1" 34 | 35 | # Create a private network, which allows host-only access to the machine 36 | # using a specific IP. 37 | # config.vm.network "private_network", ip: "192.168.33.10" 38 | 39 | # Create a public network, which generally matches a bridged network. 40 | # Bridged networks make the machine appear as another physical device on 41 | # your network. 42 | # config.vm.network "public_network" 43 | 44 | # Share an additional folder to the guest VM. The first argument is 45 | # the path on the host to the actual folder. The second argument is 46 | # the path on the guest to mount the folder. And the optional third 47 | # argument is a set of non-required options. 48 | # config.vm.synced_folder "../data", "/vagrant_data" 49 | config.vm.synced_folder ".", "/vagrant", disabled: true 50 | 51 | 52 | # Provider-specific configuration so you can fine-tune various 53 | # backing providers for Vagrant. These expose provider-specific options.
54 | # Example for VirtualBox: 55 | # 56 | # config.vm.provider "virtualbox" do |vb| 57 | # # Display the VirtualBox GUI when booting the machine 58 | # vb.gui = true 59 | # 60 | # # Customize the amount of memory on the VM: 61 | # vb.memory = "1024" 62 | # end 63 | # 64 | # View the documentation for the provider you are using for more 65 | # information on available options. 66 | 67 | # Enable provisioning with a shell script. Additional provisioners such as 68 | # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the 69 | # documentation for more information about their specific syntax and use. 70 | # config.vm.provision "shell", inline: <<-SHELL 71 | # apt-get update 72 | # apt-get install -y apache2 73 | # SHELL 74 | config.vm.provision "ansible" do |ansible| 75 | ansible.verbose = "v" 76 | ansible.playbook = "playbook-provision.yml" 77 | end 78 | end 79 | -------------------------------------------------------------------------------- /src/rdiffbackup/actions/server.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 the rdiff-backup project 2 | # 3 | # This file is part of rdiff-backup. 4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify 6 | # under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 | # General Public License for more details. 
14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | 20 | """ 21 | A built-in rdiff-backup action plug-in to start a remote server process. 22 | """ 23 | 24 | import os 25 | import sys 26 | 27 | from rdiffbackup import actions 28 | from rdiff_backup import (connection, log, Security) 29 | 30 | 31 | class ServerAction(actions.BaseAction): 32 | """ 33 | Start rdiff-backup in server mode (only meant for internal use). 34 | """ 35 | name = "server" 36 | security = "server" 37 | parent_parsers = [actions.RESTRICT_PARSER] 38 | 39 | @classmethod 40 | def add_action_subparser(cls, sub_handler): 41 | subparser = super().add_action_subparser(sub_handler) 42 | subparser.add_argument( 43 | "--debug", action="store_true", 44 | help="Allow for remote python debugging (rpdb) using netcat") 45 | return subparser 46 | 47 | def __init__(self, values): 48 | super().__init__(values) 49 | if 'debug' in self.values and self.values.debug: 50 | self._set_breakpoint() 51 | 52 | def connect(self): 53 | conn_value = super().connect() 54 | if conn_value: 55 | Security.initialize(self.get_security_class(), [], 56 | security_level=self.values.restrict_mode, 57 | restrict_path=self.values.restrict_path) 58 | return conn_value 59 | 60 | def run(self): 61 | return connection.PipeConnection(sys.stdin.buffer, 62 | sys.stdout.buffer).Server() 63 | 64 | def _set_breakpoint(self): 65 | """ 66 | Set a breakpoint for remote debugging 67 | 68 | Use the environment variable RDIFF_BACKUP_DEBUG to set a non-default 69 | listening address and/or port (default is 127.0.0.1:4444). 70 | Valid values are 'addr', 'addr:port' or ':port'. 
71 | """ 72 | try: 73 | import rpdb 74 | debug_values = os.getenv("RDIFF_BACKUP_DEBUG", "").split(":") 75 | if debug_values != [""]: 76 | if debug_values[0]: 77 | debug_addr = debug_values[0] 78 | else: 79 | debug_addr = "127.0.0.1" 80 | if len(debug_values) > 1: 81 | debug_port = int(debug_values[1]) 82 | else: 83 | debug_port = 4444 84 | rpdb.set_trace(addr=debug_addr, port=debug_port) 85 | else: 86 | # connect to the default 127.0.0.1:4444 87 | rpdb.set_trace() 88 | except ImportError: 89 | log.Log("Remote debugging impossible, please install rpdb", 90 | log.Log.WARNING) 91 | 92 | 93 | def get_action_class(): 94 | return ServerAction 95 | -------------------------------------------------------------------------------- /testing/errorsrecovertest.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | from commontest import abs_test_dir, re_init_rpath_dir, rdiff_backup 3 | from rdiff_backup import rpath, Globals 4 | 5 | # This testing file is meant for tests based on errors introduced by 6 | # earlier versions of rdiff-backup and how newer versions are able to cope 7 | # with those. 8 | 9 | 10 | class BrokenRepoTest(unittest.TestCase): 11 | """Handling of somehow broken repos""" 12 | def makerp(self, path): 13 | return rpath.RPath(Globals.local_connection, path) 14 | 15 | def makeext(self, path): 16 | return self.root.new_index(tuple(path.split("/"))) 17 | 18 | def testDuplicateMetadataTimestamp(self): 19 | """This test is based on issue #322 where a diff and a snapshot 20 | metadata mirror files had the same timestamp, which made rdiff-backup 21 | choke. 
We check that rdiff-backup still fails by default but can be 22 | taught to ignore the error with --allow-duplicate-timestamps so that 23 | the repo can be fixed.""" 24 | 25 | # create an empty directory 26 | test_base_rp = self.makerp(abs_test_dir).append("dupl_meta_time") 27 | re_init_rpath_dir(test_base_rp) 28 | 29 | # create enough incremental backups to have one metadata snapshot 30 | # in-between, which we can manipulate to simulate the error 31 | source_rp = test_base_rp.append("source") 32 | target_rp = test_base_rp.append("target") 33 | source_rp.mkdir() 34 | for suffix in range(1, 15): 35 | source_rp.append("file%02d" % suffix).touch() 36 | rdiff_backup(1, 1, source_rp.__fspath__(), target_rp.__fspath__(), 37 | current_time=suffix * 10000) 38 | # identify the oldest (aka first) mirror metadata snapshot 39 | # and sort the list because some filesystems don't respect the order 40 | rb_data_rp = target_rp.append("rdiff-backup-data") 41 | files_list = sorted(filter( 42 | lambda x: x.startswith(b"mirror_metadata."), 43 | rb_data_rp.listdir())) 44 | meta_snapshot_rp = rb_data_rp.append(files_list[8]) 45 | # create a diff with the same data as the identified snapshot 46 | meta_dupldiff_rp = rb_data_rp.append(files_list[8].replace( 47 | b".snapshot.gz", b".diff.gz")) 48 | rpath.copy(meta_snapshot_rp, meta_dupldiff_rp) 49 | 50 | # this succeeds 51 | rdiff_backup(1, 1, target_rp.__fspath__(), None, 52 | extra_options=b"--check-destination-dir") 53 | # now this should fail 54 | source_rp.append("file15").touch() 55 | rdiff_backup(1, 1, source_rp.__fspath__(), target_rp.__fspath__(), 56 | current_time=15 * 10000, expected_ret_val=1) 57 | # and this should also fail 58 | rdiff_backup(1, 1, target_rp.__fspath__(), None, expected_ret_val=1, 59 | extra_options=b"--check-destination-dir") 60 | # but this should succeed 61 | rdiff_backup(1, 1, target_rp.__fspath__(), None, 62 | extra_options=b"--allow-duplicate-timestamps --check-destination-dir") 63 | # now we can 
clean-up, getting rid of the duplicate metadata mirrors 64 | # NOTE: we could have cleaned-up even without checking/fixing the directory 65 | # but this shouldn't be the recommended practice. 66 | rdiff_backup(1, 1, target_rp.__fspath__(), None, 67 | extra_options=b"--remove-older-than 100000 --force") 68 | # and this should at last succeed 69 | source_rp.append("file16").touch() 70 | rdiff_backup(1, 1, source_rp.__fspath__(), target_rp.__fspath__(), 71 | current_time=16 * 10000) 72 | 73 | 74 | if __name__ == "__main__": 75 | unittest.main() 76 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | # tox (https://tox.readthedocs.io/) is a tool for running tests 2 | # in multiple virtualenvs. This configuration file will run the 3 | # test suite on all supported python versions. To use it, "pip install tox" 4 | # and then run "tox" from this directory. 5 | 6 | # Configuration file for quick / short tests. 7 | # Use tox_slow.ini for longer running tests. 8 | 9 | [tox] 10 | envlist = py36, py37, py38, py39, flake8 11 | 12 | [testenv] 13 | # make sure those variables are passed down; you should define 14 | # either explicitly the RDIFF_TEST_* variables or rely on the current 15 | # user being correctly identified (which might not happen in a container) 16 | passenv = RDIFF_TEST_* RDIFF_BACKUP_* 17 | setenv = 18 | # paths for coverage must be absolute so that sub-processes find them 19 | # even if they're started from another location. 
20 | COVERAGE_FILE = {envlogdir}/coverage.sqlite 21 | COVERAGE_PROCESS_START = {toxinidir}/tox.ini 22 | deps = 23 | importlib-metadata ~= 1.0 ; python_version < "3.8" 24 | PyYAML 25 | pyxattr 26 | pylibacl 27 | coverage 28 | # whitelist_externals = 29 | commands_pre = 30 | rdiff-backup --version 31 | # must be the first command to setup the test environment 32 | python testing/commontest.py 33 | coverage erase 34 | # write the hook file which will make sure that coverage is loaded 35 | # also for sub-processes, like for "client/server" rdiff-backup 36 | python -c 'with open("{envsitepackagesdir}/coverage.pth","w") as fd: fd.write("import coverage; coverage.process_startup()\n")' 37 | commands = 38 | coverage run testing/action_test_test.py 39 | coverage run testing/action_backuprestore_test.py 40 | coverage run testing/api_test.py 41 | coverage run testing/ctest.py 42 | coverage run testing/timetest.py 43 | coverage run testing/librsynctest.py 44 | coverage run testing/statisticstest.py 45 | coverage run testing/user_grouptest.py 46 | coverage run testing/setconnectionstest.py 47 | coverage run testing/iterfiletest.py 48 | coverage run testing/longnametest.py 49 | coverage run testing/robusttest.py 50 | coverage run testing/connectiontest.py 51 | coverage run testing/incrementtest.py 52 | coverage run testing/hardlinktest.py 53 | coverage run testing/eas_aclstest.py 54 | coverage run testing/FilenameMappingtest.py 55 | coverage run testing/fs_abilitiestest.py 56 | coverage run testing/hashtest.py 57 | coverage run testing/selectiontest.py 58 | coverage run testing/metadatatest.py 59 | coverage run testing/rpathtest.py 60 | coverage run testing/rorpitertest.py 61 | coverage run testing/rdifftest.py 62 | coverage run testing/securitytest.py 63 | coverage run testing/killtest.py 64 | coverage run testing/backuptest.py 65 | coverage run testing/comparetest.py 66 | coverage run testing/regresstest.py 67 | coverage run testing/restoretest.py 68 | coverage run 
testing/cmdlinetest.py 69 | coverage run testing/rdiffbackupdeletetest.py 70 | coverage run testing/errorsrecovertest.py 71 | coverage run testing/rdb_arguments.py --verbose --buffer 72 | # can only work on OS/X TODO later 73 | # coverage run testing/resourceforktest.py 74 | 75 | # combine all coverage results and show the summary 76 | coverage combine 77 | coverage report 78 | 79 | [testenv:flake8] 80 | deps = 81 | flake8 82 | commands_pre= 83 | commands = 84 | flake8 setup.py src testing tools 85 | 86 | [flake8] 87 | ignore = 88 | E501 # line too long (86 > 79 characters) 89 | W503 # line break before binary operator 90 | filename = 91 | *.py, 92 | src/rdiff-backup* 93 | exclude = 94 | .git 95 | .tox 96 | .tox.root 97 | __pycache__ 98 | build 99 | max-complexity = 20 100 | 101 | [coverage:run] 102 | parallel = True 103 | 104 | [coverage:report] 105 | include = 106 | */rdiff_backup/* 107 | */rdiffbackup/* 108 | skip_empty = True 109 | fail_under = 80 110 | sort = Cover 111 | -------------------------------------------------------------------------------- /docs/api/v201.adoc: -------------------------------------------------------------------------------- 1 | = rdiff-backup API description v201 2 | :sectnums: 3 | :toc: 4 | 5 | == Format 6 | 7 | * the old CLI is deprecated and replaced by the new action-based CLI 8 | 9 | == Sources 10 | 11 | === Internal 12 | 13 | ==== rdiff_backup 14 | 15 | * `backup.DestinationStruct` **deprecated** 16 | * `backup.SourceStruct` **deprecated** 17 | * `backup.SourceStruct.set_source_select` **deprecated** 18 | * `compare.DataSide` 19 | * `compare.RepoSide` 20 | * `compare.Verify` 21 | * `connection.conn_number` 22 | * `connection.quit` 23 | * `connection.reval` 24 | * `eas_acls.get_acl_lists_from_rp` 25 | * `eas_acls.set_rp_acl` 26 | * `FilenameMapping.set_init_quote_vals` 27 | * `FilenameMapping.set_init_quote_vals_local` 28 | * `fs_abilities.backup_set_globals` 29 | * `fs_abilities.get_readonly_fsa` 30 | * 
`fs_abilities.restore_set_globals` 31 | * `fs_abilities.single_set_globals` 32 | * `Globals.get` 33 | * `Globals.postset_regexp_local` 34 | * `Globals.set` 35 | * `Globals.set_local` 36 | * `Hardlink.initialize_dictionaries` 37 | * `log.ErrorLog.close` 38 | * `log.ErrorLog.isopen` 39 | * `log.ErrorLog.open` 40 | * `log.ErrorLog.write` 41 | * `log.ErrorLog.write_if_open` 42 | * `log.Log.close_logfile_allconn` 43 | * `log.Log.close_logfile_local` 44 | * `log.Log.log_to_file` 45 | * `log.Log.open_logfile_allconn` 46 | * `log.Log.open_logfile_local` 47 | * `log.Log.setterm_verbosity` 48 | * `log.Log.setverbosity` 49 | * `Main.backup_close_statistics` **deprecated** 50 | * `Main.backup_remove_curmirror_local` **deprecated** 51 | * `Main.backup_touch_curmirror_local` **deprecated** 52 | * `manage.delete_earlier_than_local` 53 | * `regress.check_pids` 54 | * `regress.Regress` 55 | * `restore.ListAtTime` 56 | * `restore.ListChangedSince` 57 | * `restore.MirrorStruct` 58 | * `restore.MirrorStruct.set_mirror_select` 59 | * `restore.TargetStruct` 60 | * `restore.TargetStruct.set_target_select` 61 | * `robust.install_signal_handlers` 62 | * `rpath.copy_reg_file` 63 | * `rpath.delete_dir_no_files` 64 | * `rpath.gzip_open_local_read` 65 | * `rpath.make_file_dict` 66 | * `rpath.make_socket_local` 67 | * `rpath.open_local_read` 68 | * `rpath.RPath.fsync_local` 69 | * `rpath.setdata_local` 70 | * `SetConnections.add_redirected_conn` 71 | * `SetConnections.init_connection_remote` 72 | * `statistics.record_error` 73 | * `Time.setcurtime_local` 74 | * `Time.setprevtime_local` 75 | * `user_group.init_group_mapping` 76 | * `user_group.init_user_mapping` 77 | * `user_group.map_rpath` 78 | 79 | ==== rdiffbackup 80 | 81 | * `locations._dir_shadow.ShadowReadDir` **new** 82 | ** `.set_select` 83 | ** `.get_select` 84 | ** `.get_diffs` 85 | * `locations._repo_shadow.ShadowRepo` **new** 86 | ** `.set_rorp_cache` 87 | ** `.get_sigs` 88 | ** `.patch` 89 | ** `.patch_and_increment` 90 | ** 
`.touch_current_mirror` 91 | ** `.remove_current_mirror` 92 | ** `.close_statistics` 93 | 94 | === External 95 | 96 | * `gzip.GzipFile` 97 | * `open` 98 | * `os.chmod` 99 | * `os.chown` 100 | * `os.getuid` 101 | * `os.lchown` 102 | * `os.link` 103 | * `os.listdir` 104 | * `os.makedev` 105 | * `os.makedirs` 106 | * `os.mkdir` 107 | * `os.mkfifo` 108 | * `os.mknod` 109 | * `os.name` 110 | * `os.rename` 111 | * `os.rmdir` 112 | * `os.symlink` 113 | * `os.unlink` 114 | * `os.utime` 115 | * `shutil.rmtree` 116 | * `sys.stdout.write` 117 | * `win32security.ConvertSecurityDescriptorToStringSecurityDescriptor` 118 | * `win32security.ConvertStringSecurityDescriptorToSecurityDescriptor` 119 | * `win32security.GetNamedSecurityInfo` 120 | * `win32security.SetNamedSecurityInfo` 121 | * `xattr.get` 122 | * `xattr.list` 123 | * `xattr.remove` 124 | * `xattr.set` 125 | 126 | == Testing 127 | 128 | === Internal 129 | 130 | === External 131 | 132 | * `hasattr` 133 | * `int` 134 | * `ord` 135 | * `os.lstat` 136 | * `os.path.join` 137 | * `os.remove` 138 | * `pow` 139 | * `str` 140 | * `tempfile.mktemp` 141 | -------------------------------------------------------------------------------- /tox_win.ini: -------------------------------------------------------------------------------- 1 | # tox (https://tox.readthedocs.io/) is a tool for running tests 2 | # in multiple virtualenvs. This configuration file will run the 3 | # test suite on all supported python versions. To use it, "pip install tox" 4 | # and then run "tox" from this directory. 5 | 6 | # Configuration file for quick / short tests. 7 | # Use tox_slow.ini for longer running tests. 
8 | 9 | [tox] 10 | envlist = py, flake8 11 | 12 | [testenv] 13 | # make sure those variables are passed down; you should define 14 | # either explicitly the RDIFF_TEST_* variables or rely on the current 15 | # user being correctly identified (which might not happen in a container) 16 | passenv = RDIFF_TEST_* RDIFF_BACKUP_* LIBRSYNC_DIR 17 | deps = 18 | pywin32 19 | pyinstaller 20 | wheel 21 | whitelist_externals = cmd 22 | commands_pre = 23 | # a shell independent way of copying the DLL and BAT scripts 24 | python -c "import sys,shutil; shutil.copy(sys.argv[1],sys.argv[2])" "{env:LIBRSYNC_DIR}/bin/rsync.dll" "{envsitepackagesdir}/rdiff_backup" 25 | # we copy the batch script so that Windows finds rdiff-backup aka RBBin 26 | python -c "import sys,shutil; shutil.copy(sys.argv[1],sys.argv[2])" "tools/rdiff-backup.bat" "{envbindir}" 27 | 28 | python {envbindir}/rdiff-backup --version 29 | # must be the first command to setup the test environment 30 | python testing/commontest.py 31 | commands = 32 | # The commented tests do not run on Windows yet and will be fixed & uncommented one by one 33 | python testing/api_test.py 34 | python testing/ctest.py 35 | python testing/timetest.py 36 | # python testing/librsynctest.py # rdiff binary is missing 37 | python testing/statisticstest.py 38 | # python testing/user_grouptest.py # no module named pwd under Windows 39 | python testing/setconnectionstest.py # backslashes interpreted differently 40 | python testing/iterfiletest.py 41 | # python testing/longnametest.py # handling of long filenames too different 42 | python testing/robusttest.py 43 | python testing/connectiontest.py 44 | # python testing/incrementtest.py # issue with symlinks 45 | # python testing/hardlinktest.py # too many path errors 46 | # python testing/eas_aclstest.py # no module named pwd under Windows 47 | # python testing/FilenameMappingtest.py # issues with : in date/time-string 48 | # python testing/fs_abilitiestest.py # module 'os' has no attribute 
'getuid' 49 | python testing/hashtest.py 50 | # python testing/selectiontest.py # too many errors to count... 51 | # python testing/metadatatest.py # issues with : in date/time-string 52 | # python testing/rpathtest.py # many small issues 53 | python testing/rorpitertest.py 54 | python testing/rdifftest.py 55 | # python testing/securitytest.py # no os.getuid + backslash in path handling on --server 56 | # python testing/killtest.py # cannot create symbolic link 57 | # python testing/backuptest.py # No module named rdiff_backup in server.py 58 | # python testing/comparetest.py # No module named rdiff_backup in server.py 59 | # python testing/regresstest.py # issue with : in date/time-string 60 | # python testing/restoretest.py # too many issues to count 61 | # python testing/cmdlinetest.py # too many issues to count 62 | # python testing/rdiffbackupdeletetest.py # not written to run under Windows 63 | python testing/errorsrecovertest.py 64 | python testing/rdb_arguments.py --verbose --buffer 65 | # can only work on OS/X TODO later 66 | # python testing/resourceforktest.py 67 | 68 | [testenv:flake8] 69 | deps = 70 | flake8 71 | pywin32 72 | pyinstaller 73 | wheel 74 | commands = 75 | flake8 setup.py src testing tools 76 | 77 | [flake8] 78 | ignore = 79 | E501 # line too long (86 > 79 characters) 80 | W503 # line break before binary operator 81 | exclude = 82 | .git 83 | .tox 84 | .tox.root 85 | __pycache__ 86 | build 87 | max-complexity = 40 88 | -------------------------------------------------------------------------------- /testing/FilenameMappingtest.py: -------------------------------------------------------------------------------- 1 | import os 2 | import time 3 | import unittest 4 | import commontest as ct 5 | from rdiff_backup import FilenameMapping, rpath, Globals 6 | 7 | 8 | class FilenameMappingTest(unittest.TestCase): 9 | """Test the FilenameMapping class, for quoting filenames""" 10 | 11 | def setUp(self): 12 | """Just initialize quoting""" 13 | 
Globals.chars_to_quote = b'A-Z' 14 | FilenameMapping.set_init_quote_vals() 15 | 16 | def testBasicQuote(self): 17 | """Test basic quoting and unquoting""" 18 | filenames = [ 19 | b"hello", b"HeLLo", b"EUOeu/EUOeu", b":", b"::::EU", b"/:/:" 20 | ] 21 | for filename in filenames: 22 | quoted = FilenameMapping.quote(filename) 23 | self.assertEqual(FilenameMapping.unquote(quoted), filename) 24 | 25 | def testQuotedRPath(self): 26 | """Test the QuotedRPath class""" 27 | path = (b"/usr/local/mirror_metadata" 28 | b".1969-12-31;08421;05833;05820-07;05800.data.gz") 29 | qrp = FilenameMapping.get_quotedrpath( 30 | rpath.RPath(Globals.local_connection, path), 1) 31 | self.assertEqual(qrp.base, b"/usr/local") 32 | self.assertEqual(len(qrp.index), 1) 33 | self.assertEqual(qrp.index[0], 34 | b"mirror_metadata.1969-12-31T21:33:20-07:00.data.gz") 35 | 36 | def testLongFilenames(self): 37 | """See if long quoted filenames cause crash""" 38 | ct.MakeOutputDir() 39 | outrp = rpath.RPath(Globals.local_connection, ct.abs_output_dir) 40 | inrp = rpath.RPath(Globals.local_connection, 41 | os.path.join(ct.abs_test_dir, b"quotetest")) 42 | ct.re_init_rpath_dir(inrp) 43 | long_filename = b"A" * 200 # when quoted should cause overflow 44 | longrp = inrp.append(long_filename) 45 | longrp.touch() 46 | shortrp = inrp.append(b"B") 47 | shortrp.touch() 48 | 49 | ct.rdiff_backup(True, True, 50 | inrp.path, outrp.path, 51 | 100000, extra_options=b"--override-chars-to-quote A") 52 | 53 | longrp_out = outrp.append(long_filename) 54 | self.assertFalse(longrp_out.lstat()) 55 | shortrp_out = outrp.append('B') 56 | self.assertTrue(shortrp_out.lstat()) 57 | 58 | ct.rdiff_backup(True, True, 59 | os.path.join(ct.old_test_dir, b"empty"), outrp.path, 60 | 200000) 61 | shortrp_out.setdata() 62 | self.assertFalse(shortrp_out.lstat()) 63 | ct.rdiff_backup(True, True, inrp.path, outrp.path, 300000) 64 | shortrp_out.setdata() 65 | self.assertTrue(shortrp_out.lstat()) 66 | 67 | def testReQuote(self): 68 | inrp = 
rpath.RPath(Globals.local_connection, 69 | os.path.join(ct.abs_test_dir, b"requote")) 70 | ct.re_init_rpath_dir(inrp) 71 | inrp.append("ABC_XYZ.1").touch() 72 | outrp = rpath.RPath(Globals.local_connection, ct.abs_output_dir) 73 | ct.re_init_rpath_dir(outrp) 74 | self.assertEqual( 75 | ct.rdiff_backup_action(True, True, inrp.path, outrp.path, 76 | ("--chars-to-quote", "A-C"), 77 | b"backup", ()), 78 | 0) 79 | time.sleep(1) 80 | inrp.append("ABC_XYZ.2").touch() 81 | # enforce a requote of the whole repository 82 | self.assertEqual( 83 | ct.rdiff_backup_action(True, True, inrp.path, outrp.path, 84 | ("--chars-to-quote", "X-Z", "--force"), 85 | b"backup", ()), 86 | 0) 87 | # let's check that both files have been properly quoted or requoted 88 | self.assertTrue(outrp.append("ABC_;088;089;090.1").lstat()) 89 | self.assertTrue(outrp.append("ABC_;088;089;090.2").lstat()) 90 | 91 | 92 | if __name__ == "__main__": 93 | unittest.main() 94 | -------------------------------------------------------------------------------- /tools/windows/playbook-build-rdiff-backup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Build rdiff-backup on a prepared Windows 3 | hosts: windows_builders 4 | gather_facts: false 5 | tasks: 6 | - name: make sure working directory {{ working_dir }} exists 7 | win_file: 8 | state: directory 9 | path: "{{ working_dir }}" 10 | - name: clone the rdiff-backup sources from Git 11 | win_command: > 12 | git.exe clone 13 | {% if rdiff_version_tag is defined %}--branch {{ rdiff_version_tag }}{% endif %} 14 | {{ rdiffbackup_git_repo }} 15 | "{{ rdiffbackup_dir }}" 16 | args: 17 | creates: "{{ rdiffbackup_dir }}" 18 | - name: build rdiff-backup and package it as wheel 19 | win_command: > 20 | python.exe setup.py bdist_wheel --librsync-dir="{{ librsync_install_dir }}" 21 | --lflags="/NODEFAULTLIB:libcmt.lib msvcrt.lib" 22 | args: 23 | chdir: "{{ rdiffbackup_dir }}" 24 | register: bdist_wheel 25 | - name: find 
out the name of the wheel package just created 26 | set_fact: 27 | wheel_pkg: "{{ bdist_wheel.stdout | regex_search('rdiff_backup-[^ ]*.whl') }}" 28 | - name: compile rdiff-backup into an executable using pyinstaller 29 | win_command: > 30 | pyinstaller --onefile 31 | --paths=build/lib.{{ python_win_bits }}-{{ python_version }} 32 | --paths={{ librsync_install_dir }}/lib 33 | --paths={{ librsync_install_dir }}/bin 34 | --additional-hooks-dir=tools 35 | --console build/scripts-{{ python_version }}/rdiff-backup 36 | --add-data src/rdiff_backup.egg-info/PKG-INFO;rdiff_backup.egg-info 37 | environment: 38 | LIBRSYNC_DIR: "{{ librsync_install_dir }}" 39 | args: 40 | chdir: "{{ rdiffbackup_dir }}" 41 | - name: generate a versioned and specific name for the compiled executable 42 | set_fact: 43 | bin_exe: "{{ wheel_pkg | regex_replace('^rdiff_backup', 'rdiff-backup') | regex_replace('.whl$', '.exe') }}" 44 | - name: rename the compiled executable 45 | win_shell: > 46 | Move-Item -Force 47 | -Path {{ rdiffbackup_dir }}/dist/rdiff-backup.exe 48 | -Destination {{ rdiffbackup_dir }}/dist/{{ bin_exe }} 49 | - name: fetch generated binary files into the local dist directory 50 | fetch: 51 | src: "{{ rdiffbackup_dir }}/dist/{{ item }}" 52 | dest: "{{ rdiffbackup_local_dist_dir }}/" 53 | flat: true # copy without the directory 54 | loop: 55 | - "{{ bin_exe }}" 56 | - "{{ wheel_pkg }}" 57 | tags: 58 | - fetch 59 | - never 60 | 61 | # the following lines are not absolutely necessary but help debugging rdiff-backup 62 | 63 | - name: copy rsync.dll to build directory to call rdiff-backup from repo 64 | win_copy: # newer versions of rsync.dll are installed in bin not lib 65 | src: "{{ librsync_install_dir }}/bin/rsync.dll" 66 | remote_src: true # file is already on the Windows machine 67 | dest: "{{ rdiffbackup_dir }}/build/lib.{{ python_win_bits }}-{{ python_version }}/rdiff_backup/" 68 | tags: debug_help 69 | - name: prepare variable backquote to avoid quoting issues 70 | set_fact: 
71 | bq: \ 72 | tags: debug_help 73 | - name: create a simple setup script to call rdiff-backup from the repo 74 | win_copy: 75 | content: | 76 | REM call this script to get the right environment variable and examples 77 | SET PYTHONPATH={{ rdiffbackup_dir }}/build/lib.{{ python_win_bits }}-{{ python_version }} 78 | SET PATH={{ rdiffbackup_dir | replace('/', bq) }}\build\scripts-{{ python_version }};%PATH% 79 | dest: "{{ rdiffbackup_dir }}/build/setup-rdiff-backup.bat" 80 | tags: debug_help 81 | - name: create a wrapper script to call rdiff-backup from the repo 82 | win_copy: 83 | src: ../rdiff-backup.bat 84 | dest: "{{ rdiffbackup_dir }}/build/scripts-{{ python_version }}/rdiff-backup.bat" 85 | tags: debug_help 86 | -------------------------------------------------------------------------------- /src/rdiffbackup/actions/verify.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 the rdiff-backup project 2 | # 3 | # This file is part of rdiff-backup. 4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify 6 | # under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 | # General Public License for more details. 14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | 20 | """ 21 | A built-in rdiff-backup action plug-in to verify a repository. 22 | 23 | This plug-in verifies that files in a repository at a given time 24 | have the correct hash. 
25 | """ 26 | 27 | from rdiffbackup import actions 28 | from rdiffbackup.locations import repository 29 | 30 | 31 | class VerifyAction(actions.BaseAction): 32 | """ 33 | Verify that files in a backup repository correspond to their stored hash, 34 | or that servers are properly reachable. 35 | """ 36 | name = "verify" 37 | security = "validate" 38 | 39 | @classmethod 40 | def add_action_subparser(cls, sub_handler): 41 | subparser = super().add_action_subparser(sub_handler) 42 | subparser.add_argument( 43 | "--at", metavar="TIME", default="now", 44 | help="as of which time to check the files' hashes (default is now/latest)") 45 | subparser.add_argument( 46 | "locations", metavar="[[USER@]SERVER::]PATH", nargs=1, 47 | help="location of repository where to check files' hashes") 48 | return subparser 49 | 50 | def connect(self): 51 | conn_value = super().connect() 52 | if conn_value: 53 | self.source = repository.Repo( 54 | self.connected_locations[0], self.values.force, 55 | must_be_writable=False, must_exist=True, can_be_sub_path=True 56 | ) 57 | return conn_value 58 | 59 | def check(self): 60 | # we try to identify as many potential errors as possible before we 61 | # return, so we gather all potential issues and return only the final 62 | # result 63 | return_code = super().check() 64 | 65 | # we verify that source repository is correct 66 | return_code |= self.source.check() 67 | 68 | return return_code 69 | 70 | def setup(self): 71 | # in setup we return as soon as we detect an issue to avoid changing 72 | # too much 73 | return_code = super().setup() 74 | if return_code != 0: 75 | return return_code 76 | 77 | return_code = self.source.setup() 78 | if return_code != 0: 79 | return return_code 80 | 81 | # set the filesystem properties of the repository 82 | self.source.base_dir.conn.fs_abilities.single_set_globals( 83 | self.source.base_dir, 1) # read_only=True 84 | self.source.init_quoting(self.values.chars_to_quote) 85 | 86 | self.mirror_rpath = 
self.source.base_dir.new_index( 87 | self.source.restore_index) 88 | self.inc_rpath = self.source.data_dir.append_path( 89 | b'increments', self.source.restore_index) 90 | 91 | self.action_time = self._get_parsed_time(self.values.at, 92 | ref_rp=self.inc_rpath) 93 | if self.action_time is None: 94 | return 1 95 | 96 | return 0 # all is good 97 | 98 | def run(self): 99 | return self.source.base_dir.conn.compare.Verify( 100 | self.mirror_rpath, self.inc_rpath, self.action_time) 101 | 102 | 103 | def get_action_class(): 104 | return VerifyAction 105 | -------------------------------------------------------------------------------- /tools/crossversion/playbook-smoke-test.yml: -------------------------------------------------------------------------------- 1 | - name: do some smoke tests between the current version of rdiff-backup and a deemed older version 2 | hosts: all 3 | become: false 4 | gather_facts: true 5 | 6 | vars: 7 | test_user: vagrant 8 | test_server: oldrdiffbackup 9 | ssh_config: ssh.local.cfg 10 | remote_schema: 'ssh -F {{ ssh_config }} -C %s rdiff-backup --server' 11 | remote_base_dir: '/home/{{ test_user }}/smoke.local.d' 12 | local_base_dir: '../smoke.local.d' 13 | 14 | pre_tasks: 15 | 16 | - name: generate the necessary SSH configuration 17 | shell: vagrant ssh-config oldrdiffbackup > "{{ ssh_config }}" 18 | args: 19 | creates: "{{ ssh_config }}" 20 | delegate_to: localhost 21 | - name: remove remote base directory {{ remote_base_dir }} 22 | file: 23 | path: "{{ remote_base_dir }}" 24 | state: absent 25 | - name: remove local base directory {{ local_base_dir }} 26 | file: 27 | path: "{{ local_base_dir }}" 28 | state: absent 29 | delegate_to: localhost 30 | - name: create remote base directory {{ remote_base_dir }} 31 | file: 32 | path: "{{ remote_base_dir }}" 33 | state: directory 34 | - name: create local base directory {{ local_base_dir }} 35 | file: 36 | path: "{{ local_base_dir }}" 37 | state: directory 38 | delegate_to: localhost 39 | - name: 
delete dummy file 40 | file: 41 | path: file.local.txt 42 | state: absent 43 | delegate_to: localhost 44 | 45 | tasks: 46 | 47 | - name: call remote rdiff-backup --version 48 | command: rdiff-backup --version 49 | 50 | - name: call remote rdiff-backup --verbosity 9 (with expected failure) 51 | command: rdiff-backup --verbosity 9 52 | register: rb_res 53 | failed_when: rb_res.rc != 1 54 | 55 | - name: call local rdiff-backup --version 56 | command: rdiff-backup --version 57 | delegate_to: localhost 58 | 59 | - name: call local rdiff-backup info 60 | command: rdiff-backup info 61 | delegate_to: localhost 62 | 63 | - name: check that the remote rdiff-backup works 64 | command: > 65 | rdiff-backup --remote-schema '{{ remote_schema }}' --test-server 66 | {{ test_user }}\@{{ test_server }}::{{ remote_base_dir }}/simplebackup 67 | delegate_to: localhost 68 | 69 | - name: make a simple backup from the local directory to remote repo 70 | command: > 71 | rdiff-backup --remote-schema '{{ remote_schema }}' 72 | . {{ test_user }}\@{{ test_server }}::{{ remote_base_dir }}/simplebackup 73 | delegate_to: localhost 74 | 75 | - name: compare the current directory with the remote repo 76 | command: > 77 | rdiff-backup --remote-schema '{{ remote_schema }}' --compare-hash 78 | . {{ test_user }}\@{{ test_server }}::{{ remote_base_dir }}/simplebackup 79 | delegate_to: localhost 80 | 81 | - name: create a dummy file to modify the local directory 82 | copy: 83 | dest: file.local.txt 84 | content: "{{ now() }}" 85 | delegate_to: localhost 86 | 87 | - name: re-make a simple backup from the local directory to remote repo 88 | command: > 89 | rdiff-backup --remote-schema '{{ remote_schema }}' 90 | . 
{{ test_user }}\@{{ test_server }}::{{ remote_base_dir }}/simplebackup 91 | delegate_to: localhost 92 | 93 | - name: list the increments with size in the remote repo 94 | command: > 95 | rdiff-backup --remote-schema '{{ remote_schema }}' --list-increment-sizes 96 | {{ test_user }}\@{{ test_server }}::{{ remote_base_dir }}/simplebackup 97 | delegate_to: localhost 98 | 99 | - name: verify using the remote rdiff-backup the hashes in the repo 100 | command: > 101 | rdiff-backup --remote-schema '{{ remote_schema }}' --verify 102 | {{ remote_base_dir }}/simplebackup 103 | 104 | - name: restore from remote repo to local directory 105 | command: > 106 | rdiff-backup --remote-schema '{{ remote_schema }}' --restore-as-of now 107 | {{ test_user }}\@{{ test_server }}::{{ remote_base_dir }}/simplebackup 108 | {{ local_base_dir }}/simplerestore 109 | delegate_to: localhost 110 | 111 | -------------------------------------------------------------------------------- /src/rdiff_backup/Rdiff.py: -------------------------------------------------------------------------------- 1 | # Copyright 2002 2005 Ben Escoto 2 | # 3 | # This file is part of rdiff-backup. 4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify 6 | # under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 | # General Public License for more details. 
14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | """Invoke rdiff utility to make signatures, deltas, or patch""" 20 | 21 | from . import Globals, log, rpath, hash, librsync 22 | 23 | 24 | def get_signature(rp, blocksize=None): 25 | """Take signature of rpin file and return in file object""" 26 | if not blocksize: 27 | blocksize = _find_blocksize(rp.getsize()) 28 | log.Log("Getting signature of file {fi} with blocksize {bs}".format( 29 | fi=rp, bs=blocksize), log.DEBUG) 30 | return librsync.SigFile(rp.open("rb"), blocksize) 31 | 32 | 33 | def get_delta_sigrp_hash(rp_signature, rp_new): 34 | """Like above but also calculate hash of new as close() value""" 35 | log.Log("Getting delta (with hash) of file {fi} with signature {si}".format( 36 | fi=rp_new, si=rp_signature), log.DEBUG) 37 | return librsync.DeltaFile( 38 | rp_signature.open("rb"), hash.FileWrapper(rp_new.open("rb"))) 39 | 40 | 41 | def write_delta(basis, new, delta, compress=None): 42 | """Write rdiff delta which brings basis to new""" 43 | log.Log("Writing delta {de} from basis {ba} to new {ne}".format( 44 | ba=basis, ne=new, de=delta), log.DEBUG) 45 | deltafile = librsync.DeltaFile(get_signature(basis), new.open("rb")) 46 | delta.write_from_fileobj(deltafile, compress) 47 | 48 | 49 | def write_patched_fp(basis_fp, delta_fp, out_fp): 50 | """Write patched file to out_fp given input fps. Closes input files""" 51 | rpath.copyfileobj(librsync.PatchedFile(basis_fp, delta_fp), out_fp) 52 | basis_fp.close() 53 | delta_fp.close() 54 | 55 | 56 | def patch_local(rp_basis, rp_delta, outrp=None, delta_compressed=None): 57 | """Patch routine that must be run locally, writes to outrp 58 | 59 | This should be run local to rp_basis because it needs to be a real 60 | file (librsync may need to seek around in it). 
If outrp is None, 61 | patch rp_basis instead. 62 | 63 | The return value is the close value of the delta, so it can be 64 | used to produce hashes. 65 | 66 | """ 67 | assert rp_basis.conn is Globals.local_connection, ( 68 | "This function must run locally and not over '{conn}'.".format( 69 | conn=rp_basis.conn)) 70 | if delta_compressed: 71 | deltafile = rp_delta.open("rb", 1) 72 | else: 73 | deltafile = rp_delta.open("rb") 74 | patchfile = librsync.PatchedFile(rp_basis.open("rb"), deltafile) 75 | if outrp: 76 | return outrp.write_from_fileobj(patchfile) 77 | else: 78 | return _write_via_tempfile(patchfile, rp_basis) 79 | 80 | 81 | def _find_blocksize(file_len): 82 | """Return a reasonable block size to use on files of length file_len 83 | 84 | If the block size is too big, deltas will be bigger than is 85 | necessary. If the block size is too small, making deltas and 86 | patching can take a really long time. 87 | 88 | """ 89 | if file_len < 4096: 90 | return 64 # set minimum of 64 bytes 91 | else: # Use square root, rounding to nearest 16 92 | return int(pow(file_len, 0.5) / 16) * 16 93 | 94 | 95 | def _write_via_tempfile(fp, rp): 96 | """Write fileobj fp to rp by writing to tempfile and renaming""" 97 | tf = rp.get_temp_rpath(sibling=True) 98 | retval = tf.write_from_fileobj(fp) 99 | rpath.rename(tf, rp) 100 | return retval 101 | -------------------------------------------------------------------------------- /src/rdiffbackup/actions_mgr.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 the rdiff-backup project 2 | # 3 | # This file is part of rdiff-backup. 4 | # 5 | # rdiff-backup is free software; you can redistribute it and/or modify 6 | # under the terms of the GNU General Public License as published by the 7 | # Free Software Foundation; either version 2 of the License, or (at your 8 | # option) any later version. 
9 | # 10 | # rdiff-backup is distributed in the hope that it will be useful, but 11 | # WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU 13 | # General Public License for more details. 14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with rdiff-backup; if not, write to the Free Software 17 | # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 18 | # 02110-1301, USA 19 | 20 | """ 21 | rdiff-backup Actions Manager 22 | 23 | A module to discover and return built-in and 3rd party plugins for 24 | actions (used on the command line), like backup or restore. 25 | """ 26 | 27 | import importlib 28 | import pkgutil 29 | 30 | import rdiffbackup.actions 31 | 32 | 33 | def get_discovered_actions(): 34 | """ 35 | Discover all rdiff-backup action plug-ins 36 | 37 | They may come either from the 'rdiffbackup.actions' namespace, or 38 | from top-level modules with a name starting with 'rdb_action_'. 39 | Returns a dictionary with the name of each Action-class as key, and 40 | the class returned by get_action_class() as value.
41 | """ 42 | # we discover first potential 3rd party plugins, based on name 43 | discovered_action_plugins = { 44 | name: importlib.import_module(name) 45 | for finder, name, ispkg 46 | in pkgutil.iter_modules() 47 | if name.startswith("rdb_action_") 48 | } 49 | # and we complete/overwrite with modules delivered in the namespace 50 | discovered_action_plugins.update({ 51 | name: importlib.import_module(name) 52 | for name 53 | in _iter_namespace(rdiffbackup.actions) 54 | }) 55 | # then we create the dictionary of {action_name: ActionClass} 56 | disc_actions = { 57 | action.get_action_class().get_name(): action.get_action_class() 58 | for action 59 | in discovered_action_plugins.values() 60 | } 61 | return disc_actions 62 | 63 | 64 | def get_generic_parsers(): 65 | """ 66 | Return a list of generic parsers 67 | 68 | This list is used to parse generic options common to all actions. 69 | """ 70 | return rdiffbackup.actions.GENERIC_PARSERS 71 | 72 | 73 | def get_parent_parsers_compat200(): 74 | """ 75 | Return a list of all parent sub-options used by all actions 76 | 77 | This list is solely used to simulate the old command line interface. 78 | """ 79 | return rdiffbackup.actions.PARENT_PARSERS 80 | 81 | 82 | def _iter_namespace(nsp): 83 | """ 84 | Return an iterator of names of modules found in a specific namespace. 85 | 86 | The names are made absolute, with the namespace as prefix, to simplify 87 | import. 88 | """ 89 | # Specifying the second argument (prefix) to iter_modules makes the 90 | # returned name an absolute name instead of a relative one. This allows 91 | # import_module to work without having to do additional modification to 92 | # the name. 93 | prefix = nsp.__name__ + "." 
94 | for pkg in pkgutil.iter_modules(nsp.__path__, prefix): 95 | yield pkg[1] # pkg is (finder, name, ispkg) 96 | # special handling when the package is bundled with PyInstaller 97 | # See https://github.com/pyinstaller/pyinstaller/issues/1905 98 | toc = set() # table of content 99 | for importer in pkgutil.iter_importers(nsp.__name__.partition(".")[0]): 100 | if hasattr(importer, 'toc'): 101 | toc |= importer.toc 102 | for name in toc: 103 | if name.startswith(prefix): 104 | yield name 105 | 106 | 107 | if __name__ == "__main__": 108 | actions = get_discovered_actions() 109 | for name, action_class in actions.items(): 110 | print(name + ": " + action_class.get_version()) 111 | -------------------------------------------------------------------------------- /testing/fs_abilitiestest.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | import os 3 | import time 4 | from commontest import abs_test_dir, Myrm 5 | from rdiff_backup import Globals, rpath, fs_abilities 6 | 7 | 8 | class FSAbilitiesTest(unittest.TestCase): 9 | """Test testing of file system abilities 10 | 11 | Some of these tests assume that the actual file system tested has 12 | the given abilities. If the file system this is run on differs 13 | from the original test system, this test may/should fail. Change 14 | the expected values below. 
15 | 16 | """ 17 | # Describes standard linux file system without acls/eas 18 | dir_to_test = abs_test_dir 19 | eas = acls = 1 20 | chars_to_quote = "" 21 | extended_filenames = 1 22 | case_sensitive = 1 23 | ownership = (os.getuid() == 0) 24 | hardlinks = fsync_dirs = 1 25 | dir_inc_perms = 1 26 | resource_forks = 0 27 | carbonfile = 0 28 | high_perms = 1 29 | 30 | # Describes MS-Windows style file system 31 | # dir_to_test = "/mnt/fat" 32 | # eas = acls = 0 33 | # extended_filenames = 0 34 | # chars_to_quote = "^a-z0-9_ -" 35 | # ownership = hardlinks = 0 36 | # fsync_dirs = 1 37 | # dir_inc_perms = 0 38 | # resource_forks = 0 39 | # carbonfile = 0 40 | 41 | # A case insensitive directory (FIXME must currently be created by root) 42 | # mkdir build/testfiles/fs_insensitive 43 | # dd if=/dev/zero of=build/testfiles/fs_fatfile.dd bs=512 count=1024 44 | # mkfs.fat build/testfiles/fs_fatfile.dd 45 | # sudo mount -o loop,uid=$(id -u) build/testfiles/fs_fatfile.dd build/testfiles/fs_insensitive 46 | # touch build/testfiles/fs_fatfile.dd build/testfiles/fs_insensitive/some_File 47 | 48 | case_insensitive_path = os.path.join(abs_test_dir, b'fs_insensitive') 49 | 50 | def testReadOnly(self): 51 | """Test basic querying read only""" 52 | base_dir = rpath.RPath(Globals.local_connection, self.dir_to_test) 53 | fsa = fs_abilities.FSAbilities('read-only', base_dir, read_only=True) 54 | print(fsa) 55 | self.assertEqual(fsa.read_only, 1) 56 | self.assertEqual(fsa.eas, self.eas) 57 | self.assertEqual(fsa.acls, self.acls) 58 | self.assertEqual(fsa.resource_forks, self.resource_forks) 59 | self.assertEqual(fsa.carbonfile, self.carbonfile) 60 | self.assertEqual(fsa.case_sensitive, self.case_sensitive) 61 | 62 | def testReadWrite(self): 63 | """Test basic querying read/write""" 64 | base_dir = rpath.RPath(Globals.local_connection, self.dir_to_test) 65 | new_dir = base_dir.append("fs_abilitiestest") 66 | if new_dir.lstat(): 67 | Myrm(new_dir.path) 68 | new_dir.setdata() 69 | 
new_dir.mkdir() 70 | t = time.time() 71 | fsa = fs_abilities.FSAbilities('read/write', new_dir) 72 | print("Time elapsed = ", time.time() - t) 73 | print(fsa) 74 | self.assertEqual(fsa.read_only, 0) 75 | self.assertEqual(fsa.eas, self.eas) 76 | self.assertEqual(fsa.acls, self.acls) 77 | self.assertEqual(fsa.ownership, self.ownership) 78 | self.assertEqual(fsa.hardlinks, self.hardlinks) 79 | self.assertEqual(fsa.fsync_dirs, self.fsync_dirs) 80 | self.assertEqual(fsa.dir_inc_perms, self.dir_inc_perms) 81 | self.assertEqual(fsa.resource_forks, self.resource_forks) 82 | self.assertEqual(fsa.carbonfile, self.carbonfile) 83 | self.assertEqual(fsa.high_perms, self.high_perms) 84 | self.assertEqual(fsa.extended_filenames, self.extended_filenames) 85 | 86 | new_dir.delete() 87 | 88 | @unittest.skipUnless(os.path.isdir(case_insensitive_path), 89 | "Case insensitive directory %s does not exist" % 90 | case_insensitive_path) 91 | def test_case_sensitive(self): 92 | """Test a read-only case-INsensitive directory""" 93 | rp = rpath.RPath(Globals.local_connection, self.case_insensitive_path) 94 | fsa = fs_abilities.FSAbilities('read-only', rp, read_only=True) 95 | fsa.set_case_sensitive_readonly(rp) 96 | self.assertEqual(fsa.case_sensitive, 0) 97 | 98 | 99 | if __name__ == "__main__": 100 | unittest.main() 101 | -------------------------------------------------------------------------------- /tools/bash-completion/rdiff-backup: -------------------------------------------------------------------------------- 1 | # /etc/bash_completion.d/rdiff-backup - bash-completion for rdiff-backup 2 | # 2008 Andreas Olsson 3 | # 4 | # Developed for 1.2.x but can be "ported" to older versions by modifying the 5 | # lists of available options. 6 | # 7 | # Besides supplying options it will also try to determine 8 | # when it is suitable to complete what. 9 | # 10 | # Feel free to send comments or suggestions.
11 | 12 | # Set extglob, saving whether it was already set so we know whether to unset it 13 | shopt -q extglob; _rdiff_backup_extglob=$? 14 | if ((_rdiff_backup_extglob)); then shopt -s extglob; fi 15 | 16 | _rdiff_backup () 17 | { 18 | local cur prev wfilearg wpatharg wnumarg wotherarg longopts shortopts options 19 | COMPREPLY=() 20 | cur="${COMP_WORDS[COMP_CWORD]}" 21 | prev="${COMP_WORDS[COMP_CWORD-1]}" 22 | 23 | # These options will be completed by the path to a filename. 24 | wfilearg="--exclude-filelist|--exclude-globbing-filelist|--exclude-if-present| 25 | |--group-mapping-file|--include-filelist|--include-globbing-filelist| 26 | |--user-mapping-file" 27 | 28 | # These options will be completed by the path to a directory. 29 | wpatharg="--remote-tempdir|--restrict|--restrict-read-only| 30 | |--restrict-update-only|--tempdir" 31 | 32 | # These options will be completed by a number, from 0 to 9. 33 | wnumarg="--terminal-verbosity|--verbosity|-v|--api-version" 34 | 35 | # These options require a non-completable argument. 36 | # They won't be completed at all.
37 | wotherarg="--compare-at-time|--compare-full-at-time|--compare-hash-at-time| 38 | |--current-time|--exclude|--exclude-regexp|--include|--include-regexp| 39 | |--list-at-time|--list-changed-since|--max-file-size|--min-file-size| 40 | |--no-compression-regexp|-r|--restore-as-of|--remote-schema| 41 | |--remove-older-than|--verify-at-time" 42 | 43 | # Available long options 44 | longopts="--allow-duplicate-timestamps --api-version --backup-mode \ 45 | --calculate-average --carbonfile --check-destination-dir \ 46 | --compare --compare-at-time --compare-full --compare-full-at-time \ 47 | --compare-hash --compare-hash-at-time --create-full-path --current-time \ 48 | --exclude --exclude-device-files --exclude-fifos --exclude-filelist \ 49 | --exclude-filelist-stdin --exclude-globbing-filelist --exclude-globbing-filelist-stdin \ 50 | --exclude-other-filesystems --exclude-regexp --exclude-special-files --exclude-sockets \ 51 | --exclude-symbolic-links --exclude-if-present --force --group-mapping-file --include \ 52 | --include-filelist --include-filelist-stdin --include-globbing-filelist \ 53 | --include-globbing-filelist-stdin --include-regexp --include-special-files \ 54 | --include-symbolic-links --list-at-time --list-changed-since --list-increments \ 55 | --list-increment-sizes --max-file-size --min-file-size --never-drop-acls --no-acls \ 56 | --no-carbonfile --no-compare-inode --no-compression --no-compression-regexp --no-eas \ 57 | --no-file-statistics --no-fsync --no-hard-links --null-separator --parsable-output \ 58 | --override-chars-to-quote --preserve-numerical-ids --print-statistics --restore-as-of \ 59 | --remote-schema --remote-tempdir --remove-older-than --restrict \ 60 | --restrict-read-only --restrict-update-only --ssh-no-compression --tempdir \ 61 | --terminal-verbosity --test-server --use-compatible-timestamps \ 62 | --user-mapping-file --verbosity --verify --verify-at-time \ 63 | --version" 64 | 65 | # Available short options 66 | shortopts="-b -l -r 
-v -V"

    options=${longopts}" "${shortopts}

    case "$prev" in
        @($wfilearg))
            _filedir
            return 0
            ;;

        @($wpatharg))
            _filedir -d
            return 0
            ;;

        @($wotherarg))
            return 0
            ;;

        @($wnumarg))
            COMPREPLY=( $( compgen -W '0 1 2 3 4 5 6 7 8 9' -- $cur ) )
            return 0
            ;;
    esac

    if [[ ${cur} == -* ]]
    then
        COMPREPLY=( $(compgen -W "${options}" -- ${cur}) )
        return 0
    else
        _filedir
        return 0
    fi
}

complete -F _rdiff_backup -o filenames rdiff-backup

# unset extglob if it wasn't originally set
if ((_rdiff_backup_extglob)); then shopt -u extglob; fi
unset _rdiff_backup_extglob
--------------------------------------------------------------------------------
/docs/arch/locations.adoc:
--------------------------------------------------------------------------------
= Locations, Directories and Repositories

A location is any source or target path given on the command line, including the optional user/hostname.
It can either be a simple directory from which to back up, or to which to restore, or it can be a backup repository.

== Locations

A location has a `base_dir` represented by an _rpath.RPath_ object.

NOTE: one idea could be to make Location a child class of rpath.RPath, so that it represents its own `base_dir`.
An alternative idea could be to use `__getattr__` to delegate attribute access to `base_dir`...
The author of these lines is still weighing the pros and cons of these options.

The ultimate idea would be to have all client/server communication go through the location because, in the end, a location is what is present remotely, and all actions act solely against those locations, potentially transferring data from one location to another.
The current code is far from this ideal, with remote functions scattered all over the place.
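The `__getattr__` delegation mentioned in the note above can be sketched as follows. This is a hypothetical illustration, not the actual rdiff-backup code; `FakeRPath` is a made-up stand-in for `rpath.RPath`:

```python
class FakeRPath:
    """Stand-in for rpath.RPath, just for illustration."""

    def __init__(self, path):
        self.path = path

    def isdir(self):
        return True


class Location:
    """Wraps a base_dir and transparently forwards attribute access to it."""

    def __init__(self, base_dir):
        self.base_dir = base_dir

    def __getattr__(self, name):
        # __getattr__ is called only when normal attribute lookup fails,
        # so Location's own attributes (like base_dir) are never shadowed.
        return getattr(self.base_dir, name)


loc = Location(FakeRPath("/backup/source"))
print(loc.path)     # delegated attribute, resolved on base_dir
print(loc.isdir())  # delegated method call
```

Because `__getattr__` is only consulted after normal lookup fails, such a Location could still override individual methods while forwarding everything else, which is the main argument for this variant over subclassing.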

The Location class knows two methods, `check` and `setup`, which, like the corresponding action methods, return 0 if everything is well, else 1.

[NOTE]
====
The `set_select` method is also common to directories and repositories but is still implemented separately for each class, and is probably not generic enough, as each relies on a different implementation:

* ReadDir.set_select: backup.SourceStruct.set_source_select
* WriteDir.set_select: restore.TargetStruct.set_target_select
* Repository.set_select: restore.MirrorStruct.set_mirror_select
====

== Shadows

Each Location class has a "shadow" class, which it uses to communicate remotely.
These classes are called shadows because they only do what their "real" class does and have no existence of their own.

The shadow classes hence have only class methods and can't be called directly, only through their real class, which must offer corresponding methods to interact with the remote side.
This is why the shadow classes are prefixed with an underscore: they are considered private to the location namespace.

The aim is to bundle the communication with the remote side (and hence the API definition) in those shadow classes.

Internally, each location class defines a _private_ `_shadow` variable which points to the corresponding shadow class (either locally or remotely, through the connection of the base directory).
It is set in the `setup()` method (so it can't be used earlier).
This variable can be used by the interfacing methods to call the class methods of the shadow class.

CAUTION: the fact that the shadow classes can't be instantiated makes them a kind of singleton, so that only one of each type can be handled locally.
This isn't an issue currently, but it needs to be considered should an extension require this feature.
That said, it isn't an issue remotely, because each location has its own instance running in a different rdiff-backup process.

== Directories

A directory is either:

* readable, and must exist prior to the action (currently only backup),
* or writable, and shouldn't exist prior to the action (currently only restore), or should at least be empty (the force option overrides this prerequisite).

This simple dichotomy allows us to have one _ReadDir_ and one _WriteDir_ class.

== Repositories

Most actions act solely on a pre-existing repository; only the backup action creates a new repository.
That said, more actions (currently delete and regress) also modify an existing repository.
And some actions (restore, verify, compare and list) do not point at the base directory but at some sub-path within it, either a sub-directory or even a dated increment.

These multiple dimensions mean that the differences couldn't be represented by a few classes, but had to be implemented by one single _Repository_ class, configured by different flags:

* must_be_writable
* must_exist
* can_be_sub_path

The _Repository_ class also knows the following methods:

* __get_mirror_time()__
* __init_quoting(chars_to_quote)__
* __needs_regress()__
* __regress()__

NOTE: this list will become longer as more functions are centralized in this class. It is just too early to finalize the interface.
--------------------------------------------------------------------------------
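The flag-configured _Repository_ described above can be sketched roughly as follows. This is a minimal, hypothetical illustration of the flag mechanism only, not the actual rdiff-backup class (the real one also handles quoting, regress, shadows, etc.); the `check()` logic shown here is an assumed simplification:

```python
import os


class Repository:
    """One class for all repository roles, configured by flags."""

    def __init__(self, base_dir, must_be_writable=False,
                 must_exist=False, can_be_sub_path=False):
        self.base_dir = base_dir
        self.must_be_writable = must_be_writable
        self.must_exist = must_exist
        self.can_be_sub_path = can_be_sub_path

    def check(self):
        """Return 0 if the location satisfies its flags, else 1."""
        if self.must_exist and not os.path.isdir(self.base_dir):
            return 1
        if self.must_be_writable and not os.access(self.base_dir, os.W_OK):
            return 1
        return 0


# A backup target must be writable; a restore source must exist and may
# point below the repository's base directory (a dated increment, say).
backup_repo = Repository("/tmp", must_be_writable=True)
restore_repo = Repository("/tmp", must_exist=True, can_be_sub_path=True)
print(backup_repo.check())
```

The design choice here is the one the document describes: rather than multiplying subclasses for every combination of readable/writable/sub-path, a single class carries the combination as data and validates it in `check()`.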