├── .README.html
├── .ansible-lint
├── .codespell_ignores
├── .codespellrc
├── .commitlintrc.js
├── .fmf
│   └── version
├── .github
│   ├── dependabot.yml
│   ├── pull_request_template.md
│   └── workflows
│       ├── ansible-lint.yml
│       ├── ansible-managed-var-comment.yml
│       ├── ansible-test.yml
│       ├── build_docs.yml
│       ├── changelog_to_tag.yml
│       ├── codespell.yml
│       ├── markdownlint.yml
│       ├── pr-title-lint.yml
│       ├── qemu-kvm-integration-tests.yml
│       ├── test_converting_readme.yml
│       ├── tft.yml
│       ├── tft_citest_bad.yml
│       ├── weekly_ci.yml
│       └── woke.yml
├── .gitignore
├── .markdownlint.yaml
├── .ostree
│   ├── README.md
│   ├── get_ostree_data.sh
│   ├── packages-runtime-CentOS-10.txt
│   ├── packages-runtime-RedHat-10.txt
│   └── packages-runtime.txt
├── .pandoc_template.html5
├── .yamllint.yml
├── CHANGELOG.md
├── COPYING
├── README-ansible.md
├── README-ostree.md
├── README.md
├── ansible_pytest_extra_requirements.txt
├── contributing.md
├── custom_requirements.txt
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── meta
│   ├── collection-requirements.yml
│   └── main.yml
├── molecule
│   └── default
│       ├── Dockerfile.j2
│       └── molecule.yml
├── molecule_extra_requirements.txt
├── plans
│   ├── README-plans.md
│   └── test_playbooks_parallel.fmf
├── pylint_extra_requirements.txt
├── pylintrc
├── pytest_extra_requirements.txt
├── tasks
│   ├── main.yml
│   ├── set_vars.yml
│   └── ssh.yml
├── templates
│   └── kdump.conf.j2
├── tests
│   ├── inventory.yaml.j2
│   ├── roles
│   │   └── linux-system-roles.kdump
│   │       ├── defaults
│   │       ├── handlers
│   │       ├── meta
│   │       ├── tasks
│   │       ├── templates
│   │       └── vars
│   ├── setup-snapshot.yml
│   ├── tasks
│   │   └── check_header.yml
│   ├── templates
│   │   └── get_ansible_managed.j2
│   ├── tests_default.yml
│   ├── tests_default_reboot.yml
│   ├── tests_default_wrapper.yml
│   ├── tests_ssh.yml
│   ├── tests_ssh_reboot.yml
│   ├── tests_ssh_wrapper.yml
│   └── vars
│       └── rh_distros_vars.yml
├── tox.ini
└── vars
    ├── AlmaLinux_10.yml
    ├── CentOS_10.yml
    ├── RedHat_10.yml
    ├── Rocky_10.yml
    └── main.yml

/.README.html: --------------------------------------------------------------------------------
Ansible Role: Kernel Crash Dump

An Ansible role which configures kdump.

Warning

The role replaces the kdump configuration of the managed host. Previous settings will be lost, even if they are not specified in the role variables. Currently, this includes replacing at least the following configuration file:

- /etc/kdump.conf

Requirements

See below.

Collection requirements

The role requires external collections only for management of rpm-ostree nodes. Please run the following command to install them if you need to manage rpm-ostree nodes:

  ansible-galaxy collection install -vv -r meta/collection-requirements.yml

Role Variables

kdump_target: Can be specified to write vmcore to a location that is not in the root file system. If type is raw or a filesystem type, location points to a partition (by device node name, label, or UUID). For example:

  kdump_target:
    type: raw
    location: /dev/sda1

or, for an ext4 filesystem:

  kdump_target:
    type: ext4
    location: "12e3e25f-534e-4007-a40c-e7e080a933ad"

If type is ssh, location points to a server. For example:

  type: ssh
  location: user@example.com

Similarly, for nfs, location points to an NFS server:

  type: nfs
  location: nfs.example.com

Only the ssh type is considered stable; support for the other types is experimental.

kdump_path: The path to which vmcore will be written. If kdump_target is not null, the path is relative to that dump target. Otherwise, it must be an absolute path in the root file system.

kdump_core_collector: A command to copy the vmcore. If null, uses makedumpfile with options depending on kdump_target.type.

kdump_system_action: The action that is performed when dumping the core file fails. Can be reboot, halt, poweroff, or shell.

kdump_auto_reset_crashkernel: Whether to reset the kernel crashkernel setting to the new default value when kexec-tools updates the default crashkernel value and existing kernels still use the old default.

kdump_dracut_args: Extra dracut options to pass when rebuilding the kdump initrd.

kdump_reboot_ok: If you run the role on a managed node that does not have memory reserved for the crash kernel, i.e. the file /sys/kernel/kexec_crash_size contains 0, it might be required to reboot the managed node to configure kdump.

By default, the role does not reboot the managed node. If a managed node requires a reboot, the role sets the kdump_reboot_required fact and fails, so that the user can reboot the managed node when needed. If you want the role to reboot the system if required, set this variable to true. You do not need to re-execute the role after the reboot.

Default: false
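For orientation, a minimal playbook applying these variables might look like the sketch below; the host group, SSH user, dump server name, and dump path are illustrative assumptions, not values shipped with the role:

  - name: Configure kdump to dump vmcore over ssh
    hosts: managed_nodes                             # assumed inventory group
    vars:
      kdump_target:
        type: ssh
        location: kdumpuser@dumpserver.example.com   # assumed user and server
      kdump_path: /var/crash                         # illustrative path
      kdump_reboot_ok: true
    roles:
      - linux-system-roles.kdump

With kdump_reboot_ok set to true, the role itself reboots the node if crash-kernel memory is not yet reserved.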

Ansible Facts Returned by the Role

kdump_reboot_required: The role sets this fact if the managed node requires a reboot to complete the kdump configuration. Re-execute the role after the reboot to ensure that kdump is working.
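If kdump_reboot_ok is left at false, a calling playbook can react to this fact itself. The following is only a sketch of one possible pattern; the block/rescue layout and inventory group are assumptions, not part of the role:

  - name: Configure kdump, rebooting manually when required
    hosts: managed_nodes                             # assumed inventory group
    tasks:
      - name: Run the kdump role and recover from a reboot-required failure
        block:
          - name: Apply the kdump role
            include_role:
              name: linux-system-roles.kdump
        rescue:
          - name: Re-raise failures that are not about a required reboot
            fail:
              msg: The kdump role failed for a reason other than a required reboot
            when: not (kdump_reboot_required | d(false))

          - name: Reboot the managed node
            reboot:

          - name: Re-run the role after the reboot
            include_role:
              name: linux-system-roles.kdump

This mirrors the behaviour described above: the role fails after setting the fact, and the caller decides when the reboot happens.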

rpm-ostree

See README-ostree.md

License

MIT
234 | 235 | 236 | -------------------------------------------------------------------------------- /.ansible-lint: -------------------------------------------------------------------------------- 1 | --- 2 | profile: production 3 | kinds: 4 | - yaml: "**/meta/collection-requirements.yml" 5 | - playbook: "**/tests/get_coverage.yml" 6 | - yaml: "**/tests/collection-requirements.yml" 7 | - playbook: "**/tests/tests_*.yml" 8 | - playbook: "**/tests/setup-snapshot.yml" 9 | - tasks: "**/tests/*.yml" 10 | - playbook: "**/tests/playbooks/*.yml" 11 | - tasks: "**/tests/tasks/*.yml" 12 | - tasks: "**/tests/tasks/*/*.yml" 13 | - vars: "**/tests/vars/*.yml" 14 | - playbook: "**/examples/*.yml" 15 | skip_list: 16 | - fqcn-builtins 17 | - var-naming[no-role-prefix] 18 | exclude_paths: 19 | - tests/roles/ 20 | - .github/ 21 | - .markdownlint.yaml 22 | - examples/roles/ 23 | mock_roles: 24 | - linux-system-roles.kdump 25 | supported_ansible_also: 26 | - "2.14.0" 27 | -------------------------------------------------------------------------------- /.codespell_ignores: -------------------------------------------------------------------------------- 1 | passt 2 | -------------------------------------------------------------------------------- /.codespellrc: -------------------------------------------------------------------------------- 1 | [codespell] 2 | check-hidden = true 3 | # Note that `-w` doesn't work when ignore-multiline-regex is set 4 | # https://github.com/codespell-project/codespell/issues/3642 5 | ignore-multiline-regex = codespell:ignore-begin.*codespell:ignore-end 6 | ignore-words = .codespell_ignores 7 | # skip-file is not available https://github.com/codespell-project/codespell/pull/2759 8 | # .pandoc_template.html5 contains a typo in Licence that we shouldn't edit 9 | # .README.html is generated from README.md automatically - no need to check spelling 10 | skip = .pandoc_template.html5,.README.html 11 | -------------------------------------------------------------------------------- /.commitlintrc.js: -------------------------------------------------------------------------------- 1 | module.exports = { 2 | parserPreset: 'conventional-changelog-conventionalcommits', 3 | rules: { 4 | 'body-leading-blank': [1, 'always'], 5 | 'body-max-line-length': [2, 'always', 100], 6 | 'footer-leading-blank': [1, 'always'], 7 | 'footer-max-line-length': [2, 'always', 100], 8 | 'header-max-length': [2, 'always', 100], 9 | 'subject-case': [ 10 | 2, 11 | 'never', 12 | ['start-case', 'pascal-case', 'upper-case'], 13 | ], 14 | 'subject-empty': [2, 'never'], 15 | 'subject-full-stop': [2, 'never', '.'], 16 | 'type-case': [2, 'always', 'lower-case'], 17 | 'type-empty': [2, 'never'], 18 | 'type-enum': [ 19 | 2, 20 | 'always', 21 | [ 22 | 'build', 23 | 'chore', 24 | 'ci', 25 | 'docs', 26 | 'feat', 27 | 'fix', 28 | 'perf', 29 | 'refactor', 30 | 'revert', 31 | 'style', 32 | 'test', 33 | 'tests', 34 | ], 35 | ], 36 | }, 37 | prompt: { 38 | questions: { 39 | type: { 40 | description: "Select the type of change that you're committing", 41 | enum: { 42 | feat: { 43 | description: 'A new feature', 44 | title: 'Features', 45 | emoji: '✨', 46 | }, 47 | fix: { 48 | description: 'A bug fix', 49 | title: 'Bug Fixes', 50 | emoji: '🐛', 51 | }, 52 | docs: { 53 | description: 'Documentation only changes', 54 | title: 'Documentation', 55 | emoji: '📚', 56 | }, 57 | style: { 58 | description: 59 | 'Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)', 60 | title: 'Styles', 61 | 
emoji: '💎', 62 | }, 63 | refactor: { 64 | description: 65 | 'A code change that neither fixes a bug nor adds a feature', 66 | title: 'Code Refactoring', 67 | emoji: '📦', 68 | }, 69 | perf: { 70 | description: 'A code change that improves performance', 71 | title: 'Performance Improvements', 72 | emoji: '🚀', 73 | }, 74 | test: { 75 | description: 'Adding missing tests or correcting existing tests', 76 | title: 'Tests', 77 | emoji: '🚨', 78 | }, 79 | tests: { 80 | description: 'Adding missing tests or correcting existing tests', 81 | title: 'Tests', 82 | emoji: '🚨', 83 | }, 84 | build: { 85 | description: 86 | 'Changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)', 87 | title: 'Builds', 88 | emoji: '🛠', 89 | }, 90 | ci: { 91 | description: 92 | 'Changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs)', 93 | title: 'Continuous Integrations', 94 | emoji: '⚙️', 95 | }, 96 | chore: { 97 | description: "Other changes that don't modify src or test files", 98 | title: 'Chores', 99 | emoji: '♻️', 100 | }, 101 | revert: { 102 | description: 'Reverts a previous commit', 103 | title: 'Reverts', 104 | emoji: '🗑', 105 | }, 106 | }, 107 | }, 108 | scope: { 109 | description: 110 | 'What is the scope of this change (e.g. component or file name)', 111 | }, 112 | subject: { 113 | description: 114 | 'Write a short, imperative tense description of the change', 115 | }, 116 | body: { 117 | description: 'Provide a longer description of the change', 118 | }, 119 | isBreaking: { 120 | description: 'Are there any breaking changes?', 121 | }, 122 | breakingBody: { 123 | description: 124 | 'A BREAKING CHANGE commit requires a body. Please enter a longer description of the commit itself', 125 | }, 126 | breaking: { 127 | description: 'Describe the breaking changes', 128 | }, 129 | isIssueAffected: { 130 | description: 'Does this change affect any open issues?', 131 | }, 132 | issuesBody: { 133 | description: 134 | 'If issues are closed, the commit requires a body. Please enter a longer description of the commit itself', 135 | }, 136 | issues: { 137 | description: 'Add issue references (e.g. 
"fix #123", "re #123".)', 138 | }, 139 | }, 140 | }, 141 | }; 142 | -------------------------------------------------------------------------------- /.fmf/version: -------------------------------------------------------------------------------- 1 | 1 2 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: 2 3 | updates: 4 | - package-ecosystem: github-actions 5 | directory: / 6 | schedule: 7 | interval: monthly 8 | commit-message: 9 | prefix: ci 10 | -------------------------------------------------------------------------------- /.github/pull_request_template.md: -------------------------------------------------------------------------------- 1 | Enhancement: 2 | 3 | Reason: 4 | 5 | Result: 6 | 7 | Issue Tracker Tickets (Jira or BZ if any): 8 | -------------------------------------------------------------------------------- /.github/workflows/ansible-lint.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: Ansible Lint 3 | on: # yamllint disable-line rule:truthy 4 | pull_request: 5 | merge_group: 6 | branches: 7 | - main 8 | types: 9 | - checks_requested 10 | push: 11 | branches: 12 | - main 13 | workflow_dispatch: 14 | env: 15 | LSR_ROLE2COLL_NAMESPACE: fedora 16 | LSR_ROLE2COLL_NAME: linux_system_roles 17 | permissions: 18 | contents: read 19 | jobs: 20 | ansible_lint: 21 | runs-on: ubuntu-latest 22 | steps: 23 | - name: Update pip, git 24 | run: | 25 | set -euxo pipefail 26 | sudo apt update 27 | sudo apt install -y git 28 | 29 | - name: Checkout repo 30 | uses: actions/checkout@v4 31 | 32 | - name: Install tox, tox-lsr 33 | run: | 34 | set -euxo pipefail 35 | pip3 install "git+https://github.com/linux-system-roles/tox-lsr@3.11.0" 36 | 37 | - name: Convert role to collection format 38 | id: collection 39 | run: | 40 | set -euxo pipefail 41 | TOXENV=collection lsr_ci_runtox 42 | coll_dir=".tox/ansible_collections/$LSR_ROLE2COLL_NAMESPACE/$LSR_ROLE2COLL_NAME" 43 | # cleanup after collection conversion 44 | rm -rf "$coll_dir/.ansible" .tox/ansible-plugin-scan 45 | # ansible-lint action requires a .git directory??? 
46 | # https://github.com/ansible/ansible-lint/blob/main/action.yml#L45 47 | mkdir -p "$coll_dir/.git" 48 | meta_req_file="${{ github.workspace }}/meta/collection-requirements.yml" 49 | test_req_file="${{ github.workspace }}/tests/collection-requirements.yml" 50 | if [ -f "$meta_req_file" ] && [ -f "$test_req_file" ]; then 51 | coll_req_file="${{ github.workspace }}/req.yml" 52 | python -c 'import sys; import yaml 53 | hsh1 = yaml.safe_load(open(sys.argv[1])) 54 | hsh2 = yaml.safe_load(open(sys.argv[2])) 55 | coll = {} 56 | for item in hsh1["collections"] + hsh2["collections"]: 57 | if isinstance(item, dict): 58 | name = item["name"] 59 | rec = item 60 | else: 61 | name = item # assume string 62 | rec = {"name": name} 63 | if name not in coll: 64 | coll[name] = rec 65 | hsh1["collections"] = list(coll.values()) 66 | yaml.safe_dump(hsh1, open(sys.argv[3], "w"))' "$meta_req_file" "$test_req_file" "$coll_req_file" 67 | echo merged "$coll_req_file" 68 | cat "$coll_req_file" 69 | elif [ -f "$meta_req_file" ]; then 70 | coll_req_file="$meta_req_file" 71 | elif [ -f "$test_req_file" ]; then 72 | coll_req_file="$test_req_file" 73 | else 74 | coll_req_file="" 75 | fi 76 | echo "coll_req_file=$coll_req_file" >> $GITHUB_OUTPUT 77 | 78 | - name: Run ansible-lint 79 | uses: ansible/ansible-lint@v25 80 | with: 81 | working_directory: ${{ github.workspace }}/.tox/ansible_collections/${{ env.LSR_ROLE2COLL_NAMESPACE }}/${{ env.LSR_ROLE2COLL_NAME }} 82 | requirements_file: ${{ steps.collection.outputs.coll_req_file }} 83 | env: 84 | ANSIBLE_COLLECTIONS_PATH: ${{ github.workspace }}/.tox 85 | -------------------------------------------------------------------------------- /.github/workflows/ansible-managed-var-comment.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: Check for ansible_managed variable use in comments 3 | on: # yamllint disable-line rule:truthy 4 | pull_request: 5 | merge_group: 6 | branches: 7 | - main 8 | types: 9 | - checks_requested 10 | push: 11 | branches: 12 | - main 13 | workflow_dispatch: 14 | permissions: 15 | contents: read 16 | jobs: 17 | ansible_managed_var_comment: 18 | runs-on: ubuntu-latest 19 | steps: 20 | - name: Update pip, git 21 | run: | 22 | set -euxo pipefail 23 | python3 -m pip install --upgrade pip 24 | sudo apt update 25 | sudo apt install -y git 26 | 27 | - name: Checkout repo 28 | uses: actions/checkout@v4 29 | 30 | - name: Install tox, tox-lsr 31 | run: | 32 | set -euxo pipefail 33 | pip3 install "git+https://github.com/linux-system-roles/tox-lsr@3.11.0" 34 | 35 | - name: Run ansible-plugin-scan 36 | run: | 37 | set -euxo pipefail 38 | TOXENV=ansible-managed-var-comment lsr_ci_runtox 39 | -------------------------------------------------------------------------------- /.github/workflows/ansible-test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: Ansible Test 3 | on: # yamllint disable-line rule:truthy 4 | pull_request: 5 | merge_group: 6 | branches: 7 | - main 8 | types: 9 | - checks_requested 10 | push: 11 | branches: 12 | - main 13 | workflow_dispatch: 14 | env: 15 | LSR_ROLE2COLL_NAMESPACE: fedora 16 | LSR_ROLE2COLL_NAME: linux_system_roles 17 | permissions: 18 | contents: read 19 | jobs: 20 | ansible_test: 21 | runs-on: ubuntu-latest 22 | steps: 23 | - name: Update pip, git 24 | run: | 25 | set -euxo pipefail 26 | python3 -m pip install --upgrade pip 27 | sudo apt update 28 | sudo apt install -y git 29 | 30 | - name: Checkout repo 31 | uses: 
actions/checkout@v4 32 | 33 | - name: Install tox, tox-lsr 34 | run: | 35 | set -euxo pipefail 36 | pip3 install "git+https://github.com/linux-system-roles/tox-lsr@3.11.0" 37 | 38 | - name: Convert role to collection format 39 | run: | 40 | set -euxo pipefail 41 | TOXENV=collection lsr_ci_runtox 42 | 43 | - name: Run ansible-test 44 | uses: ansible-community/ansible-test-gh-action@release/v1 45 | with: 46 | testing-type: sanity # wokeignore:rule=sanity 47 | ansible-core-version: stable-2.17 48 | collection-src-directory: ${{ github.workspace }}/.tox/ansible_collections/${{ env.LSR_ROLE2COLL_NAMESPACE }}/${{ env.LSR_ROLE2COLL_NAME }} 49 | -------------------------------------------------------------------------------- /.github/workflows/build_docs.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # yamllint disable rule:line-length 3 | name: Convert README.md to HTML and push to docs branch 4 | on: # yamllint disable-line rule:truthy 5 | push: 6 | branches: 7 | - main 8 | paths: 9 | - README.md 10 | release: 11 | types: 12 | - published 13 | permissions: 14 | contents: read 15 | jobs: 16 | build_docs: 17 | runs-on: ubuntu-latest 18 | permissions: 19 | contents: write 20 | steps: 21 | - name: Update pip, git 22 | run: | 23 | set -euxo pipefail 24 | sudo apt update 25 | sudo apt install -y git 26 | 27 | - name: Check out code 28 | uses: actions/checkout@v4 29 | with: 30 | fetch-depth: 0 31 | - name: Ensure the docs branch 32 | run: | 33 | set -euxo pipefail 34 | branch=docs 35 | existed_in_remote=$(git ls-remote --heads origin $branch) 36 | 37 | if [ -z "${existed_in_remote}" ]; then 38 | echo "Creating $branch branch" 39 | git config --global user.name "${{ github.actor }}" 40 | git config --global user.email "${{ github.actor }}@users.noreply.github.com" 41 | git checkout --orphan $branch 42 | git reset --hard 43 | git commit --allow-empty -m "Initializing $branch branch" 44 | git push origin $branch 45 | echo "Created $branch branch" 46 | else 47 | echo "Branch $branch already exists" 48 | fi 49 | 50 | - name: Checkout the docs branch 51 | uses: actions/checkout@v4 52 | with: 53 | ref: docs 54 | 55 | - name: Fetch README.md and .pandoc_template.html5 template from the workflow branch 56 | uses: actions/checkout@v4 57 | with: 58 | sparse-checkout: | 59 | README.md 60 | .pandoc_template.html5 61 | sparse-checkout-cone-mode: false 62 | path: ref_branch 63 | - name: Set RELEASE_VERSION based on whether run on release or on push 64 | run: | 65 | set -euxo pipefail 66 | if [ ${{ github.event_name }} = release ]; then 67 | echo "RELEASE_VERSION=${{ github.event.release.tag_name }}" >> $GITHUB_ENV 68 | elif [ ${{ github.event_name }} = push ]; then 69 | echo "RELEASE_VERSION=latest" >> $GITHUB_ENV 70 | else 71 | echo Unsupported event 72 | exit 1 73 | fi 74 | 75 | - name: Ensure that version and docs directories exist 76 | run: mkdir -p ${{ env.RELEASE_VERSION }} docs 77 | 78 | - name: Remove badges from README.md prior to converting to HTML 79 | run: sed -i '1,8 {/^\[\!.*actions\/workflows/d}' ref_branch/README.md 80 | 81 | - name: Convert README.md to HTML and save to the version directory 82 | uses: docker://pandoc/core:latest 83 | with: 84 | args: >- 85 | --from gfm --to html5 --toc --shift-heading-level-by=-1 86 | --template ref_branch/.pandoc_template.html5 87 | --output ${{ env.RELEASE_VERSION }}/README.html ref_branch/README.md 88 | 89 | - name: Copy latest README.html to docs/index.html for GitHub pages 90 | if: env.RELEASE_VERSION == 
'latest' 91 | run: cp ${{ env.RELEASE_VERSION }}/README.html docs/index.html 92 | 93 | - name: Upload README.html as an artifact 94 | uses: actions/upload-artifact@master 95 | with: 96 | name: README.html 97 | path: ${{ env.RELEASE_VERSION }}/README.html 98 | 99 | - name: Commit changes 100 | run: | 101 | git config --global user.name "${{ github.actor }}" 102 | git config --global user.email "${{ github.actor }}@users.noreply.github.com" 103 | git add ${{ env.RELEASE_VERSION }}/README.html docs/index.html 104 | git commit -m "Update README.html for ${{ env.RELEASE_VERSION }}" 105 | 106 | - name: Push changes 107 | uses: ad-m/github-push-action@master 108 | with: 109 | github_token: ${{ secrets.GITHUB_TOKEN }} 110 | branch: docs 111 | -------------------------------------------------------------------------------- /.github/workflows/changelog_to_tag.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # yamllint disable rule:line-length 3 | name: Tag, release, and publish role based on CHANGELOG.md push 4 | on: # yamllint disable-line rule:truthy 5 | push: 6 | branches: 7 | - main 8 | paths: 9 | - CHANGELOG.md 10 | permissions: 11 | contents: read 12 | jobs: 13 | tag_release_publish: 14 | runs-on: ubuntu-latest 15 | permissions: 16 | contents: write 17 | steps: 18 | - name: Update pip, git 19 | run: | 20 | set -euxo pipefail 21 | sudo apt update 22 | sudo apt install -y git 23 | 24 | - name: checkout PR 25 | uses: actions/checkout@v4 26 | 27 | - name: Get tag and message from the latest CHANGELOG.md commit 28 | id: tag 29 | run: | 30 | set -euxo pipefail 31 | print=false 32 | while read -r line; do 33 | if [[ "$line" =~ ^\[([0-9]+\.[0-9]+\.[0-9]+)\]\ -\ [0-9-]+ ]]; then 34 | if [ "$print" = false ]; then 35 | _tagname="${BASH_REMATCH[1]}" 36 | echo "$line" 37 | print=true 38 | else 39 | break 40 | fi 41 | elif [ "$print" = true ]; then 42 | echo "$line" 43 | fi 44 | done < CHANGELOG.md > ./.tagmsg.txt 45 | git fetch --all --tags 46 | for t in $( git tag -l ); do 47 | if [ "$t" = "$_tagname" ]; then 48 | echo INFO: tag "$t" already exists 49 | exit 1 50 | fi 51 | done 52 | # Get name of the branch that the change was pushed to 53 | _branch="${GITHUB_REF_NAME:-}" 54 | if [ "$_branch" = master ] || [ "$_branch" = main ]; then 55 | echo Using branch name ["$_branch"] as push branch 56 | else 57 | echo WARNING: GITHUB_REF_NAME ["$_branch"] is not main or master 58 | _branch=$( git branch -r | grep -o 'origin/HEAD -> origin/.*$' | \ 59 | awk -F'/' '{print $3}' || : ) 60 | fi 61 | if [ -z "$_branch" ]; then 62 | _branch=$( git branch --points-at HEAD --no-color --format='%(refname:short)' ) 63 | fi 64 | if [ -z "$_branch" ]; then 65 | echo ERROR: unable to determine push branch 66 | git branch -a 67 | exit 1 68 | fi 69 | echo "tagname=$_tagname" >> "$GITHUB_OUTPUT" 70 | echo "branch=$_branch" >> "$GITHUB_OUTPUT" 71 | - name: Create tag 72 | uses: mathieudutour/github-tag-action@v6.2 73 | with: 74 | github_token: ${{ secrets.GITHUB_TOKEN }} 75 | custom_tag: ${{ steps.tag.outputs.tagname }} 76 | tag_prefix: '' 77 | 78 | - name: Create Release 79 | id: create_release 80 | uses: ncipollo/release-action@v1 81 | with: 82 | tag: ${{ steps.tag.outputs.tagname }} 83 | name: Version ${{ steps.tag.outputs.tagname }} 84 | bodyFile: ./.tagmsg.txt 85 | makeLatest: true 86 | 87 | - name: Publish role to Galaxy 88 | uses: robertdebock/galaxy-action@1.2.1 89 | with: 90 | galaxy_api_key: ${{ secrets.galaxy_api_key }} 91 | git_branch: ${{ steps.tag.outputs.branch }} 92 | 
-------------------------------------------------------------------------------- /.github/workflows/codespell.yml: -------------------------------------------------------------------------------- 1 | # Codespell configuration is within .codespellrc 2 | --- 3 | name: Codespell 4 | on: # yamllint disable-line rule:truthy 5 | - pull_request 6 | permissions: 7 | contents: read 8 | jobs: 9 | codespell: 10 | name: Check for spelling errors 11 | runs-on: ubuntu-latest 12 | steps: 13 | - name: Checkout 14 | uses: actions/checkout@v4 15 | 16 | - name: Codespell 17 | uses: codespell-project/actions-codespell@v2 18 | -------------------------------------------------------------------------------- /.github/workflows/markdownlint.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # yamllint disable rule:line-length 3 | name: Markdown Lint 4 | on: # yamllint disable-line rule:truthy 5 | pull_request: 6 | merge_group: 7 | branches: 8 | - main 9 | types: 10 | - checks_requested 11 | push: 12 | branches: 13 | - main 14 | workflow_dispatch: 15 | permissions: 16 | contents: read 17 | jobs: 18 | markdownlint: 19 | runs-on: ubuntu-latest 20 | steps: 21 | - name: Update pip, git 22 | run: | 23 | set -euxo pipefail 24 | sudo apt update 25 | sudo apt install -y git 26 | 27 | - name: Check out code 28 | uses: actions/checkout@v4 29 | 30 | # CHANGELOG.md is generated automatically from PR titles and descriptions 31 | # It might have issues but they are not critical 32 | - name: Lint all markdown files except for CHANGELOG.md 33 | uses: docker://avtodev/markdown-lint:master 34 | with: 35 | args: >- 36 | --ignore=CHANGELOG.md 37 | **/*.md 38 | config: .markdownlint.yaml 39 | -------------------------------------------------------------------------------- /.github/workflows/pr-title-lint.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: PR Title Lint 3 | on: # yamllint disable-line rule:truthy 4 | pull_request: 5 | types: 6 | - opened 7 | - synchronize 8 | - reopened 9 | - edited 10 | merge_group: 11 | branches: 12 | - main 13 | types: 14 | - checks_requested 15 | permissions: 16 | contents: read 17 | jobs: 18 | commit-checks: 19 | runs-on: ubuntu-latest 20 | steps: 21 | - uses: actions/checkout@v4 22 | with: 23 | fetch-depth: 0 24 | 25 | - name: Install conventional-commit linter 26 | run: npm install @commitlint/config-conventional @commitlint/cli 27 | 28 | - name: Run commitlint on PR title 29 | env: 30 | PR_TITLE: ${{ github.event.pull_request.title }} 31 | # Echo from env variable to avoid bash errors with extra characters 32 | run: echo "$PR_TITLE" | npx commitlint --verbose 33 | -------------------------------------------------------------------------------- /.github/workflows/qemu-kvm-integration-tests.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: Test 3 | on: # yamllint disable-line rule:truthy 4 | pull_request: 5 | merge_group: 6 | branches: 7 | - main 8 | types: 9 | - checks_requested 10 | push: 11 | branches: 12 | - main 13 | workflow_dispatch: 14 | 15 | permissions: 16 | contents: read 17 | # This is required for the ability to create/update the Pull request status 18 | statuses: write 19 | jobs: 20 | scenario: 21 | runs-on: ubuntu-latest 22 | 23 | strategy: 24 | fail-fast: false 25 | matrix: 26 | scenario: 27 | # QEMU 28 | - { image: "centos-9", env: "qemu-ansible-core-2.16" } 29 | - { image: "centos-10", env: "qemu-ansible-core-2.17" } 30 | # 
ansible/libdnf5 bug: https://issues.redhat.com/browse/RHELMISC-10110 31 | # - { image: "fedora-41", env: "qemu-ansible-core-2.17" } 32 | - { image: "fedora-42", env: "qemu-ansible-core-2.19" } 33 | 34 | # container 35 | - { image: "centos-9", env: "container-ansible-core-2.16" } 36 | - { image: "centos-9-bootc", env: "container-ansible-core-2.16" } 37 | # broken on non-running dbus 38 | # - { image: "centos-10", env: "container-ansible-core-2.17" } 39 | - { image: "centos-10-bootc", env: "container-ansible-core-2.17" } 40 | - { image: "fedora-41", env: "container-ansible-core-2.17" } 41 | - { image: "fedora-42", env: "container-ansible-core-2.17" } 42 | - { image: "fedora-41-bootc", env: "container-ansible-core-2.17" } 43 | - { image: "fedora-42-bootc", env: "container-ansible-core-2.17" } 44 | 45 | env: 46 | TOX_ARGS: "--skip-tags tests::infiniband,tests::nvme,tests::scsi" 47 | 48 | steps: 49 | - name: Checkout repo 50 | uses: actions/checkout@v4 51 | 52 | - name: Check if platform is supported 53 | id: check_platform 54 | run: | 55 | set -euxo pipefail 56 | image="${{ matrix.scenario.image }}" 57 | image="${image%-bootc}" 58 | 59 | # convert image to tag formats 60 | platform= 61 | platform_version= 62 | case "$image" in 63 | centos-*) platform=el; platform_version=el"${image#centos-}" ;; 64 | fedora-*) platform=fedora; platform_version="${image/-/}" ;; 65 | esac 66 | supported= 67 | if yq -e '.galaxy_info.galaxy_tags[] | select(. == "'${platform_version}'" or . == "'${platform}'")' meta/main.yml; then 68 | supported=true 69 | fi 70 | 71 | # bootc build support (in buildah) has a separate flag 72 | if [ "${{ matrix.scenario.image }}" != "$image" ]; then 73 | if ! yq -e '.galaxy_info.galaxy_tags[] | select(. == "containerbuild")' meta/main.yml; then 74 | supported= 75 | fi 76 | else 77 | # roles need to opt into support for running in a system container 78 | env="${{ matrix.scenario.env }}" 79 | if [ "${env#container}" != "$env" ] && 80 | ! yq -e '.galaxy_info.galaxy_tags[] | select(. 
== "container")' meta/main.yml; then 81 | supported= 82 | fi 83 | fi 84 | 85 | echo "supported=$supported" >> "$GITHUB_OUTPUT" 86 | 87 | - name: Set up /dev/kvm 88 | if: steps.check_platform.outputs.supported 89 | run: | 90 | echo 'KERNEL=="kvm", GROUP="kvm", MODE="0666", OPTIONS+="static_node=kvm"' | sudo tee /etc/udev/rules.d/99-kvm.rules 91 | sudo udevadm control --reload-rules 92 | sudo udevadm trigger --name-match=kvm --settle 93 | ls -l /dev/kvm 94 | 95 | - name: Disable man-db to speed up package install 96 | if: steps.check_platform.outputs.supported 97 | run: | 98 | echo "set man-db/auto-update false" | sudo debconf-communicate 99 | sudo dpkg-reconfigure man-db 100 | 101 | - name: Install test dependencies 102 | if: steps.check_platform.outputs.supported 103 | run: | 104 | set -euxo pipefail 105 | python3 -m pip install --upgrade pip 106 | sudo apt update 107 | sudo apt install -y --no-install-recommends git ansible-core genisoimage qemu-system-x86 108 | pip3 install "git+https://github.com/linux-system-roles/tox-lsr@3.11.0" 109 | 110 | # HACK: Drop this when moving this workflow to 26.04 LTS 111 | - name: Update podman to 5.x for compatibility with bootc-image-builder's podman 5 112 | if: steps.check_platform.outputs.supported && endsWith(matrix.scenario.image, '-bootc') 113 | run: | 114 | sed 's/noble/plucky/g' /etc/apt/sources.list.d/ubuntu.sources | sudo tee /etc/apt/sources.list.d/plucky.sources >/dev/null 115 | cat </dev/null 116 | Package: podman buildah golang-github-containers-common crun libgpgme11t64 libgpg-error0 golang-github-containers-image catatonit conmon containers-storage 117 | Pin: release n=plucky 118 | Pin-Priority: 991 119 | 120 | Package: libsubid4 netavark passt aardvark-dns containernetworking-plugins libslirp0 slirp4netns 121 | Pin: release n=plucky 122 | Pin-Priority: 991 123 | 124 | Package: * 125 | Pin: release n=plucky 126 | Pin-Priority: 400 127 | EOF 128 | 129 | sudo apt update 130 | sudo apt install -y podman crun conmon containers-storage 131 | 132 | - name: Configure tox-lsr 133 | if: steps.check_platform.outputs.supported 134 | run: >- 135 | curl -o ~/.config/linux-system-roles.json 136 | https://raw.githubusercontent.com/linux-system-roles/linux-system-roles.github.io/master/download/linux-system-roles.json 137 | 138 | - name: Run qemu integration tests 139 | if: steps.check_platform.outputs.supported && startsWith(matrix.scenario.env, 'qemu') 140 | run: >- 141 | tox -e ${{ matrix.scenario.env }} -- --image-name ${{ matrix.scenario.image }} --make-batch 142 | --log-level debug $TOX_ARGS --skip-tags tests::bootc-e2e 143 | --lsr-report-errors-url DEFAULT -- 144 | 145 | - name: Qemu result summary 146 | if: steps.check_platform.outputs.supported && startsWith(matrix.scenario.env, 'qemu') && always() 147 | run: | 148 | set -euo pipefail 149 | # some platforms may have setup/cleanup playbooks - need to find the 150 | # actual test playbook that starts with tests_ 151 | while read code start end test_files; do 152 | for f in $test_files; do 153 | test_file="$f" 154 | f="$(basename $test_file)" 155 | if [[ "$f" =~ ^tests_ ]]; then 156 | break 157 | fi 158 | done 159 | if [ "$code" = "0" ]; then 160 | echo -n "PASS: " 161 | mv "$test_file.log" "${test_file}-SUCCESS.log" 162 | else 163 | echo -n "FAIL: " 164 | mv "$test_file.log" "${test_file}-FAIL.log" 165 | fi 166 | echo "$f" 167 | done < batch.report 168 | 169 | - name: Run container tox integration tests 170 | if: steps.check_platform.outputs.supported && startsWith(matrix.scenario.env, 
'container') 171 | run: | 172 | set -euo pipefail 173 | # HACK: debug.py/profile.py setup is broken 174 | export LSR_CONTAINER_PROFILE=false 175 | export LSR_CONTAINER_PRETTY=false 176 | rc=0 177 | for t in tests/tests_*.yml; do 178 | if tox -e ${{ matrix.scenario.env }} -- --image-name ${{ matrix.scenario.image }} $t > ${t}.log 2>&1; then 179 | echo "PASS: $(basename $t)" 180 | mv "${t}.log" "${t}-SUCCESS.log" 181 | else 182 | echo "FAIL: $(basename $t)" 183 | mv "${t}.log" "${t}-FAIL.log" 184 | rc=1 185 | fi 186 | done 187 | exit $rc 188 | 189 | - name: Run bootc validation tests in QEMU 190 | if: steps.check_platform.outputs.supported && 191 | startsWith(matrix.scenario.env, 'container') && 192 | endsWith(matrix.scenario.image, '-bootc') 193 | run: | 194 | set -euxo pipefail 195 | env=$(echo "${{ matrix.scenario.env }}" | sed 's/^container-/qemu-/') 196 | 197 | for image_file in $(ls tests/tmp/*/qcow2/disk.qcow2 2>/dev/null); do 198 | test="tests/$(basename $(dirname $(dirname $image_file))).yml" 199 | if tox -e "$env" -- --image-file "$(pwd)/$image_file" \ 200 | --log-level debug $TOX_ARGS \ 201 | --lsr-report-errors-url DEFAULT \ 202 | -e __bootc_validation=true \ 203 | -- "$test" >out 2>&1; then 204 | mv out "${test}-PASS.log" 205 | else 206 | mv out "${test}-FAIL.log" 207 | exit 1 208 | fi 209 | done 210 | 211 | - name: Upload test logs on failure 212 | if: failure() 213 | uses: actions/upload-artifact@v4 214 | with: 215 | name: "logs-${{ matrix.scenario.image }}-${{ matrix.scenario.env }}" 216 | path: | 217 | tests/*.log 218 | artifacts/default_provisioners.log 219 | artifacts/*.qcow2.*.log 220 | batch.txt 221 | batch.report 222 | retention-days: 30 223 | 224 | - name: Show test log failures 225 | if: steps.check_platform.outputs.supported && failure() 226 | run: | 227 | set -euo pipefail 228 | # grab check_logs.py script 229 | curl -s -L -o check_logs.py https://raw.githubusercontent.com/linux-system-roles/auto-maintenance/refs/heads/main/check_logs.py 230 | chmod +x check_logs.py 231 | declare -a cmdline=(./check_logs.py --github-action-format) 232 | for log in tests/*-FAIL.log; do 233 | cmdline+=(--lsr-error-log "$log") 234 | done 235 | "${cmdline[@]}" 236 | 237 | - name: Set commit status as success with a description that platform is skipped 238 | if: ${{ steps.check_platform.outputs.supported == '' }} 239 | uses: myrotvorets/set-commit-status-action@master 240 | with: 241 | status: success 242 | context: "${{ github.workflow }} / scenario (${{ matrix.scenario.image }}, ${{ matrix.scenario.env }}) (pull_request)" 243 | description: The role does not support this platform. Skipping. 
244 | targetUrl: "" 245 | -------------------------------------------------------------------------------- /.github/workflows/test_converting_readme.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # yamllint disable rule:line-length 3 | name: Test converting README.md to README.html 4 | on: # yamllint disable-line rule:truthy 5 | pull_request: 6 | merge_group: 7 | branches: 8 | - main 9 | types: 10 | - checks_requested 11 | push: 12 | branches: 13 | - main 14 | permissions: 15 | contents: read 16 | jobs: 17 | test_converting_readme: 18 | runs-on: ubuntu-latest 19 | permissions: 20 | contents: write 21 | steps: 22 | - name: Update pip, git 23 | run: | 24 | set -euxo pipefail 25 | sudo apt update 26 | sudo apt install -y git 27 | 28 | - name: Check out code 29 | uses: actions/checkout@v4 30 | 31 | - name: Remove badges from README.md prior to converting to HTML 32 | run: sed -i '1,8 {/^\[\!.*actions\/workflows/d}' README.md 33 | 34 | - name: Convert README.md to HTML 35 | uses: docker://pandoc/core:latest 36 | with: 37 | args: >- 38 | --from gfm --to html5 --toc --shift-heading-level-by=-1 39 | --template .pandoc_template.html5 40 | --output README.html README.md 41 | 42 | - name: Upload README.html as an artifact 43 | uses: actions/upload-artifact@master 44 | with: 45 | name: README.html 46 | path: README.html 47 | -------------------------------------------------------------------------------- /.github/workflows/tft.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: Run integration tests in Testing Farm 3 | on: 4 | issue_comment: 5 | types: 6 | - created 7 | permissions: 8 | contents: read 9 | # This is required for the ability to create/update the Pull request status 10 | statuses: write 11 | jobs: 12 | prepare_vars: 13 | name: Get info from role and PR to determine if and how to test 14 | # The concurrency key is used to prevent multiple workflows from running at the same time 15 | concurrency: 16 | # group name contains reponame-pr_num to allow simualteneous runs in different PRs 17 | group: testing-farm-${{ github.event.repository.name }}-${{ github.event.issue.number }} 18 | cancel-in-progress: true 19 | # Let's schedule tests only on user request. NOT automatically. 
20 | # Only repository owner or member can schedule tests 21 | if: | 22 | github.event.issue.pull_request 23 | && contains(github.event.comment.body, '[citest]') 24 | && (contains(fromJson('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association) 25 | || contains('systemroller', github.event.comment.user.login)) 26 | runs-on: ubuntu-latest 27 | outputs: 28 | supported_platforms: ${{ steps.supported_platforms.outputs.supported_platforms }} 29 | head_sha: ${{ steps.head_sha.outputs.head_sha }} 30 | memory: ${{ steps.memory.outputs.memory }} 31 | steps: 32 | - name: Dump github context 33 | run: echo "$GITHUB_CONTEXT" 34 | shell: bash 35 | env: 36 | GITHUB_CONTEXT: ${{ toJson(github) }} 37 | 38 | - name: Checkout repo 39 | uses: actions/checkout@v4 40 | 41 | - name: Get head sha of the PR 42 | id: head_sha 43 | run: | 44 | head_sha=$(gh api "repos/$REPO/pulls/$PR_NO" --jq '.head.sha') 45 | echo "head_sha=$head_sha" >> $GITHUB_OUTPUT 46 | env: 47 | REPO: ${{ github.repository }} 48 | PR_NO: ${{ github.event.issue.number }} 49 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 50 | 51 | - name: Checkout PR 52 | uses: actions/checkout@v4 53 | with: 54 | ref: ${{ steps.head_sha.outputs.head_sha }} 55 | 56 | - name: Get memory 57 | id: memory 58 | run: | 59 | if [ -d tests/provision.fmf ]; then 60 | memory=$(grep -rPo ' m: \K(.*)' tests/provision.fmf) 61 | fi 62 | if [ -n "$memory" ]; then 63 | echo "memory=$memory" >> $GITHUB_OUTPUT 64 | else 65 | echo "memory=2048" >> $GITHUB_OUTPUT 66 | fi 67 | 68 | - name: Get supported platforms 69 | id: supported_platforms 70 | run: | 71 | supported_platforms="" 72 | meta_main=meta/main.yml 73 | # All Fedora are supported, add latest Fedora versions to supported_platforms 74 | if yq '.galaxy_info.galaxy_tags[]' "$meta_main" | grep -qi fedora$; then 75 | supported_platforms+=" Fedora-41" 76 | supported_platforms+=" Fedora-42" 77 | fi 78 | # Specific Fedora versions supported 79 | if yq '.galaxy_info.galaxy_tags[]' "$meta_main" | grep -qiP 'fedora\d+$'; then 80 | for fedora_ver in $(yq '.galaxy_info.galaxy_tags[]' "$meta_main" | grep -iPo 'fedora\K(\d+$)'); do 81 | supported_platforms+=" Fedora-$fedora_ver" 82 | done 83 | fi 84 | if yq '.galaxy_info.galaxy_tags[]' "$meta_main" | grep -qi el7; then 85 | supported_platforms+=" CentOS-7-latest" 86 | fi 87 | for ver in 8 9 10; do 88 | if yq '.galaxy_info.galaxy_tags[]' "$meta_main" | grep -qi el"$ver"; then 89 | supported_platforms+=" CentOS-Stream-$ver" 90 | fi 91 | done 92 | echo "supported_platforms=$supported_platforms" >> $GITHUB_OUTPUT 93 | 94 | testing-farm: 95 | name: ${{ matrix.platform }}/ansible-${{ matrix.ansible_version }} 96 | needs: prepare_vars 97 | strategy: 98 | fail-fast: false 99 | matrix: 100 | include: 101 | - platform: Fedora-41 102 | ansible_version: 2.17 103 | - platform: Fedora-42 104 | ansible_version: 2.19 105 | - platform: CentOS-7-latest 106 | ansible_version: 2.9 107 | - platform: CentOS-Stream-8 108 | ansible_version: 2.9 109 | # On CentOS-Stream-8, latest supported Ansible is 2.16 110 | - platform: CentOS-Stream-8 111 | ansible_version: 2.16 112 | - platform: CentOS-Stream-9 113 | ansible_version: 2.17 114 | - platform: CentOS-Stream-10 115 | ansible_version: 2.17 116 | runs-on: ubuntu-latest 117 | env: 118 | ARTIFACTS_DIR_NAME: "tf_${{ github.event.repository.name }}-${{ github.event.issue.number }}_\ 119 | ${{ matrix.platform }}-${{ matrix.ansible_version }}_\ 120 | ${{ needs.prepare_vars.outputs.datetime }}/artifacts" 121 | ARTIFACT_TARGET_DIR: /srv/pub/alt/${{ 
vars.SR_LSR_USER }}/logs 122 | steps: 123 | - name: Set variables with DATETIME and artifact location 124 | id: set_vars 125 | run: | 126 | printf -v DATETIME '%(%Y%m%d-%H%M%S)T' -1 127 | ARTIFACTS_DIR_NAME="tf_${{ github.event.repository.name }}-${{ github.event.issue.number }}_\ 128 | ${{ matrix.platform }}-${{ matrix.ansible_version }}_$DATETIME/artifacts" 129 | ARTIFACTS_TARGET_DIR=/srv/pub/alt/${{ vars.SR_LSR_USER }}/logs 130 | ARTIFACTS_DIR=$ARTIFACTS_TARGET_DIR/$ARTIFACTS_DIR_NAME 131 | ARTIFACTS_URL=https://dl.fedoraproject.org/pub/alt/${{ vars.SR_LSR_USER }}/logs/$ARTIFACTS_DIR_NAME 132 | echo "DATETIME=$DATETIME" >> $GITHUB_OUTPUT 133 | echo "ARTIFACTS_DIR=$ARTIFACTS_DIR" >> $GITHUB_OUTPUT 134 | echo "ARTIFACTS_URL=$ARTIFACTS_URL" >> $GITHUB_OUTPUT 135 | 136 | - name: Set commit status as pending 137 | if: contains(needs.prepare_vars.outputs.supported_platforms, matrix.platform) 138 | uses: myrotvorets/set-commit-status-action@master 139 | with: 140 | sha: ${{ needs.prepare_vars.outputs.head_sha }} 141 | status: pending 142 | context: ${{ matrix.platform }}|ansible-${{ matrix.ansible_version }} 143 | description: Test started 144 | targetUrl: "" 145 | 146 | - name: Set commit status as success with a description that platform is skipped 147 | if: "!contains(needs.prepare_vars.outputs.supported_platforms, matrix.platform)" 148 | uses: myrotvorets/set-commit-status-action@master 149 | with: 150 | sha: ${{ needs.prepare_vars.outputs.head_sha }} 151 | status: success 152 | context: ${{ matrix.platform }}|ansible-${{ matrix.ansible_version }} 153 | description: The role does not support this platform. Skipping. 154 | targetUrl: "" 155 | 156 | - name: Run test in testing farm 157 | uses: sclorg/testing-farm-as-github-action@v4 158 | if: contains(needs.prepare_vars.outputs.supported_platforms, matrix.platform) 159 | with: 160 | git_ref: main 161 | pipeline_settings: '{ "type": "tmt-multihost" }' 162 | environment_settings: '{ "provisioning": { "tags": { "BusinessUnit": "system_roles" } } }' 163 | # Keeping SR_ARTIFACTS_URL at the bottom makes the link in logs clickable 164 | variables: "SR_ANSIBLE_VER=${{ matrix.ansible_version }};\ 165 | SR_REPO_NAME=${{ github.event.repository.name }};\ 166 | SR_GITHUB_ORG=${{ github.repository_owner }};\ 167 | SR_PR_NUM=${{ github.event.issue.number }};\ 168 | SR_ARTIFACTS_DIR=${{ steps.set_vars.outputs.ARTIFACTS_DIR }};\ 169 | SR_TEST_LOCAL_CHANGES=false;\ 170 | SR_LSR_USER=${{ vars.SR_LSR_USER }};\ 171 | SR_ARTIFACTS_URL=${{ steps.set_vars.outputs.ARTIFACTS_URL }}" 172 | # Note that LINUXSYSTEMROLES_SSH_KEY must be single-line, TF doesn't read multi-line variables fine. 173 | secrets: "SR_LSR_DOMAIN=${{ secrets.SR_LSR_DOMAIN }};\ 174 | SR_LSR_SSH_KEY=${{ secrets.SR_LSR_SSH_KEY }}" 175 | compose: ${{ matrix.platform }} 176 | # There are two blockers for using public ranch: 177 | # 1. multihost is not supported in public https://github.com/teemtee/tmt/issues/2620 178 | # 2. 
Security issue that leaks long secrets - Jira TFT-2698 179 | tf_scope: private 180 | api_key: ${{ secrets.TF_API_KEY_RH }} 181 | update_pull_request_status: false 182 | tmt_plan_filter: "tag:playbooks_parallel,kdump" 183 | 184 | - name: Set final commit status 185 | uses: myrotvorets/set-commit-status-action@master 186 | if: always() && contains(needs.prepare_vars.outputs.supported_platforms, matrix.platform) 187 | with: 188 | sha: ${{ needs.prepare_vars.outputs.head_sha }} 189 | status: ${{ job.status }} 190 | context: ${{ matrix.platform }}|ansible-${{ matrix.ansible_version }} 191 | description: Test finished 192 | targetUrl: ${{ steps.set_vars.outputs.ARTIFACTS_URL }} 193 | -------------------------------------------------------------------------------- /.github/workflows/tft_citest_bad.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: Re-run failed testing farm tests 3 | on: 4 | issue_comment: 5 | types: 6 | - created 7 | permissions: 8 | contents: read 9 | jobs: 10 | citest_bad_rerun: 11 | if: | 12 | github.event.issue.pull_request 13 | && contains(fromJson('["[citest_bad]", "[citest-bad]", "[citest bad]"]'), github.event.comment.body) 14 | && contains(fromJson('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association) 15 | permissions: 16 | actions: write # for re-running failed jobs: https://docs.github.com/en/rest/actions/workflow-runs?apiVersion=2022-11-28#re-run-a-job-from-a-workflow-run 17 | runs-on: ubuntu-latest 18 | steps: 19 | - name: Wait 10s until tft.yml workflow is created and skipped because new comment don't match [citest] 20 | run: sleep 10s 21 | 22 | - name: Re-run failed jobs for this PR 23 | env: 24 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 25 | REPO: ${{ github.repository }} 26 | PR_TITLE: ${{ github.event.issue.title }} 27 | run: | 28 | PENDING_RUN=$(gh api "repos/$REPO/actions/workflows/tft.yml/runs?event=issue_comment" \ 29 | | jq -r "[.workflow_runs[] | select( .display_title == \"$PR_TITLE\") | \ 30 | select(.status == \"pending\" or .status == \"queued\" or .status == \"in_progress\") | .id][0]") 31 | # if pending run don't exist, take the last run with failure state 32 | if [ "$PENDING_RUN" != "null" ]; then 33 | echo "The workflow $PENDING_RUN is still running, wait for it to finish to re-run" 34 | exit 1 35 | fi 36 | RUN_ID=$(gh api "repos/$REPO/actions/workflows/tft.yml/runs?event=issue_comment" \ 37 | | jq -r "[.workflow_runs[] | select( .display_title == \"$PR_TITLE\" ) | select( .conclusion == \"failure\" ) | .id][0]") 38 | if [ "$RUN_ID" = "null" ]; then 39 | echo "Failed workflow not found, exiting" 40 | exit 1 41 | fi 42 | echo "Re-running workflow $RUN_ID" 43 | gh api --method POST repos/$REPO/actions/runs/$RUN_ID/rerun-failed-jobs 44 | -------------------------------------------------------------------------------- /.github/workflows/weekly_ci.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # yamllint disable rule:line-length 3 | name: Weekly CI trigger 4 | on: # yamllint disable-line rule:truthy 5 | workflow_dispatch: 6 | schedule: 7 | - cron: 0 7 * * 6 8 | env: 9 | BRANCH_NAME: weekly-ci 10 | COMMIT_MESSAGE: "ci: This PR is to trigger periodic CI testing" 11 | BODY_MESSAGE: >- 12 | This PR is for the purpose of triggering periodic CI testing. 13 | We don't currently have a way to trigger CI without a PR, 14 | so this PR serves that purpose. 
15 | COMMENT: "[citest]" 16 | permissions: 17 | contents: read 18 | jobs: 19 | weekly_ci: 20 | runs-on: ubuntu-latest 21 | permissions: 22 | issues: write 23 | pull-requests: write 24 | contents: write 25 | steps: 26 | - name: Update pip, git 27 | run: | 28 | set -euxo pipefail 29 | sudo apt update 30 | sudo apt install -y git 31 | 32 | - name: Checkout latest code 33 | uses: actions/checkout@v4 34 | with: 35 | fetch-depth: 0 36 | - name: Create or rebase commit, add dump_packages callback 37 | run: | 38 | set -euxo pipefail 39 | 40 | git config --global user.name "github-actions[bot]" 41 | git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com" 42 | git checkout ${{ env.BRANCH_NAME }} || git checkout -b ${{ env.BRANCH_NAME }} 43 | git rebase main 44 | if [ ! -d tests/callback_plugins ]; then 45 | mkdir -p tests/callback_plugins 46 | fi 47 | curl -L -s -o tests/callback_plugins/dump_packages.py https://raw.githubusercontent.com/linux-system-roles/auto-maintenance/main/callback_plugins/dump_packages.py 48 | git add tests/callback_plugins 49 | git commit --allow-empty -m "${{ env.COMMIT_MESSAGE }}" 50 | git push -f --set-upstream origin ${{ env.BRANCH_NAME }} 51 | 52 | - name: Create and comment pull request 53 | uses: actions/github-script@v7 54 | with: 55 | github-token: ${{ secrets.GH_PUSH_TOKEN }} 56 | script: | 57 | const head = [context.repo.owner, ":", "${{ env.BRANCH_NAME }}"].join(""); 58 | const response = await github.rest.pulls.list({ 59 | owner: context.repo.owner, 60 | repo: context.repo.repo, 61 | head: head, 62 | base: context.ref, 63 | state: "open" 64 | }); 65 | let pr_number = ''; 66 | if (response.data.length === 0) { 67 | pr_number = (await github.rest.pulls.create({ 68 | owner: context.repo.owner, 69 | repo: context.repo.repo, 70 | title: "${{ env.COMMIT_MESSAGE }}", 71 | body: "${{ env.BODY_MESSAGE }}", 72 | head: "${{ env.BRANCH_NAME }}", 73 | base: context.ref, 74 | draft: true 75 | })).data.number; 76 | } else { 77 | pr_number = response.data[0].number; 78 | } 79 | github.rest.issues.createComment({ 80 | owner: context.repo.owner, 81 | repo: context.repo.repo, 82 | issue_number: pr_number, 83 | body: "${{ env.COMMENT }}", 84 | }); 85 | -------------------------------------------------------------------------------- /.github/workflows/woke.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # yamllint disable rule:line-length 3 | name: Woke 4 | on: # yamllint disable-line rule:truthy 5 | - pull_request 6 | jobs: 7 | woke: 8 | name: Detect non-inclusive language 9 | runs-on: ubuntu-latest 10 | steps: 11 | - name: Checkout 12 | uses: actions/checkout@v4 13 | 14 | - name: Run lsr-woke-action 15 | # Originally, uses: get-woke/woke-action@v0 16 | uses: linux-system-roles/lsr-woke-action@main 17 | with: 18 | woke-args: "-c https://raw.githubusercontent.com/linux-system-roles/tox-lsr/main/src/tox_lsr/config_files/woke.yml --count-only-error-for-failure" 19 | # Cause the check to fail on any broke rules 20 | fail-on-error: true 21 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | passes.yml 2 | vault.yml 3 | *.pyc 4 | *.retry 5 | /tests/.coverage 6 | /tests/htmlcov* 7 | /.tox 8 | /venv*/ 9 | /.venv/ 10 | .vscode/ 11 | artifacts/ 12 | __pycache__/ 13 | *~ 14 | .pytest_cache/ 15 | -------------------------------------------------------------------------------- 
/.markdownlint.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Default state for all rules 3 | default: true 4 | 5 | # Path to configuration file to extend 6 | extends: null 7 | 8 | # MD001/heading-increment/header-increment - Heading levels should only increment by one level at a time 9 | MD001: true 10 | 11 | # MD002/first-heading-h1/first-header-h1 - First heading should be a top-level heading 12 | MD002: 13 | # Heading level 14 | level: 1 15 | 16 | # MD003/heading-style/header-style - Heading style 17 | MD003: 18 | # Heading style 19 | style: "consistent" 20 | 21 | # MD004/ul-style - Unordered list style 22 | MD004: 23 | # List style 24 | style: "consistent" 25 | 26 | # MD005/list-indent - Inconsistent indentation for list items at the same level 27 | MD005: true 28 | 29 | # MD006/ul-start-left - Consider starting bulleted lists at the beginning of the line 30 | MD006: true 31 | 32 | # MD007/ul-indent - Unordered list indentation 33 | MD007: 34 | # Spaces for indent 35 | indent: 2 36 | # Whether to indent the first level of the list 37 | start_indented: false 38 | # Spaces for first level indent (when start_indented is set) 39 | start_indent: 2 40 | 41 | # MD009/no-trailing-spaces - Trailing spaces 42 | MD009: 43 | # Spaces for line break 44 | br_spaces: 2 45 | # Allow spaces for empty lines in list items 46 | list_item_empty_lines: false 47 | # Include unnecessary breaks 48 | strict: false 49 | 50 | # MD010/no-hard-tabs - Hard tabs 51 | MD010: 52 | # Include code blocks 53 | code_blocks: true 54 | # Fenced code languages to ignore 55 | ignore_code_languages: [] 56 | # Number of spaces for each hard tab 57 | spaces_per_tab: 1 58 | 59 | # MD011/no-reversed-links - Reversed link syntax 60 | MD011: true 61 | 62 | # MD012/no-multiple-blanks - Multiple consecutive blank lines 63 | MD012: 64 | # Consecutive blank lines 65 | maximum: 1 66 | 67 | # Modified for LSR 68 | # GFM does not limit line length 69 | # MD013/line-length - Line length 70 | MD013: false 71 | # # Number of characters 72 | # # line_length: 80 73 | # line_length: 999 74 | # # Number of characters for headings 75 | # heading_line_length: 80 76 | # # Number of characters for code blocks 77 | # code_block_line_length: 80 78 | # # Include code blocks 79 | # code_blocks: true 80 | # # Include tables 81 | # tables: true 82 | # # Include headings 83 | # headings: true 84 | # # Include headings 85 | # headers: true 86 | # # Strict length checking 87 | # strict: false 88 | # # Stern length checking 89 | # stern: false 90 | 91 | # MD014/commands-show-output - Dollar signs used before commands without showing output 92 | MD014: true 93 | 94 | # MD018/no-missing-space-atx - No space after hash on atx style heading 95 | MD018: true 96 | 97 | # MD019/no-multiple-space-atx - Multiple spaces after hash on atx style heading 98 | MD019: true 99 | 100 | # MD020/no-missing-space-closed-atx - No space inside hashes on closed atx style heading 101 | MD020: true 102 | 103 | # MD021/no-multiple-space-closed-atx - Multiple spaces inside hashes on closed atx style heading 104 | MD021: true 105 | 106 | # MD022/blanks-around-headings/blanks-around-headers - Headings should be surrounded by blank lines 107 | MD022: 108 | # Blank lines above heading 109 | lines_above: 1 110 | # Blank lines below heading 111 | lines_below: 1 112 | 113 | # MD023/heading-start-left/header-start-left - Headings must start at the beginning of the line 114 | MD023: true 115 | 116 | # 
MD024/no-duplicate-heading/no-duplicate-header - Multiple headings with the same content 117 | MD024: true 118 | 119 | # MD025/single-title/single-h1 - Multiple top-level headings in the same document 120 | MD025: 121 | # Heading level 122 | level: 1 123 | # RegExp for matching title in front matter 124 | front_matter_title: "^\\s*title\\s*[:=]" 125 | 126 | # MD026/no-trailing-punctuation - Trailing punctuation in heading 127 | MD026: 128 | # Punctuation characters not allowed at end of headings 129 | punctuation: ".,;:!。,;:!" 130 | 131 | # MD027/no-multiple-space-blockquote - Multiple spaces after blockquote symbol 132 | MD027: true 133 | 134 | # MD028/no-blanks-blockquote - Blank line inside blockquote 135 | MD028: true 136 | 137 | # MD029/ol-prefix - Ordered list item prefix 138 | MD029: 139 | # List style 140 | style: "one_or_ordered" 141 | 142 | # MD030/list-marker-space - Spaces after list markers 143 | MD030: 144 | # Spaces for single-line unordered list items 145 | ul_single: 1 146 | # Spaces for single-line ordered list items 147 | ol_single: 1 148 | # Spaces for multi-line unordered list items 149 | ul_multi: 1 150 | # Spaces for multi-line ordered list items 151 | ol_multi: 1 152 | 153 | # MD031/blanks-around-fences - Fenced code blocks should be surrounded by blank lines 154 | MD031: 155 | # Include list items 156 | list_items: true 157 | 158 | # MD032/blanks-around-lists - Lists should be surrounded by blank lines 159 | MD032: true 160 | 161 | # MD033/no-inline-html - Inline HTML 162 | MD033: 163 | # Allowed elements 164 | allowed_elements: [] 165 | 166 | # MD034/no-bare-urls - Bare URL used 167 | MD034: true 168 | 169 | # MD035/hr-style - Horizontal rule style 170 | MD035: 171 | # Horizontal rule style 172 | style: "consistent" 173 | 174 | # MD036/no-emphasis-as-heading/no-emphasis-as-header - Emphasis used instead of a heading 175 | MD036: 176 | # Punctuation characters 177 | punctuation: ".,;:!?。,;:!?" 
178 | 179 | # MD037/no-space-in-emphasis - Spaces inside emphasis markers 180 | MD037: true 181 | 182 | # MD038/no-space-in-code - Spaces inside code span elements 183 | MD038: true 184 | 185 | # MD039/no-space-in-links - Spaces inside link text 186 | MD039: true 187 | 188 | # MD040/fenced-code-language - Fenced code blocks should have a language specified 189 | MD040: 190 | # List of languages 191 | allowed_languages: [] 192 | # Require language only 193 | language_only: false 194 | 195 | # MD041/first-line-heading/first-line-h1 - First line in a file should be a top-level heading 196 | MD041: 197 | # Heading level 198 | level: 1 199 | # RegExp for matching title in front matter 200 | front_matter_title: "^\\s*title\\s*[:=]" 201 | 202 | # MD042/no-empty-links - No empty links 203 | MD042: true 204 | 205 | # Modified for LSR 206 | # Disabling, we do not need this 207 | # MD043/required-headings/required-headers - Required heading structure 208 | MD043: false 209 | # # List of headings 210 | # headings: [] 211 | # # List of headings 212 | # headers: [] 213 | # # Match case of headings 214 | # match_case: false 215 | 216 | # MD044/proper-names - Proper names should have the correct capitalization 217 | MD044: 218 | # List of proper names 219 | names: [] 220 | # Include code blocks 221 | code_blocks: true 222 | # Include HTML elements 223 | html_elements: true 224 | 225 | # MD045/no-alt-text - Images should have alternate text (alt text) 226 | MD045: true 227 | 228 | # MD046/code-block-style - Code block style 229 | MD046: 230 | # Block style 231 | style: "consistent" 232 | 233 | # MD047/single-trailing-newline - Files should end with a single newline character 234 | MD047: true 235 | 236 | # MD048/code-fence-style - Code fence style 237 | MD048: 238 | # Code fence style 239 | style: "consistent" 240 | 241 | # MD049/emphasis-style - Emphasis style should be consistent 242 | MD049: 243 | # Emphasis style should be consistent 244 | style: "consistent" 245 | 246 | # MD050/strong-style - Strong style should be consistent 247 | MD050: 248 | # Strong style should be consistent 249 | style: "consistent" 250 | 251 | # MD051/link-fragments - Link fragments should be valid 252 | MD051: true 253 | 254 | # MD052/reference-links-images - Reference links and images should use a label that is defined 255 | MD052: true 256 | 257 | # MD053/link-image-reference-definitions - Link and image reference definitions should be needed 258 | MD053: 259 | # Ignored definitions 260 | ignored_definitions: 261 | - "//" 262 | -------------------------------------------------------------------------------- /.ostree/README.md: -------------------------------------------------------------------------------- 1 | *NOTE*: The `*.txt` files are used by `get_ostree_data.sh` to create the lists 2 | of packages, and to find other system roles used by this role. DO NOT use them 3 | directly. 
4 | -------------------------------------------------------------------------------- /.ostree/get_ostree_data.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | set -euo pipefail 4 | 5 | ostree_dir="${OSTREE_DIR:-"$(dirname "$(realpath "$0")")"}" 6 | 7 | if [ -z "${4:-}" ] || [ "${1:-}" = help ] || [ "${1:-}" = -h ]; then 8 | cat <&2 echo ERROR - could not find role "$role" - please use ANSIBLE_COLLECTIONS_PATH 64 | exit 2 65 | } 66 | 67 | get_packages() { 68 | local ostree_dir pkgtype pkgfile rolefile 69 | ostree_dir="$1" 70 | for pkgtype in "${pkgtypes[@]}"; do 71 | for suff in "" "-$distro" "-${distro}-${major_ver}" "-${distro}-${ver}"; do 72 | pkgfile="$ostree_dir/packages-${pkgtype}${suff}.txt" 73 | if [ -f "$pkgfile" ]; then 74 | cat "$pkgfile" 75 | fi 76 | done 77 | rolefile="$ostree_dir/roles-${pkgtype}.txt" 78 | if [ -f "$rolefile" ]; then 79 | local roles role rolepath 80 | roles="$(cat "$rolefile")" 81 | for role in $roles; do 82 | rolepath="$(get_rolepath "$ostree_dir" "$role")" 83 | if [ -z "$rolepath" ]; then 84 | 1>&2 echo ERROR - could not find role "$role" - please use ANSIBLE_COLLECTIONS_PATH 85 | exit 2 86 | fi 87 | get_packages "$rolepath" 88 | done 89 | fi 90 | done | sort -u 91 | } 92 | 93 | format_packages_json() { 94 | local comma pkgs pkg 95 | comma="" 96 | pkgs="[" 97 | while read -r pkg; do 98 | pkgs="${pkgs}${comma}\"${pkg}\"" 99 | comma=, 100 | done 101 | pkgs="${pkgs}]" 102 | echo "$pkgs" 103 | } 104 | 105 | format_packages_raw() { 106 | cat 107 | } 108 | 109 | format_packages_yaml() { 110 | while read -r pkg; do 111 | echo "- $pkg" 112 | done 113 | } 114 | 115 | format_packages_toml() { 116 | while read -r pkg; do 117 | echo "[[packages]]" 118 | echo "name = \"$pkg\"" 119 | echo "version = \"*\"" 120 | done 121 | } 122 | 123 | distro="${distro_ver%%-*}" 124 | ver="${distro_ver##*-}" 125 | if [[ "$ver" =~ ^([0-9]*) ]]; then 126 | major_ver="${BASH_REMATCH[1]}" 127 | else 128 | echo ERROR: cannot parse major version number from version "$ver" 129 | exit 1 130 | fi 131 | 132 | "get_$category" "$ostree_dir" | "format_${category}_$format" 133 | -------------------------------------------------------------------------------- /.ostree/packages-runtime-CentOS-10.txt: -------------------------------------------------------------------------------- 1 | kdump-utils 2 | -------------------------------------------------------------------------------- /.ostree/packages-runtime-RedHat-10.txt: -------------------------------------------------------------------------------- 1 | kdump-utils 2 | -------------------------------------------------------------------------------- /.ostree/packages-runtime.txt: -------------------------------------------------------------------------------- 1 | grubby 2 | iproute 3 | kexec-tools 4 | openssh-clients 5 | -------------------------------------------------------------------------------- /.pandoc_template.html5: -------------------------------------------------------------------------------- 1 | $--| GitHub HTML5 Pandoc Template" v2.2 | 2020/08/12 | pandoc v2.1.1 2 | 3 | 51 | $-------------------------------------------------------------------------> lang 52 | 53 | 54 | $--============================================================================= 55 | $-- METADATA 56 | $--============================================================================= 57 | 58 | 59 | 60 | $-----------------------------------------------------------------------> author 61 | $for(author-meta)$ 
62 | 63 | $endfor$ 64 | $-------------------------------------------------------------------------> date 65 | $if(date-meta)$ 66 | 67 | $endif$ 68 | $---------------------------------------------------------------------> keywords 69 | $if(keywords)$ 70 | 71 | $endif$ 72 | $------------------------------------------------------------------> description 73 | $if(description)$ 74 | 75 | $endif$ 76 | $------------------------------------------------------------------------> title 77 | $if(title-prefix)$$title-prefix$ – $endif$$pagetitle$ 78 | $--=========================================================================== 79 | $-- CSS STYLESHEETS 80 | $--=========================================================================== 81 | $-- Here comes the placeholder (within double braces) that will be replaced 82 | $-- by the CSS file in the finalized template: 83 | 86 | $------------------------------------------------------------------------------- 87 | 88 | $------------------------------------------------------------------------------- 89 | $if(quotes)$ 90 | 91 | $endif$ 92 | $-------------------------------------------------------------> highlighting-css 93 | $if(highlighting-css)$ 94 | 97 | $endif$ 98 | $--------------------------------------------------------------------------> css 99 | $for(css)$ 100 | 101 | $endfor$ 102 | $-------------------------------------------------------------------------> math 103 | $if(math)$ 104 | $math$ 105 | $endif$ 106 | $------------------------------------------------------------------------------- 107 | 110 | $--------------------------------------------------------------> header-includes 111 | $for(header-includes)$ 112 | $header-includes$ 113 | $endfor$ 114 | $------------------------------------------------------------------------------- 115 | 116 | 117 |
118 | $---------------------------------------------------------------> include-before 119 | $for(include-before)$ 120 | $include-before$ 121 | $endfor$ 122 | $-->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> IF: title 123 | $if(title)$ 124 |
125 |

$title$

126 | $---------------------------------------------------------------------> subtitle 127 | $if(subtitle)$ 128 |

$subtitle$

129 | $endif$ 130 | $-----------------------------------------------------------------------> author 131 | $for(author)$ 132 |

$author$

133 | $endfor$ 134 | $-------------------------------------------------------------------------> date 135 | $if(date)$ 136 |

$date$

137 | $endif$ 138 | $----------------------------------------------------------------------> summary 139 | $if(summary)$ 140 |
141 | $summary$ 142 |
143 | $endif$ 144 | $------------------------------------------------------------------------------- 145 |
146 | $endif$ 147 | $--<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< END IF: title 148 | $--------------------------------------------------------------------------> toc 149 | $if(toc)$ 150 |
151 | 155 |
156 | $endif$ 157 | $-------------------------------------------------------------------------> body 158 | $body$ 159 | $----------------------------------------------------------------> include-after 160 | $for(include-after)$ 161 | $include-after$ 162 | $endfor$ 163 | $------------------------------------------------------------------------------- 164 |
165 | 166 | 167 | -------------------------------------------------------------------------------- /.yamllint.yml: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | --- 3 | ignore: | 4 | /.tox/ 5 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | Changelog 2 | ========= 3 | 4 | [1.4.10] - 2025-02-04 5 | -------------------- 6 | 7 | ### Other Changes 8 | 9 | - ci: ansible-plugin-scan is disabled for now (#223) 10 | - ci: bump ansible-lint to v25; provide collection requirements for ansible-lint (#226) 11 | - test: fix ssh wrapper test (#227) 12 | 13 | [1.4.9] - 2025-01-09 14 | -------------------- 15 | 16 | ### Other Changes 17 | 18 | - ci: Use Fedora 41, drop Fedora 39 (#220) 19 | - ci: Use Fedora 41, drop Fedora 39 - part two (#221) 20 | 21 | [1.4.8] - 2024-10-30 22 | -------------------- 23 | 24 | ### Other Changes 25 | 26 | - ci: Add tft plan and workflow (#208) 27 | - ci: Update fmf plan to add a separate job to prepare managed nodes (#210) 28 | - ci: bump sclorg/testing-farm-as-github-action from 2 to 3 (#211) 29 | - ci: Add workflow for ci_test bad, use remote fmf plan (#212) 30 | - ci: Fix missing slash in ARTIFACTS_URL (#213) 31 | - ci: Add tags to TF workflow, allow more [citest bad] formats (#214) 32 | - ci: ansible-test action now requires ansible-core version (#215) 33 | - ci: add YAML header to github action workflow files (#216) 34 | - refactor: Use vars/RedHat_N.yml symlink for CentOS, Rocky, Alma wherever possible (#218) 35 | 36 | [1.4.7] - 2024-07-02 37 | -------------------- 38 | 39 | ### Bug Fixes 40 | 41 | - fix: el10 kdump role should depend on kdump-utils (#204) 42 | - fix: add support for EL10 (#206) 43 | 44 | ### Other Changes 45 | 46 | - ci: ansible-lint action now requires absolute directory (#205) 47 | 48 | [1.4.6] - 2024-06-11 49 | -------------------- 50 | 51 | ### Other Changes 52 | 53 | - ci: use tox-lsr 3.3.0 which uses ansible-test 2.17 (#199) 54 | - ci: tox-lsr 3.4.0 - fix py27 tests; move other checks to py310 (#201) 55 | - ci: Add supported_ansible_also to .ansible-lint (#202) 56 | 57 | [1.4.5] - 2024-04-04 58 | -------------------- 59 | 60 | ### Other Changes 61 | 62 | - ci: fix python unit test - copy pytest config to tests/unit (#195) 63 | - ci: bump ansible/ansible-lint from 6 to 24 (#196) 64 | - ci: bump mathieudutour/github-tag-action from 6.1 to 6.2 (#197) 65 | 66 | [1.4.4] - 2024-01-16 67 | -------------------- 68 | 69 | ### Other Changes 70 | 71 | - ci: support ansible-lint and ansible-test 2.16 (#192) 72 | - ci: Use supported ansible-lint action; run ansible-lint against the collection (#193) 73 | 74 | [1.4.3] - 2023-12-08 75 | -------------------- 76 | 77 | ### Other Changes 78 | 79 | - ci: bump actions/github-script from 6 to 7 (#189) 80 | - refactor: get_ostree_data.sh use env shebang - remove from .sanity* (#190) 81 | 82 | [1.4.2] - 2023-11-30 83 | -------------------- 84 | 85 | ### Other Changes 86 | 87 | - test: clean up kdump_path (#187) 88 | 89 | [1.4.1] - 2023-11-29 90 | -------------------- 91 | 92 | ### Other Changes 93 | 94 | - refactor: improve support for ostree systems (#185) 95 | 96 | [1.4.0] - 2023-11-06 97 | -------------------- 98 | 99 | ### New Features 100 | 101 | - feat: support for ostree systems (#182) 102 | 103 | ### Other Changes 104 | 105 | - build(deps): bump actions/checkout from 3 to 4 (#173) 106 | - ci: ensure 
dependabot git commit message conforms to commitlint (#176) 107 | - ci: use dump_packages.py callback to get packages used by role (#178) 108 | - ci: tox-lsr version 3.1.1 (#180) 109 | 110 | [1.3.8] - 2023-09-12 111 | -------------------- 112 | 113 | ### Bug Fixes 114 | 115 | - fix: retry read of kexec_crash_size (#169) 116 | 117 | ### Other Changes 118 | 119 | - test: delete kdump user last, retry until success (#171) 120 | 121 | [1.3.7] - 2023-09-07 122 | -------------------- 123 | 124 | ### Other Changes 125 | 126 | - docs: Make badges consistent, run markdownlint on all .md files (#167) 127 | 128 | - Consistently generate badges for GH workflows in README RHELPLAN-146921 129 | - Run markdownlint on all .md files 130 | - Add custom-woke-action if not used already 131 | - Rename woke action to Woke for a pretty badge 132 | 133 | Signed-off-by: Sergei Petrosian 134 | 135 | - ci: Remove badges from README.md prior to converting to HTML (#168) 136 | 137 | - Remove thematic break after badges 138 | - Remove badges from README.md prior to converting to HTML 139 | 140 | Signed-off-by: Sergei Petrosian 141 | 142 | [1.3.6] - 2023-08-17 143 | -------------------- 144 | 145 | ### Bug Fixes 146 | 147 | - fix: ensure .ssh directory exists for kdump_ssh_user on kdump_ssh_server (#164) 148 | - fix: Ensure authorized_keys management works with multiple hosts (#165) 149 | 150 | [1.3.5] - 2023-08-16 151 | -------------------- 152 | 153 | ### Bug Fixes 154 | 155 | - fix: do not fail if authorized_keys not found (#161) 156 | - fix: Write new authorized_keys if needed is not idempotent (#162) 157 | 158 | ### Other Changes 159 | 160 | - ci: Add markdownlint, test_converting_readme, and build_docs workflows (#160) 161 | 162 | [1.3.4] - 2023-08-02 163 | -------------------- 164 | 165 | ### Other Changes 166 | 167 | - test: call handlers to refresh kdump before calling role again (#158) 168 | 169 | [1.3.3] - 2023-07-30 170 | -------------------- 171 | 172 | ### Bug Fixes 173 | 174 | - fix: use failure_action instead of default on EL9 and later (#155) 175 | 176 | ### Other Changes 177 | 178 | - test: add test for failure_action (#156) 179 | 180 | [1.3.2] - 2023-07-19 181 | -------------------- 182 | 183 | ### Bug Fixes 184 | 185 | - fix: facts being gathered unnecessarily (#152) 186 | 187 | ### Other Changes 188 | 189 | - ci: Rename commitlint to PR title Lint, echo PR titles from env var (#150) 190 | - ci: ansible-lint - ignore var-naming[no-role-prefix] (#151) 191 | 192 | [1.3.1] - 2023-06-20 193 | -------------------- 194 | 195 | ### Other Changes 196 | 197 | - ci: Add pull request template and run commitlint on PR title only (#147) 198 | - test: add test for crashkernel, dracut settings (#148) 199 | 200 | [1.3.0] - 2023-05-26 201 | -------------------- 202 | 203 | ### New Features 204 | 205 | - feat: Add support for auto_reset_crashkernel and dracut_args 206 | 207 | ### Bug Fixes 208 | 209 | - fix: do not use /etc/sysconfig/kdump 210 | - fix: use grubby to update crashkernel=auto if needed 211 | 212 | ### Other Changes 213 | 214 | - docs: Consistent contributing.md for all roles - allow role specific contributing.md section 215 | 216 | [1.2.9] - 2023-04-27 217 | -------------------- 218 | 219 | ### Other Changes 220 | 221 | - test: check generated files for ansible_managed, fingerprint 222 | - ci: Add commitlint GitHub action to ensure conventional commits with feedback 223 | 224 | [1.2.8] - 2023-04-13 225 | -------------------- 226 | 227 | ### Other Changes 228 | 229 | - ansible-lint - use changed_when even 
for conditional execution (#138) 230 | 231 | [1.2.7] - 2023-04-06 232 | -------------------- 233 | 234 | ### Bug Fixes 235 | 236 | - Use ansible_os_family in template (#133) 237 | 238 | ### Other Changes 239 | 240 | - Remove unused test script "semaphore" (#131) 241 | - Add README-ansible.md to refer Ansible intro page on linux-system-roles.github.io (#134) 242 | - Fingerprint RHEL System Role managed config files (#136) 243 | 244 | [1.2.6] - 2023-01-16 245 | -------------------- 246 | 247 | ### New Features 248 | 249 | - none 250 | 251 | ### Bug Fixes 252 | 253 | - none 254 | 255 | ### Other Changes 256 | 257 | - ansible-lint 6.x fixes 258 | - Add check for non-inclusive language 259 | - CHANGELOG.md - cleanup non-inclusive words. 260 | 261 | [1.2.5] - 2022-07-19 262 | -------------------- 263 | 264 | ### New Features 265 | 266 | - none 267 | 268 | ### Bug Fixes 269 | 270 | - none 271 | 272 | ### Other Changes 273 | 274 | - make min_ansible_version a string in meta/main.yml (#104) 275 | 276 | The Ansible developers say that `min_ansible_version` in meta/main.yml 277 | must be a `string` value like `"2.9"`, not a `float` value like `2.9`. 278 | 279 | - Add CHANGELOG.md (#105) 280 | 281 | [1.2.4] - 2022-05-06 282 | -------------------- 283 | 284 | ### New Features 285 | 286 | - none 287 | 288 | ### Bug Fixes 289 | 290 | - none 291 | 292 | ### Other Changes 293 | 294 | - bump tox-lsr version to 2.11.0; remove py37; add py310 295 | 296 | [1.2.3] - 2022-04-13 297 | -------------------- 298 | 299 | ### New features 300 | 301 | - support gather\_facts: false; support setup-snapshot.yml 302 | 303 | ### Bug Fixes 304 | 305 | - none 306 | 307 | ### Other Changes 308 | 309 | - bump tox-lsr version to 2.10.1 310 | 311 | [1.2.2] - 2022-02-08 312 | -------------------- 313 | 314 | ### New features 315 | 316 | - use kdumpctl reset-crashkernel on rhel9 317 | 318 | ### Bug Fixes 319 | 320 | - none 321 | 322 | ### Other Changes 323 | 324 | - bump tox-lsr version to 2.9.1 325 | 326 | [1.2.1] - 2022-01-10 327 | -------------------- 328 | 329 | ### New Features 330 | 331 | - none 332 | 333 | ### Bug Fixes 334 | 335 | - none 336 | 337 | ### Other Changes 338 | 339 | - bump tox-lsr version to 2.8.3 340 | - change recursive role symlink to individual role dir symlinks 341 | 342 | [1.2.0] - 2021-12-03 343 | -------------------- 344 | 345 | ### New features 346 | 347 | - Add reboot required 348 | 349 | ### Bug Fixes 350 | 351 | - none 352 | 353 | ### Other Changes 354 | 355 | - update tox-lsr version to 2.8.0 356 | 357 | [1.1.2] - 2021-11-08 358 | -------------------- 359 | 360 | ### New Features 361 | 362 | - none 363 | 364 | ### Bug Fixes 365 | 366 | - none 367 | 368 | ### Other Changes 369 | 370 | - update tox-lsr version to 2.7.1 371 | - support python 39, ansible-core 2.12, ansible-plugin-scan 372 | 373 | [1.1.1] - 2021-09-22 374 | -------------------- 375 | 376 | ### New Features 377 | 378 | - none 379 | 380 | ### Bug Fixes 381 | 382 | - Use {{ ansible\_managed | comment }} to fix multi-line ansible\_managed 383 | - remove authorized\_key; use ansible builtins 384 | 385 | ### Other Changes 386 | 387 | - use apt-get install -y 388 | - use tox-lsr version 2.5.1 389 | 390 | [1.1.0] - 2021-08-10 391 | -------------------- 392 | 393 | ### New features 394 | 395 | - Drop support for Ansible 2.8 by bumping the Ansible version to 2.9 396 | 397 | ### Bug Fixes 398 | 399 | - none 400 | 401 | ### Other Changes 402 | 403 | - none 404 | 405 | [1.0.5] - 2021-06-09 406 | -------------------- 407 | 408 | ### New features 
409 | 410 | - use localhost if no SSH\_CONNECTION env. var. 411 | 412 | ### Bug Fixes 413 | 414 | - none 415 | 416 | ### Other Changes 417 | 418 | - none 419 | 420 | [1.0.4] - 2021-05-05 421 | -------------------- 422 | 423 | ### New features 424 | 425 | - Add `KDUMP_BOOTDIR="/boot"`to `/etc/sysconfig/kdump` for RedHat oses \<7 426 | - Copy the dump target's public host key to the managed node known\_hosts. 427 | 428 | ### Bug fixes 429 | 430 | - Cleaning up ansible-lint errors 431 | - fixing ansible-test errors 432 | - Avoid bare variable in when and the resulting warning. 433 | 434 | ### Other Changes 435 | 436 | - Remove python-26 environment from tox testing 437 | - update to tox-lsr 2.4.0 - add support for ansible-test with docker 438 | - Add tags to tests 439 | - CI: Add support for RHEL-9 440 | 441 | [1.0.3] - 2021-02-11 442 | -------------------- 443 | 444 | ### New Features 445 | 446 | - Add centos8 447 | 448 | ### Bug Fixes 449 | 450 | - Get rid of the extra final newline in string 451 | - Fix centos6 repos; use standard centos images 452 | 453 | ### Other Changes 454 | 455 | - use tox-lsr 2.2.0 456 | - use molecule v3, drop v2 - use tox-lsr 2.1.2 457 | - remove ansible 2.7 support from molecule 458 | - use tox for ansible-lint instead of molecule 459 | - use new tox-lsr plugin 460 | - disable yamllint line-length checks 461 | - disable line-length 462 | - add shellcheck,collection 463 | - use github actions instead of travis 464 | 465 | [1.0.2] - 2020-10-14 466 | -------------------- 467 | 468 | ### New Features 469 | 470 | - none 471 | 472 | ### Bug Fixes 473 | 474 | - none 475 | 476 | ### Other Changes 477 | 478 | - lock ansible-lint version at 4.3.5; suppress role name lint warning 479 | - sync collections related changes from template to kdump role 480 | - collections prep - use FQRN 481 | - Synchronize files from linux-system-roles/template 482 | 483 | [1.0.1] - 2020-08-05 484 | -------------------- 485 | 486 | ### New Features 487 | 488 | - Rename 'dump\_target.kind' to 'type' in conf template. 489 | 490 | ### Bug Fixes 491 | 492 | - Fix lint errors, fix checkmode 493 | 494 | ### Other Changes 495 | 496 | - Synchronize files from linux-system-roles/template 497 | - use molecule v2 498 | - Run tests in wrappers 499 | - Configure Molecule and Travis CI 500 | 501 | [1.0.0] - 2018-08-21 502 | -------------------- 503 | 504 | ### Initial Release 505 | -------------------------------------------------------------------------------- /COPYING: -------------------------------------------------------------------------------- 1 | Copyright (c) 2017 Red Hat, Inc. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy 4 | of this software and associated documentation files (the "Software"), to deal 5 | in the Software without restriction, including without limitation the rights 6 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 7 | copies of the Software, and to permit persons to whom the Software is 8 | furnished to do so, subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in all 11 | copies or substantial portions of the Software. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 18 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 19 | SOFTWARE. 20 | -------------------------------------------------------------------------------- /README-ansible.md: -------------------------------------------------------------------------------- 1 | # Introduction to Ansible for Linux System Roles 2 | 3 | If you are not familiar with Ansible, please see 4 | [Introduction to Ansible for Linux System Roles](https://linux-system-roles.github.io/documentation/intro-to-ansible-for-system-roles.html), 5 | where many useful links are presented. 6 | -------------------------------------------------------------------------------- /README-ostree.md: -------------------------------------------------------------------------------- 1 | # rpm-ostree 2 | 3 | The role supports running on [rpm-ostree](https://coreos.github.io/rpm-ostree/) 4 | systems. The primary issue is that the `/usr` filesystem is read-only, and the 5 | role cannot install packages. Instead, it will just verify that the necessary 6 | packages and any other `/usr` files are pre-installed. The role will change the 7 | package manager to one that is compatible with `rpm-ostree` systems. 8 | 9 | ## Building 10 | 11 | To build an ostree image for a particular operating system distribution and 12 | version, use the script `.ostree/get_ostree_data.sh` to get the list of 13 | packages. If the role uses other system roles, then the script will include the 14 | packages for the other roles in the list it outputs. The list of packages will 15 | be sorted in alphanumeric order. 16 | 17 | Usage: 18 | 19 | ```bash 20 | .ostree/get_ostree_data.sh packages runtime DISTRO-VERSION FORMAT 21 | ``` 22 | 23 | `DISTRO-VERSION` is in the format that Ansible uses for `ansible_distribution` 24 | and `ansible_distribution_version` - for example, `Fedora-38`, `CentOS-8`, 25 | `RedHat-9.4` 26 | 27 | `FORMAT` is one of `toml`, `json`, `yaml`, `raw` 28 | 29 | * `toml` - each package in a TOML `[[packages]]` element 30 | 31 | ```toml 32 | [[packages]] 33 | name = "package-a" 34 | version = "*" 35 | [[packages]] 36 | name = "package-b" 37 | version = "*" 38 | ... 39 | ``` 40 | 41 | * `yaml` - a YAML list of packages 42 | 43 | ```yaml 44 | - package-a 45 | - package-b 46 | ... 47 | ``` 48 | 49 | * `json` - a JSON list of packages 50 | 51 | ```json 52 | ["package-a","package-b",...] 53 | ``` 54 | 55 | * `raw` - a plain text list of packages, one per line 56 | 57 | ```bash 58 | package-a 59 | package-b 60 | ... 61 | ``` 62 | 63 | What format you choose depends on which image builder you are using. For 64 | example, if you are using something based on 65 | [osbuild-composer](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/composing_installing_and_managing_rhel_for_edge_images/index#creating-an-image-builder-blueprint-for-a-rhel-for-edge-image-using-the-command-line-interface_composing-a-rhel-for-edge-image-using-image-builder-command-line), 66 | you will probably want to use the `toml` output format. 
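As a concrete illustration for this role (a sketch — the exact output depends on the `DISTRO-VERSION` you pass and on the `.ostree/*.txt` files present), requesting the runtime packages for `CentOS-10` in `raw` format combines `packages-runtime.txt` with `packages-runtime-CentOS-10.txt` and prints the sorted, de-duplicated union:

```bash
$ .ostree/get_ostree_data.sh packages runtime CentOS-10 raw
grubby
iproute
kdump-utils
kexec-tools
openssh-clients
```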
67 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # Ansible Role: Kernel Crash Dump 3 | 4 | [![ansible-lint.yml](https://github.com/linux-system-roles/kdump/actions/workflows/ansible-lint.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/ansible-lint.yml) [![ansible-test.yml](https://github.com/linux-system-roles/kdump/actions/workflows/ansible-test.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/ansible-test.yml) [![codespell.yml](https://github.com/linux-system-roles/kdump/actions/workflows/codespell.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/codespell.yml) [![markdownlint.yml](https://github.com/linux-system-roles/kdump/actions/workflows/markdownlint.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/markdownlint.yml) [![qemu-kvm-integration-tests.yml](https://github.com/linux-system-roles/kdump/actions/workflows/qemu-kvm-integration-tests.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/qemu-kvm-integration-tests.yml) [![tft.yml](https://github.com/linux-system-roles/kdump/actions/workflows/tft.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/tft.yml) [![tft_citest_bad.yml](https://github.com/linux-system-roles/kdump/actions/workflows/tft_citest_bad.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/tft_citest_bad.yml) [![woke.yml](https://github.com/linux-system-roles/kdump/actions/workflows/woke.yml/badge.svg)](https://github.com/linux-system-roles/kdump/actions/workflows/woke.yml) 5 | 6 | An ansible role which configures kdump. 7 | 8 | ## Warning 9 | 10 | The role replaces the kdump configuration of the managed 11 | host. Previous settings will be lost, even if they are not specified 12 | in the role variables. Currently, this includes replacing at least the 13 | following configuration file: 14 | 15 | * `/etc/kdump.conf` 16 | 17 | ## Requirements 18 | 19 | See below 20 | 21 | ### Collection requirements 22 | 23 | The role requires external collections only for management of `rpm-ostree` 24 | nodes. Please run the following command to install them if you need to manage 25 | `rpm-ostree` nodes: 26 | 27 | ```bash 28 | ansible-galaxy collection install -vv -r meta/collection-requirements.yml 29 | ``` 30 | 31 | ## Role Variables 32 | 33 | **kdump_target**: Can be specified to write vmcore to a location that is not in 34 | the root file system. If `type` is `raw` or a filesystem type, location points 35 | to a partition (by device node name, label, or uuid). For example: 36 | 37 | ```yaml 38 | kdump_target: 39 | type: raw 40 | location: /dev/sda1 41 | ``` 42 | 43 | or for an `ext4` filesystem: 44 | 45 | ```yaml 46 | kdump_target: 47 | type: ext4 48 | location: "12e3e25f-534e-4007-a40c-e7e080a933ad" 49 | ``` 50 | 51 | If `type` is `ssh`, location points to a server: 52 | example: 53 | 54 | ```yaml 55 | type: ssh 56 | location: user@example.com 57 | ``` 58 | 59 | Similarly for `nfs`, `location` points to an nfs server: 60 | 61 | ```yaml 62 | type: nfs 63 | location: nfs.example.com 64 | ``` 65 | 66 | Only the `ssh` type is considered stable, support for the other types 67 | is experimental. 68 | 69 | **kdump_path**: The path to which vmcore will be written. If `kdump_target` is not 70 | null, path is relative to that dump target. 
Otherwise, it must be an absolute 71 | path in the root file system. 72 | 73 | **kdump_core_collector**: A command to copy the vmcore. If null, uses `makedumpfile` 74 | with options depending on the `kdump_target.type`. 75 | 76 | **kdump_system_action**: 77 | The action that is performed when dumping the core file fails. Can be 78 | `reboot`, `halt`, `poweroff`, or `shell`. 79 | 80 | **kdump_auto_reset_crashkernel**: 81 | Whether to reset kernel crashkernel to new default value or not when kexec-tools 82 | updates the default crashkernel value and existing kernels using the old default 83 | kernel crashkernel value. 84 | 85 | **kdump_dracut_args**: 86 | Pass extra dracut options when rebuilding kdump initrd. 87 | 88 | **kdump_reboot_ok**: If you run the role on a managed node that does not have 89 | memory reserved for crash kernel, i.e. the file `/sys/kernel/kexec_crash_size` 90 | contains `0`, it might be required to reboot the managed node to configure kdump. 91 | 92 | By default, the role does not reboot the managed node. If a managed node 93 | requires reboot, the role sets the `kdump_reboot_required` fact and fails, so 94 | that the user can reboot the managed node when needed. If you want the role to 95 | reboot the system if required, set this variable to `true`. You do not need to 96 | re-execute the role after boot. 97 | 98 | Default: `false` 99 | 100 | ## Ansible Facts Returned by the Role 101 | 102 | **kdump_reboot_required**: The role sets this fact if the managed node requires 103 | reboot to complete kdump configuration. Re-execute the role after boot to ensure 104 | that kdump is working. 105 | 106 | ## rpm-ostree 107 | 108 | See README-ostree.md 109 | 110 | ## License 111 | 112 | MIT 113 | -------------------------------------------------------------------------------- /ansible_pytest_extra_requirements.txt: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | 3 | # ansible and dependencies for all supported platforms 4 | ansible ; python_version > "2.6" 5 | idna<2.8 ; python_version < "2.7" 6 | PyYAML<5.1 ; python_version < "2.7" 7 | -------------------------------------------------------------------------------- /contributing.md: -------------------------------------------------------------------------------- 1 | # Contributing to the kdump Linux System Role 2 | 3 | ## Where to start 4 | 5 | The first place to go is [Contribute](https://linux-system-roles.github.io/contribute.html). 6 | This has all of the common information that all role developers need: 7 | 8 | * Role structure and layout 9 | * Development tools - How to run tests and checks 10 | * Ansible recommended practices 11 | * Basic git and github information 12 | * How to create git commits and submit pull requests 13 | 14 | **Bugs and needed implementations** are listed on 15 | [Github Issues](https://github.com/linux-system-roles/kdump/issues). 16 | Issues labeled with 17 | [**help wanted**](https://github.com/linux-system-roles/kdump/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) 18 | are likely to be suitable for new contributors! 19 | 20 | **Code** is managed on [Github](https://github.com/linux-system-roles/kdump), using 21 | [Pull Requests](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests). 
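As a usage sketch for the role variables documented in `README.md` above (illustrative only — the host pattern and ssh dump target are placeholders, not values from this repository, and the role name assumes the `linux-system-roles.kdump` naming used by the tests; adjust if you installed the role from a collection), the documented variables might be combined like this:

```yaml
# Illustrative playbook: dump vmcore over ssh and allow the role to reboot
# the managed node if crash memory is not yet reserved.
- name: Manage kdump with the kdump system role
  hosts: managed-node.example.com                    # placeholder host
  vars:
    kdump_target:
      type: ssh
      location: kdumpuser@dump-server.example.com    # placeholder dump target
    kdump_path: /var/crash                           # matches the role default
    kdump_system_action: reboot
    kdump_auto_reset_crashkernel: true
    kdump_reboot_ok: true                            # let the role reboot if required
  roles:
    - linux-system-roles.kdump
```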
22 | -------------------------------------------------------------------------------- /custom_requirements.txt: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | 3 | # Write requirements for running your custom commands in tox here: 4 | -------------------------------------------------------------------------------- /defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kdump_target: null 3 | kdump_path: /var/crash 4 | kdump_core_collector: null 5 | kdump_system_action: reboot 6 | kdump_ssh_user: null 7 | kdump_ssh_server: null 8 | kdump_sshkey: /root/.ssh/kdump_id_rsa 9 | kdump_reboot_ok: false 10 | kdump_auto_reset_crashkernel: true 11 | kdump_dracut_args: null 12 | -------------------------------------------------------------------------------- /handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Restart kdump 3 | service: 4 | name: kdump 5 | state: restarted 6 | when: 7 | - not __kdump_service_start.changed | d(false) 8 | - not kdump_reboot_required 9 | -------------------------------------------------------------------------------- /meta/collection-requirements.yml: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | --- 3 | collections: 4 | - ansible.posix 5 | -------------------------------------------------------------------------------- /meta/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | galaxy_info: 3 | author: Baoquan He 4 | description: Configure kdump 5 | company: Red Hat, Inc. 6 | license: MIT 7 | min_ansible_version: "2.9" 8 | platforms: 9 | - name: Fedora 10 | versions: 11 | - all 12 | - name: EL 13 | versions: 14 | - "6" 15 | - "7" 16 | - "8" 17 | - "9" 18 | galaxy_tags: 19 | - centos 20 | - el6 21 | - el7 22 | - el8 23 | - el9 24 | - el10 25 | - fedora 26 | - kdump 27 | - redhat 28 | - rhel 29 | - system 30 | -------------------------------------------------------------------------------- /molecule/default/Dockerfile.j2: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | # Molecule managed 3 | 4 | {% if item.registry is defined %} 5 | FROM {{ item.registry.url }}/{{ item.image }} 6 | {% else %} 7 | FROM {{ item.image }} 8 | {% endif %} 9 | 10 | RUN set -euo pipefail; \ 11 | pkgs="python sudo yum-plugin-ovl bash"; \ 12 | if grep 'CentOS release 6' /etc/centos-release > /dev/null 2>&1; then \ 13 | for file in /etc/yum.repos.d/CentOS-*.repo; do \ 14 | if ! 
grep '^baseurl=.*vault[.]centos[.]org' "$file"; then \ 15 | sed -i -e 's,^mirrorlist,#mirrorlist,' \ 16 | -e 's,^#baseurl=,baseurl=,' \ 17 | -e 's,mirror.centos.org/centos/$releasever,vault.centos.org/6.10,' \ 18 | "$file"; \ 19 | fi; \ 20 | done; \ 21 | pkgs="$pkgs upstart chkconfig initscripts"; \ 22 | fi; \ 23 | if [ $(command -v apt-get) ]; then apt-get update && apt-get install -y python sudo bash ca-certificates && apt-get clean; \ 24 | elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install python3 sudo python3-devel python3-dnf bash && dnf clean all; \ 25 | elif [ $(command -v yum) ]; then yum makecache fast && yum install -y $pkgs && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \ 26 | elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python sudo bash python-xml && zypper clean -a; \ 27 | elif [ $(command -v apk) ]; then apk update && apk add --no-cache python sudo bash ca-certificates; \ 28 | elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python sudo bash ca-certificates && xbps-remove -O; fi 29 | -------------------------------------------------------------------------------- /molecule/default/molecule.yml: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | --- 3 | dependency: 4 | name: galaxy 5 | driver: 6 | name: ${LSR_MOLECULE_DRIVER:-docker} 7 | platforms: 8 | - name: centos-6 9 | image: registry.centos.org/centos:6 10 | volumes: 11 | - /sys/fs/cgroup:/sys/fs/cgroup:ro 12 | privileged: true 13 | command: /sbin/init 14 | - name: centos-7 15 | image: registry.centos.org/centos/systemd:latest 16 | volumes: 17 | - /sys/fs/cgroup:/sys/fs/cgroup:ro 18 | privileged: true 19 | command: /usr/lib/systemd/systemd --system 20 | - name: centos-8 21 | image: registry.centos.org/centos:8 22 | volumes: 23 | - /sys/fs/cgroup:/sys/fs/cgroup:ro 24 | privileged: true 25 | command: /usr/lib/systemd/systemd --system 26 | provisioner: 27 | name: ansible 28 | log: true 29 | playbooks: 30 | converge: ../../tests/tests_default.yml 31 | scenario: 32 | name: default 33 | test_sequence: 34 | - destroy 35 | - create 36 | - converge 37 | - idempotence 38 | - check 39 | - destroy 40 | -------------------------------------------------------------------------------- /molecule_extra_requirements.txt: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | 3 | # Write extra requirements for running molecule here: 4 | -------------------------------------------------------------------------------- /plans/README-plans.md: -------------------------------------------------------------------------------- 1 | # Introduction CI Testing Plans 2 | 3 | Linux System Roles CI runs [tmt](https://tmt.readthedocs.io/en/stable/index.html) test plans in [Testing farm](https://docs.testing-farm.io/Testing%20Farm/0.1/index.html) with the [tft.yml](https://github.com/linux-system-roles/kdump/blob/main/.github/workflows/tft.yml) GitHub workflow. 4 | 5 | The `plans/test_playbooks_parallel.fmf` plan is a test plan that runs test playbooks in parallel on multiple managed nodes. 6 | `plans/test_playbooks_parallel.fmf` is generated centrally from `https://github.com/linux-system-roles/.github/`. 
7 | The automation calculates the number of managed nodes to provision with this formula: 8 | 9 | ```plain 10 | number-of-test-playbooks / 10 + 1 11 | ``` 12 | 13 | The `plans/test_playbooks_parallel.fmf` plan does the following steps: 14 | 15 | 1. Provisions systems to be used as a control node and as managed nodes. 16 | 2. Does the required preparation on systems. 17 | 3. For the given role and the given PR, runs the general test from [test.sh](https://github.com/linux-system-roles/tft-tests/blob/main/tests/general/test.sh). 18 | 19 | The [tft.yml](https://github.com/linux-system-roles/kdump/blob/main/.github/workflows/tft.yml) workflow runs the above plan and uploads the results to our Fedora storage for public access. 20 | This workflow uses Testing Farm's Github Action [Schedule tests on Testing Farm](https://github.com/marketplace/actions/schedule-tests-on-testing-farm). 21 | 22 | ## Running Tests 23 | 24 | You can run tests locally with the `tmt try` cli or remotely in Testing Farm. 25 | 26 | ### Running Tests Locally 27 | 28 | 1. Install `tmt` as described in [Installation](https://tmt.readthedocs.io/en/stable/stories/install.html). 29 | 2. Change to the role repository directory. 30 | 3. Modify `plans/test_playbooks_parallel.fmf` to suit your requirements: 31 | 1. Due to [issue #3138](https://github.com/teemtee/tmt/issues/3138), comment out all managed nodes except for one. 32 | 2. Optionally modify environment variables to, e.g. run only specified test playbooks by modifying `SYSTEM_ROLES_ONLY_TESTS`. 33 | 4. Enter `tmt try -p plans/test_playbooks_parallel `. 34 | This command identifies the `plans/test_playbooks_parallel.fmf` plan and provisions local VMs, a control node and a managed node. 35 | 5. `tmt try` is in development and does not identify tests from URL automatically, so after provisioning the machines, you must type `t`, `p`, `t` from the interactive prompt to identify tests, run preparation steps, and run the tests. 36 | 37 | ### Running in Testing Farm 38 | 39 | 1. Install `testing-farm` as described in [Installation](https://gitlab.com/testing-farm/cli/-/blob/main/README.adoc#user-content-installation). 40 | 2. Change to the role repository directory. 41 | 3. If you want to run tests with edits in your branch, you need to commit and push changes first to some branch. 42 | 4. You can uncomment "Inject your ssh public key to test systems" discover step in the plan if you want to troubleshoot tests by SSHing into test systems in Testing Farm. 43 | 5. Enter `testing-farm request`. 44 | Edit to your needs. 45 | 46 | ```bash 47 | $ TESTING_FARM_API_TOKEN= \ 48 | testing-farm request --pipeline-type="tmt-multihost" \ 49 | --plan-filter="tag:playbooks_parallel" \ 50 | --git-url "https://github.com//kdump" \ 51 | --git-ref "" \ 52 | --compose CentOS-Stream-9 \ 53 | -e "SYSTEM_ROLES_ONLY_TESTS=tests_default.yml" \ 54 | --no-wait 55 | ``` 56 | -------------------------------------------------------------------------------- /plans/test_playbooks_parallel.fmf: -------------------------------------------------------------------------------- 1 | summary: A general test for a system role 2 | tag: playbooks_parallel 3 | provision: 4 | # TF uses `how: artemis`, and `tmt try`` uses `how: virtual`. 5 | # Hence there is no need to define `how` explicitly. 
6 | - name: control-node1 7 | role: control_node 8 | - name: managed-node1 9 | role: managed_node 10 | - name: managed-node2 11 | role: managed_node 12 | environment: 13 | SR_ANSIBLE_VER: 2.17 14 | SR_REPO_NAME: kdump 15 | SR_PYTHON_VERSION: 3.12 16 | SR_ONLY_TESTS: "" # tests_default.yml 17 | SR_TEST_LOCAL_CHANGES: true 18 | SR_PR_NUM: "" 19 | SR_LSR_USER: "" 20 | SR_LSR_DOMAIN: "" 21 | SR_LSR_SSH_KEY: "" 22 | SR_ARTIFACTS_DIR: "" 23 | SR_ARTIFACTS_URL: "" 24 | SR_TFT_DEBUG: false 25 | prepare: 26 | - name: Use vault.centos.org repos (CS 7, 8 EOL workaround) 27 | script: | 28 | if grep -q 'CentOS Stream release 8' /etc/redhat-release; then 29 | sed -i '/^mirror/d;s/#\(baseurl=http:\/\/\)mirror/\1vault/' /etc/yum.repos.d/*.repo 30 | fi 31 | if grep -q 'CentOS Linux release 7.9' /etc/redhat-release; then 32 | sed -i '/^mirror/d;s/#\?\(baseurl=http:\/\/\)mirror/\1vault/' /etc/yum.repos.d/*.repo 33 | fi 34 | # Replace with feature: epel: enabled once https://github.com/teemtee/tmt/pull/3128 is merged 35 | - name: Enable epel to install beakerlib 36 | script: | 37 | # CS 10 and Fedora doesn't require epel 38 | if grep -q -e 'CentOS Stream release 10' -e 'Fedora release' /etc/redhat-release; then 39 | exit 0 40 | fi 41 | yum install epel-release yum-utils -y 42 | yum-config-manager --enable epel epel-debuginfo epel-source 43 | discover: 44 | - name: Prepare managed node 45 | how: fmf 46 | where: managed_node 47 | filter: tag:prep_managed_node 48 | url: https://github.com/linux-system-roles/tft-tests 49 | ref: main 50 | - name: Run test playbooks from control_node 51 | how: fmf 52 | where: control_node 53 | filter: tag:test_playbooks 54 | url: https://github.com/linux-system-roles/tft-tests 55 | ref: main 56 | # Uncomment this step for troubleshooting 57 | # This is required because currently testing-farm cli doesn't support running multi-node plans 58 | # You can set ID_RSA_PUB in the environment section above to your public key to distribute it to nodes 59 | # - name: Inject your ssh public key to test systems 60 | # how: fmf 61 | # where: control_node 62 | # filter: tag:reserve_system 63 | # url: https://github.com/linux-system-roles/tft-tests 64 | # ref: main 65 | execute: 66 | how: tmt 67 | -------------------------------------------------------------------------------- /pylint_extra_requirements.txt: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | 3 | # Write extra requirements for running pylint here: 4 | -------------------------------------------------------------------------------- /pylintrc: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | 3 | # This file was generated using `pylint --generate-rcfile > pylintrc` command. 4 | [MASTER] 5 | 6 | # A comma-separated list of package or module names from where C extensions may 7 | # be loaded. Extensions are loading into the active Python interpreter and may 8 | # run arbitrary code 9 | extension-pkg-whitelist= 10 | 11 | # Add files or directories to the blacklist. They should be base names, not 12 | # paths. 13 | ignore=.git,.tox 14 | 15 | # Add files or directories matching the regex patterns to the blacklist. The 16 | # regex matches against base names, not paths. 17 | ignore-patterns= 18 | 19 | # Python code to execute, usually for sys.path manipulation such as 20 | # pygtk.require(). 21 | #init-hook= 22 | 23 | # Use multiple processes to speed up Pylint. 
24 | jobs=1 25 | 26 | # List of plugins (as comma separated values of python modules names) to load, 27 | # usually to register additional checkers. 28 | load-plugins= 29 | 30 | # Pickle collected data for later comparisons. 31 | persistent=yes 32 | 33 | # Specify a configuration file. 34 | #rcfile= 35 | 36 | # When enabled, pylint would attempt to guess common misconfiguration and emit 37 | # user-friendly hints instead of false-positive error messages 38 | suggestion-mode=yes 39 | 40 | # Allow loading of arbitrary C extensions. Extensions are imported into the 41 | # active Python interpreter and may run arbitrary code. 42 | unsafe-load-any-extension=no 43 | 44 | 45 | [MESSAGES CONTROL] 46 | 47 | # Only show warnings with the listed confidence levels. Leave empty to show 48 | # all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED 49 | confidence= 50 | 51 | # Disable the message, report, category or checker with the given id(s). You 52 | # can either give multiple identifiers separated by comma (,) or put this 53 | # option multiple times (only on the command line, not in the configuration 54 | # file where it should appear only once).You can also use "--disable=all" to 55 | # disable everything first and then re-enable specific checks. For example, if 56 | # you want to run only the similarities checker, you can use "--disable=all 57 | # --enable=similarities". If you want to run only the classes checker, but have 58 | # no Warning level messages displayed, use"--disable=all --enable=classes 59 | # --disable=W" 60 | disable=wrong-import-position 61 | #disable=print-statement, 62 | # parameter-unpacking, 63 | # unpacking-in-except, 64 | # old-raise-syntax, 65 | # backtick, 66 | # long-suffix, 67 | # old-ne-operator, 68 | # old-octal-literal, 69 | # import-star-module-level, 70 | # non-ascii-bytes-literal, 71 | # raw-checker-failed, 72 | # bad-inline-option, 73 | # locally-disabled, 74 | # locally-enabled, 75 | # file-ignored, 76 | # suppressed-message, 77 | # useless-suppression, 78 | # deprecated-pragma, 79 | # apply-builtin, 80 | # basestring-builtin, 81 | # buffer-builtin, 82 | # cmp-builtin, 83 | # coerce-builtin, 84 | # execfile-builtin, 85 | # file-builtin, 86 | # long-builtin, 87 | # raw_input-builtin, 88 | # reduce-builtin, 89 | # standarderror-builtin, 90 | # unicode-builtin, 91 | # xrange-builtin, 92 | # coerce-method, 93 | # delslice-method, 94 | # getslice-method, 95 | # setslice-method, 96 | # no-absolute-import, 97 | # old-division, 98 | # dict-iter-method, 99 | # dict-view-method, 100 | # next-method-called, 101 | # metaclass-assignment, 102 | # indexing-exception, 103 | # raising-string, 104 | # reload-builtin, 105 | # oct-method, 106 | # hex-method, 107 | # nonzero-method, 108 | # cmp-method, 109 | # input-builtin, 110 | # round-builtin, 111 | # intern-builtin, 112 | # unichr-builtin, 113 | # map-builtin-not-iterating, 114 | # zip-builtin-not-iterating, 115 | # range-builtin-not-iterating, 116 | # filter-builtin-not-iterating, 117 | # using-cmp-argument, 118 | # eq-without-hash, 119 | # div-method, 120 | # idiv-method, 121 | # rdiv-method, 122 | # exception-message-attribute, 123 | # invalid-str-codec, 124 | # sys-max-int, 125 | # bad-python3-import, 126 | # deprecated-string-function, 127 | # deprecated-str-translate-call, 128 | # deprecated-itertools-function, 129 | # deprecated-types-field, 130 | # next-method-defined, 131 | # dict-items-not-iterating, 132 | # dict-keys-not-iterating, 133 | # dict-values-not-iterating 134 | 135 | # Enable the message, 
report, category or checker with the given id(s). You can 136 | # either give multiple identifier separated by comma (,) or put this option 137 | # multiple time (only on the command line, not in the configuration file where 138 | # it should appear only once). See also the "--disable" option for examples. 139 | enable=c-extension-no-member 140 | 141 | 142 | [REPORTS] 143 | 144 | # Python expression which should return a note less than 10 (10 is the highest 145 | # note). You have access to the variables errors warning, statement which 146 | # respectively contain the number of errors / warnings messages and the total 147 | # number of statements analyzed. This is used by the global evaluation report 148 | # (RP0004). 149 | evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) 150 | 151 | # Template used to display messages. This is a python new-style format string 152 | # used to format the message information. See doc for all details 153 | #msg-template= 154 | 155 | # Set the output format. Available formats are text, parseable, colorized, json 156 | # and msvs (visual studio).You can also give a reporter class, eg 157 | # mypackage.mymodule.MyReporterClass. 158 | output-format=text 159 | 160 | # Tells whether to display a full report or only the messages 161 | reports=no 162 | 163 | # Activate the evaluation score. 164 | score=yes 165 | 166 | 167 | [REFACTORING] 168 | 169 | # Maximum number of nested blocks for function / method body 170 | max-nested-blocks=5 171 | 172 | # Complete name of functions that never returns. When checking for 173 | # inconsistent-return-statements if a never returning function is called then 174 | # it will be considered as an explicit return statement and no message will be 175 | # printed. 176 | never-returning-functions=optparse.Values,sys.exit 177 | 178 | 179 | [LOGGING] 180 | 181 | # Logging modules to check that the string format arguments are in logging 182 | # function parameter format 183 | logging-modules=logging 184 | 185 | 186 | [TYPECHECK] 187 | 188 | # List of decorators that produce context managers, such as 189 | # contextlib.contextmanager. Add to this list to register other decorators that 190 | # produce valid context managers. 191 | contextmanager-decorators=contextlib.contextmanager 192 | 193 | # List of members which are set dynamically and missed by pylint inference 194 | # system, and so shouldn't trigger E1101 when accessed. Python regular 195 | # expressions are accepted. 196 | generated-members= 197 | 198 | # Tells whether missing members accessed in mixin class should be ignored. A 199 | # mixin class is detected if its name ends with "mixin" (case insensitive). 200 | ignore-mixin-members=yes 201 | 202 | # This flag controls whether pylint should warn about no-member and similar 203 | # checks whenever an opaque object is returned when inferring. The inference 204 | # can return multiple potential results while evaluating a Python object, but 205 | # some branches might not be evaluated, which results in partial inference. In 206 | # that case, it might be useful to still emit no-member and other checks for 207 | # the rest of the inferred objects. 208 | ignore-on-opaque-inference=yes 209 | 210 | # List of class names for which member attributes should not be checked (useful 211 | # for classes with dynamically set attributes). This supports the use of 212 | # qualified names. 
213 | ignored-classes=optparse.Values,thread._local,_thread._local 214 | 215 | # List of module names for which member attributes should not be checked 216 | # (useful for modules/projects where namespaces are manipulated during runtime 217 | # and thus existing member attributes cannot be deduced by static analysis. It 218 | # supports qualified module names, as well as Unix pattern matching. 219 | ignored-modules= 220 | 221 | # Show a hint with possible names when a member name was not found. The aspect 222 | # of finding the hint is based on edit distance. 223 | missing-member-hint=yes 224 | 225 | # The minimum edit distance a name should have in order to be considered a 226 | # similar match for a missing member name. 227 | missing-member-hint-distance=1 228 | 229 | # The total number of similar names that should be taken in consideration when 230 | # showing a hint for a missing member. 231 | missing-member-max-choices=1 232 | 233 | 234 | [FORMAT] 235 | 236 | # Expected format of line ending, e.g. empty (any line ending), LF or CRLF. 237 | expected-line-ending-format= 238 | 239 | # Regexp for a line that is allowed to be longer than the limit. 240 | ignore-long-lines=^\s*(# )??$ 241 | 242 | # Number of spaces of indent required inside a hanging or continued line. 243 | indent-after-paren=4 244 | 245 | # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 246 | # tab). 247 | indent-string=' ' 248 | 249 | # Maximum number of characters on a single line. 250 | max-line-length=88 251 | 252 | # Maximum number of lines in a module 253 | max-module-lines=1000 254 | 255 | # List of optional constructs for which whitespace checking is disabled. `dict- 256 | # separator` is used to allow tabulation in dicts, etc.: {1 : 1,\n222: 2}. 257 | # `trailing-comma` allows a space between comma and closing bracket: (a, ). 258 | # `empty-line` allows space-only lines. 259 | no-space-check=trailing-comma, 260 | dict-separator 261 | 262 | # Allow the body of a class to be on the same line as the declaration if body 263 | # contains single statement. 264 | single-line-class-stmt=no 265 | 266 | # Allow the body of an if to be on the same line as the test if there is no 267 | # else. 268 | single-line-if-stmt=no 269 | 270 | 271 | [SPELLING] 272 | 273 | # Limits count of emitted suggestions for spelling mistakes 274 | max-spelling-suggestions=4 275 | 276 | # Spelling dictionary name. Available dictionaries: none. To make it working 277 | # install python-enchant package. 278 | spelling-dict= 279 | 280 | # List of comma separated words that should not be checked. 281 | spelling-ignore-words= 282 | 283 | # A path to a file that contains private dictionary; one word per line. 284 | spelling-private-dict-file= 285 | 286 | # Tells whether to store unknown words to indicated private dictionary in 287 | # --spelling-private-dict-file option instead of raising a message. 288 | spelling-store-unknown-words=no 289 | 290 | 291 | [SIMILARITIES] 292 | 293 | # Ignore comments when computing similarities. 294 | ignore-comments=yes 295 | 296 | # Ignore docstrings when computing similarities. 297 | ignore-docstrings=yes 298 | 299 | # Ignore imports when computing similarities. 300 | ignore-imports=no 301 | 302 | # Minimum lines number of a similarity. 303 | min-similarity-lines=4 304 | 305 | 306 | [VARIABLES] 307 | 308 | # List of additional names supposed to be defined in builtins. Remember that 309 | # you should avoid to define new builtins when possible. 
310 | additional-builtins= 311 | 312 | # Tells whether unused global variables should be treated as a violation. 313 | allow-global-unused-variables=yes 314 | 315 | # List of strings which can identify a callback function by name. A callback 316 | # name must start or end with one of those strings. 317 | callbacks=cb_, 318 | _cb 319 | 320 | # A regular expression matching the name of dummy variables (i.e. expectedly 321 | # not used). 322 | dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_ 323 | 324 | # Argument names that match this expression will be ignored. Default to name 325 | # with leading underscore 326 | ignored-argument-names=_.*|^ignored_|^unused_ 327 | 328 | # Tells whether we should check for unused import in __init__ files. 329 | init-import=no 330 | 331 | # List of qualified module names which can have objects that can redefine 332 | # builtins. 333 | redefining-builtins-modules=six.moves,past.builtins,future.builtins 334 | 335 | 336 | [BASIC] 337 | 338 | # Naming style matching correct argument names 339 | argument-naming-style=snake_case 340 | 341 | # Regular expression matching correct argument names. Overrides argument- 342 | # naming-style 343 | #argument-rgx= 344 | 345 | # Naming style matching correct attribute names 346 | attr-naming-style=snake_case 347 | 348 | # Regular expression matching correct attribute names. Overrides attr-naming- 349 | # style 350 | #attr-rgx= 351 | 352 | # Bad variable names which should always be refused, separated by a comma 353 | bad-names=foo, 354 | bar, 355 | baz, 356 | toto, 357 | tutu, 358 | tata 359 | 360 | # Naming style matching correct class attribute names 361 | class-attribute-naming-style=any 362 | 363 | # Regular expression matching correct class attribute names. Overrides class- 364 | # attribute-naming-style 365 | #class-attribute-rgx= 366 | 367 | # Naming style matching correct class names 368 | class-naming-style=PascalCase 369 | 370 | # Regular expression matching correct class names. Overrides class-naming-style 371 | #class-rgx= 372 | 373 | # Naming style matching correct constant names 374 | const-naming-style=UPPER_CASE 375 | 376 | # Regular expression matching correct constant names. Overrides const-naming- 377 | # style 378 | #const-rgx= 379 | 380 | # Minimum line length for functions/classes that require docstrings, shorter 381 | # ones are exempt. 382 | docstring-min-length=-1 383 | 384 | # Naming style matching correct function names 385 | function-naming-style=snake_case 386 | 387 | # Regular expression matching correct function names. Overrides function- 388 | # naming-style 389 | #function-rgx= 390 | 391 | # Good variable names which should always be accepted, separated by a comma 392 | good-names=i, 393 | j, 394 | k, 395 | ex, 396 | Run, 397 | _ 398 | 399 | # Include a hint for the correct naming format with invalid-name 400 | include-naming-hint=no 401 | 402 | # Naming style matching correct inline iteration names 403 | inlinevar-naming-style=any 404 | 405 | # Regular expression matching correct inline iteration names. Overrides 406 | # inlinevar-naming-style 407 | #inlinevar-rgx= 408 | 409 | # Naming style matching correct method names 410 | method-naming-style=snake_case 411 | 412 | # Regular expression matching correct method names. Overrides method-naming- 413 | # style 414 | #method-rgx= 415 | 416 | # Naming style matching correct module names 417 | module-naming-style=snake_case 418 | 419 | # Regular expression matching correct module names. 
Overrides module-naming- 420 | # style 421 | #module-rgx= 422 | 423 | # Colon-delimited sets of names that determine each other's naming style when 424 | # the name regexes allow several styles. 425 | name-group= 426 | 427 | # Regular expression which should only match function or class names that do 428 | # not require a docstring. 429 | no-docstring-rgx=^_ 430 | 431 | # List of decorators that produce properties, such as abc.abstractproperty. Add 432 | # to this list to register other decorators that produce valid properties. 433 | property-classes=abc.abstractproperty 434 | 435 | # Naming style matching correct variable names 436 | variable-naming-style=snake_case 437 | 438 | # Regular expression matching correct variable names. Overrides variable- 439 | # naming-style 440 | #variable-rgx= 441 | 442 | 443 | [MISCELLANEOUS] 444 | 445 | # List of note tags to take in consideration, separated by a comma. 446 | notes=FIXME, 447 | XXX, 448 | TODO 449 | 450 | 451 | [IMPORTS] 452 | 453 | # Allow wildcard imports from modules that define __all__. 454 | allow-wildcard-with-all=no 455 | 456 | # Analyse import fallback blocks. This can be used to support both Python 2 and 457 | # 3 compatible code, which means that the block might have code that exists 458 | # only in one or another interpreter, leading to false positives when analysed. 459 | analyse-fallback-blocks=no 460 | 461 | # Deprecated modules which should not be used, separated by a comma 462 | deprecated-modules=regsub, 463 | TERMIOS, 464 | Bastion, 465 | rexec 466 | 467 | # Create a graph of external dependencies in the given file (report RP0402 must 468 | # not be disabled) 469 | ext-import-graph= 470 | 471 | # Create a graph of every (i.e. internal and external) dependencies in the 472 | # given file (report RP0402 must not be disabled) 473 | import-graph= 474 | 475 | # Create a graph of internal dependencies in the given file (report RP0402 must 476 | # not be disabled) 477 | int-import-graph= 478 | 479 | # Force import order to recognize a module as part of the standard 480 | # compatibility libraries. 481 | known-standard-library= 482 | 483 | # Force import order to recognize a module as part of a third party library. 484 | known-third-party=enchant 485 | 486 | 487 | [DESIGN] 488 | 489 | # Maximum number of arguments for function / method 490 | max-args=5 491 | 492 | # Maximum number of attributes for a class (see R0902). 493 | max-attributes=7 494 | 495 | # Maximum number of boolean expressions in a if statement 496 | max-bool-expr=5 497 | 498 | # Maximum number of branch for function / method body 499 | max-branches=12 500 | 501 | # Maximum number of locals for function / method body 502 | max-locals=15 503 | 504 | # Maximum number of parents for a class (see R0901). 505 | max-parents=7 506 | 507 | # Maximum number of public methods for a class (see R0904). 508 | max-public-methods=20 509 | 510 | # Maximum number of return / yield for function / method body 511 | max-returns=6 512 | 513 | # Maximum number of statements in function / method body 514 | max-statements=50 515 | 516 | # Minimum number of public methods for a class (see R0903). 517 | min-public-methods=2 518 | 519 | 520 | [CLASSES] 521 | 522 | # List of method names used to declare (i.e. assign) instance attributes. 523 | defining-attr-methods=__init__, 524 | __new__, 525 | setUp 526 | 527 | # List of member names, which should be excluded from the protected access 528 | # warning. 
529 | exclude-protected=_asdict, 530 | _fields, 531 | _replace, 532 | _source, 533 | _make 534 | 535 | # List of valid names for the first argument in a class method. 536 | valid-classmethod-first-arg=cls 537 | 538 | # List of valid names for the first argument in a metaclass class method. 539 | valid-metaclass-classmethod-first-arg=mcs 540 | 541 | 542 | [EXCEPTIONS] 543 | 544 | # Exceptions that will emit a warning when being caught. Defaults to 545 | # "Exception" 546 | overgeneral-exceptions=Exception 547 | -------------------------------------------------------------------------------- /pytest_extra_requirements.txt: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | 3 | # Write extra requirements for running pytest here: 4 | # If you need ansible then uncomment the following line: 5 | #-ransible_pytest_extra_requirements.txt 6 | # If you need mock then uncomment the following line: 7 | #mock ; python_version < "3.0" 8 | -------------------------------------------------------------------------------- /tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ensure ansible_facts used by role 3 | include_tasks: set_vars.yml 4 | 5 | - name: Install required packages 6 | package: 7 | name: "{{ __kdump_packages }}" 8 | state: present 9 | use: "{{ (__kdump_is_ostree | d(false)) | 10 | ternary('ansible.posix.rhel_rpm_ostree', omit) }}" 11 | 12 | - name: Ensure that kdump is enabled 13 | service: 14 | name: kdump 15 | enabled: true 16 | 17 | - name: Include SSH tasks 18 | include_tasks: ssh.yml 19 | when: 20 | - kdump_target.type | d(none) == 'ssh' 21 | 22 | - name: Get mode of /etc/kdump.conf if it exists 23 | stat: 24 | path: /etc/kdump.conf 25 | register: __kdump_conf 26 | 27 | - name: Generate /etc/kdump.conf 28 | template: 29 | src: kdump.conf.j2 30 | dest: /etc/kdump.conf 31 | backup: true 32 | mode: "{{ __kdump_conf.stat.mode | d('0644') }}" 33 | notify: Restart kdump 34 | 35 | - name: Find out reserved memory for the crash kernel 36 | slurp: 37 | src: /sys/kernel/kexec_crash_size 38 | register: kexec_crash_size 39 | until: kexec_crash_size is success 40 | 41 | - name: Set the kdump_reboot_required fact 42 | set_fact: 43 | kdump_reboot_required: "{{ 44 | kexec_crash_size.content | b64decode | int < 1 }}" 45 | 46 | - name: Update crashkernel setting if needed 47 | when: 48 | - kdump_reboot_required | bool 49 | - ansible_facts['distribution'] in ['RedHat', 'CentOS', 'Fedora'] 50 | block: 51 | - name: Use kdumpctl reset-crashkernel if needed 52 | command: kdumpctl reset-crashkernel --kernel=ALL 53 | when: ansible_facts['distribution_major_version'] | int >= 9 54 | changed_when: true 55 | 56 | - name: Use grubby to update crashkernel=auto if needed 57 | command: grubby --args=crashkernel=auto --update-kernel=ALL 58 | when: ansible_facts['distribution_major_version'] | int <= 8 59 | changed_when: true 60 | 61 | - name: Fail if reboot is required and kdump_reboot_ok is false 62 | fail: 63 | msg: >- 64 | "Reboot is required to apply changes. Re-execute the role after boot to 65 | ensure that kdump is working." 
66 | when: 67 | - kdump_reboot_required | bool 68 | - not kdump_reboot_ok 69 | 70 | - name: Reboot the managed node 71 | reboot: 72 | when: 73 | - kdump_reboot_required | bool 74 | - kdump_reboot_ok | bool 75 | 76 | - name: Clear the kdump_reboot_required flag 77 | set_fact: 78 | kdump_reboot_required: false 79 | when: 80 | - kdump_reboot_required | bool 81 | - kdump_reboot_ok | bool 82 | 83 | # This task is for users who run the role after reboot. The handlers do not run 84 | # in this case, because the role results in not changed state. It is necessary 85 | # to explicitly start kdump to rebuild initrd. 86 | - name: Ensure that kdump is started 87 | service: 88 | name: kdump 89 | state: started 90 | register: __kdump_service_start 91 | -------------------------------------------------------------------------------- /tasks/set_vars.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ensure ansible_facts used by role 3 | setup: 4 | gather_subset: "{{ __kdump_required_facts_subsets }}" 5 | when: __kdump_required_facts | 6 | difference(ansible_facts.keys() | list) | length > 0 7 | 8 | - name: Determine if system is ostree and set flag 9 | when: not __kdump_is_ostree is defined 10 | block: 11 | - name: Check if system is ostree 12 | stat: 13 | path: /run/ostree-booted 14 | register: __ostree_booted_stat 15 | 16 | - name: Set flag to indicate system is ostree 17 | set_fact: 18 | __kdump_is_ostree: "{{ __ostree_booted_stat.stat.exists }}" 19 | 20 | - name: Set platform/version specific variables 21 | include_vars: "{{ __vars_file }}" 22 | loop: 23 | - "{{ ansible_facts['os_family'] }}.yml" 24 | - "{{ ansible_facts['distribution'] }}.yml" 25 | - >- 26 | {{ ansible_facts['distribution'] ~ '_' ~ 27 | ansible_facts['distribution_major_version'] }}.yml 28 | - >- 29 | {{ ansible_facts['distribution'] ~ '_' ~ 30 | ansible_facts['distribution_version'] }}.yml 31 | vars: 32 | __vars_file: "{{ role_path }}/vars/{{ item }}" 33 | when: __vars_file is file 34 | -------------------------------------------------------------------------------- /tasks/ssh.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create key if it does not exist 3 | command: 4 | cmd: "/usr/bin/ssh-keygen -t rsa -f {{ kdump_sshkey }} -N '' " 5 | creates: "{{ kdump_sshkey }}" 6 | 7 | - name: Fetch key 8 | slurp: 9 | src: "{{ kdump_sshkey }}.pub" 10 | register: __kdump_keydata 11 | 12 | - name: Handle authorized_keys update 13 | vars: 14 | __kdump_ssh_path: "{{ __kdump_ssh_user_info.home ~ '/.ssh' }}" 15 | __kdump_authorized_keys_path: "{{ 16 | __kdump_ssh_user_info.home ~ '/.ssh/authorized_keys' }}" 17 | __kdump_new_key: "{{ __kdump_keydata.content | b64decode | trim }}" 18 | delegate_to: "{{ kdump_ssh_server }}" 19 | block: 20 | - name: Get userinfo for {{ kdump_ssh_user }} 21 | user: 22 | name: "{{ kdump_ssh_user }}" 23 | state: present 24 | register: __kdump_ssh_user_info 25 | 26 | - name: Get the ssh directory for the user 27 | stat: 28 | path: "{{ __kdump_ssh_path }}" 29 | register: __kdump_ssh_path_stat 30 | 31 | - name: Get the authorized_keys file for the user 32 | stat: 33 | path: "{{ __kdump_authorized_keys_path }}" 34 | register: __kdump_authorized_keys_file 35 | 36 | - name: Ensure ssh directory for authorized_keys if needed 37 | file: 38 | path: "{{ __kdump_ssh_path_stat.stat.path | 39 | d(__kdump_ssh_path) }}" 40 | state: directory 41 | group: "{{ __kdump_ssh_path_stat.stat.gr_name | 42 | d(kdump_ssh_user) }}" 43 | 
owner: "{{ __kdump_ssh_path_stat.stat.pw_name | 44 | d(kdump_ssh_user) }}" 45 | mode: "{{ __kdump_ssh_path_stat.stat.mode | d('0700') }}" 46 | 47 | - name: Write new authorized_keys if needed 48 | lineinfile: 49 | line: "{{ __kdump_new_key }}" 50 | state: present 51 | create: true 52 | path: "{{ __kdump_authorized_keys_file.stat.path | 53 | d(__kdump_authorized_keys_path) }}" 54 | group: "{{ __kdump_authorized_keys_file.stat.gr_name | 55 | d(kdump_ssh_user) }}" 56 | owner: "{{ __kdump_authorized_keys_file.stat.pw_name | 57 | d(kdump_ssh_user) }}" 58 | mode: "{{ __kdump_authorized_keys_file.stat.mode | d('0600') }}" 59 | 60 | - name: Fetch the servers public key 61 | slurp: 62 | src: /etc/ssh/ssh_host_rsa_key.pub 63 | register: __kdump_serverpubkey 64 | delegate_to: "{{ kdump_ssh_server }}" 65 | 66 | - name: Add the servers public key to known_hosts on managed node 67 | known_hosts: 68 | key: "{{ __kdump_ssh_server_location }} {{ __kdump_serverpubkey.content | 69 | b64decode }}" 70 | name: "{{ __kdump_ssh_server_location }}" 71 | path: /etc/ssh/ssh_known_hosts 72 | -------------------------------------------------------------------------------- /templates/kdump.conf.j2: -------------------------------------------------------------------------------- 1 | {{ ansible_managed | comment }} 2 | {{ "system_role:kdump" | comment(prefix="", postfix="") }} 3 | 4 | {% if kdump_target %} 5 | {% if kdump_target.type == "ssh" %} 6 | ssh {{ kdump_target.location | d(kdump_ssh_user ~ '@' ~ kdump_ssh_server) }} 7 | 8 | {% if kdump_sshkey != '/root/.ssh/kdump_id_rsa' %} 9 | sshkey {{ kdump_sshkey }} 10 | {% endif %} 11 | {% else %} 12 | {{ kdump_target.type }} {{ kdump_target.location }} 13 | 14 | {% endif %} 15 | {% endif %} 16 | 17 | path {{ kdump_path }} 18 | {% if kdump_core_collector %} 19 | core_collector {{ kdump_core_collector }} 20 | {% endif %} 21 | 22 | {% if ansible_facts['distribution'] in ['RedHat', 'CentOS', 'Fedora'] %} 23 | {% if ansible_facts['distribution_major_version'] | int >= 9 %} 24 | auto_reset_crashkernel {{ kdump_auto_reset_crashkernel | bool | ternary("yes", "no") }} 25 | failure_action {{ kdump_system_action }} 26 | {% else %} 27 | default {{ kdump_system_action }} 28 | {% endif %} 29 | {% if ansible_facts['distribution_major_version'] | int >= 7 %} 30 | {% if kdump_dracut_args %} 31 | dracut_args {{ kdump_dracut_args }} 32 | {% endif %} 33 | {% endif %} 34 | {% endif %} 35 | -------------------------------------------------------------------------------- /tests/inventory.yaml.j2: -------------------------------------------------------------------------------- 1 | all: 2 | hosts: 3 | {{ inventory_hostname }}: 4 | {% for key in ["ansible_all_ipv4_addresses", "ansible_all_ipv6_addresses", 5 | "ansible_default_ipv4", "ansible_default_ipv6", "ansible_host", 6 | "ansible_port", "ansible_ssh_common_args", 7 | "ansible_ssh_private_key_file","ansible_user"] %} 8 | {% if key in hostvars[inventory_hostname] %} 9 | {{ key }}: {{ hostvars[inventory_hostname][key] }} 10 | {% endif %} 11 | {% endfor %} 12 | -------------------------------------------------------------------------------- /tests/roles/linux-system-roles.kdump/defaults: -------------------------------------------------------------------------------- 1 | ../../../defaults -------------------------------------------------------------------------------- /tests/roles/linux-system-roles.kdump/handlers: -------------------------------------------------------------------------------- 1 | ../../../handlers 
-------------------------------------------------------------------------------- /tests/roles/linux-system-roles.kdump/meta: -------------------------------------------------------------------------------- 1 | ../../../meta -------------------------------------------------------------------------------- /tests/roles/linux-system-roles.kdump/tasks: -------------------------------------------------------------------------------- 1 | ../../../tasks -------------------------------------------------------------------------------- /tests/roles/linux-system-roles.kdump/templates: -------------------------------------------------------------------------------- 1 | ../../../templates -------------------------------------------------------------------------------- /tests/roles/linux-system-roles.kdump/vars: -------------------------------------------------------------------------------- 1 | ../../../vars -------------------------------------------------------------------------------- /tests/setup-snapshot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Setup snapshot 3 | hosts: all 4 | tasks: 5 | - name: Set platform/version specific variables 6 | include_role: 7 | name: linux-system-roles.kdump 8 | tasks_from: set_vars.yml 9 | public: true 10 | 11 | - name: Install test packages 12 | package: 13 | name: "{{ __kdump_packages }}" 14 | state: present 15 | -------------------------------------------------------------------------------- /tests/tasks/check_header.yml: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | --- 3 | - name: Get file 4 | slurp: 5 | path: "{{ __file }}" 6 | register: __content 7 | when: not __file_content is defined 8 | 9 | - name: Check for presence of ansible managed header, fingerprint 10 | assert: 11 | that: 12 | - ansible_managed in content 13 | - __fingerprint in content 14 | vars: 15 | content: "{{ (__file_content | d(__content)).content | b64decode }}" 16 | ansible_managed: "{{ lookup('template', 'get_ansible_managed.j2') }}" 17 | -------------------------------------------------------------------------------- /tests/templates/get_ansible_managed.j2: -------------------------------------------------------------------------------- 1 | {{ ansible_managed | comment(__comment_type | d("plain")) }} 2 | -------------------------------------------------------------------------------- /tests/tests_default.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ensure that the rule runs with default parameters 3 | hosts: all 4 | gather_facts: false 5 | tasks: 6 | - name: >- 7 | The role requires reboot only on specific systems. Hence running the 8 | role in a rescue block to catch when it fails with reboot. 9 | block: 10 | - name: Run the role. If reboot is not required - the play succeeds. 11 | include_role: 12 | name: linux-system-roles.kdump 13 | rescue: 14 | - name: If reboot is required - assert the expected fail message 15 | assert: 16 | that: 17 | - >- 18 | 'Reboot is required to apply changes.' 
in 19 | ansible_failed_result.msg 20 | - kdump_reboot_required | bool 21 | -------------------------------------------------------------------------------- /tests/tests_default_reboot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ensure that the rule runs with default parameters 3 | hosts: all 4 | vars: 5 | kdump_reboot_ok: true 6 | kdump_dracut_args: --kernel-cmdline nopti 7 | kdump_auto_reset_crashkernel: true 8 | tags: 9 | - tests::reboot 10 | tasks: 11 | - name: Run the role and reboot if necessary 12 | include_role: 13 | name: linux-system-roles.kdump 14 | public: true 15 | 16 | - name: Notify and run handlers 17 | meta: flush_handlers 18 | 19 | - name: Check generated files for ansible_managed, fingerprint 20 | include_tasks: tasks/check_header.yml 21 | vars: 22 | __fingerprint: "system_role:kdump" 23 | __file: /etc/kdump.conf 24 | 25 | - name: See if el9 or later, or el7 or el8 26 | set_fact: 27 | is_el9: "{{ 28 | ansible_facts['distribution'] in ['RedHat', 'CentOS', 'Fedora'] 29 | and ansible_facts['distribution_major_version'] | int >= 9 }}" 30 | is_el7_or_el8: "{{ 31 | ansible_facts['distribution'] in ['RedHat', 'CentOS'] 32 | and ansible_facts['distribution_major_version'] | int in [7, 8] }}" 33 | 34 | - name: Check for crashkernel, grubby settings 35 | when: is_el9 or is_el7_or_el8 36 | block: 37 | - name: Check parameters in conf file 38 | command: grep '^{{ item.param }} {{ item.value }}$' /etc/kdump.conf 39 | changed_when: false 40 | loop: "{{ params + (is_el9 | ternary(el9_params, not_el9_params)) }}" 41 | vars: 42 | params: 43 | - param: dracut_args 44 | value: "{{ kdump_dracut_args }}" 45 | el9_params: 46 | - param: auto_reset_crashkernel 47 | value: "{{ kdump_auto_reset_crashkernel | 48 | ternary('yes', 'no') }}" 49 | - param: failure_action 50 | value: "{{ kdump_system_action }}" 51 | not_el9_params: 52 | - param: default 53 | value: "{{ kdump_system_action }}" 54 | 55 | - name: Get crashkernel setting EL 9 and later 56 | command: kdumpctl get-default-crashkernel 57 | changed_when: false 58 | register: __crashkernel 59 | when: is_el9 | bool 60 | 61 | - name: Get crashkernel setting EL 8 and earlier 62 | set_fact: 63 | __crashkernel: 64 | stdout: auto 65 | when: is_el7_or_el8 | bool 66 | 67 | - name: Check for crashkernel setting 68 | shell: > 69 | set -euo pipefail; 70 | grubby --info=ALL | grep -E {{ pattern | quote }} 71 | changed_when: false 72 | vars: 73 | pattern: ^args=.* crashkernel={{ __crashkernel.stdout }}( |"|$) 74 | 75 | - name: Change parameters and run the role again 76 | set_fact: 77 | kdump_auto_reset_crashkernel: false 78 | kdump_dracut_args: --print-cmdline 79 | 80 | - name: Run the role again with new parameters 81 | include_role: 82 | name: linux-system-roles.kdump 83 | 84 | - name: Check for crashkernel, grubby settings 85 | when: is_el9 or is_el7_or_el8 86 | block: 87 | - name: Check parameters in conf file 88 | command: grep '^{{ item.param }} {{ item.value }}$' /etc/kdump.conf 89 | changed_when: false 90 | loop: "{{ params + (is_el9 | ternary(el9_params, [])) }}" 91 | vars: 92 | el9_params: 93 | - param: auto_reset_crashkernel 94 | value: "{{ kdump_auto_reset_crashkernel | 95 | ternary('yes', 'no') }}" 96 | params: 97 | - param: dracut_args 98 | value: "{{ kdump_dracut_args }}" 99 | 100 | - name: Get crashkernel setting EL 9 and later 101 | command: kdumpctl get-default-crashkernel 102 | changed_when: false 103 | register: __crashkernel 104 | when: is_el9 | bool 105 | 106 | - name: Get 
crashkernel setting EL 8 and earlier 107 | set_fact: 108 | __crashkernel: 109 | stdout: auto 110 | when: is_el7_or_el8 | bool 111 | 112 | - name: Check for crashkernel setting 113 | shell: > 114 | set -euo pipefail; 115 | grubby --info=ALL | grep -E {{ pattern | quote }} 116 | changed_when: false 117 | vars: 118 | pattern: ^args=.* crashkernel={{ __crashkernel.stdout }}( |"|$) 119 | -------------------------------------------------------------------------------- /tests/tests_default_wrapper.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create static inventory from hostvars 3 | hosts: all 4 | tags: 5 | - 'tests::slow' 6 | tasks: 7 | - name: Create temporary file 8 | tempfile: 9 | state: file 10 | suffix: .inventory.yaml 11 | register: tempinventory 12 | delegate_to: localhost 13 | 14 | - name: Create static inventory from hostvars 15 | template: 16 | src: inventory.yaml.j2 17 | dest: "{{ tempinventory.path }}" 18 | mode: "0644" 19 | delegate_to: localhost 20 | 21 | - name: Run tests_default.yml normally 22 | tags: 23 | - 'tests::slow' 24 | import_playbook: tests_default.yml 25 | 26 | - name: Run tests_default.yml in check_mode 27 | hosts: all 28 | tags: 29 | - 'tests::slow' 30 | tasks: 31 | - name: Run ansible-playbook with tests_default.yml in check mode 32 | command: > 33 | ansible-playbook -vvv -i {{ tempinventory.path }} 34 | --check tests_default.yml 35 | delegate_to: localhost 36 | changed_when: false 37 | 38 | - name: Remove the temporary file 39 | file: 40 | path: "{{ tempinventory.path }}" 41 | state: absent 42 | when: tempinventory.path is defined 43 | delegate_to: localhost 44 | -------------------------------------------------------------------------------- /tests/tests_ssh.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ensure that the rule runs with ssh 3 | hosts: all 4 | vars: 5 | # this is the outside address under which the ssh dump server is 6 | # known and ansible is supposed to be configured to be able to 7 | # connect to it (via inventory). 8 | # Can be set to localhost if the control node is being used as the 9 | # kdump target. This requires sshd to be running on the control 10 | # node, though. 11 | # In this case uncomment the tests::multihost_localhost tag below. 12 | # Setting it to inventory_hostname uses the managed node itself as 13 | # the kdump target. This makes the the test entirely single-host, 14 | # but is less realistic, as in practice a host can not dump core 15 | # to itself. 16 | kdump_test_ssh_server_outside: "{{ inventory_hostname }}" 17 | kdump_test_ssh_source: "{{ 18 | ansible_env['SSH_CONNECTION'].split()[0] 19 | if 'SSH_CONNECTION' in ansible_env 20 | else '127.0.0.1' }}" 21 | 22 | # this is the address at which the ssh dump server can be reached 23 | # from the managed host. Dumps will be uploaded there. 24 | kdump_test_ssh_server_inside: >- 25 | {{ 26 | kdump_test_ssh_source if kdump_test_ssh_source in 27 | hostvars[kdump_test_ssh_server_outside]['ansible_all_ipv4_addresses'] 28 | + hostvars[kdump_test_ssh_server_outside]['ansible_all_ipv6_addresses'] 29 | else 30 | hostvars[kdump_test_ssh_server_outside]['ansible_default_ipv4']['address'] 31 | }} 32 | 33 | # This is the outside address. Ansible will connect to it to 34 | # copy the ssh key. 
35 | kdump_ssh_server: "{{ kdump_test_ssh_server_outside }}" 36 | kdump_ssh_user: >- 37 | {{ hostvars[kdump_test_ssh_server_outside]['ansible_user_id'] }} 38 | kdump_path: /tmp/tests_ssh 39 | kdump_target: 40 | type: ssh 41 | # This is the ssh dump server address visible from inside 42 | # the machine being configured. Dumps are to be copied 43 | # there. 44 | location: "{{ kdump_ssh_user }}@{{ kdump_test_ssh_server_inside }}" 45 | 46 | # This test may execute some tasks on localhost and rely on 47 | # localhost being a different host than the managed host 48 | # (localhost is being used as a second host in multihost 49 | # scenario). This would also mean that localhost would have to 50 | # be capable enough (not just a container - must be running a 51 | # sshd). 52 | # This applies only when kdump_test_ssh_server_outside is set to 53 | # localhost. In that case, uncomment the lines below and the 54 | # corresponding line in tests_ssh_wrapper.yml 55 | # tags: 56 | # - 'tests::multihost_localhost' 57 | 58 | tasks: 59 | - name: Gather facts from {{ kdump_test_ssh_server_outside }} 60 | setup: 61 | delegate_to: "{{ kdump_test_ssh_server_outside }}" 62 | delegate_facts: true 63 | 64 | - name: Print message that this test is skipped on EL 6 65 | debug: 66 | msg: Skipping the test on EL 6 when storing logs over ssh to localhost 67 | when: 68 | - ansible_distribution in ['CentOS','RedHat'] 69 | - ansible_distribution_major_version == '6' 70 | - kdump_test_ssh_server_outside == inventory_hostname 71 | 72 | # The skip is required on EL 6 because mkdumprd on EL 6 does not work when 73 | # configuring ssh to localhost 74 | - name: Skip the test on EL 6 when control node == managed node 75 | meta: end_host 76 | when: 77 | - ansible_distribution in ['CentOS','RedHat'] 78 | - ansible_distribution_major_version == '6' 79 | - kdump_test_ssh_server_outside == inventory_hostname 80 | 81 | - name: >- 82 | The role requires reboot only on specific systems. Hence running the 83 | role in a rescue block to catch when it fails with reboot. 84 | block: 85 | - name: Run the role. If reboot is not required - the play succeeds. 86 | include_role: 87 | name: linux-system-roles.kdump 88 | rescue: 89 | - name: If reboot is required - assert the expected fail message 90 | assert: 91 | that: 92 | - >- 93 | 'Reboot is required to apply changes.' in 94 | ansible_failed_result.msg 95 | - kdump_reboot_required | bool 96 | 97 | - name: Cleanup kdump_path 98 | file: 99 | path: "{{ kdump_path }}" 100 | state: absent 101 | delegate_to: "{{ kdump_ssh_server }}" 102 | -------------------------------------------------------------------------------- /tests/tests_ssh_reboot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ensure that the rule runs with ssh 3 | hosts: all 4 | vars: 5 | # this is the outside address under which the ssh dump server is 6 | # known and ansible is supposed to be configured to be able to 7 | # connect to it (via inventory). 8 | # Can be set to localhost if the control node is being used as the 9 | # kdump target. This requires sshd to be running on the control 10 | # node, though. 11 | # In this case uncomment the tests::multihost_localhost tag below. 12 | # Setting it to inventory_hostname uses the managed node itself as 13 | # the kdump target. This makes the the test entirely single-host, 14 | # but is less realistic, as in practice a host can not dump core 15 | # to itself. 
16 | kdump_test_ssh_server_outside: "{{ inventory_hostname }}" 17 | kdump_test_ssh_source: "{{ 18 | ansible_env['SSH_CONNECTION'].split()[0] 19 | if 'SSH_CONNECTION' in ansible_env 20 | else '127.0.0.1' }}" 21 | 22 | # this is the address at which the ssh dump server can be reached 23 | # from the managed host. Dumps will be uploaded there. 24 | kdump_test_ssh_server_inside: >- 25 | {{ 26 | kdump_test_ssh_source if kdump_test_ssh_source in 27 | hostvars[kdump_test_ssh_server_outside]['ansible_all_ipv4_addresses'] 28 | + hostvars[kdump_test_ssh_server_outside]['ansible_all_ipv6_addresses'] 29 | else 30 | hostvars[kdump_test_ssh_server_outside]['ansible_default_ipv4']['address'] 31 | }} 32 | 33 | # This is the outside address. Ansible will connect to it to 34 | # copy the ssh key. 35 | kdump_ssh_server: "{{ kdump_test_ssh_server_outside }}" 36 | kdump_path: /tmp/tests_ssh_reboot 37 | kdump_target: 38 | type: ssh 39 | # This is the ssh dump server address visible from inside 40 | # the machine being configured. Dumps are to be copied 41 | # there. 42 | location: "{{ kdump_ssh_user }}@{{ kdump_test_ssh_server_inside }}" 43 | 44 | kdump_reboot_ok: true 45 | tags: 46 | - tests::reboot 47 | 48 | # This test may execute some tasks on localhost and rely on 49 | # localhost being a different host than the managed host 50 | # (localhost is being used as a second host in multihost 51 | # scenario). This would also mean that localhost would have to 52 | # be capable enough (not just a container - must be running a 53 | # sshd). 54 | # This applies only when kdump_test_ssh_server_outside is set to 55 | # localhost. In that case, uncomment the lines below and the 56 | # corresponding line in tests_ssh_wrapper.yml 57 | # tags: 58 | # - 'tests::multihost_localhost' 59 | 60 | tasks: 61 | - name: Gather facts from {{ kdump_test_ssh_server_outside }} 62 | setup: 63 | delegate_to: "{{ kdump_test_ssh_server_outside }}" 64 | delegate_facts: true 65 | 66 | - name: Create a kdump user on kdump_ssh_server 67 | user: 68 | name: kdump_ssh_user 69 | uid: 1189 70 | register: __user_info 71 | delegate_to: "{{ kdump_ssh_server }}" 72 | 73 | - name: Set kdump_ssh_user, sshkey, auth keys 74 | set_fact: 75 | kdump_ssh_user: kdump_ssh_user 76 | __ssh_dir: "{{ 77 | __user_info.home ~ '/.ssh' }}" 78 | __authorized_keys_path: "{{ 79 | __user_info.home ~ '/.ssh/authorized_keys' }}" 80 | 81 | - name: Print message that this test is skipped on EL 6 82 | debug: 83 | msg: Skipping the test on EL 6 because control node == managed node 84 | when: 85 | - ansible_distribution in ['CentOS','RedHat'] 86 | - ansible_distribution_major_version == '6' 87 | - kdump_test_ssh_server_outside == inventory_hostname 88 | 89 | # The skip is required on EL 6 because mkdumprd on EL 6 does not work when 90 | # configuring ssh to localhost 91 | - name: Skip the test on EL 6 when control node == managed node 92 | meta: end_host 93 | when: 94 | - ansible_distribution in ['CentOS','RedHat'] 95 | - ansible_distribution_major_version == '6' 96 | - kdump_test_ssh_server_outside == inventory_hostname 97 | 98 | - name: Determine if system is ostree and set flag 99 | when: not __kdump_is_ostree is defined 100 | block: 101 | - name: Check if system is ostree 102 | stat: 103 | path: /run/ostree-booted 104 | register: __ostree_booted_stat 105 | 106 | - name: Set flag to indicate system is ostree 107 | set_fact: 108 | __kdump_is_ostree: "{{ __ostree_booted_stat.stat.exists }}" 109 | 110 | - name: Skip the test on ostree systems 111 | meta: end_host 112 | when: 
__kdump_is_ostree | bool 113 | 114 | - name: Run the role and reboot if necessary 115 | include_role: 116 | name: linux-system-roles.kdump 117 | 118 | - name: Flush handlers 119 | meta: flush_handlers 120 | 121 | - name: Get the ssh dir for the user 122 | stat: 123 | path: "{{ __ssh_dir }}" 124 | register: __ssh_dir_stat_before 125 | delegate_to: "{{ kdump_ssh_server }}" 126 | 127 | - name: Get the authorized_keys file for the user 128 | stat: 129 | path: "{{ __authorized_keys_path }}" 130 | register: __authorized_keys_file 131 | delegate_to: "{{ kdump_ssh_server }}" 132 | 133 | - name: Get the authorized_keys contents 134 | slurp: 135 | src: "{{ __authorized_keys_file.stat.path }}" 136 | register: __authorized_keys_before 137 | delegate_to: "{{ kdump_ssh_server }}" 138 | 139 | - name: Run the role again 140 | include_role: 141 | name: linux-system-roles.kdump 142 | 143 | - name: Get the authorized_keys contents after 144 | slurp: 145 | src: "{{ __authorized_keys_file.stat.path }}" 146 | register: __authorized_keys_after 147 | delegate_to: "{{ kdump_ssh_server }}" 148 | 149 | - name: Assert no changes to authorized_keys 150 | assert: 151 | that: __authorized_keys_before == __authorized_keys_after 152 | 153 | - name: Get the ssh dir for the user after 154 | stat: 155 | path: "{{ __ssh_dir }}" 156 | register: __ssh_dir_stat_after 157 | delegate_to: "{{ kdump_ssh_server }}" 158 | 159 | - name: Assert no changes to ssh dir 160 | assert: 161 | that: 162 | - __ssh_dir_stat_before.stat.mode == __ssh_dir_stat_after.stat.mode 163 | - __ssh_dir_stat_before.stat.uid == __ssh_dir_stat_after.stat.uid 164 | - __ssh_dir_stat_before.stat.gid == __ssh_dir_stat_after.stat.gid 165 | 166 | - name: Delete user 167 | user: 168 | name: "{{ kdump_ssh_user }}" 169 | state: absent 170 | register: __user_delete 171 | until: __user_delete is success 172 | 173 | - name: Cleanup kdump_path 174 | file: 175 | path: "{{ kdump_path }}" 176 | state: absent 177 | delegate_to: "{{ kdump_ssh_server }}" 178 | -------------------------------------------------------------------------------- /tests/tests_ssh_wrapper.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create static inventory from hostvars 3 | hosts: all 4 | tags: 5 | - 'tests::slow' 6 | tasks: 7 | - name: Create temporary file 8 | tempfile: 9 | state: file 10 | suffix: .inventory.yaml 11 | register: tempinventory 12 | delegate_to: localhost 13 | 14 | - name: Create static inventory from hostvars 15 | template: 16 | src: inventory.yaml.j2 17 | dest: "{{ tempinventory.path }}" 18 | mode: "0644" 19 | delegate_to: localhost 20 | 21 | 22 | - name: Run tests_ssh.yml normally 23 | tags: 24 | - 'tests::slow' 25 | import_playbook: tests_ssh.yml 26 | 27 | - name: Run tests_ssh.yml in check_mode 28 | hosts: all 29 | tags: 30 | - 'tests::slow' 31 | # uncomment the line below if uncommenting it in tests_ssh.yml 32 | # - 'tests::multihost_localhost' 33 | tasks: 34 | - name: Run ansible-playbook with tests_ssh.yml in check mode 35 | command: | 36 | ansible-playbook -b -vv -i {{ tempinventory.path }} --check tests_ssh.yml 37 | delegate_to: localhost 38 | changed_when: false 39 | 40 | - name: Remove the temporary file 41 | file: 42 | path: "{{ tempinventory.path }}" 43 | state: absent 44 | when: tempinventory.path is defined 45 | delegate_to: localhost 46 | -------------------------------------------------------------------------------- /tests/vars/rh_distros_vars.yml: 
-------------------------------------------------------------------------------- 1 | # vars for handling conditionals for RedHat and clones 2 | # DO NOT EDIT - file is auto-generated 3 | # repo is https://github.com/linux-system-roles/.github 4 | # file is playbooks/templates/tests/vars/rh_distros_vars.yml 5 | --- 6 | # Ansible distribution identifiers that the role treats like RHEL 7 | __kdump_rh_distros: 8 | - AlmaLinux 9 | - CentOS 10 | - RedHat 11 | - Rocky 12 | 13 | # Same as above but includes Fedora 14 | __kdump_rh_distros_fedora: "{{ __kdump_rh_distros + ['Fedora'] }}" 15 | 16 | # Use this in conditionals to check if distro is Red Hat or clone 17 | __kdump_is_rh_distro: "{{ ansible_distribution in __kdump_rh_distros }}" 18 | 19 | # Use this in conditionals to check if distro is Red Hat or clone, or Fedora 20 | __kdump_is_rh_distro_fedora: "{{ ansible_distribution in __kdump_rh_distros_fedora }}" 21 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | # SPDX-License-Identifier: MIT 2 | [lsr_config] 3 | lsr_enable = true 4 | 5 | [lsr_ansible-lint] 6 | configfile = {toxinidir}/.ansible-lint 7 | 8 | [lsr_pylint] 9 | configfile = {toxinidir}/pylintrc 10 | -------------------------------------------------------------------------------- /vars/AlmaLinux_10.yml: -------------------------------------------------------------------------------- 1 | RedHat_10.yml -------------------------------------------------------------------------------- /vars/CentOS_10.yml: -------------------------------------------------------------------------------- 1 | RedHat_10.yml -------------------------------------------------------------------------------- /vars/RedHat_10.yml: -------------------------------------------------------------------------------- 1 | --- 2 | __kdump_packages: 3 | - grubby 4 | - iproute # for fact gathering for ip facts 5 | - kexec-tools 6 | - kdump-utils 7 | - openssh-clients 8 | -------------------------------------------------------------------------------- /vars/Rocky_10.yml: -------------------------------------------------------------------------------- 1 | RedHat_10.yml -------------------------------------------------------------------------------- /vars/main.yml: -------------------------------------------------------------------------------- 1 | # determine the managed node facing ssh server address 2 | --- 3 | __kdump_ssh_server_location: "{{ kdump_target.location | 4 | regex_replace('.*@(.*)$', '\\1') 5 | if kdump_target.location is defined 6 | else kdump_ssh_server }}" 7 | 8 | __kdump_packages: 9 | - grubby 10 | - iproute # for fact gathering for ip facts 11 | - kexec-tools 12 | - openssh-clients 13 | 14 | __kdump_required_facts: 15 | - all_ipv4_addresses 16 | - all_ipv6_addresses 17 | - default_ipv4 18 | - distribution 19 | - distribution_major_version 20 | - distribution_version 21 | - user_id 22 | 23 | # the subsets of ansible_facts that need to be gathered in case any of the 24 | # facts in required_facts is missing; see the documentation of 25 | # the 'gather_subset' parameter of the 'setup' module 26 | __kdump_required_facts_subsets: "{{ ['!all', '!min'] + 27 | __kdump_required_facts }}" 28 | 29 | # BEGIN - DO NOT EDIT THIS BLOCK - rh distros variables 30 | # Ansible distribution identifiers that the role treats like RHEL 31 | __kdump_rh_distros: 32 | - AlmaLinux 33 | - CentOS 34 | - RedHat 35 | - Rocky 36 | 37 | # Same as above but includes 
Fedora 38 | __kdump_rh_distros_fedora: "{{ __kdump_rh_distros + ['Fedora'] }}" 39 | 40 | # Use this in conditionals to check if distro is Red Hat or clone 41 | __kdump_is_rh_distro: "{{ ansible_distribution in __kdump_rh_distros }}" 42 | 43 | # Use this in conditionals to check if distro is Red Hat or clone, or Fedora 44 | __kdump_is_rh_distro_fedora: "{{ ansible_distribution in __kdump_rh_distros_fedora }}" 45 | # END - DO NOT EDIT THIS BLOCK - rh distros variables 46 | --------------------------------------------------------------------------------
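A minimal example playbook follows as a closing illustration; it is not a file from this repository. It shows how the variables exercised by the files above (kdump_target, kdump_ssh_server, kdump_ssh_user, kdump_path, kdump_reboot_ok) might be combined to dump over ssh, assuming the role is installed under the name linux-system-roles.kdump as used by the tests here; the dump server address and user are placeholder values, not defaults defined by the role.

---
- name: Example - configure kdump to save vmcore to an ssh dump server
  hosts: all
  become: true  # the role writes /etc/kdump.conf and manages the kdump service
  vars:
    # Placeholder dump server and user; replace with real values reachable
    # from the managed nodes.
    kdump_ssh_server: dumpserver.example.com
    kdump_ssh_user: kdumpuser
    kdump_target:
      type: ssh
      location: kdumpuser@dumpserver.example.com
    # Directory on the dump server where vmcore files are written.
    kdump_path: /var/crash
    # Allow the role to reboot the managed node if the crashkernel
    # reservation has to change.
    kdump_reboot_ok: true
  roles:
    - linux-system-roles.kdump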