├── .dockerignore ├── .github ├── CODEOWNERS ├── ISSUE_TEMPLATE │ ├── bug.yml │ ├── config.yml │ ├── doc.yml │ ├── feature_request.yml │ └── maintenance.yml ├── PULL_REQUEST_TEMPLATE.md ├── dependabot.yml ├── scripts │ └── get_latest_changelog.py └── workflows │ ├── auto_approve.yml │ ├── code_quality.yml │ ├── codeql.yml │ ├── release_bump.yml │ └── release_publish.yml ├── .gitignore ├── .semantic_release └── CHANGELOG.md.j2 ├── CHANGELOG.md ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── DEVELOPMENT.md ├── LICENSE ├── NOTICE ├── README.md ├── hatch.toml ├── hatch_version_hook.py ├── pipeline ├── build.sh └── publish.sh ├── pyproject.toml ├── requirements-development.txt ├── requirements-release.txt ├── requirements-testing.txt ├── scripts ├── add_copyright_headers.sh ├── run_codebuild_proxy_tests.sh ├── run_sudo_tests.sh └── windows_service_test.py ├── src └── openjd │ └── sessions │ ├── __init__.py │ ├── _action_filter.py │ ├── _embedded_files.py │ ├── _linux │ ├── __init__.py │ ├── _capabilities.py │ └── _sudo.py │ ├── _logging.py │ ├── _os_checker.py │ ├── _path_mapping.py │ ├── _runner_base.py │ ├── _runner_env_script.py │ ├── _runner_step_script.py │ ├── _scripts │ ├── _posix │ │ └── _signal_subprocess.sh │ └── _windows │ │ └── _signal_win_subprocess.py │ ├── _session.py │ ├── _session_user.py │ ├── _subprocess.py │ ├── _tempdir.py │ ├── _types.py │ ├── _win32 │ ├── __init__.py │ ├── _api.py │ ├── _helpers.py │ ├── _locate_executable.py │ └── _popen_as_user.py │ ├── _windows_permission_helper.py │ ├── _windows_process_killer.py │ └── py.typed ├── test └── openjd │ ├── sessions │ ├── __init__.py │ ├── conftest.py │ ├── support_files │ │ ├── __init__.py │ │ ├── app_20s_run.py │ │ ├── app_20s_run.sh │ │ ├── app_20s_run_ignore_signal.py │ │ ├── output_signal_sender.c │ │ └── run_app_20s_run.py │ ├── test_action_filter.py │ ├── test_embedded_files.py │ ├── test_importable.py │ ├── test_os_checker.py │ ├── test_path_mapping.py │ ├── test_redacted_env.py │ ├── test_redaction.py │ ├── test_runner_base.py │ ├── test_runner_env_script.py │ ├── test_runner_step_script.py │ ├── test_session.py │ ├── test_session_user.py │ ├── test_subprocess.py │ ├── test_tempdir.py │ └── test_windows_process_killer.py │ ├── test_copyright_header.py │ ├── test_importable.py │ └── utils │ └── windows_acl_helper.py └── testing_containers ├── codebuild_proxy ├── Dockerfile └── run_tests.sh ├── ldap_sudo_environment ├── Dockerfile ├── README.md ├── addUsersGroups.ldif ├── addUsersToSharedGroup.ldif ├── changePassword.ldif └── start_ldap.sh └── localuser_sudo_environment ├── Dockerfile └── README.md /.dockerignore: -------------------------------------------------------------------------------- 1 | # Directories 2 | .github 3 | .semantic_release 4 | build 5 | dist 6 | pipeline 7 | scripts 8 | 9 | # Top-level files 10 | .gitignore 11 | *.md 12 | LICENSE 13 | NOTICE 14 | # README.md is required since it is referenced by pyproject.toml 15 | !README.md 16 | Dockerfile 17 | -------------------------------------------------------------------------------- /.github/CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @OpenJobDescription/Developers -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug.yml: -------------------------------------------------------------------------------- 1 | name: "\U0001F41B Bug Report" 2 | description: Report a bug 3 | title: "Bug: (short bug description)" 4 | labels: ["bug", "needs triage"] 5 
| body: 6 | - type: markdown 7 | attributes: 8 | value: | 9 | Thank you for taking the time to fill out this bug report! 10 | 11 | ⚠️ If the bug that you are reporting is a security-related issue or security vulnerability, 12 | then please do not create a report via this template. Instead please 13 | notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/) 14 | or directly via email to [AWS Security](aws-security@amazon.com). 15 | 16 | - type: textarea 17 | id: description 18 | attributes: 19 | label: Describe the bug 20 | description: What is the problem? A clear and concise description of the bug. 21 | validations: 22 | required: true 23 | 24 | - type: textarea 25 | id: expected_behaviour 26 | attributes: 27 | label: Expected Behaviour 28 | description: What did you expect to happen? 29 | validations: 30 | required: true 31 | 32 | - type: textarea 33 | id: current_behaviour 34 | attributes: 35 | label: Current Behaviour 36 | description: What actually happened? Please include as much detail as you can. 37 | validations: 38 | required: true 39 | 40 | - type: textarea 41 | id: reproduction_steps 42 | attributes: 43 | label: Reproduction Steps 44 | description: | 45 | Please provide as much detail as you can to help us understand how we can reproduce the bug. 46 | Step by step instructions and self-contained code snippets are ideal. 47 | validations: 48 | required: true 49 | 50 | - type: textarea 51 | id: environment 52 | attributes: 53 | label: Environment 54 | description: Please provide information on the environment and software versions that you are using to reproduce the bug. 55 | value: | 56 | At minimum: 57 | 1. Operating system (e.g. Windows Server 2022; Amazon Linux 2023; etc.) 58 | 2. Output of `python3 --version` 59 | 3. Version of this library 60 | 61 | Please share other details about your environment that you think might be relevant to reproducing the bug. 62 | validations: 63 | required: true 64 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/config.yml: -------------------------------------------------------------------------------- 1 | blank_issues_enabled: false -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/doc.yml: -------------------------------------------------------------------------------- 1 | 2 | name: "📕 Documentation Issue" 3 | description: Issue in the documentation 4 | title: "Docs: (short description of the issue)" 5 | labels: ["documentation", "needs triage"] 6 | body: 7 | - type: textarea 8 | id: documentation_issue 9 | attributes: 10 | label: Documentation Issue 11 | description: Describe the issue 12 | validations: 13 | required: true -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.yml: -------------------------------------------------------------------------------- 1 | name: "\U0001F680 Feature Request" 2 | description: Request a new feature 3 | title: "Feature request: (short description of the feature)" 4 | labels: ["enhancement", "needs triage"] 5 | body: 6 | - type: textarea 7 | id: problem 8 | attributes: 9 | label: Describe the problem 10 | description: | 11 | Help us understand the problem that you are trying to solve, and why it is important to you. 12 | Provide as much detail as you are able. 
13 | validations: 14 | required: true 15 | 16 | - type: textarea 17 | id: proposed_solution 18 | attributes: 19 | label: Proposed Solution 20 | description: | 21 | Describe your proposed feature that you see solving this problem for you. If you have a 22 | full or partial prototype implementation then please open a draft pull request and link to 23 | it here as well. 24 | validations: 25 | required: true 26 | 27 | - type: textarea 28 | id: use_case 29 | attributes: 30 | label: Example Use Cases 31 | description: | 32 | Provide some sample code snippets or shell scripts that show how **you** would use this feature as 33 | you have proposed it. 34 | validations: 35 | required: true 36 | 37 | 38 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/maintenance.yml: -------------------------------------------------------------------------------- 1 | name: "🛠️ Maintenance" 2 | description: Some type of improvement 3 | title: "Maintenance: (short description of the issue)" 4 | labels: ["maintenance", "needs triage"] 5 | body: 6 | - type: textarea 7 | id: description 8 | attributes: 9 | label: Description 10 | description: Describe the improvement and why it is important to do. 11 | validations: 12 | required: true 13 | - type: textarea 14 | id: solution 15 | attributes: 16 | label: Solution 17 | description: Provide any ideas you have for how the suggestion can be implemented. 18 | validations: 19 | required: true 20 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Fixes: ** 2 | 3 | ### What was the problem/requirement? (What/Why) 4 | 5 | ### What was the solution? (How) 6 | 7 | ### What is the impact of this change? 8 | 9 | ### How was this change tested? 10 | 11 | See [DEVELOPMENT.md](https://github.com/OpenJobDescription/openjd-model-for-python/blob/mainline/DEVELOPMENT.md#testing) for information on running tests. 12 | 13 | - Have you run the unit tests? 14 | 15 | ### Was this change documented? 16 | 17 | - Are relevant docstrings in the code base updated? 18 | 19 | ### Is this a breaking change? 20 | 21 | A breaking change is one that modifies a public contract in a way that is not backwards compatible. See the 22 | [Public Interfaces](https://github.com/OpenJobDescription/openjd-model-for-python/blob/mainline/DEVELOPMENT.md#the-packages-public-interface) section 23 | of the DEVELOPMENT.md for more information on the public contracts. 24 | 25 | If so, then please describe the changes that users of this package must make to update their scripts, or Python applications. 26 | 27 | ### Does this change impact security? 28 | 29 | - Does the change need to be threat modeled? For example, does it create or modify files/directories that must only be readable by the process owner? 30 | - If so, then please label this pull request with the "security" label. We'll work with you to analyze the threats. 
31 | 32 | ---- 33 | 34 | *By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.* -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | # To get started with Dependabot version updates, you'll need to specify which 2 | # package ecosystems to update and where the package manifests are located. 3 | # Please see the documentation for all configuration options: 4 | # https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates 5 | 6 | version: 2 7 | updates: 8 | - package-ecosystem: "pip" 9 | directory: "/" # Location of package manifests 10 | schedule: 11 | interval: "weekly" 12 | day: "monday" 13 | commit-message: 14 | prefix: "chore(deps):" 15 | - package-ecosystem: "github-actions" 16 | directory: "/" # Location of package manifests 17 | schedule: 18 | interval: "weekly" 19 | day: "monday" 20 | commit-message: 21 | prefix: "chore(github):" -------------------------------------------------------------------------------- /.github/scripts/get_latest_changelog.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | """ 3 | This script gets the changelog notes for the latest version of this package. It makes the following assumptions 4 | 1. A file called CHANGELOG.md is in the current directory that has the changelog 5 | 2. The changelog file is formatted in a way such that level 2 headers are: 6 | a. The only indication of the beginning of a version's changelog notes. 7 | b. Always begin with `## ` 8 | 3. The changelog file contains the newest version's changelog notes at the top of the file. 
9 | 10 | Example CHANGELOG.md: 11 | ``` 12 | ## 1.0.0 (2024-02-06) 13 | 14 | ### BREAKING CHANGES 15 | * **api**: rename all APIs 16 | 17 | ## 0.1.0 (2024-02-06) 18 | 19 | ### Features 20 | * **api**: add new api 21 | ``` 22 | 23 | Running this script on the above CHANGELOG.md should return the following contents: 24 | ``` 25 | ## 1.0.0 (2024-02-06) 26 | 27 | ### BREAKING CHANGES 28 | * **api**: rename all APIs 29 | 30 | ``` 31 | """ 32 | import re 33 | 34 | h2 = r"^##\s.*$" 35 | with open("CHANGELOG.md") as f: 36 | contents = f.read() 37 | matches = re.findall(h2, contents, re.MULTILINE) 38 | changelog = contents[: contents.find(matches[1]) - 1] if len(matches) > 1 else contents 39 | print(changelog) 40 | -------------------------------------------------------------------------------- /.github/workflows/auto_approve.yml: -------------------------------------------------------------------------------- 1 | name: Dependabot auto-approve 2 | on: pull_request 3 | 4 | permissions: 5 | pull-requests: write 6 | 7 | jobs: 8 | dependabot: 9 | runs-on: ubuntu-latest 10 | if: ${{ github.actor == 'dependabot[bot]' }} 11 | steps: 12 | - name: Dependabot metadata 13 | id: metadata 14 | uses: dependabot/fetch-metadata@v2 15 | with: 16 | github-token: "${{ secrets.GITHUB_TOKEN }}" 17 | - name: Approve a PR 18 | run: gh pr review --approve "$PR_URL" 19 | env: 20 | PR_URL: ${{ github.event.pull_request.html_url }} 21 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} -------------------------------------------------------------------------------- /.github/workflows/code_quality.yml: -------------------------------------------------------------------------------- 1 | name: Code Quality 2 | 3 | on: 4 | pull_request: 5 | branches: [ mainline, release ] 6 | workflow_call: 7 | inputs: 8 | branch: 9 | required: false 10 | type: string 11 | 12 | jobs: 13 | Test: 14 | name: Python 15 | strategy: 16 | matrix: 17 | os: [ubuntu-latest, windows-latest, macos-latest] 18 | python-version: ['3.9', '3.10', '3.11', '3.12'] 19 | uses: OpenJobDescription/.github/.github/workflows/reusable_python_build.yml@mainline 20 | with: 21 | os: ${{ matrix.os }} 22 | python-version: ${{ matrix.python-version }} 23 | -------------------------------------------------------------------------------- /.github/workflows/codeql.yml: -------------------------------------------------------------------------------- 1 | name: "CodeQL" 2 | 3 | on: 4 | push: 5 | branches: [ "mainline" ] 6 | pull_request: 7 | branches: [ "mainline" ] 8 | schedule: 9 | - cron: '0 8 * * MON' 10 | 11 | jobs: 12 | Analysis: 13 | name: Analysis 14 | uses: OpenJobDescription/.github/.github/workflows/reusable_codeql.yml@mainline 15 | -------------------------------------------------------------------------------- /.github/workflows/release_bump.yml: -------------------------------------------------------------------------------- 1 | name: "Release: Bump" 2 | 3 | on: 4 | workflow_dispatch: 5 | inputs: 6 | force_version_bump: 7 | required: false 8 | default: "" 9 | type: choice 10 | options: 11 | - "" 12 | - patch 13 | - minor 14 | - major 15 | 16 | concurrency: 17 | group: release 18 | 19 | jobs: 20 | UnitTests: 21 | name: Unit Tests 22 | uses: ./.github/workflows/code_quality.yml 23 | with: 24 | branch: mainline 25 | 26 | Bump: 27 | name: Version Bump 28 | needs: UnitTests 29 | uses: OpenJobDescription/.github/.github/workflows/reusable_bump.yml@mainline 30 | secrets: inherit 31 | with: 32 | force_version_bump: ${{ inputs.force_version_bump }} 
-------------------------------------------------------------------------------- /.github/workflows/release_publish.yml: -------------------------------------------------------------------------------- 1 | name: "Release: Publish" 2 | run-name: "Release: ${{ github.event.head_commit.message }}" 3 | 4 | on: 5 | push: 6 | branches: 7 | - mainline 8 | paths: 9 | - CHANGELOG.md 10 | 11 | concurrency: 12 | group: release 13 | 14 | jobs: 15 | Publish: 16 | name: Publish Release 17 | permissions: 18 | id-token: write 19 | contents: write 20 | uses: OpenJobDescription/.github/.github/workflows/reusable_publish.yml@mainline 21 | secrets: inherit 22 | # PyPI does not support reusable workflows yet 23 | # # See https://github.com/pypi/warehouse/issues/11096 24 | PublishToPyPI: 25 | needs: Publish 26 | runs-on: ubuntu-latest 27 | environment: release 28 | permissions: 29 | id-token: write 30 | steps: 31 | - name: Checkout 32 | uses: actions/checkout@v4 33 | with: 34 | ref: release 35 | fetch-depth: 0 36 | - name: Set up Python 37 | uses: actions/setup-python@v5 38 | with: 39 | python-version: '3.9' 40 | - name: Install dependencies 41 | run: | 42 | pip install --upgrade hatch 43 | - name: Build 44 | run: hatch -v build 45 | # # See https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-pypi 46 | - name: Publish to PyPI 47 | uses: pypa/gh-action-pypi-publish@release/v1 48 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *~ 2 | *# 3 | *.swp 4 | 5 | *.DS_Store 6 | 7 | __pycache__/ 8 | *.py[cod] 9 | *$py.class 10 | *.egg-info/ 11 | 12 | /.coverage 13 | /.coverage.* 14 | /.cache 15 | /.pytest_cache 16 | /.mypy_cache 17 | /.ruff_cache 18 | /.attach_pid* 19 | /.venv 20 | 21 | /doc/_apidoc/ 22 | /build 23 | /dist 24 | _version.py 25 | *.log 26 | -------------------------------------------------------------------------------- /.semantic_release/CHANGELOG.md.j2: -------------------------------------------------------------------------------- 1 | {% for version, release in context.history.released.items() %} 2 | ## {{ version.as_tag() }} ({{ release.tagged_date.strftime("%Y-%m-%d") }}) 3 | 4 | {% if "breaking" in release["elements"] %} 5 | ### BREAKING CHANGES 6 | {% for commit in release["elements"]["breaking"] %} 7 | * {% if commit.scope %}**{{ commit.scope }}**: {% endif %}{{ commit.commit.summary[commit.commit.summary.find(": ")+1:].strip() }} ([`{{ commit.short_hash }}`]({{ commit.commit.hexsha | commit_hash_url }})) 8 | {% endfor %} 9 | {% endif %} 10 | 11 | {% if "features" in release["elements"] %} 12 | ### Features 13 | {% for commit in release["elements"]["features"] %} 14 | * {% if commit.scope %}**{{ commit.scope }}**: {% endif %}{{ commit.commit.summary[commit.commit.summary.find(": ")+1:].strip() }} ([`{{ commit.short_hash }}`]({{ commit.commit.hexsha | commit_hash_url }})) 15 | {% endfor %} 16 | {% endif %} 17 | 18 | {% if "bug fixes" in release["elements"] %} 19 | ### Bug Fixes 20 | {% for commit in release["elements"]["bug fixes"] %} 21 | * {% if commit.scope %}**{{ commit.scope }}**: {% endif %}{{ commit.commit.summary[commit.commit.summary.find(":")+1:].strip() }} ([`{{ commit.short_hash }}`]({{ commit.commit.hexsha | commit_hash_url }})) 22 | {% endfor %} 23 | {% endif %} 24 | 25 | {% endfor %} -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: 
-------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | Table of contents: 10 | 11 | * [Reporting Bugs/Feature Requests](#reporting-bugsfeature-requests) 12 | * [Development](#development) 13 | * [Finding contributions to work on](#finding-contributions-to-work-on) 14 | * [Talk with us first](#talk-with-us-first) 15 | * [Contributing via Pull Requests](#contributing-via-pull-requests) 16 | * [Conventional Commits](#conventional-commits) 17 | * [Licensing](#licensing) 18 | 19 | ## Reporting Bugs/Feature Requests 20 | 21 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 22 | 23 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 24 | reported the issue. Please try to include as much information as you can. 25 | 26 | ## Development 27 | 28 | We welcome you to contribute features and bug fixes via a [pull request](https://help.github.com/articles/creating-a-pull-request/). 29 | If you are new to contributing to GitHub repositories, then you may find the 30 | [GitHub documentation on collaborating with the fork and pull model](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/getting-started/about-collaborative-development-models#fork-and-pull-model) 31 | informative; this is the model that we follow. 32 | 33 | Please see [DEVELOPMENT.md](./DEVELOPMENT.md) for information about how to navigate this package's 34 | code base and development practices. 35 | 36 | ### Finding contributions to work on 37 | 38 | If you are not sure what you would like to contribute, then looking at the existing issues is a great way to find 39 | something to contribute on. Looking at 40 | [issues that have the "help wanted" or "good first issue" labels](https://github.com/OpenJobDescription/openjd-model-for-python/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22%2C%22help+wanted%22) 41 | are a good place to start, but please dive into any issue that interests you whether it has those labels or not. 42 | 43 | ### Talk with us first 44 | 45 | We ask that you please [open a feature request issue](https://github.com/OpenJobDescription/openjd-model-for-python/issues/new/choose) 46 | (if one does not already exist) and talk with us before posting a pull request that contains a significant amount of work, 47 | or one that proposes a change to a public interface such as to the interface of a publicly exported Python function or to 48 | the command-line interfaces' commands or arguments. 
We want to make sure that your time and effort is respected by working 49 | with you to design the change before you spend much of your time on it. If you want to create a draft pull request to show what 50 | you are thinking and then talk with us, then that works with us as well. 51 | 52 | We prefer that this package contain primarily features that are useful to many users of it, rather than features that are specific 53 | to niche workflows. If you have a feature in mind, but are not sure whether it is niche or not then please 54 | [open a feature request issue](https://github.com/OpenJobDescription/openjd-model-for-python/issues/new/choose). We will do our best to help 55 | you make that assessment, and posting a public issue will help others find your feature idea and add their support if they 56 | would also find it useful. 57 | 58 | ### Contributing via Pull Requests 59 | 60 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 61 | 62 | 1. You are working against the latest source on the *mainline* branch. 63 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 64 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 65 | 4. Your pull request will be focused on a single change - it is easier for us to understand when a change is focused rather 66 | than changing multiple things at once. 67 | 68 | To send us a pull request, please: 69 | 70 | 1. Fork the repository. 71 | 2. Modify the source and add tests for your change; please focus on the specific change you are contributing. 72 | If you also reformat all the code, it will be hard for us to focus on your change. 73 | Please see [DEVELOPMENT.md](./DEVELOPMENT.md) for tips. 74 | 3. Ensure tests pass. Please see the [Testing](./DEVELOPMENT.md#testing) section for information on tests. 75 | 4. Commit to your fork using clear commit messages. Note that all AWS Deadline Cloud GitHub repositories require the use 76 | of [conventional commit](#conventional-commits) syntax for the title of your commit. 77 | 5. Send us a pull request, answering any default questions in the pull request interface. 78 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 79 | 80 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 81 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 82 | 83 | ### Conventional commits 84 | 85 | The commits in this repository are all required to use [conventional commit syntax](https://www.conventionalcommits.org/en/v1.0.0/) 86 | in their title to help us identify the kind of change that is being made, automatically generate the changelog, and 87 | automatically identify next release version number. Only the first commit that deviates from mainline in your pull request 88 | must adhere to this requirement. 
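As a purely illustrative example (the scope and summary here are hypothetical, not drawn from this repository's history), a conventional commit title pairs one of the types listed below with an optional scope and a short imperative summary:

```
fix(sessions): handle spaces in embedded file paths
```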
89 | 90 | We ask that you use these commit types in your commit titles: 91 | 92 | * `feat` - When the pull request adds a new feature or functionality; 93 | * `fix` - When the pull request is implementing a fix to a bug; 94 | * `test` - When the pull request is only implementing an addition or change to tests or the testing infrastructure; 95 | * `docs` - When the pull request is primarily implementing an addition or change to the package's documentation; 96 | * `refactor` - When the pull request is implementing only a refactor of existing code; 97 | * `ci` - When the pull request is implementing a change to the CI infrastructure of the package; 98 | * `chore` - When the pull request is a generic maintenance task. 99 | 100 | We also require that the type in your conventional commit title end in an exclamation point (e.g. `feat!` or `fix!`) 101 | if the pull request should be considered to be a breaking change in some way. Please also include a "BREAKING CHANGE" footer 102 | in the description of your commit in this case ([example](https://www.conventionalcommits.org/en/v1.0.0/#commit-message-with-both--and-breaking-change-footer)). 103 | Examples of breaking changes include any that implements a backwards-incompatible change to a public Python interface, 104 | the command-line interface, or the like. 105 | 106 | If you need to change a commit message, then please see the 107 | [GitHub documentation on the topic](https://docs.github.com/en/pull-requests/committing-changes-to-your-project/creating-and-editing-commits/changing-a-commit-message) 108 | to guide you. 109 | 110 | ## Licensing 111 | 112 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 113 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types.
35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
2 | -------------------------------------------------------------------------------- /hatch.toml: -------------------------------------------------------------------------------- 1 | [envs.default] 2 | pre-install-commands = [ 3 | "pip install -r requirements-testing.txt" 4 | ] 5 | 6 | [envs.default.scripts] 7 | sync = "pip install -r requirements-testing.txt" 8 | test = "pytest --cov-config pyproject.toml {args:test}" 9 | typing = "mypy {args:src test}" 10 | style = [ 11 | "ruff check {args:.}", 12 | "black --check --diff {args:.}", 13 | ] 14 | fmt = [ 15 | "black {args:.}", 16 | "style", 17 | ] 18 | lint = [ 19 | "style", 20 | "typing", 21 | ] 22 | 23 | [[envs.all.matrix]] 24 | python = ["3.9", "3.10", "3.11", "3.12"] 25 | 26 | [envs.codebuild.scripts] 27 | build = "hatch build" 28 | 29 | [envs.container.env-vars] 30 | 31 | [envs.release] 32 | detached = true 33 | 34 | [envs.release.scripts] 35 | deps = "pip install -r requirements-release.txt" 36 | bump = "semantic-release -v --strict version --no-push --no-commit --no-tag --skip-build {args}" 37 | version = "semantic-release -v --strict version --print {args}" 38 | -------------------------------------------------------------------------------- /hatch_version_hook.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import os 3 | import shutil 4 | import sys 5 | 6 | from dataclasses import dataclass 7 | from hatchling.builders.hooks.plugin.interface import BuildHookInterface 8 | from typing import Any, Optional 9 | 10 | 11 | _logger = logging.Logger(__name__, logging.INFO) 12 | _stdout_handler = logging.StreamHandler(sys.stdout) 13 | _stdout_handler.addFilter(lambda record: record.levelno <= logging.INFO) 14 | _stderr_handler = logging.StreamHandler(sys.stderr) 15 | _stderr_handler.addFilter(lambda record: record.levelno > logging.INFO) 16 | _logger.addHandler(_stdout_handler) 17 | _logger.addHandler(_stderr_handler) 18 | 19 | 20 | @dataclass 21 | class CopyConfig: 22 | sources: list[str] 23 | destinations: list[str] 24 | 25 | 26 | class CustomBuildHookException(Exception): 27 | pass 28 | 29 | 30 | class CustomBuildHook(BuildHookInterface): 31 | """ 32 | A Hatch build hook that is pulled in automatically by Hatch's "custom" hook support 33 | See: https://hatch.pypa.io/1.6/plugins/build-hook/custom/ 34 | This build hook copies files from one location (sources) to another (destinations). 35 | Config options: 36 | - `log_level (str)`: The logging level. Any value accepted by logging.Logger.setLevel is allowed. Default is INFO. 37 | - `copy_map (list[dict])`: A list of mappings of files to copy and the destinations to copy them into. In TOML files, 38 | this is expressed as an array of tables. 
See https://toml.io/en/v1.0.0#array-of-tables 39 | Example TOML config: 40 | ``` 41 | [tool.hatch.build.hooks.custom] 42 | path = "hatch_hook.py" 43 | log_level = "DEBUG" 44 | [[tool.hatch.build.hooks.custom.copy_map]] 45 | sources = [ 46 | "_version.py", 47 | ] 48 | destinations = [ 49 | "src/openjd", 50 | ] 51 | [[tool.hatch.build.hooks.custom.copy_map]] 52 | sources = [ 53 | "something_the_tests_need.py", 54 | "something_else_the_tests_need.ini", 55 | ] 56 | destinations = [ 57 | "test/openjd", 58 | ] 59 | ``` 60 | """ 61 | 62 | REQUIRED_OPTS = [ 63 | "copy_map", 64 | ] 65 | 66 | def initialize(self, version: str, build_data: dict[str, Any]) -> None: 67 | if not self._prepare(): 68 | return 69 | 70 | for copy_cfg in self.copy_map: 71 | _logger.info(f"Copying {copy_cfg.sources} to {copy_cfg.destinations}") 72 | for destination in copy_cfg.destinations: 73 | for source in copy_cfg.sources: 74 | copy_func = shutil.copy if os.path.isfile(source) else shutil.copytree 75 | copy_func( 76 | os.path.join(self.root, source), 77 | os.path.join(self.root, destination), 78 | ) 79 | _logger.info("Copy complete") 80 | 81 | def clean(self, versions: list[str]) -> None: 82 | if not self._prepare(): 83 | return 84 | 85 | for copy_cfg in self.copy_map: 86 | _logger.info(f"Cleaning {copy_cfg.sources} from {copy_cfg.destinations}") 87 | cleaned_count = 0 88 | for destination in copy_cfg.destinations: 89 | for source in copy_cfg.sources: 90 | source_path = os.path.join(self.root, destination, source) 91 | remove_func = os.remove if os.path.isfile(source_path) else os.rmdir 92 | try: 93 | remove_func(source_path) 94 | except FileNotFoundError: 95 | _logger.debug(f"Skipping {source_path} because it does not exist...") 96 | else: 97 | cleaned_count += 1 98 | _logger.info(f"Cleaned {cleaned_count} items") 99 | 100 | def _prepare(self) -> bool: 101 | missing_required_opts = [ 102 | opt for opt in self.REQUIRED_OPTS if opt not in self.config or not self.config[opt] 103 | ] 104 | if missing_required_opts: 105 | _logger.warning( 106 | f"Required options {missing_required_opts} are missing or empty. " 107 | "Continuing without copying sources to " 108 | "destinations...", 109 | ) 110 | return False 111 | 112 | log_level = self.config.get("log_level") 113 | if log_level: 114 | _logger.setLevel(log_level) 115 | 116 | return True 117 | 118 | @property 119 | def copy_map(self) -> Optional[list[CopyConfig]]: 120 | raw_copy_map: list[dict] = self.config.get("copy_map") 121 | if not raw_copy_map: 122 | return None 123 | 124 | if not ( 125 | isinstance(raw_copy_map, list) 126 | and all(isinstance(copy_cfg, dict) for copy_cfg in raw_copy_map) 127 | ): 128 | raise CustomBuildHookException( 129 | f'"copy_map" config option is an invalid type. Expected list[dict], but got {raw_copy_map}' 130 | ) 131 | 132 | def verify_list_of_file_paths(file_paths: Any, config_name: str): 133 | if not (isinstance(file_paths, list) and all(isinstance(fp, str) for fp in file_paths)): 134 | raise CustomBuildHookException( 135 | f'"{config_name}" config option is an invalid type. Expected list[str], but got {file_paths}' 136 | ) 137 | 138 | missing_paths = [ 139 | fp for fp in file_paths if not os.path.exists(os.path.join(self.root, fp)) 140 | ] 141 | if len(missing_paths) > 0: 142 | raise CustomBuildHookException( 143 | f'"{config_name}" config option contains some file paths that do not exist: {missing_paths}' 144 | ) 145 | 146 | copy_map: list[CopyConfig] = [] 147 | for copy_cfg in raw_copy_map: 148 | destinations: list[str] = copy_cfg.get("destinations") 149 | verify_list_of_file_paths(destinations, "destinations") 150 | 151 | sources: list[str] = copy_cfg.get("sources") 152 | verify_list_of_file_paths(sources, "sources") 153 | 154 | copy_map.append(CopyConfig(sources, destinations)) 155 | 156 | return copy_map 157 | -------------------------------------------------------------------------------- /pipeline/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # Set the -e option 3 | set -e 4 | 5 | pip install --upgrade pip 6 | pip install --upgrade hatch 7 | pip install --upgrade twine 8 | hatch run codebuild:lint 9 | hatch run codebuild:test 10 | hatch run codebuild:build -------------------------------------------------------------------------------- /pipeline/publish.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # Set the -e option 3 | set -e 4 | 5 | ./pipeline/build.sh 6 | twine upload --repository codeartifact dist/* --verbose -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["hatchling", "hatch-vcs"] 3 | build-backend = "hatchling.build" 4 | 5 | [project] 6 | name = "openjd-sessions" 7 | authors = [ 8 | {name = "Amazon Web Services"}, 9 | ] 10 | dynamic = ["version"] 11 | readme = "README.md" 12 | license = "Apache-2.0" 13 | requires-python = ">=3.9" 14 | description = "Provides a library that can be used to build a runtime that is able to run Jobs in a Session as defined by Open Job Description."
15 | # https://pypi.org/classifiers/ 16 | classifiers = [ 17 | "Development Status :: 5 - Production/Stable", 18 | "Programming Language :: Python", 19 | "Programming Language :: Python :: 3 :: Only", 20 | "Programming Language :: Python :: 3.9", 21 | "Programming Language :: Python :: 3.10", 22 | "Programming Language :: Python :: 3.11", 23 | "Programming Language :: Python :: 3.12", 24 | "Operating System :: POSIX :: Linux", 25 | "Operating System :: Microsoft :: Windows", 26 | "Operating System :: MacOS", 27 | "License :: OSI Approved :: Apache Software License", 28 | "Intended Audience :: Developers", 29 | "Topic :: Software Development :: Libraries" 30 | ] 31 | dependencies = [ 32 | "openjd-model >= 0.8,< 0.9", 33 | "pywin32 >= 307; platform_system == 'Windows'", 34 | "psutil >= 5.9,< 7.1; platform_system == 'Windows'", 35 | ] 36 | 37 | [project.urls] 38 | Homepage = "https://github.com/OpenJobDescription/openjd-sessions-for-python" 39 | Source = "https://github.com/OpenJobDescription/openjd-sessions-for-python" 40 | 41 | [tool.hatch.build] 42 | artifacts = [ 43 | "*_version.py", 44 | ] 45 | 46 | [tool.hatch.version] 47 | source = "vcs" 48 | 49 | [tool.hatch.version.raw-options] 50 | version_scheme = "post-release" 51 | 52 | [tool.hatch.build.hooks.vcs] 53 | version-file = "_version.py" 54 | 55 | [tool.hatch.build.hooks.custom] 56 | path = "hatch_version_hook.py" 57 | 58 | [[tool.hatch.build.hooks.custom.copy_map]] 59 | sources = [ 60 | "_version.py", 61 | ] 62 | destinations = [ 63 | "src/openjd/sessions", 64 | ] 65 | 66 | [tool.hatch.build.targets.sdist] 67 | include = [ 68 | "src/openjd", 69 | "hatch_version_hook.py", 70 | ] 71 | 72 | [tool.hatch.build.targets.wheel] 73 | packages = [ 74 | "src/openjd", 75 | ] 76 | only-include = [ 77 | "src/openjd", 78 | ] 79 | 80 | [tool.mypy] 81 | check_untyped_defs = false 82 | show_error_codes = false 83 | pretty = true 84 | ignore_missing_imports = true 85 | disallow_incomplete_defs = false 86 | disallow_untyped_calls = false 87 | show_error_context = true 88 | strict_equality = false 89 | python_version = 3.9 90 | warn_redundant_casts = true 91 | warn_unused_configs = true 92 | warn_unused_ignores = false 93 | # Tell mypy that there's a namespace package at src/openjd 94 | namespace_packages = true 95 | explicit_package_bases = true 96 | mypy_path = "src" 97 | 98 | # See: https://docs.pydantic.dev/mypy_plugin/ 99 | # - Helps mypy understand pydantic typing. 
100 | plugins = "pydantic.mypy" 101 | 102 | [tool.ruff] 103 | line-length = 100 104 | 105 | [tool.ruff.lint] 106 | ignore = [ 107 | "E501", 108 | # Double Check if this should be fixed 109 | "E731", 110 | ] 111 | 112 | [tool.ruff.lint.pep8-naming] 113 | classmethod-decorators = [ 114 | "classmethod", 115 | # pydantic decorators are classmethod decorators 116 | # suppress N805 errors on classes decorated with them 117 | "pydantic.validator", 118 | "pydantic.root_validator", 119 | ] 120 | 121 | [tool.ruff.lint.isort] 122 | known-first-party = [ 123 | "openjd", 124 | ] 125 | 126 | [tool.ruff.lint.per-file-ignores] 127 | # We need to use a platform assertion to short-circuit mypy type checking on non-Windows platforms 128 | # https://mypy.readthedocs.io/en/stable/common_issues.html#python-version-and-system-platform-checks 129 | # This causes imports to come after regular Python statements causing flake8 rule E402 to be flagged 130 | "src/openjd/sessions/_win32/*.py" = ["E402"] 131 | 132 | 133 | [tool.black] 134 | line-length = 100 135 | 136 | [tool.pytest.ini_options] 137 | xfail_strict = false 138 | addopts = [ 139 | "-rfEx", 140 | "--durations=5", 141 | "--cov=src/openjd/sessions/", 142 | "--color=yes", 143 | "--cov-report=html:build/coverage", 144 | "--cov-report=xml:build/coverage/coverage.xml", 145 | "--cov-report=term-missing", 146 | "--numprocesses=auto", 147 | "--timeout=30" 148 | ] 149 | markers = [ 150 | "requires_cap_kill: tests that require CAP_KILL Linux capability", 151 | ] 152 | 153 | 154 | [tool.coverage.run] 155 | branch = true 156 | parallel = true 157 | plugins = [ 158 | "coverage_conditional_plugin" 159 | ] 160 | 161 | [tool.coverage.paths] 162 | source = [ 163 | "src/" 164 | ] 165 | 166 | [tool.coverage.report] 167 | show_missing = true 168 | fail_under = 79 169 | 170 | # https://github.com/wemake-services/coverage-conditional-plugin 171 | [tool.coverage.coverage_conditional_plugin.omit] 172 | "sys_platform != 'win32'" = [ 173 | "src/openjd/sessions/_win32/*.py", 174 | "src/openjd/sessions/_scripts/_windows/*.py", 175 | "src/openjd/sessions/_windows*.py" 176 | ] 177 | "sys_platform != 'linux'" = [ 178 | "src/openjd/sessions/_linux/*.py", 179 | ] 180 | 181 | [tool.coverage.coverage_conditional_plugin.rules] 182 | # This cannot be empty otherwise coverage-conditional-plugin crashes with: 183 | # AttributeError: 'NoneType' object has no attribute 'items' 184 | # 185 | # =========== WARNING TO REVIEWERS ============ 186 | # 187 | # Any rules added here are ran through Python's 188 | # eval() function so watch for code injection 189 | # attacks. 
190 | # 191 | # =========== WARNING TO REVIEWERS ============ 192 | 193 | [tool.semantic_release] 194 | # Can be removed or set to true once we are v1 195 | major_on_zero = false 196 | tag_format = "{version}" 197 | 198 | [tool.semantic_release.commit_parser_options] 199 | allowed_tags = [ 200 | "build", 201 | "chore", 202 | "ci", 203 | "docs", 204 | "feat", 205 | "fix", 206 | "perf", 207 | "style", 208 | "refactor", 209 | "test", 210 | ] 211 | minor_tags = [] 212 | patch_tags = [ 213 | "chore", 214 | "feat", 215 | "fix", 216 | "refactor", 217 | ] 218 | 219 | [tool.semantic_release.publish] 220 | upload_to_vcs_release = false 221 | 222 | [tool.semantic_release.changelog] 223 | template_dir = ".semantic_release" 224 | 225 | [tool.semantic_release.changelog.environment] 226 | trim_blocks = true 227 | lstrip_blocks = true 228 | 229 | [tool.semantic_release.branches.release] 230 | match = "(mainline|release)" 231 | -------------------------------------------------------------------------------- /requirements-development.txt: -------------------------------------------------------------------------------- 1 | hatch == 1.14.* 2 | hatch-vcs == 0.4.* -------------------------------------------------------------------------------- /requirements-release.txt: -------------------------------------------------------------------------------- 1 | python-semantic-release == 9.21.* -------------------------------------------------------------------------------- /requirements-testing.txt: -------------------------------------------------------------------------------- 1 | coverage[toml] == 7.* 2 | coverage-conditional-plugin == 0.9.* 3 | pytest == 8.3.* 4 | pytest-cov == 6.1.* 5 | pytest-timeout == 2.4.* 6 | pytest-xdist == 3.6.* 7 | black == 25.* 8 | ruff == 0.11.* 9 | mypy == 1.15.* 10 | psutil >= 5.9,< 7.1 11 | -------------------------------------------------------------------------------- /scripts/add_copyright_headers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | if [ $# -eq 0 ]; then 5 | echo "Usage: add-copyright-headers ..." >&2 6 | exit 1 7 | fi 8 | 9 | for file in "$@"; do 10 | if ! head -1 | grep 'Copyright ' "$file" >/dev/null; then 11 | case "$file" in 12 | *.java) 13 | CONTENT=$(cat "$file") 14 | cat > "$file" </dev/null; then 23 | CONTENT=$(tail -n +2 "$file") 24 | cat > "$file" < 27 | $CONTENT 28 | EOF 29 | else 30 | CONTENT=$(cat "$file") 31 | cat > "$file" < 33 | $CONTENT 34 | EOF 35 | fi 36 | ;; 37 | *.py) 38 | CONTENT=$(cat "$file") 39 | cat > "$file" < "$file" < "$file" < "$file" <&2 71 | exit 1 72 | ;; 73 | esac 74 | fi 75 | done -------------------------------------------------------------------------------- /scripts/run_codebuild_proxy_tests.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | set -eu 5 | 6 | # Run this from the root of the repository 7 | if ! 
test -d scripts 8 | then 9 | echo "Must run from the root of the repository" 10 | exit 1 11 | fi 12 | 13 | while [[ "${1:-}" != "" ]]; do 14 | case $1 in 15 | -h|--help) 16 | echo "Usage: $0 [--build]" 17 | exit 1 18 | ;; 19 | --build) 20 | docker build testing_containers/codebuild_proxy -t openjd_codebuild_proxy 21 | ;; 22 | *) 23 | echo "Unrecognized parameter: $1" 24 | exit 1 25 | ;; 26 | esac 27 | shift 28 | done 29 | 30 | # Copying the dist/ dir can cause permission issues, so just nuke it. 31 | hatch clean 2> /dev/null || true 32 | 33 | if test "${PIP_INDEX_URL:-}" != ""; then 34 | docker run --rm -v $(pwd):/code:ro -e PIP_INDEX_URL="${PIP_INDEX_URL}" openjd_codebuild_proxy:latest 35 | else 36 | docker run --rm -v $(pwd):/code:ro openjd_codebuild_proxy:latest 37 | fi -------------------------------------------------------------------------------- /scripts/run_sudo_tests.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | set -eux 5 | 6 | # Run this from the root of the repository 7 | if ! test -d scripts 8 | then 9 | echo "Must run from the root of the repository" 10 | exit 1 11 | fi 12 | 13 | USE_LDAP="False" 14 | BUILD_ONLY="False" 15 | while [[ "${1:-}" != "" ]]; do 16 | case $1 in 17 | -h|--help) 18 | echo "Usage: run_sudo_tests.sh [--build]" 19 | exit 1 20 | ;; 21 | --ldap) 22 | echo "Using the LDAP client container image for testing." 23 | USE_LDAP="True" 24 | ;; 25 | --build-only) 26 | BUILD_ONLY="True" 27 | ;; 28 | *) 29 | echo "Unrecognized parameter: $1" 30 | exit 1 31 | ;; 32 | esac 33 | shift 34 | done 35 | 36 | # Copying the dist/ dir can cause permission issues, so just nuke it. 37 | hatch clean 2> /dev/null || true 38 | 39 | ARGS="" 40 | 41 | if test "${PIP_INDEX_URL:-}" != ""; then 42 | # If PIP_INDEX_URL is set, then export that in to the container 43 | # so that `pip install` run in the container will fetch packages 44 | # from the correct repository. 45 | ARGS="${ARGS} -e PIP_INDEX_URL" 46 | fi 47 | 48 | if test "${USE_LDAP}" == "True"; then 49 | CONTAINER_HOSTNAME=ldap.environment.internal 50 | CONTAINER_IMAGE_TAG="openjd_ldap_test" 51 | CONTAINER_IMAGE_DIR="ldap_sudo_environment" 52 | else 53 | CONTAINER_HOSTNAME=localuser.environment.internal 54 | CONTAINER_IMAGE_TAG="openjd_localuser_test" 55 | CONTAINER_IMAGE_DIR="localuser_sudo_environment" 56 | fi 57 | ARGS="${ARGS} -h ${CONTAINER_HOSTNAME}" 58 | 59 | pip_index_arg="" 60 | if test "${PIP_INDEX_URL:-}" != ""; then 61 | pip_index_arg="--build-arg PIP_INDEX_URL " 62 | fi 63 | docker build -t "${CONTAINER_IMAGE_TAG}" $pip_index_arg --build-arg "BUILDKIT_SANDBOX_HOSTNAME=${CONTAINER_HOSTNAME}" --ulimit nofile=1024 --file "testing_containers/${CONTAINER_IMAGE_DIR}/Dockerfile" . 
64 | 65 | if test "${BUILD_ONLY}" == "True"; then 66 | exit 0 67 | fi 68 | 69 | docker run --name test_openjd_sudo --rm ${ARGS} "${CONTAINER_IMAGE_TAG}:latest" 70 | 71 | if test "${USE_LDAP}" != "True"; then 72 | # Run capability tests 73 | # First with CAP_KILL in effective and permitted capability sets 74 | docker run --name test_openjd_sudo --user root --rm ${ARGS} "${CONTAINER_IMAGE_TAG}:latest" \ 75 | capsh \ 76 | --caps='cap_setuid,cap_setgid,cap_setpcap=ep cap_kill=eip' \ 77 | --keep=1 \ 78 | --user=hostuser \ 79 | --addamb=cap_kill \ 80 | -- \ 81 | -c 'capsh --noamb --caps=cap_kill=ep -- -c "hatch run test --no-cov -m requires_cap_kill"' 82 | # Second with CAP_KILL in permitted capability set but not effective capability set 83 | # this tests that OpenJD will add CAP_KILL to the effective capability set if needed 84 | docker run --name test_openjd_sudo --user root --rm ${ARGS} "${CONTAINER_IMAGE_TAG}:latest" \ 85 | capsh \ 86 | --caps='cap_setuid,cap_setgid,cap_setpcap=ep cap_kill=eip' \ 87 | --keep=1 \ 88 | --user=hostuser \ 89 | --addamb=cap_kill \ 90 | -- \ 91 | -c 'capsh --noamb --caps=cap_kill=p -- -c "hatch run test --no-cov -m requires_cap_kill"' 92 | fi 93 | -------------------------------------------------------------------------------- /scripts/windows_service_test.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import socket 4 | import logging 5 | from threading import Event 6 | from typing import Optional 7 | 8 | import win32serviceutil 9 | import win32service 10 | import servicemanager 11 | import subprocess 12 | import sys 13 | import os 14 | import argparse 15 | import shlex 16 | import win32con 17 | import win32api 18 | from getpass import getpass 19 | 20 | 21 | logger = logging.getLogger(__name__) 22 | 23 | 24 | class OpenJDSessionsForPythonTestService(win32serviceutil.ServiceFramework): 25 | # Pywin32 Service Configuration 26 | _svc_name_ = "OpenJDSessionsForPythonTest" 27 | _svc_display_name_ = "OpenJD Sessions For Python Test" 28 | _exe_name_ = "OpenJDSessionsForPythonTestService.exe" 29 | 30 | _stop_event: Event 31 | 32 | def __init__(self, args): 33 | win32serviceutil.ServiceFramework.__init__(self, args) 34 | 35 | self._stop_event = Event() 36 | socket.setdefaulttimeout(60) 37 | 38 | def SvcStop(self): 39 | """Invoked when the Windows Service is being stopped""" 40 | self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) 41 | logger.info("Windows Service is being stopped") 42 | self._stop_event.set() 43 | 44 | def SvcShutdown(self): 45 | """Invoked when the system is shutdown""" 46 | self.SvcStop() 47 | 48 | def SvcDoRun(self): 49 | """The main entrypoint called after the service is started""" 50 | servicemanager.LogMsg( 51 | servicemanager.EVENTLOG_INFORMATION_TYPE, 52 | servicemanager.PYS_SERVICE_STARTED, 53 | (self._svc_name_, ""), 54 | ) 55 | code_location = os.environ["CODE_LOCATION"] 56 | pytest_args = os.environ.get("PYTEST_ARGS", None) 57 | 58 | args = ["pytest", os.path.join(code_location, "test")] 59 | 60 | if pytest_args: 61 | args.extend(shlex.split(pytest_args, posix=False)) 62 | 63 | logging.basicConfig( 64 | filename=os.path.join(code_location, "test.log"), 65 | encoding="utf-8", 66 | level=logging.INFO, 67 | filemode="w", 68 | ) 69 | process = subprocess.Popen( 70 | args, 71 | stdout=subprocess.PIPE, 72 | stderr=subprocess.STDOUT, 73 | text=True, 74 | cwd=code_location, 75 | ) 76 | 77 | while True: 78 | output = 
process.stdout.readline() 79 | if not output and process.poll() is not None: 80 | break 81 | 82 | logger.info(output.strip()) 83 | 84 | servicemanager.LogMsg( 85 | servicemanager.EVENTLOG_INFORMATION_TYPE, 86 | servicemanager.PYS_SERVICE_STOPPED, 87 | (self._svc_name_, ""), 88 | ) 89 | logger.info("Stop status sent to Windows Service Controller") 90 | 91 | 92 | def _install_service(username: str, pytest_args: Optional[str] = None) -> list[str]: 93 | if "\\" not in username and "@" not in username: 94 | username = f".\\{username}" 95 | 96 | password = getpass("Please enter the user's password:") 97 | args = ["--username", username, "--password", password, "install"] 98 | 99 | exe_args = [f"CODE_LOCATION={os.path.dirname(os.path.dirname(os.path.abspath(__file__)))}"] 100 | 101 | if pytest_args: 102 | exe_args.append(f"PYTEST_ARGS={pytest_args}") 103 | 104 | key_handle = None 105 | try: 106 | # https://timgolden.me.uk/pywin32-docs/win32api__RegOpenKeyEx_meth.html 107 | key_handle = win32api.RegOpenKeyEx( 108 | getattr(win32con, "HKEY_LOCAL_MACHINE"), 109 | f"SYSTEM\\CurrentControlSet\\Services\\{OpenJDSessionsForPythonTestService._svc_name_}", 110 | 0, # reserved, only use 0 111 | win32con.KEY_SET_VALUE, 112 | ) 113 | # https://timgolden.me.uk/pywin32-docs/win32api__RegSetValueEx_meth.html 114 | win32api.RegSetValueEx( 115 | key_handle, 116 | "Environment", 117 | 0, # reserved, only use 0, 118 | win32con.REG_MULTI_SZ, # Multi-string value 119 | exe_args, 120 | ) 121 | finally: 122 | if key_handle is not None: 123 | win32api.CloseHandle(key_handle) 124 | 125 | return args 126 | 127 | 128 | if __name__ == "__main__": 129 | parser = argparse.ArgumentParser( 130 | prog="Run OpenJD Sessions for Python Windows Service Tests", 131 | ) 132 | 133 | # We wrap the commandline for win32serviceutil so that we can get the 134 | # password from stdin instead of plaintext on the command line. 135 | subparsers = parser.add_subparsers(dest="mode") 136 | 137 | install_service_args = subparsers.add_parser("install") 138 | install_service_args.add_argument("--username", required=True, type=str) 139 | install_service_args.add_argument( 140 | "--pytest-args", 141 | required=False, 142 | type=str, 143 | dest="pytest_args", 144 | default=None, 145 | help='Use this with an equals like --pytest-args="-vvv". Otherwise the argument parser will not recognize the dash at the beginning (-)', 146 | ) 147 | 148 | subparsers.add_parser("run") 149 | 150 | args = parser.parse_args() 151 | 152 | argv = [sys.argv[0]] 153 | 154 | if args.mode == "install": 155 | username = args.username 156 | 157 | argv.extend(_install_service(username=username, pytest_args=args.pytest_args)) 158 | 159 | elif args.mode == "run": 160 | argv.append("start") 161 | 162 | win32serviceutil.HandleCommandLine(OpenJDSessionsForPythonTestService, argv=argv) 163 | -------------------------------------------------------------------------------- /src/openjd/sessions/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
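# The names re-exported below form the public API of openjd.sessions. A rough, illustrative flow
# (signatures elided; see the individual modules for the authoritative interfaces):
#   1. construct a SessionUser (PosixSessionUser or WindowsSessionUser) if actions must run as a
#      different OS user,
#   2. optionally build PathMappingRule objects to translate incoming paths, and
#   3. create a Session, run Environment/Step script actions, and observe their ActionStatus via
#      SessionCallbackType callbacks while log output is emitted on LOG.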
2 | 3 | from ._logging import LOG, LogContent 4 | from ._path_mapping import PathFormat, PathMappingRule 5 | from ._session import ActionStatus, Session, SessionCallbackType, SessionState 6 | from ._session_user import ( 7 | PosixSessionUser, 8 | SessionUser, 9 | WindowsSessionUser, 10 | BadCredentialsException, 11 | ) 12 | from ._types import ( 13 | ActionState, 14 | EnvironmentIdentifier, 15 | EnvironmentModel, 16 | EnvironmentScriptModel, 17 | StepScriptModel, 18 | ) 19 | from ._version import version 20 | 21 | __all__ = ( 22 | "ActionState", 23 | "ActionStatus", 24 | "EnvironmentIdentifier", 25 | "EnvironmentModel", 26 | "EnvironmentScriptModel", 27 | "LOG", 28 | "LogContent", 29 | "PathFormat", 30 | "PathMappingRule", 31 | "PosixSessionUser", 32 | "Session", 33 | "SessionCallbackType", 34 | "SessionState", 35 | "SessionUser", 36 | "StepScriptModel", 37 | "WindowsSessionUser", 38 | "BadCredentialsException", 39 | "version", 40 | ) 41 | -------------------------------------------------------------------------------- /src/openjd/sessions/_embedded_files.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import os 4 | import stat 5 | from contextlib import contextmanager 6 | from dataclasses import dataclass 7 | from enum import Enum 8 | from pathlib import Path 9 | from shutil import chown 10 | from tempfile import mkstemp 11 | from typing import Any, Generator, Optional, cast 12 | 13 | from openjd.model import SymbolTable, FormatStringError 14 | from openjd.model.v2023_09 import EmbeddedFileText as EmbeddedFileText_2023_09 15 | from openjd.model.v2023_09 import ( 16 | ValueReferenceConstants as ValueReferenceConstants_2023_09, 17 | ) 18 | from ._logging import LoggerAdapter, LogExtraInfo, LogContent 19 | from ._session_user import PosixSessionUser, SessionUser, WindowsSessionUser 20 | from ._types import EmbeddedFilesListType, EmbeddedFileType 21 | 22 | from ._windows_permission_helper import WindowsPermissionHelper 23 | from ._os_checker import is_windows 24 | 25 | if is_windows(): 26 | from ._win32._helpers import get_process_user # type: ignore 27 | 28 | __all__ = ("EmbeddedFilesScope", "EmbeddedFiles") 29 | 30 | 31 | @contextmanager 32 | def _open_context(*args: Any, **kwargs: Any) -> Generator[int, None, None]: 33 | fd = os.open(*args, **kwargs) 34 | try: 35 | yield fd 36 | finally: 37 | os.close(fd) 38 | 39 | 40 | def write_file_for_user( 41 | filename: Path, data: str, user: Optional[SessionUser], additional_permissions: int = 0 42 | ) -> None: 43 | # File should only be r/w by the owner, by default 44 | 45 | # flags: 46 | # O_WRONLY - open for writing 47 | # O_CREAT - create if it does not exist 48 | # O_TRUNC - truncate the file. If we overwrite an existing file, then we 49 | # need to clear its contents. 50 | # O_EXCL (intentionally not present) - fail if file exists 51 | # - We exclude this 'cause we expect to be writing the same embedded file 52 | # into the same location repeatedly with different contents as we run 53 | # multiple Tasks in the same Session. 
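    # (Taken together, these flags give ordinary "create or truncate for writing" semantics,
    # i.e. what open(filename, "w") would do, while letting us pass an explicit creation mode.)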
54 | flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC 55 | # mode: 56 | # S_IRUSR - Read by owner 57 | # S_IWUSR - Write by owner 58 | mode = stat.S_IRUSR | stat.S_IWUSR | (additional_permissions & stat.S_IRWXU) 59 | with _open_context(filename, flags, mode=mode) as fd: 60 | os.write(fd, data.encode("utf-8")) 61 | 62 | if os.name == "posix": 63 | if user is not None: 64 | user = cast(PosixSessionUser, user) 65 | # Set the group of the file 66 | chown(filename, group=user.group) 67 | # Update the permissions to include the group after the group is changed 68 | # Note: Only after changing group for security in case the group-ownership 69 | # change fails. 70 | mode |= stat.S_IRGRP | stat.S_IWGRP | (additional_permissions & stat.S_IRWXG) 71 | 72 | # The file may have already existed before calling this function (e.g. created by mkstemp) 73 | # so unconditionally set the file permissions to ensure that additional_permissions are set. 74 | os.chmod(filename, mode=mode) 75 | 76 | elif os.name == "nt": 77 | if user is not None: 78 | user = cast(WindowsSessionUser, user) 79 | process_user = get_process_user() 80 | WindowsPermissionHelper.set_permissions( 81 | str(filename), 82 | principals_full_control=[process_user], 83 | principals_modify_access=[user.user], 84 | ) 85 | 86 | 87 | class EmbeddedFilesScope(Enum): 88 | """What scope of Script a given set of files is for. 89 | This dictates what prefix is used in format string variables 90 | """ 91 | 92 | STEP = "step" 93 | ENV = "environment" 94 | 95 | 96 | @dataclass(frozen=True) 97 | class _FileRecord: 98 | symbol: str 99 | filename: Path 100 | file: EmbeddedFileType 101 | 102 | 103 | # Note: "EmbeddedFiles" is currently "Attachments" in the Open Job Description template, but that 104 | # will be changing to "EmbeddedFiles" to eliminate potential confusion with job bundle's 105 | # "attachments" 106 | class EmbeddedFiles: 107 | """Functionality for materializing a Script's Embedded Files to disk, and 108 | adding their values to a SymbolTable for use in the Script's Actions. 109 | """ 110 | 111 | def __init__( 112 | self, 113 | *, 114 | logger: LoggerAdapter, 115 | scope: EmbeddedFilesScope, 116 | session_files_directory: Path, 117 | user: Optional[SessionUser] = None, 118 | ) -> None: 119 | """ 120 | Arguments: 121 | logger (LoggerAdapter): Logger to send any logging messages to (e.g. errors). 122 | scope (EmbeddedFilesKind): The scope of the embedded files (used to determine 123 | value reference prefix in Format Strings). 124 | session_files_directory (Path): Directory within which to materialize the files to disk. 125 | user (Optional[SessionUser]): A group that will own the created files. 126 | The group rw bits will be set on the file if this option is supplied. 127 | Defaults to current user. 
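
        Example (an illustrative sketch only; `logger`, `step_script`, and the embedded-file
        model objects are assumed to come from the surrounding Session machinery):

            files = EmbeddedFiles(
                logger=logger,
                scope=EmbeddedFilesScope.STEP,
                session_files_directory=Path("/tmp/openjd_session/files"),
            )
            symtab = SymbolTable()
            files.materialize(step_script.embeddedFiles, symtab)
            # symtab now maps references such as "Task.File.<name>" to the on-disk file paths.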
128 | """ 129 | self._logger = logger 130 | self._scope = scope 131 | self._target_directory = session_files_directory 132 | self._user = user 133 | 134 | def materialize(self, files: EmbeddedFilesListType, symtab: SymbolTable) -> None: 135 | if self._scope == EmbeddedFilesScope.ENV: 136 | self._logger.info("Writing embedded files for Environment to disk.") 137 | else: 138 | self._logger.info("Writing embedded files for Task to disk.") 139 | 140 | try: 141 | records = list[_FileRecord]() 142 | # Generate the symbol table values and filenames 143 | for file in files: 144 | # Raises: OSError 145 | symbol, filename = self._get_symtab_entry(file) 146 | records.append(_FileRecord(symbol=symbol, filename=filename, file=file)) 147 | 148 | # Add symbols to the symbol table 149 | for record in records: 150 | symtab[record.symbol] = str(record.filename) 151 | self._logger.info( 152 | f"Mapping: {record.symbol} -> {record.filename}", 153 | extra=LogExtraInfo( 154 | openjd_log_content=LogContent.FILE_PATH | LogContent.PARAMETER_INFO 155 | ), 156 | ) 157 | 158 | # Write the files to disk. 159 | for record in records: 160 | # Raises: OSError 161 | self._materialize_file(record.filename, record.file, symtab) 162 | except OSError as err: 163 | raise RuntimeError(f"Could not write embedded file: {err}") 164 | except FormatStringError as err: 165 | # This should *never* happen. All format string contents are 166 | # checked when building the Job Template model. If we get here, 167 | # then something is broken with our model validation. 168 | raise RuntimeError(f"Error resolving format string: {str(err)}") 169 | 170 | def _find_value_prefix(self, file: EmbeddedFileType) -> str: 171 | """Figure out what prefix to use when referencing the file in format strings. 172 | We figure this out based on the model that `file` comes from and 173 | self._scope. 174 | """ 175 | # When adding a new schema, start this method with a check for which 176 | # model 'file' belongs to -- that'll tell us the schema version. 177 | assert isinstance(file, EmbeddedFileText_2023_09) 178 | 179 | if self._scope == EmbeddedFilesScope.ENV: 180 | return ValueReferenceConstants_2023_09.ENV_FILE_PREFIX.value 181 | else: 182 | return ValueReferenceConstants_2023_09.TASK_FILE_PREFIX.value 183 | 184 | def _get_symtab_entry(self, file: EmbeddedFileType) -> tuple[str, Path]: 185 | """Figure out the entry to add to the symbol table for the given 186 | file. The value of the symbol table entry is the absolute filename 187 | of the file that we manifest on disk. 188 | 189 | Note: If a random filename is generated, then this does create 190 | the file as empty to reserve the filename on the filesystem. 191 | 192 | Returns: 193 | (symbol, value): 194 | symbol - The symbol to add to the symbol table. 195 | value - The absolute filename of the file to manifest. 196 | """ 197 | 198 | assert isinstance(file, EmbeddedFileText_2023_09) 199 | 200 | # Figure out what filename to use for the given embedded file. 201 | # This will either be provided in the given 'file' or we will 202 | # randomly generate one. 
203 | filename: Path 204 | if not file.filename: 205 | # Raises: OSError 206 | fd, fname = mkstemp(dir=self._target_directory) # 0o600 207 | os.close(fd) 208 | filename = Path(fname) 209 | else: 210 | filename = self._target_directory / file.filename 211 | 212 | return (f"{self._find_value_prefix(file)}.{file.name}", filename) 213 | 214 | def _materialize_file( 215 | self, filename: Path, file: EmbeddedFileType, symtab: SymbolTable 216 | ) -> None: 217 | """Materialize/write the file data to disk. 218 | If self._user is set, then make it r/w by the given group. 219 | Make the file executable if the file settings indicate that we should. 220 | """ 221 | 222 | assert isinstance(file, EmbeddedFileText_2023_09) 223 | 224 | execute_permissions = 0 225 | if file.runnable: 226 | # Allow the owner to execute the file and the group if self._user is set 227 | execute_permissions |= stat.S_IXUSR | (stat.S_IXGRP if self._user is not None else 0) 228 | 229 | data = file.data.resolve(symtab=symtab) 230 | # Create the file as r/w owner, and optionally group 231 | write_file_for_user(filename, data, self._user, additional_permissions=execute_permissions) 232 | 233 | self._logger.info( 234 | f"Wrote: {file.name} -> {str(filename)}", 235 | extra=LogExtraInfo(openjd_log_content=LogContent.FILE_PATH), 236 | ) 237 | self._logger.debug( 238 | "Contents:\n%s", data, extra=LogExtraInfo(openjd_log_content=LogContent.FILE_CONTENTS) 239 | ) 240 | -------------------------------------------------------------------------------- /src/openjd/sessions/_linux/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | -------------------------------------------------------------------------------- /src/openjd/sessions/_linux/_capabilities.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | """This module contains code for interacting with Linux capabilities. The module uses the ctypes 4 | module from the Python standard library to wrap the libcap library. 5 | 6 | See https://man7.org/linux/man-pages/man7/capabilities.7.html for details on this Linux kernel 7 | feature. 
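
Typical use, as an illustrative sketch of the context-manager defined below (the process-group
id `pgid` and the sudo-based fallback are placeholders, not code from this module):

    with try_use_cap_kill() as has_cap_kill:
        if has_cap_kill:
            os.killpg(pgid, signal.SIGKILL)  # we may signal the other user's process directly
        else:
            ...  # fall back to signalling through sudo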
8 | """ 9 | 10 | import ctypes 11 | import os 12 | import sys 13 | from contextlib import contextmanager 14 | from ctypes.util import find_library 15 | from enum import Enum 16 | from functools import cache 17 | from typing import Any, Generator, Optional, Tuple, TYPE_CHECKING 18 | 19 | 20 | from .._logging import LOG 21 | 22 | 23 | # Capability sets 24 | CAP_EFFECTIVE = 0 25 | CAP_PERMITTED = 1 26 | CAP_INHERITABLE = 2 27 | 28 | # Capability bit numbers 29 | CAP_KILL = 5 30 | 31 | # Values for cap_flag_value_t arguments 32 | CAP_CLEAR = 0 33 | CAP_SET = 1 34 | 35 | cap_flag_t = ctypes.c_int 36 | cap_flag_value_t = ctypes.c_int 37 | cap_value_t = ctypes.c_int 38 | 39 | 40 | class CapabilitySetType(Enum): 41 | INHERITABLE = CAP_INHERITABLE 42 | PERMITTED = CAP_PERMITTED 43 | EFFECTIVE = CAP_EFFECTIVE 44 | 45 | 46 | class UserCapHeader(ctypes.Structure): 47 | _fields_ = [ 48 | ("version", ctypes.c_uint32), 49 | ("pid", ctypes.c_int), 50 | ] 51 | 52 | 53 | class UserCapData(ctypes.Structure): 54 | _fields_ = [ 55 | ("effective", ctypes.c_uint32), 56 | ("permitted", ctypes.c_uint32), 57 | ("inheritable", ctypes.c_uint32), 58 | ] 59 | 60 | 61 | class Cap(ctypes.Structure): 62 | _fields_ = [ 63 | ("head", UserCapHeader), 64 | ("data", UserCapData), 65 | ] 66 | 67 | 68 | if TYPE_CHECKING: 69 | cap_t = ctypes._Pointer[Cap] 70 | cap_flag_value_ptr = ctypes._Pointer[cap_flag_value_t] 71 | cap_value_ptr = ctypes._Pointer[cap_value_t] 72 | ssize_ptr_t = ctypes._Pointer[ctypes.c_ssize_t] 73 | else: 74 | cap_t = ctypes.POINTER(Cap) 75 | cap_flag_value_ptr = ctypes.POINTER(cap_flag_value_t) 76 | cap_value_ptr = ctypes.POINTER(cap_value_t) 77 | ssize_ptr_t = ctypes.POINTER(ctypes.c_ssize_t) 78 | 79 | 80 | def _cap_set_err_check( 81 | result: ctypes.c_int, 82 | func: Any, 83 | args: Tuple[Any, ...], 84 | ) -> ctypes.c_int: 85 | if result != 0: 86 | errno = ctypes.get_errno() 87 | raise OSError(errno, os.strerror(errno)) 88 | return result 89 | 90 | 91 | def _cap_get_proc_err_check( 92 | result: cap_t, 93 | func: Any, 94 | args: Tuple[cap_t, cap_flag_t, ctypes.c_int, cap_value_ptr, cap_flag_value_t], 95 | ) -> cap_t: 96 | if not result: 97 | errno = ctypes.get_errno() 98 | raise OSError(errno, os.strerror(errno)) 99 | return result 100 | 101 | 102 | def _cap_get_flag_errcheck( 103 | result: ctypes.c_int, func: Any, args: Tuple[cap_t, cap_value_t, cap_flag_t, cap_flag_value_ptr] 104 | ) -> ctypes.c_int: 105 | if result != 0: 106 | errno = ctypes.get_errno() 107 | raise OSError(errno, os.strerror(errno)) 108 | return result 109 | 110 | 111 | @cache 112 | def _get_libcap() -> Optional[ctypes.CDLL]: 113 | if not sys.platform.startswith("linux"): 114 | raise OSError(f"libcap is only available on Linux, but found platform: {sys.platform}") 115 | 116 | libcap_path = find_library("cap") 117 | if libcap_path is None: 118 | LOG.info( 119 | "Unable to locate libcap. 
Session action cancelation signals will be sent using sudo" 120 | ) 121 | return None 122 | 123 | libcap = ctypes.CDLL(libcap_path, use_errno=True) 124 | 125 | # https://man7.org/linux/man-pages/man3/cap_set_proc.3.html 126 | libcap.cap_set_proc.restype = ctypes.c_int 127 | libcap.cap_set_proc.argtypes = [ 128 | cap_t, 129 | ] 130 | libcap.cap_set_proc.errcheck = _cap_set_err_check # type: ignore 131 | 132 | # https://man7.org/linux/man-pages/man3/cap_get_proc.3.html 133 | libcap.cap_get_proc.restype = cap_t 134 | libcap.cap_get_proc.argtypes = [] 135 | libcap.cap_get_proc.errcheck = _cap_get_proc_err_check # type: ignore 136 | 137 | # https://man7.org/linux/man-pages/man3/cap_set_flag.3.html 138 | libcap.cap_set_flag.restype = ctypes.c_int 139 | libcap.cap_set_flag.argtypes = [ 140 | cap_t, 141 | cap_flag_t, 142 | ctypes.c_int, 143 | cap_value_ptr, 144 | cap_flag_value_t, 145 | ] 146 | 147 | # https://man7.org/linux/man-pages/man3/cap_get_flag.3.html 148 | libcap.cap_get_flag.restype = ctypes.c_int 149 | libcap.cap_get_flag.argtypes = ( 150 | cap_t, 151 | cap_value_t, 152 | cap_flag_t, 153 | cap_flag_value_ptr, 154 | ) 155 | libcap.cap_get_flag.errcheck = _cap_get_flag_errcheck # type: ignore 156 | 157 | return libcap 158 | 159 | 160 | def _has_capability( 161 | *, 162 | libcap: ctypes.CDLL, 163 | caps: cap_t, 164 | capability: int, 165 | capability_set_type: CapabilitySetType, 166 | ) -> bool: 167 | flag_value = cap_flag_value_t() 168 | libcap.cap_get_flag(caps, capability, capability_set_type.value, ctypes.byref(flag_value)) 169 | return flag_value.value == CAP_SET 170 | 171 | 172 | @contextmanager 173 | def try_use_cap_kill() -> Generator[bool, None, None]: 174 | """ 175 | A context-manager that attempts to leverage the CAP_KILL Linux capability. 176 | 177 | If CAP_KILL is in the current thread's effective set, this context-manager takes no action and 178 | yields True. 179 | 180 | If CAP_KILL is not in the effective set but is in the permitted set, the context-manager: 181 | 1. adds CAP_KILL to the effective set before entering the context-manager 182 | 2. yields True 183 | 3. clears CAP_KILL from the effective set when exiting the context-manager 184 | 185 | Otherwise, the context-manager does nothing and yields False 186 | 187 | Returns: 188 | A context manager that yields a bool. See above for details. 189 | """ 190 | if not sys.platform.startswith("linux"): 191 | raise OSError(f"Only Linux is supported, but platform is {sys.platform}") 192 | 193 | libcap = _get_libcap() 194 | # If libcap is not found, we yield False indicating we are not aware of having CAP_KILL 195 | if not libcap: 196 | yield False 197 | return 198 | 199 | caps = libcap.cap_get_proc() 200 | 201 | if _has_capability( 202 | libcap=libcap, 203 | caps=caps, 204 | capability=CAP_KILL, 205 | capability_set_type=CapabilitySetType.EFFECTIVE, 206 | ): 207 | LOG.debug("CAP_KILL is in the thread's effective set") 208 | # CAP_KILL is already in the effective set 209 | yield True 210 | elif _has_capability( 211 | libcap=libcap, 212 | caps=caps, 213 | capability=CAP_KILL, 214 | capability_set_type=CapabilitySetType.PERMITTED, 215 | ): 216 | # CAP_KILL is in the permitted set. We will temporarily add it to the effective set 217 | LOG.debug("CAP_KILL is in the thread's permitted set. 
Temporarily adding to effective set") 218 | cap_value_arr_t = cap_value_t * 1 219 | cap_value_arr = cap_value_arr_t() 220 | cap_value_arr[0] = CAP_KILL 221 | libcap.cap_set_flag( 222 | caps, 223 | CAP_EFFECTIVE, 224 | 1, 225 | cap_value_arr, 226 | CAP_SET, 227 | ) 228 | libcap.cap_set_proc(caps) 229 | try: 230 | yield True 231 | finally: 232 | # Clear CAP_KILL from the effective set 233 | LOG.debug("Clearing CAP_KILL from the thread's effective set") 234 | libcap.cap_set_flag( 235 | caps, 236 | CAP_EFFECTIVE, 237 | 1, 238 | cap_value_arr, 239 | CAP_CLEAR, 240 | ) 241 | libcap.cap_set_proc(caps) 242 | else: 243 | yield False 244 | 245 | 246 | def main() -> None: 247 | """A developer debugging entrypoint for testing the try_use_cap_kill() behaviour""" 248 | import logging 249 | 250 | logging.basicConfig(level=logging.DEBUG) 251 | logging.getLogger("openjd.sessions").setLevel(logging.DEBUG) 252 | 253 | with try_use_cap_kill() as has_cap_kill: 254 | LOG.info("Has CAP_KILL: %s", has_cap_kill) 255 | 256 | 257 | if __name__ == "__main__": 258 | main() 259 | -------------------------------------------------------------------------------- /src/openjd/sessions/_linux/_sudo.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import glob 4 | import os 5 | import sys 6 | import time 7 | from subprocess import Popen, DEVNULL, PIPE, STDOUT, run 8 | from typing import Optional 9 | 10 | from .._logging import LoggerAdapter, LogContent, LogExtraInfo 11 | from .._os_checker import is_posix, is_linux 12 | 13 | 14 | class FindSignalTargetError(Exception): 15 | """Exception when unable to detect the signal target""" 16 | 17 | pass 18 | 19 | 20 | def find_sudo_child_process_group_id( 21 | *, 22 | logger: LoggerAdapter, 23 | sudo_process: Popen, 24 | timeout_seconds: float = 1, 25 | ) -> Optional[int]: 26 | # Hint to mypy to not raise module attribute errors (e.g. missing os.getpgid) 27 | if sys.platform == "win32": 28 | raise NotImplementedError("This method is for POSIX hosts only") 29 | if not is_posix(): 30 | raise NotImplementedError(f"Only POSIX supported, but running on {sys.platform}") 31 | if timeout_seconds <= 0: 32 | raise ValueError(f"Expected positive value for timeout_seconds but got {timeout_seconds}") 33 | 34 | # For cross-user support, we use sudo which creates an intermediate process: 35 | # 36 | # openjd-process 37 | # | 38 | # +-- sudo 39 | # | 40 | # +-- subprocess 41 | # 42 | # Sudo forwards signals that it is able to handle, but in the case of SIGKILL sudo cannot 43 | # handle the signal and the kernel will kill it leaving the child orphaned. We need to 44 | # send SIGKILL signals to the subprocess of sudo 45 | start = time.monotonic() 46 | now = start 47 | sudo_pgid = os.getpgid(sudo_process.pid) 48 | 49 | # Repeatedly scan for child processes 50 | # 51 | # This is put in a retry loop, because it takes a non-zero amount of time before sudo and 52 | # the kernel finish creating the subprocess. We cap this because the process may exit 53 | # quickly and we may never find the child process. 
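    # Discovery strategy: on Linux we read the procfs /proc/<sudo_pid>/task/*/children files;
    # elsewhere on POSIX we fall back to `pgrep -P <sudo_pid>`. Both paths are retried below
    # until a child appears, the child exits, or the timeout elapses.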
54 | sudo_child_pid: Optional[int] = None 55 | sudo_child_pgid: Optional[int] = None 56 | try: 57 | while now - start < timeout_seconds: 58 | if not sudo_child_pid: 59 | if is_linux(): 60 | sudo_child_pid = find_sudo_child_process_id_procfs( 61 | sudo_pid=sudo_process.pid, 62 | logger=logger, 63 | ) 64 | else: 65 | sudo_child_pid = find_child_process_id_pgrep( 66 | sudo_pid=sudo_process.pid, 67 | ) 68 | 69 | if sudo_child_pid: 70 | try: 71 | sudo_child_pgid = os.getpgid(sudo_child_pid) 72 | except ProcessLookupError: 73 | # If the process has exited, we short-circuit 74 | return None 75 | # sudo first forks, then creates a new process group. There is a race condition 76 | # where the process group ID we observe has not yet changed. If the PGID detected 77 | # matches the PGID of sudo, then we retry again in the loop 78 | if sudo_child_pgid == sudo_pgid: 79 | sudo_child_pgid = None 80 | else: 81 | break 82 | 83 | # If we did not find any child processes yet, sleep for some time and retry 84 | time.sleep(min(0.05, timeout_seconds - (now - start))) 85 | now = time.monotonic() 86 | if not sudo_child_pid or not sudo_child_pgid: 87 | raise FindSignalTargetError("unable to detect subprocess before timeout") 88 | except FindSignalTargetError as e: 89 | logger.warning( 90 | f"Unable to determine signal target: {e}", 91 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 92 | ) 93 | 94 | if sudo_child_pgid: 95 | logger.debug( 96 | f"Signal target PGID = {sudo_child_pgid}", 97 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 98 | ) 99 | 100 | return sudo_child_pgid 101 | 102 | 103 | def find_sudo_child_process_id_procfs( 104 | *, 105 | logger: LoggerAdapter, 106 | sudo_pid: int, 107 | ) -> Optional[int]: 108 | # Look for the child process of sudo using procfs. See 109 | # https://docs.kernel.org/filesystems/proc.html#proc-pid-task-tid-children-information-about-task-children 110 | 111 | child_pids: set[int] = set() 112 | for task_children_path in glob.glob(f"/proc/{sudo_pid}/task/**/children"): 113 | with open(task_children_path, "r") as f: 114 | child_pids.update(int(pid_str) for pid_str in f.read().split()) 115 | 116 | # If we found exactly one child, we return it 117 | if len(child_pids) == 1: 118 | 119 | child_pid = child_pids.pop() 120 | 121 | logger.debug( 122 | f"Session action process (sudo child) PID is {child_pid}", 123 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 124 | ) 125 | return child_pid 126 | # If we found multiple child processes, this violates our assumptions about how sudo 127 | # works. 
We will fall-back to using pkill for signalling the process 128 | elif len(child_pids) > 1: 129 | raise FindSignalTargetError( 130 | f"Expected single child processes of sudo, but found {child_pids}" 131 | ) 132 | return None 133 | 134 | 135 | def find_child_process_id_pgrep( 136 | *, 137 | sudo_pid: int, 138 | ) -> Optional[int]: 139 | pgrep_result = run( 140 | ["pgrep", "-P", str(sudo_pid)], 141 | stdout=PIPE, 142 | stderr=STDOUT, 143 | stdin=DEVNULL, 144 | text=True, 145 | ) 146 | if pgrep_result.returncode != 0: 147 | raise FindSignalTargetError("Unable to query child processes of sudo process") 148 | results = pgrep_result.stdout.splitlines() 149 | if len(results) > 1: 150 | raise FindSignalTargetError(f"Expected a single child process of sudo, but found {results}") 151 | elif len(results) == 0: 152 | return None 153 | sudo_subproc_pid = int(results[0]) 154 | return sudo_subproc_pid 155 | -------------------------------------------------------------------------------- /src/openjd/sessions/_logging.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | from __future__ import annotations 3 | 4 | from typing import Any, MutableMapping, TypedDict 5 | from enum import Flag, auto 6 | import logging 7 | 8 | __all__ = ["LOG", "LoggerAdapter", "LogExtraInfo", "LogContent"] 9 | 10 | 11 | class LogContent(Flag): 12 | """ 13 | A Flag Enum which describes the content of a log record. 14 | """ 15 | 16 | BANNER = auto() # Logs which contain a banner, such as to visually seperate log sections 17 | FILE_PATH = auto() # Logs which contain a filepath 18 | FILE_CONTENTS = auto() # Logs which contain the contents of a file 19 | COMMAND_OUTPUT = auto() # Logs which contain the output of a command run 20 | EXCEPTION_INFO = ( 21 | auto() 22 | ) # Logs which contain an exception openjd encountered, potentially including sensitive information such as filepaths or host information. 23 | PROCESS_CONTROL = ( 24 | auto() 25 | ) # Logs which contain information related to starting, killing, or signalling processes. 26 | PARAMETER_INFO = ( 27 | auto() 28 | ) # Logs which contain details about parameters and their values pertaining to the running action 29 | HOST_INFO = ( 30 | auto() 31 | ) # Logs which contain details about the system environment, e.g. dependency versions, OS name, CPU architecture. 32 | 33 | 34 | class LogExtraInfo(TypedDict): 35 | """ 36 | A TypedDict which contains extra information to be added to the "extra" key of a log record. 37 | """ 38 | 39 | openjd_log_content: LogContent | None 40 | 41 | 42 | class LoggerAdapter(logging.LoggerAdapter): 43 | """ 44 | LoggerAdapter which merges the "extra" kwarg instead of replacing with what the LoggerAdapter was initialized with. 45 | """ 46 | 47 | def process( 48 | self, msg: Any, kwargs: MutableMapping[str, Any] 49 | ) -> tuple[Any, MutableMapping[str, Any]]: 50 | """ 51 | Typically the LoggerAdaptor simply replaces the `extra` key in the kwargs with the one initialized with the 52 | adapter. However, we want to merge the two dictionaries, so we override it here. 53 | """ 54 | if "extra" not in kwargs: 55 | kwargs["extra"] = self.extra 56 | else: 57 | kwargs["extra"] |= self.extra 58 | return msg, kwargs 59 | 60 | 61 | # Name the logger for the sessions module, rather than this specific file 62 | LOG = logging.getLogger(".".join(__name__.split(".")[:-1])) 63 | """ 64 | The logger of the openjd sessions module. 
The logger has the name openjd.sessions and is used 65 | throughout the openjd sessions module to provide information on actions the module is taking, as 66 | well as any output from commands run during Sessions. 67 | 68 | Some LogRecords sent to the logger will have an extra attribute named "openjd_log_content" whose 69 | value is a LogContent which provides information on what data is contained in the LogRecord, or None 70 | if there is no applicable LogContent field (for example, a message like "Ending Session") 71 | 72 | If the LogRecord does not have the "openjd_log_content" attribute, no guarantees are made as to what content 73 | is in the LogRecord. LogRecords that contain LogContent.EXCEPTION_INFO may also transitively include 74 | potentially sensitive information like filepaths or host info due to the nature of exception messages. 75 | """ 76 | LOG.setLevel(logging.INFO) 77 | 78 | _banner_log_extra = LogExtraInfo(openjd_log_content=LogContent.BANNER) 79 | 80 | 81 | def log_section_banner(logger: LoggerAdapter, section_title: str) -> None: 82 | logger.info("") 83 | logger.info("==============================================", extra=_banner_log_extra) 84 | logger.info(f"--------- {section_title}", extra=_banner_log_extra) 85 | logger.info("==============================================", extra=_banner_log_extra) 86 | 87 | 88 | def log_subsection_banner(logger: LoggerAdapter, section_title: str) -> None: 89 | logger.info("----------------------------------------------", extra=_banner_log_extra) 90 | logger.info(section_title, extra=_banner_log_extra) 91 | logger.info("----------------------------------------------", extra=_banner_log_extra) 92 | -------------------------------------------------------------------------------- /src/openjd/sessions/_os_checker.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import os 4 | import sys 5 | 6 | LINUX = "linux" 7 | MACOS = "darwin" 8 | POSIX = "posix" 9 | WINDOWS = "nt" 10 | 11 | 12 | def is_linux() -> bool: 13 | return sys.platform == LINUX 14 | 15 | 16 | def is_posix() -> bool: 17 | return os.name == POSIX 18 | 19 | 20 | def is_windows() -> bool: 21 | return os.name == WINDOWS 22 | 23 | 24 | def check_os() -> None: 25 | if not (is_posix() or is_windows()): 26 | raise NotImplementedError( 27 | "Open Job Description can only be run on a POSIX or Windows system. " 28 | f"os: {os.name} is not supported yet." 29 | ) 30 | -------------------------------------------------------------------------------- /src/openjd/sessions/_path_mapping.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
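# An illustrative sketch of the rule application implemented below:
#
#   rule = PathMappingRule.from_dict({
#       "source_path_format": "POSIX",
#       "source_path": "/mnt/shared/job",
#       "destination_path": "/local/job",
#   })
#   rule.apply(path="/mnt/shared/job/scene/shot1.blend")
#   # -> (True, "/local/job/scene/shot1.blend") when running on a POSIX host
#   rule.apply(path="/elsewhere/file.txt")
#   # -> (False, "/elsewhere/file.txt")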
2 | 3 | from dataclasses import dataclass, fields 4 | from enum import Enum 5 | from os import name as os_name 6 | from pathlib import PurePath, PurePosixPath, PureWindowsPath 7 | 8 | 9 | class PathFormat(str, Enum): 10 | POSIX = "POSIX" 11 | WINDOWS = "WINDOWS" 12 | 13 | 14 | @dataclass(frozen=True) 15 | class PathMappingRule: 16 | source_path_format: PathFormat 17 | source_path: PurePath 18 | destination_path: PurePath 19 | 20 | def __init__( 21 | self, *, source_path_format: PathFormat, source_path: PurePath, destination_path: PurePath 22 | ): 23 | if source_path_format == PathFormat.POSIX: 24 | if not isinstance(source_path, PurePosixPath): 25 | raise ValueError( 26 | "Path mapping rule source_path_format does not match source_path type" 27 | ) 28 | else: 29 | if not isinstance(source_path, PureWindowsPath): 30 | raise ValueError( 31 | "Path mapping rule source_path_format does not match source_path type" 32 | ) 33 | 34 | # This roundabout way can set the attributes of a frozen dataclass 35 | object.__setattr__(self, "source_path_format", source_path_format) 36 | object.__setattr__(self, "source_path", source_path) 37 | object.__setattr__(self, "destination_path", destination_path) 38 | 39 | @staticmethod 40 | def from_dict(rule: dict[str, str]) -> "PathMappingRule": 41 | """Builds a PathMappingRule from a dictionary representation 42 | with strings as values.""" 43 | if not rule: 44 | raise ValueError("Empty path mapping rule") 45 | 46 | field_names = [field.name for field in fields(PathMappingRule)] 47 | for name in field_names: 48 | if name not in rule: 49 | raise ValueError(f"Path mapping rule requires the following fields: {field_names}") 50 | 51 | source_path_format = PathFormat(rule["source_path_format"].upper()) 52 | source_path: PurePath 53 | if source_path_format == PathFormat.POSIX: 54 | source_path = PurePosixPath(rule["source_path"]) 55 | else: 56 | source_path = PureWindowsPath(rule["source_path"]) 57 | destination_path = PurePath(rule["destination_path"]) 58 | 59 | unsupported_fields = set(rule.keys()) - set(field_names) 60 | if unsupported_fields: 61 | raise ValueError( 62 | f"Unsupported fields for constructing path mapping rule: {unsupported_fields}" 63 | ) 64 | 65 | return PathMappingRule( 66 | source_path_format=source_path_format, 67 | source_path=source_path, 68 | destination_path=destination_path, 69 | ) 70 | 71 | def to_dict(self) -> dict[str, str]: 72 | """Returns a dictionary representation of the PathMappingRule.""" 73 | return { 74 | "source_path_format": self.source_path_format.name, 75 | "source_path": str(self.source_path), 76 | "destination_path": str(self.destination_path), 77 | } 78 | 79 | def apply(self, *, path: str) -> tuple[bool, str]: 80 | """Applies the path mapping rule on the given path, if it matches the rule. 81 | Does not collapse ".." since symbolic paths could be used. 82 | 83 | Returns: tuple[bool, str] - indicating if the path matched the rule and the resulting 84 | mapped path. If it doesn't match, then it returns the original path unmodified. 
85 | """ 86 | pure_path: PurePath 87 | if self.source_path_format == PathFormat.POSIX: 88 | pure_path = PurePosixPath(path) 89 | else: 90 | pure_path = PureWindowsPath(path) 91 | 92 | if not pure_path.is_relative_to(self.source_path): 93 | return False, path 94 | 95 | remapped_parts = ( 96 | self.destination_path.parts + pure_path.parts[len(self.source_path.parts) :] 97 | ) 98 | if os_name == "posix": 99 | result = str(PurePosixPath(*remapped_parts)) 100 | if self._has_trailing_slash(self.source_path_format, path): 101 | result += "/" 102 | else: 103 | result = str(PureWindowsPath(*remapped_parts)) 104 | if self._has_trailing_slash(self.source_path_format, path): 105 | result += "\\" 106 | 107 | return True, result 108 | 109 | def _has_trailing_slash(self, os: PathFormat, path: str) -> bool: 110 | if os == PathFormat.POSIX: 111 | return path.endswith("/") 112 | else: 113 | return path.endswith("\\") 114 | -------------------------------------------------------------------------------- /src/openjd/sessions/_runner_env_script.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | from datetime import timedelta 4 | from ._logging import LoggerAdapter 5 | from pathlib import Path 6 | from typing import Callable, Optional 7 | 8 | from openjd.model import SymbolTable 9 | from openjd.model.v2023_09 import Action as Action_2023_09 10 | from openjd.model.v2023_09 import CancelationMode as CancelationMode_2023_09 11 | from openjd.model.v2023_09 import ( 12 | CancelationMethodNotifyThenTerminate as CancelationMethodNotifyThenTerminate_2023_09, 13 | ) 14 | from openjd.model.v2023_09 import EnvironmentScript as EnvironmentScript_2023_09 15 | from ._embedded_files import EmbeddedFilesScope 16 | from ._logging import log_subsection_banner 17 | from ._runner_base import ( 18 | CancelMethod, 19 | NotifyCancelMethod, 20 | ScriptRunnerBase, 21 | ScriptRunnerState, 22 | TerminateCancelMethod, 23 | ) 24 | from ._session_user import SessionUser 25 | from ._types import ActionModel, ActionState, EnvironmentScriptModel 26 | 27 | __all__ = ("EnvironmentScriptRunner",) 28 | 29 | 30 | _ENV_EXIT_DEFAULT_TIMEOUT = timedelta(minutes=5) 31 | """The default timeout for environment exit actions if none is specified. 32 | 33 | See https://github.com/OpenJobDescription/openjd-specifications/wiki/2023-09-Template-Schemas#5-action 34 | """ 35 | 36 | 37 | class EnvironmentScriptRunner(ScriptRunnerBase): 38 | """Use this to run actions from an Environment.""" 39 | 40 | _environment_script: Optional[EnvironmentScriptModel] 41 | """The environment script that we're running. 42 | """ 43 | 44 | _symtab: SymbolTable 45 | """Treat this as immutable. 46 | A SymbolTable containing values for all defined variables in the Step 47 | Script's scope (exluding any symbols defined within the Step Script itself). 48 | """ 49 | 50 | _session_files_directory: Path 51 | """The location in the filesystem where embedded files will be materialized. 52 | """ 53 | 54 | _action: Optional[ActionModel] 55 | """If defined, then this is the action that is currently running, or was last run. 
56 | """ 57 | 58 | def __init__( 59 | self, 60 | *, 61 | logger: LoggerAdapter, 62 | user: Optional[SessionUser] = None, 63 | # environment for the subprocess that is run 64 | os_env_vars: Optional[dict[str, Optional[str]]] = None, 65 | # The working directory of the session 66 | session_working_directory: Path, 67 | # `cwd` for the subprocess that's run 68 | startup_directory: Optional[Path] = None, 69 | # Callback to invoke when a running action exits 70 | callback: Optional[Callable[[ActionState], None]] = None, 71 | environment_script: Optional[EnvironmentScriptModel] = None, 72 | symtab: SymbolTable, 73 | # Directory within which files/attachments should be materialized 74 | session_files_directory: Path, 75 | ): 76 | """ 77 | Arguments (from base class): 78 | logger (Logger): The logger to which all messages should be sent from this and the 79 | subprocess. 80 | os_env_vars (dict[str, str]): Environment variables and their values to inject into the 81 | running subprocess. 82 | session_working_directory (Path): The temporary directory in which the Session is running. 83 | user (Optional[SessionUser]): The user to run the subprocess as, if given. Defaults to the 84 | current user. 85 | startup_directory (Optional[Path]): cwd to set for the subprocess, if it's possible to set it. 86 | callback (Optional[Callable[[ActionState], None]]): Callback to invoke when the running 87 | subprocess has started, exited, or failed to start. Defaults to None. 88 | Arguments (unique to this class): 89 | environment (EnvironmentScriptModel): The Environment Script model that we're going to be running. 90 | symtab (SymbolTable): A SymbolTable containing values for all defined variables in the Step 91 | Script's scope (exluding any symbols defined within the Step Script itself). 92 | session_files_directory (Path): The location in the filesystem where embedded files will 93 | be materialized. 94 | """ 95 | super().__init__( 96 | logger=logger, 97 | user=user, 98 | os_env_vars=os_env_vars, 99 | session_working_directory=session_working_directory, 100 | startup_directory=startup_directory, 101 | callback=callback, 102 | ) 103 | self._environment_script = environment_script 104 | self._symtab = symtab 105 | self._session_files_directory = session_files_directory 106 | self._action = None 107 | 108 | if self._environment_script and not isinstance( 109 | self._environment_script, EnvironmentScript_2023_09 110 | ): 111 | raise NotImplementedError("Unknown model type") 112 | 113 | def _run_env_action( 114 | self, 115 | action: ActionModel, 116 | *, 117 | default_timeout: Optional[timedelta] = None, 118 | ) -> None: 119 | """Run a specific given action from this Environment.""" 120 | 121 | log_subsection_banner(self._logger, "Phase: Setup") 122 | 123 | # Write any embedded files to disk 124 | if ( 125 | self._environment_script is not None 126 | and self._environment_script.embeddedFiles is not None 127 | ): 128 | symtab = SymbolTable(source=self._symtab) 129 | # Note: _materialize_files calls the callback if it fails. 
130 | self._materialize_files( 131 | EmbeddedFilesScope.ENV, 132 | self._environment_script.embeddedFiles, 133 | self._session_files_directory, 134 | symtab, 135 | ) 136 | if self.state == ScriptRunnerState.FAILED: 137 | return 138 | else: 139 | symtab = self._symtab 140 | 141 | # Construct the command by evalutating the format strings in the command 142 | self._action = action 143 | self._run_action(self._action, symtab, default_timeout=default_timeout) 144 | 145 | def enter(self) -> None: 146 | """Run the Environment's onEnter action.""" 147 | if self.state != ScriptRunnerState.READY: 148 | raise RuntimeError("This cannot be used to run a second subprocess.") 149 | 150 | # For the type checker 151 | if self._environment_script is not None: 152 | assert isinstance(self._environment_script, EnvironmentScript_2023_09) 153 | if self._environment_script is None or self._environment_script.actions.onEnter is None: 154 | self._state_override = ScriptRunnerState.SUCCESS 155 | # Nothing to do, no action defined. Call the callback 156 | # to inform the caller that the run is complete, and then exit. 157 | if self._callback is not None: 158 | self._callback(ActionState.SUCCESS) 159 | return 160 | 161 | self._run_env_action(self._environment_script.actions.onEnter) 162 | 163 | def exit(self) -> None: 164 | """Run the Environment's onExit action.""" 165 | if self.state != ScriptRunnerState.READY: 166 | raise RuntimeError("This cannot be used to run a second subprocess.") 167 | 168 | # For the type checker 169 | if self._environment_script is not None: 170 | assert isinstance(self._environment_script, EnvironmentScript_2023_09) 171 | if self._environment_script is None or self._environment_script.actions.onExit is None: 172 | self._state_override = ScriptRunnerState.SUCCESS 173 | # Nothing to do, no action defined. Call the callback 174 | # to inform the caller that the run is complete, and then exit. 175 | if self._callback is not None: 176 | self._callback(ActionState.SUCCESS) 177 | return 178 | 179 | self._run_env_action( 180 | self._environment_script.actions.onExit, 181 | default_timeout=_ENV_EXIT_DEFAULT_TIMEOUT, 182 | ) 183 | 184 | def cancel( 185 | self, *, time_limit: Optional[timedelta] = None, mark_action_failed: bool = False 186 | ) -> None: 187 | if self._action is None: 188 | # Nothing to do. 189 | return 190 | 191 | # For the type checker 192 | assert isinstance(self._action, Action_2023_09) 193 | 194 | method: CancelMethod 195 | if ( 196 | self._action.cancelation is None 197 | or self._action.cancelation.mode == CancelationMode_2023_09.TERMINATE 198 | ): 199 | # Note: Default cancelation for a 2023-09 Step Script is Terminate 200 | method = TerminateCancelMethod() 201 | else: 202 | model_cancel_method = self._action.cancelation 203 | # For the type checker 204 | assert isinstance(model_cancel_method, CancelationMethodNotifyThenTerminate_2023_09) 205 | if model_cancel_method.notifyPeriodInSeconds is None: 206 | # Default grace period is 30s for a 2023-09 Environment Script's notify cancel 207 | method = NotifyCancelMethod(terminate_delay=timedelta(seconds=30)) 208 | else: 209 | method = NotifyCancelMethod( 210 | terminate_delay=timedelta(seconds=model_cancel_method.notifyPeriodInSeconds) 211 | ) 212 | 213 | # Note: If the given time_limit is less than that in the method, then the time_limit will be what's used. 
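        # (Illustrative example: a notify-then-terminate cancelation with a 30 second notify
        # period but a 10 second time_limit notifies immediately and terminates after ~10s.)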
214 | self._cancel(method, time_limit, mark_action_failed) 215 | -------------------------------------------------------------------------------- /src/openjd/sessions/_runner_step_script.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | from datetime import timedelta 4 | from ._logging import LoggerAdapter 5 | from pathlib import Path 6 | from typing import Callable, Optional 7 | 8 | from openjd.model import SymbolTable 9 | from openjd.model.v2023_09 import CancelationMode as CancelationMode_2023_09 10 | from openjd.model.v2023_09 import StepScript as StepScript_2023_09 11 | from openjd.model.v2023_09 import ( 12 | CancelationMethodNotifyThenTerminate as CancelationMethodNotifyThenTerminate_2023_09, 13 | ) 14 | from ._embedded_files import EmbeddedFilesScope 15 | from ._logging import log_subsection_banner 16 | from ._runner_base import ( 17 | CancelMethod, 18 | NotifyCancelMethod, 19 | ScriptRunnerBase, 20 | ScriptRunnerState, 21 | TerminateCancelMethod, 22 | ) 23 | from ._session_user import SessionUser 24 | from ._types import ActionState, StepScriptModel 25 | 26 | __all__ = ("StepScriptRunner",) 27 | 28 | 29 | class StepScriptRunner(ScriptRunnerBase): 30 | """Use this to run actions from a Step Script.""" 31 | 32 | _script: StepScriptModel 33 | """The step script that we're running. 34 | """ 35 | 36 | _symtab: SymbolTable 37 | """Treat this as immutable. 38 | A SymbolTable containing values for all defined variables in the Step 39 | Script's scope (exluding any symbols defined within the Step Script itself). 40 | """ 41 | 42 | _session_files_directory: Path 43 | """The location in the filesystem where embedded files will be materialized. 44 | """ 45 | 46 | def __init__( 47 | self, 48 | *, 49 | logger: LoggerAdapter, 50 | user: Optional[SessionUser] = None, 51 | # environment for the subprocess that is run 52 | os_env_vars: Optional[dict[str, Optional[str]]] = None, 53 | # The working directory of the session 54 | session_working_directory: Path, 55 | # `cwd` for the subprocess that's run 56 | startup_directory: Optional[Path] = None, 57 | # Callback to invoke when a running action exits 58 | callback: Optional[Callable[[ActionState], None]] = None, 59 | script: StepScriptModel, 60 | symtab: SymbolTable, 61 | # Directory within which files/attachments should be materialized 62 | session_files_directory: Path, 63 | ): 64 | """ 65 | Arguments (from base class): 66 | logger (LoggerAdapter): The logger to which all messages should be sent from this and the 67 | subprocess. 68 | os_env_vars (dict[str, str]): Environment variables and their values to inject into the 69 | running subprocess. 70 | session_working_directory (Path): The temporary directory in which the Session is running. 71 | user (Optional[SessionUser]): The user to run the subprocess as, if given. Defaults to the 72 | current user. 73 | startup_directory (Optional[Path]): cwd to set for the subprocess, if it's possible to set it. 74 | callback (Optional[Callable[[ActionState], None]]): Callback to invoke when the running 75 | subprocess has started, exited, or failed to start. Defaults to None. 76 | Arguments (unique to this class): 77 | script (StepScriptModel): The Step Script model that we're going to be running. 78 | symtab (SymbolTable): A SymbolTable containing values for all defined variables in the Step 79 | Script's scope (exluding any symbols defined within the Step Script itself). 
80 | session_files_directory (Path): The location in the filesystem where embedded files will 81 | be materialized. 82 | """ 83 | super().__init__( 84 | logger=logger, 85 | user=user, 86 | os_env_vars=os_env_vars, 87 | session_working_directory=session_working_directory, 88 | startup_directory=startup_directory, 89 | callback=callback, 90 | ) 91 | self._script = script 92 | self._symtab = symtab 93 | self._session_files_directory = session_files_directory 94 | 95 | if not isinstance(self._script, StepScript_2023_09): 96 | raise NotImplementedError("Unknown model type") 97 | 98 | def run(self) -> None: 99 | """Run the Step Script's onRun Action.""" 100 | if self.state != ScriptRunnerState.READY: 101 | raise RuntimeError("This cannot be used to run a second subprocess.") 102 | 103 | log_subsection_banner(self._logger, "Phase: Setup") 104 | 105 | # For the type checker. 106 | assert isinstance(self._script, StepScript_2023_09) 107 | # Write any embedded files to disk 108 | if self._script.embeddedFiles is not None: 109 | symtab = SymbolTable(source=self._symtab) 110 | self._materialize_files( 111 | EmbeddedFilesScope.STEP, 112 | self._script.embeddedFiles, 113 | self._session_files_directory, 114 | symtab, 115 | ) 116 | if self.state == ScriptRunnerState.FAILED: 117 | return 118 | else: 119 | symtab = self._symtab 120 | 121 | # Construct the command by evalutating the format strings in the command 122 | self._run_action(self._script.actions.onRun, symtab) 123 | 124 | def cancel( 125 | self, *, time_limit: Optional[timedelta] = None, mark_action_failed: bool = False 126 | ) -> None: 127 | # For the type checker. 128 | assert isinstance(self._script, StepScript_2023_09) 129 | 130 | method: CancelMethod 131 | if ( 132 | self._script.actions.onRun.cancelation is None 133 | or self._script.actions.onRun.cancelation.mode == CancelationMode_2023_09.TERMINATE 134 | ): 135 | # Note: Default cancelation for a 2023-09 Step Script is Terminate 136 | method = TerminateCancelMethod() 137 | else: 138 | model_cancel_method = self._script.actions.onRun.cancelation 139 | # For the type checker 140 | assert isinstance(model_cancel_method, CancelationMethodNotifyThenTerminate_2023_09) 141 | if model_cancel_method.notifyPeriodInSeconds is None: 142 | # Default grace period is 120s for a 2023-09 Step Script's notify cancel 143 | method = NotifyCancelMethod(terminate_delay=timedelta(seconds=120)) 144 | else: 145 | method = NotifyCancelMethod( 146 | terminate_delay=timedelta(seconds=model_cancel_method.notifyPeriodInSeconds) 147 | ) 148 | 149 | # Note: If the given time_limit is less than that in the method, then the time_limit will be what's used. 150 | self._cancel(method, time_limit, mark_action_failed) 151 | -------------------------------------------------------------------------------- /src/openjd/sessions/_scripts/_posix/_signal_subprocess.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh -x 2 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | # Usage: 5 | # $0 6 | # where: 7 | # PID of the parent process to the one to signal 8 | # is compatible with the `-s` option of /bin/kill 9 | # must be "True" or "False"; if True then 10 | # we instead signal the child of but not 11 | # must be "True" or "False" 12 | 13 | 14 | # Note: A limitation of this implementation is that it will only sigkill 15 | # processes that are in the same process-group as the command that we ran. 
16 | # In the future, we can extend this to killing all processes spawned (including into 17 | # new process-groups since the parent-pid will allow the mapping) 18 | # by a depth-first traversal through the children. At each recursive 19 | # step we: 20 | # 1. SIGSTOP the process, so that it cannot create new subprocesses; 21 | # 2. Recurse into each child; and 22 | # 3. SIGKILL the process. 23 | # Things to watch for when doing so: 24 | # a. PIDs can get reused; just because a pid was a child of a process at one point doesn't 25 | # mean that it's still the same process when we recurse to it. So, check that the parent-pid 26 | # of any child is still as expected before we signal it or collect its children. 27 | # b. When we run the command using `sudo` then we need to either run code that does the whole 28 | # algorithm as the other user, or use `sudo` to send every process signal. 29 | 30 | set -x 31 | 32 | PID="$1" 33 | SIG="$2" 34 | 35 | [ -f /bin/kill ] && KILL=/bin/kill 36 | [ ! -n "${KILL:-}" ] && [ -f /usr/bin/kill ] && KILL=/usr/bin/kill 37 | 38 | if [ ! -n "${KILL:-}" ] 39 | then 40 | echo "ERROR - Could not find the 'kill' command under /bin or /usr/bin. Please install it." 41 | exit 1 42 | fi 43 | 44 | exec "$KILL" -s "$SIG" -- "$PID" 45 | -------------------------------------------------------------------------------- /src/openjd/sessions/_scripts/_windows/_signal_win_subprocess.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # Note: This script will only be run as a subprocess. 4 | 5 | import sys 6 | import ctypes 7 | from ctypes.wintypes import BOOL, DWORD 8 | 9 | # This assertion short-circuits mypy from type checking this module on platforms other than Windows 10 | # https://mypy.readthedocs.io/en/stable/common_issues.html#python-version-and-system-platform-checks 11 | assert sys.platform == "win32" 12 | 13 | kernel32 = ctypes.WinDLL("kernel32") 14 | 15 | kernel32.AllocConsole.restype = BOOL 16 | kernel32.AllocConsole.argtypes = [] 17 | 18 | # https://learn.microsoft.com/en-us/windows/console/attachconsole 19 | kernel32.AttachConsole.restype = BOOL 20 | kernel32.AttachConsole.argtypes = [ 21 | DWORD, # [in] dwProcessId 22 | ] 23 | 24 | # https://learn.microsoft.com/en-us/windows/console/freeconsole 25 | kernel32.FreeConsole.restype = BOOL 26 | kernel32.FreeConsole.argtypes = [] 27 | 28 | kernel32.GenerateConsoleCtrlEvent.restype = BOOL 29 | kernel32.GenerateConsoleCtrlEvent.argtypes = [ 30 | DWORD, # [in] dwCtrlEvent 31 | DWORD, # [in] dwProcessGroupId 32 | ] 33 | 34 | CTRL_C_EVENT = 0 35 | CTRL_BREAK_EVENT = 1 36 | 37 | ATTACH_PARENT_PROCESS = -1 38 | 39 | 40 | def signal_process(pgid: int): 41 | # Signals can only be sent to processes that are attached to the same console. 42 | # We first detach from the current console and re-attach to the console of the target process group. 43 | if not kernel32.FreeConsole(): 44 | raise ctypes.WinError() 45 | if not kernel32.AttachConsole(pgid): 46 | raise ctypes.WinError() 47 | 48 | # Send the signal 49 | # We send CTRL-BREAK since the handler for it cannot be disabled.
50 | # https://learn.microsoft.com/en-us/windows/console/ctrl-c-and-ctrl-break-signals 51 | 52 | # We only send CTRL-BREAK 53 | # if not kernel32.GenerateConsoleCtrlEvent(CTRL_C_EVENT, pgid): 54 | # raise ctypes.WinError() 55 | if not kernel32.GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, pgid): 56 | raise ctypes.WinError() 57 | 58 | if not kernel32.FreeConsole(): 59 | raise ctypes.WinError() 60 | if not kernel32.AttachConsole(ATTACH_PARENT_PROCESS): 61 | raise ctypes.WinError() 62 | 63 | 64 | if __name__ == "__main__": 65 | signal_process(int(sys.argv[1])) 66 | -------------------------------------------------------------------------------- /src/openjd/sessions/_session_user.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import os 4 | from typing import Optional 5 | from abc import ABC, abstractmethod 6 | from ctypes.wintypes import HANDLE 7 | 8 | from ._os_checker import is_posix, is_windows 9 | 10 | if is_posix(): 11 | import grp 12 | import pwd 13 | 14 | if is_windows(): 15 | import win32api 16 | import win32security 17 | import pywintypes 18 | import winerror 19 | from win32con import LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT 20 | 21 | from ._win32._helpers import get_process_user, get_current_process_session_id # type: ignore 22 | 23 | __all__ = ( 24 | "PosixSessionUser", 25 | "SessionUser", 26 | "WindowsSessionUser", 27 | "BadCredentialsException", 28 | ) 29 | 30 | CURRENT_PROCESS_RUNNING_IN_WINDOWS_SESSION_0: bool 31 | if is_windows(): 32 | CURRENT_PROCESS_RUNNING_IN_WINDOWS_SESSION_0 = 0 == get_current_process_session_id() 33 | else: 34 | CURRENT_PROCESS_RUNNING_IN_WINDOWS_SESSION_0 = False 35 | 36 | 37 | class BadCredentialsException(Exception): 38 | """Exception raised for incorrect username or password.""" 39 | 40 | pass 41 | 42 | 43 | class SessionUser(ABC): 44 | """Base class for holding information on the specific os-user identity to run 45 | a Session as. 46 | """ 47 | 48 | user: str 49 | """ 50 | User name of the identity to run the Session's subprocesses under. 51 | """ 52 | 53 | @staticmethod 54 | @abstractmethod 55 | def _get_process_user(): 56 | """ 57 | Returns the user name of the user running the current process. 58 | """ 59 | pass 60 | 61 | def is_process_user(self) -> bool: 62 | """ 63 | Returns True if the session user is the user running the current process, else False. 64 | 65 | """ 66 | return self.user == self._get_process_user() 67 | 68 | 69 | class PosixSessionUser(SessionUser): 70 | __slots__ = ("user", "group") 71 | """Specific os-user identity to run a Session as under Linux/macOS.""" 72 | 73 | user: str 74 | """User name of the identity to run the Session's subprocesses under. 75 | """ 76 | 77 | group: str 78 | """Group name of the identity to run the Session's subprocesses under. 79 | """ 80 | 81 | def __init__(self, user: str, *, group: Optional[str] = None) -> None: 82 | """ 83 | Arguments: 84 | user (str): The user 85 | group (Optional[str]): The group. Defaults to the name of this 86 | process' effective group. 87 | """ 88 | if not is_posix(): 89 | raise RuntimeError("Only available on posix systems.") 90 | self.user = user 91 | self.group = group if group else grp.getgrgid(os.getegid()).gr_name # type: ignore 92 | 93 | @staticmethod 94 | def _get_process_user(): 95 | """ 96 | Returns the user name of the user running the current process. 
97 | """ 98 | return pwd.getpwuid(os.geteuid()).pw_name 99 | 100 | 101 | class WindowsSessionUser(SessionUser): 102 | """Specific os-user identity to run a Session as under Windows. 103 | 104 | Note that you must check whether you are running in Windows Session 0 prior to 105 | creating an instance of this class. 106 | 1. If you're not in Session 0 (i.e. you're in a typical interactive logon via the desktop) 107 | then you must instantiate this class with a username + password; providing a logon token 108 | is not allowed at this time. 109 | 2. If you are in Session 0 (i.e. you're running within the context of a Windows Service; this 110 | includes a logon session obtained by ssh-ing into the host), then you must instantiate this 111 | class with a username + logon_token; providing a password is not allowed in Session 0. To 112 | create a logon_token, you will want to look in to the LogonUser family of Win32 system APIs. 113 | 114 | The user provided in this class directly influences the Directory ACL of the Session Working 115 | Directory that is created. The created directory: 116 | 1. Has Full Control by the owner of the calling process; and 117 | 2. Has Modify access by the provided user. 118 | The Session working directory will also be set so that all child directories and files 119 | inherit these permissions. 120 | """ 121 | 122 | __slots__ = ("user", "password", "logon_token") 123 | 124 | user: str 125 | """ 126 | User name of the identity to run the Session's subprocesses under. 127 | This can be either a plain username for a local user or a domain username in down-level logon form 128 | ex: localUser, domain\\domainUser 129 | """ 130 | 131 | password: Optional[str] 132 | """ 133 | Password of the identity to run the Session's subprocess(es) under. 134 | Mutually exclusive with: logon_token 135 | """ 136 | 137 | logon_token: Optional[HANDLE] 138 | """ 139 | A logon token to use to run the Session's subprocess(es) under. 140 | Mutually exclusive with: password 141 | """ 142 | 143 | def __init__( 144 | self, 145 | user: str, 146 | *, 147 | password: Optional[str] = None, 148 | logon_token: Optional[HANDLE] = None, 149 | ) -> None: 150 | """ 151 | Arguments: 152 | user (str): 153 | User name of the identity to run the Session's subprocesses under. 154 | This can be either a plain username for a local user, a domain username in down-level logon form, 155 | or a domain's UPN. 156 | ex: localUser, domain\\domainUser, domainUser@domain.com 157 | 158 | password (Optional[str]): 159 | Password of the identity to run the Session's subprocess under. This argument is mutually-exclusive with the 160 | "logon_token" argument. 161 | 162 | logon_token (Optional[ctypes.wintypes.HANDLE]): 163 | Windows logon handle for the target user. This argument is mutually-exclusive with the 164 | "password" argument. 165 | """ 166 | if not is_windows(): 167 | raise RuntimeError("Only available on Windows systems.") 168 | 169 | if password and logon_token: 170 | raise ValueError('The "password" and "logon_token" arguments are mutually exclusive') 171 | 172 | self.user = user 173 | 174 | # Note: We allow user to be the process user to support the case of being able to supply 175 | # the group that the process will run under; differing from the user's default group. 176 | if self.is_process_user(): 177 | if password is not None: 178 | raise RuntimeError("User is the process owner. Do not provide a password.") 179 | if logon_token is not None: 180 | raise RuntimeError("User is the process owner. 
Do not provide a logon token.") 181 | else: 182 | # Note: "" is allowed as that may actually be the password for the user. 183 | if password is None and logon_token is None: 184 | raise RuntimeError( 185 | "Must supply a password or logon token. User is not the process owner." 186 | ) 187 | if password is not None: 188 | if CURRENT_PROCESS_RUNNING_IN_WINDOWS_SESSION_0: 189 | raise RuntimeError( 190 | ( 191 | "Must supply a logon_token rather than a password. " 192 | "Passwords are not supported when running in Windows Session 0." 193 | ) 194 | ) 195 | self._validate_username_password(user, password) 196 | self.password = password 197 | self.logon_token = None 198 | else: 199 | self.password = None 200 | self.logon_token = logon_token 201 | 202 | @staticmethod 203 | def _get_process_user(): 204 | return get_process_user() 205 | 206 | @staticmethod 207 | def _validate_username_password(user_name: str, password: str) -> Optional[bool]: 208 | """ 209 | Validates the username and password against Windows authentication. 210 | 211 | Args: 212 | user_name (str): The username to be validated. 213 | domain_name (str): The domain where the user exists. None means current domain. 214 | password (str): The password to be validated. 215 | 216 | Returns: 217 | Optional[bool]: True if the credentials are valid 218 | 219 | Raises: 220 | BadCredentialsException: If the username or password is incorrect. 221 | """ 222 | if "\\" in user_name: 223 | domain, username = user_name.split("\\") 224 | else: 225 | domain, username = None, user_name 226 | try: 227 | handle = win32security.LogonUser( 228 | username, 229 | domain, 230 | password, 231 | LOGON32_LOGON_INTERACTIVE, 232 | LOGON32_PROVIDER_DEFAULT, 233 | ) 234 | win32api.CloseHandle(handle) 235 | return True 236 | except pywintypes.error as e: 237 | if e.winerror == winerror.ERROR_LOGON_FAILURE: 238 | raise BadCredentialsException("The username or password is incorrect.") 239 | raise 240 | -------------------------------------------------------------------------------- /src/openjd/sessions/_tempdir.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import os 4 | import stat 5 | from ._logging import LoggerAdapter, LogContent, LogExtraInfo 6 | from pathlib import Path 7 | from shutil import chown, rmtree 8 | from tempfile import gettempdir, mkdtemp 9 | from typing import Optional, cast 10 | 11 | from ._session_user import PosixSessionUser, SessionUser, WindowsSessionUser 12 | from ._windows_permission_helper import WindowsPermissionHelper 13 | from ._os_checker import is_posix, is_windows 14 | 15 | if is_windows(): 16 | from ._win32._helpers import get_process_user # type: ignore 17 | 18 | 19 | def custom_gettempdir(logger: Optional[LoggerAdapter] = None) -> str: 20 | """ 21 | Get a platform-specific temporary directory. 22 | 23 | For Windows systems, this function returns a specific directory path, 24 | '%PROGRAMDATA%\\Amazon\\'. If this directory does not exist, it will be created. 25 | For non-Windows systems, it returns the system's default temporary directory. 26 | 27 | Args: 28 | logger (Optional[LoggerAdapter]): The logger to which all messages should be sent from this and the 29 | subprocess. 30 | 31 | Returns: 32 | str: The path to the temporary directory specific to the operating system. 
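    Example (illustrative; assumes %PROGRAMDATA% is C:\ProgramData on Windows):
        >>> custom_gettempdir()
        'C:\\ProgramData\\Amazon\\OpenJD'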
33 | """ 34 | if is_windows(): 35 | program_data_path = os.getenv("PROGRAMDATA") 36 | if program_data_path is None: 37 | program_data_path = r"C:\ProgramData" 38 | if logger: 39 | logger.warning( 40 | f'Environment variable "PROGRAMDATA" is not set. Creating the session working directories under {program_data_path}', 41 | extra=LogExtraInfo(openjd_log_content=LogContent.FILE_PATH), 42 | ) 43 | 44 | temp_dir_parent = os.path.join(program_data_path, "Amazon") 45 | else: 46 | temp_dir_parent = gettempdir() 47 | 48 | temp_dir = os.path.join(temp_dir_parent, "OpenJD") 49 | os.makedirs(temp_dir, exist_ok=True) 50 | return temp_dir 51 | 52 | 53 | class TempDir: 54 | """This class securely creates a temporary directory using the same rules as mkdtemp(), 55 | but with the option of having the directory owned by a user other than this process' user. 56 | 57 | Notes: 58 | posix - Only the group of the temp directory is set. The directory owner will be this 59 | process' uid. This process must be running as root to change the ownership, so we don't 60 | do it (don't really need to, either, since the use-case for this class is to 61 | create the Open Job Description Session working directory and that working directory needs to be 62 | both writable and deletable by this process). 63 | """ 64 | 65 | path: Path 66 | """Pathname of the created directory. 67 | """ 68 | 69 | def __init__( 70 | self, 71 | *, 72 | dir: Optional[Path] = None, 73 | prefix: Optional[str] = None, 74 | user: Optional[SessionUser] = None, 75 | logger: Optional[LoggerAdapter] = None, 76 | ): 77 | """ 78 | Arguments: 79 | dir (Optional[Path]): The directory in which to create the temp dir. 80 | Defaults to tempfile.gettempdir(). 81 | prefix (Optional[str]): A prefix to use in the name of the generated temp dir. 82 | Defaults to "". 83 | user (Optional[SessionUser]): A group that will own the created directory. 84 | The group-write bit will be set on the directory if this option is supplied. 85 | Defaults to this process' effective user/group. 86 | logger (Optional[LoggerAdapter]): The logger to which all messages should be sent from this and the 87 | subprocess. 88 | 89 | Raises: 90 | RuntimeError - If this process cannot create the temporary directory, or change the 91 | group ownership of the created directory. 92 | """ 93 | # pre-flight checks 94 | if user and is_posix() and not isinstance(user, PosixSessionUser): # pragma: nocover 95 | raise ValueError("user must be a posix-user. Got %s", type(user)) 96 | elif user and is_windows() and not isinstance(user, WindowsSessionUser): 97 | raise ValueError("user must be a windows-user. Got %s", type(user)) 98 | 99 | if not dir: 100 | dir = Path(custom_gettempdir(logger)) 101 | 102 | dir = dir.resolve() 103 | try: 104 | self.path = Path(mkdtemp(dir=dir, prefix=prefix)) # 0o700 105 | except OSError as err: 106 | raise RuntimeError(f"Could not create temp directory within {str(dir)}: {str(err)}") 107 | 108 | # Change the owner 109 | if user: 110 | if is_posix(): 111 | user = cast(PosixSessionUser, user) 112 | # Change ownership 113 | try: 114 | chown(self.path, group=user.group) 115 | except OSError as err: 116 | raise RuntimeError( 117 | f"Could not change ownership of directory '{str(dir)}' (error: {str(err)}). Please ensure that uid {os.geteuid()} is a member of group {user.group}." # type: ignore 118 | ) 119 | # Update the permissions to include the group after the group is changed 120 | # Note: Only after changing group for security in case the group-ownership 121 | # change fails. 
122 | os.chmod(self.path, mode=stat.S_IRWXU | stat.S_IRWXG) 123 | elif is_windows(): 124 | user = cast(WindowsSessionUser, user) 125 | try: 126 | WindowsPermissionHelper.set_permissions( 127 | str(self.path), 128 | principals_full_control=[get_process_user()], 129 | principals_modify_access=[user.user], 130 | ) 131 | except Exception as err: 132 | raise RuntimeError( 133 | f"Could not change permissions of directory '{str(dir)}' (error: {str(err)})" 134 | ) 135 | 136 | def cleanup(self) -> None: 137 | """Deletes the temporary directory and all of its contents. 138 | Raises: 139 | RuntimeError - If not all files could be deleted. 140 | """ 141 | encountered_errors = False 142 | file_paths: list[str] = [] 143 | 144 | def onerror(f, p, e): 145 | nonlocal encountered_errors 146 | nonlocal file_paths 147 | encountered_errors = True 148 | file_paths.append(str(p)) 149 | 150 | rmtree(self.path, onerror=onerror) 151 | if encountered_errors: 152 | raise RuntimeError( 153 | f"Files within temporary directory {str(self.path)} could not be deleted.\n" 154 | + "\n".join(file_paths) 155 | ) 156 | -------------------------------------------------------------------------------- /src/openjd/sessions/_types.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | from enum import Enum 4 | 5 | from openjd.model.v2023_09 import Action as Action_2023_09 6 | from openjd.model.v2023_09 import EmbeddedFiles as EmbeddedFiles_2023_09 7 | from openjd.model.v2023_09 import EmbeddedFileText as EmbeddedFileText_2023_09 8 | from openjd.model.v2023_09 import Environment as Environment_2023_09 9 | from openjd.model.v2023_09 import EnvironmentScript as EnvironmentScript_2023_09 10 | from openjd.model.v2023_09 import StepScript as StepScript_2023_09 11 | 12 | # ---- Types to export 13 | 14 | EnvironmentIdentifier = str 15 | 16 | # Turn this into a Union as new schemas are added. 17 | StepScriptModel = StepScript_2023_09 18 | 19 | # Turn this into a Union as new schemas are added. 20 | EnvironmentModel = Environment_2023_09 21 | EnvironmentScriptModel = EnvironmentScript_2023_09 22 | 23 | 24 | class ActionState(str, Enum): 25 | RUNNING = "running" 26 | """The action is actively running.""" 27 | 28 | CANCELED = "canceled" 29 | """The action has been canceled and is no longer running.""" 30 | 31 | TIMEOUT = "timeout" 32 | """The action has been canceled due to reaching its runtime limit.""" 33 | 34 | FAILED = "failed" 35 | """The action is no longer running, and exited with a non-zero 36 | return code.""" 37 | 38 | SUCCESS = "success" 39 | """The action is no longer running, and exited with a zero 40 | return code.""" 41 | 42 | 43 | # --- Internal types 44 | 45 | # dev note: WHen adding support for new schemas, or when an existing schema adds a 46 | # new kind of embedded file, then make these a Union of all of the relevant types. 47 | EmbeddedFileType = EmbeddedFileText_2023_09 48 | EmbeddedFilesListType = EmbeddedFiles_2023_09 49 | 50 | # Turn this into a Union as new schemas are added. 51 | ActionModel = Action_2023_09 52 | -------------------------------------------------------------------------------- /src/openjd/sessions/_win32/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
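# A minimal usage sketch of the TempDir class above (illustrative; the prefix and file name are
# hypothetical, and no session user is supplied, so the directory is owned solely by the calling
# process):
#
#     from openjd.sessions._tempdir import TempDir
#
#     tmp = TempDir(prefix="openjd_session_")        # created under custom_gettempdir(), mode 0o700
#     try:
#         (tmp.path / "hello.txt").write_text("hi")  # work inside the session directory
#     finally:
#         tmp.cleanup()                              # removes the directory and all of its contents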
2 | -------------------------------------------------------------------------------- /src/openjd/sessions/_win32/_helpers.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import sys 4 | 5 | # This assertion short-circuits mypy from type checking this module on platforms other than Windows 6 | # https://mypy.readthedocs.io/en/stable/common_issues.html#python-version-and-system-platform-checks 7 | assert sys.platform == "win32" 8 | 9 | import win32api 10 | import win32con 11 | from ctypes.wintypes import DWORD, HANDLE 12 | from ctypes import ( 13 | WinError, 14 | byref, 15 | cast, 16 | c_void_p, 17 | c_wchar, 18 | c_wchar_p, 19 | sizeof, 20 | ) 21 | from contextlib import contextmanager 22 | from typing import Generator, Optional 23 | 24 | from ._api import ( 25 | # Constants 26 | LOGON32_LOGON_INTERACTIVE, 27 | LOGON32_PROVIDER_DEFAULT, 28 | # Functions 29 | CloseHandle, 30 | CreateEnvironmentBlock, 31 | DestroyEnvironmentBlock, 32 | GetCurrentProcessId, 33 | LogonUserW, 34 | ProcessIdToSessionId, 35 | ) 36 | 37 | 38 | def get_process_user(): 39 | """ 40 | Returns the user name of the user running the current process. 41 | """ 42 | return win32api.GetUserNameEx(win32con.NameSamCompatible) 43 | 44 | 45 | def get_current_process_session_id() -> int: 46 | """ 47 | Finds the Session ID of the current process, and returns it. 48 | """ 49 | proc_id = GetCurrentProcessId() 50 | session_id = DWORD(0) 51 | # Ignore the return value; will only fail if given a bad 52 | # process id, and that's clearly impossible here. 53 | ProcessIdToSessionId(proc_id, byref(session_id)) 54 | return session_id.value 55 | 56 | 57 | def logon_user(username: str, password: str) -> HANDLE: 58 | """ 59 | Attempt to logon as the given username & password. 60 | Return a HANDLE to a logon_token. 61 | 62 | Note: 63 | The caller *MUST* call CloseHandle on the returned value when done with it. 64 | Handles are not automatically garbage collected. 65 | 66 | Raises: 67 | OSError - If an error is encountered. 68 | """ 69 | hToken = HANDLE(0) 70 | if not LogonUserW( 71 | username, 72 | None, # TODO - domain handling?? 73 | password, 74 | LOGON32_LOGON_INTERACTIVE, 75 | LOGON32_PROVIDER_DEFAULT, 76 | byref(hToken), 77 | ): 78 | raise WinError() 79 | 80 | return hToken 81 | 82 | 83 | @contextmanager 84 | def logon_user_context(username: str, password: str) -> Generator[HANDLE, None, None]: 85 | """ 86 | A context manager wrapper around logon_user(). This will automatically 87 | Close the logon_token when the context manager is exited. 88 | """ 89 | hToken: Optional[HANDLE] = None 90 | try: 91 | hToken = logon_user(username, password) 92 | yield hToken 93 | finally: 94 | if hToken is not None and not CloseHandle(hToken): 95 | raise WinError() 96 | 97 | 98 | def environment_block_for_user(logon_token: HANDLE) -> c_void_p: 99 | """ 100 | Create an Environment Block for a given logon_token and return it. 101 | Per https://learn.microsoft.com/en-us/windows/win32/api/userenv/nf-userenv-createenvironmentblock 102 | "The environment block is an array of null-terminated Unicode strings. The list ends with two nulls (\0\0)." 103 | 104 | Returns: 105 | Pointer to the environment block 106 | 107 | Raises: 108 | OSError - If there is an error creating the block 109 | 110 | Notes: 111 | 1) The returned block *MUST* be deallocated with DestroyEnvironmentBlock when done 112 | 2) Destroying an environment block while it is in use (e.g. 
while the process it was passed 113 | to is still running) WILL result in a hard to debug crash in ntdll.dll. So, don't do that! 114 | """ 115 | environment = c_void_p() 116 | if not CreateEnvironmentBlock(byref(environment), logon_token, False): 117 | raise WinError() 118 | return environment 119 | 120 | 121 | @contextmanager 122 | def environment_block_for_user_context(logon_token: HANDLE) -> Generator[c_void_p, None, None]: 123 | """As environment_block_for_user, but as a context manager. This will automatically 124 | Destroy the environment block when exiting the context. 125 | """ 126 | lp_environment: Optional[c_void_p] = None 127 | try: 128 | lp_environment = environment_block_for_user(logon_token) 129 | yield lp_environment 130 | finally: 131 | if lp_environment is not None and not DestroyEnvironmentBlock(lp_environment): 132 | raise WinError() 133 | 134 | 135 | def environment_block_to_dict(block: c_void_p) -> dict[str, str]: 136 | """Converts an environment block as returned from CreateEnvironmentBlock to a Python dict of key/value strings. 137 | 138 | An environment block is a void pointer. The pointer points to a sequence of strings. Each string is terminated with a 139 | null character. The final string is terminated by an additional null character. 140 | """ 141 | assert block.value is not None 142 | w_char_size = sizeof(c_wchar) 143 | cur: int = block.value 144 | key_val_str = cast(cur, c_wchar_p).value 145 | env: dict[str, str] = {} 146 | while key_val_str: 147 | key, val = key_val_str.split("=", maxsplit=1) 148 | env[key] = val 149 | # Advance pointer by string length plus a null character 150 | cur += (len(key_val_str) + 1) * w_char_size 151 | key_val_str = cast(cur, c_wchar_p).value 152 | 153 | return env 154 | 155 | 156 | def environment_block_from_dict(env: dict[str, str]) -> c_wchar_p: 157 | """Converts a Python dictionary representation of an environment into a character buffer as expected by the 158 | lpEnvironment argument to the CreateProcess* family of win32 functions. 159 | 160 | Note: The returned c_char_p is pointing to the internal contents of an immutable python string; that is 161 | to say that it will be garbage collected, and the caller need not worry about deallocating it. 162 | """ 163 | # Create a string with null-character delimiters between each "key=value" string 164 | null_delimited = "\0".join(f"{key}={value}" for key, value in env.items()) 165 | # Add a final null-terminator character 166 | env_block_str = null_delimited + "\0" 167 | 168 | return c_wchar_p(env_block_str) 169 | -------------------------------------------------------------------------------- /src/openjd/sessions/_win32/_locate_executable.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
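# A minimal sketch of how the environment-block and logon helpers in _helpers.py can be combined
# to inspect another user's default environment (illustrative; the credentials are hypothetical,
# and this only works on Windows):
#
#     from openjd.sessions._win32._helpers import (
#         environment_block_for_user_context,
#         environment_block_to_dict,
#         logon_user_context,
#     )
#
#     with logon_user_context("jobuser", "hunter2") as token:        # hypothetical credentials
#         with environment_block_for_user_context(token) as block:   # block is destroyed on exit
#             env = environment_block_to_dict(block)
#     print(env.get("USERPROFILE"))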
2 | 3 | import os 4 | import sys 5 | import shutil 6 | from logging import INFO, getLogger 7 | from logging.handlers import QueueHandler 8 | from pathlib import Path 9 | from queue import Empty, SimpleQueue 10 | from threading import Lock 11 | from typing import Optional, Sequence 12 | 13 | from .._session_user import SessionUser 14 | from .._subprocess import LoggingSubprocess 15 | from .._logging import LoggerAdapter 16 | 17 | _internal_logger_lock = Lock() 18 | _internal_logger = getLogger("openjd_sessions_runner_base_internal_logger") 19 | _internal_logger_adapter = LoggerAdapter(_internal_logger, extra=dict()) 20 | _internal_logger.setLevel(INFO) 21 | _internal_logger.propagate = False 22 | _internal_logger_queue: SimpleQueue = SimpleQueue() 23 | _internal_logger.addHandler(QueueHandler(_internal_logger_queue)) 24 | 25 | 26 | def locate_windows_executable( 27 | args: Sequence[str], 28 | user: Optional[SessionUser], 29 | os_env_vars: Optional[dict[str, Optional[str]]], 30 | working_dir: str, 31 | ) -> Sequence[str]: 32 | cmd_path = Path(args[0]) 33 | if cmd_path.is_absolute(): 34 | # If it's an absolute path (e.g. C:\Foo\Bar.exe or C:\Foo\Bar) then we just return 35 | # and leave it up to the OS to resolve the executable's extention. 36 | 37 | # TODO: Do we actually still want to do the find as a check to see if the command exists & 38 | # is executable? This would catch stuff like 'c:\Foo\test.ps1' as a command (which fails) 39 | return args 40 | 41 | return_args = list(args) 42 | if user is None: 43 | return_args[0] = _locate_for_same_user(cmd_path, os_env_vars, working_dir) 44 | else: 45 | return_args[0] = _locate_for_other_user(cmd_path, os_env_vars, working_dir, user) 46 | return return_args 47 | 48 | 49 | def _get_path_var_for_shutil_which( 50 | os_env_vars: Optional[dict[str, Optional[str]]], working_dir: str 51 | ): 52 | path_var: Optional[str] = None 53 | if os_env_vars: 54 | env_var_keys = {k.lower(): k for k in os_env_vars} 55 | path_var = os_env_vars.get(env_var_keys["path"]) if "path" in env_var_keys else None 56 | if path_var is None: 57 | path_var = os.environ.get("PATH", "") 58 | path_var = "%s;%s" % (working_dir, path_var) 59 | return path_var 60 | 61 | 62 | def _locate_for_same_user( 63 | command: Path, os_env_vars: Optional[dict[str, Optional[str]]], working_dir: str 64 | ) -> str: 65 | path_var: Optional[str] = _get_path_var_for_shutil_which(os_env_vars, working_dir) 66 | exe = str(shutil.which(str(command), path=path_var)) 67 | if not exe: 68 | raise RuntimeError("Could not find executable file: %s" % command) 69 | return exe 70 | 71 | 72 | def _locate_for_other_user( 73 | command: Path, 74 | os_env_vars: Optional[dict[str, Optional[str]]], 75 | working_dir: str, 76 | user: SessionUser, 77 | ) -> str: # pragma: nocover 78 | # Running as a potentially different user, so it's possible that 79 | # this process doesn't have read access to the executable file's location. 80 | # Thus, we need to rely on running a subprocess as the user to be able 81 | # to find the executable. 82 | 83 | if len(command.parts) > 1: 84 | # Windows cannot find executables by relative location 85 | # i.e. where "dir\test.bat" 86 | # 87 | # Even if that worked, we'd have to prepend the relative part of the command 88 | # to the path and then search for only the command.name. But, we don't generally 89 | # have the user's PATH env var value. 90 | # 91 | # So, for both of those reasons we just return the command and let the action fail out 92 | # naturally. 
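        # For example, an args[0] of r"tools\render.bat" is returned from here unchanged rather
        # than being resolved to an absolute path.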
93 | return str(command) 94 | 95 | # Prevent issues that might arise by having multiple Actions trying to start up 96 | # concurrently -- grab a lock. 97 | with _internal_logger_lock: 98 | # Drain the message queue to ensure nothing remains from previous runs. 99 | try: 100 | while True: 101 | _internal_logger_queue.get(block=False) 102 | except Empty: 103 | pass # Will happen when the queue is fully empty 104 | 105 | # When running in a service context, we want to call the non-service Python binary 106 | sys_executable = sys.executable.lower().replace("pythonservice.exe", "python.exe") 107 | 108 | # In the subprocess code, we avoid exit code 1 as that is returned if a Python exception is thrown. 109 | exit_code_success = 2 110 | exit_code_could_not_find_exe = 3 111 | 112 | path_var: Optional[str] = _get_path_var_for_shutil_which(os_env_vars, working_dir) 113 | process = LoggingSubprocess( 114 | logger=_internal_logger_adapter, 115 | args=[ 116 | sys_executable, 117 | "-c", 118 | # Command injection here is possible, but it's irrelevant. The command is running 119 | # as the given user. No need for an attacker to be fancy here, they could just run 120 | # the desired attack command directly in the job template. 121 | "import shutil, sys, pathlib\n" 122 | + f"cmd = shutil.which({str(command)!r}, path={path_var!r})\n" 123 | + "if cmd:\n" 124 | + " print(str(pathlib.Path(cmd).absolute()))\n" 125 | + f" sys.exit({exit_code_success})\n" 126 | + f"sys.exit({exit_code_could_not_find_exe})\n", 127 | ], 128 | user=user, 129 | os_env_vars=os_env_vars, 130 | working_dir=str(working_dir), 131 | ) 132 | process.run() # blocking call 133 | exit_code = process.exit_code 134 | 135 | # We're seeing random errors when trying to run an Action's command immediately after this 136 | # outside of Session 0; theory is that maybe this has something to do with running two 137 | # CreateProcessWithLogonW calls back-to-back with little time inbetween. So, explicitly 138 | # delete the process object to try to force some cleanup of handles that maybe help the 139 | # profile get unloaded. (this seems like it might be doing the trick) 140 | # Error: 141 | # [WinError 1018] Illegal operation attempted on a registry key that has been marked for deletion 142 | del process 143 | 144 | if exit_code == exit_code_could_not_find_exe: 145 | raise RuntimeError(f"Could not find executable file: {command}") 146 | 147 | # Parse the output 148 | try: 149 | while True: 150 | record = _internal_logger_queue.get(block=False) 151 | message = record.getMessage() 152 | if "Output:" in message: 153 | break 154 | 155 | if exit_code == exit_code_success: 156 | exe_record = _internal_logger_queue.get(block=False) 157 | # The line of output with the result of 'shutil.which' is the location of the command 158 | return exe_record.getMessage() 159 | except Empty: 160 | raise RuntimeError( 161 | f"Could not run Python as user {user.user} to find executable {command} in PATH.\n" 162 | + f"The host configuration must allow users to run {sys_executable}." 163 | ) 164 | 165 | # Collect the error output from the subprocess 166 | error_messages = [] 167 | try: 168 | while True: 169 | record = _internal_logger_queue.get(block=False) 170 | error_messages.append(record.getMessage()) 171 | except Empty: 172 | pass 173 | 174 | # Something went wrong in launching sys_executable. 175 | # Because this scenario may be difficult to diagnose, we include more context. 
176 | raise RuntimeError( 177 | f"Could not run Python as user {user.user} to find executable {command} in PATH.\n" 178 | + f"The host configuration must allow users to run {sys_executable}.\n\nError output:\n" 179 | + "\n".join(error_messages) 180 | ) 181 | -------------------------------------------------------------------------------- /src/openjd/sessions/_win32/_popen_as_user.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import os 4 | import sys 5 | 6 | # This assertion short-circuits mypy from type checking this module on platforms other than Windows 7 | # https://mypy.readthedocs.io/en/stable/common_issues.html#python-version-and-system-platform-checks 8 | assert sys.platform == "win32" 9 | 10 | from typing import Any, Optional, cast 11 | import ctypes 12 | from ctypes.wintypes import HANDLE 13 | from subprocess import list2cmdline, Popen 14 | from subprocess import Handle # type: ignore # linter doesn't know it exists 15 | import platform 16 | from ._api import ( 17 | # Constants 18 | LOGON_WITH_PROFILE, 19 | PROC_THREAD_ATTRIBUTE_HANDLE_LIST, 20 | STARTF_USESHOWWINDOW, 21 | STARTF_USESTDHANDLES, 22 | SW_HIDE, 23 | # Structures 24 | PROCESS_INFORMATION, 25 | STARTUPINFO, 26 | STARTUPINFOEX, 27 | # Functions 28 | CloseHandle, 29 | CreateProcessAsUserW, 30 | CreateProcessWithLogonW, 31 | DestroyEnvironmentBlock, 32 | UpdateProcThreadAttribute, 33 | ) 34 | from ._helpers import ( 35 | environment_block_for_user, 36 | # environment_block_for_user_context, 37 | environment_block_from_dict, 38 | environment_block_to_dict, 39 | logon_user_context, 40 | ) 41 | from .._session_user import WindowsSessionUser 42 | 43 | if platform.python_implementation() != "CPython": 44 | raise RuntimeError( 45 | f"Not compatible with the {platform.python_implementation} of Python. Please use CPython." 46 | ) 47 | 48 | CREATE_UNICODE_ENVIRONMENT = 0x00000400 49 | EXTENDED_STARTUPINFO_PRESENT = 0x00080000 50 | 51 | 52 | def inherit_handles(startup_info: STARTUPINFOEX, handles: tuple[int]) -> ctypes.Array: 53 | """Set the given 'startup_info' to have the subprocess inherit the given handles, and only the 54 | given handles. 55 | 56 | Returns: 57 | - The allocated list of handles. Per the Win32 APIs, you must ensure that this buffer is 58 | only deallocated *after* the attribute list in the startup_info has been deallocated. 59 | """ 60 | handles_list = (HANDLE * len(handles))() 61 | for i, h in enumerate(handles): 62 | handles_list[i] = h 63 | if not UpdateProcThreadAttribute( 64 | startup_info.lpAttributeList, 65 | 0, # reserved and must be 0 66 | PROC_THREAD_ATTRIBUTE_HANDLE_LIST, 67 | ctypes.byref(handles_list), 68 | ctypes.sizeof(handles_list), 69 | None, # reserved and must be null 70 | None, # reserved and must be null 71 | ): 72 | raise ctypes.WinError() 73 | return handles_list 74 | 75 | 76 | class PopenWindowsAsUser(Popen): 77 | """Class to run a process as another user on Windows. 78 | Derived from Popen, it defines the _execute_child() method to call CreateProcessWithLogonW. 
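
    Example (illustrative; the user name, password, and command are hypothetical):
        user = WindowsSessionUser("jobuser", password="hunter2")
        proc = PopenWindowsAsUser(user, ["whoami.exe"])
        proc.wait()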
79 | """ 80 | 81 | def __init__(self, user: WindowsSessionUser, *args: Any, **kwargs: Any): 82 | """ 83 | Arguments: 84 | username (str): Name of user to run subprocess as 85 | password (str): Password for username 86 | args (Any): Popen constructor args 87 | kwargs (Any): Popen constructor kwargs 88 | https://docs.python.org/3/library/subprocess.html#popen-constructor 89 | """ 90 | self.user = user 91 | super(PopenWindowsAsUser, self).__init__(*args, **kwargs) 92 | 93 | def _execute_child( 94 | self, 95 | args, 96 | executable, 97 | preexec_fn, 98 | close_fds, 99 | pass_fds, 100 | cwd, 101 | env, 102 | startupinfo, 103 | creationflags, 104 | shell, 105 | p2cread, 106 | p2cwrite, 107 | c2pread, 108 | c2pwrite, 109 | errread, 110 | errwrite, 111 | restore_signals, 112 | start_new_session, 113 | *additional_args, 114 | **kwargs, 115 | ): 116 | """Execute program (MS Windows version). 117 | Calls CreateProcessWithLogonW to run a process as another user. 118 | """ 119 | 120 | assert not pass_fds, "pass_fds not supported on Windows." 121 | 122 | commandline = args if isinstance(args, str) else list2cmdline(args) 123 | # CreateProcess* may modify the commandline, so copy it to a mutable buffer 124 | cmdline = ctypes.create_unicode_buffer(commandline) 125 | 126 | if executable is not None: 127 | executable = os.fsdecode(executable) 128 | 129 | if cwd is not None: 130 | cwd = os.fsdecode(cwd) 131 | 132 | # Initialize structures 133 | si = STARTUPINFO() 134 | si.cb = ctypes.sizeof(STARTUPINFO) 135 | pi = PROCESS_INFORMATION() 136 | 137 | use_std_handles = -1 not in (p2cread, c2pwrite, errwrite) 138 | if use_std_handles: 139 | si.hStdInput = int(p2cread) 140 | si.hStdOutput = int(c2pwrite) 141 | si.hStdError = int(errwrite) 142 | si.dwFlags |= STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW 143 | # Ensure that the console window is hidden 144 | si.wShowWindow = SW_HIDE 145 | 146 | sys.audit("subprocess.Popen", executable, args, cwd, env, self.user.user) 147 | 148 | def _merge_environment( 149 | user_env: ctypes.c_void_p, env: dict[str, Optional[str]] 150 | ) -> ctypes.c_wchar_p: 151 | user_env_dict = cast(dict[str, Optional[str]], environment_block_to_dict(user_env)) 152 | # All environment keys are converted to uppercase. This is because while Windows environment variables are case insensitive, the win32 api 153 | # allows you to store multiple keys of mixed case, for example the win32 api would allow you 154 | # to set both "PATH" and "Path" as environment variable keys. This leads to undefined behaviour when calling one of the keys. 
155 | user_env_dict = {k.upper(): v for k, v in user_env_dict.items()} 156 | upper_cased_env = {k.upper(): v for k, v in env.items()} 157 | user_env_dict.update(**upper_cased_env) 158 | result = {k: v for k, v in user_env_dict.items() if v is not None} 159 | return environment_block_from_dict(result) 160 | 161 | env_ptr = ctypes.c_void_p(0) 162 | try: 163 | if self.user.password is not None: 164 | with logon_user_context(self.user.user, self.user.password) as logon_token: 165 | env_ptr = environment_block_for_user(logon_token) 166 | if env: 167 | env_block = _merge_environment(env_ptr, env) 168 | else: 169 | env_block = env_ptr 170 | 171 | # https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createprocesswithlogonw 172 | if not CreateProcessWithLogonW( 173 | self.user.user, 174 | None, # TODO: Domains not yet supported 175 | self.user.password, 176 | LOGON_WITH_PROFILE, 177 | executable, 178 | cmdline, 179 | creationflags | CREATE_UNICODE_ENVIRONMENT, 180 | env_block, 181 | cwd, 182 | ctypes.byref(si), 183 | ctypes.byref(pi), 184 | ): 185 | # Raises: OSError 186 | raise ctypes.WinError() 187 | elif self.user.logon_token is not None: 188 | siex = STARTUPINFOEX() 189 | ctypes.memmove( 190 | ctypes.pointer(siex.StartupInfo), ctypes.pointer(si), ctypes.sizeof(STARTUPINFO) 191 | ) 192 | siex.StartupInfo.cb = ctypes.sizeof(STARTUPINFOEX) 193 | creationflags |= EXTENDED_STARTUPINFO_PRESENT 194 | 195 | handles_list: Optional[ctypes.Array] = None 196 | handles_to_inherit = tuple(int(h) for h in (p2cread, c2pwrite, errwrite) if h != -1) 197 | try: 198 | # Allocate the lpAttributeList array of the STARTUPINFOEX structure so that it 199 | # has sufficient space for a single attribute. We only have a single attribute that 200 | # we're setting -- namely a PROC_THREAD_ATTRIBUTE_HANDLE_LIST that itself contains a list of handles -- 201 | # so this is sufficient. 202 | siex.allocate_attribute_list(1) 203 | 204 | # Note: We must ensure that 'handles_list' must persist until the 205 | # attribute list is destroyed using DeleteProcThreadAttributeList. We do this by holding on 206 | # to a reference to it until after the finally block of this try. 207 | handles_list = inherit_handles( # noqa: F841 # ignore: assigned but not used 208 | siex, handles_to_inherit 209 | ) 210 | 211 | # From https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessasuserw 212 | # If the lpEnvironment parameter is NULL, the new process inherits the environment of the calling process. 213 | # CreateProcessAsUser does not automatically modify the environment block to include environment variables specific to 214 | # the user represented by hToken. For example, the USERNAME and USERDOMAIN variables are inherited from the calling 215 | # process if lpEnvironment is NULL. It is your responsibility to prepare the environment block for the new process and 216 | # specify it in lpEnvironment. 
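                # That is, CreateProcessAsUserW will not derive the environment from the logon
                # token on its own, so we explicitly build the block from the token below.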
217 | 218 | env_ptr = environment_block_for_user(self.user.logon_token) 219 | if env: 220 | env_block = _merge_environment(env_ptr, env) 221 | else: 222 | env_block = env_ptr 223 | 224 | if not CreateProcessAsUserW( 225 | self.user.logon_token, 226 | executable, 227 | cmdline, 228 | None, 229 | None, 230 | True, 231 | creationflags | CREATE_UNICODE_ENVIRONMENT, 232 | env_block, 233 | cwd, 234 | ctypes.byref(siex), 235 | ctypes.byref(pi), 236 | ): 237 | # Raises: OSError 238 | raise ctypes.WinError() 239 | finally: 240 | siex.deallocate_attribute_list() 241 | else: 242 | raise NotImplementedError("Unexpected case for WindowsSessionUser properties") 243 | finally: 244 | if env_ptr.value is not None: 245 | DestroyEnvironmentBlock(env_ptr) 246 | # Child is launched. Close the parent's copy of those pipe 247 | # handles that only the child should have open. 248 | self._close_pipe_fds(p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) 249 | 250 | # Retain the process handle, but close the thread handle 251 | CloseHandle(pi.hThread) 252 | 253 | self._child_created = True 254 | self.pid = pi.dwProcessId 255 | self._handle = Handle(pi.hProcess) 256 | -------------------------------------------------------------------------------- /src/openjd/sessions/_windows_permission_helper.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | from ._os_checker import is_windows 4 | 5 | if is_windows(): 6 | import win32security 7 | import ntsecuritycon 8 | 9 | 10 | class WindowsPermissionHelper: 11 | """ 12 | This class contains helper methods to set permissions for files and directories on Windows. 13 | """ 14 | 15 | @staticmethod 16 | def set_permissions( 17 | file_path: str, 18 | *, 19 | principals_full_control: list[str] = [], 20 | principals_modify_access: list[str] = [], 21 | ): 22 | """ 23 | Grants access control over the object at file_path. 24 | Sets flags so both child files and directories inherit these permissions. 25 | 26 | Arguments: 27 | file_path (str): The path to the file or directory. 28 | principals_full_control (List[str]): The names of the principals to permit Full Control access. 29 | principals_modify_access (List[str]): The names of the principals to permit Modify access 30 | 31 | Raises: 32 | RuntimeError if there is a problem modifying the security attributes. 
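
        Example (illustrative; hypothetical path and principal names):
            WindowsPermissionHelper.set_permissions(
                r"C:\ProgramData\Amazon\OpenJD\session-dir",
                principals_full_control=["hostAgentUser"],
                principals_modify_access=["jobRunAsUser"],
            )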
33 | """ 34 | try: 35 | # We don't want to propagate existing permissions, so create a new DACL 36 | dacl = win32security.ACL() 37 | for principal in principals_full_control: 38 | user_or_group_sid, _, _ = win32security.LookupAccountName(None, principal) 39 | 40 | # Add an ACE to the DACL giving the principal full control and enabling inheritance of the ACE 41 | dacl.AddAccessAllowedAceEx( 42 | win32security.ACL_REVISION, 43 | ntsecuritycon.OBJECT_INHERIT_ACE | ntsecuritycon.CONTAINER_INHERIT_ACE, 44 | ntsecuritycon.FILE_ALL_ACCESS, # = 0x1F01FF 45 | user_or_group_sid, 46 | ) 47 | for principal in principals_modify_access: 48 | user_or_group_sid, _, _ = win32security.LookupAccountName(None, principal) 49 | 50 | # Add an ACE to the DACL giving the principal full control and enabling inheritance of the ACE 51 | dacl.AddAccessAllowedAceEx( 52 | win32security.ACL_REVISION, 53 | ntsecuritycon.OBJECT_INHERIT_ACE | ntsecuritycon.CONTAINER_INHERIT_ACE, 54 | # Values of these constants defined in winnt.h 55 | # Constant value after ORs: 0x1301FF 56 | # Delta from FILE_ALL_ACCESS is 0xC0000 = WRITE_DAC(0x40000) | WRITE_OWNER(0x80000) 57 | ntsecuritycon.FILE_GENERIC_READ # = 0x120089 58 | | ntsecuritycon.FILE_GENERIC_WRITE # = 0x120116 59 | | ntsecuritycon.FILE_GENERIC_EXECUTE # = 0x1200A0 60 | | ntsecuritycon.DELETE # = 0x10000 61 | | ntsecuritycon.FILE_DELETE_CHILD, # = 0x0040 62 | user_or_group_sid, 63 | ) 64 | 65 | # Get the security descriptor of the tempdir 66 | sd = win32security.GetFileSecurity( 67 | str(file_path), win32security.DACL_SECURITY_INFORMATION 68 | ) 69 | 70 | # Set the security descriptor's DACL to the newly-created DACL 71 | # Arguments: 72 | # 1. bDaclPresent = 1: Indicates that the DACL is present in the security descriptor. 73 | # If set to 0, this method ignores the provided DACL and allows access to all principals. 74 | # 2. dacl: The discretionary access control list (DACL) to be set in the security descriptor. 75 | # 3. bDaclDefaulted = 0: Indicates the DACL was provided and not defaulted. 76 | # If set to 1, indicates the DACL was defaulted, as in the case of permissions inherited from a parent directory. 77 | sd.SetSecurityDescriptorDacl(1, dacl, 0) 78 | 79 | # Set the security descriptor to the tempdir 80 | # Note: This completely overwrites the DACL; so, if we don't provide a permission above then 81 | # the DACL doesn't have it. 82 | win32security.SetFileSecurity( 83 | str(file_path), win32security.DACL_SECURITY_INFORMATION, sd 84 | ) 85 | except Exception as err: 86 | raise RuntimeError( 87 | f"Could not change permissions of directory '{str(dir)}' (error: {str(err)})" 88 | ) 89 | -------------------------------------------------------------------------------- /src/openjd/sessions/_windows_process_killer.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | import time 3 | 4 | from ._logging import LogExtraInfo, LogContent 5 | from psutil import NoSuchProcess, Process, wait_procs, STATUS_STOPPED 6 | from typing import List 7 | 8 | 9 | def _suspend_process(logger, process: Process) -> bool: 10 | """ 11 | Suspend a given process. 12 | 13 | Parameters: 14 | - logger: The logging instance for logging. 15 | - process: The process to be suspended. 16 | 17 | Returns: 18 | - True if the process was successfully suspended, False otherwise. 
19 | """ 20 | try: 21 | process.suspend() 22 | for _ in range(10): 23 | if process.status() == STATUS_STOPPED: 24 | return True 25 | # Wait for the process to be suspended 26 | time.sleep(0.1) 27 | except Exception as e: 28 | logger.error( 29 | f"Failed to suspend process {process.pid}: {e}", 30 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 31 | ) 32 | return False 33 | 34 | return False 35 | 36 | 37 | def _suspend_process_tree( 38 | logger, 39 | process: Process, 40 | all_processes: List[Process], 41 | procs_cannot_suspend: List[Process], 42 | suspend_subprocesses: bool, 43 | ) -> None: 44 | """ 45 | Recursively suspend the process tree and its children in pre-order. 46 | 47 | Parameters: 48 | - logger: The logging instance for logging. 49 | - process: The process to start suspension from. 50 | - all_processes: List of all processes we are trying to suspend. 51 | - procs_cannot_suspend: List of processes that couldn't be suspended. 52 | - suspend_subprocesses: Control if the child processes needed to be suspended 53 | """ 54 | # Attempt to suspend the current process. 55 | if not _suspend_process(logger, process): 56 | procs_cannot_suspend.append(process) 57 | 58 | all_processes.append(process) 59 | 60 | if not suspend_subprocesses: 61 | return 62 | 63 | # Recursively suspend child processes. 64 | for child in process.children(): 65 | _suspend_process_tree( 66 | logger, child, all_processes, procs_cannot_suspend, suspend_subprocesses 67 | ) 68 | 69 | 70 | def _kill_processes(logger, process_list: List[Process]) -> List[Process]: 71 | """ 72 | Kill all processes in the given list. 73 | 74 | Parameters: 75 | - logger: The logging instance for logging. 76 | - process_list: List of processes to be killed. 77 | 78 | Returns: 79 | - List of processes that are still alive after attempting to kill. 80 | """ 81 | 82 | for process in process_list: 83 | try: 84 | logger.info( 85 | f"Killing process with id {process.pid}.", 86 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 87 | ) 88 | process.kill() 89 | except NoSuchProcess: 90 | logger.info( 91 | f"No process with id {process.pid} for termination", 92 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 93 | ) 94 | except Exception as e: 95 | logger.error( 96 | f"Failed to kill process {process.pid}: {e}", 97 | extra=LogExtraInfo( 98 | openjd_log_content=LogContent.PROCESS_CONTROL | LogContent.EXCEPTION_INFO 99 | ), 100 | ) 101 | 102 | # Wait for the processes to be terminated 103 | _, alive = wait_procs(process_list, timeout=5) 104 | return alive 105 | 106 | 107 | def kill_windows_process_tree(logger, root_pid, signal_subprocesses=True) -> None: 108 | """ 109 | Kills the windows process tree starting from the provided root PID. 110 | The termination ordering will be from leafs to root. 111 | 112 | Parameters: 113 | - logger: The logging instance to record activities. 114 | - root_pid: The root process ID. 115 | - signal_subprocesses: Flag to determine if subprocesses should be signaled. 
116 | """ 117 | 118 | try: 119 | parent_process = Process(root_pid) 120 | except NoSuchProcess: 121 | logger.error( 122 | f"Root process with PID {root_pid} not found.", 123 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 124 | ) 125 | return 126 | 127 | procs_failed_to_suspend: List[Process] = [] 128 | processes_to_be_killed: List[Process] = [] 129 | _suspend_process_tree( 130 | logger, parent_process, processes_to_be_killed, procs_failed_to_suspend, signal_subprocesses 131 | ) 132 | if procs_failed_to_suspend: 133 | logger.warning( 134 | f"Following processes cannot be suspended. Processes IDs: {[proc.pid for proc in procs_failed_to_suspend]}", 135 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 136 | ) 137 | 138 | # Ensure we kill child processes first 139 | processes_to_be_killed.reverse() 140 | alive_processes = _kill_processes(logger, processes_to_be_killed) 141 | 142 | if alive_processes: 143 | logger.warning( 144 | f"Failed to kill following process(es): {[p.pid for p in alive_processes]}. Retrying...", 145 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 146 | ) 147 | alive_processes = _kill_processes(logger, processes_to_be_killed) 148 | if alive_processes: 149 | logger.warning( 150 | f"Still failed to kill the following process(es): {[p.pid for p in alive_processes]}. Please handle manually.", 151 | extra=LogExtraInfo(openjd_log_content=LogContent.PROCESS_CONTROL), 152 | ) 153 | -------------------------------------------------------------------------------- /src/openjd/sessions/py.typed: -------------------------------------------------------------------------------- 1 | # Marker file that indicates this package supports typing 2 | -------------------------------------------------------------------------------- /test/openjd/sessions/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | -------------------------------------------------------------------------------- /test/openjd/sessions/support_files/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | -------------------------------------------------------------------------------- /test/openjd/sessions/support_files/app_20s_run.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # A simple app for use in testing of the LoggingSubprocess. 4 | # Prints out an increasing series of integers (0, 1, 2, ...) 
5 | # every second for 20 seconds 6 | # 7 | # Hook SIGTERM (posix) or CTRL_BREAK_EVENT (windows) and print "Trapped" 8 | # and exit if we get the signal 9 | 10 | import signal 11 | import sys 12 | import time 13 | 14 | 15 | def hook(handle, frame): 16 | print("Trapped") 17 | sys.stdout.flush() 18 | sys.exit(1) 19 | 20 | 21 | if sys.platform.startswith("win"): 22 | signal.signal(signal.SIGBREAK, hook) 23 | signal.signal(signal.SIGINT, hook) 24 | else: 25 | signal.signal(signal.SIGTERM, hook) 26 | 27 | for i in range(0, 20): 28 | print(f"Log from test {str(i)}") 29 | sys.stdout.flush() 30 | time.sleep(1) 31 | -------------------------------------------------------------------------------- /test/openjd/sessions/support_files/app_20s_run.sh: 1 | #!/bin/sh 2 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | PYTHON="$1" 5 | 6 | SCRIPT=$(dirname $0)/app_20s_run.py 7 | 8 | "$PYTHON" "$SCRIPT" 9 | exit $? -------------------------------------------------------------------------------- /test/openjd/sessions/support_files/app_20s_run_ignore_signal.py: 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | # As app_20s_run.py, except it does not exit when it gets a SIGTERM/SIGBREAK 4 | 5 | import signal 6 | import sys 7 | import time 8 | 9 | 10 | def hook(handle, frame): 11 | print("Trapped") 12 | sys.stdout.flush() 13 | 14 | 15 | if sys.platform.startswith("win"): 16 | signal.signal(signal.SIGBREAK, hook) 17 | signal.signal(signal.SIGINT, hook) 18 | else: 19 | signal.signal(signal.SIGTERM, hook) 20 | 21 | for i in range(0, 20): 22 | print(i) 23 | sys.stdout.flush() 24 | time.sleep(1) 25 | -------------------------------------------------------------------------------- /test/openjd/sessions/support_files/output_signal_sender.c: 1 | // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | // This is a minimal C program that sleeps until it receives a SIGTERM signal 4 | // and outputs the process ID of the sender. 5 | 6 | #include <stdbool.h> 7 | #include <stdio.h> 8 | #include <signal.h> 9 | #include <errno.h> 10 | #include <unistd.h> 11 | 12 | static bool received_signal = false; 13 | 14 | static void signal_handler(int sig, siginfo_t *siginfo, void *context) { 15 | // Output PID of the sending process 16 | int pid_of_sending_process = (int) siginfo->si_pid; 17 | printf("%d\n", pid_of_sending_process); 18 | 19 | // Tell main loop to exit 20 | received_signal = true; 21 | } 22 | 23 | int main(int argc, char *argv[]) { 24 | // register signal handler 25 | struct sigaction signal_action = {0}; 26 | signal_action.sa_sigaction = *signal_handler; 27 | // get details about the signal 28 | signal_action.sa_flags |= SA_SIGINFO; 29 | if(sigaction(SIGTERM, &signal_action, NULL) != 0) { 30 | printf("Could not register signal handler\n"); 31 | return errno; 32 | } 33 | 34 | // sleep until SIGTERM received 35 | while(!received_signal) { 36 | sleep(1); 37 | } 38 | 39 | return 0; 40 | } 41 | -------------------------------------------------------------------------------- /test/openjd/sessions/support_files/run_app_20s_run.py: 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | 3 | import sys 4 | import subprocess 5 | import time 6 | from pathlib import Path 7 | 8 | proc = subprocess.Popen( 9 | args=[ 10 | sys.executable, 11 | str(Path(__file__).parent / "app_20s_run.py"), 12 | ], 13 | stdout=subprocess.PIPE, 14 | stdin=subprocess.DEVNULL, 15 | stderr=subprocess.STDOUT, 16 | encoding="utf-8", 17 | ) 18 | 19 | if proc.stdout is not None: 20 | for line in iter(proc.stdout.readline, ""): 21 | line = line.rstrip("\n\r") 22 | print(line) 23 | sys.stdout.flush() 24 | 25 | for i in range(0, 20): 26 | print(f"Log from runner {str(i)}") 27 | sys.stdout.flush() 28 | time.sleep(1) 29 | -------------------------------------------------------------------------------- /test/openjd/sessions/test_importable.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | 4 | def test_openjd_importable(): 5 | import openjd # noqa: F401 6 | 7 | 8 | def test_importable(): 9 | import openjd.sessions # noqa: F401 10 | -------------------------------------------------------------------------------- /test/openjd/sessions/test_os_checker.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import unittest 4 | from enum import Enum 5 | from unittest.mock import patch 6 | from openjd.sessions._os_checker import is_posix, is_windows, check_os 7 | 8 | 9 | class OSName(str, Enum): 10 | POSIX = "posix" 11 | WINDOWS = "nt" 12 | 13 | 14 | class TestOSChecker(unittest.TestCase): 15 | @patch("openjd.sessions._os_checker.os") 16 | def test_is_posix(self, mock_os): 17 | mock_os.name = OSName.POSIX 18 | self.assertTrue(is_posix()) 19 | 20 | @patch("openjd.sessions._os_checker.os") 21 | def test_is_not_posix(self, mock_os): 22 | mock_os.name = OSName.WINDOWS 23 | self.assertFalse(is_posix()) 24 | 25 | @patch("openjd.sessions._os_checker.os") 26 | def test_is_windows(self, mock_os): 27 | mock_os.name = OSName.WINDOWS 28 | self.assertTrue(is_windows()) 29 | 30 | @patch("openjd.sessions._os_checker.os") 31 | def test_is_not_windows(self, mock_os): 32 | mock_os.name = OSName.POSIX 33 | self.assertFalse(is_windows()) 34 | 35 | @patch("openjd.sessions._os_checker.os") 36 | def test_check_os_posix(self, mock_os): 37 | mock_os.name = OSName.POSIX 38 | check_os() 39 | 40 | @patch("openjd.sessions._os_checker.os") 41 | def test_check_os_windows(self, mock_os): 42 | mock_os.name = OSName.WINDOWS 43 | check_os() 44 | 45 | @patch("openjd.sessions._os_checker.os") 46 | def test_check_os_unsupported(self, mock_os): 47 | mock_os.name = "unsupported_os" 48 | with self.assertRaises(NotImplementedError) as context: 49 | check_os() 50 | self.assertIn("os: unsupported_os is not supported yet.", str(context.exception)) 51 | -------------------------------------------------------------------------------- /test/openjd/sessions/test_redaction.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
2 | 3 | """Tests for the redaction functionality in the action filter.""" 4 | 5 | import logging 6 | import logging.handlers 7 | from queue import SimpleQueue 8 | 9 | from openjd.sessions._action_filter import ( 10 | ActionMonitoringFilter, 11 | ActionMessageKind, 12 | redact_openjd_redacted_env_requests, 13 | ) 14 | from openjd.model import RevisionExtensions, SpecificationRevision 15 | 16 | from .conftest import setup_action_filter_test 17 | 18 | 19 | def test_redact_openjd_redacted_env_requests(): 20 | """Test that redact_openjd_redacted_env_requests correctly redacts sensitive information in command strings.""" 21 | # Test command without redaction needed 22 | command = "echo hello world" 23 | assert redact_openjd_redacted_env_requests(command) == command 24 | 25 | # Test command with redacted env 26 | command = "python -c \"print('openjd_redacted_env: PASSWORD=secret123')\"" 27 | assert ( 28 | redact_openjd_redacted_env_requests(command) 29 | == "python -c \"print('openjd_redacted_env: ********" 30 | ) 31 | 32 | # Test command with multiple redacted env values 33 | command = ( 34 | 'echo "openjd_redacted_env: PASSWORD=secret123"; echo "openjd_redacted_env: API_KEY=abc123"' 35 | ) 36 | assert redact_openjd_redacted_env_requests(command) == 'echo "openjd_redacted_env: ********' 37 | 38 | 39 | def test_redaction_with_string_formatting(): 40 | """Test that redaction works correctly with string formatting.""" 41 | # Create a list to capture callback calls 42 | callback_calls = [] 43 | 44 | def callback(kind: ActionMessageKind, message: str, cancel: bool): 45 | callback_calls.append((kind, message, cancel)) 46 | 47 | # Create a RevisionExtensions with REDACTED_ENV_VARS enabled 48 | revision_extensions = RevisionExtensions( 49 | spec_rev=SpecificationRevision.v2023_09, supported_extensions=["REDACTED_ENV_VARS"] 50 | ) 51 | 52 | # Create the filter 53 | action_filter = ActionMonitoringFilter( 54 | session_id="test_session", callback=callback, revision_extensions=revision_extensions 55 | ) 56 | 57 | # Add a value to redact via redacted_env message 58 | record = logging.LogRecord( 59 | name="test", 60 | level=logging.INFO, 61 | pathname="", 62 | lineno=0, 63 | msg="openjd_redacted_env: PASSWORD=secret123", 64 | args=(), 65 | exc_info=None, 66 | ) 67 | record.session_id = "test_session" 68 | action_filter.filter(record) 69 | 70 | # Test string formatting with args 71 | record = logging.LogRecord( 72 | name="test", 73 | level=logging.INFO, 74 | msg="Command: %s", 75 | args=("echo secret123",), 76 | pathname="", 77 | lineno=0, 78 | exc_info=None, 79 | ) 80 | record.session_id = "test_session" 81 | action_filter.filter(record) 82 | 83 | # Verify redaction happened after string formatting 84 | assert record.msg == "Command: echo ********" 85 | assert not record.args # Args should be cleared after formatting 86 | 87 | # Test multiple args 88 | record = logging.LogRecord( 89 | name="test", 90 | level=logging.INFO, 91 | msg="First: %s, Second: %s", 92 | args=("secret123", "hello"), 93 | pathname="", 94 | lineno=0, 95 | exc_info=None, 96 | ) 97 | record.session_id = "test_session" 98 | action_filter.filter(record) 99 | 100 | assert record.msg == "First: ********, Second: hello" 101 | assert not record.args 102 | 103 | # Test with non-string args 104 | record = logging.LogRecord( 105 | name="test", 106 | level=logging.INFO, 107 | msg="Number: %d, Secret: %s", 108 | args=(42, "secret123"), 109 | pathname="", 110 | lineno=0, 111 | exc_info=None, 112 | ) 113 | record.session_id = "test_session" 114 | 
action_filter.filter(record) 115 | 116 | assert record.msg == "Number: 42, Secret: ********" 117 | assert not record.args 118 | 119 | 120 | class TestRedactionCore: 121 | """Tests for the core redaction functionality.""" 122 | 123 | def test_redaction_preserves_spaces( 124 | self, message_queue: SimpleQueue, queue_handler: logging.handlers.QueueHandler 125 | ) -> None: 126 | """Test that when redacting values in an f-string, spaces around the value are preserved.""" 127 | # Setup 128 | loga, _, _ = setup_action_filter_test( 129 | queue_handler=queue_handler, 130 | enabled_extensions=["REDACTED_ENV_VARS"], 131 | ) 132 | 133 | # Set up redaction 134 | loga.info("openjd_redacted_env: SECRETVAR=SECRETVAL") 135 | 136 | # Clear the queue of the setup messages 137 | while not message_queue.empty(): 138 | message_queue.get() 139 | 140 | # WHEN - Message with token 141 | loga.info("SECRETVAR is . SECRETVAL ;") 142 | 143 | # THEN - The spaces should be preserved in the redacted output 144 | assert message_queue.qsize() == 1 145 | log_message = message_queue.get(block=False).getMessage() 146 | assert "SECRETVAR is . ******** ;" in log_message # Spaces should be preserved 147 | assert "SECRETVAL" not in log_message 148 | 149 | def test_overlapping_redactions( 150 | self, message_queue: SimpleQueue, queue_handler: logging.handlers.QueueHandler 151 | ) -> None: 152 | """Test that overlapping redactions are handled correctly.""" 153 | # Setup 154 | loga, _, _ = setup_action_filter_test( 155 | queue_handler=queue_handler, 156 | enabled_extensions=["REDACTED_ENV_VARS"], 157 | ) 158 | 159 | # Test case 1: Overlapping redactions at boundary 160 | loga.info("openjd_redacted_env: KEY1=FOOOBAR") 161 | loga.info("openjd_redacted_env: KEY2=BARKEY") 162 | 163 | # Clear the queue of the setup messages 164 | while not message_queue.empty(): 165 | message_queue.get() 166 | 167 | # Log a message containing the overlapping string 168 | loga.info("The value is: FOOOBARKEY") 169 | 170 | # The entire overlapping string should be redacted 171 | assert message_queue.qsize() == 1 172 | log_message = message_queue.get(block=False).getMessage() 173 | assert "The value is: ********" in log_message 174 | assert "FOOOBARKEY" not in log_message 175 | 176 | # Test case 2: One redaction completely contained within another 177 | loga.info("openjd_redacted_env: KEY3=SUPERSECRETPASSWORD") 178 | loga.info("openjd_redacted_env: KEY4=SECRET") 179 | 180 | # Clear the queue of the setup messages 181 | while not message_queue.empty(): 182 | message_queue.get() 183 | 184 | # Log a message containing the nested redaction 185 | loga.info("The value is: SUPERSECRETPASSWORD") 186 | 187 | # The entire string should be redacted with a single redaction 188 | assert message_queue.qsize() == 1 189 | log_message = message_queue.get(block=False).getMessage() 190 | assert "The value is: ********" in log_message 191 | assert "SUPERSECRETPASSWORD" not in log_message 192 | -------------------------------------------------------------------------------- /test/openjd/sessions/test_session_user.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
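# These tests construct WindowsSessionUser objects directly. Cases that perform a real
# logon are xfailed unless the credentials referenced by conftest's WIN_USERNAME_ENV_VAR
# and WIN_PASS_ENV_VAR environment variables are available.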
2 | 3 | from openjd.sessions._session_user import WindowsSessionUser 4 | from openjd.sessions._session_user import BadCredentialsException 5 | from openjd.sessions._os_checker import is_windows 6 | 7 | from unittest.mock import patch 8 | 9 | import os 10 | import pytest 11 | 12 | from .conftest import ( 13 | WIN_SET_TEST_ENV_VARS_MESSAGE, 14 | WIN_USERNAME_ENV_VAR, 15 | WIN_PASS_ENV_VAR, 16 | has_windows_user, 17 | are_tests_in_windows_session_0, 18 | ) 19 | 20 | 21 | @pytest.mark.skipif(not is_windows(), reason="Windows-specific tests") 22 | class TestWindowsSessionUser: 23 | 24 | @pytest.mark.skipif( 25 | are_tests_in_windows_session_0(), 26 | reason="Cannot create a WindowsSessionUser with a password while in Session 0.", 27 | ) 28 | @pytest.mark.parametrize( 29 | "user", 30 | ["userA", "domain\\userA", "userA@example.domain"], 31 | ) 32 | @patch("openjd.sessions._session_user.WindowsSessionUser._validate_username_password") 33 | @patch( 34 | "openjd.sessions._session_user.WindowsSessionUser.is_process_user", 35 | return_value=False, 36 | ) 37 | def test_user_not_converted(self, mock_is_process_user, mock_validate_username, user): 38 | windows_session_user = WindowsSessionUser(user, password="password") 39 | 40 | assert windows_session_user.user == user 41 | 42 | @pytest.mark.skipif( 43 | are_tests_in_windows_session_0(), 44 | reason="Cannot create a WindowsSessionUser with a password while in Session 0.", 45 | ) 46 | @pytest.mark.xfail( 47 | not has_windows_user(), 48 | reason=WIN_SET_TEST_ENV_VARS_MESSAGE, 49 | ) 50 | def test_user_logon(self) -> None: 51 | # GIVEN 52 | user = os.environ.get(WIN_USERNAME_ENV_VAR) 53 | password = os.environ.get(WIN_PASS_ENV_VAR) 54 | if user is None or password is None: 55 | pytest.xfail(WIN_SET_TEST_ENV_VARS_MESSAGE) 56 | 57 | # WHEN 58 | WindowsSessionUser(user, password=password) 59 | 60 | # THEN 61 | # Should not have raised any exceptions 62 | 63 | def test_no_password_impersonation_throws_exception(self): 64 | with pytest.raises( 65 | RuntimeError, 66 | match="Must supply a password or logon token. User is not the process owner.", 67 | ): 68 | WindowsSessionUser("nonexistent_user") 69 | 70 | @pytest.mark.skipif( 71 | are_tests_in_windows_session_0(), 72 | reason="Cannot create a WindowsSessionUser with a password while in Session 0.", 73 | ) 74 | def test_incorrect_credential(self): 75 | with pytest.raises( 76 | BadCredentialsException, 77 | match="The username or password is incorrect.", 78 | ): 79 | WindowsSessionUser("nonexistent_user", password="abc") 80 | -------------------------------------------------------------------------------- /test/openjd/sessions/test_tempdir.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
2 | 3 | from tempfile import gettempdir 4 | import os 5 | import shutil 6 | import stat 7 | from pathlib import Path 8 | from subprocess import DEVNULL, run 9 | 10 | from openjd.sessions._os_checker import is_posix, is_windows 11 | from openjd.sessions._windows_permission_helper import WindowsPermissionHelper 12 | from utils.windows_acl_helper import ( 13 | MODIFY_READ_WRITE_MASK, 14 | FULL_CONTROL_MASK, 15 | get_aces_for_object, 16 | principal_has_access_to_object, 17 | ) 18 | 19 | if is_posix(): 20 | import grp 21 | import pwd 22 | 23 | if is_windows(): 24 | from openjd.sessions._win32._helpers import get_process_user # type: ignore 25 | 26 | import pytest 27 | from unittest.mock import patch 28 | 29 | from openjd.sessions import PosixSessionUser, WindowsSessionUser 30 | from openjd.sessions._tempdir import TempDir, custom_gettempdir 31 | 32 | from .conftest import ( 33 | has_posix_disjoint_user, 34 | has_posix_target_user, 35 | has_windows_user, 36 | WIN_SET_TEST_ENV_VARS_MESSAGE, 37 | POSIX_SET_TARGET_USER_ENV_VARS_MESSAGE, 38 | POSIX_SET_DISJOINT_USER_ENV_VARS_MESSAGE, 39 | ) 40 | 41 | 42 | @pytest.mark.skipif(not is_posix(), reason="Posix-specific tests") 43 | class TestTempDirPosix: 44 | def test_defaults(self) -> None: 45 | # GIVEN 46 | tmpdir = Path(os.path.join(gettempdir(), "OpenJD")).resolve() 47 | 48 | # WHEN 49 | result = TempDir() 50 | 51 | # THEN 52 | assert result.path.parent == tmpdir 53 | assert os.path.exists(result.path) 54 | 55 | statinfo = os.stat(result.path) 56 | assert statinfo.st_uid == os.getuid() # type: ignore 57 | assert statinfo.st_gid == os.getgid() # type: ignore 58 | 59 | os.rmdir(result.path) 60 | 61 | 62 | class TestTempDir: 63 | @pytest.mark.usefixtures("tmp_path") # Built-in fixture 64 | def test_given_dir(self, tmp_path: Path) -> None: 65 | # WHEN 66 | result = TempDir(dir=tmp_path) 67 | 68 | # THEN 69 | assert result.path.parent == tmp_path.resolve() 70 | assert os.path.exists(result.path) 71 | 72 | def test_given_prefix(self) -> None: 73 | # GIVEN 74 | tmpdir = Path(custom_gettempdir()) 75 | prefix = "testprefix" 76 | 77 | # WHEN 78 | result = TempDir(prefix=prefix) 79 | 80 | # THEN 81 | assert result.path.parent == tmpdir.resolve() 82 | assert result.path.name.startswith(prefix) 83 | assert os.path.exists(result.path) 84 | 85 | os.rmdir(result.path) 86 | 87 | def test_cleanup(self) -> None: 88 | # GIVEN 89 | tmpdir = TempDir() 90 | open(tmpdir.path / "file.txt", "w").close() 91 | 92 | # WHEN 93 | tmpdir.cleanup() 94 | 95 | # THEN 96 | assert not os.path.exists(tmpdir.path) 97 | 98 | def test_no_write_permission(self) -> None: 99 | # Test that we raise an exception if we don't have permission to create a directory 100 | # within the given directory. 101 | 102 | # GIVEN 103 | dir = Path(gettempdir()) / "a" / "very" / "unlikely" / "dir" / "to" / "exist" 104 | 105 | # WHEN 106 | with pytest.raises(RuntimeError): 107 | TempDir(dir=dir) 108 | 109 | 110 | @pytest.mark.skipif(not is_windows(), reason="Windows-specific tests") 111 | class TestTempDirWindows: 112 | 113 | @pytest.mark.xfail(not has_windows_user(), reason=WIN_SET_TEST_ENV_VARS_MESSAGE) 114 | @pytest.mark.usefixtures("windows_user") 115 | @patch("openjd.sessions.WindowsSessionUser.is_process_user", return_value=True) 116 | def test_windows_object_permissions(self, mock_user_match, windows_user: WindowsSessionUser): 117 | # Test that TempDir gives the given WindowsSessionUser Modify/R/W, but not Full Control 118 | # permissions on the created directory. 
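        # (MODIFY_READ_WRITE_MASK and FULL_CONTROL_MASK are the access-mask constants
        # imported from utils/windows_acl_helper.py at the top of this module.)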
119 | 120 | # GIVEN 121 | process_owner = get_process_user() 122 | if "\\" in process_owner: 123 | # Extract user from NETBIOS name 124 | process_owner = process_owner.split("\\")[1] 125 | elif "@" in process_owner: 126 | # Extract user from domain UPN 127 | process_owner = process_owner.split("@")[0] 128 | 129 | # WHEN 130 | tempdir = TempDir(user=windows_user) 131 | aces = get_aces_for_object(str(tempdir.path)) 132 | 133 | # THEN 134 | assert len(aces) == 2 # Only self & user 135 | assert aces[process_owner][0] == [FULL_CONTROL_MASK] # allowed 136 | assert aces[process_owner][1] == [] # denied 137 | assert aces[windows_user.user][0] == [MODIFY_READ_WRITE_MASK] # allowed 138 | assert aces[windows_user.user][1] == [] # denied 139 | 140 | @pytest.mark.xfail(not has_windows_user(), reason=WIN_SET_TEST_ENV_VARS_MESSAGE) 141 | @pytest.mark.usefixtures("windows_user") 142 | @patch("openjd.sessions.WindowsSessionUser.is_process_user", return_value=True) 143 | def test_windows_permissions_inherited(self, mock_user_match, windows_user: WindowsSessionUser): 144 | # WHEN 145 | tempdir = TempDir(user=windows_user) 146 | os.mkdir(tempdir.path / "child_dir") 147 | os.mkdir(tempdir.path / "child_dir" / "grandchild_dir") 148 | open(tempdir.path / "child_file", "a").close() 149 | open(tempdir.path / "child_dir" / "grandchild_file", "a").close() 150 | 151 | # THEN 152 | assert principal_has_access_to_object( 153 | str(tempdir.path), windows_user.user, MODIFY_READ_WRITE_MASK 154 | ) 155 | assert principal_has_access_to_object( 156 | str(tempdir.path / "child_dir"), windows_user.user, MODIFY_READ_WRITE_MASK 157 | ) 158 | assert principal_has_access_to_object( 159 | str(tempdir.path / "child_file"), windows_user.user, MODIFY_READ_WRITE_MASK 160 | ) 161 | assert principal_has_access_to_object( 162 | str(tempdir.path / "child_dir" / "grandchild_dir"), 163 | windows_user.user, 164 | MODIFY_READ_WRITE_MASK, 165 | ) 166 | assert principal_has_access_to_object( 167 | str(tempdir.path / "child_dir" / "grandchild_file"), 168 | windows_user.user, 169 | MODIFY_READ_WRITE_MASK, 170 | ) 171 | 172 | @patch("openjd.sessions.WindowsSessionUser.is_process_user", return_value=True) 173 | def test_nonvalid_windows_principal_raises_exception(self, mock_user_match): 174 | # GIVEN 175 | windows_user = WindowsSessionUser("non_existent_user") 176 | 177 | # THEN 178 | with pytest.raises(RuntimeError, match="Could not change permissions of directory"): 179 | TempDir(user=windows_user) 180 | 181 | @pytest.fixture 182 | def clean_up_directory(self): 183 | created_dirs = [] 184 | yield created_dirs 185 | for dir_path in created_dirs: 186 | if os.path.exists(dir_path): 187 | shutil.rmtree(dir_path) 188 | 189 | def test_windows_temp_dir(self, monkeypatch, clean_up_directory): 190 | monkeypatch.setenv("PROGRAMDATA", r"C:\ProgramDataForOpenJDTest") 191 | expected_dir = r"C:\ProgramDataForOpenJDTest\Amazon\OpenJD" 192 | clean_up_directory.append(expected_dir) 193 | assert custom_gettempdir() == expected_dir 194 | assert os.path.exists( 195 | Path(expected_dir).parent 196 | ), r"Directory C:\ProgramDataForOpenJDTest\Amazon should be created." 197 | 198 | def test_cleanup(self, windows_user: WindowsSessionUser) -> None: 199 | # Ensure that we can delete the files in that directory that have been 200 | # created by the other user. 
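        # (Below, WindowsPermissionHelper restricts the file's ACL so that only
        # windows_user has full control before cleanup() runs.)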
201 | 202 | # GIVEN 203 | tmpdir = TempDir(user=windows_user) 204 | testfilename = str(tmpdir.path / "testfile.txt") 205 | 206 | # Create a file on which only windows_user has permissions 207 | with open(testfilename, "w") as f: 208 | f.write("File content") 209 | WindowsPermissionHelper.set_permissions( 210 | testfilename, principals_full_control=[windows_user.user] 211 | ) 212 | 213 | # WHEN 214 | tmpdir.cleanup() 215 | 216 | # THEN 217 | assert not os.path.exists(testfilename) 218 | assert not os.path.exists(tmpdir.path) 219 | 220 | 221 | @pytest.mark.usefixtures("posix_target_user", "posix_disjoint_user") 222 | class TestTempDirPosixUser: 223 | """Tests of the TempDir when the resulting directory is to be owned by 224 | a different user than the current process. 225 | """ 226 | 227 | @pytest.mark.xfail( 228 | not has_posix_target_user(), 229 | reason=POSIX_SET_TARGET_USER_ENV_VARS_MESSAGE, 230 | ) 231 | def test_defaults(self, posix_target_user: PosixSessionUser) -> None: 232 | # Ensure that we can create the temporary directory. 233 | 234 | # GIVEN 235 | tmpdir = Path(gettempdir()) 236 | uid = pwd.getpwnam(posix_target_user.user).pw_uid # type: ignore 237 | gid = grp.getgrnam(posix_target_user.group).gr_gid # type: ignore 238 | 239 | # WHEN 240 | result = TempDir(user=posix_target_user) 241 | 242 | # THEN 243 | assert result.path.parent == tmpdir / "OpenJD" 244 | assert os.path.exists(result.path) 245 | statinfo = os.stat(result.path) 246 | assert statinfo.st_uid != uid, "Test: Not owned by target user" 247 | assert statinfo.st_uid == os.getuid(), "Test: Is owned by this user" # type: ignore 248 | assert statinfo.st_gid == gid, "Test: gid is changed" 249 | assert statinfo.st_mode & stat.S_IWGRP, "Test: Directory is group-writable" 250 | 251 | @pytest.mark.xfail( 252 | not has_posix_target_user(), 253 | reason=POSIX_SET_TARGET_USER_ENV_VARS_MESSAGE, 254 | ) 255 | def test_cleanup(self, posix_target_user: PosixSessionUser) -> None: 256 | # Ensure that we can delete the files in that directory that have been 257 | # created by the other user. 258 | 259 | # GIVEN 260 | tmpdir = TempDir(user=posix_target_user) 261 | testfilename = tmpdir.path / "testfile.txt" 262 | # Create a file owned by the target user and their default group. 263 | runresult = run( 264 | ["sudo", "-u", posix_target_user.user, "-i", "/usr/bin/touch", str(testfilename)], 265 | stdin=DEVNULL, 266 | stdout=DEVNULL, 267 | stderr=DEVNULL, 268 | ) 269 | 270 | # WHEN 271 | tmpdir.cleanup() 272 | 273 | # THEN 274 | assert runresult.returncode == 0 275 | assert not os.path.exists(testfilename) 276 | assert not os.path.exists(tmpdir.path) 277 | 278 | @pytest.mark.xfail( 279 | not has_posix_disjoint_user(), 280 | reason=POSIX_SET_DISJOINT_USER_ENV_VARS_MESSAGE, 281 | ) 282 | def test_cannot_change_to_group(self, posix_disjoint_user: PosixSessionUser) -> None: 283 | # Test that we raise an exception when we try to give the created directory to 284 | # a group that this process isn't a member of. 285 | 286 | # WHEN 287 | with pytest.raises(RuntimeError): 288 | TempDir(user=posix_disjoint_user) 289 | -------------------------------------------------------------------------------- /test/openjd/sessions/test_windows_process_killer.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
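# These tests launch support_files/app_20s_run.py (a script that runs for roughly 20
# seconds) and then drive the suspend/kill helpers against the live process via psutil.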
2 | 3 | import subprocess 4 | import sys 5 | import time 6 | from pathlib import Path 7 | import psutil 8 | 9 | from openjd.sessions._windows_process_killer import ( 10 | _suspend_process_tree, 11 | _kill_processes, 12 | kill_windows_process_tree, 13 | _suspend_process, 14 | ) 15 | from logging.handlers import QueueHandler 16 | import pytest 17 | from .conftest import build_logger 18 | from subprocess import Popen 19 | 20 | 21 | @pytest.mark.usefixtures("message_queue", "queue_handler") 22 | class TestWindowsProcessKiller: 23 | def test_suspend_process_tree(self, queue_handler: QueueHandler) -> None: 24 | # GIVEN 25 | logger = build_logger(queue_handler) 26 | python_app_loc = (Path(__file__).parent / "support_files" / "app_20s_run.py").resolve() 27 | process = Popen([sys.executable, python_app_loc], stdout=subprocess.PIPE, text=True) 28 | 29 | # When 30 | # Give a few seconds for running the python script 31 | time.sleep(3) 32 | proc = psutil.Process(process.pid) 33 | _suspend_process_tree(logger, proc, [], [], True) 34 | 35 | # Then 36 | try: 37 | assert proc.status() == psutil.STATUS_STOPPED 38 | finally: 39 | proc.kill() 40 | 41 | def test_suspend_process(self, queue_handler: QueueHandler) -> None: 42 | # GIVEN 43 | logger = build_logger(queue_handler) 44 | python_app_loc = (Path(__file__).parent / "support_files" / "app_20s_run.py").resolve() 45 | process = Popen([sys.executable, python_app_loc], stdout=subprocess.PIPE, text=True) 46 | 47 | # When 48 | # Give a few seconds for running the python script 49 | time.sleep(3) 50 | proc = psutil.Process(process.pid) 51 | result = _suspend_process(logger, proc) 52 | 53 | # Then 54 | try: 55 | assert result 56 | assert proc.status() == psutil.STATUS_STOPPED 57 | finally: 58 | proc.kill() 59 | 60 | def test_kill_processes(self, queue_handler: QueueHandler) -> None: 61 | # GIVEN 62 | logger = build_logger(queue_handler) 63 | python_app_loc = (Path(__file__).parent / "support_files" / "app_20s_run.py").resolve() 64 | process = Popen([sys.executable, python_app_loc], stdout=subprocess.PIPE, text=True) 65 | 66 | # When 67 | # Give a few seconds for running the python script 68 | time.sleep(3) 69 | proc = psutil.Process(process.pid) 70 | _kill_processes(logger, [proc]) 71 | 72 | # Then 73 | assert not psutil.pid_exists(process.pid) 74 | 75 | def test_kill_windows_process_tree(self, queue_handler: QueueHandler) -> None: 76 | # GIVEN 77 | logger = build_logger(queue_handler) 78 | python_app_loc = (Path(__file__).parent / "support_files" / "app_20s_run.py").resolve() 79 | process = Popen([sys.executable, python_app_loc], stdout=subprocess.PIPE, text=True) 80 | 81 | # When 82 | # Give a few seconds for running the python script 83 | time.sleep(3) 84 | kill_windows_process_tree(logger, process.pid) 85 | 86 | # Then 87 | assert not psutil.pid_exists(process.pid) 88 | -------------------------------------------------------------------------------- /test/openjd/test_copyright_header.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | import re 4 | from pathlib import Path 5 | 6 | # For distributed open source and proprietary code, we must include a copyright header in source every file: 7 | _copyright_header_re = re.compile( 8 | r"Copyright Amazon\.com, Inc\. or its affiliates\. 
All Rights Reserved\.", re.IGNORECASE 9 | ) 10 | _generated_by_scm = re.compile(r"# file generated by setuptools[_-]scm", re.IGNORECASE) 11 | 12 | 13 | def _check_file(filename: Path) -> None: 14 | with open(filename) as infile: 15 | lines_read = 0 16 | for line in infile: 17 | if _copyright_header_re.search(line): 18 | return # success 19 | lines_read += 1 20 | if lines_read > 10: 21 | raise Exception( 22 | f"Could not find a valid Amazon.com copyright header in the top of {filename}." 23 | " Please add one." 24 | ) 25 | else: 26 | # __init__.py files are usually empty, this is to catch that. 27 | raise Exception( 28 | f"Could not find a valid Amazon.com copyright header in the top of {filename}." 29 | " Please add one." 30 | ) 31 | 32 | 33 | def _is_version_file(filename: Path) -> bool: 34 | if filename.name != "_version.py": 35 | return False 36 | with open(filename) as infile: 37 | lines_read = 0 38 | for line in infile: 39 | if _generated_by_scm.search(line): 40 | return True 41 | lines_read += 1 42 | if lines_read > 10: 43 | break 44 | return False 45 | 46 | 47 | def test_copyright_headers(): 48 | """Verifies every .py file has an Amazon copyright header.""" 49 | root_project_dir = Path(__file__) 50 | # The root of the project is the directory that contains the test directory. 51 | while not (root_project_dir / "test").exists(): 52 | root_project_dir = root_project_dir.parent 53 | # Choose only a few top level directories to test. 54 | # That way we don't snag any virtual envs a developer might create, at the risk of missing 55 | # some top level .py files. 56 | # Additionally, ignore any files in the `node_modules` directory that we use in the VS Code 57 | # extension. 58 | top_level_dirs = [ 59 | "src", 60 | "test", 61 | "scripts", 62 | "testing_containers", 63 | "openjdvscode!(/node_modules)", 64 | ] 65 | file_count = 0 66 | for top_level_dir in top_level_dirs: 67 | for glob_pattern in ("**/*.py", "**/*.sh", "**/Dockerfile", "**/*.ts"): 68 | for path in Path(root_project_dir / top_level_dir).glob(glob_pattern): 69 | print(path) 70 | if not _is_version_file(path): 71 | _check_file(path) 72 | file_count += 1 73 | 74 | print(f"test_copyright_headers checked {file_count} files successfully.") 75 | assert file_count > 0, "Test misconfiguration" 76 | 77 | 78 | if __name__ == "__main__": 79 | test_copyright_headers() 80 | -------------------------------------------------------------------------------- /test/openjd/test_importable.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | 4 | def test_openjd_importable(): 5 | import openjd # noqa: F401 6 | 7 | 8 | def test_openjd_session_importable(): 9 | import openjd.sessions # noqa: F401 10 | 11 | 12 | def test_version_importable(): 13 | from openjd.sessions import version # noqa: F401 14 | -------------------------------------------------------------------------------- /test/openjd/utils/windows_acl_helper.py: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
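# Test-only helpers that read an object's DACL through pywin32 (win32security) so tests
# can assert which principals are allowed or denied access. win32security is only
# imported when running on Windows.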
2 | 3 | from openjd.sessions._os_checker import is_windows 4 | 5 | if is_windows(): 6 | import win32security 7 | 8 | MODIFY_READ_WRITE_MASK = 0x1301FF 9 | FULL_CONTROL_MASK = 0x1F01FF 10 | 11 | 12 | def get_aces_for_object(object_path: str) -> dict[str, tuple[list[int], list[int]]]: 13 | """Obtain a dictionary representation of the Access Control Entities (ACEs) of the 14 | given object. The returned dictionary has the form: 15 | { 16 | : ( 17 | [ , ... ], 18 | [ , ... ] 19 | ), 20 | ... 21 | } 22 | """ 23 | return_dict = dict[str, tuple[list[int], list[int]]]() 24 | sd = win32security.GetFileSecurity(object_path, win32security.DACL_SECURITY_INFORMATION) 25 | 26 | dacl = sd.GetSecurityDescriptorDacl() 27 | 28 | for i in range(dacl.GetAceCount()): 29 | ace = dacl.GetAce(i) 30 | 31 | ace_type = ace[0][0] 32 | access_mask = ace[1] 33 | ace_principal_sid = ace[2] 34 | 35 | account_name, _, _ = win32security.LookupAccountSid(None, ace_principal_sid) 36 | if account_name not in return_dict: 37 | return_dict[account_name] = (list[int](), list[int]()) 38 | if ace_type == win32security.ACCESS_ALLOWED_ACE_TYPE: 39 | return_dict[account_name][0].append(access_mask) 40 | elif ace_type == win32security.ACCESS_DENIED_ACE_TYPE: 41 | return_dict[account_name][1].append(access_mask) 42 | 43 | return return_dict 44 | 45 | 46 | def get_aces_for_principal_on_object(object_path: str, principal_name: str): 47 | """ 48 | Returns a list of access allowed masks and a list of access denied masks for a principal's permissions on an object. 49 | Access masks for principals other than that specified by principal_name will be excluded from both lists. 50 | 51 | Arguments: 52 | object_path (str): The path to the object for which the ACE masks will be retrieved. 53 | principal_name (str): The name of the principal to filter for. 54 | 55 | Returns: 56 | access_allowed_masks (List[int]): All masks allowing principal_name access to the file. 57 | access_denied_masks (List[int]): All masks denying principal_name access to the file. 58 | """ 59 | sd = win32security.GetFileSecurity(object_path, win32security.DACL_SECURITY_INFORMATION) 60 | 61 | dacl = sd.GetSecurityDescriptorDacl() 62 | 63 | principal_to_check_sid, _, _ = win32security.LookupAccountName(None, principal_name) 64 | 65 | access_allowed_masks = [] 66 | access_denied_masks = [] 67 | 68 | for i in range(dacl.GetAceCount()): 69 | ace = dacl.GetAce(i) 70 | 71 | ace_type = ace[0][0] 72 | access_mask = ace[1] 73 | ace_principal_sid = ace[2] 74 | 75 | if ace_principal_sid == principal_to_check_sid: 76 | if ace_type == win32security.ACCESS_ALLOWED_ACE_TYPE: 77 | access_allowed_masks.append(access_mask) 78 | elif ace_type == win32security.ACCESS_DENIED_ACE_TYPE: 79 | access_denied_masks.append(access_mask) 80 | 81 | return access_allowed_masks, access_denied_masks 82 | 83 | 84 | def principal_has_access_to_object(object_path, principal_name, access_mask): 85 | access_allowed_masks, access_denied_masks = get_aces_for_principal_on_object( 86 | object_path, principal_name 87 | ) 88 | 89 | return access_allowed_masks == [access_mask] and len(access_denied_masks) == 0 90 | -------------------------------------------------------------------------------- /testing_containers/codebuild_proxy/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | FROM public.ecr.aws/codebuild/amazonlinux2-x86_64-standard:4.0 4 | 5 | WORKDIR /root 6 | 7 | COPY run_tests.sh . 
8 | 9 | USER root 10 | 11 | ENTRYPOINT [ "" ] 12 | CMD ["/bin/sh", "-c", "./run_tests.sh"] 13 | -------------------------------------------------------------------------------- /testing_containers/codebuild_proxy/run_tests.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | set -eux 5 | 6 | #DIR=/home/codebuild-test 7 | DIR=/root 8 | 9 | mkdir -p "${DIR}"/code/ 10 | cp -r /code/* "${DIR}"/code/ 11 | cp -r /code/.git "${DIR}"/code/ 12 | 13 | cd "${DIR}"/code 14 | find . -type d -name '__pycache__' -print0 | xargs -0 rm -rf 15 | ls -la 16 | "${DIR}"/code/pipeline/build.sh 17 | -------------------------------------------------------------------------------- /testing_containers/ldap_sudo_environment/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | FROM python:3.9-bookworm 4 | ARG PIP_INDEX_URL 5 | 6 | # Let our tests know that we"re in an environment that can run the sudo 7 | # tests 8 | ENV OPENJD_TEST_SUDO_TARGET_USER=targetuser 9 | ENV OPENJD_TEST_SUDO_SHARED_GROUP=sharedgroup 10 | ENV OPENJD_TEST_SUDO_DISJOINT_USER=disjointuser 11 | ENV OPENJD_TEST_SUDO_DISJOINT_GROUP=disjointgroup 12 | ENV PIP_INDEX_URL=$PIP_INDEX_URL 13 | ENV HATCH_DATA_DIR=/hatch/data 14 | ENV HATCH_CACHE_DIR=/hatch/cache 15 | 16 | WORKDIR /config 17 | 18 | COPY testing_containers/ldap_sudo_environment/changePassword.ldif /config 19 | COPY testing_containers/ldap_sudo_environment/addUsersGroups.ldif /config 20 | COPY testing_containers/ldap_sudo_environment/addUsersToSharedGroup.ldif /config 21 | COPY testing_containers/ldap_sudo_environment/start_ldap.sh /config 22 | 23 | # We set up two users for our tests: 24 | # 1) hostuser -- the user that will be running the pytests. 25 | # 2) targetuser -- the user that we'll be running subprocesses as in the 26 | # the cross-account tests. 27 | # 3) disjointuser -- a user used in temporary directory creation tests. 
28 | # These accounts belong to the following groups: 29 | # hostuser: hostuser, sharedgroup 30 | # targetuser: targetuser, sharedgroup 31 | # disjointuser: disjointuser, disjointgroup 32 | RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && \ 33 | apt-get install -y vim screen slapd ldap-utils && \ 34 | echo slapd slapd/password1 password | debconf-set-selections -v && \ 35 | echo slapd slapd/password2 password | debconf-set-selections -v && \ 36 | echo slapd slapd/internal/adminpw password | debconf-set-selections -v && \ 37 | echo slapd slapd/internal/generated_adminpw password | debconf-set-selections -v && \ 38 | echo slapd slapd/password_mismatch note | debconf-set-selections -v && \ 39 | echo slapd slapd/no_configuration boolean false | debconf-set-selections -v && \ 40 | echo slapd slapd/dump_database select when needed | debconf-set-selections -v && \ 41 | echo slapd slapd/domain string environment.internal | debconf-set-selections -v && \ 42 | echo slapd slapd/move_old_database boolean true | debconf-set-selections -v && \ 43 | echo slapd slapd/postinst_error note | debconf-set-selections -v && \ 44 | echo slapd slapd/purge_database boolean false | debconf-set-selections -v && \ 45 | echo slapd slapd/dump_database_destdir string /var/backups/slapd-VERSION | debconf-set-selections -v && \ 46 | echo slapd shared/organization string environment.internal | debconf-set-selections -v && \ 47 | echo slapd slapd/invalid_config boolean true | debconf-set-selections -v && \ 48 | echo slapd slapd/upgrade_slapcat_failure error | debconf-set-selections -v && \ 49 | dpkg-reconfigure slapd && \ 50 | echo "BASE dc=environment,dc=internal" >> /etc/ldap/ldap.conf && \ 51 | echo "URI ldap://ldap.environment.internal" >> /etc/ldap/ldap.conf && \ 52 | update-rc.d slapd enable && service slapd start && \ 53 | ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f /config/changePassword.ldif && \ 54 | ldapadd -x -D cn=admin,dc=environment,dc=internal -w password -f /config/addUsersGroups.ldif && \ 55 | ldapmodify -xcD cn=admin,dc=environment,dc=internal -w password -f /config/addUsersToSharedGroup.ldif && \ 56 | echo nslcd nslcd/ldap-uris string ldap://ldap.environment.internal/ | debconf-set-selections -v && \ 57 | echo nslcd nslcd/ldap-base string dc=environment,dc=internal | debconf-set-selections -v && \ 58 | echo libnss-ldapd:amd64 libnss-ldapd/nsswitch multiselect passwd, group, shadow | debconf-set-selections -v && \ 59 | apt-get install -y libnss-ldapd libpam-ldapd && \ 60 | echo session optional pam_mkhomedir.so skel=/etc/skel umask=007 >> /etc/pam.d/common-session && \ 61 | touch /etc/netgroup && service nscd restart && service nslcd restart && \ 62 | # Create the home directories before login 63 | mkhomedir_helper hostuser 007 && mkhomedir_helper targetuser 007 && mkhomedir_helper disjointuser 007 && \ 64 | mkdir -p /hatch/data /hatch/cache && \ 65 | # Set up the sudoers permissions 66 | apt-get install -y sudo-ldap && \ 67 | echo "hostuser ALL=(${OPENJD_TEST_SUDO_TARGET_USER},hostuser) NOPASSWD: ALL" > /etc/sudoers.d/hostuser && \ 68 | # Clean up apt cache 69 | rm -rf /var/lib/apt/lists/* && \ 70 | apt-get clean && \ 71 | # Install hatch (for setting up environment and running tests) 72 | pip install hatch 73 | 74 | COPY . 
/code/ 75 | WORKDIR /code 76 | RUN hatch env create 77 | 78 | CMD ["/bin/sh", "-c", "/config/start_ldap.sh && chown -R hostuser:hostuser /hatch /code && sudo -u hostuser --preserve-env=HATCH_CACHE_DIR,HATCH_DATA_DIR,PIP_INDEX_URL,OPENJD_TEST_SUDO_TARGET_USER,OPENJD_TEST_SUDO_SHARED_GROUP,OPENJD_TEST_SUDO_DISJOINT_USER,OPENJD_TEST_SUDO_DISJOINT_GROUP hatch run test --no-cov"] 79 | -------------------------------------------------------------------------------- /testing_containers/ldap_sudo_environment/README.md: -------------------------------------------------------------------------------- 1 | 2 | ## Build 3 | ``` 4 | docker build --build-arg BUILDKIT_SANDBOX_HOSTNAME=ldap.environment.internal --ulimit nofile=1024 -t openjd_ldap_test -f testing_containers/ldap_sudo_environment/Dockerfile . 5 | ``` 6 | 7 | ## Run Interactive Bash 8 | To start an interactive bash session: 9 | ``` 10 | docker run -h ldap.environment.internal --rm -it openjd_ldap_test:latest bash 11 | ``` 12 | To start the LDAP Server and Client: 13 | 14 | ``` 15 | /config/start_ldap.sh 16 | ``` 17 | 18 | Login via ldap: 19 | 20 | ``` 21 | login -p hostuser 22 | ``` 23 | -------------------------------------------------------------------------------- /testing_containers/ldap_sudo_environment/addUsersGroups.ldif: -------------------------------------------------------------------------------- 1 | dn: ou=People,dc=environment,dc=internal 2 | objectClass: organizationalUnit 3 | ou: People 4 | 5 | dn: ou=Groups,dc=environment,dc=internal 6 | objectClass: organizationalUnit 7 | ou: Groups 8 | 9 | dn: cn=hostuser,ou=Groups,dc=environment,dc=internal 10 | objectClass: posixGroup 11 | cn: hostuser 12 | gidNumber: 5000 13 | 14 | dn: cn=targetuser,ou=Groups,dc=environment,dc=internal 15 | objectClass: posixGroup 16 | cn: targetuser 17 | gidNumber: 5001 18 | 19 | dn: cn=disjointuser,ou=Groups,dc=environment,dc=internal 20 | objectClass: posixGroup 21 | cn: disjointuser 22 | gidNumber: 5002 23 | 24 | dn: cn=sharedgroup,ou=Groups,dc=environment,dc=internal 25 | objectClass: posixGroup 26 | cn: sharedgroup 27 | gidNumber: 5003 28 | 29 | dn: cn=disjointgroup,ou=Groups,dc=environment,dc=internal 30 | objectClass: posixGroup 31 | cn: disjointgroup 32 | gidNumber: 5004 33 | 34 | dn: uid=hostuser,ou=People,dc=environment,dc=internal 35 | objectClass: inetOrgPerson 36 | objectClass: posixAccount 37 | objectClass: shadowAccount 38 | uid: hostuser 39 | sn: hostuser 40 | givenName: hostuser 41 | cn: hostuser 42 | displayName: hostuser 43 | uidNumber: 10000 44 | gidNumber: 5000 45 | userPassword: password 46 | gecos: hostuser 47 | loginShell: /bin/bash 48 | homeDirectory: /home/hostuser 49 | 50 | dn: uid=targetuser,ou=People,dc=environment,dc=internal 51 | objectClass: inetOrgPerson 52 | objectClass: posixAccount 53 | objectClass: shadowAccount 54 | uid: targetuser 55 | sn: targetuser 56 | givenName: targetuser 57 | cn: targetuser 58 | displayName: targetuser 59 | uidNumber: 10001 60 | gidNumber: 5001 61 | userPassword: password 62 | gecos: targetuser 63 | loginShell: /bin/bash 64 | homeDirectory: /home/targetuser 65 | 66 | dn: uid=disjointuser,ou=People,dc=environment,dc=internal 67 | objectClass: inetOrgPerson 68 | objectClass: posixAccount 69 | objectClass: shadowAccount 70 | uid: disjointuser 71 | sn: disjointuser 72 | givenName: disjointuser 73 | cn: disjointuser 74 | displayName: disjointuser 75 | uidNumber: 10002 76 | gidNumber: 5002 77 | userPassword: password 78 | gecos: disjointuser 79 | loginShell: /bin/bash 80 | homeDirectory: 
/home/disjointuser -------------------------------------------------------------------------------- /testing_containers/ldap_sudo_environment/addUsersToSharedGroup.ldif: -------------------------------------------------------------------------------- 1 | dn: cn=sharedgroup,ou=Groups,dc=environment,dc=internal 2 | changetype: modify 3 | add: memberUid 4 | memberUid: hostuser 5 | 6 | dn: cn=sharedgroup,ou=Groups,dc=environment,dc=internal 7 | changetype: modify 8 | add: memberUid 9 | memberUid: targetuser 10 | -------------------------------------------------------------------------------- /testing_containers/ldap_sudo_environment/changePassword.ldif: -------------------------------------------------------------------------------- 1 | dn: olcDatabase={1}mdb,cn=config 2 | changetype: modify 3 | replace: olcRootPW 4 | olcRootPW: password -------------------------------------------------------------------------------- /testing_containers/ldap_sudo_environment/start_ldap.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | set -eo 5 | 6 | service slapd start 7 | service nscd restart 8 | service nslcd restart -------------------------------------------------------------------------------- /testing_containers/localuser_sudo_environment/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | FROM python:3.9-bookworm 4 | ARG PIP_INDEX_URL 5 | 6 | # Let our tests know that we"re in an environment that can run the sudo 7 | # tests 8 | ENV OPENJD_TEST_SUDO_TARGET_USER=targetuser 9 | ENV OPENJD_TEST_SUDO_SHARED_GROUP=sharedgroup 10 | ENV OPENJD_TEST_SUDO_DISJOINT_USER=disjointuser 11 | ENV OPENJD_TEST_SUDO_DISJOINT_GROUP=disjointgroup 12 | ENV PIP_INDEX_URL=$PIP_INDEX_URL 13 | 14 | # We set up two users for our tests: 15 | # 1) hostuser -- the user that will be running the pytests. 16 | # 2) targetuser -- the user that we'll be running subprocesses as in the 17 | # the cross-account tests. 18 | # 3) disjointuser -- a user used in temporary directory creation tests. 19 | # These accounts belong to the following groups: 20 | # hostuser: hostuser, sharedgroup 21 | # targetuser: targetuser, sharedgroup 22 | # disjointuser: disjointuser, disjointgroup 23 | RUN apt-get update && apt-get install -y gcc libcap2-bin psmisc sudo && \ 24 | # Clean up apt cache 25 | rm -rf /var/lib/apt/lists/* && \ 26 | apt-get clean && \ 27 | # Set up OS users and groups 28 | addgroup ${OPENJD_TEST_SUDO_SHARED_GROUP} && \ 29 | useradd -ms /bin/bash -G ${OPENJD_TEST_SUDO_SHARED_GROUP} ${OPENJD_TEST_SUDO_TARGET_USER} && \ 30 | useradd -ms /bin/bash -G ${OPENJD_TEST_SUDO_SHARED_GROUP} hostuser && \ 31 | # Setup sudoers rules 32 | echo "hostuser ALL=(${OPENJD_TEST_SUDO_TARGET_USER},hostuser) NOPASSWD: ALL" > /etc/sudoers.d/hostuser && \ 33 | addgroup ${OPENJD_TEST_SUDO_DISJOINT_GROUP} && \ 34 | useradd -ms /bin/bash -G ${OPENJD_TEST_SUDO_DISJOINT_GROUP} ${OPENJD_TEST_SUDO_DISJOINT_USER} && \ 35 | # Install hatch (for setting up environment and running tests) 36 | pip install hatch 37 | 38 | USER hostuser 39 | 40 | COPY --chown=hostuser:hostuser . 
/code/ 41 | WORKDIR /code 42 | RUN hatch env create && \ 43 | # compile the output_signal_sender program which outputs the PID of a process sending a SIGTERM signal \ 44 | gcc -Wall /code/test/openjd/sessions/support_files/output_signal_sender.c -o /code/test/openjd/sessions/support_files/output_signal_sender 45 | 46 | CMD ["hatch", "run", "test", "--no-cov"] -------------------------------------------------------------------------------- /testing_containers/localuser_sudo_environment/README.md: -------------------------------------------------------------------------------- 1 | 2 | This is a docker container that is intended to be used to set up a Linux environment within which we can 3 | test the cross-user functionality within the `processing` submodule. 4 | e.g. Running a `LoggingSubprocess` as a user other than the process owner via sudo. --------------------------------------------------------------------------------
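## Build
A minimal sketch mirroring the `ldap_sudo_environment` container; the image tag `openjd_localuser_test` is only an assumed example name, and the build must run from the repository root so that the Dockerfile's `COPY . /code/` picks up the source tree:
```
docker build -t openjd_localuser_test -f testing_containers/localuser_sudo_environment/Dockerfile .
```

## Run Tests
The image's default `CMD` runs the test suite as `hostuser` via `hatch run test --no-cov`:
```
docker run --rm openjd_localuser_test:latest
```

## Run Interactive Bash
```
docker run --rm -it openjd_localuser_test:latest bash
```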