├── secrets └── .gitkeep ├── src └── ump │ ├── py.typed │ ├── initializers │ └── db │ │ ├── initialize_keycloak.sql │ │ ├── create_jobs_table.sql │ │ └── create_postgis_extension.sql │ ├── __init__.py │ ├── api │ ├── models │ │ ├── ogc_exception.py │ │ ├── job_status.py │ │ ├── job_comments.py │ │ ├── ensemble.py │ │ └── providers_config.py │ ├── routes │ │ ├── health.py │ │ ├── users.py │ │ ├── processes.py │ │ └── jobs.py │ ├── keycloak_utils.py │ ├── db_handler.py │ ├── jobs.py │ ├── providers.py │ └── processes.py │ ├── errors.py │ ├── config.py │ ├── utils.py │ └── geoserver │ └── geoserver.py ├── AUTHORS.md ├── examples └── README.md ├── docs ├── content │ ├── 01-intro │ │ ├── authors.md │ │ ├── intro.md │ │ └── quick_start.md │ ├── 04-developing │ │ ├── changelog.md │ │ └── contributing.md │ ├── 02-user_guide │ │ ├── deployment.md │ │ ├── provider-configuration.md │ │ └── setup.md │ ├── 03-architecture │ │ ├── geoserver.md │ │ ├── overview.md │ │ ├── keycloak.md │ │ └── api.md │ └── index.md ├── CSL_Logo.png ├── UMP-Logo.png ├── lgv-logo.png ├── UMP-Banner.png ├── UMP-Logo-Text.png ├── UMP-Sponsors-Banner.jpg ├── Architecture-Overview.jpg ├── Architecture-Overview.png ├── UMP-Developer-Banner.png ├── Logo_LGV_HHDesign_4C_eng.jpg ├── _toc.yml ├── references.bib ├── UMP-Logo.svg ├── UMP-Logo-Text.svg └── _config.yml ├── reports └── README.md ├── assets └── README.md ├── environment.yaml ├── .gitmodules ├── .dockerignore ├── data └── .gitignore ├── docker-compose-build.yaml ├── charts ├── .bumpversion.toml └── urban-model-platform │ ├── templates │ ├── configmap-providers.yaml │ ├── service.yaml │ ├── serviceaccount.yaml │ ├── tests │ │ └── test-connection.yaml │ ├── secret-keycloak.yaml │ ├── certificate.yaml │ ├── secret-geoserver.yaml │ ├── httproute.yaml │ ├── issuer-prod.yaml │ ├── configmap-settings.yaml │ ├── issuer-staging.yaml │ ├── httproute-tls.yaml │ ├── hpa.yaml │ ├── NOTES.txt │ ├── _helpers.tpl │ └── deployment.yaml │ ├── .helmignore │ ├── 
Chart.yaml │ └── values.yaml ├── app.py ├── .githooks └── pre-push ├── .bumpversion.toml ├── scripts └── entrypoint.sh ├── migrations ├── versions │ ├── 1.0.0_add_user.py │ ├── 1.0.7_add_version.py │ ├── 1.0.8_drop_ensemble_id.py │ ├── 1.0.1_add_process_title_and_name.py │ ├── 1.0.11_add_hash.py │ ├── 1.0.4_remove_sampling_settings_from_ensemble.py │ ├── 1.0.5_add_ensemble_job_link.py │ ├── 1.0.9_create_job_comments.py │ ├── 1.0.10_add_share_tables.py │ ├── 1.0.3_extend_ensembles.py │ ├── 1.0.2_add_ensembles.py │ └── e4478f461de1_create_jobs_table_initial.py ├── script.py.mako ├── alembic.ini └── env.py ├── .pre-commit-config.yaml ├── .vscode ├── settings.json ├── tasks.json └── launch.json ├── nginx ├── default.conf └── default-local.conf ├── providers.yaml.example ├── .copier-answers.yml ├── .env.example ├── .github └── workflows │ ├── on-chart-release.yaml │ ├── on-release.yaml │ └── deploy-docs.yaml ├── Dockerfile ├── CHANGELOG.md ├── pyproject.toml ├── README.md ├── Makefile ├── CONTRIBUTING.md ├── .gitignore ├── docker-compose-prod.yaml └── docker-compose-dev.yaml /secrets/.gitkeep: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/ump/py.typed: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /AUTHORS.md: -------------------------------------------------------------------------------- 1 | # Authors -------------------------------------------------------------------------------- /examples/README.md: -------------------------------------------------------------------------------- 1 | Folder for simple examples -------------------------------------------------------------------------------- /docs/content/01-intro/authors.md: -------------------------------------------------------------------------------- 1 | ```{include}
../../../AUTHORS.md 2 | ``` -------------------------------------------------------------------------------- /reports/README.md: -------------------------------------------------------------------------------- 1 | Generated analyses are stored here as HTML, PDF, LaTeX, etc. -------------------------------------------------------------------------------- /src/ump/initializers/db/initialize_keycloak.sql: -------------------------------------------------------------------------------- 1 | CREATE DATABASE keycloak; 2 | -------------------------------------------------------------------------------- /docs/content/04-developing/changelog.md: -------------------------------------------------------------------------------- 1 | ```{include} ../../../CHANGELOG.md 2 | ``` -------------------------------------------------------------------------------- /src/ump/initializers/db/create_jobs_table.sql: -------------------------------------------------------------------------------- 1 | CREATE DATABASE cut_dev; 2 | 3 | -------------------------------------------------------------------------------- /src/ump/initializers/db/create_postgis_extension.sql: -------------------------------------------------------------------------------- 1 | CREATE EXTENSION postgis; 2 | -------------------------------------------------------------------------------- /docs/CSL_Logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/CSL_Logo.png -------------------------------------------------------------------------------- /docs/UMP-Logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/UMP-Logo.png -------------------------------------------------------------------------------- /docs/content/04-developing/contributing.md:
-------------------------------------------------------------------------------- 1 | (contributing)= 2 | ```{include} ../../../CONTRIBUTING.md 3 | ``` -------------------------------------------------------------------------------- /docs/lgv-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/lgv-logo.png -------------------------------------------------------------------------------- /docs/UMP-Banner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/UMP-Banner.png -------------------------------------------------------------------------------- /docs/UMP-Logo-Text.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/UMP-Logo-Text.png -------------------------------------------------------------------------------- /assets/README.md: -------------------------------------------------------------------------------- 1 | This folder contains files, such as images, that are part of external reports (e.g., Word documents).
-------------------------------------------------------------------------------- /environment.yaml: -------------------------------------------------------------------------------- 1 | channels: 2 | - conda-forge 3 | dependencies: 4 | - python=3.11 5 | - poetry=1.8.5 6 | - copier=9 -------------------------------------------------------------------------------- /docs/UMP-Sponsors-Banner.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/UMP-Sponsors-Banner.jpg -------------------------------------------------------------------------------- /docs/Architecture-Overview.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/Architecture-Overview.jpg -------------------------------------------------------------------------------- /docs/Architecture-Overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/Architecture-Overview.png -------------------------------------------------------------------------------- /docs/UMP-Developer-Banner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/UMP-Developer-Banner.png -------------------------------------------------------------------------------- /docs/Logo_LGV_HHDesign_4C_eng.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/citysciencelab/urban-model-platform/HEAD/docs/Logo_LGV_HHDesign_4C_eng.jpg -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule 
"modelserver_example"] 2 | path = modelserver_example 3 | url = https://github.com/StefanSchuhart/modelserver_example 4 | -------------------------------------------------------------------------------- /.dockerignore: -------------------------------------------------------------------------------- 1 | # ignore all 2 | * 3 | 4 | # except 5 | !setup.cfg 6 | !setup.py 7 | !environment.yaml 8 | !providers.yaml 9 | !poetry.lock 10 | !pyproject.toml 11 | !src 12 | !scripts 13 | !migrations 14 | -------------------------------------------------------------------------------- /data/.gitignore: -------------------------------------------------------------------------------- 1 | # ignore all 2 | * 3 | # except for 4 | !.gitignore 5 | !external 6 | !external/README.md 7 | !interim 8 | !interim/README.md 9 | !processed 10 | !processed/README.md 11 | !raw 12 | !raw/README.md 13 | -------------------------------------------------------------------------------- /docker-compose-build.yaml: -------------------------------------------------------------------------------- 1 | services: 2 | api: 3 | image: ${CONTAINER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} 4 | build: 5 | context: . 6 | dockerfile: Dockerfile 7 | args: 8 | SOURCE_COMMIT: 9 | -------------------------------------------------------------------------------- /src/ump/__init__.py: -------------------------------------------------------------------------------- 1 | """urban-model-platform package. 
2 | 3 | A server federation API based on OGC API Processes that connects model 4 | servers and centralizes access to them. 5 | """ 6 | 7 | from __future__ import annotations 8 | 9 | __all__: list[str] = [] 10 | -------------------------------------------------------------------------------- /charts/.bumpversion.toml: -------------------------------------------------------------------------------- 1 | [bumpversion] 2 | current_version = "0.7.0" 3 | commit = true 4 | tag = true 5 | tag_name = "chart-v{new_version}" 6 | 7 | [[bumpversion.files]] 8 | filename = "urban-model-platform/Chart.yaml" 9 | search = 'version: {current_version}' 10 | replace = 'version: {new_version}' 11 | -------------------------------------------------------------------------------- /src/ump/api/models/ogc_exception.py: -------------------------------------------------------------------------------- 1 | from pydantic import BaseModel 2 | from typing import Optional, Dict, Any 3 | 4 | class OGCExceptionResponse(BaseModel): 5 | type: str 6 | title: str 7 | status: int 8 | detail: str 9 | instance: Optional[str] = None 10 | additional: Optional[Dict[str, Any]] = None 11 | -------------------------------------------------------------------------------- /src/ump/api/models/job_status.py: -------------------------------------------------------------------------------- 1 | from enum import Enum 2 | 3 | 4 | class JobStatus(Enum): 5 | """ 6 | Enum for the job status options specified in the WPS 2.0 specification 7 | """ 8 | accepted = 'accepted' 9 | running = 'running' 10 | successful = 'successful' 11 | failed = 'failed' 12 | dismissed = 'dismissed' 13 | -------------------------------------------------------------------------------- /docs/content/02-user_guide/deployment.md: -------------------------------------------------------------------------------- 1 | # Deployment 2 | 3 | ## Using docker 4 | There is a docker-compose file for production deployment in `docker-compose-prod.yaml`.
This file is used to deploy the application in a production environment and includes configurations for the backend API, the PostgreSQL database, and Geoserver. 5 | 6 | ## Using the provided helm chart 7 | 8 | (Coming soon..) -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/configmap-providers.yaml: -------------------------------------------------------------------------------- 1 | {{- if not .Values.providers.existingConfigMap.name }} 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: {{ include "ump.fullname" . }}-providers 6 | labels: 7 | {{- include "ump.labels" . | nindent 4 }} 8 | data: 9 | providers.yaml: {{- toYaml .Values.providers.content | indent 4 }} 10 | {{- end }} -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | from ump.main import app 2 | from ump.config import app_settings as config 3 | 4 | if __name__ == "__main__": 5 | # Run the Flask development server for local use; production 6 | # deployments start gunicorn via scripts/entrypoint.sh instead. 7 | app.run( 8 | host="0.0.0.0", 9 | port=5000, 10 | debug=config.UMP_LOG_LEVEL.lower() == "debug", 11 | ) -------------------------------------------------------------------------------- /.githooks/pre-push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Check if helm is installed 4 | if ! command -v helm &> /dev/null; then 5 | echo "Error: helm is not installed" 6 | exit 1 7 | fi 8 | 9 | # Run helm lint 10 | echo "Running helm lint..." 11 | helm lint charts/urban-model-platform 12 | 13 | # Check the exit status 14 | if [ $?
-ne 0 ]; then 15 | echo "Error: helm lint failed" 16 | exit 1 17 | fi 18 | 19 | exit 0 -------------------------------------------------------------------------------- /.bumpversion.toml: -------------------------------------------------------------------------------- 1 | [bumpversion] 2 | current_version = "2.1.0rc1+fix-error-handling" 3 | commit = true 4 | tag = true 5 | tag_name = "v{new_version}" 6 | 7 | [[bumpversion.files]] 8 | filename = "pyproject.toml" 9 | search = 'version = "{current_version}"' 10 | replace = 'version = "{new_version}"' 11 | 12 | [[bumpversion.files]] 13 | filename = ".env" 14 | search = 'IMAGE_TAG={current_version}' 15 | replace = 'IMAGE_TAG={new_version}' 16 | -------------------------------------------------------------------------------- /src/ump/api/routes/health.py: -------------------------------------------------------------------------------- 1 | from flask import Blueprint 2 | from ump.api.db_handler import DBHandler 3 | 4 | health_bp = Blueprint('health', __name__) 5 | 6 | @health_bp.route('/ready') 7 | def readiness(): 8 | query = "SELECT 1" 9 | with DBHandler() as db: 10 | try: 11 | db.run_query(query) 12 | return {'status': 'ok'}, 200 13 | except Exception: 14 | return {'status': 'error'}, 503 -------------------------------------------------------------------------------- /scripts/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | 5 | export UMP_SERVER_TIMEOUT="${UMP_SERVER_TIMEOUT:-30}" 6 | 7 | flask db upgrade 8 | 9 | echo "Running API Server in production mode." 10 | UMP_API_SERVER_WORKERS="${UMP_API_SERVER_WORKERS:-1}" 11 | echo "Running gunicorn with ${UMP_API_SERVER_WORKERS} workers." 
12 | # export PATH=$PATH:/home/python/.local/bin 13 | exec gunicorn --workers=$UMP_API_SERVER_WORKERS --bind=0.0.0.0:5000 ump.main:app 14 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "ump.fullname" . }} 5 | labels: 6 | {{- include "ump.labels" . | nindent 4 }} 7 | spec: 8 | type: {{ .Values.service.type }} 9 | ports: 10 | - port: {{ .Values.service.port }} 11 | targetPort: http 12 | protocol: TCP 13 | name: http 14 | selector: 15 | {{- include "ump.selectorLabels" . | nindent 4 }} 16 | -------------------------------------------------------------------------------- /charts/urban-model-platform/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.serviceAccount.create -}} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: {{ include "ump.serviceAccountName" . }} 6 | labels: 7 | {{- include "ump.labels" . | nindent 4 }} 8 | {{- with .Values.serviceAccount.annotations }} 9 | annotations: 10 | {{- toYaml . 
| nindent 4 }} 11 | {{- end }} 12 | automountServiceAccountToken: {{ .Values.serviceAccount.automount }} 13 | {{- end }} 14 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/tests/test-connection.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: "{{ include "ump.fullname" . }}-test-connection" 5 | labels: 6 | {{- include "ump.labels" . | nindent 4 }} 7 | annotations: 8 | "helm.sh/hook": test 9 | spec: 10 | containers: 11 | - name: wget 12 | image: busybox 13 | command: ['wget'] 14 | args: ['{{ include "ump.fullname" . }}:{{ .Values.service.port }}'] 15 | restartPolicy: Never 16 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/secret-keycloak.yaml: -------------------------------------------------------------------------------- 1 | {{- if not .Values.keycloakConnection.existingSecret.name -}} 2 | apiVersion: v1 3 | kind: Secret 4 | metadata: 5 | name: {{ include "ump.fullname" . }}-keycloak-connection 6 | labels: 7 | {{ include "ump.labels" . | nindent 4 }} 8 | data: 9 | UMP_KEYCLOAK_CLIENT_ID: "" 10 | UMP_KEYCLOAK_PASSWORD: "" 11 | UMP_KEYCLOAK_REALM: "" 12 | UMP_KEYCLOAK_URL: "aHR0cDovL2tleWNsb2FrOjgwODAvYXV0aA==" 13 | UMP_KEYCLOAK_USER: "" 14 | {{- end -}} -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/certificate.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.tls.enabled }} 2 | apiVersion: cert-manager.io/v1 3 | kind: Certificate 4 | metadata: 5 | name: {{ include "ump.fullname" . }} 6 | labels: 7 | {{- include "ump.labels" . | nindent 4 }} 8 | spec: 9 | secretName: {{ include "ump.issuer" . 
}}-tls 10 | dnsNames: 11 | - {{ .Values.tls.gateway.hostName }} 12 | issuerRef: 13 | name: {{ include "ump.issuer" . }} 14 | kind: Issuer 15 | group: cert-manager.io 16 | {{ end }} -------------------------------------------------------------------------------- /migrations/versions/1.0.0_add_user.py: -------------------------------------------------------------------------------- 1 | """Add user 2 | 3 | Revision ID: 1.0.0 4 | Revises: 5 | Create Date: 2024-08-20 08:13:59.521824 6 | 7 | """ 8 | from alembic import op 9 | from sqlalchemy import Column, String 10 | 11 | # revision identifiers, used by Alembic. 12 | revision = '1.0.0' 13 | down_revision = 'e4478f461de1' 14 | branch_labels = 'add_user' 15 | depends_on = None 16 | 17 | 18 | def upgrade(): 19 | op.add_column('jobs', Column('user_id', String())) 20 | 21 | def downgrade(): 22 | pass 23 | -------------------------------------------------------------------------------- /migrations/versions/1.0.7_add_version.py: -------------------------------------------------------------------------------- 1 | """Add model version 2 | 3 | Revision ID: 1.0.7 4 | Revises: 5 | Create Date: 2024-10-01 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import BigInteger, Column, String 11 | 12 | revision = "1.0.7" 13 | down_revision = "1.0.5" 14 | branch_labels = "add_version" 15 | depends_on = "1.0.5" 16 | 17 | def upgrade(): 18 | op.add_column('jobs', Column("process_version", String())) 19 | 20 | def downgrade(): 21 | op.drop_column('jobs', 'process_version') 22 | -------------------------------------------------------------------------------- /migrations/versions/1.0.8_drop_ensemble_id.py: -------------------------------------------------------------------------------- 1 | """Drop ensemble id 2 | 3 | Revision ID: 1.0.8 4 | Revises: 5 | Create Date: 2024-10-01 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import BigInteger, Column 11 | 12 | revision = "1.0.8" 13 | down_revision = 
"1.0.7" 14 | branch_labels = "drop_ensemble_id" 15 | depends_on = "1.0.7" 16 | 17 | def upgrade(): 18 | op.drop_column('jobs', 'ensemble_id') 19 | 20 | def downgrade(): 21 | op.add_column('jobs', Column('ensemble_id', BigInteger(), index = True)) 22 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | repos: 2 | - repo: https://github.com/pre-commit/pre-commit-hooks 3 | rev: "v4.4.0" 4 | hooks: 5 | - id: check-added-large-files 6 | - id: check-case-conflict 7 | - id: check-merge-conflict 8 | - id: check-symlinks 9 | - id: check-yaml 10 | - id: debug-statements 11 | - id: end-of-file-fixer 12 | - id: mixed-line-ending 13 | - id: name-tests-test 14 | args: ["--pytest-test-first"] 15 | - id: requirements-txt-fixer 16 | - id: trailing-whitespace -------------------------------------------------------------------------------- /migrations/versions/1.0.1_add_process_title_and_name.py: -------------------------------------------------------------------------------- 1 | """Add process title 2 | 3 | Revision ID: 1.0.1 4 | Revises: 5 | Create Date: 2024-08-29 11:14 6 | 7 | """ 8 | from alembic import op 9 | from sqlalchemy import Column, String 10 | 11 | # revision identifiers, used by Alembic. 
12 | revision = '1.0.1' 13 | down_revision = '1.0.0' 14 | branch_labels = 'add_process_title_and_name' 15 | depends_on = '1.0.0' 16 | 17 | 18 | def upgrade(): 19 | op.add_column('jobs', Column('process_title', String())) 20 | op.add_column('jobs', Column('name', String())) 21 | 22 | def downgrade(): 23 | pass 24 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/secret-geoserver.yaml: -------------------------------------------------------------------------------- 1 | {{- if not .Values.keycloakConnection.existingSecret.name -}} 2 | apiVersion: v1 3 | kind: Secret 4 | metadata: 5 | name: {{ include "ump.fullname" . }}-geoserver-connection 6 | labels: 7 | {{ include "ump.labels" . | nindent 4 }} 8 | data: 9 | UMP_GEOSERVER_URL: "aHR0cDovL2dlb3NlcnZlcjo4MDgw" 10 | UMP_GEOSERVER_DB_HOST: "" 11 | UMP_GEOSERVER_DB_PORT: "NTQzMg==" 12 | UMP_GEOSERVER_DB_NAME: "" 13 | UMP_GEOSERVER_DB_USER: "" 14 | UMP_GEOSERVER_DB_PASSWORD: "" 15 | UMP_GEOSERVER_WORKSPACE_NAME: "" 16 | UMP_GEOSERVER_USER: "" 17 | UMP_GEOSERVER_PASSWORD: "" 18 | UMP_GEOSERVER_CONNECTION_TIMEOUT: "MTA=" 19 | {{- end -}} -------------------------------------------------------------------------------- /.vscode/settings.json: -------------------------------------------------------------------------------- 1 | { 2 | 3 | "black-formatter.args": [ 4 | "--line-length", 5 | "88", 6 | "--preview" 7 | ], 8 | "[python]": { 9 | "editor.defaultFormatter": "charliermarsh.ruff", 10 | "editor.formatOnSave": true, 11 | "editor.codeActionsOnSave": { 12 | "source.organizeImports": "explicit" 13 | } 14 | }, 15 | "python.formatting.provider": "none", 16 | "isort.check": true, 17 | "isort.args": [ 18 | "--profile", 19 | "black" 20 | ], 21 | "ruff.importStrategy": "fromEnvironment", 22 | "black-formatter.importStrategy": "fromEnvironment", 23 | "python.analysis.typeCheckingMode": "basic", 24 | } 
-------------------------------------------------------------------------------- /migrations/versions/1.0.11_add_hash.py: -------------------------------------------------------------------------------- 1 | """Add hash 2 | 3 | Revision ID: 1.0.11 4 | Revises: 5 | Create Date: 2024-10-02 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import Column, String 11 | 12 | revision = "1.0.11" 13 | down_revision = "1.0.10" 14 | branch_labels = "add_hash" 15 | depends_on = "1.0.10" 16 | 17 | def upgrade(): 18 | op.add_column('jobs', Column('hash', String(), index = True)) 19 | op.execute('create extension pgcrypto') 20 | op.execute("update jobs set hash = encode(sha512((parameters :: text || process_version || user_id) :: bytea), 'base64')") 21 | 22 | def downgrade(): 23 | op.drop_column('jobs', 'hash') 24 | op.execute('drop extension pgcrypto') 25 | -------------------------------------------------------------------------------- /nginx/default.conf: -------------------------------------------------------------------------------- 1 | # vim:syntax=nginx 2 | 3 | server { 4 | listen 80 default_server; 5 | listen [::]:80 default_server; 6 | server_name _; 7 | 8 | 9 | # This is the internal DNS of Docker 10 | resolver 127.0.0.11; 11 | 12 | # Some default options for all requests 13 | client_max_body_size 32m; 14 | proxy_pass_request_headers on; 15 | 16 | location /check { 17 | add_header Content-Type text/plain; 18 | return 200 'gateway works'; 19 | } 20 | 21 | location / { 22 | proxy_set_header Host localhost:3000; 23 | proxy_pass http://api:5001/; 24 | } 25 | 26 | location /geoserver { 27 | proxy_set_header Host localhost:3000; 28 | proxy_pass http://geoserver:8080$request_uri; 29 | } 30 | } 31 | -------------------------------------------------------------------------------- /migrations/versions/1.0.4_remove_sampling_settings_from_ensemble.py: -------------------------------------------------------------------------------- 1 | """Remove sampling settings 
from ensemble 2 | 3 | Revision ID: 1.0.4 4 | Revises: 5 | Create Date: 2024-09-23 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import BigInteger, Column, String 11 | 12 | revision = "1.0.4" 13 | down_revision = "1.0.3" 14 | branch_labels = "remove_sampling_settings_from_ensemble" 15 | depends_on = "1.0.3" 16 | 17 | 18 | def upgrade(): 19 | op.drop_column("ensembles", "sample_size") 20 | op.drop_column("ensembles", "sampling_method") 21 | 22 | 23 | def downgrade(): 24 | op.add_column("ensembles", Column("sample_size", BigInteger())) 25 | op.add_column("ensembles", Column("sampling_method", String())) 26 | -------------------------------------------------------------------------------- /migrations/versions/1.0.5_add_ensemble_job_link.py: -------------------------------------------------------------------------------- 1 | """Add link between jobs and ensembles 2 | 3 | Revision ID: 1.0.5 4 | Revises: 5 | Create Date: 2024-09-27 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import BigInteger, Column, String 11 | 12 | revision = "1.0.5" 13 | down_revision = "1.0.4" 14 | branch_labels = "add_link_between_jobs_and_ensembles" 15 | depends_on = "1.0.4" 16 | 17 | def upgrade(): 18 | op.create_table( 19 | 'jobs_ensembles', 20 | Column('id', BigInteger(), primary_key = True), 21 | Column('ensemble_id', BigInteger(), index = True), 22 | Column('job_id', String(), index = True) 23 | ) 24 | 25 | def downgrade(): 26 | op.drop_table( 27 | 'jobs_ensembles' 28 | ) 29 | -------------------------------------------------------------------------------- /migrations/script.py.mako: -------------------------------------------------------------------------------- 1 | """${message} 2 | 3 | Revision ID: ${up_revision} 4 | Revises: ${down_revision | comma,n} 5 | Create Date: ${create_date} 6 | 7 | """ 8 | from typing import Sequence, Union 9 | 10 | from alembic import op 11 | import sqlalchemy as sa 12 | ${imports if imports else ""} 13 | 14 | # 
revision identifiers, used by Alembic. 15 | revision: str = ${repr(up_revision)} 16 | down_revision: Union[str, None] = ${repr(down_revision)} 17 | branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} 18 | depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} 19 | 20 | 21 | def upgrade() -> None: 22 | ${upgrades if upgrades else "pass"} 23 | 24 | 25 | def downgrade() -> None: 26 | ${downgrades if downgrades else "pass"} 27 | -------------------------------------------------------------------------------- /src/ump/api/models/job_comments.py: -------------------------------------------------------------------------------- 1 | """Comments for jobs.""" 2 | from datetime import datetime 3 | 4 | from sqlalchemy import DateTime, String 5 | from sqlalchemy.orm import Mapped, declarative_base, mapped_column 6 | from sqlalchemy_serializer import SerializerMixin 7 | 8 | Base = declarative_base() 9 | 10 | class JobComment(Base, SerializerMixin): 11 | """Comments for jobs.""" 12 | __tablename__ = "job_comments" 13 | 14 | id: Mapped[int] = mapped_column(primary_key=True) 15 | user_id: Mapped[str] = mapped_column(String()) 16 | job_id: Mapped[str] = mapped_column(String()) 17 | comment: Mapped[str] = mapped_column(String()) 18 | created: Mapped[datetime] = mapped_column(DateTime()) 19 | modified: Mapped[datetime] = mapped_column(DateTime()) 20 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/httproute.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.tls.enabled }} 2 | apiVersion: gateway.networking.k8s.io/v1 3 | kind: HTTPRoute 4 | metadata: 5 | name: ump-api-http 6 | labels: 7 | {{- include "ump.labels" . 
| nindent 4 }} 8 | spec: 9 | parentRefs: 10 | - name: {{ .Values.tls.gateway.name }} 11 | sectionName: {{ .Values.tls.gateway.sectionName }} 12 | group: gateway.networking.k8s.io 13 | kind: Gateway 14 | hostnames: 15 | - {{ .Values.tls.gateway.hostName }} 16 | rules: 17 | - matches: 18 | - path: 19 | type: PathPrefix 20 | value: / 21 | backendRefs: 22 | - name: {{ include "ump.fullname" . }} 23 | port: {{ .Values.service.port }} 24 | kind: Service 25 | {{- end }} -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/issuer-prod.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.tls.enabled }} 2 | {{ if .Values.tls.issuer.prodEnabled }} 3 | apiVersion: cert-manager.io/v1 4 | kind: Issuer 5 | metadata: 6 | name: {{ include "ump.fullname" . }}-le-prod 7 | labels: 8 | {{- include "ump.labels" . | nindent 4 }} 9 | spec: 10 | acme: 11 | server: https://acme-v02.api.letsencrypt.org/directory 12 | email: analytics@gv.hamburg.de 13 | privateKeySecretRef: 14 | name: {{ include "ump.fullname" . 
}}-le-prod 15 | solvers: 16 | - http01: 17 | gatewayHTTPRoute: 18 | parentRefs: 19 | - name: {{ .Values.tls.gateway.name }} 20 | kind: "Gateway" 21 | group: "gateway.networking.k8s.io" 22 | {{- end }} 23 | {{- end -}} -------------------------------------------------------------------------------- /migrations/versions/1.0.9_create_job_comments.py: -------------------------------------------------------------------------------- 1 | """Create job comments 2 | 3 | Revision ID: 1.0.9 4 | Revises: 1.0.8 5 | Create Date: 2024-10-01 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import BigInteger, Column, DateTime, String 11 | 12 | revision = "1.0.9" 13 | down_revision = "1.0.8" 14 | branch_labels = "create_job_comments" 15 | depends_on = "1.0.8" 16 | 17 | def upgrade(): 18 | op.create_table( 19 | 'job_comments', 20 | Column('id', BigInteger(), primary_key = True), 21 | Column('user_id', String(), index = True), 22 | Column('job_id', String(), index = True), 23 | Column('comment', String()), 24 | Column("created", DateTime()), 25 | Column("modified", DateTime()) 26 | ) 27 | 28 | def downgrade(): 29 | op.drop_table('job_comments') 30 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/configmap-settings.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: {{ include "ump.fullname" . }}-settings 5 | labels: 6 | {{- include "ump.labels" .
| nindent 4 }} 7 | data: 8 | UMP_LOG_LEVEL: {{ .Values.config.logLevel | quote }} 9 | UMP_PROVIDERS_FILE: {{ .Values.config.providersFilePath | quote }} 10 | UMP_API_SERVER_URL: {{ .Values.config.apiServerUrl | quote }} 11 | UMP_API_SERVER_URL_PREFIX: {{ .Values.config.apiServerUrlPrefix | quote }} 12 | UMP_REMOTE_JOB_STATUS_REQUEST_INTERVAL: {{ .Values.config.remoteJobStatusRequestInterval | quote }} 13 | UMP_JOB_DELETE_INTERVAL: {{ .Values.config.jobDeleteInterval | quote }} 14 | UMP_API_SERVER_WORKERS: {{ .Values.config.apiServerWorkers | quote }} 15 | UMP_SERVER_TIMEOUT: {{ .Values.config.serverWorkerTimeout | quote }} 16 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/issuer-staging.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.tls.enabled }} 2 | {{ if not .Values.tls.issuer.prodEnabled }} 3 | apiVersion: cert-manager.io/v1 4 | kind: Issuer 5 | metadata: 6 | name: {{ include "ump.fullname" . }}-le-staging 7 | labels: 8 | {{- include "ump.labels" . | nindent 4 }} 9 | spec: 10 | acme: 11 | server: https://acme-staging-v02.api.letsencrypt.org/directory 12 | email: analytics@gv.hamburg.de 13 | privateKeySecretRef: 14 | name: {{ include "ump.fullname" . }}-le-staging 15 | solvers: 16 | - http01: 17 | gatewayHTTPRoute: 18 | parentRefs: 19 | - name: {{ .Values.tls.gateway.name }} 20 | namespace: ump-api 21 | kind: "Gateway" 22 | group: "gateway.networking.k8s.io" 23 | {{- end }} 24 | {{- end -}} -------------------------------------------------------------------------------- /providers.yaml.example: -------------------------------------------------------------------------------- 1 | # This is the configuration file for setting up simulation servers. 
2 | # The servers must provide an OGC API Processes endpoint, from which the available processes will be retrieved. 3 | # Only processes listed in this configuration file will appear on the UMP. 4 | # Individual processes can also be excluded manually by providing the "exclude" attribute. 5 | 6 | modelserver: 7 | name: example 8 | url: "http://modelserver:5000" 9 | authentication: 10 | type: "BasicAuth" 11 | user: "user" 12 | password: "password" 13 | timeout: 1800 14 | processes: 15 | hello-world: 16 | result-storage: "remote" 17 | anonymous-access: true 18 | squareroot: 19 | result-storage: "remote" 20 | anonymous-access: false 21 | hello-geo-world: 22 | result-storage: "geoserver" 23 | anonymous-access: true -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/httproute-tls.yaml: -------------------------------------------------------------------------------- 1 | # HTTPS Route 2 | {{- if .Values.tls.enabled }} 3 | apiVersion: gateway.networking.k8s.io/v1 4 | kind: HTTPRoute 5 | metadata: 6 | name: ump-api-https 7 | labels: 8 | {{- include "ump.labels" . | nindent 4 }} 9 | spec: 10 | parentRefs: 11 | - name: {{ .Values.tls.gateway.name }} 12 | sectionName: {{ .Values.tls.gateway.tlsSectionName }} 13 | group: gateway.networking.k8s.io 14 | kind: Gateway 15 | hostnames: 16 | - {{ .Values.tls.gateway.hostName }} 17 | rules: 18 | - matches: 19 | - path: 20 | type: PathPrefix 21 | value: / 22 | backendRefs: 23 | - name: {{ include "ump.fullname" .
}} 24 | port: {{ .Values.service.port }} 25 | kind: Service 26 | group: '' 27 | weight: 1 28 | {{- end -}} -------------------------------------------------------------------------------- /.copier-answers.yml: -------------------------------------------------------------------------------- 1 | # Changes here will be overwritten by Copier; NEVER EDIT MANUALLY 2 | _commit: v0.0.29 3 | _src_path: git+https://StefanSchuhart@bitbucket.org/geowerkstatt-hamburg/python_project_template 4 | author_email: stefan.schuhart@gv.hamburg.de 5 | author_fullname: Stefan Schuhart 6 | author_git_username: StefanSchuhart 7 | create_global_environment: false 8 | create_project_environment: true 9 | docker_image_distribution: bookworm 10 | initial_commit: true 11 | is_datascience_project: false 12 | package_manager: poetry 13 | project_description: server federation api, OGC Api Processes-based to connect model 14 | servers and centralize access to them 15 | project_name: urban-model-platform 16 | python_package_distribution_name: urban-model-platform 17 | python_package_import_name: urban_model_platform 18 | python_version: '3.11' 19 | repository_namespace: geowerkstatt-hamburg 20 | repository_provider: github.com -------------------------------------------------------------------------------- /src/ump/api/routes/users.py: -------------------------------------------------------------------------------- 1 | """Endpoints to access user information via keycloak""" 2 | 3 | import json 4 | 5 | from apiflask import APIBlueprint 6 | from flask import Response, g 7 | 8 | from ump.api.keycloak_utils import get_user_details 9 | 10 | users = APIBlueprint("users", __name__) 11 | 12 | 13 | @users.route("/<user_id>/details", methods=["GET"]) 14 | def index(user_id=None): 15 | "Retrieve user name by user id" 16 | auth = g.get("auth_token") 17 | if auth is None: 18 | return Response(mimetype="application/json", status=401) 19 | 20 | details = get_user_details(user_id) 21 | 22 | response_data = {
"user_id": user_id, 24 | "username": details['username'], 25 | "firstName": details['firstName'], 26 | "lastName": details['lastName'], 27 | "email": details['email'], 28 | } 29 | 30 | return Response(json.dumps(response_data), mimetype="application/json") 31 | -------------------------------------------------------------------------------- /migrations/versions/1.0.10_add_share_tables.py: -------------------------------------------------------------------------------- 1 | """Add share tables 2 | 3 | Revision ID: 1.0.10 4 | Revises: 5 | Create Date: 2024-10-02 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import BigInteger, Column, String 11 | 12 | revision = "1.0.10" 13 | down_revision = "1.0.9" 14 | branch_labels = "add_share_tables" 15 | depends_on = "1.0.9" 16 | 17 | def upgrade(): 18 | op.create_table( 19 | 'jobs_users', 20 | Column('id', BigInteger(), primary_key = True), 21 | Column('job_id', String(), index = True), 22 | Column('user_id', String(), index = True), 23 | ) 24 | op.create_table( 25 | 'ensembles_users', 26 | Column('id', BigInteger(), primary_key = True), 27 | Column('ensemble_id', BigInteger(), index = True), 28 | Column('user_id', String(), index = True), 29 | ) 30 | 31 | def downgrade(): 32 | op.drop_table('jobs_users') 33 | op.drop_table('ensembles_users') 34 | -------------------------------------------------------------------------------- /src/ump/errors.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import traceback 3 | 4 | from ump.api.models.ogc_exception import OGCExceptionResponse 5 | 6 | 7 | class CustomException(Exception): 8 | status_code = 400 9 | 10 | def __init__(self, message, status_code=None, payload=None): 11 | Exception.__init__(self) 12 | self.message = message 13 | if status_code is not None: 14 | self.status_code = status_code 15 | 16 | self.payload = payload 17 | logging.error("%s: %s", type(self).__name__, self.message) 18 | 
traceback.print_exc() 19 | 20 | def to_dict(self): 21 | rv = dict(self.payload or ()) 22 | rv['error_message'] = self.message 23 | return rv 24 | 25 | def __str__(self) -> str: 26 | return str(self.to_dict()) 27 | 28 | class InvalidUsage(CustomException): 29 | pass 30 | 31 | class GeoserverException(CustomException): 32 | pass 33 | 34 | class OGCProcessException(Exception): 35 | def __init__(self, response: OGCExceptionResponse): 36 | self.response = response -------------------------------------------------------------------------------- /src/ump/api/keycloak_utils.py: -------------------------------------------------------------------------------- 1 | """Keycloak helper functions""" 2 | 3 | from keycloak import KeycloakAdmin, KeycloakOpenIDConnection 4 | 5 | from ump.config import app_settings as config 6 | 7 | keycloak_connection = KeycloakOpenIDConnection( 8 | server_url=str(config.UMP_KEYCLOAK_URL), 9 | username=f"{config.UMP_KEYCLOAK_USER}", 10 | password=f"{config.UMP_KEYCLOAK_PASSWORD.get_secret_value()}", 11 | realm_name="master", 12 | user_realm_name="master", 13 | client_id="admin-cli", 14 | verify=True, 15 | ) 16 | 17 | keycloak_admin = KeycloakAdmin(connection=keycloak_connection) 18 | keycloak_admin.change_current_realm(f"{config.UMP_KEYCLOAK_REALM}") 19 | 20 | 21 | def find_user_id_by_email(email): 22 | """Retrieves a user id by email""" 23 | users = keycloak_admin.get_users({"email": email}) 24 | for user in users: 25 | if user["email"] == email: 26 | return user["id"] 27 | return None 28 | 29 | 30 | def get_user_details(user_id): 31 | """Retrieve the user details by user id""" 32 | user = keycloak_admin.get_user(user_id) 33 | return user 34 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/hpa.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.autoscaling.enabled }} 2 | apiVersion: autoscaling/v2 3 | kind: 
HorizontalPodAutoscaler 4 | metadata: 5 | name: {{ include "ump.fullname" . }} 6 | labels: 7 | {{- include "ump.labels" . | nindent 4 }} 8 | spec: 9 | scaleTargetRef: 10 | apiVersion: apps/v1 11 | kind: Deployment 12 | name: {{ include "ump.fullname" . }} 13 | minReplicas: {{ .Values.autoscaling.minReplicas }} 14 | maxReplicas: {{ .Values.autoscaling.maxReplicas }} 15 | metrics: 16 | {{- if .Values.autoscaling.targetCPUUtilizationPercentage }} 17 | - type: Resource 18 | resource: 19 | name: cpu 20 | target: 21 | type: Utilization 22 | averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} 23 | {{- end }} 24 | {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }} 25 | - type: Resource 26 | resource: 27 | name: memory 28 | target: 29 | type: Utilization 30 | averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }} 31 | {{- end }} 32 | {{- end }} 33 | -------------------------------------------------------------------------------- /migrations/versions/1.0.3_extend_ensembles.py: -------------------------------------------------------------------------------- 1 | """Extend ensembles 2 | 3 | Revision ID: 1.0.3 4 | Revises: 1.0.2 5 | Create Date: 2024-09-23 14:00 6 | 7 | """ 8 | 9 | from alembic import op 10 | from sqlalchemy import BigInteger, Column, DateTime, String 11 | 12 | revision = "1.0.3" 13 | down_revision = "1.0.2" 14 | branch_labels = "extend_ensembles" 15 | depends_on = "1.0.2" 16 | 17 | 18 | def upgrade(): 19 | op.add_column("ensembles", Column("sample_size", BigInteger())) 20 | op.add_column("ensembles", Column("sampling_method", String())) 21 | op.add_column("ensembles", Column("created", DateTime())) 22 | op.add_column("ensembles", Column("modified", DateTime())) 23 | 24 | op.add_column("ensemble_comments", Column("created", DateTime())) 25 | op.add_column("ensemble_comments", Column("modified", DateTime())) 26 | 27 | 28 | def downgrade(): 29 | op.drop_column("ensembles", "sample_size") 30 |
op.drop_column("ensembles", "sampling_method") 31 | op.drop_column("ensembles", "created") 32 | op.drop_column("ensembles", "modified") 33 | 34 | op.drop_column("ensemble_comments", "created") 35 | op.drop_column("ensemble_comments", "modified") 36 | -------------------------------------------------------------------------------- /docs/content/03-architecture/geoserver.md: -------------------------------------------------------------------------------- 1 | (GeoServer)= 2 | # GeoServer 3 | 4 | 5 | ```{warning} 6 | Currently, the GeoServer is not integrated with Keycloak. This means that if you configure a process with `result-storage: geoserver`, the results will be publicly accessible without authentication. 7 | ``` 8 | 9 | All processes that are configured with `result-storage: geoserver` will be stored in a GeoServer instance. The results can be visualized on a map using the respective WFS and WMS layers. The GeoServer is configured to use the same database as the Urban Model Platform, so the results will be stored in the same database as the other data. 10 | 11 | ```{seealso} 12 | See the [Docker Configuration User Guide](dockerconfiguration) for more information on the environment variables used to configure the GeoServer. 13 | ``` 14 | 15 | ## Result Storage 16 | 17 | Once the results of a process with the `result-storage: geoserver` configuration are available, the UMP will try to store the results in a GeoServer instance. The results will be stored in a GeoServer layer which are named by their job ids with the prefix "job-". The layer will be created in the workspace specified in the environment variables. 
-------------------------------------------------------------------------------- /migrations/versions/1.0.2_add_ensembles.py: -------------------------------------------------------------------------------- 1 | """Add ensembles 2 | 3 | Revision ID: 1.0.2 4 | Revises: 1.0.1 5 | Create Date: 2024-09-17 11:14 6 | 7 | """ 8 | from alembic import op 9 | from sqlalchemy import Column, String, BigInteger 10 | from sqlalchemy.dialects.postgresql import JSONB 11 | 12 | revision = '1.0.2' 13 | down_revision = '1.0.1' 14 | branch_labels = 'add_ensembles' 15 | depends_on = '1.0.1' 16 | 17 | 18 | def upgrade(): 19 | op.create_table( 20 | 'ensembles', 21 | Column('id', BigInteger(), primary_key = True), 22 | Column('name', String()), 23 | Column('description', String()), 24 | Column('user_id', String(), index = True), 25 | Column('scenario_configs', JSONB()) 26 | ) 27 | op.create_table( 28 | 'ensemble_comments', 29 | Column('id', BigInteger(), primary_key = True), 30 | Column('user_id', String(), index = True), 31 | Column('ensemble_id', BigInteger(), index = True), 32 | Column('comment', String()) 33 | ) 34 | op.add_column('jobs', Column('ensemble_id', BigInteger(), index = True)) 35 | 36 | def downgrade(): 37 | op.drop_table('ensemble_comments') 38 | op.drop_table('ensembles') 39 | op.drop_column('jobs', 'ensemble_id') 40 | -------------------------------------------------------------------------------- /migrations/alembic.ini: -------------------------------------------------------------------------------- 1 | # A generic, single database configuration. 2 | 3 | [alembic] 4 | # template used to generate migration files 5 | # file_template = %%(rev)s_%%(slug)s 6 | 7 | # set to 'true' to run the environment during 8 | # the 'revision' command, regardless of autogenerate 9 | # revision_environment = false 10 | script_location = migrations 11 | 12 | # sys.path path, will be prepended to sys.path if present. 13 | # defaults to the current working directory. 14 | prepend_sys_path = .
15 | 16 | # Logging configuration 17 | [loggers] 18 | keys = root,sqlalchemy,alembic,flask_migrate 19 | 20 | [handlers] 21 | keys = console 22 | 23 | [formatters] 24 | keys = generic 25 | 26 | [logger_root] 27 | level = DEBUG 28 | handlers = console 29 | qualname = 30 | 31 | [logger_sqlalchemy] 32 | level = DEBUG 33 | handlers = 34 | qualname = sqlalchemy.engine 35 | 36 | [logger_alembic] 37 | level = DEBUG 38 | handlers = 39 | qualname = alembic 40 | 41 | [logger_flask_migrate] 42 | level = DEBUG 43 | handlers = 44 | qualname = flask_migrate 45 | 46 | [handler_console] 47 | class = StreamHandler 48 | args = (sys.stdout,) 49 | level = DEBUG 50 | formatter = generic 51 | 52 | [formatter_generic] 53 | format = %(levelname)-5.5s [%(name)s] %(message)s 54 | datefmt = %H:%M:%S 55 | -------------------------------------------------------------------------------- /docs/_toc.yml: -------------------------------------------------------------------------------- 1 | # Table of contents 2 | # Learn more at https://jupyterbook.org/customize/toc.html 3 | 4 | format: jb-book 5 | root: content/index.md 6 | parts: 7 | - caption: Getting Started 8 | chapters: 9 | - file: content/01-intro/intro.md 10 | - file: content/01-intro/quick_start.md 11 | - file: content/01-intro/authors.md 12 | 13 | - caption: User Guide 14 | chapters: 15 | - file: content/02-user_guide/setup.md 16 | - file: content/02-user_guide/provider-configuration.md 17 | - file: content/02-user_guide/deployment.md 18 | # sections: 19 | # - file: start/overview 20 | 21 | - caption: Architecture 22 | chapters: 23 | - file: content/03-architecture/overview.md 24 | - file: content/03-architecture/api.md 25 | - file: content/03-architecture/keycloak.md 26 | - file: content/03-architecture/geoserver.md 27 | # sections: 28 | # - file: start/overview 29 | 30 | - caption: Developing 31 | chapters: 32 | - title: Contributing 33 | file: content/04-developing/contributing.md 34 | - title: Changelog 35 | file: 
content/04-developing/changelog.md 36 | - title: API reference 37 | file: autoapi/index # this directory is virtual! 38 | 39 | 40 | -------------------------------------------------------------------------------- /nginx/default-local.conf: -------------------------------------------------------------------------------- 1 | # vim:syntax=nginx 2 | 3 | server { 4 | listen 80 default_server; 5 | listen [::]:80 default_server; 6 | server_name _; 7 | 8 | # This is the internal DNS of Docker 9 | resolver 127.0.0.11; 10 | 11 | # Some default options for all requests 12 | client_max_body_size 32m; 13 | proxy_pass_request_headers on; 14 | 15 | location /check { 16 | add_header Content-Type text/plain; 17 | return 200 'gateway works'; 18 | } 19 | 20 | location / { 21 | proxy_pass http://api:5000; 22 | } 23 | 24 | location /geoserver { 25 | proxy_set_header Host $host; 26 | proxy_set_header X-Real-IP $remote_addr; 27 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 28 | proxy_set_header X-Forwarded-Proto $scheme; 29 | proxy_set_header X-Forwarded-Host $host; 30 | proxy_set_header X-Forwarded-Port $server_port; 31 | proxy_pass http://geoserver:8080; 32 | } 33 | 34 | location /auth { 35 | proxy_set_header Host $host; 36 | proxy_set_header X-Real-IP $remote_addr; 37 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 38 | proxy_set_header X-Forwarded-Proto $scheme; 39 | proxy_set_header X-Forwarded-Host $host; 40 | proxy_set_header X-Forwarded-Port $server_port; 41 | proxy_pass http://keycloak:8080; 42 | } 43 | } 44 | -------------------------------------------------------------------------------- /docs/content/03-architecture/overview.md: -------------------------------------------------------------------------------- 1 | (architecture-overview)= 2 | # Overview 3 | 4 | The Urban Model Platform is essentially a server federation API that connects multiple model servers and provides a single access point to them. 
It is a middleware between the model servers and the clients. 5 | 6 | ![Architecture Overview](../../Architecture-Overview.png) 7 | 8 | Its main components are: 9 | 10 | - **Flask API**: The main entry point of the Urban Model Platform. It handles incoming requests and routes them to the appropriate model server using the OGC API Processes standard. [Learn more](API) 11 | - **Keycloak**: With Keycloak, the Urban Model Platform provides authentication and authorization for users. It allows users to log in and access the platform securely, and platform administrators to manage user roles and permissions. [Learn more](Keycloak) 12 | - **GeoServer**: The Urban Model Platform can be configured to store results in a GeoServer instance. This allows users to visualize the results of their simulations on a map with the respective WFS and WMS layers. [Learn more](GeoServer) 13 | 14 | Additionally, you need a **PostgreSQL** database to store jobs for the UMP and GeoServer layers (provided you configured "geoserver" as result storage). 15 | 16 | To learn how to properly configure providers, check out the [User Guide](providers) section. 17 | -------------------------------------------------------------------------------- /charts/urban-model-platform/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: urban-model-platform 3 | description: Urban Model Platform 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed.
13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | # Versions are expected to follow Semantic Versioning (https://semver.org/) 18 | version: 0.9.5 19 | 20 | # This is the version number of the application being deployed. This version number should be 21 | # incremented each time you make changes to the application. Versions are not expected to 22 | # follow Semantic Versioning. They should reflect the version the application is using. 23 | # It is recommended to use it with quotes. 24 | appVersion: "2.1.0" 25 | kubeVersion: ">= 1.31.0" 26 | annotations: 27 | category: API 28 | licence: GNU GENERAL PUBLIC LICENSE v3 -------------------------------------------------------------------------------- /.vscode/tasks.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "2.0.0", 3 | "tasks": [{ 4 | "label": "start-postgis-container", 5 | "type": "shell", 6 | "command": "docker", 7 | "args": [ 8 | "compose", 9 | "-f", 10 | "docker-compose-dev.yaml", 11 | "up", 12 | "-d", 13 | "api-db", 14 | ], 15 | },{ 16 | "label": "stop-postgis-container", 17 | "type": "shell", 18 | "command": "docker", 19 | "args": [ 20 | "compose", 21 | "-f", 22 | "docker-compose-dev.yaml", 23 | "rm", 24 | "-s", 25 | "api-db", 26 | "--force" 27 | ], 28 | },{ 29 | "label":"run-db-migrations", 30 | "type": "shell", 31 | "command": "flask", 32 | "args": [ 33 | "db", 34 | "upgrade" 35 | ], 36 | "options": { 37 | "env": { 38 | "FLASK_APP": "src/ump/main.py", 39 | "FLASK_ENV": "development" 40 | } 41 | }, 42 | "dependsOn": [ 43 | "start-postgis-container" 44 | ], 45 | },{ 46 | "label": "setup-environment", 47 | "dependsOn": [ 48 | "start-postgis-container", 49 | "run-db-migrations" 50 | ] 51 | 52 | }] 53 | } -------------------------------------------------------------------------------- 
/migrations/versions/e4478f461de1_create_jobs_table_initial.py: -------------------------------------------------------------------------------- 1 | """create_jobs_table_initial 2 | 3 | Revision ID: e4478f461de1 4 | Revises: 1.0.11 5 | Create Date: 2025-02-07 08:39:56.443236 6 | 7 | """ 8 | from typing import Sequence, Union 9 | 10 | from alembic import op 11 | import sqlalchemy as sa 12 | 13 | 14 | # revision identifiers, used by Alembic. 15 | revision: str = 'e4478f461de1' 16 | down_revision: Union[str, None] = None 17 | branch_labels: Union[str, Sequence[str], None] = None 18 | depends_on: Union[str, Sequence[str], None] = None 19 | 20 | def upgrade(): 21 | op.create_table( 22 | 'jobs', 23 | sa.Column('process_id', sa.String(80)), 24 | sa.Column('job_id', sa.String(80), primary_key=True), 25 | sa.Column('remote_job_id', sa.String(80)), 26 | sa.Column('provider_prefix', sa.String(80)), 27 | sa.Column('provider_url', sa.String(80)), 28 | sa.Column('status', sa.Enum('accepted', 'running', 'successful', 'failed', 'dismissed', name='status')), 29 | sa.Column('message', sa.String), 30 | sa.Column('created', sa.DateTime), 31 | sa.Column('started', sa.DateTime), 32 | sa.Column('finished', sa.DateTime), 33 | sa.Column('updated', sa.DateTime), 34 | sa.Column('progress', sa.Integer), 35 | sa.Column('parameters', sa.JSON), 36 | sa.Column('results_metadata', sa.JSON) 37 | ) 38 | 39 | def downgrade(): 40 | op.drop_table('jobs') -------------------------------------------------------------------------------- /.env.example: -------------------------------------------------------------------------------- 1 | #---- App settings ---- 2 | # The API_SERVER_URL is only used to return the complete URL in the result of the job details as specified in OGC. 3 | # Should be the base url to the api. 
4 | UMP_SERVER_TIMEOUT=30 5 | UMP_LOG_LEVEL=DEBUG 6 | UMP_PROVIDERS_FILE=providers.yaml 7 | UMP_API_SERVER_URL=localhost:5000 8 | UMP_API_SERVER_URL_PREFIX=/api 9 | UMP_REMOTE_JOB_STATUS_REQUEST_INTERVAL=5 10 | UMP_DATABASE_NAME=ump 11 | UMP_DATABASE_HOST=localhost 12 | UMP_DATABASE_PORT=5433 13 | UMP_DATABASE_USER=ump 14 | UMP_DATABASE_PASSWORD=ump 15 | UMP_GEOSERVER_URL=http://geoserver:8080/geoserver 16 | UMP_GEOSERVER_DB_HOST=localhost 17 | UMP_GEOSERVER_DB_PORT=5432 18 | UMP_GEOSERVER_WORKSPACE_NAME=UMP 19 | UMP_GEOSERVER_USER=admin 20 | UMP_GEOSERVER_PASSWORD=geoserver 21 | # seconds: 22 | UMP_GEOSERVER_CONNECTION_TIMEOUT=60 23 | # minutes: 24 | UMP_JOB_DELETE_INTERVAL=240 25 | UMP_KEYCLOAK_URL=http://keycloak:8080/auth 26 | UMP_KEYCLOAK_REALM=UrbanModelPlatform 27 | UMP_KEYCLOAK_CLIENT_ID=ump-client 28 | UMP_KEYCLOAK_USER=ump 29 | UMP_KEYCLOAK_PASSWORD=ump 30 | 31 | #---- example modelserver settings 32 | PYGEOAPI_SERVER_HOST=localhost 33 | PYGEOAPI_SERVER_PORT_INTERNAL=5000 34 | PYGEOAPI_SERVER_PORT_EXTERNAL=5005 35 | 36 | #---- docker dev environment settings ---- 37 | DOCKER_NETWORK=ump_dev 38 | WEBAPP_PORT_EXTERNAL=5003 39 | API_DB_PORT_EXTERNAL=5433 40 | GEOSERVER_PORT_EXTERNAL=8181 41 | KEYCLOAK_PORT_EXTERNAL=8282 42 | 43 | #---- Docker build settings ---- 44 | CONTAINER_REGISTRY=registry.io 45 | CONTAINER_NAMESPACE=namespace 46 | IMAGE_NAME=urban-model-platform 47 | IMAGE_TAG=1.1.0 48 | -------------------------------------------------------------------------------- /.github/workflows/on-chart-release.yaml: -------------------------------------------------------------------------------- 1 | name: Release Helm Chart 2 | 3 | on: 4 | push: 5 | tags: 6 | - 'chart-v*' 7 | 8 | env: 9 | HELM_REPO_URL: https://api.bitbucket.org/2.0/repositories/geowerkstatt-hamburg/urban-model-platform-helm-charts/src/main 10 | HELM_CHART_PATH: ./charts/urban-model-platform 11 | 12 | jobs: 13 | release: 14 | runs-on: ubuntu-latest 15 | steps: 16 | - name: Checkout source 17 | 
uses: actions/checkout@v3 18 | 19 | - name: Setup Helm 20 | uses: azure/setup-helm@v3 21 | with: 22 | version: v3.12.0 23 | 24 | - name: Lint Chart 25 | run: helm lint $HELM_CHART_PATH 26 | 27 | - name: Package Chart 28 | run: | 29 | helm package $HELM_CHART_PATH 30 | 31 | - name: Checkout helm repo 32 | run: git clone https://x-token-auth:${{ secrets.BB_ACCESS_TOKEN }}@bitbucket.org/geowerkstatt-hamburg/urban-model-platform-helm-charts.git helm-repo 33 | 34 | - name: Update Helm repo index 35 | run: | 36 | cp *.tgz helm-repo/ 37 | cd helm-repo 38 | if [ -f index.yaml ]; then 39 | helm repo index . --url $HELM_REPO_URL --merge index.yaml 40 | else 41 | helm repo index . --url $HELM_REPO_URL 42 | fi 43 | 44 | - name: Push changes 45 | run: | 46 | cd helm-repo 47 | git config user.name "GitHub Actions Bot" 48 | git config user.email "actions@github.com" 49 | git add . 50 | git commit -m "Update helm repository" 51 | git push origin main -------------------------------------------------------------------------------- /.github/workflows/on-release.yaml: -------------------------------------------------------------------------------- 1 | name: Publish Docker image 2 | 3 | on: 4 | push: 5 | branches: ['dev', 'main'] 6 | release: 7 | types: [published] 8 | 9 | env: 10 | REGISTRY: ghcr.io 11 | IMAGE_NAME: ${{ github.repository }} 12 | 13 | jobs: 14 | push_to_registry: 15 | name: Push Docker image to the GitHub Container Registry 16 | runs-on: ubuntu-latest 17 | permissions: 18 | packages: write 19 | contents: read 20 | attestations: write 21 | id-token: write 22 | steps: 23 | - name: Check out the repo 24 | uses: actions/checkout@v4 25 | 26 | - name: Log in to the Container registry 27 | uses: docker/login-action@v3 28 | with: 29 | registry: ${{ env.REGISTRY }} 30 | username: ${{ github.actor }} 31 | password: ${{ secrets.GITHUB_TOKEN }} 32 | 33 | - name: Extract metadata (tags, labels) for Docker 34 | id: meta 35 | uses: docker/metadata-action@v5 36 | with: 37 | images: ${{ env.REGISTRY
}}/${{ env.IMAGE_NAME }} 38 | 39 | - name: Build and push Docker image 40 | id: push 41 | uses: docker/build-push-action@v6 42 | with: 43 | context: . 44 | file: ./Dockerfile 45 | push: true 46 | tags: ${{ steps.meta.outputs.tags }} 47 | labels: ${{ steps.meta.outputs.labels }} 48 | 49 | - name: Generate artifact attestation 50 | uses: actions/attest-build-provenance@v1 51 | with: 52 | subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME}} 53 | subject-digest: ${{ steps.push.outputs.digest }} 54 | push-to-registry: true 55 | -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | {{- if .Values.tls.enabled }} 2 | Your application can be accessed via HTTPS: 3 | {{- if .Values.tls.hostname }} 4 | https://{{ .Values.tls.hostname }} 5 | {{- else }} 6 | WARNING: No hostname configured. Please set .Values.tls.hostname 7 | {{- end }} 8 | 9 | TLS is enabled with: 10 | - Certificate issuer: {{ .Values.tls.issuer.name }} 11 | - Certificate type: {{ .Values.tls.issuer.kind }} 12 | {{- else }} 13 | Your application can be accessed within the cluster at: 14 | http://{{ include "ump.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }} 15 | 16 | To access the application from outside the cluster: 17 | 18 | 1. Get the application URL by running these commands: 19 | {{- if contains "NodePort" .Values.service.type }} 20 | export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "ump.fullname" . }}) 21 | export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") 22 | echo http://$NODE_IP:$NODE_PORT 23 | {{- else if contains "LoadBalancer" .Values.service.type }} 24 | NOTE: It may take a few minutes for the LoadBalancer IP to be available. 
25 | You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "ump.fullname" . }}' 26 | export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "ump.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") 27 | echo http://$SERVICE_IP:{{ .Values.service.port }} 28 | {{- end }} 29 | {{- end }} -------------------------------------------------------------------------------- /docs/references.bib: -------------------------------------------------------------------------------- 1 | @article{fischer2021urban, 2 | title={Urban Data Platform Hamburg: Integration von Echtzeit IoT-Daten mittels SensorThings API}, 3 | author={Fischer, Michael and Gras, Pierre and L{\"o}wa, Sonja and Schuhart, Stefan}, 4 | journal={ZfV-Zeitschrift f{\"u}r Geod{\"a}sie, Geoinformation und Landmanagement}, 5 | number={zfv 1/2021}, 6 | year={2021} 7 | } 8 | 9 | @article{schubbe2023urbane, 10 | title={{Urbane Digitale Zwillinge als Baukastensystem: Ein Konzept aus dem Projekt Connected Urban Twins (CUT)}}, 11 | author={Schubbe, Nicole and Boedecker, Mathias and Moshrefzadeh, Mandana and Dietrich, Jana and Mohl, Markus and Brink, Marina and Reinecke, Nora and Tegtmeyer, Sascha and Gras, Pierre and others}, 12 | journal={ZfV-Zeitschrift f{\"u}r Geod{\"a}sie, Geoinformation und Landmanagement}, 13 | number={zfv 1/2023}, 14 | year={2023} 15 | } 16 | 17 | @article{herzog2024guide, 18 | title={Guide to Model Land: A guide to ethical questions for modeling and simulation in urban digital twins}, 19 | author={Herzog, Rico and Probst, Viktoria}, 20 | year={2024}, 21 | journal={}, 22 | publisher={HafenCity Universit{\"a}t Hamburg}, 23 | url={https://repos.hcu-hamburg.de/bitstream/hcu/1031/2/2024-07-22_City-Science-Lab_Guide-To-Model-Land_EN.pdf} 24 | } 25 | 26 | 27 | @book{thompson2022escape, 28 | title={Escape from model land: How mathematical models can lead us astray and what we can do 
about it}, 29 | author={Thompson, Erica}, 30 | year={2022}, 31 | publisher={Hachette UK} 32 | } 33 | 34 | 35 | @misc{batty2021multiple, 36 | title={Multiple models}, 37 | author={Batty, Michael}, 38 | journal={Environment and Planning B: Urban Analytics and City Science}, 39 | volume={48}, 40 | number={8}, 41 | pages={2129--2132}, 42 | year={2021}, 43 | publisher={SAGE Publications Sage UK: London, England} 44 | } 45 | -------------------------------------------------------------------------------- /.github/workflows/deploy-docs.yaml: -------------------------------------------------------------------------------- 1 | name: deploy-book 2 | 3 | # Run this when the master or main branch changes 4 | on: 5 | push: 6 | branches: 7 | - main 8 | - documentation 9 | # If your git repository has the Jupyter Book within some-subfolder next to 10 | # unrelated files, you can make this run only if a file within that specific 11 | # folder has been modified. 12 | # 13 | # paths: 14 | # - some-subfolder/** 15 | 16 | # This job installs dependencies, builds the book, and pushes it to `gh-pages` 17 | jobs: 18 | deploy-book: 19 | runs-on: ubuntu-latest 20 | permissions: 21 | pages: write 22 | id-token: write 23 | steps: 24 | - uses: actions/checkout@v4 25 | 26 | # Install dependencies 27 | - name: Set up Python 3.11 28 | uses: actions/setup-python@v5 29 | with: 30 | python-version: '3.11' 31 | cache: pip # Implicitly uses requirements.txt for cache key 32 | 33 | # Install the dependencies 34 | - name: Install dependencies 35 | run: | 36 | python -m pip install --upgrade pip 37 | pip install jupyter-book sphinx-autoapi sphinxcontrib-autoyaml 38 | 39 | 40 | #- name: Install and configure Poetry 41 | # uses: snok/install-poetry@v1 42 | # with: 43 | # version: 1.8.5 44 | # installer-parallel: true 45 | 46 | #- name: Install the project dependencies 47 | # run: | 48 | # poetry install --only=docs 49 | 50 | # Build the book 51 | - name: Build the book 52 | run: | 53 | jupyter-book build 
docs 54 | 55 | # Upload the book's HTML as an artifact 56 | - name: Upload artifact 57 | uses: actions/upload-pages-artifact@v3 58 | with: 59 | path: "docs/_build/html" 60 | 61 | # Deploy the book's HTML to GitHub Pages 62 | - name: Deploy to GitHub Pages 63 | id: deployment 64 | uses: actions/deploy-pages@v4 65 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | ARG MAMBA_USER=mambauser 2 | 3 | FROM python:3.11-bookworm AS base 4 | 5 | ENV CACHE_DIR=/app/cache 6 | 7 | WORKDIR /app 8 | 9 | COPY environment.yaml ./ 10 | RUN --mount=type=cache,target=$CACHE_DIR apt-get update && apt-get install -y --no-install-recommends \ 11 | git \ 12 | && rm -rf /var/lib/apt/lists/* \ 13 | && poetry_version=$(grep 'poetry=' environment.yaml | awk -F '=' '{print $2}') \ 14 | && pip install poetry==$poetry_version 15 | 16 | ENV POETRY_NO_INTERACTION=1 \ 17 | POETRY_VIRTUALENVS_IN_PROJECT=1 \ 18 | POETRY_VIRTUALENVS_CREATE=1 \ 19 | POETRY_CACHE_DIR=/app/poetry_cache 20 | 21 | COPY pyproject.toml ./ 22 | #poetry.lock 23 | RUN poetry lock && poetry install --without=dev --no-root 24 | 25 | # maybe needed for psycopg2 26 | # RUN apt update \ 27 | # && apt upgrade -y \ 28 | # && apt install -qq -y --no-install-recommends \ 29 | # libpq-dev gdal-bin libgdal-dev \ 30 | # && apt clean 31 | 32 | COPY src ./src 33 | COPY migrations ./migrations 34 | RUN touch README.md \ 35 | && poetry build \ 36 | && /app/.venv/bin/python -m pip install dist/*.whl 37 | #--no-deps 38 | 39 | FROM python:3.11-slim-bookworm AS runtime 40 | 41 | ARG USER_UID=1000 42 | ARG USERNAME=pythonuser 43 | ARG USER_GID=2000 44 | ARG SOURCE_COMMIT 45 | ARG IMAGE_TAG=2.0.0 46 | 47 | LABEL maintainer="Urban Data Analytics" \ 48 | name="analytics/urban-model-platform" \ 49 | source_commit=$SOURCE_COMMIT \ 50 | version=${IMAGE_TAG} 51 | 52 | # add user and group 53 | RUN groupadd --gid $USER_GID
$USERNAME && \ 54 | useradd --create-home --no-log-init --gid $USER_GID --uid $USER_UID --shell /bin/bash $USERNAME && \ 55 | chown -R $USERNAME:$USERNAME /home/$USERNAME /usr/local/lib /usr/local/bin 56 | 57 | USER $USERNAME 58 | WORKDIR /home/$USERNAME 59 | 60 | ENV VIRTUAL_ENV=/app/.venv \ 61 | PATH="/app/.venv/bin:$PATH" 62 | 63 | COPY --from=base \ 64 | --chmod=0755 \ 65 | --chown=$USERNAME:$USERNAME \ 66 | /app/.venv /app/.venv 67 | 68 | COPY scripts/entrypoint.sh entrypoint.sh 69 | COPY --from=base /app/migrations migrations 70 | 71 | EXPOSE 5000 72 | 73 | ENTRYPOINT [ "/home/pythonuser/entrypoint.sh" ] 74 | -------------------------------------------------------------------------------- /src/ump/api/routes/processes.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import copy 3 | import json 4 | 5 | from apiflask import APIBlueprint 6 | from flask import Response, g, request 7 | 8 | import ump.api.providers as providers 9 | from ump.api.models.process import Process 10 | from ump.api.processes import load_processes 11 | 12 | processes = APIBlueprint("processes", __name__) 13 | 14 | @processes.route("/", defaults={"page": "index"}) 15 | def index(page): 16 | result = asyncio.run(load_processes()) 17 | return Response(json.dumps(result), mimetype="application/json") 18 | 19 | 20 | @processes.route("/<process_id_with_prefix>", methods=["GET"]) 21 | def show(process_id_with_prefix=None): 22 | process = Process(process_id_with_prefix) 23 | return Response(process.to_json(), mimetype="application/json") 24 | 25 | 26 | @processes.route("/<process_id_with_prefix>/execution", methods=["POST"]) 27 | def execute(process_id_with_prefix=None): 28 | auth = g.get('auth_token') 29 | process = Process(process_id_with_prefix) 30 | 31 | # extract unique user ID ('sub') from auth token if available 32 | result = process.execute(request.json, None if auth is None else auth['sub']) 33 | return Response(json.dumps(result), status=201, mimetype="application/json") 34 | 
35 | # TODO: this lists ALL providers' processes in providers.yaml, ignoring "exclude: True" 36 | @processes.route("/providers", methods=["GET"]) 37 | def get_providers(): 38 | """Returns the providers config""" 39 | response = copy.deepcopy(providers.get_providers()) 40 | for key in response: 41 | if 'authentication' in response[key]: 42 | del response[key]['authentication'] 43 | del response[key]['url'] 44 | if 'timeout' in response[key]: 45 | del response[key]['timeout'] 46 | for process in response[key]['processes']: 47 | if 'deterministic' in response[key]['processes'][process]: 48 | del response[key]['processes'][process]['deterministic'] 49 | if 'anonymous-access' in response[key]['processes'][process]: 50 | del response[key]['processes'][process]['anonymous-access'] 51 | return response 52 | -------------------------------------------------------------------------------- /docs/content/index.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Documentation Urban Model Platform 4 | 5 | 11 | 12 | 13 | The Urban Model Platform is an Open Urban Platform to distribute and access (simulation) models for Urban Digital Twins. It builds on the [OGC API Processes](https://docs.ogc.org/is/18-062r2/18-062r2.html) open standard and was developed by the City Science Lab at HafenCity University Hamburg and the Agency for Geoinformation and Surveying in the context of the [Connected Urban Twins](https://www.connectedurbantwins.de/) project. 14 | 15 | It is a key building block to provide simulation models and other algorithms in urban digital twins. By providing a single access point to multiple servers that run the algorithms, the Urban Model Platform is a novel middleware that can serve various front-end digital twin applications.
16 | 17 | ![Architecture Overview](../Architecture-Overview.png) 18 | 19 | ## Key Features 20 | - Access point to multiple model servers 🌐 21 | - Built on the open OGC API Processes standard 🚀 22 | - Dynamic configuration ⚙️ 23 | - Authorization and authentication 🔐 24 | - GeoServer integration 🌍 25 | - Ensemble Modeling 🔄 26 | - Open Source (GPL-3.0 License) 💻 27 | 28 | 29 | ## Getting started 30 | To get started with the Urban Model Platform, follow these steps: 31 | 32 | 1. Check out our [Quickstart Guide](quickstart) to get familiar with the platform. 33 | 2. Learn about the [Platform Architecture](architecture-overview) to understand its components and workflows. 34 | 3. Set up a local development environment by following the [Contributing](contributing) guidelines. 35 | 4. Contribute to the project by following our [Contributing Guidelines](contributing). 36 | 37 | 38 | ## Developers 39 | The Urban Model Platform is developed collaboratively by: 40 | ![UMP-Developer-Banner](https://github.com/user-attachments/assets/18f4826f-e828-4206-920a-9d1e248523e5) 41 | 42 | 43 | 44 | -------------------------------------------------------------------------------- /.vscode/launch.json: -------------------------------------------------------------------------------- 1 | { 2 | // Use IntelliSense to learn about possible attributes. 3 | // Hover to view descriptions of existing attributes.
4 | // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 5 | "version": "0.2.0", 6 | "configurations": [ 7 | { 8 | "name": "ump + db", 9 | "type": "debugpy", 10 | "request": "launch", 11 | "module": "flask", 12 | "envFile": "${workspaceFolder}/.env", 13 | "env": { 14 | "FLASK_APP": "src/ump/main.py", 15 | "FLASK_DEBUG": "1" 16 | }, 17 | "args": [ 18 | "--debug", 19 | "run", 20 | "--no-debugger", 21 | "-p", 22 | // "5005", 23 | "${command:pickArgs}" 24 | ], 25 | "jinja": true, 26 | "autoStartBrowser": false, 27 | "preLaunchTask": "start-postgis-container", 28 | "postDebugTask": "stop-postgis-container", 29 | "justMyCode": false, 30 | }, 31 | { 32 | "name": "ump + db + migrations", 33 | "type": "debugpy", 34 | "request": "launch", 35 | "module": "flask", 36 | "envFile": "${workspaceFolder}/.env", 37 | "env": { 38 | "FLASK_APP": "src/ump/main.py", 39 | "FLASK_DEBUG": "1" 40 | }, 41 | "args": [ 42 | "--debug", 43 | "run", 44 | "--no-debugger", 45 | "-p", 46 | // "5005", 47 | "${command:pickArgs}" 48 | ], 49 | "jinja": true, 50 | "autoStartBrowser": false, 51 | "preLaunchTask": "setup-environment", 52 | "postDebugTask": "stop-postgis-container", 53 | "justMyCode": false, 54 | }, 55 | { 56 | "name": "Python Debugger: Remote Attach", 57 | "type": "debugpy", 58 | "request": "attach", 59 | "connect": { 60 | "host": "localhost", 61 | "port": 5678 62 | }, 63 | "pathMappings": [ 64 | { 65 | "localRoot": "${workspaceFolder}", 66 | "remoteRoot": "." 67 | } 68 | ] 69 | } 70 | ] 71 | } 72 | -------------------------------------------------------------------------------- /docs/content/03-architecture/keycloak.md: -------------------------------------------------------------------------------- 1 | (Keycloak)= 2 | # Keycloak 3 | 4 | Keycloak is an open-source Identity and Access Management (IAM) solution that provides user authentication, authorization, and single sign-on capabilities. 
It enables secure access to applications and services by managing user identities and permissions. In the Urban Model Platform, Keycloak serves as the central authentication server, handling access control across components. 5 | 6 | ## Configure Keycloak 7 | 1. Open Keycloak on `http://localhost:${KEYCLOAK_PORT_EXTERNAL}/auth` 8 | 2. To configure Keycloak for a dev setup initially, log in with admin/admin. Then: 9 | 3. Create a new realm named `UrbanModelPlatform` 10 | 4. Create a new client in that realm called `ump-client` (activate OAuth 2.0 Device Authorization Grant and Direct access grants) 11 | 5. Create a test user called `ump` and set its password to `ump` 12 | 6. Make sure to set the Keycloak host in `.env` to your local hostname or IP address 13 | 14 | ## Securing Model Servers and Processes 15 | 16 | You can secure processes and model servers in Keycloak by adding users to special client roles. To secure a specific process, create a role named `modelserver_processid`; to secure all processes of a model server, create a role named `modelserver`. The IDs correspond to the keys used in the providers.yaml. 17 | 18 | 19 | ## Accessing secured Processes in Development 20 | 21 | If you access the `/processes` list without any authentication, you can see all processes that are configured with `anonymous-access: True` (learn more about the configuration of providers [here](providers)). If you want to see all processes a specific user is authorized to see, follow these steps: 22 | 23 | 1. Log in with admin/admin 24 | 2. Go to the user (e.g. `ump`) and make sure to fill out the general information and switch on "E-Mail verified" 25 | 3. Log out and log in as the user `ump` via the following URL: `http://localhost:${KEYCLOAK_PORT_EXTERNAL}/auth/realms/UrbanModelPlatform/account` 26 | 4. Obtain the client secret by going to the client, clicking `Credentials` and copying the secret 27 | 5. 
If the login is working, get a token for the user: 28 | 29 | ```bash 30 | curl -X POST "http://localhost:${KEYCLOAK_PORT_EXTERNAL}/auth/realms/UrbanModelPlatform/protocol/openid-connect/token" \ 31 | -H "Content-Type: application/x-www-form-urlencoded" \ 32 | -d "grant_type=password" \ 33 | -d "client_id=ump-client" \ 34 | -d "client_secret=<client-secret>" \ 35 | -d "username=ump" \ 36 | -d "password=ump" 37 | ``` 38 | 39 | 6. With the token obtained, you can access the entire processes list by executing: 40 | ```bash 41 | curl -L -v -X GET "http://localhost:<port>/api/processes" \ 42 | -H "Authorization: Bearer <token>" 43 | ``` -------------------------------------------------------------------------------- /charts/urban-model-platform/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "ump.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} 6 | {{- end }} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | If release name contains chart name it will be used as a full name. 12 | */}} 13 | {{- define "ump.fullname" -}} 14 | {{- if .Values.fullnameOverride }} 15 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} 16 | {{- else }} 17 | {{- $name := default .Chart.Name .Values.nameOverride }} 18 | {{- if contains $name .Release.Name }} 19 | {{- .Release.Name | trunc 63 | trimSuffix "-" }} 20 | {{- else }} 21 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} 22 | {{- end }} 23 | {{- end }} 24 | {{- end }} 25 | 26 | {{/* 27 | Create chart name and version as used by the chart label.
28 | */}} 29 | {{- define "ump.chart" -}} 30 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} 31 | {{- end }} 32 | 33 | {{/* 34 | Common labels 35 | */}} 36 | {{- define "ump.labels" -}} 37 | helm.sh/chart: {{ include "ump.chart" . }} 38 | {{ include "ump.selectorLabels" . }} 39 | {{- if .Chart.AppVersion }} 40 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 41 | {{- end }} 42 | app.kubernetes.io/managed-by: {{ .Release.Service }} 43 | {{- end }} 44 | 45 | {{/* 46 | Selector labels 47 | */}} 48 | {{- define "ump.selectorLabels" -}} 49 | app.kubernetes.io/name: urban-model-platform-api 50 | app.kubernetes.io/component: api 51 | app.kubernetes.io/part-of: urban-model-platform 52 | {{- end }} 53 | 54 | {{/* 55 | Create the name of the service account to use 56 | */}} 57 | {{- define "ump.serviceAccountName" -}} 58 | {{- if .Values.serviceAccount.create }} 59 | {{- default (include "ump.fullname" .) .Values.serviceAccount.name }} 60 | {{- else }} 61 | {{- default "default" .Values.serviceAccount.name }} 62 | {{- end }} 63 | {{- end }} 64 | 65 | {{/* 66 | Create a variable that holds the current issuer (prod or staging) 67 | */}} 68 | {{- define "ump.issuer" -}} 69 | {{- if .Values.tls.clusterIssuerRef.name }} 70 | {{- .Values.tls.clusterIssuerRef.name }} 71 | {{- else if .Values.tls.issuer.prodEnabled }} 72 | {{- printf "%s-le-prod" (include "ump.fullname" .) }} 73 | {{- else }} 74 | {{- printf "%s-le-staging" (include "ump.fullname" .) 
}} 75 | {{- end }} 76 | {{- end }} 77 | 78 | {{/* 79 | Validate if hostname has a value when tls is enabled 80 | */}} 81 | {{- define "ump.validateValues" -}} 82 | {{- if and .Values.tls.enabled (not .Values.tls.gateway.hostName) -}} 83 | {{- fail "tls.gateway.hostName is required when TLS is enabled" -}} 84 | {{- end -}} 85 | {{- end -}} -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | All notable changes to this project will be documented in this file. 3 | 4 | The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.1.0/) 5 | and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). 6 | 7 | # [2.x] 8 | ## [2.1.0] - 2025-07-31 9 | ### Changed: 10 | - improved error handling when requesting remote servers' processes and jobs 11 | - helm chart is up-to-date with current UMP 12 | - provider loader listens on any event to be compatible with configmap-updates in k8s 13 | - provider loader improved: debouncing rapid file changes and atomic updates of providers object 14 | - improved server responses in certain situations, especially when something went wrong: users are now shown JSON information (as this is a JSON API) in accordance with the OGC API Processes spec 15 | - improved job starting mechanism 16 | 17 | ### Added: 18 | - a new setting to control the gunicorn worker timeout was introduced: UMP_SERVER_TIMEOUT 19 | - a new setting to control the UMP server path prefix was introduced: UMP_API_SERVER_URL_PREFIX 20 | - timeouts for all requests to remote servers 21 | 22 | ### Fixed: 23 | - using setting UMP_KEYCLOAK_CLIENT_ID instead of hard-coded "ump-client" 24 | - job insert queries failed when logged-in user created a job 25 | - missing job metadata 26 | - fetch correct job status from remote server 27 | 28 | ## [2.0.0] - 2025-06-25 29 | 30 | ### Added 31 | - comprehensive documentation 32 | - 
unified database connection pool handling 33 | 34 | ### Changed 35 | - created a providers pydantic class for better type safety and concise handling 36 | - improved providers.yaml loading and provider updating mechanism 37 | - improved logging 38 | - improved Keycloak connection error handling 39 | 40 | ### Fixed 41 | - ump ran out of database connections due to unclosed connections 42 | 43 | # [1.x] 44 | ## [1.2.0] - 2024-05-25 45 | 46 | ### Added 47 | - documentation 48 | - Keycloak connection error handling 49 | - complete database migrations 50 | - a helm chart 51 | 52 | ### Fixed 53 | - missing keycloak env vars 54 | 55 | 56 | ### Changed 57 | - Improved start-dev in Makefile, addressed database issues 58 | - using pydantic classes in some cases now (e.g. for provider config) 59 | - load processes async 60 | - made the process of determining which processes are visible to the user more concise and more explicit 61 | - added base logger and logging 62 | - simplified keycloak connection settings 63 | - improved settings management 64 | - improved database connection pooling and connection re-use 65 | - fixed dev setup and smoothed the dev setup experience 66 | 67 | ## [1.1.0] - 2024-07-22 68 | 69 | ### Changed 70 | 71 | - added package management system (poetry) 72 | - using project template (copier) 73 | - moved source code inside src folder and restructured it 74 | -------------------------------------------------------------------------------- /src/ump/config.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from pathlib import Path 3 | 4 | from pydantic import FilePath, HttpUrl, SecretStr, computed_field, field_validator 5 | from pydantic_settings import BaseSettings 6 | from rich import print 7 | 8 | logger = logging.getLogger(__name__) 9 | 10 | 11 | # using pydantic_settings to manage environment variables 12 | # and do automatic type casting in a central place 13 | class UmpSettings(BaseSettings): 
UMP_LOG_LEVEL: str = "INFO" 15 | UMP_PROVIDERS_FILE: FilePath = Path("providers.yaml") 16 | UMP_API_SERVER_URL: str = "http://localhost:3000" 17 | UMP_API_SERVER_WORKERS: int = 4 18 | UMP_REMOTE_JOB_STATUS_REQUEST_INTERVAL: int = 5 19 | UMP_DATABASE_NAME: str = "ump" 20 | UMP_DATABASE_HOST: str = "postgres" 21 | UMP_DATABASE_PORT: int = 5432 22 | UMP_DATABASE_USER: str = "postgres" 23 | UMP_DATABASE_PASSWORD: SecretStr = SecretStr("postgres") 24 | UMP_GEOSERVER_URL: HttpUrl | None = HttpUrl("http://geoserver:8080/geoserver") 25 | UMP_GEOSERVER_DB_HOST: str = "postgis" 26 | UMP_GEOSERVER_DB_PORT: int = 5432 27 | UMP_GEOSERVER_DB_NAME: str = "ump" 28 | UMP_GEOSERVER_DB_USER: str = "ump" 29 | UMP_GEOSERVER_DB_PASSWORD: SecretStr = SecretStr("ump") 30 | UMP_GEOSERVER_WORKSPACE_NAME: str = "UMP" 31 | UMP_GEOSERVER_USER: str = "geoserver" 32 | UMP_GEOSERVER_PASSWORD: SecretStr = SecretStr("geoserver") 33 | UMP_GEOSERVER_CONNECTION_TIMEOUT: int = 60 # seconds 34 | UMP_JOB_DELETE_INTERVAL: int = 240 # minutes 35 | UMP_KEYCLOAK_URL: HttpUrl = HttpUrl("http://keycloak:8080/auth") 36 | UMP_KEYCLOAK_REALM: str = "UrbanModelPlatform" 37 | UMP_KEYCLOAK_CLIENT_ID: str = "ump-client" 38 | UMP_KEYCLOAK_USER: str = "admin" 39 | UMP_KEYCLOAK_PASSWORD: SecretStr = SecretStr("admin") 40 | UMP_API_SERVER_URL_PREFIX: str = "/" 41 | 42 | # Gunicorn default timeout is 30 seconds 43 | UMP_SERVER_TIMEOUT: int = 30 44 | 45 | @computed_field 46 | @property 47 | def UMP_GEOSERVER_URL_REST(self) -> HttpUrl: 48 | """Constructs the full URL for the GeoServer REST API""" 49 | return HttpUrl(str(self.UMP_GEOSERVER_URL) + "/rest") 50 | 51 | @computed_field 52 | @property 53 | def UMP_GEOSERVER_URL_WORKSPACE(self) -> HttpUrl: 54 | """Constructs the full URL for the GeoServer workspace""" 55 | return HttpUrl(str(self.UMP_GEOSERVER_URL) + "/rest/workspaces") 56 | 57 | def print_settings(self): 58 | """Prints the settings for debugging purposes""" 59 | logger.info("UMP Settings:") 60 | print(self) 61 | 
62 | @field_validator("UMP_KEYCLOAK_URL", mode="before") 63 | def ensure_trailing_slash(cls, value: str) -> str: 64 | """Ensure UMP_KEYCLOAK_URL has a trailing slash.""" 65 | if not value.endswith("/"): 66 | value += "/" 67 | return value 68 | 69 | 70 | app_settings = UmpSettings() 71 | app_settings.print_settings() 72 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["poetry-core"] 3 | build-backend = "poetry.core.masonry.api" 4 | 5 | [tool.ruff] 6 | # Enable the pycodestyle (`E`) and Pyflakes (`F`) rules by default. 7 | # Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or 8 | # McCabe complexity (`C901`) by default. 9 | select = ["E", "F"] 10 | # same as isort, black 11 | line-length = 88 12 | # Assume Python 3.11 13 | target-version = "py311" 14 | # Exclude a variety of commonly ignored directories. 15 | exclude = [ 16 | ".bzr", 17 | ".direnv", 18 | ".eggs", 19 | ".git", 20 | ".git-rewrite", 21 | ".hg", 22 | ".mypy_cache", 23 | ".nox", 24 | ".pants.d", 25 | ".pytype", 26 | ".ruff_cache", 27 | ".svn", 28 | ".tox", 29 | ".venv", 30 | "__pypackages__", 31 | "_build", 32 | "buck-out", 33 | "build", 34 | "dist", 35 | "node_modules", 36 | "venv", 37 | ] 38 | 39 | [tool.poetry] 40 | name = "ump" 41 | version = "2.1.0" 42 | description = "server federation api, OGC API Processes-based, to connect model servers and centralize access to them" 43 | authors = [ 44 | "Rico Herzog ", 45 | "Maja Richter ", 46 | "Stefan Schuhart " 47 | ] 48 | readme = "README.md" 49 | package-mode = true 50 | 51 | [tool.poetry.urls] 52 | Homepage = "https://citysciencelab.github.io/urban-model-platform" 53 | Documentation = "https://github.com/citysciencelab/urban-model-platform" 54 | Changelog = "https://github.com/citysciencelab/urban-model-platform/changelog.md" 55 | Repository = 
"https://github.com/citysciencelab/urban-model-platform" 56 | 57 | [tool.poetry.dependencies] 58 | python = ">=3.11,<3.13" 59 | werkzeug = "^3.0.3" 60 | flask = "^3.0.3" 61 | flask-cors = "^4.0.1" 62 | requests = "^2.32.3" 63 | aiohttp = "^3.9.5" 64 | psycopg2-binary = "^2.9.9" 65 | numpy = "~=1.26.4" 66 | geopandas = "^1.0.1" 67 | geoalchemy2 = "^0.15.2" 68 | apiflask = "^2.2.0" 69 | python-dotenv = "^1.0.1" 70 | gunicorn = "^23.0.0" 71 | pyyaml = "^6.0.2" 72 | flask-migrate = "^4.0.7" 73 | python-keycloak = "^4.3.0" 74 | sqlalchemy-serializer = "^1.4.22" 75 | watchdog = "^5.0.3" 76 | ema-workbench = "^2.5.2" 77 | ipyparallel = "^8.8.0" 78 | schedule = "^1.2.2" 79 | pydantic-settings = "^2.8.1" 80 | pydantic = "^2.11.1" 81 | rich = "^14.0.0" 82 | 83 | [tool.poetry.group.dev.dependencies] 84 | # formatting, quality, tests 85 | autoflake = ">=1.4" 86 | black = ">=23.7" 87 | isort = ">=5.7.0" 88 | mypy = ">=0.812" 89 | pytest = ">=6.2.2" 90 | pytest-cov = ">=2.11.1" 91 | pytest-randomly = ">=3.5.0" 92 | pytest-sugar = ">=0.9.4,<1" 93 | pytest-xdist = ">=2.2.0,<3" 94 | types-toml = ">=0.10.1,<1" 95 | pre-commit = ">=3.4.0,<4" 96 | debugpy = "^1.8.5" 97 | flake8 = "^7.1.1" 98 | pylint = "^3.3.1" 99 | bump-my-version = "^1.2.0" 100 | 101 | [tool.poetry.group.docs] 102 | optional = true 103 | 104 | [tool.poetry.group.docs.dependencies] 105 | jupyter-book = "^1" 106 | sphinx-autoapi = "^3" 107 | sphinxcontrib-autoyaml = "^1.1" 108 | 109 | [tool.black] 110 | line-length = 88 111 | exclude = "tests/fixtures" 112 | 113 | [tool.isort] 114 | profile = "black" 115 | line_length = 88 116 | not_skip = "__init__.py" 117 | multi_line_output = 3 118 | force_single_line = false 119 | balanced_wrapping = true 120 | default_section = "THIRDPARTY" 121 | known_first_party = "ump" 122 | include_trailing_comma = true 123 | -------------------------------------------------------------------------------- /docs/content/01-intro/intro.md: 
-------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | Urban Digital Twins are digital representations of urban environments that integrate various data sources, models, and simulations to support decision-making and urban planning. Based on a modular approach {cite:p}`schubbe2023urbane`, the Urban Model Platform (UMP) serves as a middleware to provide access to simulation models and algorithms for Urban Digital Twins. It is designed to be flexible, extensible, and easy to use, enabling users to integrate different models into their digital twin applications. 4 | 5 | 6 | ## Background 7 | Reality is complex and dynamic, and urban environments are no exception. Rather, urban environments are characterized by a multitude of interrelated systems, processes, and actors. This complexity makes it challenging to understand and predict the behavior of urban systems, especially in the context of rapid urbanization, climate change, and other global challenges. In many cases, models are not solely representations of a system, but its co-creators {cite:p}`herzog2024guide, thompson2022escape`. Multiple models {cite:p}`batty2021multiple` are often needed to capture the complexity of urban systems, and these models need to be integrated into a coherent framework that allows for their interaction and collaboration. The Urban Model Platform aims to provide such a framework, enabling users to access and utilize a wide range of models and algorithms for urban analysis and decision-making. 8 | 9 | Such models can be of various types: Agent-based models, AI and Machine Learning Models, system dynamics models, and others. They can be used to simulate various aspects of urban systems, such as land use, transportation, energy consumption, and social dynamics. The Urban Model Platform provides a unified interface for accessing these models, allowing users to easily integrate them into their applications and workflows. 
One thing all models have in common is that they transform a number of inputs into a number of outputs. To describe such input-output relationships, the Urban Model Platform uses the [OGC API Processes standard](https://github.com/opengeospatial/ogcapi-processes), which provides a standardized way to describe and access processes and workflows in geospatial applications. This standardization enables interoperability between different models and systems, making it easier to integrate and use them in various contexts. 10 | 11 | ## Architecture 12 | The Urban Model Platform is designed as a modular and extensible architecture that allows for the integration of various models and algorithms. The platform consists of several components, including: 13 | 14 | - **Flask API**: The backend API serves as the central hub for accessing and managing models and algorithms. It provides a RESTful interface based on the OGC API Processes for users to interact with the platform, submit jobs, and retrieve results. 15 | - **Geoserver**: The Geoserver component is responsible for serving geospatial data and visualizations. It allows users to access and visualize the results of simulations and analyses performed by the models. 16 | - **Keycloak**: Keycloak is used for authentication and authorization, ensuring that only authorized users can access the platform and its resources. 17 | - **PostgreSQL**: The PostgreSQL database is used to store the data and metadata associated with the models and simulations. It provides a robust and scalable solution for managing large volumes of data. 18 | - **Model Servers**: The platform can connect to various model servers, each hosting different models and algorithms. 
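The input-output contract described above can be made concrete with a minimal execution request. The sketch below is illustrative only: the base URL, the prefixed process id and the input names are assumptions that depend on the concrete deployment and the `providers.yaml` configuration, not documented defaults.

```python
import json

# Illustrative OGC API Processes execution request against the UMP API.
# ASSUMPTIONS: base URL, process id and input names are made up for this
# sketch -- adjust them to your deployment and providers.yaml.
base_url = "http://localhost:3000/api"
process_id = "modelserver-1:process-1"  # hypothetical provider/process pair

# OGC API Processes wraps all model parameters in a single "inputs" object.
execute_request = {
    "inputs": {
        "iterations": 10,        # hypothetical numeric model input
        "scenario": "baseline",  # hypothetical string model input
    }
}

# The request would be sent as
#   POST {base_url}/processes/{process_id}/execution
# with Content-Type: application/json; the UMP then answers with HTTP 201
# and a job description that can be polled under /jobs/<job_id>.
print(json.dumps(execute_request))
```

The 201 status matches the `execute` route in `src/ump/api/routes/processes.py`, which returns the job information as JSON.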
19 | 20 | 21 | 22 | ## Bibliography 23 | ```{bibliography} 24 | :style: plain 25 | ``` 26 | -------------------------------------------------------------------------------- /docs/content/02-user_guide/provider-configuration.md: -------------------------------------------------------------------------------- 1 | (providers)= 2 | # Configuring Providers 3 | As the Urban Model Platform does not provide any processes by itself, it needs to be connected to external model servers. This is done by configuring providers in the `providers.yaml` file. The following example shows how to configure a model server and its processes: 4 | 5 | 6 | ```yaml 7 | # providers.yaml 8 | modelserver-1: 9 | url: "http://localhost:5005" 10 | name: "Example Modelserver" 11 | authentication: 12 | type: "BasicAuth" 13 | user: "user" 14 | password: "password" 15 | timeout: 60 16 | processes: 17 | process-1: 18 | result-storage: "geoserver" 19 | result-path: simulation_geometry 20 | graph-properties: 21 | root-path: results.simulation_results 22 | x-path: results.simulation_results.x 23 | y-path: results.simulation_results.y 24 | anonymous-access: True 25 | process-2: 26 | result-storage: "remote" 27 | deterministic: True 28 | process-3: 29 | exclude: True 30 | ``` 31 | 32 | ```{warning} 33 | Currently, model servers have to provide endpoints that comply with the OGC API Processes standard in order to be loaded from this API. Otherwise, they will be skipped and an error will be logged. 34 | ``` 35 | 36 | ## Configuration options 37 | 38 | | Parameter | Type | Possible Values | Description | 39 | | --------- | --------| ----------------------- | --------------------------------------- | 40 | | url | String | Any http/https URL | URL of the model server. | 41 | | name | String | Any | Name of the model server.
| 42 | | **authentication** | Object | | | 43 | | authentication.type | String | BasicAuth | Type of authentication (currently, only BasicAuth is supported). | 44 | | authentication.user | String | Any | Username for BasicAuth. | 45 | | authentication.password | String | Any | Password for BasicAuth. | 46 | | timeout | Integer | Any positive integer (e.g. 60) | Time before a request to a model server is abandoned. | 47 | | **processes** | Object | | | 48 | | processes.result-storage | String | ["geoserver" \\| "remote"] | Storage option for the process results. If set to `remote`, no results are stored in the UMP itself; they are served directly from the model server. If set to `geoserver`, the UMP loads the Geoserver component and tries to store the result data in a dedicated Geoserver layer. | 49 | | processes.result-path | String | Any | If the results are stored in the Geoserver, you can specify the object path to the feature collection using `result-path`. Use dots to separate a path with several components: `result.some_obj.some_features`. | 50 | | processes.graph-properties | Object | root-path, x-path, y-path | Configuration for graph properties. The sub-properties `root-path`, `x-path` and `y-path` simplify graph configuration and thus data visualization in client UIs. | 51 | | processes.anonymous-access | Boolean | [True \\| False] | If set to `True`, the process can be seen and run by anonymous users. Jobs and layers created by anonymous users are cleaned up after some time (configurable in `config.py`). | 52 | | processes.deterministic | Boolean | [True \\| False] | If set to `True`, jobs are cached based on a hash of the input parameters, the process version and the user id. | 53 | | processes.exclude | Boolean | [True \\| False] | If set to `True`, the process will be excluded from the list of available processes.
| -------------------------------------------------------------------------------- /src/ump/api/models/ensemble.py: -------------------------------------------------------------------------------- 1 | """Ensemble and related entities.""" 2 | from datetime import datetime, timezone 3 | from typing import ClassVar 4 | 5 | from sqlalchemy import BigInteger, DateTime, ForeignKey, String 6 | from sqlalchemy.orm import Mapped, declarative_base, mapped_column 7 | from sqlalchemy_serializer import SerializerMixin 8 | 9 | Base = declarative_base() 10 | 11 | class JobsEnsembles(Base, SerializerMixin): 12 | """Entity linking jobs and ensembles.""" 13 | __tablename__ = 'jobs_ensembles' 14 | 15 | id: Mapped[int] = mapped_column(primary_key=True) 16 | ensemble_id: Mapped[int] = mapped_column(BigInteger()) 17 | job_id: Mapped[str] = mapped_column(String()) 18 | 19 | class JobsUsers(Base, SerializerMixin): 20 | __tablename__ = 'jobs_users' 21 | 22 | id: Mapped[int] = mapped_column(primary_key=True) 23 | job_id: Mapped[str] = mapped_column(String()) 24 | user_id: Mapped[str] = mapped_column(String()) 25 | 26 | class EnsemblesUsers(Base, SerializerMixin): 27 | __tablename__ = 'ensembles_users' 28 | 29 | id: Mapped[int] = mapped_column(primary_key=True) 30 | ensemble_id: Mapped[int] = mapped_column(BigInteger()) 31 | user_id: Mapped[str] = mapped_column(String()) 32 | 33 | class Ensemble(Base, SerializerMixin): 34 | """Ensemble entity""" 35 | __tablename__ = "ensembles" 36 | 37 | id: Mapped[int] = mapped_column(primary_key=True) 38 | name: Mapped[str] = mapped_column(String()) 39 | description: Mapped[str] = mapped_column(String()) 40 | user_id: Mapped[str] = mapped_column(String()) 41 | scenario_configs: Mapped[str] = mapped_column(String()) 42 | created: Mapped[datetime] = mapped_column(DateTime()) 43 | modified: Mapped[datetime] = mapped_column(DateTime()) 44 | jobs_metadata: ClassVar[dict] 45 | 46 | def __init__( 47 | self, 48 | name: str, 49 | description: str, 50 | user_id:
str, 51 | scenario_configs: str, 52 | ): 53 | self.name = name 54 | self.description = description 55 | self.user_id = user_id 56 | self.scenario_configs = scenario_configs 57 | self.created = datetime.now(timezone.utc) 58 | self.modified = datetime.now(timezone.utc) 59 | 60 | def _to_dict(self): 61 | return { 62 | "id": self.id, 63 | "created": self.created.isoformat(), 64 | "modified": self.modified.isoformat(), 65 | "name": self.name, 66 | "description": self.description, 67 | "user_id": self.user_id, 68 | "scenario_configs": self.scenario_configs, 69 | } 70 | 71 | class Comment(Base, SerializerMixin): 72 | """Comments for ensembles""" 73 | __tablename__ = "ensemble_comments" 74 | 75 | id: Mapped[int] = mapped_column(primary_key=True) 76 | user_id: Mapped[str] = mapped_column(String()) 77 | ensemble_id: Mapped[int] = mapped_column(BigInteger(), ForeignKey("ensembles.id")) 78 | comment: Mapped[str] = mapped_column(String()) 79 | created: Mapped[datetime] = mapped_column(DateTime()) 80 | modified: Mapped[datetime] = mapped_column(DateTime()) 81 | 82 | def __init__(self, user_id: str, ensemble_id: int, comment: str): 83 | self.user_id = user_id 84 | self.ensemble_id = ensemble_id 85 | self.comment = comment 86 | self.created = datetime.now(timezone.utc) 87 | self.modified = datetime.now(timezone.utc) 88 | 89 | def _to_dict(self): 90 | return { 91 | "id": self.id, 92 | "user_id": self.user_id, 93 | "ensemble_id": self.ensemble_id, 94 | "comment": self.comment, 95 | } 96 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | ![UMP-Banner](https://github.com/user-attachments/assets/f70d498a-ef6d-4a3a-9e1e-429da130c65d) 3 | 4 | 5 | # Urban Model Platform 6 | The Urban Model Platform is an Open Urban Platform to distribute and access (simulation) models for Urban Digital Twins. 
It builds on the [OGC API Processes](https://docs.ogc.org/is/18-062r2/18-062r2.html) open standard and was developed by the City Science Lab at HafenCity University Hamburg and the Agency for Geoinformation and Surveying in the context of the [Connected Urban Twins](https://www.connectedurbantwins.de/) project. 7 | 8 | The repository contains a Python implementation of the OGC API Processes standard that can be used as a "system of systems" open platform. In the context of digital urban twins, such a platform can provide the infrastructure to integrate and combine domain-specific models ranging from simple regression models to advanced simulation and AI models. Instead of executing jobs and processes on the server itself, the Urban Model Platform delegates them to multiple configured providers, i.e. model servers. 9 | 10 | This architecture is independent of any frontend application. One could, for example, use the [Scenario Explorer](https://github.com/citysciencelab/scenario-explorer-addon) as a client frontend, but thanks to the standardized API, multiple frontends are possible.
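The provider mechanism mentioned above can be pictured as a thin routing layer: the platform keeps a registry of configured model servers and forwards each process call to the server that hosts it. The sketch below illustrates the idea only — the registry structure and names are hypothetical and are not the platform's actual implementation:

```python
# Hypothetical sketch of provider-based routing (not the actual UMP code).
# Each configured provider hosts a set of processes; the platform resolves
# a process id to the provider's URL before forwarding the request.
PROVIDERS = {
    "modelserver-1": {
        "url": "http://localhost:5005",
        "processes": {"squareroot", "hello-world"},
    },
    "modelserver-2": {
        "url": "http://modelserver-2:5005",
        "processes": {"noise-simulation"},
    },
}

def resolve_process(process_id: str) -> str:
    """Return the remote execution URL for a process, or raise if unknown."""
    for provider in PROVIDERS.values():
        if process_id in provider["processes"]:
            return f"{provider['url']}/processes/{process_id}/execution"
    raise KeyError(f"No provider offers process {process_id!r}")

print(resolve_process("noise-simulation"))
# → http://modelserver-2:5005/processes/noise-simulation/execution
```

Because every provider speaks the same OGC API Processes dialect, the routing layer needs no per-model adapters — only the registry entries differ.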
11 | 12 | 13 | ## Documentation 14 | 15 | ➡️📑 Check out the full **documentation** [here](https://citysciencelab.github.io/urban-model-platform/) 16 | 17 | ➡️🧑‍💻 Check out how to [Contribute](CONTRIBUTING.md) 18 | 19 | ➡️🗓️ Find the latest [Changes](CHANGELOG.md) 20 | 21 | 22 | 23 | ## Application architecture and dependency diagram 24 | 25 | Architecture-Overview 26 | 27 | 28 | 29 | ```mermaid 30 | flowchart TB 31 | %% Define styles 32 | classDef api fill:#4a90e2,stroke:#333,stroke-width:2px,color:white 33 | classDef auth fill:#ff9,stroke:#333,stroke-width:2px,color:black 34 | classDef db fill:#ffb366,stroke:#333,stroke-width:2px,color:black 35 | classDef gateway fill:#e4e4e4,stroke:#333,stroke-width:2px,color:black 36 | classDef geoserver fill:#9acd32,stroke:#333,stroke-width:2px,color:black 37 | 38 | %% Components 39 | api[UMP API] 40 | gateway[k8s Gateway Api] 41 | keycloak[Keycloak] 42 | geoserver[GeoServer] 43 | db_api[API PostgreSQL] 44 | db_auth[Auth PostgreSQL] 45 | db_spatial[Spatial PostgreSQL] 46 | 47 | %% Dependencies 48 | gateway --> api 49 | api --> keycloak 50 | api --> db_api 51 | api --> geoserver 52 | keycloak --> db_auth 53 | geoserver --> db_spatial 54 | 55 | %% Apply styles 56 | class api api 57 | class gateway gateway 58 | class keycloak auth 59 | class geoserver geoserver 60 | class db_api,db_auth,db_spatial db 61 | 62 | %% Layered subgraphs 63 | subgraph Network Layer 64 | gateway 65 | end 66 | 67 | subgraph Application Layer 68 | subgraph Authentication 69 | keycloak 70 | end 71 | subgraph Core Application 72 | api 73 | end 74 | subgraph Geospatial Web Data 75 | geoserver 76 | end 77 | end 78 | 79 | subgraph Storage Layer 80 | subgraph Databases 81 | db_api 82 | db_auth 83 | db_spatial 84 | end 85 | end 86 | ``` 87 | _________ 88 | 89 | The Urban Model Platform was developed in the context of the "Connected Urban Twins" Project and was funded by the KfW and the Federal Ministry for Housing and Urban Development 90 | 91 | 
![UMP-Sponsors-Banner](https://github.com/user-attachments/assets/cdc8c433-8c19-474d-b10f-383a11d74617) 92 | 93 | -------------------------------------------------------------------------------- /docs/content/01-intro/quick_start.md: -------------------------------------------------------------------------------- 1 | (quickstart)= 2 | # Quick Start 3 | 4 | This section provides a quick start guide to get you up and running with the Urban Model Platform. It covers the basic steps to set up your development environment, run the application, and test it. 5 | 6 | ## Requirements 7 | - Docker 8 | - Docker Compose 9 | - Python 3.8 or higher 10 | - Conda (only for local development) 11 | - Poetry (only for local development) 12 | 13 | ## Installation 14 | To install the Urban Model Platform, follow these steps: 15 | 16 | 1. Clone this repository by using ```git clone git@github.com:citysciencelab/urban-model-platform.git``` 17 | 2. Navigate to the project directory: ```cd urban-model-platform``` 18 | 3. Initiate the development environment by running: ```make initiate-dev``` 19 | 4. Build the Docker containers by running: ```make build-image``` 20 | 5. Start the local development environment by running: ```make start-dev``` 21 | 22 | ```{note} 23 | This will start the Urban Model Platform and all its dependencies, including Keycloak, PostgreSQL, and GeoServer.
24 | ``` 25 | 26 | ```{note} 27 | If you want to also start an example model server, make sure to initialize the git submodule and run the following command: 28 | ```git submodule update --init --recursive``` 29 | 30 | Then, you can start the model server by running: 31 | ```make start-dev-example``` 32 | 33 | ``` 34 | 35 | ## Accessing the Application 36 | Once the application is running, you can access it at the following URLs: 37 | - Urban Model Platform: [http://localhost:5003](http://localhost:5003) 38 | - Keycloak: [http://localhost:8081](http://localhost:8081) 39 | - GeoServer: [http://localhost:8080](http://localhost:8080) 40 | - PostgreSQL: `localhost:5432` (database connection, not an HTTP endpoint) 41 | - Example Model Server (only if set up): [http://localhost:5005](http://localhost:5005) 42 | 43 | 44 | ## Configuring Providers 45 | Providers of processes and model servers are defined in the [`providers.yaml`](../../providers.yaml) file. This file contains the configuration for connecting to external model servers and processes. Each provider entry specifies the necessary details, such as the server URL, authentication credentials, and process identifiers. Find more information about the providers in the [providers documentation](providers). 46 | 47 | ```{note} 48 | The `providers.yaml` file is essential for the Urban Model Platform to interact with external model servers and processes. Make sure to configure it correctly to ensure seamless integration. 49 | ``` 50 | 51 | 52 | ## Configuring Keycloak 53 | Keycloak is used for authentication and authorization in the Urban Model Platform. To configure Keycloak, follow these steps: 54 | 1. Open Keycloak on [http://localhost:8081/auth](http://localhost:8081/auth) 55 | 2. Log in with the admin credentials (admin/admin). 56 | 3. Create a new realm named `UrbanModelPlatform`. 57 | 4. Create a new client in that realm called `ump-client` (activate OAuth 2.0 Device Authorization Grant and Direct access grants).
58 | 5. Create a test user called `ump` and set its password to `ump`. 59 | 60 | ```{note} 61 | If a process is not configured with ```anonymous-access: True``` in [`providers.yaml`](../../providers.yaml), users have to be given permission to access the process. This can be done in two ways: 62 | 63 | 1. By adding the user to a specific client role `modelserverID_processID` in Keycloak. This will give the user access only to the specific process. 64 | 2. By adding the user to a specific client role `modelserverID` in Keycloak. This will give the user access to all processes of the model server with the specified id. 65 | 66 | `modelserverID` and `processID` correspond to the keys used in the [`providers.yaml`](../../providers.yaml) file. 67 | 68 | ``` -------------------------------------------------------------------------------- /charts/urban-model-platform/values.yaml: -------------------------------------------------------------------------------- 1 | # This is to override the chart name.
2 | nameOverride: "" 3 | fullnameOverride: "" 4 | 5 | image: 6 | repository: lgvanalytics.azurecr.io/urban-model-platform 7 | pullPolicy: IfNotPresent 8 | tag: "2.0.0" 9 | pullSecrets: 10 | - name: secret 11 | 12 | replicaCount: 1 13 | 14 | labels: {} 15 | 16 | tls: 17 | enabled: false 18 | issuer: 19 | prodEnabled: false 20 | email: "" 21 | gateway: 22 | name: "" 23 | httpSectionName: "" 24 | tlsSectionName: "" 25 | hostName: "" 26 | clusterIssuerRef: 27 | name: "" 28 | 29 | resources: 30 | limits: 31 | cpu: 500m 32 | memory: 512Mi 33 | requests: 34 | cpu: 100m 35 | memory: 128Mi 36 | 37 | service: 38 | type: ClusterIP 39 | port: 5000 # port under which the svc answers 40 | targetPort: 5000 # port the container itself uses 41 | 42 | config: 43 | logLevel: "DEBUG" 44 | providersFilePath: "./providers.yaml" 45 | providersFileMountPath: /app 46 | apiServerUrl: "http://localhost:5000" 47 | apiServerUrlPrefix: "/api" 48 | apiServerWorkers: "4" 49 | remoteJobStatusRequestInterval: "5" 50 | serverWorkerTimeout: "30" # Timeout for server workers, should be higher than all request to remote servers, to avoid server 500 errors 51 | geoserverConnectionTimeout: "60" 52 | jobDeleteInterval: "240" 53 | 54 | postgresConnection: 55 | existingSecret: 56 | name: postgres-credentials 57 | 58 | keycloakConnection: 59 | existingSecret: 60 | name: "" 61 | 62 | geoserverConnection: 63 | existingSecret: 64 | name: "" 65 | 66 | # If configMap for providers is already existing and should not be overwritten, set this to true. 
Default: false 67 | providers: 68 | existingConfigMap: 69 | name: "" # Set to use existing ConfigMap instead of creating a new one 70 | content: | 71 | modelserver: 72 | name: "modelserver" 73 | url: "http://localhost:5005" 74 | authentication: 75 | type: "BasicAuth" 76 | user: "user" 77 | password: "password" 78 | timeout: 1800 79 | processes: 80 | hello-world: 81 | result-storage: "remote" 82 | anonymous-access: True 83 | squareroot: 84 | result-storage: "remote" 85 | anonymous-access: True 86 | hello-geo-world: 87 | result-storage: "remote" 88 | 89 | autoscaling: 90 | enabled: false 91 | minReplicas: 1 92 | maxReplicas: 2 93 | targetCPUUtilizationPercentage: 80 94 | targetMemoryUtilizationPercentage: 80 95 | 96 | tolerations: 97 | - key: ump/reservedFor 98 | operator: "Equal" 99 | value: app 100 | effect: NoSchedule 101 | - key: ump/reservedFor 102 | operator: "Equal" 103 | value: app 104 | effect: NoExecute 105 | 106 | # This section builds out the service account. More information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/ 107 | serviceAccount: 108 | # Specifies whether a service account should be created 109 | create: false 110 | # Automatically mount a ServiceAccount's API credentials? 111 | automount: true 112 | # Annotations to add to the service account 113 | annotations: {} 114 | # The name of the service account to use. 115 | # If not set and create is true, a name is generated using the fullname template 116 | name: "" 117 | 118 | # This is for setting Kubernetes Annotations to a Pod. 119 | podAnnotations: {} 120 | 121 | # This is for setting Kubernetes Labels to a Pod.
122 | podLabels: {} 123 | 124 | podSecurityContext: {} 125 | # fsGroup: 2000 126 | 127 | securityContext: 128 | runAsNonRoot: true 129 | runAsUser: 1000 130 | readOnlyRootFilesystem: true 131 | 132 | affinity: {} 133 | -------------------------------------------------------------------------------- /docs/UMP-Logo.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /migrations/env.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from logging.config import fileConfig 3 | 4 | from alembic import context 5 | from flask import current_app 6 | 7 | # this is the Alembic Config object, which provides 8 | # access to the values within the .ini file in use. 9 | config = context.config 10 | 11 | # Interpret the config file for Python logging. 12 | # This line sets up loggers basically. 13 | fileConfig(config.config_file_name) 14 | logger = logging.getLogger('alembic.env') 15 | 16 | def get_engine(): 17 | try: 18 | # this works with Flask-SQLAlchemy<3 and Alchemical 19 | return current_app.extensions['migrate'].db.get_engine() 20 | except (TypeError, AttributeError): 21 | # this works with Flask-SQLAlchemy>=3 22 | return current_app.extensions['migrate'].db.engine 23 | 24 | 25 | def get_engine_url(): 26 | try: 27 | return get_engine().url.render_as_string(hide_password=False).replace( 28 | '%', '%%') 29 | except AttributeError: 30 | return str(get_engine().url).replace('%', '%%') 31 | 32 | 33 | # add your model's MetaData object here 34 | # for 'autogenerate' support 35 | # from myapp import mymodel 36 | # target_metadata = mymodel.Base.metadata 37 | config.set_main_option('sqlalchemy.url', get_engine_url()) 38 | target_db = current_app.extensions['migrate'].db 39 | 40 | # other values from the config, defined by the needs of env.py, 41 | # can be acquired: 42 | # my_important_option = 
config.get_main_option("my_important_option") 43 | # ... etc. 44 | 45 | 46 | def get_metadata(): 47 | if hasattr(target_db, 'metadatas'): 48 | return target_db.metadatas[None] 49 | return target_db.metadata 50 | 51 | 52 | def run_migrations_offline(): 53 | """Run migrations in 'offline' mode. 54 | 55 | This configures the context with just a URL 56 | and not an Engine, though an Engine is acceptable 57 | here as well. By skipping the Engine creation 58 | we don't even need a DBAPI to be available. 59 | 60 | Calls to context.execute() here emit the given string to the 61 | script output. 62 | 63 | """ 64 | url = config.get_main_option("sqlalchemy.url") 65 | context.configure( 66 | url=url, target_metadata=get_metadata(), literal_binds=True 67 | ) 68 | 69 | with context.begin_transaction(): 70 | context.run_migrations() 71 | 72 | 73 | def run_migrations_online(): 74 | """Run migrations in 'online' mode. 75 | 76 | In this scenario we need to create an Engine 77 | and associate a connection with the context. 
78 | 79 | """ 80 | 81 | # this callback is used to prevent an auto-migration from being generated 82 | # when there are no changes to the schema 83 | # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html 84 | def process_revision_directives(context, revision, directives): 85 | if getattr(config.cmd_opts, 'autogenerate', False): 86 | script = directives[0] 87 | if script.upgrade_ops.is_empty(): 88 | directives[:] = [] 89 | logger.info('No changes in schema detected.') 90 | 91 | conf_args = current_app.extensions['migrate'].configure_args 92 | if conf_args.get("process_revision_directives") is None: 93 | conf_args["process_revision_directives"] = process_revision_directives 94 | 95 | connectable = get_engine() 96 | 97 | with connectable.connect() as connection: 98 | context.configure( 99 | connection=connection, 100 | target_metadata=get_metadata(), 101 | **conf_args 102 | ) 103 | 104 | with context.begin_transaction(): 105 | context.run_migrations() 106 | 107 | 108 | if context.is_offline_mode(): 109 | run_migrations_offline() 110 | else: 111 | run_migrations_online() 112 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .ONESHELL: 2 | SHELL=/bin/bash 3 | 4 | .PHONY: build initiate-dev build-image upload-image start-dev \ 5 | start-dev-example restart-dev stop-dev build-docs clean-docs 6 | 7 | config ?= .env 8 | 9 | # Check for the .env file and create it if it does not exist 10 | $(shell [ -f $(config) ] || cp .env.example $(config)) 11 | 12 | include $(config) 13 | export $(shell sed 's/=.*//' $(config)) 14 | 15 | # Note that the extra activate is needed to ensure that the activate floats env to the front of PATH 16 | CONDA_ACTIVATE=source $$(conda info --base)/etc/profile.d/conda.sh ; conda activate; conda activate 17 | 18 | GIT_COMMIT := $(shell git rev-parse --short HEAD) 19 | 20 | 21 | initiate-dev: 22 | @if [
! -d ./.venv ]; then \ 23 | echo 'Creating conda environment in ./.venv'; \ 24 | conda env create -f environment.yaml -p ./.venv; \ 25 | else \ 26 | echo 'Conda environment (./.venv) already present'; \ 27 | fi 28 | 29 | @if [ ! -f providers.yaml ]; then \ 30 | cp providers.yaml.example providers.yaml; \ 31 | echo 'Creating providers.yaml from providers.yaml.example'; \ 32 | else \ 33 | echo 'providers.yaml already present'; \ 34 | fi 35 | 36 | @if [ ! -f .env ]; then \ 37 | cp .env.example .env; \ 38 | echo 'Creating .env from .env.example'; \ 39 | else \ 40 | echo '.env already present'; \ 41 | fi 42 | 43 | @ echo 'Creating docker network for development' 44 | docker network create ump_dev 45 | 46 | @echo 'Installing app dependencies:' 47 | poetry install 48 | 49 | build-image: 50 | @echo 'Building release ${CONTAINER_REGISTRY}/${CONTAINER_NAMESPACE}/$(IMAGE_NAME):$(IMAGE_TAG)' 51 | # build your image 52 | docker compose -f docker-compose-build.yaml build \ 53 | --build-arg SOURCE_COMMIT=$(GIT_COMMIT) \ 54 | --build-arg TAG=$(IMAGE_TAG) \ 55 | api 56 | 57 | upload-image: build-image 58 | docker compose -f docker-compose-build.yaml push api 59 | 60 | start-dev: 61 | ($(CONDA_ACTIVATE) ./.venv) 62 | 63 | @ echo 'Starting development environment containers: ump database, geoserver, keycloak, keycloak database' 64 | 65 | docker compose -f docker-compose-dev.yaml up -d api-db keycloak kc-db 66 | 67 | @ echo 'Waiting for database to be ready' 68 | sleep 7 69 | 70 | @ echo 'initialize the database' 71 | FLASK_APP=src.ump.main flask db init 72 | 73 | @ echo 'running database migrations' 74 | FLASK_APP=src.ump.main flask db upgrade 75 | 76 | @ echo 'Current database state' 77 | FLASK_APP=src.ump.main flask db current 78 | 79 | @ echo 'Now start a debug session with your preferred IDE, e.g.
VSCode using launch.json' 80 | 81 | 82 | start-dev-example: start-dev 83 | @echo 'Starting development environment containers: ump database, geoserver, keycloak, keycloak database and an example modelserver' 84 | docker compose -f docker-compose-dev.yaml up -d modelserver 85 | 86 | restart-dev: 87 | docker compose -f docker-compose-dev.yaml restart 88 | 89 | stop-dev: 90 | docker compose -f docker-compose-dev.yaml stop 91 | 92 | clean-dev: 93 | @echo 'Removing dev containers AND volumes. All data is lost!' 94 | docker compose -f docker-compose-dev.yaml down --volumes 95 | 96 | build-docs: 97 | jupyter-book build docs 98 | 99 | clean-docs: 100 | jupyter-book clean docs 101 | 102 | # Update app version: bump major, minor, or patch 103 | bump-app-version: 104 | @if [ -z "$(part)" ]; then \ 105 | echo "Usage: make bump-app-version part={major|minor|patch}"; \ 106 | exit 1; \ 107 | fi; \ 108 | bump-my-version bump $(part) 109 | 110 | # Update app version: set to a specific version 111 | set-app-version: 112 | @if [ -z "$(version)" ]; then \ 113 | echo "Usage: make set-app-version version={version}"; \ 114 | exit 1; \ 115 | fi; \ 116 | bump-my-version set $(version) 117 | 118 | # Update chart version: bump major, minor, or patch 119 | bump-chart-version: 120 | @if [ -z "$(part)" ]; then \ 121 | echo "Usage: make bump-chart-version part={major|minor|patch}"; \ 122 | exit 1; \ 123 | fi; \ 124 | (cd charts && bump-my-version bump $(part)) -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | Contributions are welcome, and they are greatly appreciated! 4 | Every little bit helps, and credit will always be given. 5 | 6 | If you have a suggestion for improvements, please fork the repo and create a pull request. You can also simply open an issue. Don't forget to star and rate the project! Thanks again! 7 | 8 | 1. Fork the Project 9 | 1.
Create your Feature Branch (git checkout -b feature/AmazingFeature) 10 | 1. Commit your Changes (git commit -m 'Add some AmazingFeature') 11 | 1. Push to the Branch (git push origin feature/AmazingFeature) 12 | 1. Open a Pull Request 13 | 14 | 15 | Here is how to set up the project locally for development: 16 | 17 | 18 | ## Initial setup 19 | You only need two tools, [Poetry](https://github.com/python-poetry/poetry) 20 | and [Copier](https://github.com/copier-org/copier). 21 | 22 | Poetry is used as a package manager. Copier is used for the project structure (scaffolding). 23 | 24 | 25 | Install with pip: 26 | ```bash 27 | python3 -m pip install --user pipx 28 | pipx install poetry 29 | pipx install copier copier-templates-extensions 30 | ``` 31 | 32 | Or create a new environment with conda/mamba: 33 | 34 | ```bash 35 | conda env create -f environment.yaml -p ./.venv 36 | ``` 37 | 38 | If you have a conda environment and want to use the Makefile, use the following command: 39 | ```bash 40 | make initiate-dev 41 | ``` 42 | 43 | A conda `environment.yaml` is provided inside this repo.
44 | 45 | In order to create an external docker network to connect your containers to, run: 46 | `docker network create ump_dev` 47 | 48 | 49 | 50 | 51 | ## Installing dependencies 52 | 53 | Install the project's code and all dependencies with: 54 | 55 | ```bash 56 | poetry install 57 | ``` 58 | 59 | You can add packages with: 60 | ```bash 61 | poetry add PACKAGE-NAME 62 | ``` 63 | 64 | You can remove packages by using: 65 | ```bash 66 | poetry remove PACKAGE-NAME 67 | ``` 68 | 69 | Packages can be updated with: 70 | ```bash 71 | poetry update PACKAGE-NAME 72 | ``` 73 | 74 | In order to run an example modelserver, a git submodule is used and needs to be initiated: 75 | 76 | ```bash 77 | git submodule init 78 | ``` 79 | ```bash 80 | git submodule update --recursive 81 | ``` 82 | 83 | In the submodule folder you can find build instructions to build a container with OGC API Processes compliant example processes based on pygeoapi. Those can be utilized as example processes for the Urban Model Platform. For it to run, `cd moduleserver_example` and run: 84 | - `cp .env.example .env` and set `IMAGE_TAG` to `main` 85 | - `docker compose -f docker-compose-build.yaml build` 86 | 87 | 88 | ## Updating the Documentation 89 | Install the optional docs dependencies with: 90 | 91 | ```bash 92 | poetry install --only=docs 93 | ``` 94 | 95 | Run the build process with: 96 | 97 | ```bash 98 | make build-docs 99 | ``` 100 | 101 | To view the docs, copy the content of the [docs/_build](./docs/_build) folder to a webserver or use VSCode and the [Live server extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.live-server). 102 | 103 | 104 | ## Start Flask App 105 | To start the Flask app, run: 106 | ```bash 107 | flask -A src/ump/main.py --debug run 108 | ``` 109 | 110 | ## Data Storage 111 | 112 | The Urban Model Platform uses Docker containers to run the PostGIS and Geoserver components.
The data is stored in the following folders: 113 | 114 | * `postgresql_data` -> contains the postgres db files 115 | * `geoserver_data` -> contains the Geoserver data dir 116 | 117 | 118 | If you are in development and want to reset all PostGIS and Geoserver data, you can delete the `postgresql_data` and the `geoserver_data` folders. 119 | 120 | ## DB-Migrations 121 | The Urban Model Platform uses Alembic for database migrations. The migration scripts are located in the `migrations` folder. To run the migrations, use the following command: 122 | 123 | ```bash 124 | alembic upgrade head 125 | ``` 126 | This will apply all pending migrations to the database. -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Mac/Windows 2 | */.DS_Store 3 | .DS_Store 4 | 5 | # Project-related 6 | providers.yaml 7 | 8 | # Byte-compiled / optimized / DLL files 9 | __pycache__/ 10 | *.py[cod] 11 | *$py.class 12 | 13 | # C extensions 14 | *.so 15 | 16 | # Distribution / packaging 17 | .Python 18 | build/ 19 | develop-eggs/ 20 | dist/ 21 | downloads/ 22 | eggs/ 23 | .eggs/ 24 | lib/ 25 | lib64/ 26 | parts/ 27 | sdist/ 28 | var/ 29 | wheels/ 30 | share/python-wheels/ 31 | *.egg-info/ 32 | .installed.cfg 33 | *.egg 34 | MANIFEST 35 | 36 | # PyInstaller 37 | # Usually these files are written by a python script from a template 38 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
39 | *.manifest 40 | *.spec 41 | 42 | # Installer logs 43 | pip-log.txt 44 | pip-delete-this-directory.txt 45 | 46 | # Unit test / coverage reports 47 | htmlcov/ 48 | .tox/ 49 | .nox/ 50 | .coverage 51 | .coverage.* 52 | .cache 53 | nosetests.xml 54 | coverage.xml 55 | *.cover 56 | *.py,cover 57 | .hypothesis/ 58 | .pytest_cache/ 59 | cover/ 60 | 61 | # Translations 62 | *.mo 63 | *.pot 64 | 65 | # Django stuff: 66 | *.log 67 | local_settings.py 68 | db.sqlite3 69 | db.sqlite3-journal 70 | 71 | # Flask stuff: 72 | instance/ 73 | .webassets-cache 74 | 75 | # Scrapy stuff: 76 | .scrapy 77 | 78 | # Sphinx documentation 79 | docs/_build/ 80 | 81 | # PyBuilder 82 | .pybuilder/ 83 | target/ 84 | 85 | # Jupyter Notebook 86 | .ipynb_checkpoints 87 | 88 | # IPython 89 | profile_default/ 90 | ipython_config.py 91 | 92 | # pyenv 93 | # For a library or package, you might want to ignore these files since the code is 94 | # intended to run in multiple environments; otherwise, check them in: 95 | # .python-version 96 | 97 | # pipenv 98 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 99 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 100 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 101 | # install all needed dependencies. 102 | #Pipfile.lock 103 | 104 | # poetry 105 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 106 | # This is especially recommended for binary packages to ensure reproducibility, and is more 107 | # commonly ignored for libraries. 108 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 109 | #poetry.lock 110 | 111 | # pdm 112 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 
113 | #pdm.lock 114 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 115 | # in version control. 116 | # https://pdm.fming.dev/#use-with-ide 117 | .pdm.toml 118 | 119 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 120 | __pypackages__/ 121 | 122 | # Celery stuff 123 | celerybeat-schedule 124 | celerybeat.pid 125 | 126 | # SageMath parsed files 127 | *.sage.py 128 | 129 | # Environments 130 | .env 131 | .venv 132 | env/ 133 | venv/ 134 | ENV/ 135 | env.bak/ 136 | venv.bak/ 137 | 138 | # Spyder project settings 139 | .spyderproject 140 | .spyproject 141 | 142 | # Rope project settings 143 | .ropeproject 144 | 145 | # mkdocs documentation 146 | /site 147 | 148 | # mypy 149 | .mypy_cache/ 150 | .dmypy.json 151 | dmypy.json 152 | 153 | # Pyre type checker 154 | .pyre/ 155 | 156 | # pytype static type analyzer 157 | .pytype/ 158 | 159 | # Cython debug symbols 160 | cython_debug/ 161 | 162 | # PyCharm 163 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 164 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 165 | # and can be added to the global gitignore or merged into this file. For a more nuclear 166 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 
167 | #.idea/ 168 | 169 | 170 | # secrets 171 | #secrets/* 172 | 173 | geoserver_data 174 | postgresql_data 175 | scratch/* -------------------------------------------------------------------------------- /docs/UMP-Logo-Text.svg: -------------------------------------------------------------------------------- 1 | URBAN MODEL PLATFORM -------------------------------------------------------------------------------- /src/ump/api/db_handler.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | import psycopg2 as db 4 | import psycopg2.pool 5 | from psycopg2.extras import RealDictCursor 6 | from sqlalchemy import create_engine 7 | 8 | from ump.config import app_settings as config 9 | 10 | logger = logging.getLogger(__name__) 11 | # Note: different parts of the code use different database handling strategies; 12 | # they should be unified at some point! 13 | 14 | # Initialize the connection pool 15 | connection_pool = psycopg2.pool.SimpleConnectionPool( 16 | minconn=1, # Minimum number of connections 17 | maxconn=49, # Maximum number of connections, lower than the Postgres default 18 | database = config.UMP_DATABASE_NAME, 19 | host = config.UMP_DATABASE_HOST, 20 | user = config.UMP_DATABASE_USER, 21 | password = config.UMP_DATABASE_PASSWORD.get_secret_value(), 22 | port = config.UMP_DATABASE_PORT 23 | ) 24 | 25 | db_engine = engine = create_engine( 26 | ( 27 | "postgresql+psycopg2://" 28 | f"{config.UMP_DATABASE_USER}:{config.UMP_DATABASE_PASSWORD.get_secret_value()}" 29 | f"@{config.UMP_DATABASE_HOST}:{config.UMP_DATABASE_PORT}" 30 | f"/{config.UMP_DATABASE_NAME}" 31 | ), 32 | pool_size=49, # Maximum number of connections in the pool 33 | max_overflow=1, # Additional connections allowed beyond pool_size 34 | pool_timeout=30, # Timeout for getting a connection from the pool 35 | pool_recycle=3600, # Recycle connections after 1 hour 36 | ) 37 | 38 | def close_pool(): 39 | """Close the connection pool.""" 40 | global connection_pool
41 | 42 | if connection_pool: 43 | try: 44 | connection_pool.closeall() 45 | connection_pool = None # Mark the pool as closed 46 | logger.info("Connection pool closed.") 47 | except psycopg2.pool.PoolError as e: 48 | logger.warning("Connection pool is already closed: %s", e) 49 | 50 | class DBHandler(): 51 | def __init__(self): 52 | self.connection = connection_pool.getconn() 53 | 54 | def set_sortable_columns(self, sortable_columns): 55 | self.sortable_columns = sortable_columns 56 | 57 | def run_query( 58 | self, 59 | query, 60 | conditions=None, 61 | query_params=None, 62 | order=None, 63 | limit=None, 64 | page=None 65 | ): 66 | if query_params is None: 67 | query_params = {} 68 | if conditions: 69 | query += " WHERE " + " AND ".join(conditions) 70 | 71 | if order and set(order).issubset(set(getattr(self, 'sortable_columns', []))): 72 | query += f" ORDER BY {', '.join(order)} DESC" 73 | elif order: 74 | logger.debug( 75 | " --> Could not order by %s: the columns are not sortable or" + 76 | " sortable_columns hasn't been set. Please call set_sortable_columns first!", 77 | order 78 | ) 79 | 80 | if limit: 81 | offset = 0 82 | if page: 83 | offset = (page - 1) * limit 84 | 85 | query += " LIMIT %(limit)s OFFSET %(offset)s" 86 | query_params['limit'] = limit 87 | query_params['offset'] = offset 88 | 89 | with self.connection: 90 | with self.connection.cursor(cursor_factory = RealDictCursor) as cursor: 91 | cursor.execute(query, query_params) 92 | try: 93 | results = cursor.fetchall() 94 | except db.ProgrammingError as e: 95 | if str(e) == "no results to fetch": 96 | return 97 | else: 98 | raise 99 | 100 | return results 101 | 102 | # needed so that this class can be used as a context manager 103 | def __enter__(self): 104 | return self 105 | 106 | def __exit__(self, exc_type, value, traceback): 107 | if self.connection: 108 | connection_pool.putconn(self.connection) 109 | 110 | if exc_type is None and value is None and traceback is None: 111 | return True 112 | 113 | logger.error("%s: %s - %s", exc_type,
value, traceback) 114 | return False 115 | -------------------------------------------------------------------------------- /docker-compose-prod.yaml: -------------------------------------------------------------------------------- 1 | networks: 2 | dev: 3 | external: true 4 | name: ${DOCKER_NETWORK} 5 | 6 | volumes: 7 | postgres_data: 8 | 9 | services: 10 | api: 11 | image: ${CONTAINER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} 12 | restart: "unless-stopped" 13 | ports: 14 | - ${WEBAPP_PORT_EXTERNAL}:5000 15 | environment: 16 | API_SERVER_URL: ${API_SERVER_URL} 17 | NUMBER_OF_WORKERS: ${NUMBER_OF_WORKERS} 18 | LOGLEVEL: ${LOGLEVEL} 19 | FLASK_DEBUG: ${FLASK_DEBUG} 20 | PROVIDERS_FILE: /home/pythonuser/providers.yaml 21 | CORS_URL_REGEX: ${CORS_URL_REGEX} 22 | KEYCLOAK_USER: ${KEYCLOAK_USER} 23 | KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD} 24 | KEYCLOAK_HOST: ${KEYCLOAK_HOST} 25 | KEYCLOAK_PROTOCOL: ${KEYCLOAK_PROTOCOL} 26 | KEYCLOAK_PORT_EXTERNAL: ${KEYCLOAK_PORT_EXTERNAL} 27 | POSTGRES_DB: ${POSTGRES_DB} 28 | POSTGRES_USER: ${POSTGRES_USER} 29 | POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} 30 | PGDATA: ${PGDATA} 31 | POSTGRES_HOST: ${POSTGRES_HOST} 32 | POSTGRES_PORT: 5432 33 | GEOSERVER_WORKSPACE: ${GEOSERVER_WORKSPACE} 34 | GEOSERVER_ADMIN_USER: ${GEOSERVER_ADMIN_USER} 35 | GEOSERVER_ADMIN_PASSWORD: ${GEOSERVER_ADMIN_PASSWORD} 36 | GEOSERVER_BASE_URL: ${GEOSERVER_BASE_URL} 37 | GEOSERVER_POSTGIS_HOST: ${GEOSERVER_POSTGIS_HOST} 38 | GEOSERVER_TIMEOUT: ${GEOSERVER_TIMEOUT} 39 | CLEANUP_AGE: ${CLEANUP_AGE} 40 | 41 | volumes: 42 | - ./providers.yaml:/home/pythonuser/providers.yaml 43 | networks: 44 | - dev 45 | depends_on: 46 | - keycloak 47 | - postgis 48 | 49 | postgis: 50 | image: postgis/postgis:14-3.3 51 | volumes: 52 | - ./src/ump/initializers/db:/docker-entrypoint-initdb.d:delegated 53 | - postgres_data:/var/lib/postgresql/data/ 54 | environment: 55 | POSTGRES_DB: postgres 56 | POSTGRES_USER: ${POSTGRES_USER} 57 | POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} 58 | ports: 59 | - 
${POSTGRES_PORT}:5432 60 | networks: 61 | - dev 62 | 63 | geoserver: 64 | image: kartoza/geoserver:2.22.0 65 | depends_on: 66 | - postgis 67 | ports: 68 | - ${GEOSERVER_PORT_EXTERNAL}:8080 69 | volumes: 70 | - ./geoserver_data:/opt/geoserver/data_dir 71 | environment: 72 | GEOSERVER_WORKSPACE: ${GEOSERVER_WORKSPACE} 73 | GEOSERVER_ADMIN_USER: ${GEOSERVER_ADMIN_USER} 74 | GEOSERVER_ADMIN_PASSWORD: ${GEOSERVER_ADMIN_PASSWORD} 75 | GEOSERVER_BASE_URL: ${GEOSERVER_BASE_URL} 76 | GEOSERVER_POSTGIS_HOST: ${GEOSERVER_POSTGIS_HOST} 77 | GEOSERVER_TIMEOUT: ${GEOSERVER_TIMEOUT} 78 | 79 | networks: 80 | - dev 81 | 82 | # ToDo: Refactor Model Server with public image to deploy 83 | modelserver: 84 | image: lgvudh.azurecr.io/analytics/example_ogcapi_processes:main 85 | env_file: 86 | - path: .env 87 | required: true # default 88 | volumes: 89 | - ./modelserver_example/pygeoapi-config.yml:/home/pythonuser/pygeoapi-config.yaml 90 | - ./modelserver_example/example-openapi.yml:/home/pythonuser/pygeoapi-openapi.yaml 91 | ports: 92 | - ${PYGEOAPI_SERVER_PORT_EXTERNAL}:${PYGEOAPI_SERVER_PORT_INTERNAL} 93 | command: [ 94 | '/bin/bash', '-c', 95 | 'pygeoapi openapi generate /home/pythonuser/pygeoapi-config.yaml --output-file /home/pythonuser/pygeoapi-openapi.yaml && pygeoapi serve --flask' 96 | ] 97 | networks: 98 | - dev 99 | 100 | keycloak: 101 | container_name: ump-keycloak 102 | image: quay.io/keycloak/keycloak:25.0 103 | ports: 104 | - ${KEYCLOAK_PORT_EXTERNAL}:8080 105 | environment: 106 | KEYCLOAK_ADMIN: ${KEYCLOAK_USER} 107 | KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_PASSWORD} 108 | KC_DB: postgres 109 | KC_DB_URL_HOST: postgis 110 | KC_DB_URL_PORT: 5432 111 | KC_DB_URL_DATABASE: keycloak 112 | KC_DB_USERNAME: ${POSTGRES_USER} 113 | KC_DB_PASSWORD: ${POSTGRES_PASSWORD} 114 | KC_HOSTNAME: ${KEYCLOAK_HOST} 115 | KC_HOSTNAME_PATH: /auth 116 | KC_HTTP_RELATIVE_PATH: /auth 117 | depends_on: 118 | - postgis 119 | command: ['start', '--proxy-headers', 'xforwarded', '--http-enabled', 'true'] 
120 | networks: 121 | - dev 122 | -------------------------------------------------------------------------------- /src/ump/api/jobs.py: -------------------------------------------------------------------------------- 1 | import re 2 | 3 | from sqlalchemy import select 4 | from sqlalchemy.orm import Session 5 | 6 | from ump import utils 7 | from ump.api.db_handler import DBHandler 8 | from ump.api.db_handler import db_engine as engine 9 | from ump.api.models.ensemble import Ensemble, JobsEnsembles 10 | from ump.api.models.job import Job 11 | from ump.api.models.job_status import JobStatus 12 | from ump.config import app_settings 13 | 14 | def append_ensemble_list(job): 15 | with Session(engine) as session: 16 | stmt = select(JobsEnsembles).where(JobsEnsembles.job_id == job['jobID']) 17 | ids = session.scalars(stmt).fetchall() 18 | results = [] 19 | for id_pair in ids: 20 | results.append(id_pair.ensemble_id) 21 | stmt = select(Ensemble).where(Ensemble.id.in_(results)) 22 | ensembles = session.scalars(stmt).fetchall() 23 | job['ensembles'] = [] 24 | for ensemble in ensembles: 25 | job['ensembles'].append(ensemble.to_dict()) 26 | 27 | def get_jobs(args, user = None): 28 | page = int(args["page"][0]) if "page" in args else 1 29 | limit = int(args["limit"][0]) if "limit" in args else None 30 | 31 | jobs = [] 32 | query = """ 33 | SELECT j.job_id FROM jobs j left join jobs_users u on j.job_id = u.job_id 34 | """ 35 | query_params = {} 36 | conditions = [] 37 | user = None if user is None else user['sub'] 38 | if user is not None: 39 | conditions.append(f"(j.user_id = '{user}' or u.user_id = '{user}')") # parenthesized: conditions are joined with AND 40 | else: 41 | conditions.append('j.user_id is null') 42 | 43 | if 'processID' in args and args['processID']: 44 | # this processID is actually the process_id_with_prefix!!!
45 | # we cannot change the name because it would not be OGC Processes compliant anymore 46 | process_ids = [] 47 | 48 | for process_id_with_prefix in args['processID']: 49 | match = re.search(r'(.*):(.*)', process_id_with_prefix) 50 | provider_prefix = match.group(1) 51 | process_ids.append(match.group(2)) 52 | 53 | conditions.append("process_id IN %(process_id)s") 54 | query_params['process_id'] = tuple(process_ids) 55 | 56 | conditions.append("provider_prefix = %(provider_prefix)s") 57 | query_params['provider_prefix'] = provider_prefix 58 | 59 | if 'status' in args: 60 | query_params['status'] = tuple(args['status']) 61 | 62 | else: 63 | query_params['status'] = ( 64 | JobStatus.running.value, JobStatus.successful.value, 65 | JobStatus.failed.value, JobStatus.dismissed.value 66 | ) 67 | conditions.append("status IN %(status)s") 68 | 69 | with DBHandler() as db: 70 | db.set_sortable_columns(Job.SORTABLE_COLUMNS) 71 | 72 | job_ids = db.run_query(query, 73 | conditions = conditions, 74 | query_params = query_params, 75 | order = ['created'], 76 | limit = limit, 77 | page = page 78 | ) 79 | 80 | for row in job_ids: 81 | job = Job(row['job_id'], user) 82 | jobs.append(job.display()) 83 | 84 | count_jobs = count(conditions, query_params) 85 | links = next_links(page, limit, count_jobs) 86 | 87 | return { "jobs": jobs, "links": links, "total_count": count_jobs } 88 | 89 | def next_links(page, limit, count_jobs): 90 | if not limit or count_jobs <= limit: 91 | return [] 92 | 93 | links = [] 94 | if count_jobs > page * limit: # only add a next link if another page actually exists 95 | links.append({ 96 | "href": utils.join_url_parts( 97 | app_settings.UMP_API_SERVER_URL, 98 | app_settings.UMP_API_SERVER_URL_PREFIX, 99 | f"jobs?page={page+1}&limit={limit}" 100 | ), 101 | "rel": "service", 102 | "type": "application/json", 103 | "hreflang": "en", 104 | "title": "Next page of jobs."
105 | }) 106 | 107 | if page > 1: 108 | links.append({ 109 | "href": utils.join_url_parts( 110 | app_settings.UMP_API_SERVER_URL, 111 | app_settings.UMP_API_SERVER_URL_PREFIX, 112 | f"jobs?page={page-1}&limit={limit}" 113 | ), 114 | "rel": "service", 115 | "type": "application/json", 116 | "hreflang": "en", 117 | "title": "Previous page of jobs." 118 | }) 119 | 120 | return links 121 | 122 | def count(conditions, query_params): 123 | count_query = """ 124 | SELECT count(*) FROM jobs j left join jobs_users u on j.job_id = u.job_id 125 | """ 126 | with DBHandler() as db: 127 | count_jobs = db.run_query( 128 | count_query, 129 | conditions=conditions, 130 | query_params=query_params 131 | ) 132 | return count_jobs[0]['count'] 133 | -------------------------------------------------------------------------------- /docker-compose-dev.yaml: -------------------------------------------------------------------------------- 1 | # !NOTE: This is NOT a production-ready docker-compose file. It is meant for development purposes only. 2 | networks: 3 | dev: 4 | external: true 5 | name: ${DOCKER_NETWORK} 6 | 7 | volumes: 8 | postgres_data: 9 | kc-db_data: 10 | geoserver_data: 11 | 12 | services: 13 | # !NOTE: this is for testing the current app version in a container only! 14 | # development is executed locally 15 | # and the prod deployment differs anyway!
16 | api: 17 | image: ${CONTAINER_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} 18 | restart: "unless-stopped" 19 | ports: 20 | - ${WEBAPP_PORT_EXTERNAL}:5000 21 | environment: 22 | UMP_API_SERVER_URL: localhost:${WEBAPP_PORT_EXTERNAL} 23 | UMP_API_SERVER_WORKERS: 1 24 | UMP_SERVER_TIMEOUT: 30 # seconds 25 | UMP_LOG_LEVEL: DEBUG 26 | UMP_PROVIDERS_FILE: /home/pythonuser/providers.yaml 27 | UMP_KEYCLOAK_USER: admin 28 | UMP_KEYCLOAK_PASSWORD: admin 29 | UMP_KEYCLOAK_URL: http://keycloak:8080/auth 30 | UMP_KEYCLOAK_CLIENT_ID: ump-client 31 | UMP_KEYCLOAK_REALM: UrbanModelPlatform 32 | UMP_DATABASE_NAME: ump 33 | UMP_DATABASE_USER: ump 34 | UMP_DATABASE_PASSWORD: ump 35 | UMP_DATABASE_HOST: api-db 36 | UMP_DATABASE_PORT: 5432 37 | UMP_GEOSERVER_WORKSPACE_NAME: ${UMP_GEOSERVER_WORKSPACE_NAME} # !using the same as for the local app 38 | UMP_GEOSERVER_USER: admin 39 | UMP_GEOSERVER_PASSWORD: geoserver 40 | UMP_GEOSERVER_URL: http://geoserver:8080/geoserver 41 | #--- 42 | # using the same db as for the api, to ease dev setup 43 | # for production usage consider dedicated databases! 44 | UMP_GEOSERVER_DB_PORT: 5432 45 | UMP_GEOSERVER_DB_NAME: ump 46 | UMP_GEOSERVER_DB_USER: ump 47 | UMP_GEOSERVER_DB_PASSWORD: ump 48 | UMP_GEOSERVER_DB_HOST: api-db 49 | #--- 50 | UMP_GEOSERVER_CONNECTION_TIMEOUT: 60 # seconds 51 | UMP_JOB_DELETE_INTERVAL: 240 # minutes 52 | volumes: 53 | - ./providers.yaml:/home/pythonuser/providers.yaml 54 | networks: 55 | - dev 56 | 57 | # !NOTE: here starts the dev environment setup! 
58 | # these apps are required dependencies of the UMP 59 | # and are not part of the UMP itself 60 | api-db: 61 | image: postgis/postgis:14-3.3 62 | volumes: 63 | - postgres_data:/var/lib/postgresql/data/ 64 | environment: 65 | POSTGRES_DB: ump 66 | POSTGRES_USER: ump 67 | POSTGRES_PASSWORD: ump 68 | ports: 69 | - ${API_DB_PORT_EXTERNAL}:5432 70 | networks: 71 | - dev 72 | 73 | # !NOTE: using the same database for geoserver and ump to ease dev setup 74 | # for production usage consider dedicated databases! 75 | geoserver: 76 | image: kartoza/geoserver:2.22.0 77 | ports: 78 | - ${GEOSERVER_PORT_EXTERNAL}:8080 79 | volumes: 80 | - geoserver_data:/opt/geoserver/data_dir 81 | environment: 82 | GEOSERVER_ADMIN_USER: admin 83 | GEOSERVER_ADMIN_PASSWORD: geoserver 84 | #--- 85 | # using the same db as for the api, to ease dev setup 86 | # for production usage consider dedicated databases! 87 | HOST: api-db 88 | POSTGRES_DB: ump 89 | POSTGRES_USER: ump 90 | POSTGRES_PASS: ump 91 | #--- 92 | networks: 93 | - dev 94 | 95 | modelserver: 96 | image: ${CONTAINER_REGISTRY}/${IMAGE_NAME}/example_ogcapi_processes:main 97 | build: 98 | context: ./modelserver_example 99 | dockerfile: Dockerfile 100 | environment: 101 | PYGEOAPI_CONFIG: /home/pythonuser/pygeoapi-config.yaml 102 | PYGEOAPI_OPENAPI: /home/pythonuser/pygeoapi-openapi.yaml 103 | PYGEOAPI_SERVER_HOST: ${PYGEOAPI_SERVER_HOST} 104 | PYGEOAPI_SERVER_PORT_INTERNAL: ${PYGEOAPI_SERVER_PORT_INTERNAL} 105 | PYGEOAPI_SERVER_PORT_EXTERNAL: ${PYGEOAPI_SERVER_PORT_EXTERNAL} 106 | ports: 107 | - ${PYGEOAPI_SERVER_PORT_EXTERNAL}:${PYGEOAPI_SERVER_PORT_INTERNAL} 108 | command: [ 109 | '/bin/bash', '-c', 110 | 'pygeoapi openapi generate /home/pythonuser/pygeoapi-config.yaml --output-file /home/pythonuser/pygeoapi-openapi.yaml && pygeoapi serve --flask' 111 | ] 112 | networks: 113 | - dev 114 | 115 | keycloak: 116 | image: quay.io/keycloak/keycloak:25.0 117 | ports: 118 | - ${KEYCLOAK_PORT_EXTERNAL}:8080 119 | environment: 120 |
KEYCLOAK_ADMIN: admin 121 | KEYCLOAK_ADMIN_PASSWORD: admin 122 | KC_DB: postgres 123 | KC_DB_URL_HOST: kc-db 124 | KC_DB_URL_PORT: 5432 125 | KC_DB_URL_DATABASE: keycloak 126 | KC_DB_USERNAME: keycloak 127 | KC_DB_PASSWORD: keycloak 128 | KC_HOSTNAME: localhost 129 | KC_HOSTNAME_PATH: /auth 130 | KC_HTTP_RELATIVE_PATH: /auth 131 | command: ['start', '--proxy-headers', 'xforwarded', '--http-enabled', 'true'] 132 | networks: 133 | - dev 134 | 135 | kc-db: 136 | image: postgis/postgis:14-3.3 137 | volumes: 138 | - kc-db_data:/var/lib/postgresql/data/ 139 | environment: 140 | POSTGRES_DB: keycloak 141 | POSTGRES_USER: keycloak 142 | POSTGRES_PASSWORD: keycloak 143 | expose: 144 | - 5432 145 | networks: 146 | - dev -------------------------------------------------------------------------------- /src/ump/api/routes/jobs.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import json 3 | import logging 4 | from datetime import datetime, timezone 5 | 6 | from apiflask import APIBlueprint 7 | from flask import Response, g, request 8 | from sqlalchemy import or_, select 9 | from sqlalchemy.orm import Session 10 | from ump.api.models.ensemble import JobsUsers 11 | from ump.api.models.job import Job 12 | from ump.api.models.job_comments import JobComment 13 | from ump.api.jobs import append_ensemble_list, get_jobs 14 | from ump.api.keycloak_utils import find_user_id_by_email 15 | from ump.api.db_handler import engine 16 | 17 | jobs = APIBlueprint("jobs", __name__) 18 | 19 | @jobs.route("/", defaults={"page": "index"}) 20 | def index(page): 21 | args = request.args.to_dict(flat=False) if request.args else {} 22 | result = get_jobs(args, g.get("auth_token")) 23 | if "include_ensembles" in args and args["include_ensembles"]: 24 | for job in result["jobs"]: 25 | append_ensemble_list(job) 26 | return Response(json.dumps(result), mimetype="application/json") 27 | 28 | 29 | @jobs.route("/<job_id>/results", methods=["GET"]) 30 | def
get_results(job_id=None): 31 | auth = g.get("auth_token") 32 | job = Job(job_id, None if auth is None else auth["sub"]) 33 | return Response(json.dumps(asyncio.run(job.results())), mimetype="application/json") 34 | 35 | 36 | @jobs.route("/<job_id>/users", methods=["GET"]) 37 | def get_users(job_id=None): 38 | """Get all users that have access to a job""" 39 | auth = g.get("auth_token") 40 | if auth is None: 41 | return Response("[]", mimetype="application/json") 42 | with Session(engine) as session: 43 | stmt = select(JobsUsers).where(JobsUsers.job_id == job_id) 44 | users = [] 45 | for user in session.scalars(stmt).fetchall(): 46 | users.append(user.to_dict()) 47 | return users 48 | 49 | 50 | @jobs.route("/<job_id>/share/<email>", methods=["GET"]) 51 | def share(job_id=None, email=None): 52 | """Share a job with another user""" 53 | auth = g.get("auth_token") 54 | user_id = find_user_id_by_email(email) 55 | if user_id is None: 56 | logging.error("Unable to find user by email %s.", email) 57 | return Response(status=404) 58 | if auth is None: 59 | logging.error("Authentication token is missing.") 60 | return Response(status=401) 61 | 62 | own_user_id = auth["sub"] 63 | 64 | job = Job(job_id, own_user_id) 65 | if job is None: 66 | logging.error("Unable to find job with id %s.", job_id) 67 | return Response(status=404) 68 | 69 | with Session(engine) as session: 70 | own_entry = JobsUsers(job_id=job_id, user_id=own_user_id) 71 | session.add(own_entry) 72 | 73 | shared_entry = JobsUsers(job_id=job_id, user_id=user_id) 74 | session.add(shared_entry) 75 | 76 | session.commit() 77 | return Response(status=201) 78 | 79 | 80 | @jobs.route("/<job_id>/comments", methods=["GET"]) 81 | def get_comments(job_id): 82 | """Get all comments for a job""" 83 | auth = g.get("auth_token") 84 | if auth is None: 85 | return Response("[]", mimetype="application/json") 86 | with Session(engine) as session: 87 | stmt = ( 88 | select(JobComment) 89 | .distinct() 90 | .join(JobsUsers, JobsUsers.job_id
== JobComment.job_id, isouter=True) 91 | .where( 92 | or_(JobComment.user_id == auth["sub"], JobsUsers.user_id == auth["sub"]) 93 | ) 94 | .where(JobComment.job_id == job_id) 95 | ) 96 | results = [] 97 | for comment in session.scalars(stmt).fetchall(): 98 | results.append(comment.to_dict()) 99 | return results 100 | 101 | 102 | @jobs.route("/<job_id>/comments", methods=["POST"]) 103 | def create_comment(job_id): 104 | """Create a comment for a job""" 105 | auth = g.get("auth_token") 106 | if auth is None: 107 | logging.error("Not creating comment, no authentication found.") 108 | return Response( 109 | '{"error_message": "not authenticated"}', 110 | mimetype="application/json", 111 | status=401, 112 | ) 113 | comment = JobComment( 114 | user_id=auth["sub"], 115 | job_id=job_id, 116 | comment=request.get_json()["comment"], 117 | created=datetime.now(timezone.utc), 118 | modified=datetime.now(timezone.utc), 119 | ) 120 | with Session(engine) as session: 121 | session.add(comment) 122 | session.commit() 123 | return Response( 124 | json.dumps(comment.to_dict()), mimetype="application/json", status=201 125 | ) 126 | 127 | 128 | @jobs.route("/<job_id>", methods=["GET"]) 129 | def show(job_id=None): 130 | auth = g.get("auth_token") 131 | if request.args.get("additionalMetadata") == "true": 132 | job = Job(job_id, None if auth is None else auth["sub"]).display(additional_metadata=True) 133 | else: 134 | job = Job(job_id, None if auth is None else auth["sub"]).display() 135 | append_ensemble_list(job) 136 | return Response(json.dumps(job), mimetype="application/json") 137 | -------------------------------------------------------------------------------- /src/ump/utils.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | 4 | import aiohttp 5 | 6 | from ump.api.models.ogc_exception import OGCExceptionResponse 7 | from ump.errors import OGCProcessException 8 | 9 | 10 | logger = logging.getLogger(__name__) 11 | 12 |
def join_url_parts(*parts): 13 | return '/'.join(str(part).strip('/') for part in parts if part != '') 14 | 15 | async def fetch_response_content( 16 | response: aiohttp.ClientResponse 17 | ) -> tuple[str | dict, str, int]: 18 | """ 19 | Reads the content of an aiohttp response, handling both JSON and text, 20 | regardless of HTTP status code. 21 | Returns a tuple: (content, content_type, status_code) 22 | """ 23 | content_type = response.headers.get("Content-Type", "") 24 | status = response.status 25 | 26 | try: 27 | if "application/json" in content_type: 28 | content = await response.json() 29 | else: 30 | content = await response.text() 31 | except Exception: 32 | # If JSON parsing fails, fallback to text 33 | content = await response.text() 34 | content_type = "text/plain" 35 | 36 | return content, content_type, status 37 | 38 | # TODO: retry on timeouts, connection errors, etc. 39 | async def fetch_json( 40 | session: aiohttp.ClientSession, url, 41 | raise_for_status=False, **kwargs 42 | ) -> dict: 43 | try: 44 | async with session.get(url, **kwargs) as response: 45 | try: 46 | # is the response JSON? 47 | response_data = await response.json() 48 | 49 | except aiohttp.ContentTypeError: 50 | text = await response.text() 51 | logger.error( 52 | "Invalid JSON response from remote service. URL: %s, Content: %s", 53 | url, text[:500] 54 | ) 55 | raise OGCProcessException( 56 | OGCExceptionResponse( 57 | type="about:blank", 58 | title="Invalid Response Content", 59 | status=502, 60 | detail=( 61 | "The response from the remote service was not " 62 | f"valid JSON: '{text[:100]}'" 63 | ), 64 | instance=None 65 | ) 66 | ) 67 | 68 | # is the response ok? 69 | if raise_for_status: 70 | response.raise_for_status() 71 | 72 | # if all went good, go! 73 | return response_data 74 | 75 | except asyncio.TimeoutError: 76 | logger.error( 77 | "Timeout when requesting remote service. 
URL: %s", 78 | url 79 | ) 80 | raise OGCProcessException( 81 | OGCExceptionResponse( 82 | type="about:blank", 83 | title="Upstream Timeout", 84 | status=504, 85 | detail="The request to the remote service timed out.", 86 | instance=None 87 | ) 88 | ) 89 | except aiohttp.ClientResponseError as e: 90 | if e.status == 401: 91 | logger.warning( 92 | "Authentication failed when requesting remote service. URL: %s, Error: %s", 93 | url, str(e) 94 | ) 95 | raise OGCProcessException( 96 | OGCExceptionResponse( 97 | type="about:blank", 98 | title="Authentication Failed", 99 | status=401, 100 | detail="Authentication with the remote service failed.", 101 | instance=None 102 | ) 103 | ) 104 | logger.error( 105 | "HTTP error when requesting remote service. URL: %s, Status: %s, Error: %s", 106 | url, e.status, str(e) 107 | ) 108 | raise OGCProcessException( 109 | OGCExceptionResponse( 110 | type="about:blank", 111 | title="Upstream HTTP Error", 112 | status=e.status, 113 | detail=f"The remote service returned an HTTP error: {response_data}", 114 | instance=None 115 | ) 116 | ) 117 | except aiohttp.ClientError as e: 118 | logger.error( 119 | "Connection error when requesting remote service. URL: %s, Error: %s", 120 | url, str(e) 121 | ) 122 | raise OGCProcessException( 123 | OGCExceptionResponse( 124 | type="about:blank", 125 | title="Upstream Connection Error", 126 | status=502, 127 | detail="There was a connection error with the remote service.", 128 | instance=None 129 | ) 130 | ) 131 | except Exception as e: 132 | logger.error( 133 | "Unexpected error for remote service. 
URL: %s, Error: %s", 134 | url, str(e) 135 | ) 136 | raise OGCProcessException( 137 | OGCExceptionResponse( 138 | type="about:blank", 139 | title="Internal Server Error", 140 | status=500, 141 | detail="An unexpected error occurred while processing your request.", 142 | instance=None 143 | ) 144 | ) -------------------------------------------------------------------------------- /src/ump/api/models/providers_config.py: -------------------------------------------------------------------------------- 1 | from typing import Annotated, Literal, TypeAlias 2 | 3 | from pydantic import ( 4 | BaseModel, Field, HttpUrl, SecretStr, TypeAdapter, 5 | field_validator, model_validator 6 | ) 7 | 8 | # a type alias to give context to an otherwise generic str 9 | ProviderName: TypeAlias = Annotated[str, Field( 10 | description= ( 11 | "The name of the provider. " 12 | "This should be a valid identifier." 13 | ) 14 | )] 15 | 16 | 17 | class GraphProperties(BaseModel): 18 | root_path: str = Field( 19 | alias="root-path", 20 | description= ( 21 | "If the results are stored in Geoserver, " 22 | "you can specify the object path to the " 23 | "feature collection using root-path. " 24 | "Use dots to separate a path with several " 25 | "components: root-path: result.some_obj.some_features." 26 | ) 27 | ) 28 | x_path: str = Field( 29 | alias="x-path", 30 | description= ( 31 | "If the results are stored in Geoserver, " 32 | "you can specify the object path to the " 33 | "feature collection using x-path. " 34 | "Use dots to separate a path with several " 35 | "components: x-path: result.some_obj.some_features." 36 | ) 37 | ) 38 | y_path: str = Field( 39 | alias="y-path", 40 | description= ( 41 | "If the results are stored in Geoserver, " 42 | "you can specify the object path to the " 43 | "y values of the graph using y-path."
44 | ) 45 | ) 46 | 47 | class ProcessConfig(BaseModel): 48 | description: str | None = None 49 | version: str | None = None 50 | result_storage: Literal["geoserver", "remote"] = Field(alias="result-storage") 51 | exclude: bool = False 52 | result_path: str | None = Field( 53 | default=None, 54 | alias="result-path", 55 | description= ( 56 | "If the results should be stored in Geoserver, " 57 | "you can specify the object path to the " 58 | "feature collection using result-path. " 59 | "Use dots to separate a path with several " 60 | "components: result-path: result.some_obj.some_features." 61 | ) 62 | ) 63 | graph_properties: GraphProperties | None = Field( 64 | default=None, 65 | alias="graph-properties", 66 | description= ( 67 | "If the results are stored in Geoserver, " 68 | "you can specify the graph properties using " 69 | "graph-properties." 70 | ) 71 | ) 72 | anonymous_access: bool = Field( 73 | alias="anonymous-access", default=False, 74 | description= ( 75 | "If set to True, the process can be seen and run " 76 | "by anonymous users. Jobs and layers created " 77 | "by anonymous users will be cleaned up after some time." 78 | ) 79 | ) 80 | deterministic: bool = Field( 81 | default=False, 82 | description= ( 83 | "If set to True, the process is regarded as deterministic. " 84 | "This means that such a process will always produce " 85 | "the same result for the same input. So, outputs can be " 86 | "cached based on inputs." 87 | ) 88 | ) 89 | 90 | @model_validator(mode="after") 91 | def validate_result_path_for_geoserver(self): 92 | """Ensure result-path is set if result-storage is 'geoserver'.""" 93 | if self.result_storage == "geoserver" and not self.result_path: 94 | raise ValueError("result-path must be set when result-storage is 'geoserver'.") 95 | return self 96 | 97 | class Authentication(BaseModel): 98 | type: Literal["BasicAuth"] 99 | user: str 100 | password: SecretStr 101 | 102 | class ProviderConfig(BaseModel): 103 | name: str 104 | server_url: HttpUrl = Field( 105 | alias="url", 106 | description= ( 107 | "The URL of the model server pointing to an OGC API Processes API. " 108 | "It should be a valid HTTP or HTTPS URL with a path to the landing page." 109 | ) 110 | ) 111 | timeout: int = Field( 112 | default=60, 113 | description= ( 114 | "Timeout in seconds for the model server. " 115 | "Default is 60 seconds." 116 | ) 117 | ) 118 | authentication: Authentication | None = None 119 | processes: dict[ProviderName, ProcessConfig] = Field( 120 | description= ( 121 | "Processes are defined as a dictionary with process names as keys " 122 | "and process properties as values." 123 | ) 124 | ) 125 | 126 | @field_validator("server_url", mode="before") 127 | def ensure_trailing_slash(cls, value: str) -> HttpUrl: 128 | """Ensure server_url has a trailing slash.""" 129 | 130 | if not str(value).endswith("/"): 131 | value += "/" 132 | return HttpUrl(value) 133 | 134 | # a TypeAlias to give context to an otherwise generic dict 135 | ModelServers: TypeAlias = Annotated[ 136 | dict[str, ProviderConfig], 137 | Field( 138 | description= ( 139 | "A dictionary of model servers with their names as keys and " 140 | "ProviderConfig objects as values."
141 | ) 142 | ) 143 | ] 144 | 145 | # a TypeAdapter allows us to use pydantics model_validate method 146 | # on arbitrary python types 147 | model_servers_adapter: TypeAdapter[ModelServers] = TypeAdapter(ModelServers) 148 | 149 | if __name__ == "__main__": 150 | 151 | print(model_servers_adapter.json_schema()) 152 | -------------------------------------------------------------------------------- /docs/content/03-architecture/api.md: -------------------------------------------------------------------------------- 1 | (API)= 2 | # Flask API 3 | 4 | The Urban Model Platform is built on top of the OGC API Processes standard. It provides a RESTful API for managing and executing processes on various model servers. 5 | 6 | ## OGC API Processes 7 | This API is built on the OGC API Processes standard. To learn more about the standard, please refer to the [OGC API Processes - Part 1: Core](https://docs.ogc.org/is/18-062r2/18-062r2.html). 8 | 9 | ## API Endpoints 10 | The API provides several endpoints for managing and executing processes. We extended the API Processes standard to include additional endpoints for managing jobs and ensembles. The following table summarizes the available endpoints: 11 | 12 | 13 | ### Top-level Endpoints 14 | | Endpoint | Method | Description | Required by OGC API Processes | 15 | |------------------------------|--------|-----------------------------------------------------------------------------|-----------------------------------| 16 | | `/` | GET | Retrieve the API root information. | ✅ | 17 | | `/processes` | GET | Retrieve a list of available processes. [See more](processes) | ✅ | 18 | | `/jobs` | GET | Retrieve a list of jobs. [See more](jobs) | ✅ | 19 | | `/ensembles` | GET | Retrieve a list of ensembles. [See more](ensembles) | ❌ | 20 | | `/ready` | GET | Check the readiness of the application. | ❌ | 21 | 22 | 23 | ```{warning} 24 | Currently, there is no HTML landing page implemented. 
25 | ``` 26 | 27 | 28 | ```{warning} 29 | Currently, the conformance classes endpoint required by OGC API Processes is not implemented. 30 | ``` 31 | 32 | To learn more about all available routes, please see below: 33 | 34 | (processes)= 35 | ### Processes 36 | 37 | | Endpoint | Method | Description | Required by OGC API Processes | 38 | |------------------------------|--------|-----------------------------------------------------------------------------|-----------------------------------| 39 | | `/processes` | GET | Retrieve a list of available processes. | ✅ | 40 | | `/processes/{id}` | GET | Retrieve details of a specific process by its ID, such as its input and output parameters. | ✅ | 41 | | `/processes/{id}/execution` | POST | Execute a specific process. | ✅ | 42 | | `/processes/providers` | GET | Retrieve the providers configuration. | ❌ | 43 | 44 | (jobs)= 45 | ### Jobs 46 | 47 | | Endpoint | Method | Description | Required by OGC API Processes | 48 | |------------------------------|--------|-----------------------------------------------------------------------------|-----------------------------------| 49 | | `/jobs` | GET | Retrieve a list of jobs. | ✅ | 50 | | `/jobs/{id}` | GET | Retrieve details of a specific job by its ID. | ✅ | 51 | | `/jobs/{id}/results` | GET | Retrieve the results of a specific job. | ✅ | 52 | | `/jobs/{id}/users` | GET | Retrieve all users that have access to a specific job. | ❌ | 53 | | `/jobs/{id}/comments` | GET | Retrieve all comments for a specific job. | ❌ | 54 | | `/jobs/{id}/comments` | POST | Create a comment for a specific job. | ❌ | 55 | | `/jobs/{id}/share/{email}` | GET | Share a specific job with another user. | ❌ | 56 | 57 | 58 | (ensembles)= 59 | ### Ensembles 60 | Ensembles are collections of jobs that can be executed together.
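For illustration, the typical ensemble workflow is: create the ensemble, attach jobs, then trigger execution. The sketch below only *builds* the corresponding requests with the standard library so the routes and methods are visible; the base URL, the payload fields, and the IDs (`42`, `7b1c`) are assumptions for illustration, not part of the documented schema.

```python
import json
from urllib.request import Request

BASE = "http://localhost:5000"  # assumed local dev address; see UMP_API_SERVER_URL

# 1. Create an ensemble (the payload fields shown are illustrative only)
create = Request(
    f"{BASE}/ensembles",
    data=json.dumps({"name": "traffic-scenarios"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# 2. Add an existing job, then create and execute all jobs in the ensemble.
#    Note that both of these routes use GET rather than POST.
add_job = Request(f"{BASE}/ensembles/42/addjob/7b1c", method="GET")
execute = Request(f"{BASE}/ensembles/42/execute", method="GET")

for req in (create, add_job, execute):
    print(req.get_method(), req.full_url)
```

Sending these with `urllib.request.urlopen` (or any HTTP client) additionally requires a valid Keycloak bearer token for non-anonymous endpoints.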
The following endpoints are available for managing ensembles: 61 | 62 | | Endpoint | Method | Description | Required by OGC API Processes | 63 | |-----------|--------|---------------|-------------------------------| 64 | | `/ensembles` | GET | Retrieve all ensembles the current user has access to. | ❌ | 65 | | `/ensembles` | POST | Create an ensemble. | ❌ | 66 | | `/ensembles/{id}` | GET | Retrieve an ensemble by its ID. | ❌ | 67 | | `/ensembles/{id}` | DELETE | Delete an ensemble by its ID. | ❌ | 68 | | `/ensembles/{id}/execute` | GET | Create and execute the jobs in an ensemble. | ❌ | 69 | | `/ensembles/{id}/jobs` | GET | Retrieve all jobs included in an ensemble. | ❌ | 70 | | `/ensembles/{id}/jobs/{job_id}` | DELETE | Delete a job from an ensemble. | ❌ | 71 | | `/ensembles/{id}/comments` | GET | Retrieve the comments for an ensemble. | ❌ | 72 | | `/ensembles/{id}/users` | GET | Retrieve all users that have access to an ensemble. | ❌ | 73 | | `/ensembles/{id}/share/{email}` | GET | Share an ensemble with another user. | ❌ | 74 | | `/ensembles/{id}/addjob/{job_id}` | GET | Add a job to an ensemble. | ❌ | 75 | | `/ensembles/{id}/comments` | POST | Create a comment for an ensemble. | ❌ | 76 | 77 | (users)= 78 | ### Users 79 | | Endpoint | Method | Description | Required by OGC API Processes | 80 | |--------------------------------|--------|----------------------------------------|-------------------------------| 81 | | `/users/{id}/details` | GET | Retrieve user details by user ID. | ❌ | 82 | 83 | 84 | 85 | -------------------------------------------------------------------------------- /docs/_config.yml: -------------------------------------------------------------------------------- 1 | # Book settings 2 | # Learn more at https://jupyterbook.org/customize/config.html 3 | 4 | # Add a bibtex file so that we can create citations 5 | bibtex_bibfiles: 6 | - ./references.bib 7 | 8 | ####################################################################################### 9 | # Book settings 10 | title:
"Urban Model Platform" # The title of the book. Will be placed in the left navbar. 11 | author: "Rico Herzog, Maja Richter, Stefan Schuhart" # The author of the book 12 | copyright: "2025" # Copyright year to be placed in the footer 13 | logo: UMP-Logo-Text.png # A path to the book logo 14 | # Patterns to skip when building the book. Can be glob-style (e.g. "*skip.ipynb") 15 | exclude_patterns: [_build, Thumbs.db, .DS_Store, "**.ipynb_checkpoints"] 16 | # Auto-exclude files not in the toc 17 | only_build_toc_files: true 18 | 19 | ####################################################################################### 20 | # Execution settings 21 | execute: 22 | execute_notebooks: auto # Whether to execute notebooks at build time. Must be one of ("auto", "force", "cache", "off") 23 | cache: "" # A path to the jupyter cache that will be used to store execution artifacts. Defaults to `_build/.jupyter_cache/` 24 | exclude_patterns: [] # A list of patterns to *skip* in execution (e.g. a notebook that takes a really long time) 25 | timeout: 30 # The maximum time (in seconds) each notebook cell is allowed to run. 26 | run_in_temp: 27 | false # If `True`, then a temporary directory will be created and used as the command working directory (cwd), 28 | # otherwise the notebook's parent directory will be the cwd. 29 | allow_errors: false # If `False`, when a code cell raises an error the execution is stopped, otherwise all cells are always run. 30 | stderr_output: show # One of 'show', 'remove', 'remove-warn', 'warn', 'error', 'severe' 31 | 32 | ####################################################################################### 33 | # Parse and render settings 34 | parse: 35 | myst_enable_extensions: # default extensions to enable in the myst parser. 
See https://myst-parser.readthedocs.io/en/latest/using/syntax-optional.html 36 | # - amsmath 37 | - colon_fence 38 | # - deflist 39 | - dollarmath 40 | # - html_admonition 41 | # - html_image 42 | - linkify 43 | # - replacements 44 | # - smartquotes 45 | - substitution 46 | - tasklist 47 | myst_url_schemes: [mailto, http, https] # URI schemes that will be recognised as external URLs in Markdown links 48 | myst_dmath_double_inline: true # Allow display math ($$) within an inline context 49 | 50 | ####################################################################################### 51 | # HTML-specific settings 52 | html: 53 | favicon: "UMP-Logo.png" # A path to a favicon image 54 | use_edit_page_button: false # Whether to add an "edit this page" button to pages. If `true`, repository information in repository: must be filled in 55 | use_repository_button: false # Whether to add a link to your repository button 56 | use_issues_button: false # Whether to add an "open an issue" button 57 | use_multitoc_numbering: true # Continuous numbering across parts/chapters 58 | extra_navbar: Powered by Jupyter Book # Will be displayed underneath the left navbar. 59 | extra_footer: "" # Will be displayed underneath the footer. 60 | home_page_in_navbar: true # Whether to include your home page in the left Navigation Bar 61 | baseurl: "" # The base URL where your book will be hosted. Used for creating image previews and social links. e.g.: https://mypage.com/mybook/ 62 | comments: 63 | hypothesis: false 64 | utterances: false 65 | announcement: "" # A banner announcement at the top of the site. 
66 | 67 | ####################################################################################### 68 | # LaTeX-specific settings 69 | latex: 70 | latex_engine: pdflatex # one of 'pdflatex', 'xelatex' (recommended for unicode), 'luatex', 'platex', 'uplatex' 71 | use_jupyterbook_latex: true # use sphinx-jupyterbook-latex for pdf builds as default 72 | # Define the name of the latex output file for PDF builds 73 | latex_documents: 74 | targetname: book.tex 75 | 76 | ####################################################################################### 77 | # Launch button settings 78 | launch_buttons: 79 | notebook_interface: classic # The interface interactive links will activate ["classic", "jupyterlab"] 80 | binderhub_url: https://mybinder.org # The URL of the BinderHub (e.g., https://mybinder.org) 81 | jupyterhub_url: "" # The URL of the JupyterHub (e.g., https://datahub.berkeley.edu) 82 | thebe: false # Add a thebe button to pages (requires the repository to run on Binder) 83 | colab_url: "" # The URL of Google Colab (https://colab.research.google.com) 84 | 85 | # Information about where the book exists on the web 86 | repository: 87 | url: "https://github.com/citysciencelab/urban-model-platform" # The URL to your book's repository 88 | path_to_book: "docs" # A path to your book's folder, relative to the repository root. 
89 | branch: dev # Which branch of the repository should be used when creating links 90 | 91 | ####################################################################################### 92 | # Advanced and power-user settings 93 | sphinx: 94 | local_extensions: # A list of local extensions to load by sphinx specified by "name: path" items 95 | recursive_update: false # A boolean indicating whether to overwrite the Sphinx config (true) or recursively update (false) 96 | extra_extensions: 97 | - "autoapi.extension" # automatic overview for each function in the package 98 | - "sphinx.ext.napoleon" # enables Google- and NumPy-style docstrings 99 | - "sphinx.ext.viewcode" # documentation allows 'show source' button to display the code 100 | - "sphinxcontrib.autoyaml" # automatically create documentation for commented YAML files 101 | config: 102 | autoapi_dirs: ["../src"] # crawl the src directory for modules to be documented 103 | autoapi_options: 104 | - "members" 105 | - "undoc-members" 106 | - "private-members" 107 | - "show-inheritance" 108 | - "show-module-summary" 109 | - "special-members" 110 | # - "imported-members" 111 | bibtex_reference_style: author_year 112 | 113 | autoyaml_root: ../ # Look for YAML files relative to this directory. 114 | autoyaml_doc_delimiter: "###" # (###) Character(s) which start a documentation comment. 115 | autoyaml_comment: "#" # (#) Comment start character(s). 116 | autoyaml_level: 4 # (1) Parse comments from nested structures n-levels deep.
117 | -------------------------------------------------------------------------------- /src/ump/geoserver/geoserver.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import os 3 | import shutil 4 | 5 | import geopandas as gpd 6 | import requests 7 | from psycopg2.sql import Identifier 8 | 9 | from ump.api.db_handler import db_engine as engine 10 | from ump.config import app_settings as config 11 | from ump.errors import GeoserverException 12 | 13 | class Geoserver: 14 | RESULTS_FILENAME = "results.geojson" 15 | 16 | def __init__(self): 17 | self.workspace = config.UMP_GEOSERVER_WORKSPACE_NAME 18 | self.errors = [] 19 | self.path_to_results = None 20 | self.job_id = None 21 | 22 | def create_workspace(self): 23 | url = f"{config.UMP_GEOSERVER_URL_WORKSPACE}/{self.workspace}.json?quietOnNotFound=True" 24 | 25 | response = requests.get( 26 | url, 27 | auth=(config.UMP_GEOSERVER_USER, config.UMP_GEOSERVER_PASSWORD.get_secret_value()), 28 | headers={"Content-type": "application/json", "Accept": "application/json"}, 29 | timeout=config.UMP_GEOSERVER_CONNECTION_TIMEOUT, 30 | ) 31 | 32 | if response.status_code == 200: 33 | logging.info(" --> Workspace %s already exists.", self.workspace) 34 | return True 35 | 36 | if response.status_code == 404: 37 | logging.info(" --> Workspace %s not found - creating....", self.workspace) 38 | else: 39 | raise GeoserverException( 40 | f"Unexpected response ({response.status_code}) while checking for workspace {self.workspace}" 41 | ) 42 | 43 | response = requests.post( 44 | config.UMP_GEOSERVER_URL_WORKSPACE, 45 | auth=(config.UMP_GEOSERVER_USER, config.UMP_GEOSERVER_PASSWORD.get_secret_value()), 46 | data=f"<workspace><name>{self.workspace}</name></workspace>", 47 | headers={"Content-type": "text/xml", "Accept": "*/*"}, 48 | timeout=config.UMP_GEOSERVER_CONNECTION_TIMEOUT, 49 | ) 50 | 51 | if response.ok: 52 | logging.info(" --> Created new workspace %s.", self.workspace) 53 | else: 54 | raise GeoserverException("Workspace could not be created") 55 | 56 | def save_results(self, job_id:
str, data: dict): 57 | self.job_id = job_id 58 | 59 | try: 60 | self.create_workspace() 61 | logging.info("Workspace %s is available.", self.workspace) 62 | 63 | self.geojson_to_postgis(data=data, table_name=job_id) 64 | 65 | success = self.create_store(store_name=job_id) 66 | 67 | success = success and self.publish_layer(store_name=job_id, layer_name=job_id) 68 | 69 | except Exception as e: 70 | raise GeoserverException( 71 | "Result could not be uploaded to the geoserver.", 72 | payload={"error": type(e).__name__, "message": str(e)}, 73 | ) from e 74 | return success 75 | 76 | def publish_layer(self, store_name: str, layer_name: str): 77 | try: 78 | response = requests.post( 79 | ( 80 | f"{config.UMP_GEOSERVER_URL_WORKSPACE}/{self.workspace}" 81 | f"/datastores/{store_name}/featuretypes"), 82 | 83 | auth=( 84 | config.UMP_GEOSERVER_USER, 85 | config.UMP_GEOSERVER_PASSWORD.get_secret_value() 86 | ), 87 | 88 | data=f"<featureType><name>{layer_name}</name></featureType>", 89 | headers={"Content-type": "text/xml"}, 90 | timeout=config.UMP_GEOSERVER_CONNECTION_TIMEOUT, 91 | ) 92 | 93 | if not response or not response.ok: 94 | logging.error( 95 | "Could not publish layer %s from store %s. Reason: %s", 96 | layer_name, 97 | store_name, 98 | response, 99 | ) 100 | 101 | except Exception as e: 102 | raise GeoserverException( 103 | f"Could not publish layer {layer_name} from store {store_name}. Reason: {e}", 104 | payload={ 105 | "error": type(e).__name__, 106 | "message": str(e), 107 | }, 108 | ) from e 109 | 110 | return response.ok 111 | # TODO: to simplify the dev setup, the UMP and geoserver database hosts 112 | # can be the same, but in production they should be different, at least the databases used; 113 | # also the user should decide whether to use the same database (host) for the UMP and geoserver 114 | 115 | def create_store(self, store_name: str): 116 | logging.info(" --> Storing results to geoserver store %s", store_name) 117 | 118 | xml_body = f""" 119 | <dataStore> 120 | <name>{store_name}</name> 121 | <connectionParameters> 122 | <host>{config.UMP_GEOSERVER_DB_HOST}</host> 123 | <port>{config.UMP_GEOSERVER_DB_PORT}</port> 124 | <database>{config.UMP_GEOSERVER_DB_NAME}</database> 125 | <user>{config.UMP_GEOSERVER_DB_USER}</user> 126 | <passwd>{config.UMP_GEOSERVER_DB_PASSWORD.get_secret_value()}</passwd> 127 | <dbtype>postgis</dbtype> 128 | </connectionParameters> 129 | </dataStore> 130 | """ 131 | response = requests.post( 132 | ( 133 | f"{str(config.UMP_GEOSERVER_URL_WORKSPACE)}" 134 | f"/{self.workspace}/datastores" 135 | ), 136 | auth=(config.UMP_GEOSERVER_USER, config.UMP_GEOSERVER_PASSWORD.get_secret_value()), 137 | data=xml_body, 138 | headers={"Content-type": "application/xml"}, 139 | timeout=config.UMP_GEOSERVER_CONNECTION_TIMEOUT, 140 | ) 141 | 142 | if not response or not response.ok: 143 | raise GeoserverException( 144 | f"Could not create geoserver store {store_name}", 145 | payload={ 146 | "status_code": response.status_code, 147 | "message": response.reason, 148 | }, 149 | ) 150 | return response.ok 151 | 152 | def geojson_to_postgis(self, table_name: str, data: dict): 153 | 154 | gdf = gpd.GeoDataFrame.from_features(data["features"], crs="EPSG:4326") 155 | table = Identifier(table_name) 156 | gdf.to_postgis(name=table.string, con=engine) 157 | 158 | def cleanup(self): 159 | if self.path_to_results and os.path.exists(self.path_to_results): 160 | shutil.rmtree(self.path_to_results) 161 | --------------------------------------------------------------------------------
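The `create_store` call above registers a PostGIS-backed datastore by POSTing an XML payload to GeoServer's REST API. Factored out as a standalone helper (the function name and parameter names here are illustrative, not part of the UMP code base), the payload can be built and inspected without a running GeoServer:

```python
def build_datastore_xml(
    store_name: str, host: str, port: int, database: str, user: str, passwd: str
) -> str:
    """Build a GeoServer <dataStore> payload for a PostGIS-backed store."""
    return (
        "<dataStore>"
        f"<name>{store_name}</name>"
        "<connectionParameters>"
        f"<host>{host}</host>"
        f"<port>{port}</port>"
        f"<database>{database}</database>"
        f"<user>{user}</user>"
        f"<passwd>{passwd}</passwd>"
        "<dbtype>postgis</dbtype>"  # tells GeoServer to use the PostGIS datastore type
        "</connectionParameters>"
        "</dataStore>"
    )

# hypothetical connection values for illustration
xml = build_datastore_xml("job-42", "db", 5432, "geoserver", "gs_user", "secret")
print(xml.startswith("<dataStore>"))  # → True
```

Separating payload construction from the HTTP call also makes it straightforward to unit-test the XML without network access.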
/charts/urban-model-platform/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "ump.fullname" . }} 5 | labels: 6 | {{- include "ump.labels" . | nindent 4 }} 7 | spec: 8 | # replicasets are retained for rollback purposes 9 | revisionHistoryLimit: 3 # Retain only the last 3 ReplicaSets 10 | {{- if not .Values.autoscaling.enabled }} 11 | replicas: {{ .Values.replicaCount }} 12 | {{- end }} 13 | selector: 14 | matchLabels: 15 | {{- include "ump.selectorLabels" . | nindent 6 }} 16 | template: 17 | metadata: 18 | {{- with .Values.podAnnotations }} 19 | annotations: 20 | {{- toYaml . | nindent 8 }} 21 | {{- end }} 22 | labels: 23 | {{- include "ump.labels" . | nindent 8 }} 24 | {{- with .Values.podLabels }} 25 | {{- toYaml . | nindent 8 }} 26 | {{- end }} 27 | spec: 28 | {{- with .Values.image.pullSecrets }} 29 | imagePullSecrets: 30 | {{- toYaml . | nindent 8 }} 31 | {{- end }} 32 | serviceAccountName: {{ include "ump.serviceAccountName" . }} 33 | securityContext: 34 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 35 | containers: 36 | - name: {{ .Chart.Name }} 37 | securityContext: 38 | {{- toYaml .Values.securityContext | nindent 12 }} 39 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" 40 | imagePullPolicy: {{ .Values.image.pullPolicy }} 41 | ports: 42 | - name: http 43 | containerPort: {{ .Values.service.targetPort }} 44 | protocol: TCP 45 | resources: 46 | {{- toYaml .Values.resources | nindent 12 }} 47 | volumeMounts: 48 | - name: tmp-volume 49 | mountPath: /tmp 50 | - name: providers-volume 51 | mountPath: {{ .Values.config.providersFileMountPath | quote }} 52 | envFrom: 53 | - configMapRef: 54 | name: {{ include "ump.fullname" . 
}}-settings 55 | # if keycloak/geoserver secrets are not referenced, they will be created 56 | {{- if not .Values.keycloakConnection.existingSecret.name }} 57 | - secretRef: 58 | name: {{ include "ump.fullname" . }}-keycloak-connection 59 | {{- end }} 60 | {{- if not .Values.geoserverConnection.existingSecret.name }} 61 | - secretRef: 62 | name: {{ include "ump.fullname" . }}-geoserver-connection 63 | {{- end }} 64 | env: 65 | - name: FLASK_APP 66 | value: ump.main 67 | # postgres connection 68 | {{- if .Values.postgresConnection.existingSecret.name }} 69 | - name: UMP_DATABASE_NAME 70 | valueFrom: 71 | secretKeyRef: 72 | name: {{ .Values.postgresConnection.existingSecret.name }} 73 | key: dbname 74 | - name: UMP_DATABASE_HOST 75 | valueFrom: 76 | secretKeyRef: 77 | name: {{ .Values.postgresConnection.existingSecret.name }} 78 | key: host 79 | - name: UMP_DATABASE_PORT 80 | valueFrom: 81 | secretKeyRef: 82 | name: {{ .Values.postgresConnection.existingSecret.name }} 83 | key: port 84 | - name: UMP_DATABASE_USER 85 | valueFrom: 86 | secretKeyRef: 87 | name: {{ .Values.postgresConnection.existingSecret.name }} 88 | key: user 89 | - name: UMP_DATABASE_PASSWORD 90 | valueFrom: 91 | secretKeyRef: 92 | name: {{ .Values.postgresConnection.existingSecret.name }} 93 | key: password 94 | {{- end }} 95 | # keycloak connection 96 | {{- if .Values.keycloakConnection.existingSecret.name }} 97 | - name: UMP_KEYCLOAK_USER 98 | valueFrom: 99 | secretKeyRef: 100 | name: {{ .Values.keycloakConnection.existingSecret.name }} 101 | key: user 102 | - name: UMP_KEYCLOAK_PASSWORD 103 | valueFrom: 104 | secretKeyRef: 105 | name: {{ .Values.keycloakConnection.existingSecret.name }} 106 | key: password 107 | - name: UMP_KEYCLOAK_REALM 108 | valueFrom: 109 | secretKeyRef: 110 | name: {{ .Values.keycloakConnection.existingSecret.name }} 111 | key: realm 112 | - name: UMP_KEYCLOAK_URL 113 | valueFrom: 114 | secretKeyRef: 115 | name: {{ .Values.keycloakConnection.existingSecret.name }} 116 | key:
url 117 | - name: UMP_KEYCLOAK_CLIENT_ID 118 | valueFrom: 119 | secretKeyRef: 120 | name: {{ .Values.keycloakConnection.existingSecret.name }} 121 | key: clientId 122 | {{- end }} 123 | {{- if .Values.geoserverConnection.existingSecret.name }} 124 | - name: UMP_GEOSERVER_URL 125 | valueFrom: 126 | secretKeyRef: 127 | name: {{ .Values.geoserverConnection.existingSecret.name }} 128 | key: url 129 | - name: UMP_GEOSERVER_DB_HOST 130 | valueFrom: 131 | secretKeyRef: 132 | name: {{ .Values.geoserverConnection.existingSecret.name }} 133 | key: dbHost 134 | - name: UMP_GEOSERVER_DB_PORT 135 | valueFrom: 136 | secretKeyRef: 137 | name: {{ .Values.geoserverConnection.existingSecret.name }} 138 | key: dbPort 139 | - name: UMP_GEOSERVER_WORKSPACE_NAME 140 | valueFrom: 141 | secretKeyRef: 142 | name: {{ .Values.geoserverConnection.existingSecret.name }} 143 | key: workspaceName 144 | - name: UMP_GEOSERVER_USER 145 | valueFrom: 146 | secretKeyRef: 147 | name: {{ .Values.geoserverConnection.existingSecret.name }} 148 | key: user 149 | - name: UMP_GEOSERVER_PASSWORD 150 | valueFrom: 151 | secretKeyRef: 152 | name: {{ .Values.geoserverConnection.existingSecret.name }} 153 | key: password 154 | {{- end }} 155 | readinessProbe: 156 | httpGet: 157 | path: {{ .Values.config.apiServerUrlPrefix }}/health/ready 158 | port: http 159 | initialDelaySeconds: 5 160 | periodSeconds: 10 161 | volumes: 162 | - name: tmp-volume 163 | emptyDir: {} 164 | - name: providers-volume 165 | configMap: 166 | {{- if not .Values.providers.existingConfigMap.name }} 167 | name: {{ include "ump.fullname" . }}-providers 168 | {{- else }} 169 | name: {{ .Values.providers.existingConfigMap.name }} 170 | {{- end }} 171 | {{- if not .Values.keycloakConnection.existingSecret.name }} 172 | - name: keycloak-config 173 | secret: 174 | secretName: {{ include "ump.fullname" . -}}-keycloak-connection 175 | defaultMode: 0400 176 | {{- end }} 177 | {{- with .Values.nodeSelector }} 178 | nodeSelector: 179 | {{- toYaml . 
| nindent 8 }} 180 | {{- end }} 181 | {{- with .Values.affinity }} 182 | affinity: 183 | {{- toYaml . | nindent 8 }} 184 | {{- end }} 185 | {{- with .Values.tolerations }} 186 | tolerations: 187 | {{- toYaml . | nindent 8 }} 188 | {{- end }} -------------------------------------------------------------------------------- /src/ump/api/providers.py: -------------------------------------------------------------------------------- 1 | import atexit 2 | import time 3 | from logging import getLogger 4 | from threading import Lock, Timer 5 | import threading 6 | from typing import Optional 7 | 8 | import aiohttp 9 | import yaml 10 | from pydantic import ValidationError 11 | from watchdog.events import FileSystemEventHandler 12 | from watchdog.observers.polling import PollingObserver 13 | 14 | from ump.api.models.providers_config import ( 15 | ModelServers, 16 | ProcessConfig, 17 | ProviderConfig, 18 | model_servers_adapter, 19 | ) 20 | from ump.config import app_settings as config 21 | 22 | logger = getLogger(__name__) 23 | 24 | # Thread-safe provider storage 25 | PROVIDERS: ModelServers = {} 26 | PROVIDERS_LOCK = Lock() 27 | RELOAD_TIMER: Optional[Timer] = None 28 | DEBOUNCE_DELAY = 0.5 # 500ms debounce 29 | 30 | 31 | class ProviderLoader(FileSystemEventHandler): 32 | def __init__(self): 33 | self.last_reload = 0 34 | self.reload_lock = threading.Lock() 35 | 36 | # listen for any event type, not just file modifications: Kubernetes ConfigMap updates appear as symlink swaps 37 | def on_any_event(self, event): 38 | # Ignore directory events 39 | if event.is_directory: 40 | return 41 | 42 | logger.info("File event: %s on %s", event.event_type, event.src_path) 43 | 44 | src_path = str(event.src_path) 45 | 46 | # Check if the event affects our config file 47 | config_path = config.UMP_PROVIDERS_FILE.absolute().as_posix() 48 | 49 | endswith = src_path.endswith(config.UMP_PROVIDERS_FILE.name) 50 | contains = '..data' in src_path 51 | 52 | if ( 53 | src_path == config_path or 54 |
endswith or 55 | contains 56 | ): 57 | self._debounced_reload() 58 | 59 | def _debounced_reload(self): 60 | """Debounce rapid file changes to avoid reload storms""" 61 | global RELOAD_TIMER 62 | 63 | # Cancel existing timer 64 | if RELOAD_TIMER: 65 | RELOAD_TIMER.cancel() 66 | 67 | # Schedule debounced reload 68 | RELOAD_TIMER = Timer(DEBOUNCE_DELAY, self.load_providers) 69 | RELOAD_TIMER.start() 70 | 71 | def load_providers(self): 72 | logger.info("(Re)Loading providers from %s", config.UMP_PROVIDERS_FILE) 73 | 74 | # Create new providers dict (don't modify global state yet) 75 | new_providers = {} 76 | 77 | try: 78 | with open(config.UMP_PROVIDERS_FILE, encoding="UTF-8") as file: 79 | if content := yaml.safe_load(file): 80 | # Validate before applying 81 | validated_content = model_servers_adapter.validate_python(content) 82 | new_providers.update(validated_content) 83 | 84 | # Atomic update with rollback capability 85 | self._atomic_update(new_providers) 86 | logger.info("Providers (re)loaded successfully") 87 | else: 88 | logger.warning("Providers file is empty, keeping current configuration") 89 | 90 | except FileNotFoundError: 91 | logger.error("Providers file not found: %s", config.UMP_PROVIDERS_FILE) 92 | except yaml.YAMLError as e: 93 | logger.error("Failed to parse providers file: %s", e) 94 | except ValidationError as e: 95 | logger.error("Validation error in providers file: %s", e) 96 | except Exception as e: 97 | logger.error("Unexpected error loading providers: %s", e) 98 | 99 | def _atomic_update(self, new_providers: ModelServers): 100 | """Atomically update providers with rollback capability""" 101 | global PROVIDERS 102 | 103 | with PROVIDERS_LOCK: 104 | # Store old providers for potential rollback 105 | old_providers = PROVIDERS 106 | try: 107 | # Create a new dict with copied Pydantic models 108 | PROVIDERS = { 109 | name: provider.model_copy(deep=True) 110 | for name, provider in new_providers.items() 111 | } 112 | except Exception as e: 113 | 
PROVIDERS = old_providers 114 | raise 115 | 116 | 117 | # Initialize the ProviderLoader and load providers initially 118 | provider_loader = ProviderLoader() 119 | provider_loader.load_providers() # Trigger initial loading 120 | 121 | observer = PollingObserver() 122 | observer.schedule( 123 | provider_loader, 124 | config.UMP_PROVIDERS_FILE.parent.as_posix(), # Watch directory, not file 125 | recursive=False 126 | ) 127 | observer.start() 128 | 129 | def cleanup(): 130 | """Cleanup function for graceful shutdown""" 131 | global RELOAD_TIMER 132 | 133 | if RELOAD_TIMER: 134 | RELOAD_TIMER.cancel() 135 | 136 | observer.stop() 137 | observer.join(timeout=5) # Give it 5 seconds to stop gracefully 138 | 139 | # Graceful shutdown for observer 140 | atexit.register(cleanup) 141 | 142 | 143 | def get_providers() -> ModelServers: 144 | """Get a copy of current providers (thread-safe and immutable)""" 145 | with PROVIDERS_LOCK: 146 | return PROVIDERS.copy() 147 | 148 | 149 | def get_provider(provider_name: str) -> Optional[ProviderConfig]: 150 | """Get a specific provider by name (thread-safe)""" 151 | with PROVIDERS_LOCK: 152 | return PROVIDERS.get(provider_name) 153 | 154 | 155 | def authenticate_provider(provider: ProviderConfig): 156 | """Create authentication object for a provider""" 157 | auth = None 158 | if provider.authentication: 159 | auth = aiohttp.BasicAuth( 160 | provider.authentication.user, 161 | provider.authentication.password.get_secret_value() 162 | ) 163 | return auth 164 | 165 | 166 | def check_process_availability(provider: str, process_id: str) -> bool: 167 | """Check if a process is available and not excluded""" 168 | with PROVIDERS_LOCK: 169 | if ( 170 | provider in PROVIDERS and 171 | process_id in PROVIDERS[provider].processes 172 | ): 173 | process: ProcessConfig = PROVIDERS[provider].processes[process_id] 174 | available = not process.exclude 175 | 176 | if process.exclude: 177 | logger.debug("Excluding process %s based on configuration", 
process_id) 178 | 179 | return available 180 | 181 | return False 182 | 183 | 184 | def check_result_storage(provider: str, process_id: str) -> Optional[str]: 185 | """Get the result storage type for a process""" 186 | with PROVIDERS_LOCK: 187 | if ( 188 | provider in PROVIDERS 189 | and process_id in PROVIDERS[provider].processes 190 | ): 191 | return PROVIDERS[provider].processes[process_id].result_storage 192 | return None 193 | 194 | 195 | def get_process_config(provider: str, process_id: str) -> ProcessConfig: 196 | """Get complete process configuration""" 197 | with PROVIDERS_LOCK: 198 | if ( 199 | provider in PROVIDERS 200 | and process_id in PROVIDERS[provider].processes 201 | ): 202 | return PROVIDERS[provider].processes[process_id] 203 | 204 | raise ValueError( 205 | f"Process '{process_id}' not found for provider '{provider}'" 206 | ) 207 | 208 | 209 | def list_providers() -> list[str]: 210 | """Get list of all provider names""" 211 | with PROVIDERS_LOCK: 212 | return list(PROVIDERS.keys()) 213 | 214 | 215 | def list_processes(provider: str) -> list[str]: 216 | """Get list of all process IDs for a provider""" 217 | with PROVIDERS_LOCK: 218 | if provider in PROVIDERS: 219 | return list(PROVIDERS[provider].processes.keys()) 220 | return [] 221 | 222 | 223 | # Health check function 224 | def is_healthy() -> bool: 225 | """Check if the provider loader is healthy""" 226 | with PROVIDERS_LOCK: 227 | return len(PROVIDERS) > 0 and observer.is_alive() -------------------------------------------------------------------------------- /src/ump/api/processes.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import traceback 3 | from logging import getLogger 4 | 5 | import aiohttp 6 | from aiohttp import ClientSession, ClientTimeout 7 | from flask import g 8 | 9 | from ump.config import app_settings 10 | from ump.api.models.providers_config import ProcessConfig, ProviderConfig 11 | from ump.api.providers import 
( 12 | authenticate_provider, 13 | get_providers, 14 | ) 15 | from ump.errors import OGCProcessException 16 | from ump.utils import fetch_json 17 | 18 | logger = getLogger(__name__) 19 | 20 | # TODO: add validation of loaded processes through a pydantic model or the existing Process class 21 | async def load_processes(): 22 | processes = [] 23 | 24 | auth = g.get("auth_token", {}) or {} 25 | 26 | # TODO: manually parsing the JWT is not recommended; use a library like PyJWT or, better, Authlib 27 | realm_roles: list = auth.get("realm_access", {}).get("roles", []) 28 | 29 | client_roles: list = ( 30 | auth.get( 31 | "resource_access", {} 32 | ).get( 33 | app_settings.UMP_KEYCLOAK_CLIENT_ID, {} 34 | ).get( 35 | "roles", [] 36 | ) 37 | ) 38 | 39 | client_timeout = ClientTimeout( 40 | total=5, # Set a reasonable timeout for the requests 41 | connect=2, # Connection timeout 42 | sock_connect=2, # Socket connection timeout 43 | sock_read=5, # Socket read timeout 44 | ) # the remote server needs to answer in time, because we make multiple requests!
45 | 46 | async with aiohttp.ClientSession( 47 | raise_for_status=False, timeout=client_timeout 48 | ) as session: 49 | # Create a list of tasks for fetching processes concurrently 50 | # TODO: it would make more sense to fetch only those processes 51 | # that are configured and accessible by the user, instead of all of them 52 | tasks = [ 53 | fetch_provider_processes( 54 | session, provider_name, 55 | provider_config, realm_roles, client_roles 56 | ) 57 | for provider_name, provider_config in get_providers().items() 58 | ] 59 | 60 | # Run all tasks in an async manner and gather results 61 | results = await asyncio.gather(*tasks, return_exceptions=True) 62 | 63 | # Process results 64 | for result in results: 65 | if isinstance(result, BaseException): 66 | logger.error("Error fetching processes: %s", result) 67 | else: 68 | processes.extend(result) 69 | 70 | return {"processes": processes} 71 | 72 | 73 | async def fetch_provider_processes( 74 | session: ClientSession, 75 | provider_name: str, provider_config: ProviderConfig, 76 | realm_roles: list, client_roles: list 77 | ): 78 | """Fetch processes for a specific provider and filter them.""" 79 | provider_processes = [] 80 | try: 81 | provider_auth = authenticate_provider(provider_config) 82 | 83 | results = await fetch_json( 84 | session=session, 85 | url=f"{provider_config.server_url}processes", 86 | raise_for_status=True, 87 | headers={"Content-type": "application/json", "Accept": "application/json"}, 88 | auth=provider_auth 89 | ) 90 | 91 | # TODO: instead of manually checking for a key, we should validate the response 92 | # using a pydantic model or json schema!
93 | if "processes" in results: 94 | for process in results["processes"]: 95 | process_id = process["id"] 96 | if process_id not in provider_config.processes: 97 | logger.info( 98 | "No configuration found for process %s, ignoring it.", 99 | process_id 100 | ) 101 | # next process 102 | continue 103 | 104 | process_config = provider_config.processes[process_id] 105 | if has_user_access_rights( 106 | process_id, provider_name, process_config, 107 | realm_roles, client_roles 108 | ): 109 | process["id"] = f"{provider_name}:{process_id}" 110 | provider_processes.append(process) 111 | else: 112 | logger.error( 113 | "The response from the remote service was not valid. " 114 | "URL: %s, Content: %s", 115 | provider_config.server_url, 116 | results 117 | ) 118 | 119 | # Note: fetch_json raises OGCProcessException on errors 120 | except OGCProcessException as e: 121 | logger.error("HTTP error while accessing provider %s: %s", provider_name, e) 122 | 123 | except Exception as e: 124 | logger.error("Unexpected error while processing provider %s: %s", provider_name, e) 125 | traceback.print_exc() 126 | 127 | return provider_processes 128 | 129 | 130 | async def fetch_processes_from_provider(session, provider_config, provider_auth): 131 | """Fetch processes from the provider's API.""" 132 | try: 133 | response = await session.get( 134 | f"{provider_config.server_url}processes", 135 | auth=provider_auth, 136 | headers={ 137 | "Content-type": "application/json", 138 | "Accept": "application/json", 139 | }, 140 | timeout=ClientTimeout(total=provider_config.timeout), 141 | ) 142 | return await response.json() 143 | except aiohttp.ClientError as e: 144 | logger.error( 145 | "Failed to fetch processes from %s: %s", 146 | provider_config.server_url,e 147 | ) 148 | raise 149 | 150 | def has_user_access_rights( 151 | process_id: str, 152 | provider_name: str, 153 | process_config: ProcessConfig, 154 | realm_roles: list[str], 155 | client_roles: list[str], 156 | ) -> bool: 157 | """ 
158 | Determines whether a process is visible to the user, based on the following checks: 159 | 1. The process is not configured to be excluded. 160 | 2. Anonymous access is allowed. 161 | 3. The user has access to all processes of a provider (ModelServer). 162 | 4. The user has access to the specific process. 163 | """ 164 | # Check if the process is excluded 165 | if process_config.exclude: 166 | logger.info("Process ID %s is configured to be excluded.", process_id) 167 | return False 168 | 169 | # Check provider/ModelServer-level access 170 | access_to_all_processes_granted = ( 171 | provider_name in realm_roles 172 | or provider_name in client_roles 173 | ) 174 | 175 | # Check process-specific access 176 | access_to_this_process_granted = ( 177 | f"{provider_name}_{process_id}" in realm_roles 178 | or f"{provider_name}_{process_id}" in client_roles 179 | ) 180 | 181 | # Log the specific condition(s) that grant access 182 | if process_config.anonymous_access: 183 | logger.info( 184 | "Granting access for process %s:%s: Anonymous access is allowed.", 185 | provider_name, 186 | process_id 187 | ) 188 | 189 | 190 | if access_to_all_processes_granted: 191 | logger.info( 192 | "Granting access for process %s: User has provider-level access. Role: %s", 193 | process_id, 194 | provider_name 195 | ) 196 | 197 | if access_to_this_process_granted: 198 | logger.info( 199 | "Granting access for process %s: User has process-specific access.
Role: %s_%s", 200 | process_id, 201 | provider_name, 202 | process_id 203 | ) 204 | 205 | # Grant access if any of the conditions are met 206 | if ( 207 | process_config.anonymous_access 208 | or access_to_all_processes_granted 209 | or access_to_this_process_granted 210 | ): 211 | return True 212 | 213 | logger.info( 214 | "Not granting access for process %s", process_id 215 | ) 216 | return False 217 | -------------------------------------------------------------------------------- /docs/content/02-user_guide/setup.md: -------------------------------------------------------------------------------- 1 | # Setup 2 | 3 | This document describes the configuration options for the Urban Model Platform (UMP). The configuration is managed using environment variables, which can be set in the `.env` file. Below is a detailed explanation of the available configuration options. 4 | 5 | ## Environment Variables 6 | 7 | ### App Settings 8 | | Variable | Description | Default Value | 9 | |---------------------------------------|-------------------------------------------------------------------------------------------------|------------------------| 10 | | `UMP_LOG_LEVEL` | Logging level for the application. | `DEBUG` | 11 | | `UMP_PROVIDERS_FILE` | Path to the providers configuration file. | `providers.yaml` | 12 | | `UMP_API_SERVER_URL` | Base URL of the API server. Used in job details responses. | `localhost:5000` | 13 | | `UMP_REMOTE_JOB_STATUS_REQUEST_INTERVAL` | Interval (in seconds) for checking remote job statuses. | `5` | 14 | | `UMP_DATABASE_NAME` | Name of the PostgreSQL database. | `ump` | 15 | | `UMP_DATABASE_HOST` | Hostname of the PostgreSQL database. | `localhost` | 16 | | `UMP_DATABASE_PORT` | Port of the PostgreSQL database. | `5433` | 17 | | `UMP_DATABASE_USER` | Username for the PostgreSQL database. | `ump` | 18 | | `UMP_DATABASE_PASSWORD` | Password for the PostgreSQL database. | `ump` | 19 | | `UMP_GEOSERVER_URL` | URL of the GeoServer instance. 
| `http://geoserver:8080/geoserver` | 20 | | `UMP_GEOSERVER_DB_HOST` | Hostname of the GeoServer database. | `localhost` | 21 | | `UMP_GEOSERVER_DB_PORT` | Port of the GeoServer database. | `5432` | 22 | | `UMP_GEOSERVER_WORKSPACE_NAME` | Name of the GeoServer workspace. | `UMP` | 23 | | `UMP_GEOSERVER_USER` | Username for GeoServer. | `admin` | 24 | | `UMP_GEOSERVER_PASSWORD` | Password for GeoServer. | `geoserver` | 25 | | `UMP_GEOSERVER_CONNECTION_TIMEOUT` | Timeout (in seconds) for GeoServer connections. | `60` | 26 | | `UMP_JOB_DELETE_INTERVAL` | Interval (in minutes) for cleaning up old jobs. | `240` | 27 | | `UMP_KEYCLOAK_URL` | URL of the Keycloak server. | `http://keycloak:8080` | 28 | | `UMP_KEYCLOAK_REALM` | Keycloak realm name. | `UrbanModelPlatform` | 29 | | `UMP_KEYCLOAK_CLIENT_ID` | Keycloak client ID. | `ump-client` | 30 | | `UMP_KEYCLOAK_USER` | Keycloak admin username. | `admin` | 31 | | `UMP_KEYCLOAK_PASSWORD` | Keycloak admin password. | `admin` | 32 | | `UMP_API_SERVER_URL_PREFIX` | Subpath prefix, e.g. `/api`. | `/` | 33 | 34 | ### Example Modelserver Settings 35 | | Variable | Description | Default Value | 36 | |---------------------------------------|-------------------------------------------------------------------------------------------------|------------------------| 37 | | `PYGEOAPI_SERVER_HOST` | Hostname for the example modelserver. | `localhost` | 38 | | `PYGEOAPI_SERVER_PORT_INTERNAL` | Internal port for the example modelserver. | `5000` | 39 | | `PYGEOAPI_SERVER_PORT_EXTERNAL` | External port for the example modelserver. | `5005` | 40 | 41 | ### Docker Dev Environment Settings 42 | | Variable | Description | Default Value | 43 | |---------------------------------------|-------------------------------------------------------------------------------------------------|------------------------| 44 | | `DOCKER_NETWORK` | Name of the Docker network for the development environment.
| `ump_dev` | 45 | | `WEBAPP_PORT_EXTERNAL` | External port for the UMP web application. | `5003` | 46 | | `API_DB_PORT_EXTERNAL` | External port for the PostgreSQL database used by the API. | `5433` | 47 | | `GEOSERVER_PORT_EXTERNAL` | External port for the GeoServer instance. | `8181` | 48 | | `KEYCLOAK_PORT_EXTERNAL` | External port for the Keycloak instance. | `8282` | 49 | 50 | ### Docker Build Settings 51 | | Variable | Description | Default Value | 52 | |---------------------------------------|-------------------------------------------------------------------------------------------------|------------------------| 53 | | `CONTAINER_REGISTRY` | Container registry URL. | `registry.io` | 54 | | `CONTAINER_NAMESPACE` | Namespace for the container registry. | `namespace` | 55 | | `IMAGE_NAME` | Name of the Docker image. | `urban-model-platform` | 56 | | `IMAGE_TAG` | Tag for the Docker image. | `1.1.0` | 57 | 58 | --- 59 | 60 | ## Testing and Running the Application 61 | Docker containers are used to ease the setup of the required services: PostgreSQL database(s), GeoServer, and Keycloak. 62 | 63 | There are two ways to test the application: 64 | 65 | ### 1. Using Docker Compose 66 | You can build and run the application in a containerized environment using the provided Docker Compose files: 67 | ```bash 68 | make initiate-dev 69 | ``` 70 | 71 | Then, adjust the newly created `.env` file to your needs. After that, run: 72 | 73 | ```bash 74 | make build-image 75 | make start-dev # or make start-dev-example 76 | docker compose -f docker-compose-dev.yaml up api -d 77 | ``` 78 | 79 | ### 2. Running the app locally 80 | Alternatively, you can run the application locally: 81 | ```bash 82 | make initiate-dev 83 | make start-dev # or make start-dev-example 84 | gunicorn --workers=1 --bind=0.0.0.0:5000 ump.main:app 85 | ``` 86 | 87 | Both methods will set up the necessary dependencies (PostgreSQL, GeoServer, Keycloak) for the application to function correctly.
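The `UMP_*` settings in the tables above are plain environment variables. The variable names and defaults below come from the tables; the helper itself is only an illustrative sketch, not the actual `ump.config` implementation:

```python
import os

# Illustrative only: read a UMP_* environment variable, falling back to
# the documented default from the settings tables. The real config module
# may resolve settings differently (e.g. via a .env loader).
def setting(name: str, default: str) -> str:
    return os.environ.get(name, default)

db_host = setting("UMP_DATABASE_HOST", "localhost")
db_port = int(setting("UMP_DATABASE_PORT", "5433"))
api_url = setting("UMP_API_SERVER_URL", "localhost:5000")

print(f"database at {db_host}:{db_port}, API served at {api_url}")
```

Setting any of these variables in `.env` (or the shell environment) overrides the documented default.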
88 | 89 | 90 | The initial setup can be done in two ways. **Either use the provided Makefile:** 91 | ```bash 92 | make initiate-dev 93 | ``` 94 | Then, adjust the newly created `.env` file to your needs. 95 | 96 | **Or do it manually:** 97 | 98 | 1. Create a virtual Python environment: 99 | ```bash 100 | conda env create -f environment.yaml 101 | ``` 102 | 103 | 1. Copy `providers.yaml`: 104 | ```bash 105 | cp providers.yaml.example providers.yaml 106 | ``` 107 | 108 | 1. Copy `.env.example`: 109 | ```bash 110 | cp .env.example .env 111 | ``` 112 | 113 | 1. Start the required apps with: 114 | ```bash 115 | make start-dev 116 | ``` 117 | 118 | 1. Or start them with an example process: 119 | ```bash 120 | make start-dev-example 121 | ``` 122 | --- 123 | 124 | Enjoy! --------------------------------------------------------------------------------
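Related background for assigning Keycloak roles: the visibility rules enforced by `has_user_access_rights` in `src/ump/api/processes.py` (shown above) can be condensed into the following sketch. It is a simplified restatement with logging omitted, not the actual implementation:

```python
def user_may_see(process_id: str, provider_name: str, exclude: bool,
                 anonymous_access: bool, realm_roles: list[str],
                 client_roles: list[str]) -> bool:
    """Simplified restatement of has_user_access_rights (logging omitted)."""
    if exclude:
        # Excluded processes are never visible.
        return False
    roles = set(realm_roles) | set(client_roles)
    # A role named after the provider grants access to all of its processes.
    provider_wide = provider_name in roles
    # A role named "<provider>_<process>" grants access to a single process.
    process_level = f"{provider_name}_{process_id}" in roles
    return anonymous_access or provider_wide or process_level

# A user holding the role "modelserver_squares" sees only that process:
print(user_may_see("squares", "modelserver", False, False,
                   ["modelserver_squares"], []))
```

In other words, granting a realm or client role equal to the provider name opens up the whole ModelServer, while the `<provider>_<process>` naming convention scopes access to one process.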