29 |
30 | ```
31 | :::
32 |
33 | ## Quick start
34 | FIAT computes the (economic) damage and risk of flood events at a specified geographic location based on flood inundation data, exposed assets and vulnerability functions. The model is configured with a settings file and data inputs:
35 |
36 | - [Settings file](settings/index.qmd)
37 | - [Data](data/index.qmd)
38 |
39 | The HydroMT plugin [HydroMT-FIAT](https://deltares.github.io/hydromt_fiat/latest) can be used to set up the FIAT model, although this is not mandatory. Users who set up their own FIAT model are advised to save the data in the same [folder structure](data/index.qmd#folder-structure) that HydroMT-FIAT creates.
40 |
41 | ## The models
42 | The data is used by one or both of the following models:
43 |
44 | - GeomModel
45 | - GridModel
46 |
47 | More information about the models can be found [here](../info/models.qmd).
48 |
49 | ## General User Information
50 |
51 | FIAT derives **damages and risk** at the asset level based on flood maps and additional inputs such as depth-damage functions, asset locations and their maximum potential damages.
52 | For each asset specified in the exposure dataset, the water depth or level is sampled from the flood map at the location of the asset.
53 |
54 | ::: {.callout-note}
55 | Water elevations are converted to water depths using the ground elevation of each asset.
56 | :::
57 |
58 | See @fig-fiat for an overview of the FIAT workflow. To obtain the flood inundation level, FIAT extracts the water depth or level at the location of each asset. For line or polygon assets, either the average or maximum water depth or level is extracted from the hazard map, together with the flooded fraction of the asset. The **inundation depth** within buildings, or on top of, for example, roads, is obtained by subtracting the **ground floor height** from the **water depth**. FIAT derives the damage fraction for each asset by interpolating its inundation depth over its depth-damage curve. The damage to the asset is then calculated as the product of the maximum potential damage and the damage fraction. If an asset is only partially flooded, the damage is reduced by the dry fraction of the asset. Instead of single events, the user can also provide return-period flood maps as input, in which case FIAT calculates the associated return-period damages and integrates them to derive the expected annual damages.
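The per-asset calculation described above can be sketched in a few lines of Python (a minimal illustration; the function and variable names below are not FIAT's actual API):

```python
import numpy as np

def asset_damage(water_depth, ground_floor_height, max_damage,
                 curve_depths, curve_fractions, wet_fraction=1.0):
    """Damage for a single asset, following the workflow described above."""
    # 1. Inundation depth: sampled water depth minus the ground floor height.
    inun_depth = water_depth - ground_floor_height
    # 2. Damage fraction: interpolate over the depth-damage curve.
    fraction = np.interp(inun_depth, curve_depths, curve_fractions)
    # 3. Damage: fraction times the maximum potential damage, reduced for
    #    partially flooded assets by the wet fraction.
    return max_damage * fraction * wet_fraction

# A linear damage curve from 0 m to 2 m, an asset with a 0.5 m ground
# floor height and a maximum potential damage of 100,000:
damage = asset_damage(1.2, 0.5, 100_000, [0.0, 2.0], [0.0, 1.0])
# 0.7 m inundation depth -> damage fraction 0.35 -> damage 35,000
```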
59 |
60 |
61 | {#fig-fiat}
63 |
--------------------------------------------------------------------------------
/docs/user_guide/data/hazard_maps.qmd:
--------------------------------------------------------------------------------
1 | ---
2 | title: Types of hazard maps
3 | ---
4 | ::: {.callout-tip}
5 | The recommended format for both event and risk maps is netCDF.
6 | :::
7 |
8 | ## Event
9 | Event maps are generally supplied as single-band files (i.e. one band per event map).
10 | This band simply contains the hazard values per cell. Let's have a quick peek at the data using [gdalinfo](https://gdal.org/programs/gdalinfo.html):
11 |
12 | :::: {.columns}
13 | ::: {style="height: 400px; overflow: auto;"}
14 | ```{python}
15 | #| echo: false
16 | !gdalinfo ../../../.testdata/hazard/event_map.nc
17 | ```
18 | :::
19 | ::::
20 | ::: {.callout-note}
21 | Although netCDF is the preferred format, GeoTIFFs are also very handy for single events.
22 | :::
23 |
24 | As one can see, this dataset has only one band (variable).
25 |
26 | When creating an event map, it is preferred to compress the data to avoid unnecessary use of disk space.
27 | More information regarding compression and other creation settings can be found [here](https://gdal.org/drivers/raster/netcdf.html#creation-options).
28 |
29 | ## Risk
30 | Risk maps, in general, should contain multiple bands.
31 |
32 | These bands can be supplied in one of two ways:
33 |
34 | - Multiple variables (like normal bands)
35 | - As a subdataset
36 |
37 | Let's take a quick look at the data (again with `gdalinfo`):
38 |
39 | :::: {.columns}
40 | ::: {style="height: 400px; overflow: auto;"}
41 | ```{python}
42 | #| echo: false
43 | !gdalinfo ../../../.testdata/hazard/risk_map.nc
44 | ```
45 | :::
46 | ::::
47 | ::: {.callout-note}
48 | Instead of seeing normal metadata, we only get an overview of the available subdatasets.
49 | :::
50 |
51 | Multiple netCDF variables are normally not recognized as bands by the [netCDF driver](https://gdal.org/drivers/raster/netcdf.html) of GDAL.
52 | Instead, they are interpreted as subdatasets. One can set the key `var_as_band` to `true` within the [settings file](../settings/index.qmd)
53 | to read the subdatasets as bands.
54 |
55 | ```toml
56 | [hazard]
57 | file =
58 | risk = true
59 | [hazard.settings]
60 | var_as_band = true
61 | ```
62 |
63 | When supplied in one subdataset (all bands are within that subdataset),
64 | the `subset` variable within the [settings file](../settings/index.qmd) should be set under the header `hazard.settings`.
65 | The value of `subset` should be the name of the desired subdataset.
66 |
67 | ```toml
68 | [hazard]
69 | file =
70 | risk = true
71 | [hazard.settings]
72 | var_as_band = false
73 | subset =
74 | ```
75 |
76 | ### Return periods
77 |
78 | The return periods can be set in the [settings file](../settings/optional.qmd#hazard) by supplying a list via
79 | the `hazard.return_periods` entry. However, the return periods can also be read directly from the
80 | `return_period` attribute of the hazard bands. This attribute can be set using either [xarray](https://docs.xarray.dev/en/stable/),
81 | [gdal](https://gdal.org/api/python/raster_api.html) or [hydromt](https://deltares.github.io/hydromt/latest/).
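For example, supplying the return periods via the settings file could look like this (the file name and values are illustrative):

```toml
[hazard]
file = "hazard/risk_map.nc"
risk = true
return_periods = [2, 10, 100]
```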
82 |
83 | When present in all bands, this attribute is preferred over the return periods specified in the settings file.
84 | The reason is that the attribute is directly linked to the corresponding band, whereas the link must be
85 | inferred when the return periods are set via the settings file.
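As a sketch, setting this attribute with xarray could look as follows (the variable names and values are illustrative):

```python
import numpy as np
import xarray as xr

# One variable (band) per return period.
return_periods = [2, 10, 100]
ds = xr.Dataset(
    {f"rp_{rp}": (("y", "x"), np.zeros((10, 10))) for rp in return_periods}
)
for rp in return_periods:
    # FIAT can read the return period from this band attribute.
    ds[f"rp_{rp}"].attrs["return_period"] = rp
# ds.to_netcdf("risk_map.nc")  # write out alongside the other hazard data
```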
86 |
87 | Let's have a quick peek at the data using [gdalinfo](https://gdal.org/programs/gdalinfo.html) (it will be at the bottom):
88 |
89 | :::: {.columns}
90 | ::: {style="height: 400px; overflow: auto;"}
91 | ```{python}
92 | #| echo: false
93 | !gdalinfo -sd 1 -norat -nogcp ../../../.testdata/hazard/risk_map.nc
94 | ```
95 | :::
96 | ::::
97 |
--------------------------------------------------------------------------------
/src/fiat/gis/grid.py:
--------------------------------------------------------------------------------
1 | """Only raster methods for FIAT."""
2 |
3 | import gc
4 | import os
5 | from pathlib import Path
6 |
7 | from osgeo import gdal, osr
8 |
9 | from fiat.fio import Grid, GridSource, open_grid
10 | from fiat.util import NOT_IMPLEMENTED
11 |
12 |
13 | def clip(
14 | band: Grid,
15 | gtf: tuple,
16 | idx: tuple,
17 | ):
18 | """Clip a grid.
19 |
20 | Parameters
21 | ----------
22 | band : gdal.Band
23 | _description_
24 | gtf : tuple
25 | _description_
26 | idx : tuple
27 | _description_
28 | """
29 | raise NotImplementedError(NOT_IMPLEMENTED)
30 |
31 |
32 | def reproject(
33 | gs: GridSource,
34 | dst_crs: str,
35 | dst_gtf: list | tuple = None,
36 | dst_width: int = None,
37 | dst_height: int = None,
38 | out_dir: Path | str = None,
39 | resample: int = 0,
40 | ) -> object:
41 | """Reproject (warp) a grid.
42 |
43 | Parameters
44 | ----------
45 | gs : GridSource
46 | Input object.
47 | dst_crs : str
48 | Coordinate reference system (projection). An accepted format is `EPSG:3857`.
49 | dst_gtf : list | tuple, optional
50 | The geotransform of the warped dataset. Must be defined in the same
51 | coordinate reference system as dst_crs. When defined, it is only used when
52 | both 'dst_width' and 'dst_height' are defined.
53 | dst_width : int, optional
54 | The width of the warped dataset in pixels.
55 | dst_height : int, optional
56 | The height of the warped dataset in pixels.
57 | out_dir : Path | str, optional
58 | Output directory. If not defined, it will be inferred from the input object.
59 | resample : int, optional
60 | Resampling method during warping. The integer corresponds to a resampling
61 | method defined by GDAL. For more information, click \
62 | [here](https://gdal.org/api/gdalwarp_cpp.html#_CPPv415GDALResampleAlg).
63 |
64 | Returns
65 | -------
66 | GridSource
67 | Output object. A lazy reading of the newly created raster file.
68 | """
69 | _gs_kwargs = gs._kwargs
70 |
71 | if not Path(str(out_dir)).is_dir():
72 | out_dir = gs.path.parent
73 |
74 | fname_int = Path(out_dir, f"{gs.path.stem}_repr.tif")
75 | fname = Path(out_dir, f"{gs.path.stem}_repr{gs.path.suffix}")
76 |
77 | out_srs = osr.SpatialReference()
78 | out_srs.SetFromUserInput(dst_crs)
79 | out_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
80 |
81 | warp_kw = {}
82 | if all([item is not None for item in [dst_gtf, dst_width, dst_height]]):
83 | warp_kw.update(
84 | {
85 | "xRes": dst_gtf[1],
86 | "yRes": dst_gtf[5],
87 | "outputBounds": (
88 | dst_gtf[0],
89 | dst_gtf[3] + dst_gtf[5] * dst_height,
90 | dst_gtf[0] + dst_gtf[1] * dst_width,
91 | dst_gtf[3],
92 | ),
93 | "width": dst_width,
94 | "height": dst_height,
95 | }
96 | )
97 |
98 | dst_src = gdal.Warp(
99 | str(fname_int),
100 | gs.src,
101 | srcSRS=gs.srs,
102 | dstSRS=out_srs,
103 | resampleAlg=resample,
104 | **warp_kw,
105 | )
106 |
107 | out_srs = None
108 |
109 | if gs.path.suffix == ".tif":
110 | gs.close()
111 | dst_src = None
112 | return open_grid(fname_int)
113 |
114 | gs.close()
115 | gdal.Translate(str(fname), dst_src)
116 | dst_src = None
117 | gc.collect()
118 |
119 | os.unlink(fname_int)
120 |
121 | return open_grid(fname, **_gs_kwargs)
122 |
--------------------------------------------------------------------------------
/src/fiat/gis/geom.py:
--------------------------------------------------------------------------------
1 | """Only vector methods for FIAT."""
2 |
3 | import gc
4 | from pathlib import Path
5 |
6 | from osgeo import ogr, osr
7 |
8 | from fiat.fio import BufferedGeomWriter, GeomSource, open_geom
9 |
10 |
11 | def point_in_geom(
12 | ft: ogr.Feature,
13 | ) -> tuple:
14 | """Create a point within a polygon.
15 |
16 | This is in essence a very lazy centroid. Keep in mind though, it can differ quite
17 | a bit from the actual centroid.
18 |
19 | Parameters
20 | ----------
21 | ft : ogr.Feature
22 | The feature (polygon or linestring) in which to create the point.
23 |
24 | Returns
25 | -------
26 | tuple
27 | The x and y coordinate of the created point.
28 | """
29 | geom = ft.GetGeometryRef()
30 | p = geom.PointOnSurface()
31 | geom = None
32 | return p.GetX(), p.GetY()
33 |
34 |
35 | def reproject_feature(
36 | geometry: ogr.Geometry,
37 | src_crs: str,
38 | dst_crs: str,
39 | ) -> None:
40 | """Transform the geometry (or geometries) of a feature in place.
41 |
42 | Parameters
43 | ----------
44 | geometry : ogr.Geometry
45 | The geometry.
46 | src_crs : str
47 | Coordinate reference system of the feature.
48 | dst_crs : str
49 | Coordinate reference system to which the feature is transformed.
50 | """
51 | src_srs = osr.SpatialReference()
52 | src_srs.SetFromUserInput(src_crs)
53 | src_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
54 | dst_srs = osr.SpatialReference()
55 | dst_srs.SetFromUserInput(dst_crs)
56 | dst_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
57 |
58 | transform = osr.CoordinateTransformation(src_srs, dst_srs)
59 | geometry.Transform(transform)
60 |
61 | src_srs = None
62 | dst_srs = None
63 | transform = None
64 |
65 |
66 | def reproject(
67 | gs: GeomSource,
68 | crs: str,
69 | chunk: int = 200000,
70 | out_dir: Path | str = None,
71 | ):
72 | """Reproject a geometry layer.
73 |
74 | Parameters
75 | ----------
76 | gs : GeomSource
77 | Input object.
78 | crs : str
79 | Coordinate reference system (projection). An accepted format is `EPSG:3857`.
80 | chunk : int, optional
81 | The size of the chunks used during reprojecting.
82 | out_dir : Path | str, optional
83 | Output directory. If not defined, it will be inferred from the input object.
84 |
85 | Returns
86 | -------
87 | GeomSource
88 | Output object. A lazy reading of the newly created geometry file.
89 | """
90 | if not Path(str(out_dir)).is_dir():
91 | out_dir = gs.path.parent
92 |
93 | fname = Path(out_dir, f"{gs.path.stem}_repr{gs.path.suffix}")
94 |
95 | out_srs = osr.SpatialReference()
96 | out_srs.SetFromUserInput(crs)
97 | out_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
98 | layer_defn = gs.layer.GetLayerDefn()
99 |
100 | transform = osr.CoordinateTransformation(
101 | gs.srs,
102 | out_srs,
103 | )
104 |
105 | with open_geom(fname, mode="w", overwrite=True) as new_gs:
106 | new_gs.create_layer(out_srs, layer_defn.GetGeomType())
107 | new_gs.set_layer_from_defn(layer_defn)
108 |
109 | mem_gs = BufferedGeomWriter(
110 | fname,
111 | srs=out_srs,
112 | layer_defn=gs.layer.GetLayerDefn(),
113 | buffer_size=chunk,
114 | )
115 |
116 | for ft in gs.layer:
117 | geom = ft.GetGeometryRef()
118 | geom.Transform(transform)
119 |
120 | new_ft = ogr.Feature(mem_gs.buffer.layer.GetLayerDefn())
121 | new_ft.SetFrom(ft)
122 | new_ft.SetGeometry(geom)
123 | mem_gs.add_feature(new_ft)
124 |
125 | geom = None
126 | ft = None
127 | new_ft = None
128 | out_srs = None
129 | transform = None
130 | layer_defn = None
131 |
132 | mem_gs.close()
133 | mem_gs = None
134 | gs.close()
135 | gs = None
136 | gc.collect()
137 |
138 | return open_geom(fname)
139 |
--------------------------------------------------------------------------------
/test/test_run.py:
--------------------------------------------------------------------------------
1 | import copy
2 | from pathlib import Path
3 |
4 | from osgeo import gdal
5 |
6 | from fiat.fio import open_csv, open_grid
7 | from fiat.models import GeomModel, GridModel
8 |
9 |
10 | def run_model(cfg, p):
11 | # Execute
12 | cfg.setup_output_dir(str(p))
13 | model_type = cfg.get("model.type")
14 | if model_type == "geom":
15 | mod = GeomModel(cfg)
16 | elif model_type == "grid":
17 | mod = GridModel(cfg)
18 | mod.run()
19 |
20 |
21 | def test_geom_event(tmp_path, configs):
22 | # run the model
23 | run_model(configs["geom_event"], tmp_path)
24 |
25 | # Check the output for this specific case
26 | out = open_csv(Path(str(tmp_path), "output.csv"), index="object_id")
27 | assert int(float(out[2, "total_damage"])) == 740
28 | assert int(float(out[3, "total_damage"])) == 1038
29 |
30 |
31 | def test_geom_missing(tmp_path, configs):
32 | # run the model
33 | run_model(configs["geom_event_missing"], tmp_path)
34 |
35 | # Check the output for this specific case
36 | assert Path(str(tmp_path), "missing.log").exists()
37 | missing = open(Path(str(tmp_path), "missing.log"), "r")
38 | assert sum(1 for _ in missing) == 1
39 |
40 |
41 | def test_geom_outside(tmp_path, configs):
42 | # run the model
43 | run_model(configs["geom_event_outside"], tmp_path)
44 |
45 | # Check the output for this specific case
46 | data = open_csv(Path(tmp_path, "output.csv"))
47 |
48 | assert data[0, "damage_structure"] == "nan"
49 | assert float(data[0, "total_damage"]) == 0.0
50 | assert float(data[1, "damage_structure"]) == 1804.0
51 |
52 |
53 | def test_geom_risk(tmp_path, configs):
54 | # run the model
55 | run_model(configs["geom_risk"], tmp_path)
56 |
57 | # Check the output for this specific case
58 | out = open_csv(Path(str(tmp_path), "output.csv"), index="object_id")
59 | assert int(float(out[2, "damage_structure_5.0y"])) == 1804
60 | assert int(float(out[4, "total_damage_10.0y"])) == 3840
61 | assert int(float(out[3, "ead_damage"]) * 100) == 102247
62 |
63 |
64 | def test_grid_event(tmp_path, configs):
65 | # run the model
66 | run_model(configs["grid_event"], tmp_path)
67 |
68 | # Check the output for this specific case
69 | src = gdal.OpenEx(
70 | str(Path(str(tmp_path), "output.nc")),
71 | )
72 | arr = src.ReadAsArray()
73 | src = None
74 | assert int(arr[2, 4] * 10) == 14092
75 | assert int(arr[7, 3] * 10) == 8700
76 |
77 | src = gdal.OpenEx(
78 | str(Path(str(tmp_path), "total_damages.nc")),
79 | )
80 | arr = src.ReadAsArray()
81 | src = None
82 | assert int(arr[2, 4] * 10) == 14092
83 | assert int(arr[7, 3] * 10) == 8700
84 |
85 |
86 | def test_grid_unequal(tmp_path, configs):
87 | # Run the model
88 | cfg = copy.deepcopy(configs["grid_unequal"])
89 | run_model(cfg, tmp_path)
90 | # Assert the output
91 | file = Path(tmp_path, "output.nc")
92 | assert file.is_file()
93 | # Check the output
94 | gs = open_grid(file)
95 | assert gs.shape == (10, 10)
96 | gs.close()
97 | gs = None
98 |
99 | # Adjust to prefer the hazard data resolution
100 | cfg = copy.deepcopy(configs["grid_unequal"])
101 | cfg.set("model.grid.prefer", "hazard")
102 | run_model(cfg, tmp_path)
103 |
104 | # Check the output
105 | gs = open_grid(file)
106 | assert gs.shape == (100, 100)
107 |
108 |
109 | def test_grid_risk(tmp_path, configs):
110 | # run the model
111 | run_model(configs["grid_risk"], tmp_path)
112 |
113 | # Check the output for this specific case
114 | src = gdal.OpenEx(
115 | str(Path(str(tmp_path), "ead.nc")),
116 | )
117 | arr = src.ReadAsArray()
118 | src = None
119 | assert int(arr[1, 2] * 10) == 10920
120 | assert int(arr[5, 6] * 10) == 8468
121 |
122 | src = gdal.OpenEx(
123 | str(Path(str(tmp_path), "ead_total.nc")),
124 | )
125 | arr = src.ReadAsArray()
126 | src = None
127 | assert int(arr[1, 2] * 10) == 10920
128 | assert int(arr[5, 6] * 10) == 8468
129 |
--------------------------------------------------------------------------------
/docs/user_guide/data/vulnerability.qmd:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Vulnerability data"
3 | format:
4 | html:
5 | code-fold: true
6 | jupyter: python3
7 | ---
8 | The **vulnerability** of an asset is determined by its building type (e.g. 'residential 1-story building') and the inundation depth, also referred to as water depth, during a flood event. Different assets incur different degrees of damage at varying inundation levels. This **vulnerability** can be quantified via **flood depth-damage** functions, see for example @fig-damagefunction. The damage function relates the water depth to a damage fraction (a value between 0 and 1), which is multiplied by the maximum potential damage of the asset to obtain a damage value (e.g. a damage fraction of 0.4 and a maximum potential damage of 200,000 yield a damage of 80,000). The maximum potential damage differs per asset and must be specified in the [exposure data](exposure.qmd).
9 |
10 | ```{python}
11 | #| echo: false
12 | #| label: fig-damagefunction
13 | #| fig-cap: "Damage functions of different assets/structures. "
14 | import numpy as np
15 | import matplotlib.pyplot as plt
16 | import pandas as pd
17 | from pathlib import Path
18 |
19 |
20 | file_path = Path.cwd()
21 | for _ in range(3):
22 | file_path = file_path.parent
23 |
24 | file_path = Path(file_path, ".testdata", "vulnerability", "vulnerability_curves.csv")
25 | file_path = file_path.resolve()
26 | data = pd.read_csv(file_path, comment='#')
27 | df = pd.DataFrame({
28 | ('water depth'): data.iloc[:, 0],
29 | ('STRUCT1'): data.iloc[:, 1],
30 | ('STRUCT2'): data.iloc[:, 2],
31 |
32 | })
33 |
34 | labels=["STRUCT1","STRUCT2"]
35 | plt.plot(df['water depth'], df['STRUCT1'])
36 | plt.plot(df['water depth'], df['STRUCT2'])
37 | plt.xlabel('Water depth (m)')
38 | plt.ylabel('Fraction of maximum potential damage')
39 | plt.legend(labels)
40 | plt.gca().get_legend().set_title('')
41 | ```
42 |
43 |
44 | The damage functions must be given in a CSV file (`vulnerability.csv`), located in the [vulnerability folder](index.qmd), see for example @tbl-damage-curve. The first column contains the water depth, and each additional column provides the damage fraction for the corresponding damage function. Three header rows are required: the first describes the unit of the water depth, e.g. `#UNIT=m`; the second, named `#METHOD`, must define the aggregation method for each damage curve separately, above the damage curve name; the third contains the damage curve names. The method refers to the way multiple flood values are aggregated per asset in case the *area* method is used for deriving the inundation depth. The damage curve name must coincide with the name of the damage function defined in the [exposure data](exposure.qmd).
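A minimal `vulnerability.csv` following this layout could look like this (values are illustrative):

```csv
#UNIT=m
#METHOD,mean,max
water depth,STRUCT1,STRUCT2
-0.5,0.0,0.0
0.0,0.1,0.2
1.0,0.4,0.6
2.0,1.0,1.0
```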
45 |
46 | ::: {.callout-important}
47 | Water depth units (e.g. feet or meters) must be consistent with the units of the flood hazard map and the exposure data (i.e., ground elevation, ground floor height).
48 | :::
49 |
50 |
51 | ```{python}
52 | #| echo: false
53 | #| label: tbl-damage-curve
54 | #| tbl-cap: "Vulnerability data CSV file. The water depth must be in the outer left column followed by the damage functions. The user has the freedom to add multiple damage curves. "
55 |
56 | import pandas as pd
57 | from pathlib import Path
58 | from IPython.display import HTML
59 |
60 | file_path = Path.cwd()
61 | for _ in range(3):
62 | file_path = file_path.parent
63 |
64 | file_path = Path(file_path, ".testdata", "vulnerability", "vulnerability_curves.csv")
65 | file_path = file_path.resolve()
66 | data = pd.read_csv(file_path, comment='#')
67 |
68 | df = pd.DataFrame({
69 | ('#UNIT=m','#METHOD','water depth'): data.iloc[:, 0],
70 | ('','mean', 'STRUCT1'): data.iloc[:, 1],
71 | ('','max','STRUCT2'): data.iloc[:, 2],
72 | })
73 |
74 | HTML(df.to_html(index=False))
75 | ```
76 |
77 |
78 | Water depths may be negative for assets that incur damage below the ground floor height, and the user is free to choose any water depth increments in the CSV file. The damage functions can have any name, and multiple damage functions can be described in the vulnerability CSV file by simply adding consecutive columns next to one another.
79 |
80 | ::: {.callout-tip}
81 | You can also create damage functions with the [**HydroMT-FIAT model builder**](https://deltares.github.io/hydromt_fiat/latest/#)
82 | :::
83 |
--------------------------------------------------------------------------------
/src/fiat/methods/flood.py:
--------------------------------------------------------------------------------
1 | """Functions specifically for flood risk calculation."""
2 |
3 | import math
4 |
5 | from numpy import isnan
6 | from osgeo import ogr
7 |
8 | from fiat.fio import Table
9 | from fiat.methods.util import AREA_METHODS
10 |
11 | MANDATORY_COLUMNS = ["ground_flht", "ground_elevtn"]
12 | MANDATORY_ENTRIES = ["hazard.elevation_reference"]
13 | NEW_COLUMNS = ["inun_depth"]
14 |
15 |
16 | def calculate_hazard(
17 | hazard: list,
18 | reference: str,
19 | ground_flht: float,
20 | ground_elevtn: float = 0,
21 | method: str = "mean",
22 | ) -> float:
23 | """Calculate the hazard value for flood hazard.
24 |
25 | Parameters
26 | ----------
27 | hazard : list
28 | Raw hazard values.
29 | reference : str
30 | Reference, either 'dem' or 'datum'.
31 | ground_flht : float
32 | The height of the floor of an object (e.g. the door elevation).
33 | ground_elevtn : float, optional
34 | Ground height in reference to e.g. the ocean.
35 | (Needed when 'reference' is 'datum')
36 | method : str, optional
37 | Choose 'max' or 'mean' for either the maximum value or the average,
38 | by default 'mean'.
39 |
40 | Returns
41 | -------
42 | tuple
43 | The representative hazard value and the reduction factor.
44 | """
45 | _ge = 0
46 | if reference.lower() == "datum" and not math.isnan(ground_elevtn):
47 | # The hazard data is referenced to a Datum
48 | # (e.g., for flooding this is the water elevation).
49 | _ge = ground_elevtn
50 |
51 | # Correct for ground elevation and discard values at or below zero.
52 | raw_l = len(hazard)
53 | hazard = [n - _ge for n in hazard if (n - _ge) > 0.0001]
54 |
55 | if not hazard:
56 | return math.nan, math.nan
57 |
58 | redf = 1
59 |
60 | if method.lower() == "mean":
61 | redf = len(hazard) / raw_l
62 |
63 | if len(hazard) > 1:
64 | hazard = AREA_METHODS[method.lower()](hazard)
65 | else:
66 | hazard = hazard[0]
67 |
68 | # Subtract the Ground Floor Height from the hazard value
69 | hazard -= ground_flht
70 |
71 | return hazard, redf
72 |
73 |
74 | def calculate_damage(
75 | hazard_value: float | int,
76 | red_fact: float | int,
77 | ft: ogr.Feature | list,
78 | type_dict: dict,
79 | vuln: Table,
80 | vul_min: float | int,
81 | vul_max: float | int,
82 | vul_round: int,
83 | ) -> tuple:
84 | """Calculate the damage corresponding with the hazard value.
85 |
86 | Parameters
87 | ----------
88 | hazard_value : float | int
89 | The representative hazard value.
90 | red_fact : float | int
91 | The reduction factor: compensates for the part of an object (geometry)
92 | that does not touch the flooded grid cells.
93 | ft : ogr.Feature | list
94 | A feature or feature info (whichever contains the exposure data).
95 | See the docs on running FIAT with and without a csv.
96 | type_dict : dict
97 | The exposure types and corresponding column id's.
98 | vuln : Table
99 | Vulnerability data.
100 | vul_min : float | int
101 | Minimum value of the index of the vulnerability data.
102 | vul_max : float | int
103 | Maximum value of the index of the vulnerability data.
104 | vul_round : int
105 | Significant decimals to be used.
106 |
107 | Returns
108 | -------
109 | tuple
110 | Damage values.
111 | """
112 | # unpack type_dict
113 | fn = type_dict["fn"]
114 | maxv = type_dict["max"]
115 |
116 | # Define outgoing list of values
117 | out = [0] * (len(fn) + 1)
118 |
119 | # Calculate the damage per category, and in total (_td)
120 | total = 0
121 | idx = 0
122 | for key, col in fn.items():
123 | if isnan(hazard_value) or ft[col] is None or ft[col] == "nan":
124 | val = "nan"
125 | else:
126 | hazard_value = max(min(vul_max, hazard_value), vul_min)
127 | f = vuln[round(hazard_value, vul_round), ft[col]]
128 | val = f * ft[maxv[key]] * red_fact
129 | val = round(val, 2)
130 | total += val
131 | out[idx] = val
132 | idx += 1
133 |
134 | out[-1] = round(total, 2)
135 |
136 | return out
137 |
--------------------------------------------------------------------------------
/test/test_checks.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from pathlib import Path
3 |
4 | from fiat import Configurations, GeomModel, open_grid
5 | from fiat.check import (
6 | check_config_entries,
7 | check_exp_derived_types,
8 | check_grid_exact,
9 | check_hazard_rp,
10 | check_hazard_subsets,
11 | check_internal_srs,
12 | )
13 | from fiat.error import FIATDataError
14 | from fiat.util import MANDATORY_MODEL_ENTRIES, discover_exp_columns
15 |
16 |
17 | def test_check_config_entries(settings_files):
18 | settings = settings_files["missing_hazard"]
19 |
20 | try:
21 | cfg = Configurations.from_file(settings)
22 | check_config_entries(cfg.keys(), MANDATORY_MODEL_ENTRIES)
23 | except FIATDataError:
24 | t, v, tb = sys.exc_info()
25 | assert v.msg.startswith("Missing mandatory entries")
26 | assert v.msg.endswith("['hazard.file']")
27 | finally:
28 | assert v
29 |
30 |
31 | def test_check_exp_columns(configs):
32 | cfg = configs["geom_event"]
33 | cfg.set(
34 | "exposure.csv.file",
35 | Path(Path.cwd(), ".testdata", "exposure", "spatial_missing.csv"),
36 | )
37 |
38 | try:
39 | model = GeomModel(cfg)
40 | model.get_exposure_meta()
41 | except FIATDataError:
42 | t, v, tb = sys.exc_info()
43 | assert v.msg == "Missing mandatory exposure columns: ['object_id']"
44 | finally:
45 | assert v
46 |
47 |
48 | def test_check_exp_derived_types(geom_partial_data):
49 | found, found_idx, missing = discover_exp_columns(
50 | geom_partial_data._columns, type="damage"
51 | )
52 | assert missing == ["_content"]
53 | check_exp_derived_types("damage", found, missing)
54 |
55 | found = []
56 | try:
57 | check_exp_derived_types("damage", found, missing)
58 | except FIATDataError:
59 | t, v, tb = sys.exc_info()
60 | assert v.msg.startswith("For type: 'damage' no matching")
61 | finally:
62 | assert v
63 |
64 |
65 | def test_check_exp_index_col(configs):
66 | cfg = configs["geom_event"]
67 | cfg.set("exposure.geom.settings.index", "faulty")
68 |
69 | try:
70 | _ = GeomModel(cfg)
71 | except FIATDataError:
72 | t, v, tb = sys.exc_info()
73 | assert v.msg.startswith("Index column ('faulty') not found")
74 | finally:
75 | assert v
76 |
77 |
78 | def test_check_grid_exact(configs):
79 | exact = configs["grid_event"]
80 | equal = check_grid_exact(
81 | open_grid(exact.get("hazard.file")),
82 | open_grid(exact.get("exposure.grid.file")),
83 | )
84 | assert equal
85 |
86 | unequal = configs["grid_unequal"]
87 | equal = check_grid_exact(
88 | open_grid(unequal.get("hazard.file")),
89 | open_grid(unequal.get("exposure.grid.file")),
90 | )
91 | assert not equal
92 | assert unequal.get("hazard.file").exists()
93 |
94 |
95 | def test_check_hazard_rp():
96 | rp_bands = ["a", "b", "c", "d"]
97 | rp_cfg = [1, 2, 5, 10]
98 |
99 | out = check_hazard_rp(rp_bands, rp_cfg, "")
100 | assert out == [1.0, 2.0, 5.0, 10.0]
101 |
102 | rp_cfg.remove(10)
103 | try:
104 | _ = check_hazard_rp(rp_bands, rp_cfg, Path("file.ext"))
105 | except FIATDataError:
106 | t, v, tb = sys.exc_info()
107 | assert v.msg.startswith(
108 | "'file.ext': cannot determine the return periods \
109 | for the risk calculation"
110 | )
111 | finally:
112 | assert v
113 |
114 |
115 | def test_check_hazard_subsets(grid_event_data, grid_risk_data):
116 | assert grid_event_data.subset_dict is None
117 | check_hazard_subsets(grid_event_data.subset_dict, "")
118 |
119 | try:
120 | assert grid_risk_data.subset_dict is not None
121 | check_hazard_subsets(grid_risk_data.subset_dict, Path("file.ext"))
122 | except FIATDataError:
123 | t, v, tb = sys.exc_info()
124 | assert v.msg.startswith("'file.ext': cannot read this file as there")
125 | assert v
126 |
127 |
128 | def test_check_internal_srs():
129 | try:
130 | check_internal_srs(None, "file")
131 | except FIATDataError:
132 | t, v, tb = sys.exc_info()
133 | assert v.msg.startswith("Coordinate reference system is unknown for 'file'")
134 | finally:
135 | assert v
136 |
--------------------------------------------------------------------------------
/test/conftest.py:
--------------------------------------------------------------------------------
1 | import shutil
2 | from pathlib import Path
3 |
4 | import pytest
5 |
6 | from fiat.cfg import Configurations
7 | from fiat.cli.main import args_parser
8 | from fiat.fio import open_csv, open_geom, open_grid
9 | from fiat.log import LogItem
10 | from fiat.models import GeomModel, GridModel
11 |
12 | _GEOM_FILES = [
13 | "hazard.file",
14 | "exposure.geom.file1",
15 | "exposure.csv.file",
16 | "vulnerability.file",
17 | ]
18 | _MODELS = [
19 | "geom_event",
20 | "geom_event_2g",
21 | "geom_event_missing",
22 | "geom_event_outside",
23 | "geom_risk",
24 | "geom_risk_2g",
25 | "grid_event",
26 | "grid_risk",
27 | "grid_unequal",
28 | "missing_hazard",
29 | "missing_models",
30 | ]
31 | _PATH = Path.cwd()
32 |
33 |
34 | @pytest.fixture
35 | def cli_parser():
36 | return args_parser()
37 |
38 |
39 | @pytest.fixture
40 | def settings_files():
41 | _files = {}
42 | for _m in _MODELS:
43 | p = Path(_PATH, ".testdata", f"{_m}.toml")
44 | p_name = p.stem
45 | _files[p_name] = p
46 | return _files
47 |
48 |
49 | @pytest.fixture
50 | def configs(settings_files):
51 | _cfgs = {}
52 | for key, item in settings_files.items():
53 | if not key.startswith("missing"):
54 | _cfgs[key] = Configurations.from_file(item)
55 | return _cfgs
56 |
57 |
58 | ## Models
59 | @pytest.fixture
60 | def geom_tmp_model(tmp_path, configs):
61 | cfg = configs["geom_event"]
62 | settings_file = Path(tmp_path, "settings.toml")
63 | shutil.copy2(cfg.filepath, settings_file)
64 | for file in _GEOM_FILES:
65 | path = cfg.get(file)
66 | new_path = Path(tmp_path, path.parent.name)
67 | new_path.mkdir(parents=True, exist_ok=True)
68 | shutil.copy2(path, Path(new_path, path.name))
69 | assert settings_file.is_file()
70 | return settings_file
71 |
72 |
73 | @pytest.fixture
74 | def geom_risk(configs):
75 | model = GeomModel(configs["geom_risk"])
76 | return model
77 |
78 |
79 | @pytest.fixture
80 | def grid_risk(configs):
81 | model = GridModel(configs["grid_risk"])
82 | return model
83 |
84 |
85 | ## Data
86 | @pytest.fixture
87 | def geom_data():
88 | d = open_geom(Path(_PATH, ".testdata", "exposure", "spatial.geojson"))
89 | return d
90 |
91 |
92 |
93 | @pytest.fixture(scope="session")
94 | def geom_outside_data():
95 | d = open_geom(Path(_PATH, ".testdata", "exposure", "spatial_outside.geojson"))
96 | return d
97 |
98 |
99 | @pytest.fixture(scope="session")
100 | def geom_partial_data():
101 | d = open_csv(Path(_PATH, ".testdata", "exposure", "spatial_partial.csv"), lazy=True)
102 | return d
103 |
104 |
105 | @pytest.fixture
106 | def grid_event_data():
107 | d = open_grid(Path(_PATH, ".testdata", "hazard", "event_map.nc"))
108 | return d
109 |
110 |
111 | @pytest.fixture(scope="session")
112 | def grid_event_highres_data():
113 | d = open_grid(Path(_PATH, ".testdata", "hazard", "event_map_highres.nc"))
114 | return d
115 |
116 |
117 | @pytest.fixture(scope="session")
118 | def grid_exp_data():
119 | d = open_grid(Path(_PATH, ".testdata", "exposure", "spatial.nc"))
120 | return d
121 |
122 |
123 | @pytest.fixture(scope="session")
124 | def grid_risk_data():
125 | d = open_grid(Path(_PATH, ".testdata", "hazard", "risk_map.nc"))
126 | return d
127 |
128 |
129 | @pytest.fixture(scope="session")
130 | def vul_path():
131 | path = Path(_PATH, ".testdata", "vulnerability", "vulnerability_curves.csv")
132 | assert path.exists()
133 | return path
134 |
135 |
136 | @pytest.fixture(scope="session")
137 | def vul_raw_data(vul_path):
138 | with open(vul_path, mode="rb") as f:
139 | data = f.read()
140 | return data
141 |
142 |
143 | @pytest.fixture(scope="session")
144 | def vul_data(vul_path):
145 | d = open_csv(vul_path)
146 | return d
147 |
148 |
149 | @pytest.fixture(scope="session")
150 | def vul_data_win():
151 | d = open_csv(
152 | Path(_PATH, ".testdata", "vulnerability", "vulnerability_curves_win.csv"),
153 | )
154 | return d
155 |
156 |
157 | @pytest.fixture
158 | def log1():
159 | obj = LogItem(level=2, msg="Hello!")
160 | return obj
161 |
162 |
163 | @pytest.fixture
164 | def log2():
165 | obj = LogItem(level=2, msg="Good Bye!")
166 | return obj
167 |
--------------------------------------------------------------------------------
/test/test_gis.py:
--------------------------------------------------------------------------------
1 | import sys
2 |
3 | from numpy import mean
4 |
5 | from fiat.gis import geom, grid, overlay
6 | from fiat.util import get_srs_repr
7 |
8 |
9 | def test_get_srs_repr(geom_data):
10 | out = get_srs_repr(geom_data.srs)
11 | assert out == "EPSG:4326"
12 |
13 |     v = None
14 |     try:
15 |         out = get_srs_repr(None)
16 |     except ValueError:
17 |         t, v, tb = sys.exc_info()
18 |         assert v.args[0].endswith("'srs' can not be 'None'.")
19 |     assert v is not None  # fails if no ValueError was raised
20 |
21 |
22 | def test_clip(geom_data, grid_event_data):
23 | ft = geom_data[3]
24 | hazard = overlay.clip(
25 | ft,
26 | grid_event_data[1],
27 | grid_event_data.geotransform,
28 | )
29 | ft = None
30 |
31 | assert len(hazard) == 6
32 | assert int(round(mean(hazard) * 100, 0)) == 170
33 |
34 |
35 | def test_clip_outside(geom_outside_data, grid_event_data):
36 | ft = geom_outside_data[0]
37 | hazard = overlay.clip(
38 | ft,
39 | grid_event_data[1],
40 | grid_event_data.geotransform,
41 | )
42 | ft = None
43 |
44 | assert len(hazard) == 0
45 |
46 | ft = geom_outside_data[1]
47 | hazard = overlay.clip(
48 | ft,
49 | grid_event_data[1],
50 | grid_event_data.geotransform,
51 | )
52 | ft = None
53 |
54 | assert len(hazard) == 2
55 | assert int(round(mean(hazard) * 100, 0)) == 270
56 |
57 |
58 | def test_clip_weighted(geom_data, grid_event_data):
59 | ft = geom_data[3]
60 | _, weights = overlay.clip_weighted(
61 | ft,
62 | grid_event_data[1],
63 | grid_event_data.geotransform,
64 | upscale=10,
65 | )
66 | assert int(weights[0, 0] * 100) == 90
67 |
68 | _, weights = overlay.clip_weighted(
69 | ft,
70 | grid_event_data[1],
71 | grid_event_data.geotransform,
72 | upscale=100,
73 | )
74 | assert int(weights[0, 0] * 100) == 81
75 |
76 |
77 | def test_pin(geom_data, grid_event_data):
78 | for ft in geom_data:
79 | XY = geom.point_in_geom(ft)
80 |
81 | hazard = overlay.pin(
82 | XY,
83 | grid_event_data[1],
84 | grid_event_data.geotransform,
85 | )
86 |
87 | assert int(round(hazard[0] * 100, 0)) == 160
88 |
89 |
90 | def test_pin_outside(geom_outside_data, grid_event_data):
91 | ft = geom_outside_data[0]
92 | XY = geom.point_in_geom(ft)
93 | hazard = overlay.pin(
94 | XY,
95 | grid_event_data[1],
96 | grid_event_data.geotransform,
97 | )
98 | ft = None
99 |
100 | assert len(hazard) == 0
101 |
102 | ft = geom_outside_data[2]
103 | XY = geom.point_in_geom(ft)
104 | hazard = overlay.pin(
105 | XY,
106 | grid_event_data[1],
107 | grid_event_data.geotransform,
108 | )
109 | ft = None
110 |
111 | assert len(hazard) == 1
112 | assert int(round(hazard[0] * 100, 0)) == 200
113 |
114 |
115 | def test_geom_reproject(tmp_path, geom_data):
116 | dst_crs = "EPSG:3857"
117 | new_gm = geom.reproject(
118 | geom_data,
119 | dst_crs,
120 | out_dir=str(tmp_path),
121 | )
122 |
123 | assert new_gm.srs.GetAuthorityCode(None) == "3857"
124 |
125 |
126 | def test_geom_reproject_single(geom_data):
127 | ft = geom_data[1]
128 | geometry = ft.GetGeometryRef()
129 |
130 | vertices = geometry.GetGeometryRef(0).GetPoints()
131 | assert 4.39 < vertices[0][0] < 4.4
132 |
133 | geom.reproject_feature(
134 | geometry,
135 | src_crs="EPSG:4326",
136 | dst_crs="EPSG:28992",
137 | )
138 |
139 | vertices = geometry.GetGeometryRef(0).GetPoints()
140 | assert 80000 < vertices[0][0] < 90000
141 |
142 |
143 | def test_grid_reproject(tmp_path, grid_event_data):
144 | dst_crs = "EPSG:3857"
145 | new_gr = grid.reproject(
146 | grid_event_data,
147 | dst_crs,
148 | out_dir=str(tmp_path),
149 | )
150 |
151 | assert new_gr.srs.GetAuthorityCode(None) == "3857"
152 |
153 |
154 | def test_grid_reproject_gtf(tmp_path, grid_event_data, grid_event_highres_data):
155 | assert grid_event_highres_data.shape == (100, 100)
156 | new_gr = grid.reproject(
157 | grid_event_highres_data,
158 | get_srs_repr(grid_event_data.srs),
159 | dst_gtf=grid_event_data.geotransform,
160 | dst_width=10,
161 | dst_height=10,
162 | out_dir=str(tmp_path),
163 | )
164 |
165 | assert new_gr.shape == (10, 10)
166 |
--------------------------------------------------------------------------------
/make_env.py:
--------------------------------------------------------------------------------
1 | """A simple script to generate environment.yml files from pyproject.toml."""
2 |
3 | import argparse
4 | import fnmatch
5 | import platform
6 | import re
7 | from pathlib import Path
8 | from sys import version_info
9 | from typing import List
10 |
11 | if version_info.minor >= 11:
12 | from tomllib import load
13 | else:
14 | from tomli import load
15 |
16 | _FILE_DIR = Path(__file__).parent
17 |
18 |
19 | # our quick and dirty implementation of recursive dependencies
20 | def _parse_profile(profile_str: str, opt_deps: dict, project_name: str) -> List[str]:
21 | if profile_str is None or profile_str == "":
22 | return []
23 |
24 | pat = re.compile(r"\s*" + project_name + r"\[(.*)\]\s*")
25 | parsed = []
26 | queue = [f"{project_name}[{x.strip()}]" for x in profile_str.split(",")]
27 | while len(queue) > 0:
28 | dep = queue.pop(0)
29 | if dep == "":
30 | continue
31 | m = pat.match(dep)
32 | if m:
33 |             # if we match the pattern, all list elements have to be dependency groups
34 | dep_groups = [d.strip() for d in m.groups(0)[0].split(",")]
35 | unknown_dep_groups = set(dep_groups) - set(opt_deps.keys())
36 | if len(unknown_dep_groups) > 0:
37 | raise RuntimeError(f"unknown dependency group(s): {unknown_dep_groups}")
38 | queue.extend(dep_groups)
39 | continue
40 |
41 | if dep in opt_deps:
42 | queue.extend([x.strip() for x in opt_deps[dep]])
43 | else:
44 | parsed.append(dep)
45 |
46 | return parsed
47 |
48 |
49 | parser = argparse.ArgumentParser()
50 |
51 | parser.add_argument("profile", default="dev", nargs="?")
52 | parser.add_argument("--output", "-o", default="environment.yml")
53 | parser.add_argument("--channels", "-c", default=None)
54 | parser.add_argument("--name", "-n", default=None)
55 | parser.add_argument("--py-version", "-p", default=None)
56 | args = parser.parse_args()
57 |
58 | # read project metadata and tool configuration from pyproject.toml
59 | with open(Path(_FILE_DIR, "pyproject.toml"), "rb") as f:
60 | toml = load(f)
61 | deps = toml["project"]["dependencies"]
62 | opt_deps = toml["project"]["optional-dependencies"]
63 | project_name = toml["project"]["name"]
64 | # specific conda_install settings
65 | install_config = toml["tool"].get("make_env", {})
66 | conda_only = install_config.get("conda_only", [])
67 | deps_not_in_conda = install_config.get("deps_not_in_conda", [])
68 | channels = install_config.get("channels", ["conda-forge"])
69 | if args.channels is not None:
70 | channels.extend(args.channels.split(","))
71 | channels = list(set(channels))
72 |
73 | # parse environment name
74 | name = args.name
75 | if name is None:
76 | name = project_name.split("_")[1]
77 | if args.profile:
78 | name += f"_{args.profile}"
79 | print(f"Environment name: {name}")
80 |
81 | # parse dependencies groups and flavours
82 | # "min" equals no optional dependencies
83 | deps_to_install = deps.copy()
84 | if args.profile not in ["", "min"]:
85 | extra_deps = _parse_profile(args.profile, opt_deps, project_name)
86 | deps_to_install.extend(extra_deps)
87 |
88 | conda_deps = []
89 | pip_deps = []
90 | for dep in deps_to_install:
91 | if dep in deps_not_in_conda:
92 | pip_deps.append(dep)
93 | else:
94 | conda_deps.append(dep)
95 | if args.py_version is not None:
96 | conda_deps.append(f"python=={args.py_version}")
97 |
98 | pip_deps = sorted(list(set(pip_deps)))
99 |
100 | # Make an exception for the build environment
101 | if args.profile == "build":
102 | if platform.system().lower() == "windows":
103 | py = fnmatch.filter(conda_deps, "python*")
104 | gd = fnmatch.filter(conda_deps, "gdal*")
105 | np = fnmatch.filter(conda_deps, "numpy*")
106 | conda_deps.remove(*gd)
107 | conda_deps.remove(*np)
108 | if py:
109 | conda_deps.remove(*py)
110 | py = ["python==3.12.*"]
111 | pip_deps += conda_deps
112 | conda_deps = []
113 | if py:
114 | conda_deps += py
115 | pip_deps.append(
116 | "https://github.com/cgohlke/geospatial-wheels/releases/download/v2024.2.18/GDAL-3.8.4-cp312-cp312-win_amd64.whl",
117 | )
118 | pip_deps.append("numpy<2.0.0")
119 |
120 | for item in conda_only:
121 | im = fnmatch.filter(pip_deps, item)
122 | pip_deps.remove(*im)
123 | conda_deps.append(*im)
124 |
125 | pip_deps = sorted(list(set(pip_deps)))
126 | pip_deps.append("-e .")
127 |
128 | # add pip as a conda dependency if we have pip deps
129 | if len(pip_deps) > 0:
130 | conda_deps.append("pip")
131 |
132 | # the list(set()) is to remove duplicates
133 | conda_deps_to_install_string = "\n- ".join(sorted(list(set(conda_deps))))
134 | channels_string = "\n- ".join(set(channels))
135 |
136 | # create environment.yml
137 | env_spec = f"""name: {name}
138 |
139 | channels:
140 | - {channels_string}
141 |
142 | dependencies:
143 | - {conda_deps_to_install_string}
144 | """
145 | if len(pip_deps) > 0:
146 | pip_deps_to_install_string = "\n - ".join(pip_deps)
147 | env_spec += f"""- pip:
148 | - {pip_deps_to_install_string}
149 | """
150 |
151 | with open(Path(_FILE_DIR, args.output), "w") as out:
152 | out.write(env_spec)
153 |
--------------------------------------------------------------------------------
/docs/user_guide/settings/index.qmd:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Settings"
3 | ---
4 | The user must set the model settings in the `settings.toml` configuration file.
5 |
6 | Besides the required settings, one can set:
7 |
8 | - input regarding the computational side of FIAT (e.g. chunking, number of threads, etc.),
9 |     - see this [page](computation.qmd)
10 | - optional or additional input that is not required or is more data specific,
11 |     - see this [page](optional.qmd)
12 |
13 | ### Basic input
14 | This section pertains to all input that is vital for running a FIAT model.
15 |
16 | These entries are listed in the table below, with more detailed information per entry underneath it.
17 |
18 | ::: {.callout-note}
19 | File paths in the settings can be relative to the settings.toml file or absolute.
20 | :::
21 |
22 | | Entry | Type | Required | Default |
23 | |:-------------------------------|---------|----------|-----------------|
24 | | **[model]** | | | |
25 | | [type](#model)                 | string  | No       | geom            |
26 | | **[output]** | | | |
27 | | [path](#output) | string | No | output |
28 | | **[output.csv]** | | | |
29 | | [name[n]](#output.csv) | string | No | - |
30 | | **[output.geom]** | | | |
31 | | [name[n]](#output.geom) | string | No | spatial[n].gpkg |
32 | | **[output.grid]** | | | |
33 | | [name](#output.grid) | string | No | ead.nc |
34 | | **[hazard]** | | | |
35 | | [file](#hazard) | string | Yes | |
36 | | [elevation_reference](#hazard) | string | Yes | |
37 | | **[exposure.csv]** | | | |
38 | | [file](#exposure.csv) | string | No | - |
39 | | **[exposure.geom]** | | | |
40 | | [file[n]](#exposure.geom) | string | Yes | |
41 | | **[exposure.grid]** | | | |
42 | | [file](#exposure.grid) | string | Yes | |
43 | | **[vulnerability]** | | | |
44 | | [file](#vulnerability) | string | Yes | |
45 | : Most basic settings file input {#tbl-toml .hover}
46 |
47 | #### [model]
48 |
49 | - `type`: The type of model. Choice between 'geom' and 'grid'.
50 |
51 | #### [output]
52 |
53 | - `path`: The path to the output folder in the working directory.
54 |
55 | #### [output.csv]
56 |
57 | - `name[n]`: The path to the output CSV file(s) that will be created. These are linked to the input geometry files.
58 |
59 | #### [output.geom]
60 |
61 | - `name[n]`: This sets the name and location of the output vector file that contains the geometry, location and the damages per asset.
62 |
63 | ::: {.callout-warning}
64 | If provided, the suffix is mandatory. The suffix should match the suffix of the input geometry file for which it is set.
65 | :::
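
A minimal sketch of this entry, assuming the corresponding input geometry entry is `file1` (the file name itself is hypothetical):

```toml
[output.geom]
name1 = "spatial.gpkg"
```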
66 |
67 | #### [output.grid]
68 |
69 | - `name`: This sets the name and location of the output raster file that contains damages per grid cell.
70 |
71 | #### [hazard]
72 |
73 | - `file`: The file path to the hazard file.
74 |
75 | - `elevation_reference`: This indicates the elevation reference of the flood map. In case of a flood-depth map, this should be "DEM"; in case of a flood-elevation map, it should be "datum".
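
For example, a flood-depth map (water depths relative to the terrain) would be configured as follows (the file path is hypothetical):

```toml
[hazard]
file = "hazard/event_map.tif"
elevation_reference = "DEM"
```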
76 |
77 | #### [exposure.csv]
78 |
79 | - `file`: The path to the exposure CSV file (recommended to be within the [exposure folder](../data/index.qmd)) that contains the [required information](../data/exposure.qmd) per asset. There can only be one exposure CSV file.
80 |
81 | #### [exposure.geom]
82 |
83 | - `file[n]`: The path to the exposure vector file (recommended to be within the [exposure folder](../data/index.qmd)) with the assets' geometry and object_id. Multiple vector files can be provided, hence the '[n]' suffix: the user can create multiple entries (e.g. `file1`, `file2`, etc.).
84 |
85 | ::: {.callout-warning}
86 | The suffix is mandatory. So if only one file is provided, name it `file1`.
87 | :::
88 | ::: {.callout-note}
89 | Only required when running the geometry based model.
90 | :::
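
A sketch with two geometry files, illustrating the mandatory suffixes (the file names are hypothetical):

```toml
[exposure.geom]
file1 = "./exposure/buildings.gpkg"
file2 = "./exposure/roads.gpkg"
```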
91 |
92 | #### [exposure.grid]
93 |
94 | - `file`: The path to the exposure raster file (recommended to be within the [exposure folder](../data/index.qmd)).
95 |
96 | ::: {.callout-note}
97 | Only required when running the raster based model.
98 | :::
99 |
100 | #### [vulnerability]
101 |
102 | - `file`: The path to the vulnerability curves CSV file within the [vulnerability folder](../data/index.qmd) that contains the [damage curves](../data/vulnerability.qmd). Only one vulnerability curves file is allowed.
103 |
104 | ### Example
105 |
106 | An example of a settings file for running a geometry model is given below:
107 |
108 | ```toml
109 | [output]
110 | path = "output"
111 |
112 | [output.csv]
113 | name = "output.csv"
114 |
115 | [output.geom]
116 | name1 = "spatial.gpkg"
117 |
118 | [hazard]
119 | file = "hazard/SL_10yr_reprojected.tif"
120 | elevation_reference = "DEM"
121 | risk = false
122 |
123 | [exposure.geom]
124 | file1 = "./exposure/buildings.gpkg"
125 |
126 | [exposure.csv]
127 | file = "./exposure/exposure.csv"
128 |
129 | [vulnerability]
130 | file = "./vulnerability/vulnerability_curves.csv"
131 | ```
132 |
--------------------------------------------------------------------------------
/res/fiat.svg:
--------------------------------------------------------------------------------
1 |
2 |
156 |
--------------------------------------------------------------------------------
/docs/_static/fiat.svg:
--------------------------------------------------------------------------------
1 |
2 |
156 |
--------------------------------------------------------------------------------
/docs/_static/version.js:
--------------------------------------------------------------------------------
1 | function checkPathExists(url) {
2 | return new Promise((resolve, reject) => {
3 | var xhr = new XMLHttpRequest();
4 | xhr.open('HEAD', url, true);
5 | xhr.onreadystatechange = function() {
6 | if (xhr.readyState === 4) {
7 | if (xhr.status === 200) {
8 | resolve(true);
9 | } else if (xhr.status === 404) {
10 | resolve(false);
11 | } else {
12 | reject(new Error(xhr.statusText));
13 | }
14 | }
15 | };
16 | xhr.onerror = function() {
17 | reject(new Error('Network Error'));
18 | };
19 | xhr.send();
20 | });
21 | }
22 |
23 | window.onload = function() {
24 | // Assuming you have a ul element in your HTML like this:
25 | //
26 |
27 | // Fetch the JSON data
28 | fetch("https://raw.githubusercontent.com/Deltares/Delft-FIAT/gh-pages/switcher.json")
29 | .then(response => response.json())
30 | .then(data => {
31 | console.log('Data loaded:', data); // Log the loaded data
32 |
33 | const dropdown = document.querySelector('#nav-menu-version').nextElementSibling;
34 | console.log('Dropdown element:', dropdown); // Log the dropdown element
35 |
36 | // Clear all existing dropdown items
37 | dropdown.innerHTML = '';
38 |
39 | data.forEach(item => {
40 | console.log('Adding item:', item); // Log the item being added
41 |
42 | // Create a new li element
43 | const li = document.createElement('li');
44 |
45 | // Create a new a element
46 | const a = document.createElement('a');
47 | a.className = 'dropdown-item';
48 | a.href = item.url; // Use the 'url' property as the href
49 | a.textContent = item.name; // Use the 'name' property as the text
50 |
51 | // Add the a element to the li
52 | li.appendChild(a);
53 |
54 | // Add the li to the dropdown
55 | dropdown.appendChild(li);
56 | });
57 |
58 | console.log('Dropdown after adding items:', dropdown); // Log the dropdown after adding items
59 |
60 | // Get all dropdown items within the specific dropdown menu
61 | var dropdownMenu = document.querySelector('#nav-menu-version').nextElementSibling;
62 |
63 | var dropdownItems = dropdownMenu.querySelectorAll('.dropdown-item');
64 |
65 | // Get the current page in chunks
66 | var currentPagePath = window.location.pathname.split('/');
67 |
68 | for (var i = 0; i < dropdownItems.length; i++) {
69 | // Get textcontent
70 | var textContent = dropdownItems[i].textContent;
71 |
72 | // Get the index of the current version
73 | var index = currentPagePath.indexOf(textContent);
74 |
75 | if (index !== -1) {
76 | // Remove the active-item class from all items
77 | for (var j = 0; j < dropdownItems.length; j++) {
78 | dropdownItems[j].classList.remove('active-item');
79 | }
80 |
81 | dropdownItems[i].classList.add('active-item');
82 | break
83 | }
84 | }
85 |
86 | console.log('current page path', currentPagePath);
87 |
88 | // Loop through each dropdown item
89 | for (var i = 0; i < dropdownItems.length; i++) {
90 | // Add click event listener to each item
91 | dropdownItems[i].addEventListener('click', function(event) {
92 | // Prevent default action
93 | event.preventDefault();
94 |
95 | // Get the clicked item's text
96 | var itemText = this.textContent;
97 | // var itemHref = this.getAttribute('href')
98 |
99 | // Loop through each dropdown item again to find a match in the current page's path
100 | for (var j = 0; j < dropdownItems.length; j++) {
101 | // Get the dropdown item's text
102 | var dropdownText = dropdownItems[j].textContent;
103 | console.log('Dropdown item:', dropdownText);
104 |
105 | // Find the index of the dropdownText in the current page's path
106 | var index = currentPagePath.indexOf(dropdownText);
107 |
108 | // If the dropdownText is found in the current page's path
109 | if (index !== -1) {
110 | // Construct the new URL relative to the dropdownText and append the itemText
111 |                     var addElements = currentPagePath.slice(index + 1);
112 |                     var relativePath = '../'.repeat(addElements.length);
113 |                     var newUrl = relativePath + itemText + '/' + addElements.join('/');
114 | console.log('Clicked item:', newUrl);
115 |
116 | // Redirect to the new URL
117 | checkPathExists(newUrl)
118 | .then(exists => {
119 | if (exists) {
120 | window.location.href = newUrl;
121 | } else {
122 | console.log('Path does not exist, referring to home page');
123 | window.location.href = relativePath + itemText + '/';
124 | }
125 | })
126 |
127 | // Exit the loop
128 | break;
129 | }
130 | }
131 | });
132 | }
133 |
134 | })
135 | .catch(error => console.error('Error:', error)); // Log any errors
136 | }
137 |
--------------------------------------------------------------------------------