├── .gitignore
├── CONTRIBUTING.md
├── LICENSE.md
├── README.md
├── docs
│   ├── Makefile
│   ├── api_reference.rst
│   ├── conf.py
│   ├── index.rst
│   ├── installation.rst
│   ├── plot_references
│   │   └── plot_reference.rst
│   ├── quickstart
│   │   └── quickstart.ipynb
│   └── user_guide
│       ├── Customizing_Plots.ipynb
│       ├── Working_with_Geospatial_Data.ipynb
│       └── Working_with_Projections.ipynb
├── environment.yml
├── examples
│   ├── README.rst
│   ├── plot_boston_airbnb_kde.py
│   ├── plot_california_districts.py
│   ├── plot_dc_street_network.py
│   ├── plot_largest_cities_usa.py
│   ├── plot_los_angeles_flights.py
│   ├── plot_melbourne_schools.py
│   ├── plot_minard_napoleon_russia.py
│   ├── plot_ny_state_demographics.py
│   ├── plot_nyc_collision_factors.py
│   ├── plot_nyc_collisions_map.py
│   ├── plot_nyc_collisions_quadtree.py
│   ├── plot_nyc_parking_tickets.py
│   ├── plot_obesity.py
│   ├── plot_san_francisco_trees.py
│   └── plot_usa_city_elevations.py
├── figures
│   ├── dc-street-network.png
│   ├── los-angeles-flights.png
│   ├── nyc-collision-factors.png
│   ├── nyc-parking-tickets.png
│   └── usa-city-elevations.png
├── geoplot
│   ├── __init__.py
│   ├── crs.py
│   ├── datasets.py
│   ├── geoplot.py
│   ├── ops.py
│   └── utils.py
├── setup.py
└── tests
    ├── environment.yml
    ├── mixin_tests.py
    ├── proj_tests.py
    └── viz_tests.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode/
2 | .cache/
3 | .DS_Store
4 | *.pyc
5 | __pycache__/
6 | build/
7 | docs/_build/
8 | _build/
9 | bin/
10 | dist/
11 | geoplot.egg-info/
12 | tests/baseline/
13 | .pytest_cache/
14 |
15 | # sphinx-gallery
16 | *.md5
17 | docs/gallery/
18 | *.zip
19 | examples/*.png
20 | _downloads
21 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing
2 |
3 | ## Cloning
4 |
5 | To work on `geoplot` locally, you will need to clone it.
6 |
7 | ```bash
8 | git clone https://github.com/ResidentMario/geoplot.git
9 | ```
10 |
11 | You can then create your own branch of the code and work on your changes for a pull request from there.
12 |
13 | ```bash
14 | cd geoplot
15 | git checkout -B new-branch-name
16 | ```
17 |
18 | ## Environment
19 |
20 | To install the `geoplot` development environment run the following in the root directory:
21 |
22 | ```bash
23 | conda env create -f environment.yml
24 | conda activate geoplot-dev
25 | pip install -e .[develop]
26 | ```
27 |
28 | ## Testing
29 |
30 | `geoplot` tests are located in the `tests` folder. Any PRs you submit should eventually pass all of the tests located in this folder.
31 |
32 | `mixin_tests.py` contains static unit tests which can be run via `pytest` in the usual way (by running `pytest mixin_tests.py` from the command line).
33 |
34 | `proj_tests.py` and `viz_tests.py` are visual tests run via the `pytest-mpl` plugin: [see here](https://github.com/matplotlib/pytest-mpl#using) for instructions on how it's used. These tests are passed by visual inspection: i.e. does the output figure look the way it _should_ look, given the inputs?
35 |
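For example (a minimal sketch; the `baseline` output folder name is just a suggestion, and it is already listed in `.gitignore`), you can render the figures produced by the visual tests and then inspect them by eye:

```bash
cd tests
# write the figure produced by each visual test into ./baseline for inspection
pytest --mpl-generate-path=baseline proj_tests.py
pytest --mpl-generate-path=baseline viz_tests.py
```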
36 | ## Documentation
37 |
38 | Documentation is provided via `sphinx`. To regenerate the documentation from the current source in one shot, navigate to the `docs` folder and run `make html`. Alternatively, to regenerate a single specific part of the documentation, see the corresponding section below.
39 |
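For example (a sketch, assuming the development environment from the previous section is active):

```bash
cd docs
make html
# the built site lands in docs/_build/html
open _build/html/index.html  # macOS; use xdg-open on Linux
```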
40 | ### Static example images
41 |
42 | The static example images on the repo and documentation homepages are located in the `figures/` folder.
43 |
44 | ### Gallery
45 |
46 | The gallery is generated using `sphinx-gallery`, and uses the `examples/` folder as its source. The webmap examples are hosted on [bl.ocks.org](https://bl.ocks.org/) and linked to from their gallery landing pages.
47 |
48 | ### Quickstart
49 |
50 | The Quickstart is a Jupyter notebook in the `docs/quickstart/` directory. To rebuild the quickstart, edit the notebook, then run `make html` again.
51 |
52 | ### Tutorials
53 |
54 | The tutorials are Jupyter notebooks in the `docs/user_guide/` directory. To rebuild the tutorials, edit the notebook(s), then run `make html` again.
55 |
56 | ### Example data
57 |
58 | Most of the image resources in the documentation use real-world example data that is packaged as an accessory to this library. The home repo for these datasets is the [`geoplot-data`](https://github.com/ResidentMario/geoplot-data) repository. Use the `geoplot.datasets.get_path` function to get a path to a specific dataset readable by `geopandas`.
59 |
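For example (a minimal sketch using a dataset that appears throughout the documentation):

```python
import geopandas as gpd
import geoplot as gplt

# resolve the path to a sample dataset and read it into a GeoDataFrame
nyc_boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
```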
60 | ### Everything else
61 |
62 | The remaining pages are all written as `rst` files accessible from the top level of the `docs` folder.
63 |
64 | ### Serving
65 |
66 | The documentation is served at [residentmario.github.io](https://residentmario.github.io/geoplot/index.html) via GitHub Pages, out of the `gh-pages` branch. To publish a new version of the documentation to the website, run the following:
67 |
68 | ```bash
69 | git checkout gh-pages
70 | rm -rf *
71 | git checkout master -- docs/ examples/ geoplot/ .gitignore
72 | cd docs
73 | make html
74 | cd ..
75 | mv docs/_build/html/* ./
76 | rm -rf docs/ examples/ geoplot/
77 | git add .
78 | git commit -m "Publishing update docs..."
79 | git push origin gh-pages
80 | git checkout master
81 | ```
82 |
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | The MIT License
2 |
3 | Copyright (c) 2016 Aleksey Bilogur
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in
13 | all copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
21 | THE SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # geoplot: geospatial data visualization
2 |
3 | [conda-forge feedstock](https://github.com/conda-forge/geoplot-feedstock) · [Zenodo](https://zenodo.org/record/3475569)
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 | `geoplot` is a high-level Python geospatial plotting library. It's an extension to `cartopy` and `matplotlib` which makes mapping easy: like `seaborn` for geospatial. It comes with the following features:
26 |
27 | * **High-level plotting API**: geoplot is cartographic plotting for the 90% of use cases. All of the standard-bearer maps that you’ve probably seen in your geography textbook are easily accessible.
28 | * **Native projection support**: The most fundamental peculiarity of geospatial plotting is projection: how do you unroll a sphere onto a flat surface (a map) in an accurate way? The answer depends on what you’re trying to depict. `geoplot` provides these options.
29 | * **Compatibility with `matplotlib`**: While `matplotlib` is not a good fit for working with geospatial data directly, it’s a format that’s well-supported by other tools.
30 |
31 | Installation is simple with `conda install geoplot -c conda-forge`. [See the documentation for help getting started](https://residentmario.github.io/geoplot/index.html).
32 |
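Here is a minimal example, adapted from the Plot Reference section of the documentation (it assumes the sample datasets used throughout the docs are reachable via `geoplot.datasets.get_path`):

```python
import geopandas as gpd
import geoplot as gplt
import geoplot.crs as gcrs

# read a sample dataset of the contiguous United States
contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))

# draw a choropleth of state populations in an equal-area projection
gplt.choropleth(
    contiguous_usa, hue='population',
    projection=gcrs.AlbersEqualArea(), cmap='Greens', legend=True
)
```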
33 | ----
34 |
35 | Author note: `geoplot` is currently in a **maintenance** state. I will continue to provide bugfixes and investigate user-reported issues on a best-effort basis, but do not expect to see any new library features anytime soon.
36 |
--------------------------------------------------------------------------------
/docs/Makefile:
--------------------------------------------------------------------------------
1 | # Minimal makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line.
5 | SPHINXOPTS =
6 | SPHINXBUILD = sphinx-build
7 | SOURCEDIR = .
8 | BUILDDIR = _build
9 |
10 | # Put it first so that "make" without argument is like "make help".
11 | help:
12 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
13 |
14 | .PHONY: help Makefile
15 |
16 | # Catch-all target: route all unknown targets to Sphinx using the new
17 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
18 | %: Makefile
19 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
20 |
--------------------------------------------------------------------------------
/docs/api_reference.rst:
--------------------------------------------------------------------------------
1 | =============
2 | API Reference
3 | =============
4 |
5 | Plots
6 | -----
7 |
8 | .. currentmodule:: geoplot
9 |
10 | .. automethod:: geoplot.pointplot
11 |
12 | .. automethod:: geoplot.polyplot
13 |
14 | .. automethod:: geoplot.webmap
15 |
16 | .. automethod:: geoplot.choropleth
17 |
18 | .. automethod:: geoplot.kdeplot
19 |
20 | .. automethod:: geoplot.cartogram
21 |
22 | .. automethod:: geoplot.sankey
23 |
24 | .. automethod:: geoplot.quadtree
25 |
26 | .. automethod:: geoplot.voronoi
27 |
28 | Projections
29 | -----------
30 |
31 | .. automethod:: geoplot.crs.PlateCarree
32 |
33 | .. automethod:: geoplot.crs.LambertCylindrical
34 |
35 | .. automethod:: geoplot.crs.Mercator
36 |
37 | .. automethod:: geoplot.crs.WebMercator
38 |
39 | .. automethod:: geoplot.crs.Miller
40 |
41 | .. automethod:: geoplot.crs.Mollweide
42 |
43 | .. automethod:: geoplot.crs.Robinson
44 |
45 | .. automethod:: geoplot.crs.Sinusoidal
46 |
47 | .. automethod:: geoplot.crs.InterruptedGoodeHomolosine
48 |
49 | .. automethod:: geoplot.crs.Geostationary
50 |
51 | .. automethod:: geoplot.crs.NorthPolarStereo
52 |
53 | .. automethod:: geoplot.crs.SouthPolarStereo
54 |
55 | .. automethod:: geoplot.crs.Gnomonic
56 |
57 | .. automethod:: geoplot.crs.AlbersEqualArea
58 |
59 | .. automethod:: geoplot.crs.AzimuthalEquidistant
60 |
61 | .. automethod:: geoplot.crs.LambertConformal
62 |
63 | .. automethod:: geoplot.crs.Orthographic
64 |
65 | .. automethod:: geoplot.crs.Stereographic
66 |
67 | .. automethod:: geoplot.crs.TransverseMercator
68 |
69 | .. automethod:: geoplot.crs.LambertAzimuthalEqualArea
70 |
71 | .. automethod:: geoplot.crs.OSGB
72 |
73 | .. automethod:: geoplot.crs.EuroPP
74 |
75 | .. automethod:: geoplot.crs.OSNI
76 |
77 | .. automethod:: geoplot.crs.EckertI
78 |
79 | .. automethod:: geoplot.crs.EckertII
80 |
81 | .. automethod:: geoplot.crs.EckertIII
82 |
83 | .. automethod:: geoplot.crs.EckertIV
84 |
85 | .. automethod:: geoplot.crs.EckertV
86 |
87 | .. automethod:: geoplot.crs.EckertVI
88 |
89 | .. automethod:: geoplot.crs.NearsidePerspective
90 |
91 | Utilities
92 | ---------
93 |
94 | .. automethod:: geoplot.datasets.get_path
--------------------------------------------------------------------------------
/docs/conf.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Configuration file for the Sphinx documentation builder.
4 | #
5 | # This file does only contain a selection of the most common options. For a
6 | # full list see the documentation:
7 | # https://www.sphinx-doc.org/en/master/config
8 |
9 | # -- Path setup --------------------------------------------------------------
10 |
11 | # If extensions (or modules to document with autodoc) are in another directory,
12 | # add these directories to sys.path here. If the directory is relative to the
13 | # documentation root, use os.path.abspath to make it absolute, like shown here.
14 | #
15 | # import os
16 | # import sys
17 | # sys.path.insert(0, os.path.abspath('.'))
18 |
19 |
20 | # -- Project information -----------------------------------------------------
21 |
22 | project = 'geoplot'
23 | copyright = '2016-2022, Aleksey Bilogur'
24 | author = 'Aleksey Bilogur'
25 |
26 | # The short X.Y version
27 | version = ''
28 | # The full version, including alpha/beta/rc tags
29 | release = '0.5.1'
30 |
31 |
32 | # -- General configuration ---------------------------------------------------
33 |
34 | # If your documentation needs a minimal Sphinx version, state it here.
35 | #
36 | # needs_sphinx = '1.0'
37 |
38 | # Add any Sphinx extension module names here, as strings. They can be
39 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 | # ones.
41 | extensions = [
42 | 'sphinx.ext.autodoc',
43 | 'sphinx.ext.viewcode',
44 | 'sphinx.ext.githubpages',
45 | 'sphinx.ext.napoleon',
46 | 'matplotlib.sphinxext.plot_directive',
47 | 'nbsphinx',
48 | 'sphinx_gallery.gen_gallery'
49 | ]
50 |
51 | # Sphinx gallery configuration
52 | sphinx_gallery_conf = {
53 | 'examples_dirs': '../examples', # path to scripts
54 | 'gallery_dirs': 'gallery', # path to save generated examples to
55 | }
56 |
57 | # Plot directive configuration
58 | plot_include_source = True
59 | plot_rcparams = {
60 | 'savefig.bbox': 'tight',
61 | 'savefig.pad_inches': 0.25,
62 | }
63 |
64 | # Add any paths that contain templates here, relative to this directory.
65 | templates_path = ['_templates']
66 |
67 | # The suffix(es) of source filenames.
68 | # You can specify multiple suffix as a list of string:
69 | #
70 | # source_suffix = ['.rst', '.md']
71 | source_suffix = '.rst'
72 |
73 | # The master toctree document.
74 | master_doc = 'index'
75 |
76 | # The language for content autogenerated by Sphinx. Refer to documentation
77 | # for a list of supported languages.
78 | #
79 | # This is also used if you do content translation via gettext catalogs.
80 | # Usually you set "language" from the command line for these cases.
81 | language = None
82 |
83 | # List of patterns, relative to source directory, that match files and
84 | # directories to ignore when looking for source files.
85 | # This pattern also affects html_static_path and html_extra_path.
86 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
87 |
88 | # The name of the Pygments (syntax highlighting) style to use.
89 | pygments_style = None
90 |
91 |
92 | # -- Options for HTML output -------------------------------------------------
93 |
94 | # The theme to use for HTML and HTML Help pages. See the documentation for
95 | # a list of builtin themes.
96 | #
97 | html_theme = 'sphinx_rtd_theme'
98 |
99 | # Theme options are theme-specific and customize the look and feel of a theme
100 | # further. For a list of options available for each theme, see the
101 | # documentation.
102 | #
103 | # html_theme_options = {}
104 |
105 | # Add any paths that contain custom static files (such as style sheets) here,
106 | # relative to this directory. They are copied after the builtin static files,
107 | # so a file named "default.css" will overwrite the builtin "default.css".
108 | html_static_path = ['_static']
109 |
110 | # Custom sidebar templates, must be a dictionary that maps document names
111 | # to template names.
112 | #
113 | # The default sidebars (for documents that don't match any pattern) are
114 | # defined by theme itself. Builtin themes are using these templates by
115 | # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
116 | # 'searchbox.html']``.
117 | #
118 | # html_sidebars = {}
119 |
120 |
121 | # -- Options for HTMLHelp output ---------------------------------------------
122 |
123 | # Output file base name for HTML help builder.
124 | htmlhelp_basename = 'geoplotdoc'
125 |
126 |
127 | # -- Options for LaTeX output ------------------------------------------------
128 |
129 | latex_elements = {
130 | # The paper size ('letterpaper' or 'a4paper').
131 | #
132 | # 'papersize': 'letterpaper',
133 |
134 | # The font size ('10pt', '11pt' or '12pt').
135 | #
136 | # 'pointsize': '10pt',
137 |
138 | # Additional stuff for the LaTeX preamble.
139 | #
140 | # 'preamble': '',
141 |
142 | # Latex figure (float) alignment
143 | #
144 | # 'figure_align': 'htbp',
145 | }
146 |
147 | # Grouping the document tree into LaTeX files. List of tuples
148 | # (source start file, target name, title,
149 | # author, documentclass [howto, manual, or own class]).
150 | latex_documents = [
151 | (master_doc, 'geoplot.tex', 'geoplot Documentation',
152 | 'Aleksey Bilogur', 'manual'),
153 | ]
154 |
155 |
156 | # -- Options for manual page output ------------------------------------------
157 |
158 | # One entry per manual page. List of tuples
159 | # (source start file, name, description, authors, manual section).
160 | man_pages = [
161 | (master_doc, 'geoplot', 'geoplot Documentation',
162 | [author], 1)
163 | ]
164 |
165 |
166 | # -- Options for Texinfo output ----------------------------------------------
167 |
168 | # Grouping the document tree into Texinfo files. List of tuples
169 | # (source start file, target name, title, author,
170 | # dir menu entry, description, category)
171 | texinfo_documents = [
172 | (master_doc, 'geoplot', 'geoplot Documentation',
173 | author, 'geoplot', 'One line description of project.',
174 | 'Miscellaneous'),
175 | ]
176 |
177 |
178 | # -- Options for Epub output -------------------------------------------------
179 |
180 | # Bibliographic Dublin Core info.
181 | epub_title = project
182 |
183 | # The unique identifier of the text. This can be a ISBN number
184 | # or the project homepage.
185 | #
186 | # epub_identifier = ''
187 |
188 | # A unique identification for the text.
189 | #
190 | # epub_uid = ''
191 |
192 | # A list of files that should not be packed into the epub file.
193 | epub_exclude_files = ['search.html']
194 |
195 |
196 | # -- Extension configuration -------------------------------------------------
197 |
--------------------------------------------------------------------------------
/docs/index.rst:
--------------------------------------------------------------------------------
1 | geoplot: geospatial data visualization
2 | ======================================
3 |
4 | .. raw:: html
5 |
6 |
7 |
13 |
14 |
20 |
21 |
27 |
28 |
34 |
35 |
41 |
42 |
43 |
44 |
45 | ``geoplot`` is a high-level Python geospatial plotting library. It's an extension to ``cartopy`` and ``matplotlib``
46 | which makes mapping easy: like ``seaborn`` for geospatial. It comes with the following features:
47 |
48 | * **High-level plotting API**: ``geoplot`` is cartographic plotting for the 90% of use cases. All of the standard-bearer maps that you’ve probably seen in your geography textbook are easily accessible.
49 |
50 | * **Native projection support**: The most fundamental peculiarity of geospatial plotting is projection: how do you unroll a sphere onto a flat surface (a map) in an accurate way? The answer depends on what you’re trying to depict. ``geoplot`` provides these options.
51 |
52 | * **Compatibility with Matplotlib**: While ``matplotlib`` is not a good fit for working with geospatial data directly, it’s a format that’s well-supported by other tools.
53 |
54 | For a brief introduction refer to the `Quickstart`_.
55 |
56 | .. _Quickstart: quickstart/quickstart.ipynb
57 |
58 |
59 | .. toctree::
60 | :maxdepth: 1
61 | :caption: Getting Started
62 |
63 | installation.rst
64 | quickstart/quickstart.ipynb
65 |
66 | .. toctree::
67 | :maxdepth: 1
68 | :caption: User Guide
69 |
70 | user_guide/Working_with_Geospatial_Data.ipynb
71 | user_guide/Working_with_Projections.ipynb
72 | user_guide/Customizing_Plots.ipynb
73 | plot_references/plot_reference.rst
74 |
75 | .. toctree::
76 | :maxdepth: 1
77 | :caption: Gallery
78 |
79 | Gallery
80 |
81 | .. toctree::
82 | :maxdepth: 1
83 | :caption: API Reference
84 |
85 | api_reference.rst
86 |
--------------------------------------------------------------------------------
/docs/installation.rst:
--------------------------------------------------------------------------------
1 | ============
2 | Installation
3 | ============
4 |
5 | ``geoplot`` supports Python 3.7 and higher.
6 |
7 | With Conda (Recommended)
8 | ------------------------
9 |
10 | If you haven't already, `install conda `_. Then run
11 | ``conda install geoplot -c conda-forge`` and you're done. This works on all platforms (Linux, macOS, and Windows).
12 |
13 | Without Conda
14 | -------------
15 |
16 | You can install ``geoplot`` using ``pip install geoplot``. Use caution, however, as this probably will not work on
17 | Windows, and may not work on macOS or Linux either.
18 |
--------------------------------------------------------------------------------
/docs/plot_references/plot_reference.rst:
--------------------------------------------------------------------------------
1 | ==============
2 | Plot Reference
3 | ==============
4 |
5 | Pointplot
6 | ---------
7 |
8 | The ``pointplot`` is a `geospatial scatter plot
9 | `_ that represents each observation in your dataset
10 | as a single point on a map. It is a simple and easily interpretable plot, making it an ideal
11 | choice for showing simple pointwise relationships between observations.
12 |
13 | ``pointplot`` requires, at a minimum, some points for plotting:
14 |
15 | .. plot::
16 | :context: reset
17 |
18 | import geoplot as gplt
19 | import geoplot.crs as gcrs
20 | import geopandas as gpd
21 |
22 | cities = gpd.read_file(gplt.datasets.get_path('usa_cities'))
23 | gplt.pointplot(cities)
24 |
25 |
26 | The ``hue`` parameter applies a colormap to a data column. The ``legend`` parameter toggles a
27 | legend.
28 |
29 | .. plot::
30 | :context: close-figs
31 |
32 | gplt.pointplot(cities, projection=gcrs.AlbersEqualArea(), hue='ELEV_IN_FT', legend=True)
33 |
34 | Change the colormap using ``cmap``. For a categorical colormap, use ``scheme``.
35 |
36 | .. plot::
37 | :context: close-figs
38 |
39 | import mapclassify as mc
40 |
41 | scheme = mc.Quantiles(cities['ELEV_IN_FT'], k=5)
42 | gplt.pointplot(
43 | cities, projection=gcrs.AlbersEqualArea(),
44 | hue='ELEV_IN_FT', scheme=scheme, cmap='inferno_r',
45 | legend=True
46 | )
47 |
48 | Keyword arguments that are not part of the ``geoplot`` API are passed to the underlying
49 | `matplotlib.pyplot.scatter instance
50 | `_,
51 | which can be used to customize the appearance of the
52 | plot. To pass keyword arguments to the legend, use the ``legend_kwargs`` argument.
53 |
54 | .. plot::
55 | :context: close-figs
56 |
57 | gplt.pointplot(
58 | cities, projection=gcrs.AlbersEqualArea(),
59 | hue='ELEV_IN_FT',
60 | legend=True, legend_kwargs={'orientation': 'horizontal'},
61 | edgecolor='lightgray', linewidth=0.5
62 | )
63 |
64 | ``scale`` provides an alternative or additional visual variable. The minimum and maximum size
65 | of the points can be adjusted to fit your data using the ``limits`` parameter. It is often
66 | beneficial to combine both ``scale`` and ``hue`` in a single plot. In this case, you can use
67 | the ``legend_var`` variable to control which visual variable the legend is keyed on.
68 |
69 | .. plot::
70 | :context: close-figs
71 |
72 | gplt.pointplot(
73 | cities, projection=gcrs.AlbersEqualArea(),
74 | hue='ELEV_IN_FT', scale='ELEV_IN_FT', limits=(1, 10), cmap='inferno_r',
75 | legend=True, legend_var='scale'
76 | )
77 |
78 | Polyplot
79 | --------
80 |
81 | The polyplot draws polygons on a map.
82 |
83 | .. plot::
84 | :context: reset
85 |
86 | import geoplot as gplt
87 | import geoplot.crs as gcrs
88 | import geopandas as gpd
89 |
90 | boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
91 | gplt.polyplot(boroughs, projection=gcrs.AlbersEqualArea())
92 |
93 | ``polyplot`` is intended to act as a basemap for other, more interesting plot types.
94 |
95 | .. plot::
96 | :context: close-figs
97 |
98 | collisions = gpd.read_file(gplt.datasets.get_path('nyc_collision_factors'))
99 |
100 | ax = gplt.polyplot(
101 | boroughs, projection=gcrs.AlbersEqualArea(),
102 | edgecolor='None', facecolor='lightgray'
103 | )
104 | gplt.pointplot(
105 | collisions[collisions['BOROUGH'].notnull()],
106 | hue='BOROUGH', ax=ax, legend=True
107 | )
108 |
109 | Webmap
110 | ------
111 |
112 | The webmap creates a static webmap.
113 |
114 | .. plot::
115 | :context: reset
116 |
117 | import geoplot as gplt
118 | import geoplot.crs as gcrs
119 | import geopandas as gpd
120 |
121 | boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
122 |
123 | gplt.webmap(boroughs, projection=gcrs.WebMercator())
124 |
125 | ``webmap`` is intended to act as a basemap for other plot types.
126 |
127 | .. plot::
128 | :context: close-figs
129 |
130 | collisions = gpd.read_file(gplt.datasets.get_path('nyc_collision_factors'))
131 |
132 | ax = gplt.webmap(boroughs, projection=gcrs.WebMercator())
133 | gplt.pointplot(
134 | collisions[collisions['BOROUGH'].notnull()],
135 | hue='BOROUGH', ax=ax, legend=True
136 | )
137 |
138 |
139 | Choropleth
140 | ----------
141 |
142 | A choropleth takes observations that have been aggregated on some meaningful polygonal level
143 | (e.g. census tract, state, country, or continent) and displays the data to the reader using
144 | color. It is likely the most general-purpose and well-known of the specifically spatial plot
145 | types.
146 |
147 | A basic choropleth requires polygonal geometries and a ``hue`` variable.
148 |
149 | .. plot::
150 | :context: reset
151 |
152 | import geoplot as gplt
153 | import geoplot.crs as gcrs
154 | import geopandas as gpd
155 |
156 | contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
157 |
158 | gplt.choropleth(contiguous_usa, hue='population')
159 |
160 | Change the colormap using ``cmap``. The ``legend`` parameter toggles the legend.
161 |
162 | .. plot::
163 | :context: close-figs
164 |
165 | gplt.choropleth(
166 | contiguous_usa, hue='population', projection=gcrs.AlbersEqualArea(),
167 | cmap='Greens', legend=True
168 | )
169 |
170 | Keyword arguments that are not part of the ``geoplot`` API are passed to the underlying
171 | ``matplotlib.patches.Polygon`` objects; this can be used to control plot aesthetics. To pass
172 | keyword arguments to the legend, use the ``legend_kwargs`` argument.
173 |
174 | .. plot::
175 | :context: close-figs
176 |
177 | gplt.choropleth(
178 | contiguous_usa, hue='population', projection=gcrs.AlbersEqualArea(),
179 | edgecolor='white', linewidth=1,
180 | cmap='Greens', legend=True, legend_kwargs={'orientation': 'horizontal'}
181 | )
182 |
183 | To specify a categorical colormap, use ``scheme``.
184 |
185 | .. plot::
186 | :context: close-figs
187 |
188 | import mapclassify as mc
189 |
190 | scheme = mc.FisherJenks(contiguous_usa['population'], k=5)
191 | gplt.choropleth(
192 | contiguous_usa, hue='population', projection=gcrs.AlbersEqualArea(),
193 | edgecolor='white', linewidth=1,
194 | cmap='Greens',
195 | legend=True, legend_kwargs={'loc': 'lower left'},
196 | scheme=scheme
197 | )
198 |
199 | Use ``legend_labels`` and ``legend_values`` to customize the labels and values that appear
200 | in the legend.
201 |
202 | .. plot::
203 | :context: close-figs
204 |
205 | import mapclassify as mc
206 |
207 | scheme = mc.FisherJenks(contiguous_usa['population'], k=5)
208 | gplt.choropleth(
209 | contiguous_usa, hue='population', projection=gcrs.AlbersEqualArea(),
210 | edgecolor='white', linewidth=1,
211 | cmap='Greens', legend=True, legend_kwargs={'loc': 'lower left'},
212 | scheme=scheme,
213 | legend_labels=[
214 | '<3 million', '3-6.7 million', '6.7-12.8 million',
215 | '12.8-25 million', '25-37 million'
216 | ]
217 | )
218 |
219 | KDEPlot
220 | -------
221 |
222 | `Kernel density estimation `_ is a
223 | technique that non-parametrically estimates a distribution function for a sample of point
224 | observations. KDEs are a popular tool for analyzing data distributions; this plot applies this
225 | technique to the geospatial setting.
226 |
227 | A basic ``kdeplot`` takes pointwise data as input. For interpretability, let's also plot the
228 | underlying borough geometry.
229 |
230 | .. plot::
231 | :context: reset
232 |
233 | import geoplot as gplt
234 | import geoplot.crs as gcrs
235 | import geopandas as gpd
236 |
237 | boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
238 | collisions = gpd.read_file(gplt.datasets.get_path('nyc_collision_factors'))
239 |
240 | ax = gplt.polyplot(boroughs, projection=gcrs.AlbersEqualArea())
241 | gplt.kdeplot(collisions, ax=ax)
242 |
243 | ``n_levels`` controls the number of isochrones. ``cmap`` controls the colormap.
244 |
245 | .. plot::
246 | :context: close-figs
247 |
248 | ax = gplt.polyplot(boroughs, projection=gcrs.AlbersEqualArea())
249 | gplt.kdeplot(collisions, n_levels=20, cmap='Reds', ax=ax)
250 |
251 | ``shade`` toggles shaded isochrones. Use ``clip`` to constrain the plot to the surrounding
252 | geometry.
253 |
254 | .. plot::
255 | :context: close-figs
256 |
257 | ax = gplt.polyplot(boroughs, projection=gcrs.AlbersEqualArea(), zorder=1)
258 | gplt.kdeplot(collisions, cmap='Reds', shade=True, clip=boroughs, ax=ax)
259 |
260 | Additional keyword arguments that are not part of the ``geoplot`` API are passed to
261 | `the underlying seaborn.kdeplot instance `_.
262 | One of the most useful of these parameters is ``thresh``, which sets the lowest (basal)
263 | density level of the kernel density estimate that gets drawn.
264 |
265 | .. plot::
266 | :context: close-figs
267 |
268 | ax = gplt.polyplot(boroughs, projection=gcrs.AlbersEqualArea(), zorder=1)
269 | gplt.kdeplot(collisions, cmap='Reds', shade=True, thresh=0.05,
270 | clip=boroughs, ax=ax)
271 |
272 | Cartogram
273 | ---------
274 |
275 | A cartogram distorts (grows or shrinks) polygons on a map according to the magnitude of some
276 | input data. A basic cartogram specifies data, a projection, and a ``scale`` parameter.
277 |
278 | .. plot::
279 | :context: reset
280 |
281 | import geoplot as gplt
282 | import geoplot.crs as gcrs
283 | import geopandas as gpd
284 |
285 | contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
286 |
287 | gplt.cartogram(contiguous_usa, scale='population', projection=gcrs.AlbersEqualArea())
288 |
289 | Toggle the legend with ``legend``. Keyword arguments can be passed to the legend using the
290 | ``legend_kwargs`` argument. These arguments will be passed to the underlying legend.
291 |
292 | .. plot::
293 | :context: close-figs
294 |
295 | gplt.cartogram(
296 | contiguous_usa, scale='population', projection=gcrs.AlbersEqualArea(),
297 | legend=True, legend_kwargs={'loc': 'lower right'}
298 | )
299 |
300 | To add a colormap to the plot, specify ``hue``. Use ``cmap`` to control the colormap. For a
301 | categorical colormap, specify a ``scheme``. In this plot we also add a backing outline of the
302 | original state shapes, for better geospatial context.
303 |
304 | .. plot::
305 | :context: close-figs
306 |
307 | import mapclassify as mc
308 |
309 | scheme = mc.Quantiles(contiguous_usa['population'], k=5)
310 | ax = gplt.cartogram(
311 | contiguous_usa, scale='population', projection=gcrs.AlbersEqualArea(),
312 | legend=True, legend_kwargs={'bbox_to_anchor': (1, 0.9)}, legend_var='hue',
313 | hue='population', scheme=scheme, cmap='Greens'
314 | )
315 | gplt.polyplot(contiguous_usa, facecolor='lightgray', edgecolor='white', ax=ax)
316 |
317 | Use ``legend_labels`` and ``legend_values`` to customize the labels and values that appear
318 | in the legend.
319 |
320 | .. plot::
321 | :context: close-figs
322 |
323 | ax = gplt.cartogram(
324 | contiguous_usa, scale='population', projection=gcrs.AlbersEqualArea(),
325 | legend=True, legend_kwargs={'bbox_to_anchor': (1, 0.9)}, legend_var='hue',
326 | hue='population', scheme=scheme, cmap='Greens',
327 | legend_labels=[
328 | '<1.4 million', '1.4-3.2 million', '3.2-5.6 million',
329 | '5.6-9 million', '9-37 million'
330 | ]
331 | )
332 | gplt.polyplot(contiguous_usa, facecolor='lightgray', edgecolor='white', ax=ax)
333 |
334 | Use the ``limits`` parameter to adjust the minimum and maximum scaling factors.
335 | You can also pass a custom scaling function to ``scale_func`` to apply a
336 | different scale to the plot (the default scaling function is linear); see the
337 | :doc:`/gallery/plot_usa_city_elevations` for an example.
338 |
339 | .. plot::
340 | :context: close-figs
341 |
342 | ax = gplt.cartogram(
343 | contiguous_usa, scale='population', projection=gcrs.AlbersEqualArea(),
344 | legend=True, legend_kwargs={'bbox_to_anchor': (1, 0.9)}, legend_var='hue',
345 | hue='population', scheme=scheme, cmap='Greens',
346 | legend_labels=[
347 | '<1.4 million', '1.4-3.2 million', '3.2-5.6 million',
348 | '5.6-9 million', '9-37 million'
349 | ],
350 | limits=(0.5, 1)
351 | )
352 | gplt.polyplot(contiguous_usa, facecolor='lightgray', edgecolor='white', ax=ax)
353 |
354 | Sankey
355 | ------
356 |
357 | A `Sankey diagram `_ visualizes flow through a
358 | network. It can be used to show the magnitudes of data moving through a system. This plot
359 | brings the Sankey diagram into the geospatial context; useful for analyzing traffic load on a road
360 | network, for example, or travel volumes between different airports.
361 |
362 | A basic ``sankey`` requires a ``GeoDataFrame`` of ``LineString`` or ``MultiPoint`` geometries.
363 | For interpretability, these examples also include world geometry.
364 |
365 | .. plot::
366 | :context: reset
367 |
368 | import geoplot as gplt
369 | import geoplot.crs as gcrs
370 | import geopandas as gpd
371 |
372 | la_flights = gpd.read_file(gplt.datasets.get_path('la_flights'))
373 | world = gpd.read_file(gplt.datasets.get_path('world'))
374 |
375 | ax = gplt.sankey(la_flights, projection=gcrs.Mollweide())
376 | gplt.polyplot(world, ax=ax, facecolor='lightgray', edgecolor='white')
377 | ax.set_global(); ax.outline_patch.set_visible(True)
378 |
379 | ``hue`` adds color gradation to the map. Use ``cmap`` to control the colormap. For a categorical
380 | colormap, specify ``scheme``. ``legend`` toggles a legend.
381 |
382 | .. plot::
383 | :context: close-figs
384 |
385 | import mapclassify as mc
386 |
387 | scheme = mc.Quantiles(la_flights['Passengers'], k=5)
388 | ax = gplt.sankey(
389 | la_flights, projection=gcrs.Mollweide(),
390 | hue='Passengers', cmap='Greens', scheme=scheme, legend=True
391 | )
392 | gplt.polyplot(
393 | world, ax=ax, facecolor='lightgray', edgecolor='white'
394 | )
395 | ax.set_global(); ax.outline_patch.set_visible(True)
396 |
397 | ``scale`` adds volumetric scaling to the plot. ``limits`` can be used to control the minimum
398 | and maximum line width.
399 |
400 | .. plot::
401 | :context: close-figs
402 |
403 | import mapclassify as mc
404 |
405 | scheme = mc.Quantiles(la_flights['Passengers'], k=5)
406 | ax = gplt.sankey(
407 | la_flights, projection=gcrs.Mollweide(),
408 | scale='Passengers', limits=(1, 10),
409 | hue='Passengers', cmap='Greens', scheme=scheme, legend=True
410 | )
411 | gplt.polyplot(
412 | world, ax=ax, facecolor='lightgray', edgecolor='white'
413 | )
414 | ax.set_global(); ax.outline_patch.set_visible(True)
415 |
416 | Keyword arguments can be passed to the legend using the ``legend_kwargs`` argument. These
417 | arguments will be passed to the underlying legend.
418 |
419 | .. plot::
420 | :context: close-figs
421 |
422 | import mapclassify as mc
423 |
424 | scheme = mc.Quantiles(la_flights['Passengers'], k=5)
425 | ax = gplt.sankey(
426 | la_flights, projection=gcrs.Mollweide(),
427 | scale='Passengers', limits=(1, 10),
428 | hue='Passengers', scheme=scheme, cmap='Greens',
429 | legend=True, legend_kwargs={'loc': 'lower left'}
430 | )
431 | gplt.polyplot(
432 | world, ax=ax, facecolor='lightgray', edgecolor='white'
433 | )
434 | ax.set_global(); ax.outline_patch.set_visible(True)
435 |
436 | Voronoi
437 | -------
438 |
439 | The `Voronoi region `_ of a point is the set
440 | of points which are closer to that point than to any other observation in a dataset. A Voronoi
441 | diagram is a space-filling diagram that constructs all of the Voronoi regions of a dataset and
442 | plots them.
443 |
444 | Voronoi plots are efficient for judging point density and, combined with a colormap, can be used
445 | to infer regional trends in a set of data.
446 |
447 | A basic ``voronoi`` specifies some point data. We overlay geometry to aid interpretability.
448 |
449 | .. plot::
450 | :context: reset
451 |
452 | import geoplot as gplt
453 | import geoplot.crs as gcrs
454 | import geopandas as gpd
455 |
456 | boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
457 | injurious_collisions = gpd.read_file(gplt.datasets.get_path('nyc_injurious_collisions'))
458 |
459 | ax = gplt.voronoi(injurious_collisions.head(1000))
460 | gplt.polyplot(boroughs, ax=ax)
461 |
462 | Use ``clip`` to clip the result to surrounding geometry. This is recommended in most cases.
463 | Note that if the clip geometry is complicated, this operation will take a long time; consider
464 | simplifying complex geometries with ``simplify`` to speed it up.
465 |
466 | .. plot::
467 | :context: close-figs
468 |
469 | ax = gplt.voronoi(
470 | injurious_collisions.head(100),
471 | clip=boroughs.simplify(0.001), projection=gcrs.AlbersEqualArea()
472 | )
473 | gplt.polyplot(boroughs, ax=ax)
474 |
475 | Use ``hue`` to add color as a visual variable to the plot. Change the colormap using ``cmap``. To
476 | use a categorical colormap, set ``scheme``. ``legend`` toggles the legend.
477 |
478 | .. plot::
479 | :context: close-figs
480 |
481 | import mapclassify as mc
482 |
483 | scheme = mc.NaturalBreaks(injurious_collisions['NUMBER OF PERSONS INJURED'], k=3)
484 | ax = gplt.voronoi(
485 | injurious_collisions.head(1000), projection=gcrs.AlbersEqualArea(),
486 | clip=boroughs.simplify(0.001),
487 | hue='NUMBER OF PERSONS INJURED', scheme=scheme, cmap='Reds',
488 | legend=True
489 | )
490 | gplt.polyplot(boroughs, ax=ax)
491 |
492 | Keyword arguments that are not part of the ``geoplot`` API are passed to the underlying
493 | ``matplotlib``
494 | `Polygon patches `_,
495 | which can be used to customize the appearance of the plot. To pass keyword arguments to the
496 | legend, use the ``legend_kwargs`` argument.
497 |
498 | .. plot::
499 | :context: close-figs
500 |
501 | import mapclassify as mc
502 |
503 | scheme = mc.NaturalBreaks(injurious_collisions['NUMBER OF PERSONS INJURED'], k=3)
504 | ax = gplt.voronoi(
505 | injurious_collisions.head(1000), projection=gcrs.AlbersEqualArea(),
506 | clip=boroughs.simplify(0.001),
507 | hue='NUMBER OF PERSONS INJURED', scheme=scheme, cmap='Reds',
508 | legend=True,
509 | edgecolor='white', legend_kwargs={'loc': 'upper left'}
510 | )
511 | gplt.polyplot(boroughs, edgecolor='black', zorder=1, ax=ax)
512 |
513 | Quadtree
514 | --------
515 |
516 | A quadtree is a tree data structure which splits a space into increasingly small rectangular
517 | fractals. This plot takes a sequence of point or polygonal geometries as input and builds a
518 | choropleth out of their centroids, where each region is a fractal quadrangle with at least
519 | ``nsig`` observations.
520 |
521 | A quadtree demonstrates density quite effectively. It's more flexible than a conventional
522 | choropleth, and given a sufficiently large number of points `can construct a very detailed
523 | view of a space `_.
524 |
525 | A simple ``quadtree`` specifies a dataset. It's recommended to also set a maximum number of
526 | observations per bin, ``nmax``. The smaller the ``nmax``, the more detailed the plot (the
527 | minimum value is 1).
528 |
529 | .. plot::
530 | :context: reset
531 |
532 | import geoplot as gplt
533 | import geoplot.crs as gcrs
534 | import geopandas as gpd
535 |
536 | boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
537 | collisions = gpd.read_file(gplt.datasets.get_path('nyc_collision_factors'))
538 |
539 | gplt.quadtree(collisions, nmax=1)
540 |
541 | Use ``clip`` to clip the result to surrounding geometry. Note that if the clip geometry is
542 | complicated, this operation will take a long time; consider simplifying complex geometries with
543 | ``simplify`` to speed it up.
544 |
545 | Keyword arguments that are not part of the ``geoplot`` API are passed to the
546 | `underlying matplotlib.patches.Patch instances
547 | `_, which can be used
548 | to customize the appearance of the plot.
549 |
550 | .. plot::
551 | :context: close-figs
552 |
553 | gplt.quadtree(
554 | collisions, nmax=1,
555 | projection=gcrs.AlbersEqualArea(), clip=boroughs.simplify(0.001),
556 | facecolor='lightgray', edgecolor='white'
557 | )
558 |
559 | A basic clipped quadtree plot such as this can be used as an alternative to ``polyplot`` as
560 | a basemap.
561 |
562 | .. plot::
563 | :context: close-figs
564 |
565 | ax = gplt.quadtree(
566 | collisions, nmax=1,
567 | projection=gcrs.AlbersEqualArea(), clip=boroughs.simplify(0.001),
568 | facecolor='lightgray', edgecolor='white', zorder=0
569 | )
570 | gplt.pointplot(collisions, s=1, ax=ax)
571 |
572 | Use ``hue`` to add color as a visual variable to the plot. ``cmap`` controls the colormap
573 | used. ``legend`` toggles the legend. The individual
574 | values of the points included in the partitions are aggregated, and each partition is colormapped
575 | based on this aggregate value.
576 |
577 | This type of plot is an effective gauge of distribution: the less random the plot output, the
578 | more spatially correlated the variable.
579 |
580 | The default aggregation function is ``np.mean``, but you can configure the aggregation
581 | by passing a different function to ``agg``.
582 |
583 | .. plot::
584 | :context: close-figs
585 |
586 | gplt.quadtree(
587 | collisions, nmax=1,
588 | projection=gcrs.AlbersEqualArea(), clip=boroughs.simplify(0.001),
589 | hue='NUMBER OF PEDESTRIANS INJURED', cmap='Reds',
590 | edgecolor='white', legend=True,
591 | )
592 |
593 | To use a categorical colormap, set ``scheme``.
594 |
595 | .. plot::
596 | :context: close-figs
597 |
598 | gplt.quadtree(
599 | collisions, nmax=1,
600 | projection=gcrs.AlbersEqualArea(), clip=boroughs.simplify(0.001),
601 | hue='NUMBER OF PEDESTRIANS INJURED', cmap='Reds', scheme='Quantiles',
602 | edgecolor='white', legend=True
603 | )
604 |
605 | Here is a demo of an alternative aggregation function.
606 |
607 | .. plot::
608 | :context: close-figs
609 |
610 | import numpy as np
611 |
612 | gplt.quadtree(
613 | collisions, nmax=1, agg=np.max,
614 | projection=gcrs.AlbersEqualArea(), clip=boroughs.simplify(0.001),
615 | hue='NUMBER OF PEDESTRIANS INJURED', cmap='Reds',
616 | edgecolor='white', legend=True
617 | )
618 |
--------------------------------------------------------------------------------
/environment.yml:
--------------------------------------------------------------------------------
1 | name: geoplot-dev
2 |
3 | channels:
4 | - conda-forge
5 |
6 | dependencies:
7 | - python==3.8
8 | - geopandas
9 | - mapclassify
10 | - matplotlib
11 | - seaborn
12 | - cartopy
13 |
--------------------------------------------------------------------------------
/examples/README.rst:
--------------------------------------------------------------------------------
1 | Gallery
2 | =======
--------------------------------------------------------------------------------
/examples/plot_boston_airbnb_kde.py:
--------------------------------------------------------------------------------
1 | """
2 | KDEPlot of Boston AirBnB Locations
3 | ==================================
4 |
5 | This example demonstrates a combined application of ``kdeplot`` and ``pointplot`` to a
6 | dataset of AirBnB locations in Boston. The result is outputted to a webmap using the nifty
7 | ``mplleaflet`` library. We sample just 1000 points, which captures the overall trend without
8 | overwhelming the renderer.
9 |
10 | `Click here to see this plot as an interactive webmap.
11 | `_
12 | """
13 |
14 | import geopandas as gpd
15 | import geoplot as gplt
16 | import geoplot.crs as gcrs
17 | import matplotlib.pyplot as plt
18 |
19 | boston_airbnb_listings = gpd.read_file(gplt.datasets.get_path('boston_airbnb_listings'))
20 |
21 | ax = gplt.kdeplot(
22 | boston_airbnb_listings, cmap='viridis', projection=gcrs.WebMercator(), figsize=(12, 12),
23 | shade=True
24 | )
25 | gplt.pointplot(boston_airbnb_listings, s=1, color='black', ax=ax)
26 | gplt.webmap(boston_airbnb_listings, ax=ax)
27 | plt.title('Boston AirBnB Locations, 2016', fontsize=18)
28 |
--------------------------------------------------------------------------------
/examples/plot_california_districts.py:
--------------------------------------------------------------------------------
1 | """
2 | Choropleth of California districts with alternative binning schemes
3 | ===================================================================
4 |
5 | This example demonstrates the continuous and categorical binning schemes available in ``geoplot``
6 | on a sample dataset of California congressional districts. A binning scheme (or classifier) is a
7 | methodology for splitting a sequence of observations into some number of bins (classes). It is also
8 | possible to have no binning scheme, in which case the data is passed through to ``cmap`` as-is.
9 |
10 | The options demonstrated are:
11 |
12 | * scheme=None—A continuous colormap.
13 | * scheme="Quantiles"—Bins the data such that the bins contain equal numbers of samples.
14 | * scheme="EqualInterval"—Bins the data such that bins are of equal length.
15 | * scheme="FisherJenks"—Bins the data using the Fisher natural breaks optimization
16 | procedure.
17 |
18 | To learn more about colormaps in general, refer to the
19 | :ref:`/user_guide/Customizing_Plots.ipynb#hue` reference in the documentation.
20 |
21 | This demo showcases a small subset of the classifiers available in ``mapclassify``, the library
22 | that ``geoplot`` relies on for this feature. To learn more about ``mapclassify``, including how
23 | you can build your own custom ``UserDefined`` classifier, refer to `the mapclassify docs
24 | `_.
25 | """
26 |
27 |
28 | import geopandas as gpd
29 | import geoplot as gplt
30 | import geoplot.crs as gcrs
31 | import mapclassify as mc
32 | import matplotlib.pyplot as plt
33 |
34 | cali = gpd.read_file(gplt.datasets.get_path('california_congressional_districts'))
35 | cali = cali.assign(area=cali.geometry.area)
36 |
37 |
38 | proj = gcrs.AlbersEqualArea(central_latitude=37.16611, central_longitude=-119.44944)
39 | fig, axarr = plt.subplots(2, 2, figsize=(12, 12), subplot_kw={'projection': proj})
40 |
41 | gplt.choropleth(
42 | cali, hue='area', linewidth=0, scheme=None, ax=axarr[0][0]
43 | )
44 | axarr[0][0].set_title('scheme=None', fontsize=18)
45 |
46 | scheme = mc.Quantiles(cali.area, k=5)
47 | gplt.choropleth(
48 | cali, hue='area', linewidth=0, scheme=scheme, ax=axarr[0][1]
49 | )
50 | axarr[0][1].set_title('scheme="Quantiles"', fontsize=18)
51 |
52 | scheme = mc.EqualInterval(cali.area, k=5)
53 | gplt.choropleth(
54 | cali, hue='area', linewidth=0, scheme=scheme, ax=axarr[1][0]
55 | )
56 | axarr[1][0].set_title('scheme="EqualInterval"', fontsize=18)
57 |
58 | scheme = mc.FisherJenks(cali.area, k=5)
59 | gplt.choropleth(
60 | cali, hue='area', linewidth=0, scheme=scheme, ax=axarr[1][1]
61 | )
62 | axarr[1][1].set_title('scheme="FisherJenks"', fontsize=18)
63 |
64 | plt.subplots_adjust(top=0.92)
65 | plt.suptitle('California State Districts by Area, 2010', fontsize=18)
66 |
--------------------------------------------------------------------------------
/examples/plot_dc_street_network.py:
--------------------------------------------------------------------------------
1 | """
2 | Sankey of traffic volumes in Washington DC
3 | ==========================================
4 |
5 | This example plots
6 | `annual average daily traffic volume `_
7 | in Washington DC.
8 | """
9 |
10 | import geopandas as gpd
11 | import geoplot as gplt
12 | import geoplot.crs as gcrs
13 | import matplotlib.pyplot as plt
14 |
15 | dc_roads = gpd.read_file(gplt.datasets.get_path('dc_roads'))
16 |
17 | gplt.sankey(
18 | dc_roads, projection=gcrs.AlbersEqualArea(),
19 | scale='aadt', limits=(0.1, 10), color='black'
20 | )
21 |
22 | plt.title("Streets in Washington DC by Average Daily Traffic, 2015")
23 |
--------------------------------------------------------------------------------
/examples/plot_largest_cities_usa.py:
--------------------------------------------------------------------------------
1 | """
2 | Pointplot of US cities by population
3 | ====================================
4 |
5 | This example, taken from the User Guide, plots cities in the contiguous United States by their
6 | population. It demonstrates some of the range of styling options available in ``geoplot``.
7 | """
8 |
9 |
10 | import geopandas as gpd
11 | import geoplot as gplt
12 | import geoplot.crs as gcrs
13 | import matplotlib.pyplot as plt
14 | import mapclassify as mc
15 |
16 | continental_usa_cities = gpd.read_file(gplt.datasets.get_path('usa_cities'))
17 | continental_usa_cities = continental_usa_cities.query('STATE not in ["AK", "HI", "PR"]')
18 | contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
19 | scheme = mc.Quantiles(continental_usa_cities['POP_2010'], k=5)
20 |
21 | ax = gplt.polyplot(
22 | contiguous_usa,
23 | zorder=-1,
24 | linewidth=1,
25 | projection=gcrs.AlbersEqualArea(),
26 | edgecolor='white',
27 | facecolor='lightgray',
28 | figsize=(12, 7)
29 | )
30 | gplt.pointplot(
31 | continental_usa_cities,
32 | scale='POP_2010',
33 | limits=(2, 30),
34 | hue='POP_2010',
35 | cmap='Blues',
36 | scheme=scheme,
37 | legend=True,
38 | legend_var='scale',
39 | legend_values=[8000000, 2000000, 1000000, 100000],
40 | legend_labels=['8 million', '2 million', '1 million', '100 thousand'],
41 | legend_kwargs={'frameon': False, 'loc': 'lower right'},
42 | ax=ax
43 | )
44 |
45 | plt.title("Large cities in the contiguous United States, 2010")
46 |
--------------------------------------------------------------------------------
/examples/plot_los_angeles_flights.py:
--------------------------------------------------------------------------------
1 | """
2 | Sankey of Los Angeles flight volumes with Cartopy globes
3 | ========================================================
4 |
5 | This example plots passenger volumes for commercial flights out of Los Angeles International
6 | Airport. Some globe-modification options available in ``cartopy`` are demonstrated. Visit
7 | `the cartopy docs `_
8 | for more information.
9 | """
10 |
11 | import geopandas as gpd
12 | import geoplot as gplt
13 | import geoplot.crs as gcrs
14 | import matplotlib.pyplot as plt
15 | import cartopy
16 | import mapclassify as mc
17 |
18 | la_flights = gpd.read_file(gplt.datasets.get_path('la_flights'))
19 | scheme = mc.Quantiles(la_flights['Passengers'], k=5)
20 |
21 | f, axarr = plt.subplots(2, 2, figsize=(12, 12), subplot_kw={
22 | 'projection': gcrs.Orthographic(central_latitude=40.7128, central_longitude=-74.0059)
23 | })
24 | plt.suptitle('Popular Flights out of Los Angeles, 2016', fontsize=16)
25 | plt.subplots_adjust(top=0.95)
26 |
27 | ax = gplt.sankey(
28 | la_flights, scale='Passengers', hue='Passengers', cmap='Purples', scheme=scheme, ax=axarr[0][0]
29 | )
30 | ax.set_global()
31 | ax.outline_patch.set_visible(True)
32 | ax.coastlines()
33 |
34 | ax = gplt.sankey(
35 | la_flights, scale='Passengers', hue='Passengers', cmap='Purples', scheme=scheme, ax=axarr[0][1]
36 | )
37 | ax.set_global()
38 | ax.outline_patch.set_visible(True)
39 | ax.stock_img()
40 |
41 | ax = gplt.sankey(
42 | la_flights, scale='Passengers', hue='Passengers', cmap='Purples', scheme=scheme, ax=axarr[1][0]
43 | )
44 | ax.set_global()
45 | ax.outline_patch.set_visible(True)
46 | ax.gridlines()
47 | ax.coastlines()
48 | ax.add_feature(cartopy.feature.BORDERS)
49 |
50 | ax = gplt.sankey(
51 | la_flights, scale='Passengers', hue='Passengers', cmap='Purples', scheme=scheme, ax=axarr[1][1]
52 | )
53 | ax.set_global()
54 | ax.outline_patch.set_visible(True)
55 | ax.coastlines()
56 | ax.add_feature(cartopy.feature.LAND)
57 | ax.add_feature(cartopy.feature.OCEAN)
58 | ax.add_feature(cartopy.feature.LAKES)
59 | ax.add_feature(cartopy.feature.RIVERS)
60 |
--------------------------------------------------------------------------------
/examples/plot_melbourne_schools.py:
--------------------------------------------------------------------------------
1 | """
2 | Voronoi of Melbourne primary schools
3 | ====================================
4 |
5 | This example shows a ``pointplot`` combined with a ``voronoi`` mapping primary schools in
6 | Melbourne. Schools in outlying, less densely populated areas serve larger zones than those in
7 | central Melbourne.
8 |
9 | This example was inspired by the `Melbourne Schools Zones Webmap `_.
10 | """
11 |
12 | import geopandas as gpd
13 | import geoplot as gplt
14 | import geoplot.crs as gcrs
15 | import matplotlib.pyplot as plt
16 |
17 | melbourne = gpd.read_file(gplt.datasets.get_path('melbourne'))
18 | melbourne_primary_schools = gpd.read_file(gplt.datasets.get_path('melbourne_schools'))\
19 | .query('School_Type == "Primary"')
20 |
21 |
22 | ax = gplt.voronoi(
23 | melbourne_primary_schools, clip=melbourne, linewidth=0.5, edgecolor='white',
24 | projection=gcrs.Mercator()
25 | )
26 | gplt.polyplot(melbourne, edgecolor='None', facecolor='lightgray', ax=ax)
27 | gplt.pointplot(melbourne_primary_schools, color='black', ax=ax, s=1, extent=melbourne.total_bounds)
28 | plt.title('Primary Schools in Greater Melbourne, 2018')
29 |
--------------------------------------------------------------------------------
/examples/plot_minard_napoleon_russia.py:
--------------------------------------------------------------------------------
1 | """
2 | Sankey of Napoleon's march on Moscow with custom colormap
3 | =========================================================
4 |
5 | This example reproduces a famous historical flow map: Charles Joseph Minard's map depicting
6 | Napoleon's disastrously costly 1812 march on Russia during the Napoleonic Wars.
7 |
8 | This plot demonstrates building and using a custom ``matplotlib`` colormap. To learn more refer to
9 | `the matplotlib documentation
10 | `_.
11 |
12 | `Click here `_ to see an
13 | interactive scrolly-panny version of this webmap built with ``mplleaflet``. To learn more about
14 | ``mplleaflet``, refer to `the mplleaflet GitHub repo `_.
15 | """
16 |
17 | import geopandas as gpd
18 | import geoplot as gplt
19 | from matplotlib.colors import LinearSegmentedColormap
20 |
21 | napoleon_troop_movements = gpd.read_file(gplt.datasets.get_path('napoleon_troop_movements'))
22 |
23 | colors = [(215 / 255, 193 / 255, 126 / 255), (37 / 255, 37 / 255, 37 / 255)]
24 | cm = LinearSegmentedColormap.from_list('minard', colors)
25 |
26 | gplt.sankey(
27 | napoleon_troop_movements,
28 | scale='survivors', limits=(0.5, 45),
29 | hue='direction',
30 | cmap=cm
31 | )
32 |
33 | # Uncomment and run the following lines of code to save as an interactive webmap.
34 | # import matplotlib.pyplot as plt
35 | # import mplleaflet
36 | # fig = plt.gcf()
37 | # mplleaflet.save_html(fig, fileobj='minard-napoleon-russia.html')
38 |
--------------------------------------------------------------------------------
/examples/plot_ny_state_demographics.py:
--------------------------------------------------------------------------------
1 | """
2 | Choropleth of New York State population demographics
3 | ====================================================
4 |
5 | This example plots the percentage of residents in New York State by county who self-identified as
6 | "white" in the 2000 census. New York City is far more ethnically diversity than the rest of the
7 | state.
8 | """
9 |
10 |
11 | import geopandas as gpd
12 | import geoplot as gplt
13 | import geoplot.crs as gcrs
14 | import matplotlib.pyplot as plt
15 |
16 | ny_census_tracts = gpd.read_file(gplt.datasets.get_path('ny_census'))
17 | ny_census_tracts = ny_census_tracts.assign(
18 | percent_white=ny_census_tracts['WHITE'] / ny_census_tracts['POP2000']
19 | )
20 |
21 | gplt.choropleth(
22 | ny_census_tracts,
23 | hue='percent_white',
24 | cmap='Purples', linewidth=0.5,
25 | edgecolor='white',
26 | legend=True,
27 | projection=gcrs.AlbersEqualArea()
28 | )
29 | plt.title("Percentage White Residents, 2000")
30 |
--------------------------------------------------------------------------------
/examples/plot_nyc_collision_factors.py:
--------------------------------------------------------------------------------
1 | """
2 | KDEPlot of two NYC traffic accident contributing factors
3 | ========================================================
4 |
5 | This example shows traffic accident densities for two common contributing factors: loss of
6 | consciousness and failure to yield right-of-way. These factors have very different geospatial
7 | distributions: loss of consciousness crashes are more localized to Manhattan.
8 | """
9 |
10 |
11 | import geopandas as gpd
12 | import geoplot as gplt
13 | import geoplot.crs as gcrs
14 | import matplotlib.pyplot as plt
15 |
16 | nyc_boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
17 | nyc_collision_factors = gpd.read_file(gplt.datasets.get_path('nyc_collision_factors'))
18 |
19 |
20 | proj = gcrs.AlbersEqualArea(central_latitude=40.7128, central_longitude=-74.0059)
21 | fig = plt.figure(figsize=(10, 5))
22 | ax1 = plt.subplot(121, projection=proj)
23 | ax2 = plt.subplot(122, projection=proj)
24 |
25 | gplt.kdeplot(
26 | nyc_collision_factors[
27 | nyc_collision_factors['CONTRIBUTING FACTOR VEHICLE 1'] == "Failure to Yield Right-of-Way"
28 | ],
29 | cmap='Reds',
30 | projection=proj,
31 | shade=True, thresh=0.05,
32 | clip=nyc_boroughs.geometry,
33 | ax=ax1
34 | )
35 | gplt.polyplot(nyc_boroughs, zorder=1, ax=ax1)
36 | ax1.set_title("Failure to Yield Right-of-Way Crashes, 2016")
37 |
38 | gplt.kdeplot(
39 | nyc_collision_factors[
40 | nyc_collision_factors['CONTRIBUTING FACTOR VEHICLE 1'] == "Lost Consciousness"
41 | ],
42 | cmap='Reds',
43 | projection=proj,
44 | shade=True, thresh=0.05,
45 | clip=nyc_boroughs.geometry,
46 | ax=ax2
47 | )
48 | gplt.polyplot(nyc_boroughs, zorder=1, ax=ax2)
49 | ax2.set_title("Loss of Consciousness Crashes, 2016")
50 |
--------------------------------------------------------------------------------
/examples/plot_nyc_collisions_map.py:
--------------------------------------------------------------------------------
1 | """
2 | Pointplot of NYC fatal and injurious traffic collisions
3 | =======================================================
4 |
5 | This example plots fatal (>=1 fatality) and injurious (>=1 injury requiring hospitalization)
6 | vehicle collisions in New York City. Injuries are far more common than fatalities.
7 | """
8 |
9 |
10 | import geopandas as gpd
11 | import geoplot as gplt
12 | import geoplot.crs as gcrs
13 | import matplotlib.pyplot as plt
14 |
15 | # load the data
16 | nyc_boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
17 | nyc_fatal_collisions = gpd.read_file(gplt.datasets.get_path('nyc_fatal_collisions'))
18 | nyc_injurious_collisions = gpd.read_file(gplt.datasets.get_path('nyc_injurious_collisions'))
19 |
20 |
21 | fig = plt.figure(figsize=(10, 5))
22 | proj = gcrs.AlbersEqualArea(central_latitude=40.7128, central_longitude=-74.0059)
23 | ax1 = plt.subplot(121, projection=proj)
24 | ax2 = plt.subplot(122, projection=proj)
25 |
26 | ax1 = gplt.pointplot(
27 | nyc_fatal_collisions, projection=proj,
28 | hue='BOROUGH', cmap='Set1',
29 | edgecolor='white', linewidth=0.5,
30 | scale='NUMBER OF PERSONS KILLED', limits=(8, 24),
31 | legend=True, legend_var='scale',
32 | legend_kwargs={'loc': 'upper left', 'markeredgecolor': 'black'},
33 | legend_values=[2, 1], legend_labels=['2 Fatalities', '1 Fatality'],
34 | ax=ax1
35 | )
36 | gplt.polyplot(nyc_boroughs, ax=ax1)
37 | ax1.set_title("Fatal Crashes in New York City, 2016")
38 |
39 | gplt.pointplot(
40 | nyc_injurious_collisions, projection=proj,
41 | hue='BOROUGH', cmap='Set1',
42 | edgecolor='white', linewidth=0.5,
43 | scale='NUMBER OF PERSONS INJURED', limits=(4, 20),
44 | legend=True, legend_var='scale',
45 | legend_kwargs={'loc': 'upper left', 'markeredgecolor': 'black'},
46 | legend_values=[20, 15, 10, 5, 1],
47 | legend_labels=['20 Injuries', '15 Injuries', '10 Injuries', '5 Injuries', '1 Injury'],
48 | ax=ax2
49 | )
50 | gplt.polyplot(nyc_boroughs, ax=ax2, projection=proj)
51 | ax2.set_title("Injurious Crashes in New York City, 2016")
52 |
--------------------------------------------------------------------------------
/examples/plot_nyc_collisions_quadtree.py:
--------------------------------------------------------------------------------
1 | """
2 | Quadtree of NYC traffic collisions
3 | ==================================
4 |
5 | This example plots traffic collisions in New York City. Overlaying a ``pointplot`` on a
6 | ``quadtree`` like this communicates information on two visual channels, position and texture,
7 | simultaneously.
8 | """
9 |
10 |
11 | import geopandas as gpd
12 | import geoplot as gplt
13 | import geoplot.crs as gcrs
14 | import matplotlib.pyplot as plt
15 |
16 | nyc_boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
17 | collisions = gpd.read_file(gplt.datasets.get_path('nyc_collision_factors'))
18 |
19 | ax = gplt.quadtree(
20 | collisions, nmax=1,
21 | projection=gcrs.AlbersEqualArea(), clip=nyc_boroughs,
22 | facecolor='lightgray', edgecolor='white', zorder=0
23 | )
24 | gplt.pointplot(collisions, s=1, ax=ax)
25 |
26 | plt.title("New York Ciy Traffic Collisions, 2016")
27 |
--------------------------------------------------------------------------------
/examples/plot_nyc_parking_tickets.py:
--------------------------------------------------------------------------------
1 | """
2 | Choropleth of parking tickets issued to state by precinct in NYC
3 | ================================================================
4 |
5 | This example plots a subset of parking tickets issued to drivers in New York City.
6 | Specifically, it plots the subset of tickets which are more common for drivers from a given state
7 | than the overall average. This difference between "expected tickets issued" and "actual
8 | tickets issued" is interesting because it shows which areas visitors driving into the city
9 | from a specific state are more likely to visit than their peers from other states.
10 |
11 | Observations that can be made based on this plot include:
12 |
13 | * Only New Yorkers visit Staten Island.
14 | * Drivers from New Jersey, many of whom likely work in New York City, bias towards Manhattan.
15 | * Drivers from Pennsylvania and Connecticut bias towards the borough closest to their state:
16 | The Bronx for Connecticut, Brooklyn for Pennsylvania.
17 |
18 | This example was inspired by the blog post `"Californians love Brooklyn, New Jerseyans love
19 | Midtown: Mapping NYC’s Visitors Through Parking Tickets"
20 | `_.
21 | """
22 |
23 |
24 | import geopandas as gpd
25 | import geoplot as gplt
26 | import geoplot.crs as gcrs
27 | import matplotlib.pyplot as plt
28 |
29 | # load the data
30 | nyc_boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
31 | tickets = gpd.read_file(gplt.datasets.get_path('nyc_parking_tickets'))
32 |
33 | proj = gcrs.AlbersEqualArea(central_latitude=40.7128, central_longitude=-74.0059)
34 |
35 |
36 | def plot_state_to_ax(state, ax):
37 | gplt.choropleth(
38 | tickets.set_index('id').loc[:, [state, 'geometry']],
39 | hue=state, cmap='Blues',
40 | linewidth=0.0, ax=ax
41 | )
42 | gplt.polyplot(
43 | nyc_boroughs, edgecolor='black', linewidth=0.5, ax=ax
44 | )
45 |
46 |
47 | f, axarr = plt.subplots(2, 2, figsize=(12, 13), subplot_kw={'projection': proj})
48 |
49 | plt.suptitle('Parking Tickets Issued to State by Precinct, 2016', fontsize=16)
50 | plt.subplots_adjust(top=0.95)
51 |
52 | plot_state_to_ax('ny', axarr[0][0])
53 | axarr[0][0].set_title('New York (n=6,679,268)')
54 |
55 | plot_state_to_ax('nj', axarr[0][1])
56 | axarr[0][1].set_title('New Jersey (n=854,647)')
57 |
58 | plot_state_to_ax('pa', axarr[1][0])
59 | axarr[1][0].set_title('Pennsylvania (n=215,065)')
60 |
61 | plot_state_to_ax('ct', axarr[1][1])
62 | axarr[1][1].set_title('Connecticut (n=126,661)')
63 |
--------------------------------------------------------------------------------
/examples/plot_obesity.py:
--------------------------------------------------------------------------------
1 | """
2 | Cartogram of US states by obesity rate
3 | ======================================
4 |
5 | This example ``cartogram`` showcases regional trends for obesity in the United States. Rugged
6 | mountain states are the healthiest; the deep South, the unhealthiest.
7 |
8 | This example was inspired by the `"Non-Contiguous Cartogram" `_
9 | example in the D3.JS example gallery.
10 | """
11 |
12 |
13 | import pandas as pd
14 | import geopandas as gpd
15 | import geoplot as gplt
16 | import geoplot.crs as gcrs
17 | import matplotlib.pyplot as plt
18 | import mapclassify as mc
19 |
20 | # load the data
21 | obesity_by_state = pd.read_csv(gplt.datasets.get_path('obesity_by_state'), sep='\t')
22 | contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
23 | contiguous_usa['Obesity Rate'] = contiguous_usa['state'].map(
24 | lambda state: obesity_by_state.query("State == @state").iloc[0]['Percent']
25 | )
26 | scheme = mc.Quantiles(contiguous_usa['Obesity Rate'], k=5)
27 |
28 |
29 | ax = gplt.cartogram(
30 | contiguous_usa,
31 | scale='Obesity Rate', limits=(0.75, 1),
32 | projection=gcrs.AlbersEqualArea(central_longitude=-98, central_latitude=39.5),
33 | hue='Obesity Rate', cmap='Reds', scheme=scheme,
34 | linewidth=0.5,
35 | legend=True, legend_kwargs={'loc': 'lower right'}, legend_var='hue',
36 | figsize=(12, 7)
37 | )
38 | gplt.polyplot(contiguous_usa, facecolor='lightgray', edgecolor='None', ax=ax)
39 |
40 | plt.title("Adult Obesity Rate by State, 2013")
41 |
--------------------------------------------------------------------------------
/examples/plot_san_francisco_trees.py:
--------------------------------------------------------------------------------
1 | """
2 | Quadtree of San Francisco street trees
3 | ======================================
4 |
5 | This example shows the geospatial nullity pattern (whether records are more or less likely to be
6 | null in one region versus another) of a dataset on city-maintained street trees by species in San
7 | Francisco.
8 |
9 | In this case we see that there is a small but significant amount of variation in the percentage
10 | of trees classified per area, which ranges from 88% to 98%.
11 |
12 | For more tools for visualizing data nullity, `check out the ``missingno`` library
13 | `_.
14 | """
15 |
16 | import geopandas as gpd
17 | import geoplot as gplt
18 | import geoplot.crs as gcrs
19 |
20 |
21 | trees = gpd.read_file(gplt.datasets.get_path('san_francisco_street_trees_sample'))
22 | sf = gpd.read_file(gplt.datasets.get_path('san_francisco'))
23 |
24 |
25 | ax = gplt.quadtree(
26 | trees.assign(nullity=trees['Species'].notnull().astype(int)),
27 | projection=gcrs.AlbersEqualArea(),
28 | hue='nullity', nmax=1, cmap='Greens', scheme='Quantiles', legend=True,
29 | clip=sf, edgecolor='white', linewidth=1
30 | )
31 | gplt.polyplot(sf, facecolor='None', edgecolor='gray', linewidth=1, zorder=2, ax=ax)
32 |
--------------------------------------------------------------------------------
/examples/plot_usa_city_elevations.py:
--------------------------------------------------------------------------------
1 | """
2 | Pointplot of US city elevations with custom scale functions
3 | ===========================================================
4 |
5 | This example plots United States cities by their elevation. Several different possible scaling
6 | functions for determining point size are demonstrated.
7 |
8 | The first plot is a default linear-scale one. We can see from the results that this is clearly
9 | the most appropriate one for this specific data.
10 |
11 | The second plot shows a trivial identity scale. This results in a plot where every city has the
12 | same size.
13 |
14 | A more interesting scale is the logarithmic scale. This scale works very well when the data in
15 | question is log-linear, that is, it is distributed linearly with respect to its own logarithm.
16 | In our demonstration case the data is linear and not logarithmic in shape, so this doesn't come
17 | out too well, but in other cases using the logarithm is the way to go.
18 |
19 | The last demo shows a power scale. This is useful for data that follows a power law distribution
20 | of some kind. Again, this doesn't work too well in our case.
21 | """
22 |
23 | import geopandas as gpd
24 | import geoplot as gplt
25 | import geoplot.crs as gcrs
26 | import numpy as np
27 | import matplotlib.pyplot as plt
28 |
29 | continental_usa_cities = gpd.read_file(gplt.datasets.get_path('usa_cities'))
30 | continental_usa_cities = continental_usa_cities.query('STATE not in ["AK", "HI", "PR"]')
31 | contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
32 |
33 |
34 | proj = gcrs.AlbersEqualArea(central_longitude=-98, central_latitude=39.5)
35 | f, axarr = plt.subplots(2, 2, figsize=(12, 9), subplot_kw={'projection': proj})
36 |
37 | polyplot_kwargs = {'facecolor': (0.9, 0.9, 0.9), 'linewidth': 0}
38 | pointplot_kwargs = {
39 | 'scale': 'ELEV_IN_FT', 'edgecolor': 'white', 'linewidth': 0.5, 'color': 'black'
40 | }
41 |
42 |
43 | gplt.polyplot(contiguous_usa.geometry, ax=axarr[0][0], **polyplot_kwargs)
44 | gplt.pointplot(
45 | continental_usa_cities.query("POP_2010 > 10000"),
46 | ax=axarr[0][0], limits=(0.1, 10), **pointplot_kwargs
47 | )
48 | axarr[0][0].set_title("Linear Scale")
49 |
50 |
51 | def identity_scale(minval, maxval):
52 | def scalar(val):
53 | return 2
54 | return scalar
55 |
56 |
57 | gplt.polyplot(contiguous_usa.geometry, ax=axarr[0][1], **polyplot_kwargs)
58 | gplt.pointplot(
59 | continental_usa_cities.query("POP_2010 > 10000"),
60 | ax=axarr[0][1], scale_func=identity_scale, **pointplot_kwargs
61 | )
62 | axarr[0][1].set_title("Identity Scale")
63 |
64 |
65 | def log_scale(minval, maxval):
66 | def scalar(val):
67 | val = val + abs(minval) + 1
68 | return np.log10(val)
69 | return scalar
70 |
71 |
72 | gplt.polyplot(
73 | contiguous_usa.geometry,
74 | ax=axarr[1][0], **polyplot_kwargs
75 | )
76 | gplt.pointplot(
77 | continental_usa_cities.query("POP_2010 > 10000"),
78 | ax=axarr[1][0], scale_func=log_scale, **pointplot_kwargs
79 | )
80 | axarr[1][0].set_title("Log Scale")
81 |
82 |
83 | def power_scale(minval, maxval):
84 | def scalar(val):
85 | val = val + abs(minval) + 1
86 | return (val / 1000)**2
87 | return scalar
88 |
89 |
90 | gplt.polyplot(
91 | contiguous_usa.geometry,
92 | ax=axarr[1][1], **polyplot_kwargs
93 | )
94 | gplt.pointplot(
95 | continental_usa_cities.query("POP_2010 > 10000"),
96 | ax=axarr[1][1], scale_func=power_scale, **pointplot_kwargs
97 | )
98 | axarr[1][1].set_title("Power Scale")
99 |
100 | plt.suptitle('Continental US Cities by Elevation, 2016', fontsize=16)
101 | plt.subplots_adjust(top=0.95)
102 |
--------------------------------------------------------------------------------
/figures/dc-street-network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ResidentMario/geoplot/5ca4fabdb91f91e3153c2b7d376b390466354b0b/figures/dc-street-network.png
--------------------------------------------------------------------------------
/figures/los-angeles-flights.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ResidentMario/geoplot/5ca4fabdb91f91e3153c2b7d376b390466354b0b/figures/los-angeles-flights.png
--------------------------------------------------------------------------------
/figures/nyc-collision-factors.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ResidentMario/geoplot/5ca4fabdb91f91e3153c2b7d376b390466354b0b/figures/nyc-collision-factors.png
--------------------------------------------------------------------------------
/figures/nyc-parking-tickets.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ResidentMario/geoplot/5ca4fabdb91f91e3153c2b7d376b390466354b0b/figures/nyc-parking-tickets.png
--------------------------------------------------------------------------------
/figures/usa-city-elevations.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ResidentMario/geoplot/5ca4fabdb91f91e3153c2b7d376b390466354b0b/figures/usa-city-elevations.png
--------------------------------------------------------------------------------
/geoplot/__init__.py:
--------------------------------------------------------------------------------
1 | from .geoplot import (
2 | pointplot, polyplot, choropleth, cartogram, kdeplot, sankey, voronoi, quadtree, webmap,
3 | __version__
4 | )
5 | from .crs import (
6 | PlateCarree, LambertCylindrical, Mercator, WebMercator, Miller, Mollweide, Robinson,
7 | Sinusoidal, InterruptedGoodeHomolosine, Geostationary, NorthPolarStereo, SouthPolarStereo,
8 | Gnomonic, AlbersEqualArea, AzimuthalEquidistant, LambertConformal, Orthographic,
9 | Stereographic, TransverseMercator, LambertAzimuthalEqualArea, OSGB, EuroPP, OSNI,
10 | EckertI, EckertII, EckertIII, EckertIV, EckertV, EckertVI, NearsidePerspective
11 | )
12 | from .datasets import get_path
13 |
--------------------------------------------------------------------------------
/geoplot/crs.py:
--------------------------------------------------------------------------------
1 | """
2 | This module defines the ``geoplot`` coordinate reference system classes, wrappers on
3 | ``cartopy.crs`` objects meant to be used as parameters to the ``projection`` parameter of all
4 | front-end ``geoplot`` outputs. For the list of Cartopy CRS objects this module derives from,
5 | refer to https://scitools.org.uk/cartopy/docs/latest/crs/projections.html.
6 | """
7 |
8 | import cartopy.crs as ccrs
9 | import geopandas as gpd
10 |
11 |
12 | class Base:
13 | # TODO: RotatedPole
14 | """
15 | Generate instances of ``cartopy.crs``.*name* where *name* matches the instance's class name.
16 |
17 | Methods
18 | -------
19 | `load` : Return a Cartopy CRS initialized with defaults from the `centerings` dictionary,
20 | overridden by initialization parameters.
21 |
22 | `_as_mpl_axes` : Return the result of calling cartopy's ``_as_mpl_axes`` for `self.load`
23 | called with empty `df` and `centerings`.
24 | """
25 | def __init__(self, **kwargs):
26 | """Save parameters that initialize Cartopy CRSs."""
27 | self.args = kwargs
28 |
29 | def load(self, df, centerings):
30 | """
31 | A meta-method which abstracts the internals of individual projections' load procedures.
32 |
33 | Parameters
34 | ----------
35 | df : GeoDataFrame
36 | The GeoDataFrame which has been passed as input to the plotter at the top level.
37 | This data is needed to calculate reasonable centering variables in cases in which the
38 | user does not already provide them; which is, incidentally, the reason behind all of
39 | this funny twice-instantiation loading in the first place.
40 | centerings: dict
41 | A dictionary containing names and centering methods. Certain projections have certain
42 | centering parameters whilst others lack them. For example, the geospatial projection
43 | contains both ``central_longitude`` and ``central_latitude`` instance parameters, which
44 | together control the center of the plot, while the North Pole Stereo projection has
45 | only a ``central_longitude`` instance parameter, implying that latitude is fixed (as
46 | indeed it is, as this projection is centered on the North Pole!).
47 |
48 | A top-level centerings method is provided in each of the ``geoplot`` top-level plot
49 | functions; each of the projection wrapper classes defined here in turn selects the
50 | functions from this list relevant to this particular instance and passes them to
51 | the ``_generic_load`` method here.
52 |
53 | We then in turn execute these functions to get defaults for our ``df`` and pass them
54 | off to our output ``cartopy.crs`` instance.
55 |
56 | Returns
57 | -------
58 | crs : ``cartopy.crs`` object instance
59 | Returns a ``cartopy.crs`` object instance whose appropriate instance variables have
60 | been set to reasonable defaults wherever not already provided by the user.
61 | """
62 | # WebMercator is a special case.
63 | if self.__class__.__name__ == 'WebMercator':
64 | class WebMercator(ccrs.Mercator):
65 | def __init__(self, args):
66 | super().__init__(
67 | central_longitude=0,
68 | min_latitude=-85.0511287798066,
69 | max_latitude=85.0511287798066,
70 | globe=ccrs.Globe(
71 | ellipse=None,
72 | semimajor_axis=ccrs.WGS84_SEMIMAJOR_AXIS,
73 | semiminor_axis=ccrs.WGS84_SEMIMAJOR_AXIS,
74 | nadgrids='@null'
75 | ),
76 | **args
77 | )
78 | return WebMercator(self.args)
79 | else:
80 | return getattr(ccrs, self.__class__.__name__)(**{**centerings, **self.args})
81 |
82 | def _as_mpl_axes(self):
83 | """
84 | When Matplotlib is provided a projection via a ``projection`` keyword argument, it
85 | expects to get something with a callable ``_as_mpl_axes`` method. The precise details of
86 | what this method does, exactly, are not important: it suffices to know that every
87 | ``cartopy`` coordinate reference system object has one.
88 |
89 | When we pass a ``geoplot.crs`` crs object to a ``geoplot`` function, the loading and
90 | centering of the data occurs automatically (using the function defined immediately above).
91 | Since we control what ``geoplot`` does at execution, we gracefully integrate this two-step
92 | procedure into the function body.
93 |
94 | But there are also use cases outside of our control in which we are forced to pass a
95 | ``geoplot.crs`` object without having first called ``load``: most prominently, when
96 | creating a plot containing subplots, the "overall" projection must be pre-loaded. It's
97 | possible to get around this by using ``cartopy.crs`` objects instead, but this is
98 | inelegant. This method is a better way: when a ``geoplot.crs`` object is called by
99 | Matplotlib, it silently swaps itself out for a vanilla version of its ``cartopy.crs``
100 | mirror, and calls that object's ``_as_mpl_axes`` instead.
101 |
102 | Parameters
103 | ----------
104 | proj : geoplot.crs projection instance
105 | The instance in question (self, in the method body).
106 |
107 | Returns
108 | -------
109 | Mutates into a ``cartopy.crs`` object and returns the result of executing ``_as_mpl_axes``
110 | on that object instead.
111 | """
112 | proj = self.load(gpd.GeoDataFrame(), dict())
113 | return proj._as_mpl_axes()
114 |
115 |
116 | class Filtering(Base):
117 | """CRS that `load`s with `centering` restricted to keys in `self.filter_`."""
118 |
119 | def load(self, df, centerings):
120 | """Call `load` method with `centerings` filtered to keys in `self.filter_`."""
121 | return super().load(
122 | df,
123 | {key: value
124 | for key, value in centerings.items()
125 | if key in self.filter_}
126 | )
127 |
128 |
129 | class LongitudeCentering(Filtering):
130 | """Form a CRS that centers by longitude."""
131 | filter_ = {'central_longitude'}
132 |
133 |
134 | class LatitudeCentering(Filtering):
135 | """For a CRS that centers by latitude."""
136 | filter_ = {'central_latitude'}
137 |
138 |
139 | class LongitudeLatitudeCentering(Filtering):
140 | """Form a CRS that accepts neither longitude or latitude centering."""
141 | filter_ = {}
142 |
143 |
144 | PlateCarree,\
145 | LambertCylindrical,\
146 | Mercator,\
147 | Miller,\
148 | Mollweide,\
149 | Robinson,\
150 | Sinusoidal,\
151 | InterruptedGoodeHomolosine,\
152 | Geostationary,\
153 | NorthPolarStereo,\
154 | SouthPolarStereo,\
155 | EckertI,\
156 | EckertII,\
157 | EckertIII,\
158 | EckertIV,\
159 | EckertV,\
160 | EckertVI,\
161 | EqualEarth = tuple(
162 | type(name, (LongitudeCentering,), {})
163 | for name in ('PlateCarree',
164 | 'LambertCylindrical',
165 | 'Mercator',
166 | 'Miller',
167 | 'Mollweide',
168 | 'Robinson',
169 | 'Sinusoidal',
170 | 'InterruptedGoodeHomolosine',
171 | 'Geostationary',
172 | 'NorthPolarStereo',
173 | 'SouthPolarStereo',
174 | 'EckertI',
175 | 'EckertII',
176 | 'EckertIII',
177 | 'EckertIV',
178 | 'EckertV',
179 | 'EckertVI',
180 | 'EqualEarth')
181 | )
182 |
183 | Gnomonic = type('Gnomonic', (LatitudeCentering,), {})
184 |
185 | EuroPP,\
186 | OSGB,\
187 | NearsidePerspective,\
188 | WebMercator = tuple(
189 | type(name, (LongitudeLatitudeCentering,), {})
190 | for name in ('EuroPP', 'OSGB', 'NearsidePerspective', 'WebMercator')
191 | )
192 |
193 | AlbersEqualArea,\
194 | AzimuthalEquidistant,\
195 | LambertConformal,\
196 | Orthographic,\
197 | Stereographic,\
198 | TransverseMercator,\
199 | LambertAzimuthalEqualArea,\
200 | OSNI,\
201 | WebMercator = tuple(
202 | type(name, (Base,), {})
203 | for name in ('AlbersEqualArea',
204 | 'AzimuthalEquidistant',
205 | 'LambertConformal',
206 | 'Orthographic',
207 | 'Stereographic',
208 | 'TransverseMercator',
209 | 'LambertAzimuthalEqualArea',
210 | 'OSNI',
211 | 'WebMercator')
212 | )
213 |
--------------------------------------------------------------------------------
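The subplot workflow that the ``_as_mpl_axes`` docstring above describes looks roughly like the
following sketch. It simply mirrors the NYC examples earlier in this listing; the dataset name and
centering values are illustrative:

import matplotlib.pyplot as plt
import geopandas as gpd
import geoplot as gplt
import geoplot.crs as gcrs

nyc_boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
proj = gcrs.AlbersEqualArea(central_latitude=40.7128, central_longitude=-74.0059)

# Passing a geoplot.crs instance as `projection` invokes _as_mpl_axes, which quietly
# substitutes the equivalent cartopy CRS when matplotlib constructs the GeoAxes.
fig = plt.figure(figsize=(10, 5))
ax1 = plt.subplot(121, projection=proj)
ax2 = plt.subplot(122, projection=proj)
gplt.polyplot(nyc_boroughs, projection=proj, ax=ax1)
gplt.polyplot(nyc_boroughs, projection=proj, ax=ax2)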
/geoplot/datasets.py:
--------------------------------------------------------------------------------
1 | """
2 | Example dataset fetching utility. Used in docs.
3 | """
4 |
5 | src = 'https://raw.githubusercontent.com/ResidentMario/geoplot-data/master'
6 |
7 |
8 | def get_path(dataset_name):
9 | """
10 | Returns the URL path to an example dataset suitable for reading into ``geopandas``.
11 | """
12 | if dataset_name == 'usa_cities':
13 | return f'{src}/usa-cities.geojson'
14 | elif dataset_name == 'contiguous_usa':
15 | return f'{src}/contiguous-usa.geojson'
16 | elif dataset_name == 'nyc_collision_factors':
17 | return f'{src}/nyc-collision-factors.geojson'
18 | elif dataset_name == 'nyc_boroughs':
19 | return f'{src}/nyc-boroughs.geojson'
20 | elif dataset_name == 'ny_census':
21 | return f'{src}/ny-census-partial.geojson'
22 | elif dataset_name == 'obesity_by_state':
23 | return f'{src}/obesity-by-state.tsv'
24 | elif dataset_name == 'la_flights':
25 | return f'{src}/la-flights.geojson'
26 | elif dataset_name == 'dc_roads':
27 | return f'{src}/dc-roads.geojson'
28 | elif dataset_name == 'nyc_map_pluto_sample':
29 | return f'{src}/nyc-map-pluto-sample.geojson'
30 | elif dataset_name == 'nyc_collisions_sample':
31 | return f'{src}/nyc-collisions-sample.csv'
32 | elif dataset_name == 'boston_zip_codes':
33 | return f'{src}/boston-zip-codes.geojson'
34 | elif dataset_name == 'boston_airbnb_listings':
35 | return f'{src}/boston-airbnb-listings.geojson'
36 | elif dataset_name == 'napoleon_troop_movements':
37 | return f'{src}/napoleon-troop-movements.geojson'
38 | elif dataset_name == 'nyc_fatal_collisions':
39 | return f'{src}/nyc-fatal-collisions.geojson'
40 | elif dataset_name == 'nyc_injurious_collisions':
41 | return f'{src}/nyc-injurious-collisions.geojson'
42 | elif dataset_name == 'nyc_police_precincts':
43 | return f'{src}/nyc-police-precincts.geojson'
44 | elif dataset_name == 'nyc_parking_tickets':
45 | return f'{src}/nyc-parking-tickets-sample.geojson'
46 | elif dataset_name == 'world':
47 | return f'{src}/world.geojson'
48 | elif dataset_name == 'melbourne':
49 | return f'{src}/melbourne.geojson'
50 | elif dataset_name == 'melbourne_schools':
51 | return f'{src}/melbourne-schools.geojson'
52 | elif dataset_name == 'san_francisco':
53 | return f'{src}/san-francisco.geojson'
54 | elif dataset_name == 'san_francisco_street_trees_sample':
55 | return f'{src}/san-francisco-street-trees-sample.geojson'
56 | elif dataset_name == 'california_congressional_districts':
57 | return f'{src}/california-congressional-districts.geojson'
58 | else:
59 | raise ValueError(
60 | f'The dataset_name value {dataset_name!r} is not in the list of valid names.'
61 | )
62 |
--------------------------------------------------------------------------------
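A minimal usage sketch for ``get_path``: the return value is a remote URL that ``geopandas`` can
read directly (this assumes network access to the ``geoplot-data`` repository):

import geopandas as gpd
import geoplot as gplt

# get_path returns a URL string; read_file downloads and parses the GeoJSON
path = gplt.datasets.get_path('nyc_boroughs')
nyc_boroughs = gpd.read_file(path)
print(len(nyc_boroughs), nyc_boroughs.geometry.geom_type.unique())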
/geoplot/ops.py:
--------------------------------------------------------------------------------
1 | """
2 | This module implements a naive equal-split four-way quadtree algorithm
3 | (https://en.wikipedia.org/wiki/Quadtree). It has been written in a way meant to make it convenient
4 | to use for splitting and aggregating rectangular geometries up to a certain guaranteed minimum
5 | instance threshold.
6 |
7 | The routines here are used by the ``geoplot.quadtree`` plot type.
8 | """
9 |
10 | from collections.abc import Iterable
11 | import geopandas as gpd
12 | import numpy as np
13 | import pandas as pd
14 | import shapely
15 |
16 |
17 | class QuadTree:
18 | """
19 | This class implements a naive equal-split four-way quadtree algorithm
20 | (https://en.wikipedia.org/wiki/Quadtree). It has been written in a way meant to make it
21 | convenient to use for splitting and aggregating rectangular geometries up to a certain
22 | guaranteed minimum instance threshold.
23 |
24 | Properties
25 | ----------
26 | data : GeoDataFrame
27 | An efficient shallow copy reference to the class's ``gdf`` data initialization input.
28 | This is retained for downstream aggregation purposes.
29 | bounds : (minx, maxx, miny, maxy)
30 | A tuple of boundaries for data contained in the quadtree. May be passed as an
31 | initialization input via ``bounds`` or left to the ``QuadTree`` instance to compute for
32 | itself.
33 | agg : dict
34 | An aggregated dictionary whose keys consist of coordinates within the instance's
35 | ``bounds`` and whose values consist of the indices of rows in the ``data`` property
36 | corresponding with those points. This additional bookkeeping is necessary because a
37 | single coordinate may contain many individual data points.
38 | n : int
39 | The number of points contained in the current QuadTree instance.
40 | """
41 | def __init__(self, gdf, bounds=None):
42 | """
43 | Instantiation method.
44 |
45 | Parameters
46 | ----------
47 | gdf : GeoDataFrame
48 | The data being geospatially aggregated.
49 | bounds : None or (minx, maxx, miny, maxy), optional
50 | Precomputed extrema of the ``gdf`` input. If not provided, they are computed from the data.
51 |
52 | Returns
53 | -------
54 | A baked `QuadTree` class instance.
55 | """
56 | if bounds:
57 | self.bounds = minx, maxx, miny, maxy = bounds
58 | gdf = gdf[
59 | gdf.geometry.map(lambda b: (minx < b.x < maxx) & (miny < b.y < maxy))
60 | ]
61 | else:
62 | b = gdf.geometry.bounds
63 | minx, miny = b[['minx', 'miny']].min().values
64 | maxx, maxy = b[['maxx', 'maxy']].max().values
65 | self.bounds = (minx, maxx, miny, maxy)
66 | gdf = gdf[[not p.is_empty for p in gdf.geometry]]
67 | if len(gdf) > 0:
68 | points = gdf.geometry
69 | xs, ys = [p.x for p in points], [p.y for p in points]
70 | geo = gpd.GeoDataFrame(index=gdf.index).assign(x=xs, y=ys).reset_index()
71 | groups = geo.groupby(['x', 'y'])
72 | self.agg = dict()
73 | self.n = 0
74 | for ind, subgroup in groups:
75 | self.n += len(subgroup.index.values)
76 | self.agg[ind] = subgroup.index.values
77 | self.data = gdf
78 | else:
79 | self.agg = dict()
80 | self.n = 0
81 | self.data = gdf
82 |
83 | def split(self):
84 | """
85 | Splits the current QuadTree instance four ways through the midpoint.
86 |
87 | Returns
88 | -------
89 | A list of four "sub" QuadTree instances, corresponding with the first, second, third, and
90 | fourth quadrants, respectively.
91 | """
92 | # TODO: Investigate why a small number of entries are lost every time this method is run.
93 | min_x, max_x, min_y, max_y = self.bounds
94 | mid_x, mid_y = (min_x + max_x) / 2, (min_y + max_y) / 2
95 | q1 = (min_x, mid_x, mid_y, max_y)
96 | q2 = (min_x, mid_x, min_y, mid_y)
97 | q3 = (mid_x, max_x, mid_y, max_y)
98 | q4 = (mid_x, max_x, min_y, mid_y)
99 | return [
100 | QuadTree(self.data, bounds=q1),
101 | QuadTree(self.data, bounds=q2),
102 | QuadTree(self.data, bounds=q3),
103 | QuadTree(self.data, bounds=q4)
104 | ]
105 |
106 | def partition(self, nmin, nmax):
107 | """
108 | This method call decomposes a QuadTree instance into a list of sub- QuadTree instances
109 | which are the smallest possible geospatial "buckets", given the current splitting rules,
110 | containing at least ``nmin`` points.
111 |
112 | Parameters
113 | ----------
114 | nmin : int
115 | The minimum number of points per partition. Care should be taken not to set this
116 | parameter to be too low, as in large datasets a small cluster of highly adjacent
117 | points may result in a number of sub-recursive splits possibly in excess of Python's
118 | global recursion limit.
119 |
120 | Returns
121 | -------
122 | partitions : list of QuadTree object instances
123 | A list of sub- QuadTree instances which are the smallest possible geospatial
124 | "buckets", given the current splitting rules, containing at least ``thresh`` points.
125 | """
126 | if self.n < nmin:
127 | return [self]
128 | else:
129 | ret = self.subpartition(nmin, nmax)
130 | return self.flatten(ret)
131 |
132 | def subpartition(self, nmin, nmax):
133 | """
134 | Recursive core of the ``QuadTree.partition`` method. Just five lines of code, amazingly.
135 |
136 | Parameters
137 | ----------
138 | nmin : int
139 | The minimum splitting threshold. If a split would produce a subtree with fewer than this
140 | many points, this method returns a list containing the current tree alone.
141 | nmax : int
142 | The maximum number of points per subtree; subtrees larger than this are always split.
143 |
144 | Returns
145 | -------
146 | A (probably nested) list of QuadTree object instances containing a number of points
147 | respecting the threshold parameter.
148 | """
149 | subtrees = self.split()
150 | if self.n > nmax:
151 | return [q.partition(nmin, nmax) for q in subtrees]
152 | elif any([t.n < nmin for t in subtrees]):
153 | return [self]
154 | else:
155 | return [q.partition(nmin, nmax) for q in subtrees]
156 |
157 | @staticmethod
158 | def flatten(items):
159 | """
160 | Yield items from any nested iterable. Used by ``QuadTree.partition`` to one-dimensionalize a
161 | list of sublists. cf.
162 | https://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python
163 | """
164 | for x in items:
165 | if isinstance(x, Iterable):
166 | yield from QuadTree.flatten(x)
167 | else:
168 | yield x
169 |
170 |
171 | def build_voronoi_polygons(df):
172 | """
173 | Given a GeoDataFrame of point geometries and pre-computed plot extrema, build Voronoi
174 | simplexes for the given points in the given space and returns them.
175 |
176 | Voronoi simplexes which are located on the edges of the graph may extend into infinity in some
177 | direction. In other words, the set of points nearest the given point does not necessarily have
178 | to be a closed polygon. We force these non-hermetic spaces into polygons using a subroutine.
179 |
180 | Returns a list of shapely.geometry.Polygon objects, each one a Voronoi polygon.
181 | """
182 | try:
183 | from scipy.spatial import Voronoi
184 | except ImportError:
185 | raise ImportError("Install scipy >= 0.12.0 for Voronoi support")
186 | geom = np.array(df.geometry.map(lambda p: [p.x, p.y]).tolist())
187 |
188 | # Jitter the points. Otherwise points sharing the same coordinate value will cause
189 | # undefined behavior from the Voronoi algorithm (see GH#192). Jitter is applied randomly
190 | # on 10**-5 scale, inducing maximum additive inaccuracy of ~1cm - good enough for the
191 | # vast majority of geospatial applications. If the meaningful precision of your dataset
192 | # exceeds 1cm, jitter the points yourself. cf. https://xkcd.com/2170/
193 | df = df.assign(geometry=jitter_points(df.geometry))
194 |
195 | vor = Voronoi(geom)
196 |
197 | polygons = []
198 |
199 | for idx_point, _ in enumerate(vor.points):
200 | idx_point_region = vor.point_region[idx_point]
201 | idxs_vertices = np.array(vor.regions[idx_point_region])
202 |
203 | is_finite = not np.any(idxs_vertices == -1)
204 |
205 | if is_finite:
206 | # Easy case, the region is closed. Make a polygon out of the Voronoi ridge points.
207 | idx_point_region = vor.point_region[idx_point]
208 | idxs_vertices = np.array(vor.regions[idx_point_region])
209 | region_vertices = vor.vertices[idxs_vertices]
210 | region_poly = shapely.geometry.Polygon(region_vertices)
211 | polygons.append(region_poly)
212 |
213 | else:
214 | # Hard case, the region is open. Project new edges out to the margins of the plot.
215 | # See `scipy.spatial.voronoi_plot_2d` for the source of this calculation.
216 | point_idx_ridges_idx = np.where((vor.ridge_points == idx_point).any(axis=1))[0]
217 |
218 | # TODO: why does this happen?
219 | if len(point_idx_ridges_idx) == 0:
220 | continue
221 |
222 | ptp_bound = vor.points.ptp(axis=0)
223 | center = vor.points.mean(axis=0)
224 |
225 | finite_segments = []
226 | infinite_segments = []
227 |
228 | pointwise_ridge_points = vor.ridge_points[point_idx_ridges_idx]
229 | pointwise_ridge_vertices = np.asarray(vor.ridge_vertices)[point_idx_ridges_idx]
230 |
231 | for pointidx, simplex in zip(pointwise_ridge_points, pointwise_ridge_vertices):
232 | simplex = np.asarray(simplex)
233 |
234 | if np.all(simplex >= 0):
235 | finite_segments.append(vor.vertices[simplex])
236 |
237 | else:
238 | i = simplex[simplex >= 0][0] # finite end Voronoi vertex
239 |
240 | t = vor.points[pointidx[1]] - vor.points[pointidx[0]] # tangent
241 | t /= np.linalg.norm(t)
242 | n = np.array([-t[1], t[0]]) # normal
243 |
244 | midpoint = vor.points[pointidx].mean(axis=0)
245 | direction = np.sign(np.dot(midpoint - center, n)) * n
246 | far_point = vor.vertices[i] + direction * ptp_bound.max()
247 |
248 | infinite_segments.append(np.asarray([vor.vertices[i], far_point]))
249 |
250 | finite_segments = finite_segments if finite_segments else np.zeros(shape=(0, 2, 2))
251 | ls = np.vstack([np.asarray(infinite_segments), np.asarray(finite_segments)])
252 |
253 | # We have to trivially sort the line segments into polygonal order. The algorithm that
254 | # follows is inefficient, being O(n^2), but "good enough" for this use-case.
255 | ls_sorted = []
256 |
257 | while len(ls_sorted) < len(ls):
258 | l1 = ls[0] if len(ls_sorted) == 0 else ls_sorted[-1]
259 | matches = []
260 |
261 | for l2 in [ln for ln in ls if not (ln == l1).all()]:
262 | if np.any(l1 == l2):
263 | matches.append(l2)
264 | elif np.any(l1 == l2[::-1]):
265 | l2 = l2[::-1]
266 | matches.append(l2)
267 |
268 | if len(ls_sorted) == 0:
269 | ls_sorted.append(l1)
270 |
271 | for match in matches:
272 | # in list syntax this would be "if match not in ls_sorted"
273 | # in NumPy things are more complicated...
274 | if not any((match == ls_sort).all() for ls_sort in ls_sorted):
275 | ls_sorted.append(match)
276 | break
277 |
278 | # Build and return the final polygon.
279 | polyline = np.vstack(ls_sorted)
280 | geom = shapely.geometry.Polygon(polyline).convex_hull
281 | polygons.append(geom)
282 |
283 | return polygons
284 |
285 |
286 | def jitter_points(geoms):
287 | working_df = gpd.GeoDataFrame().assign(
288 | _x=geoms.x,
289 | _y=geoms.y,
290 | geometry=geoms
291 | )
292 | group = working_df.groupby(['_x', '_y'])
293 | group_sizes = group.size()
294 |
295 | if not (group_sizes > 1).any():
296 | return geoms
297 |
298 | else:
299 | jitter_indices = []
300 |
301 | group_indices = group.indices
302 | group_keys_of_interest = group_sizes[group_sizes > 1].index
303 | for group_key_of_interest in group_keys_of_interest:
304 | jitter_indices += group_indices[group_key_of_interest].tolist()
305 |
306 | _x_jitter = (
307 | pd.Series([0] * len(working_df))
308 | + pd.Series(
309 | ((np.random.random(len(jitter_indices)) - 0.5) * 10**(-5)),
310 | index=jitter_indices
311 | )
312 | )
313 | _x_jitter = _x_jitter.fillna(0)
314 |
315 | _y_jitter = (
316 | pd.Series([0] * len(working_df))
317 | + pd.Series(
318 | ((np.random.random(len(jitter_indices)) - 0.5) * 10**(-5)),
319 | index=jitter_indices
320 | )
321 | )
322 | _y_jitter = _y_jitter.fillna(0)
323 |
324 | out = gpd.GeoSeries([
325 | shapely.geometry.Point(x, y) for x, y in
326 | zip(working_df._x + _x_jitter, working_df._y + _y_jitter)
327 | ])
328 |
329 | # guarantee that no two points have the exact same coordinates
330 | regroup_sizes = (
331 | gpd.GeoDataFrame()
332 | .assign(_x=out.x, _y=out.y)
333 | .groupby(['_x', '_y'])
334 | .size()
335 | )
336 | assert not (regroup_sizes > 1).any()
337 |
338 | return out
339 |
--------------------------------------------------------------------------------
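An illustrative sketch of driving the ``QuadTree`` machinery above directly; ``geoplot.quadtree``
does this internally. The synthetic points come from the test utility in ``geoplot/utils.py``
below, and the nmin/nmax values are arbitrary:

import geopandas as gpd
from geoplot.ops import QuadTree
from geoplot.utils import gaussian_points

# Build a tree over 500 normally distributed points, then partition it into
# buckets that each hold roughly between 10 and 50 points.
points = gpd.GeoDataFrame(geometry=gaussian_points(n=500))
tree = QuadTree(points)
buckets = list(tree.partition(nmin=10, nmax=50))
for bucket in buckets[:3]:
    print(bucket.n, bucket.bounds)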
/geoplot/utils.py:
--------------------------------------------------------------------------------
1 | """
2 | Utilities, principally example data generation algorithms, for use in geoplot testing and
3 | documentation.
4 | """
5 | import numpy as np
6 | import shapely
7 | import geopandas as gpd
8 | import pandas as pd
9 | try:
10 | from sklearn.cluster import KMeans
11 | except ImportError: # Optional dependency, only used for development.
12 | pass
13 |
14 |
15 | def gaussian_points(loc=(0, 0), scale=(10, 10), n=100):
16 | """
17 | Generates and returns `n` normally distributed points centered at `loc` with `scale` x and y
18 | directionality.
19 | """
20 |
21 | arr = np.random.normal(loc, scale, (n, 2))
22 | return gpd.GeoSeries([shapely.geometry.Point(x, y) for (x, y) in arr])
23 |
24 |
25 | def classify_clusters(points, n=10):
26 | """
27 | Return an array of K-Means cluster classes for an array of `shapely.geometry.Point` objects.
28 | """
29 | arr = [[p.x, p.y] for p in points.values]
30 | clf = KMeans(n_clusters=n)
31 | clf.fit(arr)
32 | classes = clf.predict(arr)
33 | return classes
34 |
35 |
36 | def gaussian_polygons(points, n=10):
37 | """
38 | Returns an array of approximately `n` `shapely.geometry.Polygon` objects for an array of
39 | `shapely.geometry.Point` objects.
40 | """
41 | gdf = gpd.GeoDataFrame(
42 | data={'cluster_number': classify_clusters(points, n=n)}, geometry=points
43 | )
44 | polygons = []
45 | for i in range(n):
46 | sel_points = gdf[gdf['cluster_number'] == i].dropna().geometry
47 | polygons.append(shapely.geometry.MultiPoint([(p.x, p.y) for p in sel_points]).convex_hull)
48 | polygons = [
49 | p for p in polygons if (
50 | (not isinstance(p, shapely.geometry.Point))
51 | and (not isinstance(p, shapely.geometry.LineString)))
52 | ]
53 | return gpd.GeoSeries(polygons)
54 |
55 |
56 | def gaussian_multi_polygons(points, n=10):
57 | """
58 | Returns an array of approximately `n` `shapely.geometry.MultiPolygon` objects for an array of
59 | `shapely.geometry.Point` objects.
60 | """
61 | polygons = gaussian_polygons(points, n * 2)
62 | # Randomly stitch them together.
63 | polygon_pairs = [
64 | shapely.geometry.MultiPolygon(list(pair)) for pair in np.array_split(polygons.values, n)
65 | ]
66 | return gpd.GeoSeries(polygon_pairs)
67 |
68 |
69 | def gaussian_linestrings(points):
70 | """
71 | Returns a GeoSeries of len(points) / 2 `shapely.geometry.LineString` objects for an array of
72 | `shapely.geometry.Point` objects.
73 | """
74 | polys = [poly for _, poly in gaussian_polygons(points).items()]
75 | linestrings = []
76 | for poly in polys:
77 | start_point = shapely.geometry.Point(poly.exterior.coords[0])
78 | end_point = shapely.geometry.Point(poly.exterior.coords[1])
79 | linestrings.append(shapely.geometry.LineString([start_point, end_point]))
80 | return linestrings
81 |
82 |
83 | def uniform_random_global_points(n=100):
84 | """
85 | Returns an array of `n` uniformly distributed `shapely.geometry.Point` objects. Points are
86 | coordinates sampled uniformly in longitude and latitude across the globe.
87 | """
88 | xs = np.random.uniform(-180, 180, n)
89 | ys = np.random.uniform(-90, 90, n)
90 | return [shapely.geometry.Point(x, y) for x, y in zip(xs, ys)]
91 |
92 |
93 | def uniform_random_global_network(loc=2000, scale=250, n=100):
94 | """
95 | Returns a DataFrame of `n` mock network edges with uniformly distributed `from` and `to` points.
96 | """
97 | arr = (np.random.normal(loc, scale, n)).astype(int)
98 | return pd.DataFrame(
99 | data={
100 | 'mock_variable': arr,
101 | 'from': uniform_random_global_points(n),
102 | 'to': uniform_random_global_points(n)
103 | }
104 | )
105 |
--------------------------------------------------------------------------------
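A quick sketch combining the generators above into a synthetic test scene. Note that
``gaussian_polygons`` relies on the optional scikit-learn dependency for its K-Means step:

import geopandas as gpd
import geoplot as gplt
from geoplot.utils import gaussian_points, gaussian_polygons

# Synthesize clustered points and convex-hull polygons around those clusters,
# then layer the two with the standard geoplot plotters.
points = gaussian_points(n=200)
polygons = gaussian_polygons(points, n=5)
ax = gplt.polyplot(gpd.GeoDataFrame(geometry=polygons))
gplt.pointplot(gpd.GeoDataFrame(geometry=points), ax=ax, s=2)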
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 |
4 | doc_requires = [
5 | 'sphinx', 'sphinx-gallery', 'sphinx_rtd_theme', 'nbsphinx', 'ipython',
6 | 'mplleaflet', 'scipy',
7 | ]
8 | test_requires = ['pytest', 'pytest-mpl', 'scipy']
9 |
10 | setup(
11 | name='geoplot',
12 | packages=['geoplot'],
13 | install_requires=[
14 | 'matplotlib>=3.1.2', # seaborn GH#1773
15 | 'seaborn', 'pandas', 'geopandas>=0.9.0', 'cartopy', 'mapclassify>=2.1',
16 | 'contextily>=1.0.0'
17 | ],
18 | extras_require={
19 | 'doc': doc_requires,
20 | 'test': test_requires,
21 | 'develop': [*doc_requires, *test_requires, 'pylint'],
22 | },
23 | py_modules=['geoplot', 'crs', 'utils', 'ops'],
24 | version='0.5.1',
25 | python_requires='>=3.7.0',
26 | description='High-level geospatial plotting for Python.',
27 | author='Aleksey Bilogur',
28 | author_email='aleksey.bilogur@gmail.com',
29 | url='https://github.com/ResidentMario/geoplot',
30 | download_url='https://github.com/ResidentMario/geoplot/tarball/0.5.1',
31 | keywords=[
32 | 'data', 'data visualization', 'data analysis', 'data science', 'pandas', 'geospatial data',
33 | 'geospatial analytics'
34 | ],
35 | classifiers=['Framework :: Matplotlib'],
36 | )
37 |
--------------------------------------------------------------------------------
/tests/environment.yml:
--------------------------------------------------------------------------------
1 | name: test-environment
2 | channels:
3 | - conda-forge
4 | dependencies:
5 | - attrs=21.2.0
6 | - boost-cpp=1.74.0
7 | - bzip2=1.0.8
8 | - c-ares=1.17.1
9 | - ca-certificates=2021.5.30
10 | - cairo=1.16.0
11 | - cartopy=0.18.0
12 | - certifi=2021.5.30
13 | - cfitsio=3.470
14 | - click=7.1.2
15 | - click-plugins=1.1.1
16 | - cligj=0.7.2
17 | - curl=7.76.1
18 | - cycler=0.10.0
19 | - decorator=5.0.9
20 | - expat=2.4.1
21 | - fiona=1.8.18
22 | - fontconfig=2.13.1
23 | - freetype=2.10.4
24 | - freexl=1.0.6
25 | - gdal=3.2.1
26 | - geopandas=0.9.0
27 | - geopandas-base=0.9.0
28 | - geos=3.8.1
29 | - geotiff=1.6.0
30 | - gettext=0.19.8.1
31 | - giflib=5.2.1
32 | - glib=2.66.3
33 | - hdf4=4.2.15
34 | - hdf5=1.10.6
35 | - icu=67.1
36 | - joblib=1.0.1
37 | - jpeg=9d
38 | - json-c=0.13.1
39 | - kealib=1.4.14
40 | - kiwisolver=1.3.1
41 | - krb5=1.17.2
42 | - lcms2=2.12
43 | - libblas=3.9.0
44 | - libcblas=3.9.0
45 | - libcurl=7.76.1
46 | - libcxx=11.1.0
47 | - libdap4=3.20.6
48 | - libedit=3.1.20191231
49 | - libev=4.33
50 | - libffi=3.2.1
51 | - libgdal=3.2.1
52 | - libgfortran=5.0.0
53 | - libgfortran5=9.3.0
54 | - libglib=2.66.3
55 | - libiconv=1.16
56 | - libkml=1.3.0
57 | - liblapack=3.9.0
58 | - libnetcdf=4.7.4
59 | - libnghttp2=1.43.0
60 | - libopenblas=0.3.15
61 | - libpng=1.6.37
62 | - libpq=12.3
63 | - librttopo=1.1.0
64 | - libspatialindex=1.9.3
65 | - libspatialite=5.0.1
66 | - libssh2=1.9.0
67 | - libtiff=4.2.0
68 | - libwebp-base=1.2.0
69 | - libxml2=2.9.10
70 | - llvm-openmp=11.1.0
71 | - lz4-c=1.9.3
72 | - mapclassify=2.4.2
73 | - matplotlib=3.4.2
74 | - matplotlib-base=3.4.2
75 | - munch=2.5.0
76 | - ncurses=6.2
77 | - networkx=2.5
78 | - numpy=1.21.0
79 | - olefile=0.46
80 | - openjpeg=2.3.1
81 | - openssl=1.1.1k
82 | - pandas=1.3.0
83 | - patsy=0.5.1
84 | - pcre=8.45
85 | - pillow=8.1.2
86 | - pip=21.1.3
87 | - pixman=0.40.0
88 | - poppler=0.89.0
89 | - poppler-data=0.4.10
90 | - postgresql=12.3
91 | - proj=7.1.1
92 | - pyparsing=2.4.7
93 | - pyproj=2.6.1.post1
94 | - pyshp=2.1.3
95 | - python=3.8.0
96 | - python-dateutil=2.8.1
97 | - python_abi=3.8
98 | - pytz=2021.1
99 | - readline=8.1
100 | - rtree=0.9.7
101 | - scikit-learn=0.24.2
102 | - scipy=1.7.0
103 | - seaborn=0.11.1
104 | - seaborn-base=0.11.1
105 | - setuptools=49.6.0
106 | - shapely=1.7.1
107 | - six=1.16.0
108 | - sqlite=3.36.0
109 | - statsmodels=0.12.2
110 | - threadpoolctl=2.1.0
111 | - tiledb=2.1.6
112 | - tk=8.6.10
113 | - tornado=6.1
114 | - tzcode=2021a
115 | - wheel=0.36.2
116 | - xerces-c=3.2.3
117 | - xz=5.2.5
118 | - zlib=1.2.11
119 | - zstd=1.4.9
120 | - pip:
121 | - affine==2.3.0
122 | - alabaster==0.7.12
123 | - appnope==0.1.2
124 | - argon2-cffi==20.1.0
125 | - astroid==2.6.2
126 | - async-generator==1.10
127 | - babel==2.9.1
128 | - backcall==0.2.0
129 | - bleach==3.3.0
130 | - cffi==1.14.5
131 | - chardet==4.0.0
132 | - contextily==1.1.0
133 | - debugpy==1.3.0
134 | - defusedxml==0.7.1
135 | - docutils==0.16
136 | - entrypoints==0.3
137 | - geographiclib==1.52
138 | - geopy==2.1.0
139 | - idna==2.10
140 | - imagesize==1.2.0
141 | - iniconfig==1.1.1
142 | - ipykernel==6.0.1
143 | - ipython==7.25.0
144 | - ipython-genutils==0.2.0
145 | - ipywidgets==7.6.3
146 | - isort==5.9.1
147 | - jedi==0.18.0
148 | - jinja2==3.0.1
149 | - jsonschema==3.2.0
150 | - jupyter==1.0.0
151 | - jupyter-client==6.1.12
152 | - jupyter-console==6.4.0
153 | - jupyter-core==4.7.1
154 | - jupyterlab-pygments==0.1.2
155 | - jupyterlab-widgets==1.0.0
156 | - lazy-object-proxy==1.6.0
157 | - markupsafe==2.0.1
158 | - matplotlib-inline==0.1.2
159 | - mccabe==0.6.1
160 | - mercantile==1.2.1
161 | - mistune==0.8.4
162 | - mplleaflet==0.0.5
163 | - nbclient==0.5.3
164 | - nbconvert==6.1.0
165 | - nbformat==5.1.3
166 | - nest-asyncio==1.5.1
167 | - notebook==6.4.0
168 | - packaging==21.0
169 | - pandocfilters==1.4.3
170 | - parso==0.8.2
171 | - pexpect==4.8.0
172 | - pickleshare==0.7.5
173 | - pluggy==0.13.1
174 | - prometheus-client==0.11.0
175 | - prompt-toolkit==3.0.19
176 | - ptyprocess==0.7.0
177 | - py==1.10.0
178 | - pycparser==2.20
179 | - pygments==2.9.0
180 | - pylint==2.9.3
181 | - pyrsistent==0.18.0
182 | - pytest==6.2.4
183 | - pytest-mpl==0.13
184 | - pyzmq==22.1.0
185 | - qtconsole==5.1.1
186 | - qtpy==1.9.0
187 | - rasterio==1.2.6
188 | - requests==2.25.1
189 | - send2trash==1.7.1
190 | - snowballstemmer==2.1.0
191 | - snuggs==1.4.7
192 | - sphinx==4.0.2
193 | - sphinx-gallery==0.9.0
194 | - sphinx-rtd-theme==0.5.2
195 | - sphinxcontrib-applehelp==1.0.2
196 | - sphinxcontrib-devhelp==1.0.2
197 | - sphinxcontrib-htmlhelp==2.0.0
198 | - sphinxcontrib-jsmath==1.0.1
199 | - sphinxcontrib-qthelp==1.0.3
200 | - sphinxcontrib-serializinghtml==1.1.5
201 | - terminado==0.10.1
202 | - testpath==0.5.0
203 | - toml==0.10.2
204 | - traitlets==5.0.5
205 | - urllib3==1.26.6
206 | - wcwidth==0.2.5
207 | - webencodings==0.5.1
208 | - widgetsnbextension==3.5.1
209 | - wrapt==1.12.1
210 |
--------------------------------------------------------------------------------
/tests/mixin_tests.py:
--------------------------------------------------------------------------------
1 | from geoplot.geoplot import (Plot, HueMixin, ScaleMixin, ClipMixin, LegendMixin, webmap)
2 | import geoplot.utils as utils
3 |
4 | import pandas as pd
5 | import geopandas as gpd
6 | import unittest
7 | import pytest
8 | import matplotlib.pyplot as plt
9 | from matplotlib.axes import SubplotBase
10 | from matplotlib.cm import ScalarMappable
11 | from matplotlib.colors import LinearSegmentedColormap, Normalize
12 | import cartopy.crs as ccrs
13 | from cartopy.mpl.geoaxes import GeoAxesSubplot
14 | import geoplot.crs as gcrs
15 | from shapely.geometry import Point, Polygon
16 | import numpy as np
17 |
18 |
19 | def figure_cleanup(f):
20 | def wrapped(_self):
21 | try:
22 | f(_self)
23 | finally:
24 | plt.close('all')
25 | return wrapped
26 |
27 |
28 | class TestPlot(unittest.TestCase):
29 | def setUp(self):
30 | self.kwargs = {'figsize': (8, 6), 'ax': None, 'extent': None, 'projection': None}
31 | self.gdf = gpd.GeoDataFrame(geometry=[])
32 | self.nonempty_gdf = gpd.GeoDataFrame(geometry=[Point(-1, -1), Point(1, 1)])
33 |
34 | @figure_cleanup
35 | def test_base_init(self):
36 | """Test the base init all plotters pass to Plot."""
37 | plot = Plot(self.gdf, **self.kwargs)
38 | assert plot.figsize == (8, 6)
39 | assert isinstance(plot.ax, SubplotBase) # SO 11690597
40 | assert plot.extent is None
41 | assert plot.projection is None
42 |
43 | plot = Plot(self.gdf, **{**self.kwargs, **{'projection': gcrs.PlateCarree()}})
44 | assert plot.figsize == (8, 6)
45 | assert isinstance(plot.ax, GeoAxesSubplot)
46 | assert plot.extent is None
47 | assert isinstance(plot.projection, ccrs.PlateCarree)
48 |
49 | @figure_cleanup
50 | def test_no_geometry_col(self):
51 | """Test the requirement that the geometry column is set."""
52 | with pytest.raises(ValueError):
53 | Plot(gpd.GeoDataFrame(), **self.kwargs)
54 |
55 | @figure_cleanup
56 | def test_init_ax(self):
57 | """Test that the passed Axes is set."""
58 | _, ax = plt.subplots(figsize=(2, 2))
59 | plot = Plot(self.gdf, **{**self.kwargs, **{'ax': ax}})
60 | assert plot.figsize == (2, 2)
61 |
62 | ax = plt.axes(projection=ccrs.PlateCarree())
63 | plot = Plot(self.gdf, **{**self.kwargs, **{'ax': ax}})
64 | assert plot.ax == ax
65 |
66 | # non-default user-set figure sizes are ignored with a warning when ax is also set
67 | with pytest.warns(UserWarning):
68 | Plot(self.gdf, **{**self.kwargs, **{'figsize': (1, 1), 'ax': ax}})
69 |
70 | @figure_cleanup
71 | def test_init_projection(self):
72 | """Test that passing a projection works as expected."""
73 | plot = Plot(self.gdf, **{**self.kwargs, 'projection': gcrs.PlateCarree()})
74 | assert isinstance(plot.projection, ccrs.PlateCarree)
75 |
76 | @figure_cleanup
77 | def test_init_extent_axes(self):
78 | """Test the extent setter code in the Axes case."""
79 | # default, empty geometry case: set extent to default value of (0, 1)
80 | plot = Plot(self.gdf, **self.kwargs)
81 | assert plot.ax.get_xlim() == plot.ax.get_ylim() == (0, 1)
82 |
83 | # default, non-empty geometry case: use a (relaxed) geometry envelope
84 | plot = Plot(gpd.GeoDataFrame(geometry=[Point(-1, -1), Point(1, 1)]), **self.kwargs)
85 | xmin, xmax = plot.ax.get_xlim()
86 | ymin, ymax = plot.ax.get_ylim()
87 | assert xmin < -1
88 | assert xmax > 1
89 | assert ymin < -1
90 | assert ymax > 1
91 |
92 | # empty geometry, valid extent case: reuse prior extent, which is (0, 1) by default
93 | plot = Plot(self.gdf, **{**self.kwargs, **{'extent': (-1, -1, 1, 1)}})
94 | assert plot.ax.get_xlim() == plot.ax.get_ylim() == (0, 1)
95 |
96 | # nonempty geometry, valid extent case: use extent
97 | plot = Plot(self.nonempty_gdf, **{**self.kwargs, **{'extent': (-1, -1, 1, 1)}})
98 | xmin, xmax = plot.ax.get_xlim()
99 | ymin, ymax = plot.ax.get_ylim()
100 | assert xmin == -1
101 | assert xmax == 1
102 | assert ymin == -1
103 | assert ymax == 1
104 |
105 | # nonempty geometry, numerically invalid extent case: raise
106 | with pytest.raises(ValueError):
107 | Plot(self.nonempty_gdf, **{**self.kwargs, **{'extent': (-181, 0, 1, 1)}})
108 | with pytest.raises(ValueError):
109 | Plot(self.nonempty_gdf, **{**self.kwargs, **{'extent': (0, -91, 1, 1)}})
110 | with pytest.raises(ValueError):
111 | Plot(self.nonempty_gdf, **{**self.kwargs, **{'extent': (0, 0, 181, 1)}})
112 | with pytest.raises(ValueError):
113 | Plot(self.nonempty_gdf, **{**self.kwargs, **{'extent': (0, 0, 1, 91)}})
114 |
115 | # nonempty geometry, zero extent case: warn and relax (cartopy behavior)
116 | with pytest.warns(UserWarning):
117 | Plot(self.nonempty_gdf, **{**self.kwargs, **{'extent': (0, 0, 0, 0)}})
118 |
119 | @figure_cleanup
120 | def test_init_extent_geoaxes(self):
121 | """Test the extent setter code in the GeoAxes case."""
122 | # default, empty geometry case: set extent to default value of (0, 1)
123 | plot = Plot(self.gdf, **{**self.kwargs, **{'projection': gcrs.PlateCarree()}})
124 | assert plot.ax.get_xlim() == plot.ax.get_ylim() == (0, 1)
125 |
126 | # default, non-empty geometry case: use a (relaxed) geometry envelope
127 | plot = Plot(
128 | gpd.GeoDataFrame(geometry=[Point(-1, -1), Point(1, 1)]),
129 | **{**self.kwargs, **{'projection': gcrs.PlateCarree()}}
130 | )
131 | xmin, xmax = plot.ax.get_xlim()
132 | ymin, ymax = plot.ax.get_ylim()
133 | assert xmin < -1
134 | assert xmax > 1
135 | assert ymin < -1
136 | assert ymax > 1
137 |
138 | # empty geometry, valid extent case: reuse prior extent, which is (0, 1) by default
139 | plot = Plot(self.gdf, **{
140 | **self.kwargs, **{'extent': (-1, -1, 1, 1), 'projection': gcrs.PlateCarree()}
141 | })
142 | assert plot.ax.get_xlim() == plot.ax.get_ylim() == (0, 1)
143 |
144 | # nonempty geometry, valid extent case: use extent
145 | plot = Plot(self.nonempty_gdf, **{
146 | **self.kwargs, **{'extent': (-1, -1, 1, 1), 'projection': gcrs.PlateCarree()}
147 | })
148 | xmin, xmax = plot.ax.get_xlim()
149 | ymin, ymax = plot.ax.get_ylim()
150 | assert xmin == -1
151 | assert xmax == 1
152 | assert ymin == -1
153 | assert ymax == 1
154 |
155 | # nonempty geometry, unsatisfiable extent case: warn and fall back to default
156 | with pytest.warns(UserWarning):
157 | # Orthographic can only show one half of the world at a time
158 | Plot(self.nonempty_gdf, **{
159 | **self.kwargs,
160 | **{'extent': (-180, -90, 180, 90), 'projection': gcrs.Orthographic()}
161 | })
162 |
163 |
164 | class TestHue(unittest.TestCase):
165 | def setUp(self):
166 | def create_huemixin():
167 | huemixin = HueMixin()
168 | # set props the mixin is responsible for
169 | huemixin.kwargs = {'hue': 'foo', 'cmap': 'viridis', 'norm': None}
170 | # set props set by the plot object initializer
171 | huemixin.ax = None
172 | huemixin.figsize = (8, 6)
173 | huemixin.extent = None
174 | huemixin.projection = None
175 |
176 | np.random.seed(42)
177 | huemixin.df = gpd.GeoDataFrame(
178 | {'foo': np.random.random(100), 'geometry': utils.gaussian_points(n=100)}
179 | )
180 | return huemixin
181 |
182 | self.create_huemixin = create_huemixin
183 |
184 | def test_hue_init_defaults(self):
185 | huemixin = self.create_huemixin()
186 | huemixin.set_hue_values(supports_categorical=False)
187 | assert len(huemixin.colors) == 100
188 | assert isinstance(huemixin.hue, pd.Series) and len(huemixin.hue) == 100
189 | assert huemixin.scheme is None
190 | assert huemixin.k is None
191 | assert isinstance(huemixin.mpl_cm_scalar_mappable, ScalarMappable)
192 | assert huemixin.k is None
193 | assert huemixin.categories is None
194 | assert huemixin.color_kwarg == 'color'
195 | assert huemixin.default_color == 'steelblue'
196 |
197 | def test_hue_init_hue(self):
198 | # hue is initialized as a string: source from the backing GeoDataFrame
199 | huemixin = self.create_huemixin()
200 | huemixin.set_hue_values(supports_categorical=False)
201 | assert (huemixin.hue == huemixin.df['foo']).all()
202 |
203 | # hue is initialized as a Series: pass that directly to the param
204 | huemixin = self.create_huemixin()
205 | hue = pd.Series(np.random.random(100))
206 | huemixin.kwargs['hue'] = hue
207 | huemixin.set_hue_values(supports_categorical=False)
208 |         assert (huemixin.hue == hue).all()
209 |
210 |         # hue is initialized as a list: transform that into a Series
211 | huemixin = self.create_huemixin()
212 | hue = list(np.random.random(100))
213 | huemixin.kwargs['hue'] = hue
214 | huemixin.set_hue_values(supports_categorical=False)
215 |         assert (huemixin.hue == hue).all()
216 |
217 |         # hue is initialized as an array: transform that into a Series
218 | huemixin = self.create_huemixin()
219 | hue = np.random.random(100)
220 | huemixin.kwargs['hue'] = hue
221 | huemixin.set_hue_values(supports_categorical=False)
222 |         assert (huemixin.hue == hue).all()
223 |
224 | def test_hue_init_cmap(self):
225 | # cmap is None: 'viridis' is used
226 | expected = self.create_huemixin()
227 | expected.kwargs['cmap'] = 'viridis'
228 | result = self.create_huemixin()
229 | result.kwargs['cmap'] = None
230 | expected.set_hue_values(supports_categorical=False)
231 | result.set_hue_values(supports_categorical=False)
232 | assert result.colors == expected.colors
233 |
234 |         # cmap is the name of a colormap: its value is propagated
235 | huemixin = self.create_huemixin()
236 | huemixin.kwargs['cmap'] = 'jet'
237 | huemixin.set_hue_values(supports_categorical=False)
238 | assert huemixin.mpl_cm_scalar_mappable.cmap.name == 'jet'
239 |
240 |         # cmap is a Colormap instance: it is propagated
241 | # Colormap is an abstract class, LinearSegmentedColormap stands in as a test object
242 | huemixin = self.create_huemixin()
243 | colors = [(215 / 255, 193 / 255, 126 / 255), (37 / 255, 37 / 255, 37 / 255)]
244 | cm = LinearSegmentedColormap.from_list('test_colormap', colors)
245 | huemixin.kwargs['cmap'] = cm
246 | huemixin.set_hue_values(supports_categorical=False)
247 | assert huemixin.mpl_cm_scalar_mappable.cmap.name == 'test_colormap'
248 |
249 | # cmap is not None but hue is None: raise
250 | huemixin = self.create_huemixin()
251 | huemixin.kwargs['cmap'] = 'viridis'
252 | huemixin.kwargs['hue'] = None
253 | with pytest.raises(ValueError):
254 | huemixin.set_hue_values(supports_categorical=False)
255 |
256 | def test_hue_init_norm(self):
257 | # norm is None: a Normalize instance is used with vmin, vmax boundaries
258 | huemixin = self.create_huemixin()
259 | huemixin.set_hue_values(supports_categorical=False)
260 | assert huemixin.mpl_cm_scalar_mappable.norm.vmin == np.min(huemixin.hue)
261 | assert huemixin.mpl_cm_scalar_mappable.norm.vmax == np.max(huemixin.hue)
262 |
263 |         # norm is not None: it is propagated
264 | huemixin = self.create_huemixin()
265 | norm = Normalize(vmin=-0.1, vmax=0.1)
266 | huemixin.kwargs['norm'] = norm
267 | huemixin.set_hue_values(supports_categorical=False)
268 | assert huemixin.mpl_cm_scalar_mappable.norm == norm
269 |
270 | def test_hue_init_color_kwarg(self):
271 | # color_kwarg in keyword arguments and hue is not None: raise
272 | huemixin = self.create_huemixin()
273 | huemixin.kwargs['color'] = 'black'
274 |         huemixin.kwargs['hue'] = 'foo'
275 | with pytest.raises(ValueError):
276 | huemixin.set_hue_values(supports_categorical=False)
277 |
278 | # color_kwarg in keyword arguments and hue is None: set color
279 | huemixin = self.create_huemixin()
280 | huemixin.kwargs['color'] = 'black'
281 | huemixin.kwargs['hue'] = None
282 | huemixin.kwargs['cmap'] = None
283 | huemixin.set_hue_values(supports_categorical=False)
284 |         assert huemixin.colors == ['black'] * 100
285 |
286 | # non-default color_kwarg case
287 | huemixin = self.create_huemixin()
288 | huemixin.color_kwarg = 'foofacecolor'
289 | huemixin.kwargs['foofacecolor'] = 'black'
290 | huemixin.kwargs['hue'] = None
291 | huemixin.kwargs['cmap'] = None
292 | huemixin.set_hue_values(supports_categorical=False)
293 |         assert huemixin.colors == ['black'] * 100
294 |
295 |         # no hue, non-default default_color case
296 | huemixin = self.create_huemixin()
297 | huemixin.kwargs['hue'] = None
298 | huemixin.kwargs['cmap'] = None
299 | huemixin.set_hue_values(supports_categorical=False, default_color='black')
300 |         assert huemixin.colors == ['black'] * 100
301 |
302 | def test_hue_init_scheme_kwarg(self):
303 | # k is not None, scheme is not None, hue is None: raise
304 | huemixin = self.create_huemixin()
305 | huemixin.kwargs['k'] = 5
306 | huemixin.kwargs['scheme'] = 'FisherJenks'
307 | huemixin.kwargs['hue'] = None
308 | huemixin.kwargs['cmap'] = None
309 | with pytest.raises(ValueError):
310 | huemixin.set_hue_values(supports_categorical=True)
311 |
312 |
313 | class TestScale(unittest.TestCase):
314 | def setUp(self):
315 | def create_scalemixin():
316 | scalemixin = ScaleMixin()
317 | # set props the mixin is responsible for
318 | scalemixin.kwargs = {'scale': 'foo', 'limits': (1, 5), 'scale_func': None}
319 | # set props set by the plot object initializer
320 | scalemixin.ax = None
321 | scalemixin.figsize = (8, 6)
322 | scalemixin.extent = None
323 | scalemixin.projection = None
324 |
325 | np.random.seed(42)
326 | scalemixin.df = gpd.GeoDataFrame(
327 | {'foo': np.random.random(100), 'geometry': utils.gaussian_points(n=100)}
328 | )
329 | return scalemixin
330 |
331 | self.create_scalemixin = create_scalemixin
332 |
333 | def test_scale_init_defaults(self):
334 | scalemixin = self.create_scalemixin()
335 | scalemixin.set_scale_values()
336 | assert scalemixin.limits == (1, 5)
337 | assert len(scalemixin.scale) == 100
338 | assert len(scalemixin.sizes) == 100
339 | assert (scalemixin.sizes <= 5).all()
340 | assert (scalemixin.sizes >= 1).all()
341 | assert scalemixin.scale_func is None
342 | assert scalemixin.dscale is not None # dscale is the calibrated internal scale
343 |
344 | def test_scale_init_scale_dtypes(self):
345 |         # scale is initialized as a str: source the values from the backing
346 |         # GeoDataFrame column
347 |         scalemixin = self.create_scalemixin()
348 |         scalemixin.kwargs['scale'] = 'foo'
349 |         scalemixin.set_scale_values()
350 |         assert (scalemixin.scale == scalemixin.df['foo']).all()
351 |
352 |         # scale is initialized as a Series: use it as-is
353 | scalemixin = self.create_scalemixin()
354 | scale = pd.Series(np.random.random(100))
355 | scalemixin.kwargs['scale'] = scale
356 | scalemixin.set_scale_values()
357 |         assert (scalemixin.scale == scale).all()
358 |
359 |         # scale is initialized as a list: transform to a Series
360 |         scalemixin = self.create_scalemixin()
361 |         scale = list(np.random.random(100))
362 |         scalemixin.kwargs['scale'] = scale
363 |         scalemixin.set_scale_values()
364 |         assert (scalemixin.scale == scale).all()
365 |
366 |         # scale is initialized as an array: transform to a Series
367 | scalemixin = self.create_scalemixin()
368 | scale = np.random.random(100)
369 | scalemixin.kwargs['scale'] = scale
370 | scalemixin.set_scale_values()
371 |         assert (scalemixin.scale == scale).all()
372 |
373 | def test_scale_init_scale_func(self):
374 | # if scale is None and scale_func is not None, raise
375 | scalemixin = self.create_scalemixin()
376 | scalemixin.kwargs['scale'] = None
377 | scalemixin.kwargs['scale_func'] = lambda v: v
378 | with pytest.raises(ValueError):
379 | scalemixin.set_scale_values()
380 |
381 | # if scale is not None and scale_func is not None, apply that func
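    |         # note: identity_scale below ignores its inputs and always returns the
    |         # constant 2, so every computed size should come out as exactly 2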
382 | def identity_scale(minval, maxval):
383 | def scalar(val):
384 | return 2
385 | return scalar
386 |
387 | scalemixin = self.create_scalemixin()
388 | scalemixin.kwargs['scale_func'] = identity_scale
389 | scalemixin.set_scale_values()
390 | assert (scalemixin.sizes == 2).all()
391 |
392 | def test_scale_init_param_size_kwarg(self):
393 | scalemixin = self.create_scalemixin()
394 | scalemixin.kwargs['scale'] = None
395 | scalemixin.kwargs['scale_func'] = None
396 | scalemixin.kwargs['foosize'] = 2
397 | scalemixin.set_scale_values(size_kwarg='foosize')
398 | assert scalemixin.sizes == [2] * 100
399 |
400 | def test_scale_init_param_default_size(self):
401 | scalemixin = self.create_scalemixin()
402 | scalemixin.kwargs['scale'] = None
403 | scalemixin.kwargs['scale_func'] = None
404 | scalemixin.set_scale_values(default_size=2)
405 | assert scalemixin.sizes == [2] * 100
406 |
407 |
408 | class TestLegend(unittest.TestCase):
409 | def setUp(self):
410 | def create_legendmixin(legend_vars):
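    |             # legend_vars selects which upstream mixin state to fake on the
    |             # object: 'hue' for HueMixin props, 'scale' for ScaleMixin props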
411 | legendmixin = LegendMixin()
412 | # set props the mixin is responsible for
413 | legendmixin.kwargs = {
414 | 'legend': True, 'legend_labels': None, 'legend_values': None,
415 | 'legend_kwargs': None, 'legend_var': None
416 | }
417 | # set props controlled by the plot object initializer
418 | _, ax = plt.subplots(figsize=(8, 6))
419 | legendmixin.ax = ax
420 | legendmixin.figsize = (8, 6)
421 | legendmixin.extent = None
422 | legendmixin.projection = None
423 |
424 | # set data prop
425 | np.random.seed(42)
426 | legendmixin.df = gpd.GeoDataFrame(
427 | {'foo': np.random.random(100), 'geometry': utils.gaussian_points(n=100)}
428 | )
429 |
430 | # set props controlled by the hue initializer, if appropriate
431 | if 'hue' in legend_vars:
432 | legendmixin.colors = ['black'] * 100
433 | legendmixin.hue = legendmixin.df.foo
434 | legendmixin.mpl_cm_scalar_mappable = ScalarMappable(cmap='Reds')
435 | legendmixin.k = None
436 | legendmixin.categorical = False
437 | legendmixin.categories = None
438 | legendmixin.color_kwarg = 'color'
439 | legendmixin.default_color = 'steelblue'
440 |
441 | # set props controlled by the scale initializer, if appropriate
442 | if 'scale' in legend_vars:
443 | legendmixin.scale = legendmixin.df.foo
444 | legendmixin.limits = (1, 5)
445 | legendmixin.scale_func = None
446 | legendmixin.dscale = lambda v: 1
447 | legendmixin.sizes = [1] * 100
448 |
449 | return legendmixin
450 |
451 | self.create_legendmixin = create_legendmixin
452 |
453 | @figure_cleanup
454 | def test_legend_init_defaults(self):
455 | legendmixin = self.create_legendmixin(['hue'])
456 | legendmixin.paint_legend()
457 | # legendmixin is a painter, not a value-setter, so this is a smoke test
458 |
459 | @figure_cleanup
460 | def test_legend_invalid_inputs(self):
461 | # no hue or scale, but legend is True: raise
462 | legendmixin = self.create_legendmixin(['hue', 'scale'])
463 | legendmixin.hue = None
464 | legendmixin.scale = None
465 | with pytest.raises(ValueError):
466 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
467 |
468 | # only hue, but legend_var is scale: raise
469 | legendmixin = self.create_legendmixin(['hue', 'scale'])
470 | legendmixin.scale = None
471 | legendmixin.kwargs['legend_var'] = 'scale'
472 | with pytest.raises(ValueError):
473 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
474 |
475 |         # only scale, but legend_var is hue: raise
476 | legendmixin = self.create_legendmixin(['hue', 'scale'])
477 | legendmixin.hue = None
478 | legendmixin.kwargs['legend_var'] = 'hue'
479 | with pytest.raises(ValueError):
480 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
481 |
482 | # legend_var is set to an invalid input: raise
483 | legendmixin = self.create_legendmixin(['hue', 'scale'])
484 | legendmixin.kwargs['legend_var'] = 'foovar'
485 | with pytest.raises(ValueError):
486 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
487 |
488 |         # legend is False and legend_var is not None: raise
489 | legendmixin = self.create_legendmixin(['hue', 'scale'])
490 | legendmixin.kwargs['legend'] = False
491 | legendmixin.kwargs['legend_var'] = 'hue'
492 | with pytest.raises(ValueError):
493 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
494 |
495 |         # legend is False and legend_values is not None: raise
496 | legendmixin = self.create_legendmixin(['hue', 'scale'])
497 | legendmixin.kwargs['legend'] = False
498 | legendmixin.kwargs['legend_values'] = [1] * 5
499 | with pytest.raises(ValueError):
500 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
501 |
502 |         # legend is False and legend_labels is not None: raise
503 | legendmixin = self.create_legendmixin(['hue', 'scale'])
504 | legendmixin.kwargs['legend'] = False
505 | legendmixin.kwargs['legend_labels'] = [1] * 5
506 | with pytest.raises(ValueError):
507 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
508 |
509 |         # legend is False and legend_kwargs is not None: raise
510 | legendmixin = self.create_legendmixin(['hue', 'scale'])
511 | legendmixin.kwargs['legend'] = False
512 | legendmixin.kwargs['legend_kwargs'] = {'fancybox': True}
513 | with pytest.raises(ValueError):
514 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
515 |
516 | # legend is True, but legend_labels and legend_values are different lengths: raise
517 | legendmixin = self.create_legendmixin(['hue', 'scale'])
518 | legendmixin.kwargs['legend_values'] = [1] * 5
519 | legendmixin.kwargs['legend_labels'] = [1] * 4
520 | with pytest.raises(ValueError):
521 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
522 |
523 | # legend is in hue mode, and the user passes a markerfacecolor: raise
524 | legendmixin = self.create_legendmixin(['hue', 'scale'])
525 | legendmixin.kwargs['legend_var'] = 'hue'
526 | legendmixin.kwargs['legend_kwargs'] = {'markerfacecolor': 'black'}
527 | with pytest.raises(ValueError):
528 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
529 |
530 | # legend is in scale mode, and the user passes a markersize: raise
531 | legendmixin = self.create_legendmixin(['hue', 'scale'])
532 | legendmixin.kwargs['legend_var'] = 'scale'
533 | legendmixin.kwargs['legend_kwargs'] = {'markersize': 12}
534 | with pytest.raises(ValueError):
535 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
536 |
537 | # legend is in colorbar hue mode (e.g. k = None, legend=True, legend_var = 'hue') but
538 | # legend_kwargs includes marker* parameters, which can only be applied to the marker
539 | # style legend: raise
540 | legendmixin = self.create_legendmixin(['hue', 'scale'])
541 | legendmixin.kwargs['legend_var'] = 'hue'
542 | legendmixin.k = None
543 | legendmixin.kwargs['legend_kwargs'] = {'markerfacecolor': 'black'}
544 | with pytest.raises(ValueError):
545 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
546 |
547 | # legend is in colorbar mode, but legend_values is specified: raise NotImplementedError
548 | legendmixin = self.create_legendmixin(['hue', 'scale'])
549 | legendmixin.kwargs['legend_var'] = 'hue'
550 | legendmixin.k = None
551 | legendmixin.kwargs['legend_values'] = [1] * 5
552 | with pytest.raises(NotImplementedError):
553 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
554 |
555 | # legend is in colorbar mode, but legend_labels is specified: raise NotImplementedError
556 | legendmixin = self.create_legendmixin(['hue', 'scale'])
557 | legendmixin.kwargs['legend_var'] = 'hue'
558 | legendmixin.k = None
559 | legendmixin.kwargs['legend_labels'] = [1] * 5
560 | with pytest.raises(NotImplementedError):
561 | legendmixin.paint_legend(supports_scale=True, supports_hue=True)
562 |
563 |
564 | class TestClip(unittest.TestCase):
565 | def setUp(self):
566 | def create_clipmixin():
567 | clipmixin = ClipMixin()
568 | clipmixin.kwargs = {
569 | 'clip': gpd.GeoSeries(Polygon([[0, 0], [0, 100], [100, 100], [100, 0]]))
570 | }
571 | clipmixin.ax = None
572 | clipmixin.figsize = (8, 6)
573 | clipmixin.extent = None
574 | clipmixin.projection = None
575 |
576 | np.random.seed(42)
577 | points = utils.gaussian_points(n=2000)
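    |             # mix 50 Polygon and 50 MultiPolygon geometries so that clipping
    |             # is exercised against both geometry types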
578 | geoms = np.hstack([
579 | utils.gaussian_polygons(points, n=50), utils.gaussian_multi_polygons(points, n=50)
580 | ])
581 | clipmixin.df = gpd.GeoDataFrame({
582 | 'foo': np.random.random(len(geoms)),
583 | 'geometry': geoms,
584 | })
585 | return clipmixin
586 |
587 | self.create_clipmixin = create_clipmixin
588 |
589 | def test_clip_init_default(self):
590 | clipmixin = self.create_clipmixin()
591 |
592 |         # geopandas versions before 0.11 emitted a UserWarning here because of the narrow clip
593 | df_result = clipmixin.set_clip(clipmixin.df)
594 | expected = Polygon([[0, 0], [0, 100], [100, 100], [100, 0]])
595 | result = df_result.geometry.unary_union.envelope
596 | assert expected.contains(result)
597 |
598 |
599 | class TestWebmapInput(unittest.TestCase):
600 | # TODO: stub out network requests to the tile service
601 |
602 | def setUp(self):
603 | np.random.seed(42)
604 | p_srs = gpd.GeoSeries(utils.gaussian_points(n=100))
605 | self.p_df = gpd.GeoDataFrame(geometry=p_srs)
606 |
607 | @figure_cleanup
608 | def test_webmap_input_restrictions(self):
609 | """Test webmap-specific plot restrictions."""
610 | with pytest.raises(ValueError):
611 | webmap(self.p_df, projection=gcrs.AlbersEqualArea())
612 |
613 | _, ax = plt.subplots(figsize=(2, 2))
614 | with pytest.raises(ValueError):
615 | webmap(self.p_df, ax=ax)
616 |
617 | ax = plt.axes(projection=ccrs.PlateCarree())
618 | with pytest.raises(ValueError):
619 | webmap(self.p_df, ax=ax)
620 |
621 | with pytest.warns(UserWarning):
622 | webmap(self.p_df)
623 |
--------------------------------------------------------------------------------
/tests/proj_tests.py:
--------------------------------------------------------------------------------
1 | """
2 | Test that projections in `geoplot` function correctly.
3 | """
4 |
5 | import pytest
6 | import geoplot as gplt
7 | import geoplot.crs as gcrs
8 | import geopandas as gpd
9 | import matplotlib.pyplot as plt
10 | import warnings
11 |
12 |
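   | # low-resolution Natural Earth country polygons that ship with geopandas;
   | # loaded once per test module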
13 | @pytest.fixture(scope="module")
14 | def countries():
15 | return gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
16 |
17 |
18 | @pytest.mark.mpl_image_compare
19 | @pytest.mark.parametrize("proj", [
20 | gcrs.PlateCarree(),
21 | gcrs.LambertCylindrical(),
22 | gcrs.Mercator(),
23 | gcrs.Miller(),
24 | gcrs.Mollweide(),
25 | gcrs.Robinson(),
26 | gcrs.Sinusoidal(),
27 | pytest.param(gcrs.InterruptedGoodeHomolosine(), marks=pytest.mark.xfail),
28 | pytest.param(gcrs.Geostationary(), marks=pytest.mark.xfail),
29 | gcrs.NorthPolarStereo(),
30 | gcrs.SouthPolarStereo(),
31 | gcrs.Gnomonic(),
32 | gcrs.AlbersEqualArea(),
33 | gcrs.AzimuthalEquidistant(),
34 | gcrs.LambertConformal(),
35 | gcrs.Orthographic(),
36 | gcrs.Stereographic(),
37 | pytest.param(gcrs.TransverseMercator(), marks=pytest.mark.xfail),
38 | gcrs.LambertAzimuthalEqualArea(),
39 | gcrs.WebMercator()
40 | ])
41 | def test_basic_global_projections(proj, countries):
42 | gplt.polyplot(countries, proj)
43 | ax = plt.gca()
44 | ax.set_global()
45 | return plt.gcf()
46 |
47 |
48 | @pytest.mark.mpl_image_compare
49 | @pytest.mark.parametrize("proj", [
50 | gcrs.EuroPP(),
51 | gcrs.OSGB(),
52 | ])
53 | def test_basic_non_global_projections(proj, countries):
54 | with warnings.catch_warnings():
55 | warnings.simplefilter("ignore")
56 | gplt.polyplot(countries, proj)
57 | return plt.gcf()
58 |
59 |
60 | @pytest.mark.mpl_image_compare
61 | @pytest.mark.parametrize("proj", [
62 | gcrs.PlateCarree(central_longitude=45),
63 | gcrs.LambertCylindrical(central_longitude=45),
64 | gcrs.Mercator(central_longitude=45),
65 | gcrs.Miller(central_longitude=45),
66 | gcrs.Mollweide(central_longitude=45),
67 | gcrs.Robinson(central_longitude=45),
68 | gcrs.Sinusoidal(central_longitude=45),
69 | pytest.param(gcrs.InterruptedGoodeHomolosine(central_longitude=45), marks=pytest.mark.xfail),
70 | pytest.param(gcrs.Geostationary(central_longitude=45), marks=pytest.mark.xfail),
71 | gcrs.NorthPolarStereo(central_longitude=45),
72 | gcrs.SouthPolarStereo(central_longitude=45),
73 | gcrs.Gnomonic(central_latitude=45),
74 | gcrs.AlbersEqualArea(central_longitude=45, central_latitude=45),
75 | gcrs.AzimuthalEquidistant(central_longitude=45, central_latitude=45),
76 | gcrs.LambertConformal(central_longitude=45, central_latitude=45),
77 | gcrs.Orthographic(central_longitude=45, central_latitude=45),
78 | gcrs.Stereographic(central_longitude=45, central_latitude=45),
79 | pytest.param(
80 | gcrs.TransverseMercator(central_longitude=45, central_latitude=45),
81 | marks=pytest.mark.xfail
82 | ),
83 | gcrs.LambertAzimuthalEqualArea(central_longitude=45, central_latitude=45),
84 | ])
85 | def test_fully_parameterized_global_projections(proj, countries):
86 | gplt.polyplot(countries, proj)
87 | ax = plt.gca()
88 | ax.set_global()
89 | return plt.gcf()
90 |
91 |
92 | @pytest.mark.mpl_image_compare
93 | @pytest.mark.parametrize("proj", [
94 | gcrs.AlbersEqualArea(central_longitude=45),
95 | gcrs.AlbersEqualArea(central_latitude=45),
96 | gcrs.AzimuthalEquidistant(central_longitude=45),
97 | gcrs.AzimuthalEquidistant(central_latitude=45),
98 | gcrs.LambertConformal(central_longitude=45),
99 | gcrs.LambertConformal(central_latitude=45),
100 | gcrs.Orthographic(central_longitude=45),
101 | gcrs.Orthographic(central_latitude=45),
102 | gcrs.Stereographic(central_longitude=45),
103 | gcrs.Stereographic(central_latitude=45),
104 | pytest.param(gcrs.TransverseMercator(central_longitude=45), marks=pytest.mark.xfail),
105 | pytest.param(gcrs.TransverseMercator(central_latitude=45), marks=pytest.mark.xfail),
106 | pytest.param(gcrs.LambertAzimuthalEqualArea(central_longitude=45), marks=pytest.mark.xfail),
107 | gcrs.LambertAzimuthalEqualArea(central_latitude=45),
108 | ])
109 | def test_partially_parameterized_global_projections(proj, countries):
110 | gplt.polyplot(countries, proj)
111 | ax = plt.gca()
112 | ax.set_global()
113 | return plt.gcf()
114 |
115 |
116 | @pytest.mark.mpl_image_compare
117 | @pytest.mark.parametrize("proj", [
118 | gcrs.PlateCarree(),
119 | gcrs.LambertCylindrical(),
120 | gcrs.Mercator(),
121 | gcrs.Miller(),
122 | gcrs.Mollweide(),
123 | gcrs.Robinson(),
124 | gcrs.Sinusoidal(),
125 | pytest.param(gcrs.InterruptedGoodeHomolosine(), marks=pytest.mark.xfail),
126 | pytest.param(gcrs.Geostationary(), marks=pytest.mark.xfail),
127 | gcrs.NorthPolarStereo(),
128 | gcrs.SouthPolarStereo(),
129 | pytest.param(gcrs.Gnomonic(), marks=pytest.mark.xfail),
130 | gcrs.AlbersEqualArea(),
131 | gcrs.AzimuthalEquidistant(),
132 | gcrs.LambertConformal(),
133 | pytest.param(gcrs.Orthographic(), marks=pytest.mark.xfail),
134 | gcrs.Stereographic(),
135 | pytest.param(gcrs.TransverseMercator(), marks=pytest.mark.xfail),
136 | gcrs.LambertAzimuthalEqualArea(),
137 | gcrs.WebMercator()
138 | ])
139 | def test_subplots_global_projections(proj, countries):
140 | gplt.polyplot(countries, proj, ax=plt.subplot(2, 1, 1, projection=proj)).set_global()
141 | gplt.polyplot(countries, proj, ax=plt.subplot(2, 1, 2, projection=proj)).set_global()
142 | return plt.gcf()
143 |
--------------------------------------------------------------------------------
/tests/viz_tests.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import numpy as np
3 | import geopandas as gpd
4 | from shapely.geometry import Polygon
5 | from matplotlib.colors import Normalize
6 |
7 | from geoplot import utils
8 | from geoplot import (
9 | pointplot, voronoi, kdeplot, polyplot, webmap, choropleth, cartogram, quadtree,
10 | sankey
11 | )
12 | from geoplot.crs import AlbersEqualArea, WebMercator
13 |
14 |
15 | np.random.seed(42)
16 | p_srs = gpd.GeoSeries(utils.gaussian_points(n=100))
17 | p_df = gpd.GeoDataFrame(geometry=p_srs)
18 | p_df = p_df.assign(var=p_df.geometry.map(lambda p: abs(p.y) + abs(p.x)))
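   | # var_cat buckets `var` into a handful of string-labeled bins, used by the
   | # categorical hue test cases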
19 | p_df = p_df.assign(var_cat=np.floor(p_df['var'] // (p_df['var'].max() / 5)).astype(str))
20 |
21 | poly_df = gpd.GeoDataFrame(geometry=utils.gaussian_polygons(p_srs.geometry, n=10))
22 | poly_df = poly_df.assign(
23 | var=poly_df.geometry.centroid.x.abs() + poly_df.geometry.centroid.y.abs()
24 | )
25 |
26 | ls_df = gpd.GeoDataFrame(geometry=utils.gaussian_linestrings(p_srs.geometry))
27 | ls_df = ls_df.assign(var=ls_df.geometry.centroid.x.abs() + ls_df.geometry.centroid.y.abs())
28 |
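   | # clip_geom is a small box that should cut into the point spread; non_clip_geom
   | # is large enough that clipping against it should leave the data unchanged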
29 | clip_geom = gpd.GeoSeries(Polygon([[-10, -10], [10, -10], [10, 10], [-10, 10]]))
30 | non_clip_geom = gpd.GeoSeries(Polygon([[-30, -30], [30, -30], [30, 30], [-30, 30]]))
31 |
32 |
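   | # despite its name, identity_scale maps every value to the constant size 10; it
   | # only exists to exercise the scale_func code path with a predictable output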
33 | def identity_scale(minval, maxval):
34 | def scalar(val):
35 | return 10
36 | return scalar
37 |
38 |
39 | @pytest.mark.mpl_image_compare
40 | @pytest.mark.parametrize("kwargs", [
41 | {'hue': 'var', 'linewidth': 0, 's': 10},
42 | {'hue': 'var', 'linewidth': 0, 's': 10, 'scheme': 'FisherJenks'},
43 | {'hue': 'var', 'linewidth': 0, 's': 10, 'scheme': 'quantiles'},
44 | {'hue': 'var', 'linewidth': 0, 's': 10, 'scheme': 'EqualInterval'},
45 | {'hue': 'var_cat', 'linewidth': 0, 's': 10},
46 | {'hue': 'var_cat', 'linewidth': 0, 's': 10, 'scheme': 'categorical'},
47 | {'hue': 'var', 'linewidth': 0, 's': 10, 'cmap': 'Greens', 'scheme': 'quantiles'},
48 | {'hue': 'var', 'linewidth': 0, 's': 10, 'cmap': 'Greens'},
49 | {'hue': p_df['var'], 'linewidth': 0, 's': 10},
50 | {'hue': np.array(p_df['var']), 'linewidth': 0, 's': 10},
51 | {'hue': list(p_df['var']), 'linewidth': 0, 's': 10}
52 | ])
53 | def test_hue_params(kwargs):
54 | return pointplot(p_df, **kwargs).get_figure()
55 |
56 |
57 | @pytest.mark.mpl_image_compare
58 | @pytest.mark.parametrize("kwargs", [
59 | pytest.param({'cmap': 'Reds'}),
60 | pytest.param({'cmap': 'Blues', 'shade': True}),
61 | pytest.param({'cmap': 'Greens', 'shade': True, 'thresh': 0.05})
62 | ])
63 | def test_hue_params_kdeplot(kwargs):
64 | return kdeplot(p_df, **kwargs).get_figure()
65 |
66 |
67 | @pytest.mark.mpl_image_compare
68 | @pytest.mark.parametrize("kwargs", [
69 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10)},
70 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'scale_func': identity_scale}
71 | ])
72 | def test_scale_params(kwargs):
73 | return pointplot(p_df, **kwargs).get_figure()
74 |
75 |
76 | @pytest.mark.mpl_image_compare
77 | @pytest.mark.parametrize("kwargs", [
78 | {'clip': clip_geom, 'edgecolor': 'white', 'facecolor': 'lightgray'},
79 | {'clip': non_clip_geom, 'edgecolor': 'white', 'facecolor': 'lightgray'},
80 | {'clip': clip_geom, 'edgecolor': 'white', 'facecolor': 'lightgray',
81 | 'projection': AlbersEqualArea()},
82 | {'clip': non_clip_geom, 'edgecolor': 'white', 'facecolor': 'lightgray',
83 | 'projection': AlbersEqualArea()}
84 | ])
85 | def test_clip_params_geometric(kwargs):
86 | # ignore warning from changed GeoSeries.isna behavior
87 | import warnings
88 | with warnings.catch_warnings():
89 | warnings.filterwarnings('ignore', 'GeoSeries.isna', UserWarning)
90 | return voronoi(p_df, **kwargs).get_figure()
91 |
92 |
93 | # xfail due to seaborn#1773
94 | @pytest.mark.mpl_image_compare
95 | @pytest.mark.parametrize("kwargs", [
96 | pytest.param({'clip': clip_geom}, marks=pytest.mark.xfail),
97 | pytest.param({'clip': non_clip_geom}, marks=pytest.mark.xfail),
98 | pytest.param({'clip': clip_geom, 'projection': AlbersEqualArea()}, marks=pytest.mark.xfail),
99 | pytest.param({'clip': non_clip_geom, 'projection': AlbersEqualArea()}, marks=pytest.mark.xfail)
100 | ])
101 | def test_clip_params_overlay(kwargs):
102 | return kdeplot(p_df, **kwargs).get_figure()
103 |
104 |
105 | @pytest.mark.mpl_image_compare
106 | @pytest.mark.parametrize("kwargs", [
107 | {'hue': 'var', 'linewidth': 0, 's': 10, 'legend': True},
108 | {'hue': 'var', 'linewidth': 0, 's': 10, 'scheme': 'quantiles', 'legend': True},
109 | {'hue': 'var', 'linewidth': 0, 's': 10, 'legend': False},
110 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': True},
111 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': False},
112 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'scale_func': identity_scale,
113 | 'legend': True},
114 | {'hue': 'var', 'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': True,
115 | 'legend_var': 'hue'},
116 | {'hue': 'var', 'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': True,
117 | 'legend_var': 'hue', 'scheme': 'quantiles'},
118 | {'hue': 'var', 'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': True,
119 | 'legend_var': 'scale'},
120 | {'hue': 'var', 'linewidth': 0, 's': 10, 'scheme': 'quantiles', 'legend': True,
121 | 'legend_labels': list('ABCDE')},
122 |     # kwargs[10]: this case is currently broken
123 | {'hue': 'var', 'linewidth': 0, 's': 10, 'scheme': 'quantiles', 'legend': True,
124 | 'legend_values': [1, 1, 2, 4, 4]},
125 |     # kwargs[11]: this case is also currently broken
126 | {'hue': 'var', 'linewidth': 0, 's': 10, 'scheme': 'quantiles', 'legend': True,
127 | 'legend_labels': list('ABCDE'), 'legend_values': [1, 1, 2, 4, 4]},
128 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': True,
129 | 'legend_labels': list('ABCDE')},
130 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': True,
131 | 'legend_values': [1, 1, 2, 4, 4]},
132 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10), 'legend': True,
133 | 'legend_labels': list('ABCDE'), 'legend_values': [1, 1, 2, 4, 4]},
134 | {'hue': 'var', 'linewidth': 0, 's': 10, 'norm': Normalize(vmin=0, vmax=10), 'legend': True},
135 | {'hue': 'var', 'linewidth': 0, 's': 10, 'legend': True,
136 | 'legend_kwargs': {'orientation': 'horizontal'}},
137 | {'hue': 'var', 'linewidth': 0, 's': 10, 'legend': True, 'scheme': 'quantiles',
138 | 'legend_kwargs': {'bbox_to_anchor': (1.0, 1.2)}},
139 | {'hue': 'var', 'linewidth': 0, 's': 10, 'legend': True, 'scheme': 'quantiles',
140 | 'legend_kwargs': {'markeredgecolor': 'purple', 'markeredgewidth': 5}},
141 | {'scale': 'var', 'linewidth': 0, 'limits': (5, 10),
142 | 'legend': True, 'legend_kwargs': {'markerfacecolor': 'purple'}}
143 | ])
144 | def test_legend_params(kwargs):
145 | return pointplot(p_df, **kwargs).get_figure()
146 |
147 |
148 | @pytest.mark.mpl_image_compare
149 | @pytest.mark.parametrize("func,df,kwargs", [
150 | [pointplot, p_df, {'hue': 'var', 'linewidth': 0, 's': 10, 'legend': True}],
151 | [pointplot, p_df,
152 | {'hue': 'var', 'linewidth': 0, 's': 10, 'legend': True,
153 | 'projection': AlbersEqualArea()}],
154 | # xfail due to seaborn#1773
155 | pytest.param(*[kdeplot, p_df, {}], marks=pytest.mark.xfail),
156 | pytest.param(*[kdeplot, p_df, {'projection': AlbersEqualArea()}], marks=pytest.mark.xfail),
157 | [polyplot, poly_df, {}],
158 | [polyplot, poly_df, {'projection': AlbersEqualArea()}],
159 | # xfail because webmap tiles are subject to remote change
160 | pytest.param(*[webmap, p_df, {'projection': WebMercator()}], marks=pytest.mark.xfail),
161 | [choropleth, poly_df, {'hue': 'var', 'linewidth': 0, 'legend': True}],
162 | [choropleth, poly_df,
163 | {'hue': 'var', 'linewidth': 0, 'legend': True,
164 | 'projection': AlbersEqualArea()}],
165 | [cartogram, poly_df, {'scale': 'var', 'linewidth': 0, 'legend': True}],
166 | [cartogram, poly_df,
167 | {'scale': 'var', 'linewidth': 0, 'legend': True,
168 | 'projection': AlbersEqualArea()}],
169 | [voronoi, p_df, {'facecolor': 'lightgray', 'edgecolor': 'white'}],
170 | [voronoi, p_df,
171 | {'facecolor': 'lightgray', 'edgecolor': 'white',
172 | 'projection': AlbersEqualArea()}],
173 | [quadtree, p_df, {'facecolor': 'lightgray', 'edgecolor': 'white'}],
174 | [quadtree, p_df,
175 | {'facecolor': 'lightgray', 'edgecolor': 'white', 'projection': AlbersEqualArea()}],
176 | [sankey, ls_df, {'scale': 'var', 'legend': True}],
177 | [sankey, ls_df, {'scale': 'var', 'legend': True, 'projection': AlbersEqualArea()}]
178 | ])
179 | def test_plot_basic(func, df, kwargs):
180 | return func(df, **kwargs).get_figure()
181 |
182 |
183 | @pytest.mark.mpl_image_compare
184 | def test_param_extent_unproj():
185 | # invalid extent: raise
186 | with pytest.raises(ValueError):
187 | pointplot(p_df, extent=(-181, 0, 1, 1))
188 | with pytest.raises(ValueError):
189 | pointplot(p_df, extent=(0, -91, 1, 1))
190 | with pytest.raises(ValueError):
191 | pointplot(p_df, extent=(0, 0, 181, 1))
192 | with pytest.raises(ValueError):
193 | pointplot(p_df, extent=(0, 0, 1, 91))
194 |
195 | # valid extent: set
196 | return pointplot(p_df, hue='var', linewidth=0, s=10, extent=(-10, -10, 10, 10)).get_figure()
197 |
198 |
199 | @pytest.mark.mpl_image_compare
200 | def test_param_extent_proj():
201 | # invalid extent: raise
202 | with pytest.raises(ValueError):
203 | pointplot(p_df, extent=(-181, 0, 1, 1))
204 | with pytest.raises(ValueError):
205 | pointplot(p_df, extent=(0, -91, 1, 1))
206 | with pytest.raises(ValueError):
207 | pointplot(p_df, extent=(0, 0, 181, 1))
208 | with pytest.raises(ValueError):
209 | pointplot(p_df, extent=(0, 0, 1, 91))
210 |
211 | # valid extent: set
212 | return pointplot(
213 | p_df, hue='var', linewidth=0, s=10, extent=(-10, -10, 10, 10),
214 | projection=AlbersEqualArea()
215 | ).get_figure()
216 |
--------------------------------------------------------------------------------