├── .gitignore ├── LICENSE.md ├── README.rst ├── docs ├── Makefile ├── _config.yml ├── conf.py ├── gasp.rst ├── index.rst ├── make.bat └── usage.md ├── examples ├── Al-ZrAl-CuAl_pd_lammps │ ├── README.rst │ ├── ZrCuAl.eam.alloy │ ├── calllammps │ ├── ga_input.yaml │ ├── in.min │ ├── partial_phase_diagram.png │ └── ref_states │ │ ├── POSCAR.Al │ │ ├── POSCAR.AlCu │ │ └── POSCAR.AlZr ├── Au_clusters_lammps │ ├── Au_u3.eam │ ├── README.rst │ ├── calllammps │ ├── ga_input.yaml │ ├── icosahedron.png │ └── in.min ├── Au_wires_lammps │ ├── Au_u3.eam │ ├── README.rst │ ├── calllammps │ ├── ga_input.yaml │ ├── in.min │ └── wire.png └── SiO2_2D_gulp │ ├── README.rst │ ├── callgulp │ ├── ga_input.yaml │ ├── header_file │ └── potential_file ├── gasp ├── __init__.py ├── development.py ├── energy_calculators.py ├── general.py ├── geometry.py ├── objects_maker.py ├── organism_creators.py ├── parameters_printer.py ├── population.py ├── post_processing │ ├── __init__.py │ └── plotter.py ├── scripts │ ├── plot_phase_diagram.py │ ├── plot_progress.py │ ├── plot_system_size.py │ ├── replace_tabs.py │ └── run.py ├── tests │ ├── __init__.py │ └── test.py └── variations.py └── setup.py /.gitignore: -------------------------------------------------------------------------------- 1 | *.py[cod] 2 | *~ 3 | 4 | *.egg 5 | *.egg-info 6 | dist 7 | build 8 | eggs 9 | parts 10 | bin 11 | var 12 | sdist 13 | develop-eggs 14 | .installed.cfg 15 | lib 16 | lib64 17 | 18 | # Installer logs 19 | pip-log.txt 20 | 21 | # Unit test / coverage reports 22 | .coverage 23 | .tox 24 | nosetests.xml 25 | 26 | .project 27 | .pydevproject 28 | 29 | # Pycharm 30 | .idea/* 31 | 32 | _build/* 33 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | Copyright (c) 2016-2017 Henniggroup Cornell University/University of Florida 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 8 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | GASP is a genetic algorithm for structure and phase prediction written in Python and interfaced to GULP_, LAMMPS_ and VASP_. It can search for the structures of clusters, 2D materials, wires, and bulk materials and do both fixed-composition and phase diagram searches. 2 | 3 | .. _VASP: http://www.vasp.at/ 4 | .. _LAMMPS: http://lammps.sandia.gov/ 5 | .. 
_GULP: https://gulp.curtin.edu.au/gulp/ 6 | 7 | 8 | Getting GASP 9 | ============ 10 | It is easiest to install GASP and all its dependencies into a conda_ environment. GASP makes extensive use of pymatgen_, an open source Python library for materials analysis, and these instructions have been adapted from pymatgen. More details on installing pymatgen can be found at http://pymatgen.org/installation.html. 11 | 12 | If pymatgen is already installed, steps 1-3 may be skipped. 13 | 14 | .. _conda: http://conda.pydata.org/docs/index.html 15 | .. _pymatgen: http://pymatgen.org/ 16 | 17 | 1. Install conda 18 | ---------------- 19 | 20 | Download and install the version of conda for your operating system from http://conda.pydata.org/miniconda.html. Although GASP is compatible with both Python 2.7 and 3.6, pymatgen recommends using Python 3.6. 21 | 22 | For Windows, make sure you have the Miniconda3 installer, and simply double-click the .exe file. 23 | 24 | For Mac or Linux, run the bash script:: 25 | 26 | # if Mac 27 | bash Miniconda3-latest-MacOSX-x86_64.sh 28 | 29 | # if Linux 30 | bash Miniconda3-latest-Linux-x86_64.sh 31 | 32 | The installer will ask you to approve the terms of the license and then tell you where the installation will be located. The default location is fine, so press Enter. Finally, the installer will ask if you would like it to add the install location to your PATH by prepending a line to your .bash_profile (for Mac) or .bashrc (for Linux) file. Type 'yes' and press Enter. 33 | 34 | After completing the installation, open a new terminal so that the environment variables added by conda are loaded. 35 | 36 | 37 | 2. Create a conda environment 38 | ----------------------------- 39 | 40 | To create a new conda environment named 'my_gasp', type:: 41 | 42 | conda create --name my_gasp python=3.6 43 | 44 | When conda asks you:: 45 | 46 | proceed ([y]/n)? 47 | 48 | Type 'y' and press Enter. 49 | 50 | Now activate the environment so that packages can be installed into it:: 51 | 52 | # if Mac or Linux 53 | source activate my_gasp 54 | 55 | # if Windows 56 | activate my_gasp 57 | 58 | 59 | 3. Install pymatgen and its dependencies 60 | ---------------------------------------- 61 | 62 | pymatgen recommends using the gcc compiler. To select it, type:: 63 | 64 | export CC=gcc 65 | 66 | Install numpy, scipy, matplotlib and pymatgen with pip:: 67 | 68 | pip install numpy 69 | pip install scipy 70 | pip install matplotlib 71 | pip install pymatgen 72 | 73 | When searching for clusters and wires, GASP uses features of pymatgen that depend on openbabel. So if you plan to use GASP to search for clusters or wires, install openbabel in your conda environment:: 74 | 75 | conda install -c openbabel openbabel 76 | 77 | For Mac, an additional step is needed in order to use the scripts included with GASP for making plots. These scripts depend on the matplotlib_ library, which requires a framework build of Python to run properly on Mac OS X. However, a regular Python build comes with conda by default. To install a framework build in your conda environment, type:: 78 | 79 | conda install python.app 80 | 81 | See the 'Visualizing output' section of the `usage file`_ for more information on making plots. 82 | 83 | .. _matplotlib: http://matplotlib.org/index.html 84 | 85 | 86 | 4. 
Install GASP-python 87 | ---------------------- 88 | 89 | Clone the repository from GitHub:: 90 | 91 | git clone https://github.com/henniggroup/GASP-python.git 92 | 93 | Move into the 'GASP-python' directory and run the setup.py script:: 94 | 95 | cd GASP-python 96 | python setup.py develop 97 | 98 | 99 | Using GASP 100 | ========== 101 | 102 | See the `usage file`_. 103 | 104 | .. _usage file: docs/usage.md 105 | 106 | 107 | License 108 | ======= 109 | 110 | GASP-python is released under the MIT License:: 111 | 112 | Copyright (c) 2016-2017 Henniggroup Cornell/University of Florida 113 | 114 | Permission is hereby granted, free of charge, to any person obtaining a copy of 115 | this software and associated documentation files (the "Software"), to deal in 116 | the Software without restriction, including without limitation the rights to 117 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 118 | the Software, and to permit persons to whom the Software is furnished to do so, 119 | subject to the following conditions: 120 | 121 | The above copyright notice and this permission notice shall be included in all 122 | copies or substantial portions of the Software. 123 | 124 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 125 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 126 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 127 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 128 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 129 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 130 | 131 | 132 | Contributing 133 | ============ 134 | 135 | We try to follow the PEP8 coding style used by pymatgen: http://pymatgen.org/contributing.html#coding-guidelines. 136 | 137 | Authors 138 | ======= 139 | 140 | Benjamin Revard 141 | 142 | William W. Tipton 143 | 144 | Richard G. Hennig 145 | 146 | 147 | How to cite 148 | =========== 149 | 150 | DOI: 10.5281/zenodo.2554076 151 | 152 | BibTeX entry for the GitHub repository:: 153 | 154 | @misc{GASP-Python, 155 | title = {Genetic algorithm for structure and phase prediction}, 156 | author = {B. C. Revard and W. W. Tipton and R. G. Hennig}, 157 | year = 2018, 158 | publisher = {GitHub}, 159 | journal = {GitHub repository}, 160 | howpublished = {\url{https://github.com/henniggroup/GASP-python}}, 161 | url = {https://github.com/henniggroup/GASP-python}, 162 | doi = {10.5281/zenodo.2554076} 163 | } 164 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # Internal variables. 11 | PAPEROPT_a4 = -D latex_paper_size=a4 12 | PAPEROPT_letter = -D latex_paper_size=letter 13 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 14 | # the i18n builder cannot share the environment and doctrees with the others 15 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
16 | 17 | .PHONY: help 18 | help: 19 | @echo "Please use \`make ' where is one of" 20 | @echo " html to make standalone HTML files" 21 | @echo " dirhtml to make HTML files named index.html in directories" 22 | @echo " singlehtml to make a single large HTML file" 23 | @echo " pickle to make pickle files" 24 | @echo " json to make JSON files" 25 | @echo " htmlhelp to make HTML files and a HTML help project" 26 | @echo " qthelp to make HTML files and a qthelp project" 27 | @echo " applehelp to make an Apple Help Book" 28 | @echo " devhelp to make HTML files and a Devhelp project" 29 | @echo " epub to make an epub" 30 | @echo " epub3 to make an epub3" 31 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 32 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 33 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 34 | @echo " text to make text files" 35 | @echo " man to make manual pages" 36 | @echo " texinfo to make Texinfo files" 37 | @echo " info to make Texinfo files and run them through makeinfo" 38 | @echo " gettext to make PO message catalogs" 39 | @echo " changes to make an overview of all changed/added/deprecated items" 40 | @echo " xml to make Docutils-native XML files" 41 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 42 | @echo " linkcheck to check all external links for integrity" 43 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 44 | @echo " coverage to run coverage check of the documentation (if enabled)" 45 | @echo " dummy to check syntax errors of document sources" 46 | 47 | .PHONY: clean 48 | clean: 49 | rm -rf $(BUILDDIR)/* 50 | 51 | .PHONY: html 52 | html: 53 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 54 | @echo 55 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 56 | 57 | .PHONY: dirhtml 58 | dirhtml: 59 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 60 | @echo 61 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 62 | 63 | .PHONY: singlehtml 64 | singlehtml: 65 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 66 | @echo 67 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 68 | 69 | .PHONY: pickle 70 | pickle: 71 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 72 | @echo 73 | @echo "Build finished; now you can process the pickle files." 74 | 75 | .PHONY: json 76 | json: 77 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 78 | @echo 79 | @echo "Build finished; now you can process the JSON files." 80 | 81 | .PHONY: htmlhelp 82 | htmlhelp: 83 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 84 | @echo 85 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 86 | ".hhp project file in $(BUILDDIR)/htmlhelp." 87 | 88 | .PHONY: qthelp 89 | qthelp: 90 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 91 | @echo 92 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 93 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 94 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/gasp.qhcp" 95 | @echo "To view the help file:" 96 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/gasp.qhc" 97 | 98 | .PHONY: applehelp 99 | applehelp: 100 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp 101 | @echo 102 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." 103 | @echo "N.B. 
You won't be able to view it unless you put it in" \ 104 | "~/Library/Documentation/Help or install it in your application" \ 105 | "bundle." 106 | 107 | .PHONY: devhelp 108 | devhelp: 109 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 110 | @echo 111 | @echo "Build finished." 112 | @echo "To view the help file:" 113 | @echo "# mkdir -p $$HOME/.local/share/devhelp/gasp" 114 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/gasp" 115 | @echo "# devhelp" 116 | 117 | .PHONY: epub 118 | epub: 119 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 120 | @echo 121 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 122 | 123 | .PHONY: epub3 124 | epub3: 125 | $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3 126 | @echo 127 | @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3." 128 | 129 | .PHONY: latex 130 | latex: 131 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 132 | @echo 133 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 134 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 135 | "(use \`make latexpdf' here to do that automatically)." 136 | 137 | .PHONY: latexpdf 138 | latexpdf: 139 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 140 | @echo "Running LaTeX files through pdflatex..." 141 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 142 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 143 | 144 | .PHONY: latexpdfja 145 | latexpdfja: 146 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 147 | @echo "Running LaTeX files through platex and dvipdfmx..." 148 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 149 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 150 | 151 | .PHONY: text 152 | text: 153 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 154 | @echo 155 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 156 | 157 | .PHONY: man 158 | man: 159 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 160 | @echo 161 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 162 | 163 | .PHONY: texinfo 164 | texinfo: 165 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 166 | @echo 167 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 168 | @echo "Run \`make' in that directory to run these through makeinfo" \ 169 | "(use \`make info' here to do that automatically)." 170 | 171 | .PHONY: info 172 | info: 173 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 174 | @echo "Running Texinfo files through makeinfo..." 175 | make -C $(BUILDDIR)/texinfo info 176 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 177 | 178 | .PHONY: gettext 179 | gettext: 180 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 181 | @echo 182 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 183 | 184 | .PHONY: changes 185 | changes: 186 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 187 | @echo 188 | @echo "The overview file is in $(BUILDDIR)/changes." 189 | 190 | .PHONY: linkcheck 191 | linkcheck: 192 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 193 | @echo 194 | @echo "Link check complete; look for any errors in the above output " \ 195 | "or in $(BUILDDIR)/linkcheck/output.txt." 
196 | 197 | .PHONY: doctest 198 | doctest: 199 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 200 | @echo "Testing of doctests in the sources finished, look at the " \ 201 | "results in $(BUILDDIR)/doctest/output.txt." 202 | 203 | .PHONY: coverage 204 | coverage: 205 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage 206 | @echo "Testing of coverage in the sources finished, look at the " \ 207 | "results in $(BUILDDIR)/coverage/python.txt." 208 | 209 | .PHONY: xml 210 | xml: 211 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 212 | @echo 213 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 214 | 215 | .PHONY: pseudoxml 216 | pseudoxml: 217 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 218 | @echo 219 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 220 | 221 | .PHONY: dummy 222 | dummy: 223 | $(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy 224 | @echo 225 | @echo "Build finished. Dummy builder generates no files." 226 | -------------------------------------------------------------------------------- /docs/_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-cayman -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | # 4 | # gasp documentation build configuration file, created by 5 | # sphinx-quickstart on Wed Dec 7 20:02:23 2016. 6 | # 7 | # This file is execfile()d with the current directory set to its 8 | # containing dir. 9 | # 10 | # Note that not all possible configuration values are present in this 11 | # autogenerated file. 12 | # 13 | # All configuration values have a default; values that are commented out 14 | # serve to show the default. 15 | 16 | # If extensions (or modules to document with autodoc) are in another directory, 17 | # add these directories to sys.path here. If the directory is relative to the 18 | # documentation root, use os.path.abspath to make it absolute, like shown here. 19 | # 20 | # import os 21 | # import sys 22 | # sys.path.insert(0, os.path.abspath('.')) 23 | 24 | # -- General configuration ------------------------------------------------ 25 | 26 | # If your documentation needs a minimal Sphinx version, state it here. 27 | # 28 | # needs_sphinx = '1.0' 29 | 30 | # Add any Sphinx extension module names here, as strings. They can be 31 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 32 | # ones. 33 | extensions = [ 34 | 'sphinx.ext.autodoc', 35 | 'sphinx.ext.todo', 36 | 'sphinx.ext.viewcode', 37 | ] 38 | 39 | # Add any paths that contain templates here, relative to this directory. 40 | templates_path = ['_templates'] 41 | 42 | # The suffix(es) of source filenames. 43 | # You can specify multiple suffix as a list of string: 44 | # 45 | # source_suffix = ['.rst', '.md'] 46 | source_suffix = '.rst' 47 | 48 | # The encoding of source files. 49 | # 50 | # source_encoding = 'utf-8-sig' 51 | 52 | # The master toctree document. 53 | master_doc = 'index' 54 | 55 | # General information about the project. 56 | project = 'gasp' 57 | copyright = '2017, Benjamin Revard, Richard G. Hennig' 58 | author = 'Benjamin Revard, Richard G. 
Hennig' 59 | 60 | # The version info for the project you're documenting, acts as replacement for 61 | # |version| and |release|, also used in various other places throughout the 62 | # built documents. 63 | # 64 | # The short X.Y version. 65 | version = '' 66 | # The full version, including alpha/beta/rc tags. 67 | release = '' 68 | 69 | # The language for content autogenerated by Sphinx. Refer to documentation 70 | # for a list of supported languages. 71 | # 72 | # This is also used if you do content translation via gettext catalogs. 73 | # Usually you set "language" from the command line for these cases. 74 | language = 'en' 75 | 76 | # There are two options for replacing |today|: either, you set today to some 77 | # non-false value, then it is used: 78 | # 79 | # today = '' 80 | # 81 | # Else, today_fmt is used as the format for a strftime call. 82 | # 83 | # today_fmt = '%B %d, %Y' 84 | 85 | # List of patterns, relative to source directory, that match files and 86 | # directories to ignore when looking for source files. 87 | # This patterns also effect to html_static_path and html_extra_path 88 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 89 | 90 | # The reST default role (used for this markup: `text`) to use for all 91 | # documents. 92 | # 93 | # default_role = None 94 | 95 | # If true, '()' will be appended to :func: etc. cross-reference text. 96 | # 97 | # add_function_parentheses = True 98 | 99 | # If true, the current module name will be prepended to all description 100 | # unit titles (such as .. function::). 101 | # 102 | # add_module_names = True 103 | 104 | # If true, sectionauthor and moduleauthor directives will be shown in the 105 | # output. They are ignored by default. 106 | # 107 | # show_authors = False 108 | 109 | # The name of the Pygments (syntax highlighting) style to use. 110 | pygments_style = 'sphinx' 111 | 112 | # A list of ignored prefixes for module index sorting. 113 | # modindex_common_prefix = [] 114 | 115 | # If true, keep warnings as "system message" paragraphs in the built documents. 116 | # keep_warnings = False 117 | 118 | # If true, `todo` and `todoList` produce output, else they produce nothing. 119 | todo_include_todos = True 120 | 121 | 122 | # -- Options for HTML output ---------------------------------------------- 123 | 124 | # The theme to use for HTML and HTML Help pages. See the documentation for 125 | # a list of builtin themes. 126 | # 127 | html_theme = 'alabaster' 128 | 129 | # Theme options are theme-specific and customize the look and feel of a theme 130 | # further. For a list of options available for each theme, see the 131 | # documentation. 132 | # 133 | # html_theme_options = {} 134 | 135 | # Add any paths that contain custom themes here, relative to this directory. 136 | # html_theme_path = [] 137 | 138 | # The name for this set of Sphinx documents. 139 | # " v documentation" by default. 140 | # 141 | # html_title = 'gasp v' 142 | 143 | # A shorter title for the navigation bar. Default is the same as html_title. 144 | # 145 | # html_short_title = None 146 | 147 | # The name of an image file (relative to this directory) to place at the top 148 | # of the sidebar. 149 | # 150 | # html_logo = None 151 | 152 | # The name of an image file (relative to this directory) to use as a favicon of 153 | # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 154 | # pixels large. 
155 | # 156 | # html_favicon = None 157 | 158 | # Add any paths that contain custom static files (such as style sheets) here, 159 | # relative to this directory. They are copied after the builtin static files, 160 | # so a file named "default.css" will overwrite the builtin "default.css". 161 | html_static_path = ['_static'] 162 | 163 | # Add any extra paths that contain custom files (such as robots.txt or 164 | # .htaccess) here, relative to this directory. These files are copied 165 | # directly to the root of the documentation. 166 | # 167 | # html_extra_path = [] 168 | 169 | # If not None, a 'Last updated on:' timestamp is inserted at every page 170 | # bottom, using the given strftime format. 171 | # The empty string is equivalent to '%b %d, %Y'. 172 | # 173 | # html_last_updated_fmt = None 174 | 175 | # If true, SmartyPants will be used to convert quotes and dashes to 176 | # typographically correct entities. 177 | # 178 | # html_use_smartypants = True 179 | 180 | # Custom sidebar templates, maps document names to template names. 181 | # 182 | # html_sidebars = {} 183 | 184 | # Additional templates that should be rendered to pages, maps page names to 185 | # template names. 186 | # 187 | # html_additional_pages = {} 188 | 189 | # If false, no module index is generated. 190 | # 191 | # html_domain_indices = True 192 | 193 | # If false, no index is generated. 194 | # 195 | # html_use_index = True 196 | 197 | # If true, the index is split into individual pages for each letter. 198 | # 199 | # html_split_index = False 200 | 201 | # If true, links to the reST sources are added to the pages. 202 | # 203 | # html_show_sourcelink = True 204 | 205 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 206 | # 207 | # html_show_sphinx = True 208 | 209 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 210 | # 211 | # html_show_copyright = True 212 | 213 | # If true, an OpenSearch description file will be output, and all pages will 214 | # contain a tag referring to it. The value of this option must be the 215 | # base URL from which the finished HTML is served. 216 | # 217 | # html_use_opensearch = '' 218 | 219 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 220 | # html_file_suffix = None 221 | 222 | # Language to be used for generating the HTML full-text search index. 223 | # Sphinx supports the following languages: 224 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'h', 'it', 'ja' 225 | # 'nl', 'no', 'pt', 'ro', 'r', 'sv', 'tr', 'zh' 226 | # 227 | # html_search_language = 'en' 228 | 229 | # A dictionary with options for the search language support, empty by default. 230 | # 'ja' uses this config value. 231 | # 'zh' user can custom change `jieba` dictionary path. 232 | # 233 | # html_search_options = {'type': 'default'} 234 | 235 | # The name of a javascript file (relative to the configuration directory) that 236 | # implements a search results scorer. If empty, the default will be used. 237 | # 238 | # html_search_scorer = 'scorer.js' 239 | 240 | # Output file base name for HTML help builder. 241 | htmlhelp_basename = 'gaspdoc' 242 | 243 | # -- Options for LaTeX output --------------------------------------------- 244 | 245 | latex_elements = { 246 | # The paper size ('letterpaper' or 'a4paper'). 247 | # 248 | # 'papersize': 'letterpaper', 249 | 250 | # The font size ('10pt', '11pt' or '12pt'). 251 | # 252 | # 'pointsize': '10pt', 253 | 254 | # Additional stuff for the LaTeX preamble. 
255 | # 256 | # 'preamble': '', 257 | 258 | # Latex figure (float) alignment 259 | # 260 | # 'figure_align': 'htbp', 261 | } 262 | 263 | # Grouping the document tree into LaTeX files. List of tuples 264 | # (source start file, target name, title, 265 | # author, documentclass [howto, manual, or own class]). 266 | latex_documents = [ 267 | (master_doc, 'gasp.tex', 'gasp Documentation', 268 | 'Author', 'manual'), 269 | ] 270 | 271 | # The name of an image file (relative to this directory) to place at the top of 272 | # the title page. 273 | # 274 | # latex_logo = None 275 | 276 | # For "manual" documents, if this is true, then toplevel headings are parts, 277 | # not chapters. 278 | # 279 | # latex_use_parts = False 280 | 281 | # If true, show page references after internal links. 282 | # 283 | # latex_show_pagerefs = False 284 | 285 | # If true, show URL addresses after external links. 286 | # 287 | # latex_show_urls = False 288 | 289 | # Documents to append as an appendix to all manuals. 290 | # 291 | # latex_appendices = [] 292 | 293 | # It false, will not define \strong, \code, itleref, \crossref ... but only 294 | # \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added 295 | # packages. 296 | # 297 | # latex_keep_old_macro_names = True 298 | 299 | # If false, no module index is generated. 300 | # 301 | # latex_domain_indices = True 302 | 303 | 304 | # -- Options for manual page output --------------------------------------- 305 | 306 | # One entry per manual page. List of tuples 307 | # (source start file, name, description, authors, manual section). 308 | man_pages = [ 309 | (master_doc, 'gasp', 'gasp Documentation', 310 | [author], 1) 311 | ] 312 | 313 | # If true, show URL addresses after external links. 314 | # 315 | # man_show_urls = False 316 | 317 | 318 | # -- Options for Texinfo output ------------------------------------------- 319 | 320 | # Grouping the document tree into Texinfo files. List of tuples 321 | # (source start file, target name, title, author, 322 | # dir menu entry, description, category) 323 | texinfo_documents = [ 324 | (master_doc, 'gasp', 'gasp Documentation', 325 | author, 'gasp', 'One line description of project.', 326 | 'Miscellaneous'), 327 | ] 328 | 329 | # Documents to append as an appendix to all manuals. 330 | # 331 | # texinfo_appendices = [] 332 | 333 | # If false, no module index is generated. 334 | # 335 | # texinfo_domain_indices = True 336 | 337 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 338 | # 339 | # texinfo_show_urls = 'footnote' 340 | 341 | # If true, do not generate a @detailmenu in the "Top" node's menu. 342 | # 343 | # texinfo_no_detailmenu = False 344 | 345 | 346 | # -- Options for Epub output ---------------------------------------------- 347 | 348 | # Bibliographic Dublin Core info. 349 | epub_title = project 350 | epub_author = author 351 | epub_publisher = author 352 | epub_copyright = copyright 353 | 354 | # The basename for the epub file. It defaults to the project name. 355 | # epub_basename = project 356 | 357 | # The HTML theme for the epub output. Since the default themes are not 358 | # optimized for small screen space, using the same theme for HTML and epub 359 | # output is usually not wise. This defaults to 'epub', a theme designed to save 360 | # visual space. 361 | # 362 | # epub_theme = 'epub' 363 | 364 | # The language of the text. It defaults to the language option 365 | # or 'en' if the language is not set. 
366 | # 367 | # epub_language = '' 368 | 369 | # The scheme of the identifier. Typical schemes are ISBN or URL. 370 | # epub_scheme = '' 371 | 372 | # The unique identifier of the text. This can be a ISBN number 373 | # or the project homepage. 374 | # 375 | # epub_identifier = '' 376 | 377 | # A unique identification for the text. 378 | # 379 | # epub_uid = '' 380 | 381 | # A tuple containing the cover image and cover page html template filenames. 382 | # 383 | # epub_cover = () 384 | 385 | # A sequence of (type, uri, title) tuples for the guide element of content.opf. 386 | # 387 | # epub_guide = () 388 | 389 | # HTML files that should be inserted before the pages created by sphinx. 390 | # The format is a list of tuples containing the path and title. 391 | # 392 | # epub_pre_files = [] 393 | 394 | # HTML files that should be inserted after the pages created by sphinx. 395 | # The format is a list of tuples containing the path and title. 396 | # 397 | # epub_post_files = [] 398 | 399 | # A list of files that should not be packed into the epub file. 400 | epub_exclude_files = ['search.html'] 401 | 402 | # The depth of the table of contents in toc.ncx. 403 | # 404 | # epub_tocdepth = 3 405 | 406 | # Allow duplicate toc entries. 407 | # 408 | # epub_tocdup = True 409 | 410 | # Choose between 'default' and 'includehidden'. 411 | # 412 | # epub_tocscope = 'default' 413 | 414 | # Fix unsupported image types using the Pillow. 415 | # 416 | # epub_fix_images = False 417 | 418 | # Scale large images. 419 | # 420 | # epub_max_image_width = 0 421 | 422 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 423 | # 424 | # epub_show_urls = 'inline' 425 | 426 | # If false, no index is generated. 427 | # 428 | # epub_use_index = True 429 | -------------------------------------------------------------------------------- /docs/gasp.rst: -------------------------------------------------------------------------------- 1 | gasp package 2 | ============ 3 | 4 | Submodules 5 | ---------- 6 | 7 | gasp.development module 8 | ----------------------- 9 | 10 | .. automodule:: gasp.development 11 | :members: 12 | :undoc-members: 13 | :show-inheritance: 14 | 15 | gasp.energy_calculators module 16 | ------------------------------ 17 | 18 | .. automodule:: gasp.energy_calculators 19 | :members: 20 | :undoc-members: 21 | :show-inheritance: 22 | 23 | gasp.general module 24 | ------------------- 25 | 26 | .. automodule:: gasp.general 27 | :members: 28 | :undoc-members: 29 | :show-inheritance: 30 | 31 | gasp.objects_maker module 32 | ------------------------- 33 | 34 | .. automodule:: gasp.objects_maker 35 | :members: 36 | :undoc-members: 37 | :show-inheritance: 38 | 39 | gasp.organism_creators module 40 | ----------------------------- 41 | 42 | .. automodule:: gasp.organism_creators 43 | :members: 44 | :undoc-members: 45 | :show-inheritance: 46 | 47 | gasp.parameters_printer module 48 | ------------------------------ 49 | 50 | .. automodule:: gasp.parameters_printer 51 | :members: 52 | :undoc-members: 53 | :show-inheritance: 54 | 55 | gasp.variations module 56 | ---------------------- 57 | 58 | .. automodule:: gasp.variations 59 | :members: 60 | :undoc-members: 61 | :show-inheritance: 62 | 63 | Module contents 64 | --------------- 65 | 66 | .. automodule:: gasp 67 | :members: 68 | :undoc-members: 69 | :show-inheritance: 70 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. 
gasp documentation master file, created by 2 | sphinx-quickstart on Wed Dec 7 20:02:23 2016. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | Welcome to gasp's documentation! 7 | ================================ 8 | 9 | Contents: 10 | 11 | .. toctree:: 12 | :maxdepth: 4 13 | 14 | gasp 15 | 16 | Indices and tables 17 | ================== 18 | 19 | * :ref:`genindex` 20 | * :ref:`modindex` 21 | * :ref:`search` 22 | 23 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^` where ^ is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. epub3 to make an epub3 31 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 32 | echo. text to make text files 33 | echo. man to make manual pages 34 | echo. texinfo to make Texinfo files 35 | echo. gettext to make PO message catalogs 36 | echo. changes to make an overview over all changed/added/deprecated items 37 | echo. xml to make Docutils-native XML files 38 | echo. pseudoxml to make pseudoxml-XML files for display purposes 39 | echo. linkcheck to check all external links for integrity 40 | echo. doctest to run all doctests embedded in the documentation if enabled 41 | echo. coverage to run coverage check of the documentation if enabled 42 | echo. dummy to check syntax errors of document sources 43 | goto end 44 | ) 45 | 46 | if "%1" == "clean" ( 47 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 48 | del /q /s %BUILDDIR%\* 49 | goto end 50 | ) 51 | 52 | 53 | REM Check if sphinx-build is available and fallback to Python version if any 54 | %SPHINXBUILD% 1>NUL 2>NUL 55 | if errorlevel 9009 goto sphinx_python 56 | goto sphinx_ok 57 | 58 | :sphinx_python 59 | 60 | set SPHINXBUILD=python -m sphinx.__init__ 61 | %SPHINXBUILD% 2> nul 62 | if errorlevel 9009 ( 63 | echo. 64 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 65 | echo.installed, then set the SPHINXBUILD environment variable to point 66 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 67 | echo.may add the Sphinx directory to PATH. 68 | echo. 69 | echo.If you don't have Sphinx installed, grab it from 70 | echo.http://sphinx-doc.org/ 71 | exit /b 1 72 | ) 73 | 74 | :sphinx_ok 75 | 76 | 77 | if "%1" == "html" ( 78 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 79 | if errorlevel 1 exit /b 1 80 | echo. 81 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 
82 | goto end 83 | ) 84 | 85 | if "%1" == "dirhtml" ( 86 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 87 | if errorlevel 1 exit /b 1 88 | echo. 89 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 90 | goto end 91 | ) 92 | 93 | if "%1" == "singlehtml" ( 94 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 95 | if errorlevel 1 exit /b 1 96 | echo. 97 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 98 | goto end 99 | ) 100 | 101 | if "%1" == "pickle" ( 102 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 103 | if errorlevel 1 exit /b 1 104 | echo. 105 | echo.Build finished; now you can process the pickle files. 106 | goto end 107 | ) 108 | 109 | if "%1" == "json" ( 110 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 111 | if errorlevel 1 exit /b 1 112 | echo. 113 | echo.Build finished; now you can process the JSON files. 114 | goto end 115 | ) 116 | 117 | if "%1" == "htmlhelp" ( 118 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 119 | if errorlevel 1 exit /b 1 120 | echo. 121 | echo.Build finished; now you can run HTML Help Workshop with the ^ 122 | .hhp project file in %BUILDDIR%/htmlhelp. 123 | goto end 124 | ) 125 | 126 | if "%1" == "qthelp" ( 127 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 128 | if errorlevel 1 exit /b 1 129 | echo. 130 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 131 | .qhcp project file in %BUILDDIR%/qthelp, like this: 132 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\gasp.qhcp 133 | echo.To view the help file: 134 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\gasp.ghc 135 | goto end 136 | ) 137 | 138 | if "%1" == "devhelp" ( 139 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 140 | if errorlevel 1 exit /b 1 141 | echo. 142 | echo.Build finished. 143 | goto end 144 | ) 145 | 146 | if "%1" == "epub" ( 147 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 148 | if errorlevel 1 exit /b 1 149 | echo. 150 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 151 | goto end 152 | ) 153 | 154 | if "%1" == "epub3" ( 155 | %SPHINXBUILD% -b epub3 %ALLSPHINXOPTS% %BUILDDIR%/epub3 156 | if errorlevel 1 exit /b 1 157 | echo. 158 | echo.Build finished. The epub3 file is in %BUILDDIR%/epub3. 159 | goto end 160 | ) 161 | 162 | if "%1" == "latex" ( 163 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 164 | if errorlevel 1 exit /b 1 165 | echo. 166 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 167 | goto end 168 | ) 169 | 170 | if "%1" == "latexpdf" ( 171 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 172 | cd %BUILDDIR%/latex 173 | make all-pdf 174 | cd %~dp0 175 | echo. 176 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 177 | goto end 178 | ) 179 | 180 | if "%1" == "latexpdfja" ( 181 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 182 | cd %BUILDDIR%/latex 183 | make all-pdf-ja 184 | cd %~dp0 185 | echo. 186 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 187 | goto end 188 | ) 189 | 190 | if "%1" == "text" ( 191 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 192 | if errorlevel 1 exit /b 1 193 | echo. 194 | echo.Build finished. The text files are in %BUILDDIR%/text. 195 | goto end 196 | ) 197 | 198 | if "%1" == "man" ( 199 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 200 | if errorlevel 1 exit /b 1 201 | echo. 202 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 
203 | goto end 204 | ) 205 | 206 | if "%1" == "texinfo" ( 207 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 208 | if errorlevel 1 exit /b 1 209 | echo. 210 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 211 | goto end 212 | ) 213 | 214 | if "%1" == "gettext" ( 215 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 216 | if errorlevel 1 exit /b 1 217 | echo. 218 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 219 | goto end 220 | ) 221 | 222 | if "%1" == "changes" ( 223 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 224 | if errorlevel 1 exit /b 1 225 | echo. 226 | echo.The overview file is in %BUILDDIR%/changes. 227 | goto end 228 | ) 229 | 230 | if "%1" == "linkcheck" ( 231 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 232 | if errorlevel 1 exit /b 1 233 | echo. 234 | echo.Link check complete; look for any errors in the above output ^ 235 | or in %BUILDDIR%/linkcheck/output.txt. 236 | goto end 237 | ) 238 | 239 | if "%1" == "doctest" ( 240 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 241 | if errorlevel 1 exit /b 1 242 | echo. 243 | echo.Testing of doctests in the sources finished, look at the ^ 244 | results in %BUILDDIR%/doctest/output.txt. 245 | goto end 246 | ) 247 | 248 | if "%1" == "coverage" ( 249 | %SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage 250 | if errorlevel 1 exit /b 1 251 | echo. 252 | echo.Testing of coverage in the sources finished, look at the ^ 253 | results in %BUILDDIR%/coverage/python.txt. 254 | goto end 255 | ) 256 | 257 | if "%1" == "xml" ( 258 | %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml 259 | if errorlevel 1 exit /b 1 260 | echo. 261 | echo.Build finished. The XML files are in %BUILDDIR%/xml. 262 | goto end 263 | ) 264 | 265 | if "%1" == "pseudoxml" ( 266 | %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml 267 | if errorlevel 1 exit /b 1 268 | echo. 269 | echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. 270 | goto end 271 | ) 272 | 273 | if "%1" == "dummy" ( 274 | %SPHINXBUILD% -b dummy %ALLSPHINXOPTS% %BUILDDIR%/dummy 275 | if errorlevel 1 exit /b 1 276 | echo. 277 | echo.Build finished. Dummy builder generates no files. 278 | goto end 279 | ) 280 | 281 | :end 282 | -------------------------------------------------------------------------------- /examples/Al-ZrAl-CuAl_pd_lammps/README.rst: -------------------------------------------------------------------------------- 1 | Files to run a genetic algorithm search for the Al-ZrAl-CuAl partial phase diagram, using the EAM potentials of Cheng *et al.* and LAMMPS for the energy calculations. 2 | 3 | 4 | Description of files 5 | ==================== 6 | 7 | ga_input.yaml 8 | 9 | Main input file for GASP. See the `usage file`_ for descriptions of each keyword and option. 10 | 11 | .. _usage file: ../../docs/usage.md 12 | 13 | 14 | in.min 15 | 16 | LAMMPS input script. 17 | 18 | 19 | ZrCuAl.eam.alloy 20 | 21 | LAMMPS file containing the EAM potentials. 22 | 23 | 24 | ref_states/ 25 | 26 | Directory containing the files for the structures at the endpoints of the composition space (Al, ZrAl, CuAl) in POSCAR format. 27 | 28 | 29 | calllammps 30 | 31 | Script called by GASP to run a LAMMPS calculation. 32 | 33 | 34 | Running the search 35 | ================== 36 | 37 | Instructions for running this search on your system are given below. 38 | 39 | Note that we assume GASP-python and LAMMPS are already installed on your system. 
If this is not the case, see the `main README file`_ and the `LAMMPS documentation`_ for instructions on how to get GASP-python and LAMMPS, respectively. 40 | 41 | .. _main README file: ../../README.rst 42 | .. _LAMMPS documentation: http://lammps.sandia.gov/download.html 43 | 44 | 1. Copy all files to the location on your computer where you would like to run the search. 45 | 46 | 2. Modify the location of the LAMMPS input script given in ga_input.yaml (on line 8). Replace it with the location of the in.min file on your computer. 47 | 48 | 3. Modify the location of the ref_states directory given in ga_input.yaml (on line 12). Replace it with the location of the ref_states directory on your computer. 49 | 50 | 4. Modify the location of the LAMMPS potential file given in in.min (on line 8). Replace it with the location of the ZrCuAl.eam.alloy file on your computer. Note that 'Zr Al Cu' must appear at the end of the line after the location. 51 | 52 | 5. Modify the location of the LAMMPS binary given in calllammps (on line 15). Replace it with the location of the LAMMPS binary on your computer. 53 | 54 | 6. Move calllammps to a location in your system's PATH and make it executable. 55 | 56 | 7. To start the search and save the output to a file called ga_output, move into the folder containing ga_input.yaml and type:: 57 | 58 | run.py ga_input.yaml 2>&1 | tee ga_output 59 | 60 | Shown below is the partial phase diagram GASP found when we ran this search. 61 | 62 | .. image:: partial_phase_diagram.png 63 | :height: 283 px 64 | :width: 387 px 65 | :scale: 100 % 66 | :align: center 67 | -------------------------------------------------------------------------------- /examples/Al-ZrAl-CuAl_pd_lammps/calllammps: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # script to run lammps 3 | # usage: calllammps /path/to/lammps/input/file 4 | 5 | # get the name of the input file, without the path 6 | input_file=$(basename $1) 7 | 8 | # get the path to the job directory containing the input file 9 | dir_path=$(dirname $1) 10 | 11 | # move into the job directory 12 | cd $dir_path 13 | 14 | # run lammps on the input file 15 | /location/of/lammps/binary < $input_file 16 | -------------------------------------------------------------------------------- /examples/Al-ZrAl-CuAl_pd_lammps/ga_input.yaml: -------------------------------------------------------------------------------- 1 | CompositionSpace: 2 | - ZrAl 3 | - CuAl 4 | - Al 5 | 6 | EnergyCode: 7 | lammps: 8 | input_script: /location/of/in.min 9 | 10 | InitialPopulation: 11 | from_files: 12 | path_to_folder: /location/of/ref_states/ 13 | random: 14 | number: 102 15 | 16 | StoppingCriteria: 17 | num_energy_calcs: 3000 18 | -------------------------------------------------------------------------------- /examples/Al-ZrAl-CuAl_pd_lammps/in.min: -------------------------------------------------------------------------------- 1 | units metal 2 | dimension 3 3 | atom_style charge 4 | boundary p p p 5 | read_data in.data 6 | 7 | pair_style eam/alloy 8 | pair_coeff * * /location/of/ZrCuAl.eam.alloy Zr Al Cu 9 | 10 | neigh_modify one 5000 11 | minimize 0.0 1.0e-8 1 1 12 | fix 1 all box/relax tri 1e4 vmax 0.001 13 | minimize 0.0 1.0e-8 1000000 10000000 14 | dump myDump all atom 100000000000000 dump.atom 15 | dump_modify myDump sort 1 scale no 16 | fix 1 all box/relax tri 0 vmax 0.001 17 | minimize 0.0 1.0e-8 1000000 10000000 18 | -------------------------------------------------------------------------------- 
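A note on step 6 of the README above ("Move calllammps to a location in your system's PATH and make it executable"): on Mac or Linux this can be done with standard shell commands. The sketch below assumes ~/bin is a directory on your PATH; adjust the destination to match your system::

    # copy the wrapper to a directory on your PATH and make it executable
    cp calllammps ~/bin/calllammps
    chmod +x ~/bin/calllammps

    # confirm the shell can now find it
    which calllammps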
/examples/Al-ZrAl-CuAl_pd_lammps/partial_phase_diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/henniggroup/GASP-python/0c8d993c82e0e1c69a05b3c34bbb2fcbbdbb7f07/examples/Al-ZrAl-CuAl_pd_lammps/partial_phase_diagram.png -------------------------------------------------------------------------------- /examples/Al-ZrAl-CuAl_pd_lammps/ref_states/POSCAR.Al: -------------------------------------------------------------------------------- 1 | Al2 2 | 1.0 3 | 3.224200 0.000000 0.000000 4 | 0.000000 3.224200 0.000000 5 | 0.000000 0.000000 3.224200 6 | Al 7 | 2 8 | direct 9 | 0.940059 0.308889 0.069608 Al 10 | 0.456407 0.308891 0.553248 Al 11 | -------------------------------------------------------------------------------- /examples/Al-ZrAl-CuAl_pd_lammps/ref_states/POSCAR.AlCu: -------------------------------------------------------------------------------- 1 | Al2 Cu2 2 | 1.0 3 | 2.982574 -0.000000 0.000000 4 | -0.000000 4.217522 -0.000000 5 | 0.000002 0.000000 4.217523 6 | Al Cu 7 | 2 2 8 | direct 9 | 0.219761 0.238230 0.260646 Al 10 | 0.219762 0.738230 0.760645 Al 11 | 0.719763 0.738230 0.260645 Cu 12 | 0.719761 0.238230 0.760645 Cu 13 | -------------------------------------------------------------------------------- /examples/Al-ZrAl-CuAl_pd_lammps/ref_states/POSCAR.AlZr: -------------------------------------------------------------------------------- 1 | Zr2 Al2 2 | 1.0 3 | 3.197220 0.000000 0.000000 4 | 0.000000 4.917443 -0.000000 5 | -0.000000 -0.533827 4.888382 6 | Zr Al 7 | 2 2 8 | direct 9 | 0.884622 0.068041 0.280740 Zr 10 | 0.884622 0.616082 0.732697 Zr 11 | 0.384622 0.120583 0.785240 Al 12 | 0.384622 0.563542 0.228197 Al 13 | -------------------------------------------------------------------------------- /examples/Au_clusters_lammps/README.rst: -------------------------------------------------------------------------------- 1 | Files to run a genetic algorithm structure search for Au clusters containing 13 atoms, using the EAM potential of Foiles *et al.* that ships with LAMMPS. 2 | 3 | 4 | Description of files 5 | ==================== 6 | 7 | ga_input.yaml 8 | 9 | Main input file for GASP. See the `usage file`_ for descriptions of each keyword and option. 10 | 11 | .. _usage file: ../../docs/usage.md 12 | 13 | 14 | in.min 15 | 16 | LAMMPS input script. 17 | 18 | 19 | Au_u3.eam 20 | 21 | LAMMPS file containing the EAM potential. 22 | 23 | 24 | calllammps 25 | 26 | Script called by GASP to run a LAMMPS calculation. 27 | 28 | 29 | Running the search 30 | ================== 31 | 32 | Instructions for running this search on your system are given below. 33 | 34 | Note that we assume GASP-python and LAMMPS are already installed on your system. If this is not the case, see the `main README file`_ and the `LAMMPS documentation`_ for instructions on how to get GASP-python and LAMMPS, respectively. 35 | 36 | .. _main README file: ../../README.rst 37 | .. _LAMMPS documentation: http://lammps.sandia.gov/download.html 38 | 39 | 1. Copy all files to the location on your computer where you would like to run the search. 40 | 41 | 2. Modify the location of the LAMMPS input script given in ga_input.yaml (on line 6). Replace it with the location of the in.min file on your computer. 42 | 43 | 3. Modify the location of the LAMMPS potential file given in in.min (on line 8). Replace it with the location of the Au_u3.eam file on your computer. Note that '## Au' must appear at the end of the line after the location. 44 | 45 | 4. 
Modify the location of the LAMMPS binary given in calllammps (on line 15). Replace it with the location of the LAMMPS binary on your computer. 46 | 47 | 5. Move calllammps to a location in your system's PATH and make it executable. 48 | 49 | 6. To start the search and save the output to a file called ga_output, move into the folder containing ga_input.yaml and type:: 50 | 51 | run.py ga_input.yaml 2>&1 | tee ga_output 52 | 53 | When we ran this search, GASP found a cluster forming a perfect icosahedron (shown below) to have the lowest energy. 54 | 55 | .. image:: icosahedron.png 56 | :height: 429 px 57 | :width: 375 px 58 | :scale: 50 % 59 | :align: center 60 | -------------------------------------------------------------------------------- /examples/Au_clusters_lammps/calllammps: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # script to run lammps 3 | # usage: calllammps /path/to/lammps/input/file 4 | 5 | # get the name of the input file, without the path 6 | input_file=$(basename $1) 7 | 8 | # get the path to the job directory containing the input file 9 | dir_path=$(dirname $1) 10 | 11 | # move into the job directory 12 | cd $dir_path 13 | 14 | # run lammps on the input file 15 | /location/of/lammps/binary < $input_file 16 | -------------------------------------------------------------------------------- /examples/Au_clusters_lammps/ga_input.yaml: -------------------------------------------------------------------------------- 1 | CompositionSpace: 2 | - Au 3 | 4 | EnergyCode: 5 | lammps: 6 | input_script: /location/of/in.min 7 | 8 | Constraints: 9 | min_num_atoms: 13 10 | max_num_atoms: 13 11 | max_lattice_length: 40 12 | 13 | Geometry: 14 | shape: cluster 15 | max_size: 50 16 | 17 | RedundancyGuard: 18 | epa_diff: 0.00001 19 | 20 | StoppingCriteria: 21 | num_energy_calcs: 1000 22 | -------------------------------------------------------------------------------- /examples/Au_clusters_lammps/icosahedron.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/henniggroup/GASP-python/0c8d993c82e0e1c69a05b3c34bbb2fcbbdbb7f07/examples/Au_clusters_lammps/icosahedron.png -------------------------------------------------------------------------------- /examples/Au_clusters_lammps/in.min: -------------------------------------------------------------------------------- 1 | units real 2 | dimension 3 3 | atom_style charge 4 | boundary f f f 5 | read_data in.data 6 | 7 | pair_style eam 8 | pair_coeff * * /location/of/Au_u3.eam ## Au 9 | 10 | neigh_modify one 5000 11 | minimize 0.0 1.0e-8 1 1 12 | minimize 0.0 1.0e-8 1000000 10000000 13 | dump myDump all atom 100000000000000 dump.atom 14 | dump_modify myDump sort 1 scale no 15 | minimize 0.0 1.0e-8 1000000 10000000 16 | -------------------------------------------------------------------------------- /examples/Au_wires_lammps/README.rst: -------------------------------------------------------------------------------- 1 | Files to run a genetic algorithm structure search for Au wires with a maximum diameter of 5 Angstroms, using the EAM potential of Foiles *et al.* that ships with LAMMPS. 2 | 3 | 4 | Description of files 5 | ==================== 6 | 7 | ga_input.yaml 8 | 9 | Main input file for GASP. See the `usage file`_ for descriptions of each keyword and option. 10 | 11 | .. _usage file: ../../docs/usage.md 12 | 13 | 14 | in.min 15 | 16 | LAMMPS input script. 
17 | 18 | 19 | Au_u3.eam 20 | 21 | LAMMPS file containing the EAM potential. 22 | 23 | 24 | calllammps 25 | 26 | Script called by GASP to run a LAMMPS calculation. 27 | 28 | 29 | Running the search 30 | ================== 31 | 32 | Instructions for running this search on your system are given below. 33 | 34 | Note that we assume GASP-python and LAMMPS are already installed on your system. If this is not the case, see the `main README file`_ and the `LAMMPS documentation`_ for instructions on how to get GASP-python and LAMMPS, respectively. 35 | 36 | .. _main README file: ../../README.rst 37 | .. _LAMMPS documentation: http://lammps.sandia.gov/download.html 38 | 39 | 1. Copy all files to the location on your computer where you would like to run the search. 40 | 41 | 2. Modify the location of the LAMMPS input script given in ga_input.yaml (on line 6). Replace it with the location of the in.min file on your computer. 42 | 43 | 3. Modify the location of the LAMMPS potential file given in in.min (on line 8). Replace it with the location of the Au_u3.eam file on your computer. Note that '## Au' must appear at the end of the line after the location. 44 | 45 | 4. Modify the location of the LAMMPS binary given in calllammps (on line 15). Replace it with the location of the LAMMPS binary on your computer. 46 | 47 | 5. Move calllammps to a location in your system's PATH and make it executable. 48 | 49 | 6. To start the search and save the output to a file called ga_output, move into the folder containing ga_input.yaml and type:: 50 | 51 | run.py ga_input.yaml 2>&1 | tee ga_output 52 | 53 | When we ran this search, GASP found a wire structure composed of 5-atom rings surrounding a central linear chain (shown below) to have the lowest energy. 54 | 55 | .. image:: wire.png 56 | :height: 164 px 57 | :width: 617 px 58 | :scale: 100 % 59 | :align: center 60 | -------------------------------------------------------------------------------- /examples/Au_wires_lammps/calllammps: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # script to run lammps 3 | # usage: calllammps /path/to/lammps/input/file 4 | 5 | # get the name of the input file, without the path 6 | input_file=$(basename $1) 7 | 8 | # get the path to the job directory containing the input file 9 | dir_path=$(dirname $1) 10 | 11 | # move into the job directory 12 | cd $dir_path 13 | 14 | # run lammps on the input file 15 | /location/of/lammps/binary < $input_file 16 | -------------------------------------------------------------------------------- /examples/Au_wires_lammps/ga_input.yaml: -------------------------------------------------------------------------------- 1 | CompositionSpace: 2 | - Au 3 | 4 | EnergyCode: 5 | lammps: 6 | input_script: /location/of/in.min 7 | 8 | InitialPopulation: 9 | random: 10 | number: 20 11 | 12 | Pool: 13 | size: 15 14 | num_promoted: 2 15 | 16 | Constraints: 17 | max_num_atoms: 30 18 | max_lattice_length: 50 19 | 20 | Geometry: 21 | shape: wire 22 | max_size: 5 23 | 24 | RedundancyGuard: 25 | epa_diff: 0.00001 26 | 27 | StoppingCriteria: 28 | num_energy_calcs: 1500 29 | -------------------------------------------------------------------------------- /examples/Au_wires_lammps/in.min: -------------------------------------------------------------------------------- 1 | units real 2 | dimension 3 3 | atom_style charge 4 | boundary f f p 5 | read_data in.data 6 | 7 | pair_style eam 8 | pair_coeff * * /location/of/Au_u3.eam ## Au 9 | 10 | neigh_modify one 5000 11 | 
minimize 0.0 1.0e-8 1 1 12 | fix 1 all box/relax z 0 vmax 0.001 13 | minimize 0.0 1.0e-8 1000000 10000000 14 | dump myDump all atom 100000000000000 dump.atom 15 | dump_modify myDump sort 1 scale no 16 | fix 1 all box/relax z 0 vmax 0.001 17 | minimize 0.0 1.0e-8 1000000 10000000 18 | -------------------------------------------------------------------------------- /examples/Au_wires_lammps/wire.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/henniggroup/GASP-python/0c8d993c82e0e1c69a05b3c34bbb2fcbbdbb7f07/examples/Au_wires_lammps/wire.png -------------------------------------------------------------------------------- /examples/SiO2_2D_gulp/README.rst: -------------------------------------------------------------------------------- 1 | Files to run a genetic algorithm structure search for 2D SiO2 with a maximum layer thickness of 5 Angstroms, using the potential of Vashishta *et al.* that ships with GULP. 2 | 3 | 4 | Description of files 5 | ==================== 6 | 7 | ga_input.yaml 8 | 9 | Main input file for GASP. See the `usage file`_ for descriptions of each keyword and option. 10 | 11 | .. _usage file: ../../docs/usage.md 12 | 13 | 14 | header_file 15 | 16 | Header for GULP input files created by GASP. 17 | 18 | 19 | potential_file 20 | 21 | GULP file containing the empirical potential. 22 | 23 | 24 | callgulp 25 | 26 | Script called by GASP to run a GULP calculation. 27 | 28 | 29 | Running the search 30 | ================== 31 | 32 | Instructions for running this search on your system are given below. 33 | 34 | Note that we assume GASP-python and GULP are already installed on your system. If this is not the case, see the `main README file`_ and the `GULP documentation`_ for instructions on how to get GASP-python and GULP, respectively. 35 | 36 | .. _main README file: ../../README.rst 37 | .. _GULP documentation: https://nanochemistry.curtin.edu.au/gulp/request.cfm?rel=download 38 | 39 | 1. Copy all files to the location on your computer where you would like to run the search. 40 | 41 | 2. Modify the location of the GULP header file given in ga_input.yaml (on line 6). Replace it with the location of header_file on your computer. 42 | 43 | 3. Modify the location of the GULP potential file given in ga_input.yaml (on line 7). Replace it with the location of potential_file on your computer. 44 | 45 | 4. Modify the location of the GULP binary given in callgulp (on line 15). Replace it with the location of the GULP binary on your computer. 46 | 47 | 5. Move callgulp to a location in your system's PATH and make it executable. 48 | 49 | 6. 
To start the search and save the output to a file called ga_output, move into the folder containing ga_input.yaml and type:: 50 | 51 | run.py ga_input.yaml 2>&1 | tee ga_output 52 | -------------------------------------------------------------------------------- /examples/SiO2_2D_gulp/callgulp: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # script to run gulp 3 | # usage: callgulp /path/to/gulp/input/file 4 | 5 | # get the name of the input file, without the path 6 | input_file=$(basename $1) 7 | 8 | # get the path to the job directory containing the input file 9 | dir_path=$(dirname $1) 10 | 11 | # move into the job directory 12 | cd $dir_path 13 | 14 | # run gulp on the input file 15 | /location/of/gulp/binary < $input_file 16 | -------------------------------------------------------------------------------- /examples/SiO2_2D_gulp/ga_input.yaml: -------------------------------------------------------------------------------- 1 | CompositionSpace: 2 | - SiO2 3 | 4 | EnergyCode: 5 | gulp: 6 | header_file: /location/of/header_file 7 | potential_file: /location/of/potential_file 8 | 9 | Geometry: 10 | shape: sheet 11 | max_size: 5 12 | -------------------------------------------------------------------------------- /examples/SiO2_2D_gulp/header_file: -------------------------------------------------------------------------------- 1 | opti conj 2 | switch_minimiser bfgs gnorm 0.02 3 | -------------------------------------------------------------------------------- /examples/SiO2_2D_gulp/potential_file: -------------------------------------------------------------------------------- 1 | # 2 | # Potentials for silica from the following paper: 3 | # 4 | # Vashishta et al, Phys. Rev. B, 41, 12197 (1990) 5 | # 6 | # Note that the polarisability term is coded as a two-body potential 7 | # 8 | species 9 | Si core 1.60 10 | O core -0.80 11 | lennard 11 6 12 | Si core Si core 0.8207862288 0.0 0.0 16.0 13 | lennard 9 6 14 | Si core O core 163.97004890 0.0 0.0 16.0 15 | lennard 7 6 16 | O core O core 744.35231121 0.0 0.0 16.0 17 | general 4 6 18 | Si core O core -44.2360578 4.43 0.0 0.0 16.0 19 | O core O core -22.1180289 4.43 0.0 0.0 16.0 20 | sw3 21 | Si core O core O core 5.03991544 109.47 1.0 1.0 2.60 2.60 5.20 22 | O core Si core Si core 20.15966176 141.00 1.0 1.0 2.60 2.60 5.20 23 | -------------------------------------------------------------------------------- /gasp/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/henniggroup/GASP-python/0c8d993c82e0e1c69a05b3c34bbb2fcbbdbb7f07/gasp/__init__.py -------------------------------------------------------------------------------- /gasp/energy_calculators.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | 8 | """ 9 | Energy Calculators module: 10 | 11 | This module contains the classes used to compute the energies of structures 12 | with external energy codes. All energy calculator classes must implement a 13 | do_energy_calculation() method. 14 | 15 | 1. VaspEnergyCalculator: for using VASP to compute energies 16 | 17 | 2. LammpsEnergyCalculator: for using LAMMSP to compute energies 18 | 19 | 3. 
GulpEnergyCalculator: for using GULP to compute energies 20 | 21 | """ 22 | 23 | from gasp.general import Cell 24 | 25 | from pymatgen.core.lattice import Lattice 26 | from pymatgen.core.periodic_table import Element 27 | from pymatgen.io.lammps.data import LammpsData, LammpsBox, ForceField, Topology 28 | import pymatgen.command_line.gulp_caller as gulp_caller 29 | 30 | import shutil 31 | import subprocess 32 | import os 33 | import collections 34 | 35 | 36 | class VaspEnergyCalculator(object): 37 | """ 38 | Calculates the energy of an organism using VASP. 39 | """ 40 | 41 | def __init__(self, incar_file, kpoints_file, potcar_files, geometry): 42 | ''' 43 | Makes a VaspEnergyCalculator. 44 | 45 | Args: 46 | incar_file: the path to the INCAR file 47 | 48 | kpoints_file: the path to the KPOINTS file 49 | 50 | potcar_files: a dictionary containing the paths to the POTCAR 51 | files, with the element symbols as keys 52 | 53 | geometry: the Geometry of the search 54 | ''' 55 | 56 | self.name = 'vasp' 57 | 58 | # paths to the INCAR, KPOINTS and POTCARs files 59 | self.incar_file = incar_file 60 | self.kpoints_file = kpoints_file 61 | self.potcar_files = potcar_files 62 | 63 | def do_energy_calculation(self, organism, dictionary, key, 64 | composition_space): 65 | """ 66 | Calculates the energy of an organism using VASP, and stores the relaxed 67 | organism in the provided dictionary at the provided key. If the 68 | calculation fails, stores None in the dictionary instead. 69 | 70 | Args: 71 | organism: the Organism whose energy we want to calculate 72 | 73 | dictionary: a dictionary in which to store the relaxed Organism 74 | 75 | key: the key specifying where to store the relaxed Organism in the 76 | dictionary 77 | 78 | composition_space: the CompositionSpace of the search 79 | 80 | Precondition: the garun directory and temp subdirectory exist, and we 81 | are currently located inside the garun directory 82 | 83 | TODO: maybe use the custodian package for error handling 84 | """ 85 | 86 | # make the job directory 87 | job_dir_path = str(os.getcwd()) + '/temp/' + str(organism.id) 88 | os.mkdir(job_dir_path) 89 | 90 | # copy the INCAR and KPOINTS files to the job directory 91 | shutil.copy(self.incar_file, job_dir_path) 92 | shutil.copy(self.kpoints_file, job_dir_path) 93 | 94 | # sort the organism's cell and write to POSCAR file 95 | organism.cell.sort() 96 | organism.cell.to(fmt='poscar', filename=job_dir_path + '/POSCAR') 97 | 98 | # get a list of the element symbols in the sorted order 99 | symbols = [] 100 | for site in organism.cell.sites: 101 | if site.specie.symbol not in symbols: 102 | symbols.append(site.specie.symbol) 103 | 104 | # write the POTCAR file by concatenating the appropriate elemental 105 | # POTCAR files 106 | total_potcar_path = job_dir_path + '/POTCAR' 107 | with open(total_potcar_path, 'w') as total_potcar_file: 108 | for symbol in symbols: 109 | with open(self.potcar_files[symbol], 'r') as potcar_file: 110 | for line in potcar_file: 111 | total_potcar_file.write(line) 112 | 113 | # run 'callvasp' script as a subprocess to run VASP 114 | print('Starting VASP calculation on organism {} '.format(organism.id)) 115 | devnull = open(os.devnull, 'w') 116 | try: 117 | subprocess.call(['callvasp', job_dir_path], stdout=devnull, 118 | stderr=devnull) 119 | except: 120 | print('Error running VASP on organism {} '.format(organism.id)) 121 | dictionary[key] = None 122 | return 123 | 124 | # parse the relaxed structure from the CONTCAR file 125 | try: 126 | relaxed_cell = 
Cell.from_file(job_dir_path + '/CONTCAR') 127 | except: 128 | print('Error reading structure of organism {} from CONTCAR ' 129 | 'file '.format(organism.id)) 130 | dictionary[key] = None 131 | return 132 | 133 | # check if the VASP calculation converged 134 | converged = False 135 | with open(job_dir_path + '/OUTCAR') as f: 136 | for line in f: 137 | if 'reached' in line and 'required' in line and \ 138 | 'accuracy' in line: 139 | converged = True 140 | if not converged: 141 | print('VASP relaxation of organism {} did not converge '.format( 142 | organism.id)) 143 | dictionary[key] = None 144 | return 145 | 146 | # parse the internal energy and pV (if needed) and compute the enthalpy 147 | pv = 0 148 | with open(job_dir_path + '/OUTCAR') as f: 149 | for line in f: 150 | if 'energy(sigma->0)' in line: 151 | u = float(line.split()[-1]) 152 | elif 'enthalpy' in line: 153 | pv = float(line.split()[-1]) 154 | enthalpy = u + pv 155 | 156 | organism.cell = relaxed_cell 157 | organism.total_energy = enthalpy 158 | organism.epa = enthalpy/organism.cell.num_sites 159 | print('Setting energy of organism {} to {} ' 160 | 'eV/atom '.format(organism.id, organism.epa)) 161 | dictionary[key] = organism 162 | 163 | 164 | class LammpsEnergyCalculator(object): 165 | """ 166 | Calculates the energy of an organism using LAMMPS. 167 | """ 168 | 169 | def __init__(self, input_script, geometry): 170 | """ 171 | Makes a LammpsEnergyCalculator. 172 | 173 | Args: 174 | input_script: the path to the lammps input script 175 | 176 | geometry: the Geometry of the search 177 | 178 | Precondition: the input script exists and is valid 179 | """ 180 | 181 | self.name = 'lammps' 182 | 183 | # the path to the lammps input script 184 | self.input_script = input_script 185 | 186 | def do_energy_calculation(self, organism, dictionary, key, 187 | composition_space): 188 | """ 189 | Calculates the energy of an organism using LAMMPS, and stores the 190 | relaxed organism in the provided dictionary at the provided key. If the 191 | calculation fails, stores None in the dictionary instead. 192 | 193 | Args: 194 | organism: the Organism whose energy we want to calculate 195 | 196 | dictionary: a dictionary in which to store the relaxed Organism 197 | 198 | key: the key specifying where to store the relaxed Organism in the 199 | dictionary 200 | 201 | composition_space: the CompositionSpace of the search 202 | 203 | Precondition: the garun directory and temp subdirectory exist, and we 204 | are currently located inside the garun directory 205 | """ 206 | 207 | # make the job directory 208 | job_dir_path = str(os.getcwd()) + '/temp/' + str(organism.id) 209 | os.mkdir(job_dir_path) 210 | 211 | # copy the lammps input script to the job directory and get its path 212 | shutil.copy(self.input_script, job_dir_path) 213 | script_name = os.path.basename(self.input_script) 214 | input_script_path = job_dir_path + '/' + str(script_name) 215 | 216 | # write the in.data file 217 | self.conform_to_lammps(organism.cell) 218 | self.write_data_file(organism, job_dir_path, composition_space) 219 | 220 | # write out the unrelaxed structure to a poscar file 221 | organism.cell.to(fmt='poscar', filename=job_dir_path + '/POSCAR.' 
+ 222 | str(organism.id) + '_unrelaxed') 223 | 224 | # run 'calllammps' script as a subprocess to run LAMMPS 225 | print('Starting LAMMPS calculation on organism {} '.format( 226 | organism.id)) 227 | try: 228 | lammps_output = subprocess.check_output( 229 | ['calllammps', input_script_path], stderr=subprocess.STDOUT) 230 | # convert from bytes to string (for Python 3) 231 | lammps_output = lammps_output.decode('utf-8') 232 | except subprocess.CalledProcessError as e: 233 | # write the output of a bad LAMMPS call to for the user's reference 234 | with open(job_dir_path + '/log.lammps', 'w') as log_file: 235 | log_file.write(e.output.decode('utf-8')) 236 | print('Error running LAMMPS on organism {} '.format(organism.id)) 237 | dictionary[key] = None 238 | return 239 | 240 | # write the LAMMPS output 241 | with open(job_dir_path + '/log.lammps', 'w') as log_file: 242 | log_file.write(lammps_output) 243 | 244 | # parse the relaxed structure from the atom.dump file 245 | symbols = [] 246 | all_elements = composition_space.get_all_elements() 247 | for element in all_elements: 248 | symbols.append(element.symbol) 249 | try: 250 | relaxed_cell = self.get_relaxed_cell( 251 | job_dir_path + '/dump.atom', job_dir_path + '/in.data', 252 | symbols) 253 | except: 254 | print('Error reading structure of organism {} from LAMMPS ' 255 | 'output '.format(organism.id)) 256 | dictionary[key] = None 257 | return 258 | 259 | # parse the total energy from the log.lammps file 260 | try: 261 | total_energy = self.get_energy(job_dir_path + '/log.lammps') 262 | except: 263 | print('Error reading energy of organism {} from LAMMPS ' 264 | 'output '.format(organism.id)) 265 | dictionary[key] = None 266 | return 267 | 268 | # check that the total energy isn't unphysically large 269 | # (can be a problem for empirical potentials) 270 | epa = total_energy/organism.cell.num_sites 271 | if epa < -50: 272 | print('Discarding organism {} due to unphysically large energy: ' 273 | '{} eV/atom.'.format(organism.id, str(epa))) 274 | dictionary[key] = None 275 | return 276 | 277 | organism.cell = relaxed_cell 278 | organism.total_energy = total_energy 279 | organism.epa = epa 280 | print('Setting energy of organism {} to {} eV/atom '.format( 281 | organism.id, organism.epa)) 282 | dictionary[key] = organism 283 | 284 | def conform_to_lammps(self, cell): 285 | """ 286 | Modifies a cell to satisfy the requirements of lammps, which are: 287 | 288 | 1. the lattice vectors lie in the principal directions 289 | 290 | 2. the maximum extent in the Cartesian x-direction is achieved by 291 | lattice vector a 292 | 293 | 3. the maximum extent in the Cartesian y-direction is achieved by 294 | lattice vector b 295 | 296 | 4. the maximum extent in the Cartesian z-direction is achieved by 297 | lattice vector c 298 | 299 | by taking supercells along lattice vectors when needed. 
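        For example, if lattice vector b or c extends farther along the
        Cartesian x-axis than a, a [2, 1, 1] supercell is taken (doubling a)
        and the check is repeated; if c then extends farther along y than b,
        a [1, 2, 1] supercell is taken (doubling b).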
300 | 301 | Args: 302 | cell: the Cell to modify 303 | """ 304 | 305 | cell.rotate_to_principal_directions() 306 | lattice_coords = cell.lattice.matrix 307 | ax = lattice_coords[0][0] 308 | bx = lattice_coords[1][0] 309 | cx = lattice_coords[2][0] 310 | by = lattice_coords[1][1] 311 | cy = lattice_coords[2][1] 312 | if ax < bx or ax < cx: 313 | cell.make_supercell([2, 1, 1]) 314 | self.conform_to_lammps(cell) 315 | elif by < cy: 316 | cell.make_supercell([1, 2, 1]) 317 | self.conform_to_lammps(cell) 318 | 319 | def write_data_file(self, organism, job_dir_path, composition_space): 320 | """ 321 | Writes the file (called in.data) containing the structure that LAMMPS 322 | reads. 323 | 324 | Args: 325 | organism: the Organism whose structure to write 326 | 327 | job_dir_path: path to the job directory (as a string) where the 328 | file will be written 329 | 330 | composition_space: the CompositionSpace of the search 331 | """ 332 | 333 | # get xhi, yhi and zhi from the lattice vectors 334 | lattice_coords = organism.cell.lattice.matrix 335 | xhi = lattice_coords[0][0] 336 | yhi = lattice_coords[1][1] 337 | zhi = lattice_coords[2][2] 338 | box_bounds = [[0.0, xhi], [0.0, yhi], [0.0, zhi]] 339 | 340 | # get xy, xz and yz from the lattice vectors 341 | xy = lattice_coords[1][0] 342 | xz = lattice_coords[2][0] 343 | yz = lattice_coords[2][1] 344 | box_tilts = [xy, xz, yz] 345 | 346 | # make a LammpsBox object from the bounds and tilts 347 | lammps_box = LammpsBox(box_bounds, tilt=box_tilts) 348 | 349 | # parse the element symbols and atom_style from the lammps input 350 | # script, preserving the order in which the element symbols appear 351 | # TODO: not all formats give the element symbols at the end of the line 352 | # containing the pair_coeff keyword. Find a better way. 353 | elements_dict = collections.OrderedDict() 354 | num_elements = len(composition_space.get_all_elements()) 355 | 356 | is_single_element = (num_elements == 1) 357 | if is_single_element: 358 | single_element = composition_space.get_all_elements() 359 | elements_dict[single_element[0].symbol] = single_element[0] 360 | 361 | with open(self.input_script, 'r') as f: 362 | lines = f.readlines() 363 | for line in lines: 364 | if 'atom_style' in line: 365 | atom_style_in_script = line.split()[1] 366 | elif not is_single_element and 'pair_coeff' in line: 367 | element_symbols = line.split()[-1*num_elements:] 368 | 369 | if not is_single_element: 370 | for symbol in element_symbols: 371 | elements_dict[symbol] = Element(symbol) 372 | 373 | # make a LammpsData object and use it write the in.data file 374 | force_field = ForceField(elements_dict.items()) 375 | topology = Topology(organism.cell.sites) 376 | lammps_data = LammpsData.from_ff_and_topologies( 377 | lammps_box, force_field, [topology], 378 | atom_style=atom_style_in_script) 379 | lammps_data.write_file(job_dir_path + '/in.data') 380 | 381 | def get_relaxed_cell(self, atom_dump_path, data_in_path, element_symbols): 382 | """ 383 | Parses the relaxed cell from the dump.atom file. 384 | 385 | Returns the relaxed cell as a Cell object. 
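        The lattice is rebuilt from the box bounds and tilt factors (xy, xz,
        yz) reported in the dump file, following the LAMMPS triclinic box
        conventions, and atom types are mapped back to element symbols by
        matching the masses listed in the in.data file.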
386 | 387 | Args: 388 | atom_dump_path: the path (as a string) to the dump.atom file 389 | 390 | in_data_path: the path (as a string) to the in.data file 391 | 392 | element_symbols: a tuple containing the set of chemical symbols of 393 | all the elements in the compositions space 394 | """ 395 | 396 | # read the dump.atom file as a list of strings 397 | with open(atom_dump_path, 'r') as atom_dump: 398 | lines = atom_dump.readlines() 399 | 400 | # get the lattice vectors 401 | a_data = lines[5].split() 402 | b_data = lines[6].split() 403 | c_data = lines[7].split() 404 | 405 | # parse the tilt factors 406 | xy = float(a_data[2]) 407 | xz = float(b_data[2]) 408 | yz = float(c_data[2]) 409 | 410 | # parse the bounds 411 | xlo_bound = float(a_data[0]) 412 | xhi_bound = float(a_data[1]) 413 | ylo_bound = float(b_data[0]) 414 | yhi_bound = float(b_data[1]) 415 | zlo_bound = float(c_data[0]) 416 | zhi_bound = float(c_data[1]) 417 | 418 | # compute xlo, xhi, ylo, yhi, zlo and zhi according to the conversion 419 | # given by LAMMPS 420 | # http://lammps.sandia.gov/doc/Section_howto.html#howto-12 421 | xlo = xlo_bound - min([0.0, xy, xz, xy + xz]) 422 | xhi = xhi_bound - max([0.0, xy, xz, xy + xz]) 423 | ylo = ylo_bound - min(0.0, yz) 424 | yhi = yhi_bound - max([0.0, yz]) 425 | zlo = zlo_bound 426 | zhi = zhi_bound 427 | 428 | # construct a Lattice object from the lo's and hi's and tilts 429 | a = [xhi - xlo, 0.0, 0.0] 430 | b = [xy, yhi - ylo, 0.0] 431 | c = [xz, yz, zhi - zlo] 432 | relaxed_lattice = Lattice([a, b, c]) 433 | 434 | # get the number of atoms 435 | num_atoms = int(lines[3]) 436 | 437 | # get the atom types and their Cartesian coordinates 438 | types = [] 439 | relaxed_cart_coords = [] 440 | for i in range(num_atoms): 441 | atom_info = lines[9 + i].split() 442 | types.append(int(atom_info[1])) 443 | relaxed_cart_coords.append([float(atom_info[2]) - xlo, 444 | float(atom_info[3]) - ylo, 445 | float(atom_info[4]) - zlo]) 446 | 447 | # read the atom types and corresponding atomic masses from in.data 448 | with open(data_in_path, 'r') as data_in: 449 | lines = data_in.readlines() 450 | types_masses = {} 451 | for i in range(len(lines)): 452 | if 'Masses' in lines[i]: 453 | for j in range(len(element_symbols)): 454 | types_masses[int(lines[i + j + 2].split()[0])] = float( 455 | lines[i + j + 2].split()[1]) 456 | 457 | # map the atom types to chemical symbols 458 | types_symbols = {} 459 | for symbol in element_symbols: 460 | for atom_type in types_masses: 461 | # round the atomic masses to one decimal point for comparison 462 | if format(float(Element(symbol).atomic_mass), '.1f') == format( 463 | types_masses[atom_type], '.1f'): 464 | types_symbols[atom_type] = symbol 465 | 466 | # make a list of chemical symbols (one for each site) 467 | relaxed_symbols = [] 468 | for atom_type in types: 469 | relaxed_symbols.append(types_symbols[atom_type]) 470 | 471 | return Cell(relaxed_lattice, relaxed_symbols, relaxed_cart_coords, 472 | coords_are_cartesian=True) 473 | 474 | def get_energy(self, lammps_log_path): 475 | """ 476 | Parses the final energy from the log.lammps file written by LAMMPS. 477 | 478 | Returns the total energy as a float. 
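        The energy is taken from the 'TotEng' column, two lines below the
        first line of the log that contains the 'Step Temp E_pair E_mol
        TotEng' keywords.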
479 | 480 | Args: 481 | lammps_log_path: the path (as a string) to the log.lammps file 482 | """ 483 | 484 | # read the log.lammps file as a list of strings 485 | with open(lammps_log_path, 'r') as f: 486 | lines = f.readlines() 487 | 488 | # get the last line with the keywords (where the final energy is) 489 | match_strings = ['Step', 'Temp', 'E_pair', 'E_mol', 'TotEng'] 490 | for i in range(len(lines)): 491 | if all(match in lines[i] for match in match_strings): 492 | energy = float(lines[i + 2].split()[4]) 493 | return energy 494 | 495 | 496 | class GulpEnergyCalculator(object): 497 | """ 498 | Calculates the energy of an organism using GULP. 499 | """ 500 | 501 | def __init__(self, header_file, potential_file, geometry): 502 | """ 503 | Makes a GulpEnergyCalculator. 504 | 505 | Args: 506 | header_file: the path to the gulp header file 507 | 508 | potential_file: the path to the gulp potential file 509 | 510 | geometry: the Geometry of the search 511 | 512 | Precondition: the header and potential files exist and are valid 513 | """ 514 | 515 | self.name = 'gulp' 516 | 517 | # the paths to the header and potential files 518 | self.header_path = header_file 519 | self.potential_path = potential_file 520 | 521 | # read the gulp header and potential files 522 | with open(header_file, 'r') as gulp_header_file: 523 | self.header = gulp_header_file.readlines() 524 | with open(potential_file, 'r') as gulp_potential_file: 525 | self.potential = gulp_potential_file.readlines() 526 | 527 | # for processing gulp input and output 528 | self.gulp_io = gulp_caller.GulpIO() 529 | 530 | # whether the anions and cations are polarizable in the gulp potential 531 | self.anions_shell, self.cations_shell = self.get_shells() 532 | 533 | # determine which lattice parameters should be relaxed 534 | # and make the corresponding flags for the input file 535 | # 536 | # relax a, b, c, alpha, beta, gamma 537 | if geometry.shape == 'bulk': 538 | self.lattice_flags = None 539 | # relax a, b and gamma but not c, alpha and beta 540 | elif geometry.shape == 'sheet': 541 | self.lattice_flags = '1 1 0 0 0 1' 542 | # relax c, but not a, b, alpha, beta and gamma 543 | elif geometry.shape == 'wire': 544 | self.lattice_flags = '0 0 1 0 0 0' 545 | # don't relax any of the lattice parameters 546 | elif geometry.shape == 'cluster': 547 | self.lattice_flags = '0 0 0 0 0 0' 548 | 549 | def get_shells(self): 550 | """ 551 | Determines whether the anions and cations have shells by looking at the 552 | potential file. 553 | 554 | Returns two booleans indicating whether the anions and cations have 555 | shells, respectively. 556 | """ 557 | 558 | # get the symbols of the elements with shells 559 | shells = [] 560 | for line in self.potential: 561 | if 'shel' in line: 562 | line_parts = line.split() 563 | shells.append(str(line_parts[line_parts.index('shel') - 1])) 564 | shells = list(set(shells)) 565 | 566 | # determine whether the elements with shells are anions and/or cations 567 | anions_shell = False 568 | cations_shell = False 569 | for symbol in shells: 570 | element = Element(symbol) 571 | if element in gulp_caller._anions: 572 | anions_shell = True 573 | elif element in gulp_caller._cations: 574 | cations_shell = True 575 | return anions_shell, cations_shell 576 | 577 | def do_energy_calculation(self, organism, dictionary, key, 578 | composition_space): 579 | """ 580 | Calculates the energy of an organism using GULP, and stores the relaxed 581 | organism in the provided dictionary at the provided key. 
If the 582 | calculation fails, stores None in the dictionary instead. 583 | 584 | Args: 585 | organism: the Organism whose energy we want to calculate 586 | 587 | dictionary: a dictionary in which to store the relaxed Organism 588 | 589 | key: the key specifying where to store the relaxed Organism in the 590 | dictionary 591 | 592 | composition_space: the CompositionSpace of the search 593 | 594 | Precondition: the garun directory and temp subdirectory exist, and we 595 | are currently located inside the garun directory 596 | 597 | TODO: maybe use the custodian package for error handling 598 | """ 599 | 600 | # make the job directory 601 | job_dir_path = str(os.getcwd()) + '/temp/' + str(organism.id) 602 | os.mkdir(job_dir_path) 603 | 604 | # just for testing, write out the unrelaxed structure to a poscar file 605 | # organism.cell.to(fmt='poscar', filename= job_dir_path + 606 | # '/POSCAR.' + str(organism.id) + '_unrelaxed') 607 | 608 | # write the GULP input file 609 | gin_path = job_dir_path + '/' + str(organism.id) + '.gin' 610 | self.write_input_file(organism, gin_path) 611 | 612 | # run 'calllgulp' script as a subprocess to run GULP 613 | print('Starting GULP calculation on organism {} '.format(organism.id)) 614 | try: 615 | gulp_output = subprocess.check_output(['callgulp', gin_path], 616 | stderr=subprocess.STDOUT) 617 | # convert from bytes to string (for Python 3) 618 | gulp_output = gulp_output.decode('utf-8') 619 | except subprocess.CalledProcessError as e: 620 | # write the output of a bad GULP call to for the user's reference 621 | with open(job_dir_path + '/' + str(organism.id) + '.gout', 622 | 'w') as gout_file: 623 | gout_file.write(e.output.decode('utf-8')) 624 | print('Error running GULP on organism {} '.format(organism.id)) 625 | dictionary[key] = None 626 | return 627 | 628 | # write the GULP output for the user's reference 629 | with open(job_dir_path + '/' + str(organism.id) + '.gout', 630 | 'w') as gout_file: 631 | gout_file.write(gulp_output) 632 | 633 | # check if not converged (part of this is copied from pymatgen) 634 | conv_err_string = 'Conditions for a minimum have not been satisfied' 635 | gradient_norm = self.get_grad_norm(gulp_output) 636 | if conv_err_string in gulp_output and gradient_norm > 0.1: 637 | print('The GULP calculation on organism {} did not ' 638 | 'converge '.format(organism.id)) 639 | dictionary[key] = None 640 | return 641 | 642 | # parse the relaxed structure from the gulp output 643 | try: 644 | # TODO: change this line if pymatgen fixes the gulp parser 645 | relaxed_cell = self.get_relaxed_cell(gulp_output) 646 | except: 647 | print('Error reading structure of organism {} from GULP ' 648 | 'output '.format(organism.id)) 649 | dictionary[key] = None 650 | return 651 | 652 | # parse the total energy from the gulp output 653 | try: 654 | total_energy = self.get_energy(gulp_output) 655 | except: 656 | print('Error reading energy of organism {} from GULP ' 657 | 'output '.format(organism.id)) 658 | dictionary[key] = None 659 | return 660 | 661 | # sometimes gulp takes a supercell 662 | num_atoms = self.get_num_atoms(gulp_output) 663 | 664 | organism.cell = relaxed_cell 665 | organism.epa = total_energy/num_atoms 666 | organism.total_energy = organism.epa*organism.cell.num_sites 667 | print('Setting energy of organism {} to {} eV/atom '.format( 668 | organism.id, organism.epa)) 669 | dictionary[key] = organism 670 | 671 | def write_input_file(self, organism, gin_path): 672 | """ 673 | Writes the gulp input file. 
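        The file is assembled by concatenating the header lines, the
        structure lines (with lattice parameters rounded and relaxation
        flags appended according to the geometry), and the potential lines.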
674 | 675 | Args: 676 | organism: the Organism whose energy we want to calculate 677 | 678 | gin_path: the path to the GULP input file 679 | """ 680 | 681 | # get the structure lines 682 | structure_lines = self.gulp_io.structure_lines( 683 | organism.cell, anion_shell_flg=self.anions_shell, 684 | cation_shell_flg=self.cations_shell, symm_flg=False) 685 | structure_lines = structure_lines.split('\n') 686 | del structure_lines[-1] # remove empty line that gets added 687 | 688 | # GULP errors out if too many decimal places in lattice parameters 689 | lattice_parameters = structure_lines[1].split() 690 | for i in range(len(lattice_parameters)): 691 | lattice_parameters[i] = round(float(lattice_parameters[i]), 10) 692 | 693 | rounded_lattice_parameters = "" 694 | for lattice_parameter in lattice_parameters: 695 | rounded_lattice_parameters += str(lattice_parameter) 696 | rounded_lattice_parameters += " " 697 | structure_lines[1] = rounded_lattice_parameters 698 | 699 | # add flags for relaxing lattice parameters and ion positions 700 | if self.lattice_flags is not None: 701 | structure_lines[1] = structure_lines[1] + self.lattice_flags 702 | for i in range(3, len(structure_lines)): 703 | structure_lines[i] = structure_lines[i] + ' 1 1 1' 704 | 705 | # add newline characters to the end of each of the structure lines 706 | for i in range(len(structure_lines)): 707 | structure_lines[i] = structure_lines[i] + '\n' 708 | 709 | # construct complete input 710 | gulp_input = self.header + structure_lines + self.potential 711 | 712 | # print gulp input to a file 713 | with open(gin_path, 'w') as gin_file: 714 | for line in gulp_input: 715 | gin_file.write(line) 716 | 717 | def get_grad_norm(self, gout): 718 | """ 719 | Parses the final gradient norm from the GULP output. 720 | 721 | Args: 722 | gout: the GULP output, as a string 723 | """ 724 | 725 | output_lines = gout.split('\n') 726 | for line in output_lines: 727 | if 'Final Gnorm' in line: 728 | line_parts = line.split() 729 | return float(line_parts[3]) 730 | 731 | def get_energy(self, gout): 732 | """ 733 | Parses the final energy from the GULP output. 734 | 735 | Args: 736 | gout: the GULP output, as a string 737 | """ 738 | 739 | output_lines = gout.split('\n') 740 | for line in output_lines: 741 | if 'Final energy' in line: 742 | return float(line.split()[3]) 743 | 744 | def get_num_atoms(self, gout): 745 | """ 746 | Parses the number of atoms used by GULP in the calculation. 747 | 748 | Args: 749 | gout: the GULP output, as a string 750 | """ 751 | 752 | output_lines = gout.split('\n') 753 | for line in output_lines: 754 | if 'Total number atoms' in line: 755 | line_parts = line.split() 756 | return int(line_parts[-1]) 757 | 758 | # This method is copied from GulpIO.get_relaxed_structure, and I modified 759 | # it slightly to get it to work. 760 | # TODO: if pymatgen fixes this method, then I can delete this. 
761 | # Alternatively, could submit a pull request with my fix 762 | def get_relaxed_cell(self, gout): 763 | # Find the structure lines 764 | structure_lines = [] 765 | cell_param_lines = [] 766 | output_lines = gout.split("\n") 767 | no_lines = len(output_lines) 768 | i = 0 769 | # Compute the input lattice parameters 770 | while i < no_lines: 771 | line = output_lines[i] 772 | if "Full cell parameters" in line: 773 | i += 2 774 | line = output_lines[i] 775 | a = float(line.split()[8]) 776 | alpha = float(line.split()[11]) 777 | line = output_lines[i + 1] 778 | b = float(line.split()[8]) 779 | beta = float(line.split()[11]) 780 | line = output_lines[i + 2] 781 | c = float(line.split()[8]) 782 | gamma = float(line.split()[11]) 783 | i += 3 784 | break 785 | elif "Cell parameters" in line: 786 | i += 2 787 | line = output_lines[i] 788 | a = float(line.split()[2]) 789 | alpha = float(line.split()[5]) 790 | line = output_lines[i + 1] 791 | b = float(line.split()[2]) 792 | beta = float(line.split()[5]) 793 | line = output_lines[i + 2] 794 | c = float(line.split()[2]) 795 | gamma = float(line.split()[5]) 796 | i += 3 797 | break 798 | else: 799 | i += 1 800 | 801 | while i < no_lines: 802 | line = output_lines[i] 803 | if "Final fractional coordinates of atoms" in line or \ 804 | "Final asymmetric unit coordinates" in line: # Ben's add 805 | # read the site coordinates in the following lines 806 | i += 6 807 | line = output_lines[i] 808 | while line[0:2] != '--': 809 | structure_lines.append(line) 810 | i += 1 811 | line = output_lines[i] 812 | # read the cell parameters 813 | i += 9 814 | line = output_lines[i] 815 | if "Final cell parameters" in line: 816 | i += 3 817 | for del_i in range(6): 818 | line = output_lines[i + del_i] 819 | cell_param_lines.append(line) 820 | break 821 | else: 822 | i += 1 823 | 824 | # Process the structure lines 825 | if structure_lines: 826 | sp = [] 827 | coords = [] 828 | for line in structure_lines: 829 | fields = line.split() 830 | if fields[2] == 'c': 831 | sp.append(fields[1]) 832 | coords.append(list(float(x) for x in fields[3:6])) 833 | else: 834 | raise IOError("No structure found") 835 | 836 | if cell_param_lines: 837 | a = float(cell_param_lines[0].split()[1]) 838 | b = float(cell_param_lines[1].split()[1]) 839 | c = float(cell_param_lines[2].split()[1]) 840 | alpha = float(cell_param_lines[3].split()[1]) 841 | beta = float(cell_param_lines[4].split()[1]) 842 | gamma = float(cell_param_lines[5].split()[1]) 843 | latt = Lattice.from_parameters(a, b, c, alpha, beta, gamma) 844 | 845 | return Cell(latt, sp, coords) 846 | -------------------------------------------------------------------------------- /gasp/geometry.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | 8 | """ 9 | Geometry module: 10 | 11 | This module contains classes to hold geometry-specific data and operations, 12 | including any additional constraints. All geometry classes must implement 13 | pad(), unpad() and get_size() methods. 14 | 15 | 1. Bulk: Data and operations for 3D bulk structures 16 | 17 | 2. Sheet: Data and operations for 2D sheet structures 18 | 19 | 3. Wire: Data and operations for 1D wire structures 20 | 21 | 4. 
Cluster: Data and operations for 0D cluster structures 22 | 23 | """ 24 | 25 | from pymatgen.core.lattice import Lattice 26 | from pymatgen.core.sites import Site 27 | 28 | import numpy as np 29 | 30 | 31 | class Bulk(object): 32 | ''' 33 | Contains data and operations specific to bulk structures (so not much...). 34 | ''' 35 | 36 | def __init__(self): 37 | ''' 38 | Makes a Bulk object. 39 | ''' 40 | 41 | self.shape = 'bulk' 42 | self.max_size = np.inf 43 | self.min_size = -np.inf 44 | self.padding = None 45 | 46 | def pad(self, cell, padding='from_geometry'): 47 | ''' 48 | Does nothing. 49 | 50 | Args: 51 | cell: the Cell to pad 52 | 53 | padding: the amount of vacuum padding to add. If set to 54 | 'from_geometry', then the value in self.padding is used. 55 | ''' 56 | 57 | pass 58 | 59 | def unpad(self, cell, constraints): 60 | ''' 61 | Does nothing. 62 | 63 | Args: 64 | cell: the Cell to unpad 65 | 66 | constraints: the Constraints of the search 67 | ''' 68 | 69 | pass 70 | 71 | def get_size(self, cell): 72 | ''' 73 | Returns 0. 74 | 75 | Args: 76 | cell: the Cell whose size to get 77 | ''' 78 | 79 | return 0 80 | 81 | 82 | class Sheet(object): 83 | ''' 84 | Contains data and operations specific to sheet structures. 85 | ''' 86 | 87 | def __init__(self, geometry_parameters): 88 | ''' 89 | Makes a Sheet, and sets default parameter values if necessary. 90 | 91 | Args: 92 | geometry_parameters: a dictionary of parameters 93 | ''' 94 | 95 | self.shape = 'sheet' 96 | 97 | # default values 98 | self.default_max_size = np.inf 99 | self.default_min_size = -np.inf 100 | self.default_padding = 10 101 | 102 | # parse the parameters, and set defaults if necessary 103 | # max size 104 | if 'max_size' not in geometry_parameters: 105 | self.max_size = self.default_max_size 106 | elif geometry_parameters['max_size'] in (None, 'default'): 107 | self.max_size = self.default_max_size 108 | else: 109 | self.max_size = geometry_parameters['max_size'] 110 | 111 | # min size 112 | if 'min_size' not in geometry_parameters: 113 | self.min_size = self.default_min_size 114 | elif geometry_parameters['min_size'] in (None, 'default'): 115 | self.min_size = self.default_min_size 116 | else: 117 | self.min_size = geometry_parameters['min_size'] 118 | 119 | # padding 120 | if 'padding' not in geometry_parameters: 121 | self.padding = self.default_padding 122 | elif geometry_parameters['padding'] in (None, 'default'): 123 | self.padding = self.default_padding 124 | else: 125 | self.padding = geometry_parameters['padding'] 126 | 127 | def pad(self, cell, padding='from_geometry'): 128 | ''' 129 | Modifies a cell by adding vertical vacuum padding and making the 130 | c-lattice vector normal to the plane of the sheet. The atoms are 131 | shifted to the center of the padded sheet. 132 | 133 | Args: 134 | cell: the Cell to pad 135 | 136 | padding: the amount of vacuum padding to add (in Angstroms). If not 137 | set, then the value in self.padding is used. 
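        A minimal usage sketch (assuming cell is a valid Cell object; the
        geometry parameters shown are only illustrative):

            sheet = Sheet({'max_size': 5})
            sheet.pad(cell)  # default 10 Angstrom vacuum padding
            # or, with an explicit padding amount:
            # sheet.pad(cell, padding=15)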
138 | ''' 139 | 140 | # get the padding amount 141 | if padding == 'from_geometry': 142 | pad_amount = self.padding 143 | else: 144 | pad_amount = padding 145 | 146 | # make the padded lattice 147 | cell.rotate_to_principal_directions() 148 | species = cell.species 149 | cartesian_coords = cell.cart_coords 150 | cart_bounds = cell.get_bounding_box(cart_coords=True) 151 | minz = cart_bounds[2][0] 152 | maxz = cart_bounds[2][1] 153 | layer_thickness = maxz - minz 154 | ax = cell.lattice.matrix[0][0] 155 | bx = cell.lattice.matrix[1][0] 156 | by = cell.lattice.matrix[1][1] 157 | padded_lattice = Lattice([[ax, 0.0, 0.0], [bx, by, 0.0], 158 | [0.0, 0.0, layer_thickness + pad_amount]]) 159 | 160 | # modify the cell to correspond to the padded lattice 161 | cell.modify_lattice(padded_lattice) 162 | site_indices = [] 163 | for i in range(len(cell.sites)): 164 | site_indices.append(i) 165 | cell.remove_sites(site_indices) 166 | for i in range(len(cartesian_coords)): 167 | cell.append(species[i], cartesian_coords[i], 168 | coords_are_cartesian=True) 169 | 170 | # translate the atoms back into the cell if needed, and shift them to 171 | # the vertical center 172 | cell.translate_atoms_into_cell() 173 | frac_bounds = cell.get_bounding_box(cart_coords=False) 174 | z_center = frac_bounds[2][0] + (frac_bounds[2][1] - 175 | frac_bounds[2][0])/2 176 | translation_vector = [0, 0, 0.5 - z_center] 177 | site_indices = [i for i in range(len(cell.sites))] 178 | cell.translate_sites(site_indices, translation_vector, 179 | frac_coords=True, to_unit_cell=False) 180 | 181 | def unpad(self, cell, constraints): 182 | ''' 183 | Modifies a cell by removing vertical vacuum padding, leaving only 184 | enough to satisfy the per-species MID constraints, and makes the 185 | c-lattice vector normal to the plane of the sheet (if it isn't 186 | already). 187 | 188 | Args: 189 | cell: the Cell to unpad 190 | 191 | constraints: the Constraints of the search 192 | ''' 193 | 194 | # make the unpadded lattice 195 | cell.rotate_to_principal_directions() 196 | species = cell.species 197 | cartesian_coords = cell.cart_coords 198 | layer_thickness = self.get_size(cell) 199 | max_mid = constraints.get_max_mid() + 0.01 # just to be safe... 200 | ax = cell.lattice.matrix[0][0] 201 | bx = cell.lattice.matrix[1][0] 202 | by = cell.lattice.matrix[1][1] 203 | unpadded_lattice = Lattice([[ax, 0.0, 0.0], [bx, by, 0.0], 204 | [0.0, 0.0, layer_thickness + max_mid]]) 205 | 206 | # modify the cell to correspond to the unpadded lattice 207 | cell.modify_lattice(unpadded_lattice) 208 | site_indices = [] 209 | for i in range(len(cell.sites)): 210 | site_indices.append(i) 211 | cell.remove_sites(site_indices) 212 | for i in range(len(cartesian_coords)): 213 | cell.append(species[i], cartesian_coords[i], 214 | coords_are_cartesian=True) 215 | 216 | # translate the atoms back into the cell if needed, and shift them to 217 | # the vertical center 218 | cell.translate_atoms_into_cell() 219 | frac_bounds = cell.get_bounding_box(cart_coords=False) 220 | z_center = frac_bounds[2][0] + (frac_bounds[2][1] - 221 | frac_bounds[2][0])/2 222 | translation_vector = [0, 0, 0.5 - z_center] 223 | site_indices = [i for i in range(len(cell.sites))] 224 | cell.translate_sites(site_indices, translation_vector, 225 | frac_coords=True, to_unit_cell=False) 226 | 227 | def get_size(self, cell): 228 | ''' 229 | Returns the layer thickness of a sheet structure, which is the maximum 230 | vertical distance between atoms in the cell. 
231 | 232 | Precondition: the cell has already been put into sheet format (c 233 | lattice vector parallel to the z-axis and a and b lattice vectors 234 | in the x-y plane) 235 | 236 | Args: 237 | cell: the Cell whose size to get 238 | ''' 239 | 240 | cart_bounds = cell.get_bounding_box(cart_coords=True) 241 | layer_thickness = cart_bounds[2][1] - cart_bounds[2][0] 242 | return layer_thickness 243 | 244 | 245 | class Wire(object): 246 | ''' 247 | Contains data and operations specific to wire structures. 248 | ''' 249 | 250 | def __init__(self, geometry_parameters): 251 | ''' 252 | Makes a Wire, and sets default parameter values if necessary. 253 | 254 | Args: 255 | geometry_parameters: a dictionary of parameters 256 | ''' 257 | 258 | self.shape = 'wire' 259 | 260 | # default values 261 | self.default_max_size = np.inf 262 | self.default_min_size = -np.inf 263 | self.default_padding = 10 264 | 265 | # parse the parameters, and set defaults if necessary 266 | # max size 267 | if 'max_size' not in geometry_parameters: 268 | self.max_size = self.default_max_size 269 | elif geometry_parameters['max_size'] in (None, 'default'): 270 | self.max_size = self.default_max_size 271 | else: 272 | self.max_size = geometry_parameters['max_size'] 273 | 274 | # min size 275 | if 'min_size' not in geometry_parameters: 276 | self.min_size = self.default_min_size 277 | elif geometry_parameters['min_size'] in (None, 'default'): 278 | self.min_size = self.default_min_size 279 | else: 280 | self.min_size = geometry_parameters['min_size'] 281 | 282 | # padding 283 | if 'padding' not in geometry_parameters: 284 | self.padding = self.default_padding 285 | elif geometry_parameters['padding'] in (None, 'default'): 286 | self.padding = self.default_padding 287 | else: 288 | self.padding = geometry_parameters['padding'] 289 | 290 | def pad(self, cell, padding='from_geometry'): 291 | ''' 292 | Modifies a cell by making the c lattice vector parallel to z-axis, and 293 | adds vacuum padding around the structure in the x and y directions by 294 | replacing a and b lattice vectors with padded vectors along the x and y 295 | axes, respectively. The atoms are shifted to the center of the padded 296 | cell. 297 | 298 | Args: 299 | cell: the Cell to pad 300 | 301 | padding: the amount of vacuum padding to add (in Angstroms). If not 302 | set, then the value in self.padding is used. 
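        A minimal usage sketch (assuming cell is a valid Cell object; the
        parameter values are only illustrative):

            wire = Wire({'max_size': 5, 'padding': 12})
            wire.pad(cell)                  # adds 12 Angstroms of vacuum in x and y
            diameter = wire.get_size(cell)  # largest interatomic distance in the x-y plane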
303 | ''' 304 | 305 | # get the padding amount 306 | if padding == 'from_geometry': 307 | pad_amount = self.padding 308 | else: 309 | pad_amount = padding 310 | 311 | # make the padded lattice 312 | cell.rotate_c_parallel_to_z() 313 | species = cell.species 314 | cartesian_coords = cell.cart_coords 315 | cart_bounds = cell.get_bounding_box(cart_coords=True) 316 | x_min = cart_bounds[0][0] 317 | x_max = cart_bounds[0][1] 318 | y_min = cart_bounds[1][0] 319 | y_max = cart_bounds[1][1] 320 | x_extent = x_max - x_min 321 | y_extent = y_max - y_min 322 | cz = cell.lattice.matrix[2][2] 323 | padded_lattice = Lattice([[x_extent + pad_amount, 0, 0], 324 | [0, y_extent + pad_amount, 0], [0, 0, cz]]) 325 | 326 | # modify the cell to correspond to the padded lattice 327 | cell.modify_lattice(padded_lattice) 328 | site_indices = [] 329 | for i in range(len(cell.sites)): 330 | site_indices.append(i) 331 | cell.remove_sites(site_indices) 332 | for i in range(len(cartesian_coords)): 333 | cell.append(species[i], cartesian_coords[i], 334 | coords_are_cartesian=True) 335 | 336 | # translate the atoms back into the cell if needed, and shift them to 337 | # the horizontal center 338 | cell.translate_atoms_into_cell() 339 | frac_bounds = cell.get_bounding_box(cart_coords=False) 340 | x_center = frac_bounds[0][0] + (frac_bounds[0][1] - 341 | frac_bounds[0][0])/2 342 | y_center = frac_bounds[1][0] + (frac_bounds[1][1] - 343 | frac_bounds[1][0])/2 344 | translation_vector = [0.5 - x_center, 0.5 - y_center, 0.0] 345 | site_indices = [i for i in range(len(cell.sites))] 346 | cell.translate_sites(site_indices, translation_vector, 347 | frac_coords=True, to_unit_cell=False) 348 | 349 | def unpad(self, cell, constraints): 350 | ''' 351 | Modifies a cell by removing horizontal vacuum padding around a wire, 352 | leaving only enough to satisfy the per-species MID constraints, and 353 | makes the three lattice vectors lie along the three Cartesian 354 | directions. 355 | 356 | Args: 357 | cell: the Cell to unpad 358 | 359 | constraints: the Constraints of the search 360 | ''' 361 | 362 | # make the unpadded lattice 363 | cell.rotate_c_parallel_to_z() 364 | species = cell.species 365 | cartesian_coords = cell.cart_coords 366 | cart_bounds = cell.get_bounding_box(cart_coords=True) 367 | x_min = cart_bounds[0][0] 368 | x_max = cart_bounds[0][1] 369 | y_min = cart_bounds[1][0] 370 | y_max = cart_bounds[1][1] 371 | x_extent = x_max - x_min 372 | y_extent = y_max - y_min 373 | cz = cell.lattice.matrix[2][2] 374 | max_mid = constraints.get_max_mid() + 0.01 # just to be safe... 
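        # shrink a and b so they just contain the wire in x and y (plus
        # max_mid of vacuum), while keeping the original c vector unchanged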
375 | unpadded_lattice = Lattice([[x_extent + max_mid, 0.0, 0.0], 376 | [0, y_extent + max_mid, 0.0], 377 | [0.0, 0.0, cz]]) 378 | 379 | # modify the cell to correspond to the unpadded lattice 380 | cell.modify_lattice(unpadded_lattice) 381 | site_indices = [] 382 | for i in range(len(cell.sites)): 383 | site_indices.append(i) 384 | cell.remove_sites(site_indices) 385 | for i in range(len(cartesian_coords)): 386 | cell.append(species[i], cartesian_coords[i], 387 | coords_are_cartesian=True) 388 | 389 | # translate the atoms back into the cell if needed, and shift them to 390 | # the horizontal center 391 | cell.translate_atoms_into_cell() 392 | frac_bounds = cell.get_bounding_box(cart_coords=False) 393 | x_center = frac_bounds[0][0] + (frac_bounds[0][1] - 394 | frac_bounds[0][0])/2 395 | y_center = frac_bounds[1][0] + (frac_bounds[1][1] - 396 | frac_bounds[1][0])/2 397 | translation_vector = [0.5 - x_center, 0.5 - y_center, 0.0] 398 | site_indices = [i for i in range(len(cell.sites))] 399 | cell.translate_sites(site_indices, translation_vector, 400 | frac_coords=True, to_unit_cell=False) 401 | 402 | def get_size(self, cell): 403 | ''' 404 | Returns the diameter of a wire structure, defined as the maximum 405 | distance between atoms projected to the x-y plane. 406 | 407 | Precondition: the cell has already been put into wire format (c 408 | lattice vector is parallel to z-axis and a and b lattice vectors in 409 | the x-y plane), and all sites are located inside the cell (i.e., 410 | have fractional coordinates between 0 and 1). 411 | 412 | Args: 413 | cell: the Cell whose size to get 414 | ''' 415 | 416 | max_distance = 0 417 | for site_i in cell.sites: 418 | # make Site versions of each PeriodicSite so that the computed 419 | # distance won't include periodic images 420 | non_periodic_site_i = Site(site_i.species_and_occu, 421 | [site_i.coords[0], site_i.coords[1], 422 | 0.0]) 423 | for site_j in cell.sites: 424 | non_periodic_site_j = Site(site_j.species_and_occu, 425 | [site_j.coords[0], site_j.coords[1], 426 | 0.0]) 427 | distance = non_periodic_site_i.distance(non_periodic_site_j) 428 | if distance > max_distance: 429 | max_distance = distance 430 | return max_distance 431 | 432 | 433 | class Cluster(object): 434 | ''' 435 | Contains data and operations specific to clusters. 436 | ''' 437 | 438 | def __init__(self, geometry_parameters): 439 | ''' 440 | Makes a Cluster, and sets default parameter values if necessary. 
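        If 'max_size', 'min_size' or 'padding' are missing or set to None or
        'default', they fall back to infinity, negative infinity and 10
        Angstroms, respectively.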
441 | 442 | Args: 443 | geometry_parameters: a dictionary of parameters 444 | ''' 445 | 446 | self.shape = 'cluster' 447 | 448 | # default values 449 | self.default_max_size = np.inf 450 | self.default_min_size = -np.inf 451 | self.default_padding = 10 452 | 453 | # parse the parameters, and set defaults if necessary 454 | # max size 455 | if 'max_size' not in geometry_parameters: 456 | self.max_size = self.default_max_size 457 | elif geometry_parameters['max_size'] in (None, 'default'): 458 | self.max_size = self.default_max_size 459 | else: 460 | self.max_size = geometry_parameters['max_size'] 461 | 462 | # min size 463 | if 'min_size' not in geometry_parameters: 464 | self.min_size = self.default_min_size 465 | elif geometry_parameters['min_size'] in (None, 'default'): 466 | self.min_size = self.default_min_size 467 | else: 468 | self.min_size = geometry_parameters['min_size'] 469 | 470 | # padding 471 | if 'padding' not in geometry_parameters: 472 | self.padding = self.default_padding 473 | elif geometry_parameters['padding'] in (None, 'default'): 474 | self.padding = self.default_padding 475 | else: 476 | self.padding = geometry_parameters['padding'] 477 | 478 | def pad(self, cell, padding='from_geometry'): 479 | ''' 480 | Modifies a cell by replacing the three lattice vectors with ones along 481 | the three Cartesian directions and adding vacuum padding to each one. 482 | The atoms are shifted to the center of the padded cell. 483 | 484 | Args: 485 | cell: the Cell to pad 486 | 487 | padding: the amount of vacuum padding to add (in Angstroms). If not 488 | set, then the value in self.padding is used. 489 | ''' 490 | 491 | # get the padding amount 492 | if padding == 'from_geometry': 493 | pad_amount = self.padding 494 | else: 495 | pad_amount = padding 496 | 497 | # make the padded lattice 498 | species = cell.species 499 | cartesian_coords = cell.cart_coords 500 | cart_bounds = cell.get_bounding_box(cart_coords=True) 501 | x_min = cart_bounds[0][0] 502 | x_max = cart_bounds[0][1] 503 | y_min = cart_bounds[1][0] 504 | y_max = cart_bounds[1][1] 505 | z_min = cart_bounds[2][0] 506 | z_max = cart_bounds[2][1] 507 | x_extent = x_max - x_min 508 | y_extent = y_max - y_min 509 | z_extent = z_max - z_min 510 | padded_lattice = Lattice([[x_extent + pad_amount, 0, 0], 511 | [0, y_extent + pad_amount, 0], 512 | [0, 0, z_extent + pad_amount]]) 513 | 514 | # modify the cell to correspond to the padded lattice 515 | cell.modify_lattice(padded_lattice) 516 | site_indices = [] 517 | for i in range(len(cell.sites)): 518 | site_indices.append(i) 519 | cell.remove_sites(site_indices) 520 | for i in range(len(cartesian_coords)): 521 | cell.append(species[i], cartesian_coords[i], 522 | coords_are_cartesian=True) 523 | 524 | # translate the atoms back into the cell if needed, and shift them to 525 | # the center 526 | cell.translate_atoms_into_cell() 527 | frac_bounds = cell.get_bounding_box(cart_coords=False) 528 | x_center = frac_bounds[0][0] + (frac_bounds[0][1] - 529 | frac_bounds[0][0])/2 530 | y_center = frac_bounds[1][0] + (frac_bounds[1][1] - 531 | frac_bounds[1][0])/2 532 | z_center = frac_bounds[2][0] + (frac_bounds[2][1] - 533 | frac_bounds[2][0])/2 534 | translation_vector = [0.5 - x_center, 0.5 - y_center, 0.5 - z_center] 535 | site_indices = [i for i in range(len(cell.sites))] 536 | cell.translate_sites(site_indices, translation_vector, 537 | frac_coords=True, to_unit_cell=False) 538 | 539 | def unpad(self, cell, constraints): 540 | ''' 541 | Modifies a cell by removing vacuum padding in 
every direction, leaving 542 | only enough to satisfy the per-species MID constraints, and makes the 543 | three lattice vectors lie along the three Cartesian directions. 544 | 545 | Args: 546 | cell: the Cell to unpad 547 | 548 | constraints: the Constraints of the search 549 | ''' 550 | 551 | # make the unpadded lattice 552 | species = cell.species 553 | cartesian_coords = cell.cart_coords 554 | cart_bounds = cell.get_bounding_box(cart_coords=True) 555 | x_min = cart_bounds[0][0] 556 | x_max = cart_bounds[0][1] 557 | y_min = cart_bounds[1][0] 558 | y_max = cart_bounds[1][1] 559 | z_min = cart_bounds[2][0] 560 | z_max = cart_bounds[2][1] 561 | x_extent = x_max - x_min 562 | y_extent = y_max - y_min 563 | z_extent = z_max - z_min 564 | max_mid = constraints.get_max_mid() + 0.01 # just to be safe... 565 | unpadded_lattice = Lattice([[x_extent + max_mid, 0.0, 0.0], 566 | [0, y_extent + max_mid, 0.0], 567 | [0.0, 0.0, z_extent + max_mid]]) 568 | 569 | # modify the cell to correspond to the unpadded lattice 570 | cell.modify_lattice(unpadded_lattice) 571 | site_indices = [] 572 | for i in range(len(cell.sites)): 573 | site_indices.append(i) 574 | cell.remove_sites(site_indices) 575 | for i in range(len(cartesian_coords)): 576 | cell.append(species[i], cartesian_coords[i], 577 | coords_are_cartesian=True) 578 | 579 | # translate the atoms back into the cell if needed, and shift them to 580 | # the center 581 | cell.translate_atoms_into_cell() 582 | frac_bounds = cell.get_bounding_box(cart_coords=False) 583 | x_center = frac_bounds[0][0] + (frac_bounds[0][1] - 584 | frac_bounds[0][0])/2 585 | y_center = frac_bounds[1][0] + (frac_bounds[1][1] - 586 | frac_bounds[1][0])/2 587 | z_center = frac_bounds[2][0] + (frac_bounds[2][1] - 588 | frac_bounds[2][0])/2 589 | translation_vector = [0.5 - x_center, 0.5 - y_center, 0.5 - z_center] 590 | site_indices = [i for i in range(len(cell.sites))] 591 | cell.translate_sites(site_indices, translation_vector, 592 | frac_coords=True, to_unit_cell=False) 593 | 594 | def get_size(self, cell): 595 | ''' 596 | Returns the diameter of a cluster structure, defined as the maximum 597 | distance between atoms in the cell. 598 | 599 | Precondition: all sites are located inside the cell (i.e., have 600 | fractional coordinates between 0 and 1) 601 | 602 | Args: 603 | cell: the Cell whose size to get 604 | ''' 605 | 606 | max_distance = 0 607 | for site_i in cell.sites: 608 | # make Site versions of each PeriodicSite so that the computed 609 | # distance won't include periodic images 610 | non_periodic_site_i = Site(site_i.species_and_occu, site_i.coords) 611 | for site_j in cell.sites: 612 | non_periodic_site_j = Site(site_j.species_and_occu, 613 | site_j.coords) 614 | distance = non_periodic_site_i.distance(non_periodic_site_j) 615 | if distance > max_distance: 616 | max_distance = distance 617 | return max_distance 618 | -------------------------------------------------------------------------------- /gasp/organism_creators.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | 8 | """ 9 | Organism Creators module: 10 | 11 | This module contains the classes used to create organisms for the initial 12 | population. 13 | 14 | 1. RandomOrganismCreator: creates random organisms 15 | 16 | 2. 
FileOrganismCreator: creates organisms by reading their structures from 17 | files 18 | 19 | """ 20 | from gasp.general import Organism, Cell 21 | 22 | from pymatgen.core.lattice import Lattice 23 | from pymatgen.core.composition import Composition 24 | 25 | from fractions import Fraction 26 | import warnings 27 | import os 28 | import math 29 | import numpy as np 30 | 31 | 32 | class RandomOrganismCreator(object): 33 | """ 34 | Creates random organisms for the initial population. 35 | """ 36 | 37 | def __init__(self, random_org_parameters, composition_space, constraints): 38 | """ 39 | Makes a RandomOrganismCreator, and sets default parameter values if 40 | necessary. 41 | 42 | Args: 43 | random_org_parameters: the parameters for generating random 44 | organisms 45 | 46 | composition_space: the CompositionSpace of the search 47 | 48 | constraints: the Constraints of the search 49 | """ 50 | 51 | self.name = 'random organism creator' 52 | 53 | # defaults 54 | # 55 | # number of random organisms to make (only used for epa searches) 56 | self.default_number = 28 57 | # max number of atoms 58 | if composition_space.objective_function == 'epa': 59 | # make sure we can sample cells with two formula units 60 | target_number = constraints.min_num_atoms + 6 61 | num_formulas = target_number/composition_space.endpoints[ 62 | 0].num_atoms 63 | if num_formulas < 2: 64 | min_of_max = int(2*composition_space.endpoints[0].num_atoms) 65 | else: 66 | min_of_max = int(round( 67 | num_formulas)*composition_space.endpoints[0].num_atoms) 68 | else: 69 | min_of_max = constraints.min_num_atoms + 6 70 | self.default_max_num_atoms = min(min_of_max, constraints.max_num_atoms) 71 | # allow structure with compositions at the endpoints (for pd searches) 72 | self.default_allow_endpoints = True 73 | # volume scaling behavior 74 | # default volumes per atom of elemental ground state structures 75 | # computed from structures on materials project (materialsproject.org) 76 | self.all_default_vpas = {'H': 13.89, 'He': 15.79, 'Li': 20.12, 77 | 'Be': 7.94, 'B': 7.25, 'C': 10.58, 78 | 'N': 42.73, 'O': 13.46, 'F': 16.00, 79 | 'Ne': 19.93, 'Na': 37.12, 'Mg': 23.04, 80 | 'Al': 16.47, 'Si': 20.44, 'P': 23.93, 81 | 'S': 36.03, 'Cl': 34.90, 'Ar': 44.87, 82 | 'K': 73.51, 'Ca': 42.42, 'Sc': 24.64, 83 | 'Ti': 17.11, 'V': 13.41, 'Cr': 11.57, 84 | 'Mn': 11.04, 'Fe': 11.55, 'Co': 10.92, 85 | 'Ni': 10.79, 'Cu': 11.82, 'Zn': 15.56, 86 | 'Ga': 20.34, 'Ge': 23.92, 'As': 22.45, 87 | 'Se': 38.13, 'Br': 37.53, 'Kr': 65.09, 88 | 'Rb': 90.44, 'Sr': 54.88, 'Y': 32.85, 89 | 'Zr': 23.50, 'Nb': 18.31, 'Mo': 15.89, 90 | 'Tc': 14.59, 'Ru': 13.94, 'Rh': 14.25, 91 | 'Pd': 15.45, 'Ag': 18.00, 'Cd': 23.28, 92 | 'In': 27.56, 'Sn': 36.70, 'Sb': 31.78, 93 | 'Te': 35.03, 'I': 50.34, 'Xe': 83.51, 94 | 'Cs': 116.17, 'Ba': 63.64, 'Hf': 22.50, 95 | 'Ta': 18.25, 'W': 16.19, 'Re': 15.06, 96 | 'Os': 14.36, 'Ir': 14.55, 'Pt': 15.72, 97 | 'Au': 18.14, 'Hg': 31.45, 'Tl': 31.13, 98 | 'Pb': 32.30, 'Bi': 36.60, 'La': 37.15, 99 | 'Ce': 26.30, 'Pr': 36.47, 'Nd': 35.44, 100 | 'Pm': 34.58, 'Sm': 33.88, 'Eu': 46.28, 101 | 'Gd': 33.33, 'Tb': 32.09, 'Dy': 31.57, 102 | 'Ho': 31.45, 'Er': 30.90, 'Tm': 30.30, 103 | 'Yb': 40.45, 'Lu': 29.43, 'Ac': 45.52, 104 | 'Th': 32.03, 'Pa': 25.21, 'U': 19.98, 105 | 'Np': 18.43, 'Pu': 18.34} 106 | 107 | self.default_vpas = self.get_default_vpas(composition_space) 108 | 109 | # set to defaults 110 | if random_org_parameters in (None, 'default'): 111 | self.number = self.default_number 112 | self.max_num_atoms = self.default_max_num_atoms 113 | 
self.allow_endpoints = self.default_allow_endpoints 114 | self.vpas = self.default_vpas 115 | # parse the parameters and set to defaults if necessary 116 | else: 117 | # the number to make 118 | if 'number' not in random_org_parameters: 119 | self.number = self.default_number 120 | elif random_org_parameters['number'] in (None, 'default'): 121 | self.number = self.default_number 122 | else: 123 | self.number = random_org_parameters['number'] 124 | 125 | # the max number of atoms 126 | if 'max_num_atoms' not in random_org_parameters: 127 | self.max_num_atoms = self.default_max_num_atoms 128 | elif random_org_parameters['max_num_atoms'] in (None, 'default'): 129 | self.max_num_atoms = self.default_max_num_atoms 130 | elif random_org_parameters['max_num_atoms'] > \ 131 | constraints.max_num_atoms: 132 | print('The value passed to the "max_num_atoms" keyword in the ' 133 | 'InitialPopulation block may not exceed the value passed' 134 | ' to the "max_num_atoms" keyword in the Constraints ' 135 | 'block.') 136 | print('Quitting...') 137 | quit() 138 | elif random_org_parameters['max_num_atoms'] < \ 139 | constraints.min_num_atoms: 140 | print('The value passed to the "max_num_atoms" keyword in the ' 141 | 'InitialPopulation block may not be smaller than the ' 142 | 'value passed to the "min_num_atoms" keyword in the ' 143 | 'Constraints block.') 144 | print('Quitting...') 145 | quit() 146 | else: 147 | self.max_num_atoms = random_org_parameters['max_num_atoms'] 148 | 149 | # allowing composition space endpoints (only used for pd searches) 150 | if 'allow_endpoints' not in random_org_parameters: 151 | self.allow_endpoints = self.default_allow_endpoints 152 | elif random_org_parameters['allow_endpoints'] in (None, 'default'): 153 | self.allow_endpoints = self.default_allow_endpoints 154 | else: 155 | self.allow_endpoints = random_org_parameters['allow_endpoints'] 156 | 157 | # volume scaling 158 | self.vpas = self.default_vpas 159 | if 'volumes_per_atom' not in random_org_parameters: 160 | pass 161 | elif random_org_parameters['volumes_per_atom'] in (None, 162 | 'default'): 163 | pass 164 | else: 165 | # replace the specified volumes per atom with the given values 166 | for symbol in random_org_parameters['volumes_per_atom']: 167 | self.vpas[symbol] = random_org_parameters[ 168 | 'volumes_per_atom'][symbol] 169 | 170 | self.num_made = 0 # number added to initial population 171 | self.is_successes_based = True # it's based on number added 172 | self.is_finished = False 173 | 174 | def get_default_vpas(self, composition_space): 175 | """ 176 | Returns a dictionary containing the default volumes per atom for all 177 | the elements in the composition space. 178 | 179 | Args: 180 | composition_space: the CompositionSpace of the search 181 | """ 182 | 183 | default_vpas = {} 184 | for element in composition_space.get_all_elements(): 185 | default_vpas[element.symbol] = self.all_default_vpas[ 186 | element.symbol] 187 | return default_vpas 188 | 189 | def create_organism(self, id_generator, composition_space, constraints, 190 | random): 191 | """ 192 | Creates a random organism for the initial population. 193 | 194 | Returns a random organism, or None if an error was encountered during 195 | volume scaling. 196 | 197 | Note: for phase diagram searches, this is will not create structures 198 | with compositions equivalent to the endpoints of the composition 199 | space. Reference structures at those compositions should be 200 | provided with the FileOrganismCreator. 
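        Example (illustrative only; this mirrors the retry loop in
        gasp/scripts/run.py, which keeps asking a creator for organisms
        until one is returned or the creator is finished):

            new_organism = creator.create_organism(
                id_generator, composition_space, constraints, random)
            while new_organism is None and not creator.is_finished:
                new_organism = creator.create_organism(
                    id_generator, composition_space, constraints, random)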
201 | 202 | Args: 203 | id_generator: the IDGenerator used to assign id numbers to all 204 | organisms 205 | 206 | composition_space: the CompositionSpace of the search 207 | 208 | constraints: the Constraints of the search 209 | 210 | random: a copy of Python's built in PRNG 211 | """ 212 | 213 | # make a random lattice 214 | random_lattice = self.make_random_lattice(constraints, random) 215 | 216 | # get a list of species for the random organism 217 | species = self.get_species_list(composition_space, constraints, random) 218 | if species is None: # could happen for pd searches... 219 | return None 220 | 221 | # for each specie, generate a set of random fractional coordinates 222 | random_coordinates = [] 223 | for _ in range(len(species)): 224 | random_coordinates.append([random.random(), random.random(), 225 | random.random()]) 226 | 227 | # make a random cell 228 | random_cell = Cell(random_lattice, species, random_coordinates) 229 | 230 | # optionally scale the volume of the random structure 231 | if not self.scale_volume(random_cell): 232 | return None # sometimes pymatgen's scaling algorithm crashes 233 | 234 | # make the random organism 235 | random_org = Organism(random_cell, id_generator, self.name, 236 | composition_space) 237 | print('Random organism creator making organism {} '.format( 238 | random_org.id)) 239 | return random_org 240 | 241 | def make_random_lattice(self, constraints, random): 242 | """ 243 | Returns a random lattice that satisfies the constraints on maximum and 244 | minimum lengths and angles. 245 | 246 | Args: 247 | constraints: the Constraints of the search 248 | 249 | random: a copy of Python's built in PRNG 250 | """ 251 | 252 | # make three random lattice vectors that satisfy the length constraints 253 | a = constraints.min_lattice_length + random.random()*( 254 | constraints.max_lattice_length - constraints.min_lattice_length) 255 | b = constraints.min_lattice_length + random.random()*( 256 | constraints.max_lattice_length - constraints.min_lattice_length) 257 | c = constraints.min_lattice_length + random.random()*( 258 | constraints.max_lattice_length - constraints.min_lattice_length) 259 | 260 | # make three random lattice angles that satisfy the angle constraints 261 | alpha = constraints.min_lattice_angle + random.random()*( 262 | constraints.max_lattice_angle - constraints.min_lattice_angle) 263 | beta = constraints.min_lattice_angle + random.random()*( 264 | constraints.max_lattice_angle - constraints.min_lattice_angle) 265 | gamma = constraints.min_lattice_angle + random.random()*( 266 | constraints.max_lattice_angle - constraints.min_lattice_angle) 267 | 268 | # build the random lattice 269 | return Lattice.from_parameters(a, b, c, alpha, beta, gamma) 270 | 271 | def get_species_list(self, composition_space, constraints, random): 272 | """ 273 | Returns a list containing the species in the random organism. 274 | 275 | Args: 276 | composition_space: the CompositionSpace of the search 277 | 278 | constraints: the Constraints of the search 279 | 280 | random: a copy of Python's built in PRNG 281 | """ 282 | if composition_space.objective_function == 'epa': 283 | return self.get_epa_species_list(composition_space, constraints, 284 | random) 285 | elif composition_space.objective_function == 'pd': 286 | return self.get_pd_species_list(composition_space, constraints, 287 | random) 288 | 289 | def get_epa_species_list(self, composition_space, constraints, random): 290 | """ 291 | Returns a list containing the species in the random organism. 
292 | 293 | Precondition: the composition space contains only one endpoint 294 | (it's a fixed-composition search) 295 | 296 | Args: 297 | composition_space: the CompositionSpace of the search 298 | 299 | constraints: the Constraints of the search 300 | 301 | random: a copy of Python's built in PRNG 302 | 303 | Description: 304 | 305 | 1. Computes the minimum and maximum number of formula units from 306 | the minimum (constraints.min_num_atoms) and maximum 307 | (self.max_num_atoms) number of atoms and the number of atoms 308 | per formula unit. 309 | 310 | 2. Gets a random number of formula units within the range allowed 311 | by the minimum and maximum number of formula units. 312 | 313 | 3. Computes the number of atoms of each species from the random 314 | number of formula units. 315 | """ 316 | # get random number of formula units and resulting number of atoms 317 | reduced_formula = composition_space.endpoints[0].reduced_composition 318 | num_atoms_in_formula = reduced_formula.num_atoms 319 | max_num_formulas = int(math.floor( 320 | self.max_num_atoms/num_atoms_in_formula)) 321 | min_num_formulas = int(math.ceil( 322 | constraints.min_num_atoms/num_atoms_in_formula)) 323 | # round up the next formula unit if necessary 324 | if max_num_formulas < min_num_formulas: 325 | max_num_formulas += 1 326 | random_num_formulas = random.randint(min_num_formulas, 327 | max_num_formulas) 328 | 329 | # add the right number of each specie 330 | species = [] 331 | for specie in reduced_formula: 332 | for _ in range(random_num_formulas*int(reduced_formula[specie])): 333 | species.append(specie) 334 | return species 335 | 336 | def get_pd_species_list(self, composition_space, constraints, random): 337 | """ 338 | Returns a list containing the species in the random organism. 339 | 340 | Precondition: the composition space contains multiple endpoints 341 | (it's a fixed-composition search) 342 | 343 | Args: 344 | composition_space: the CompositionSpace of the search 345 | 346 | constraints: the Constraints of the search 347 | 348 | random: a copy of Python's built in PRNG 349 | 350 | Description: 351 | 352 | 1. Gets a random fraction of each composition space endpoint such 353 | that the fractions sum to 1. 354 | 355 | 2. Computes the fraction of each specie from the fraction of each 356 | endpoint and the amount of each specie within each endpoint. 357 | 358 | 3. Approximates the fraction of each specie as a rational number 359 | with a maximum possible denominator of self.max_num_atoms. 360 | 361 | 4. Takes the product of the denominators of all the species' 362 | rational fractions, and then multiplies each specie's rational 363 | fraction by this product to obtain the number of atoms of that 364 | species. 365 | 366 | 5. Checks if the total number of atoms exceeds self.max_num_atoms. 367 | If so, reduce the amount of each atom with a multiplicative 368 | factor. 369 | 370 | 6. Reduces the resulting composition (i.e., find the smallest 371 | number of atoms needed to describe the composition). 372 | 373 | 7. Optionally increases the number of atoms (w/o changing the 374 | composition) such that the min num atoms constraint is 375 | satisfied if possible. 376 | 377 | 8. Checks that the resulting number of atoms satisfies the maximum 378 | (self.max_num_atoms) number of atoms constraint, and optionally 379 | checks that the resulting composition is not equivalent to one 380 | of the endpoint compositions. 
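        Example of steps 3 and 4 (illustrative only; the element symbols
        and numbers below are hypothetical):

            from fractions import Fraction

            element_amounts = {'A': 0.3333, 'B': 0.6667}  # normalized amounts
            max_num_atoms = 30

            # step 3: approximate each amount as a rational number
            rational_amounts = {
                el: Fraction(amt).limit_denominator(max_num_atoms)
                for el, amt in element_amounts.items()}      # 1/3 and 2/3

            # step 4: scale by the product of the denominators to get
            # integer numbers of atoms of each element
            denom_product = 1.0
            for el in rational_amounts:
                denom_product *= rational_amounts[el].denominator
            atom_counts = {
                el: round(float(denom_product)*rational_amounts[el])
                for el in rational_amounts}                  # A: 3, B: 6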
381 | """ 382 | 383 | # get random fractions for each endpoint that sum to one 1 (i.e., a 384 | # random location in the composition space 385 | fracs = self.get_random_endpoint_fractions(composition_space, random) 386 | composition_space.endpoints.sort() 387 | endpoint_fracs = {} 388 | for i in range(len(fracs)): 389 | endpoint_fracs[composition_space.endpoints[i]] = fracs[i] 390 | 391 | # compute amount of each element from amount of each endpoint 392 | all_elements = composition_space.get_all_elements() 393 | element_amounts = {} 394 | for element in all_elements: 395 | element_amounts[element] = 0 396 | for formula in endpoint_fracs: 397 | for element in formula: 398 | element_amounts[element] += endpoint_fracs[ 399 | formula]*formula[element] 400 | 401 | # normalize the amounts of the elements 402 | amounts_sum = 0 403 | for element in element_amounts: 404 | amounts_sum += element_amounts[element] 405 | for element in element_amounts: 406 | element_amounts[element] = element_amounts[element]/amounts_sum 407 | 408 | # approximate the decimal amount of each element as a fraction 409 | # (rational number) 410 | rational_amounts = {} 411 | for element in element_amounts: 412 | rational_amounts[element] = Fraction( 413 | element_amounts[element]).limit_denominator( 414 | self.max_num_atoms) 415 | 416 | # multiply the denominators together, then multiply each fraction 417 | # by this result to get the number of atoms of each element 418 | denom_product = 1.0 419 | for element in rational_amounts: 420 | denom_product *= rational_amounts[element].denominator 421 | for element in rational_amounts: 422 | element_amounts[element] = round(float( 423 | denom_product)*rational_amounts[element]) 424 | 425 | # see how many total atoms we have 426 | num_atoms = 0 427 | for element in element_amounts: 428 | num_atoms += element_amounts[element] 429 | 430 | # reduce the number of atoms of each element if needed 431 | if num_atoms > self.max_num_atoms: 432 | numerator = random.randint( 433 | int(round(0.5*(constraints.min_num_atoms + 434 | self.max_num_atoms))), self.max_num_atoms) 435 | factor = numerator/num_atoms 436 | for element in element_amounts: 437 | element_amounts[element] = round( 438 | factor*element_amounts[element]) 439 | 440 | # make a Composition object from the amounts of each element 441 | random_composition = Composition(element_amounts) 442 | random_composition = random_composition.reduced_composition 443 | 444 | # possibly increase the number of atoms by a random (allowed) amount 445 | min_multiple = int( 446 | math.ceil(constraints.min_num_atoms/random_composition.num_atoms)) 447 | max_multiple = int( 448 | math.floor(self.max_num_atoms/random_composition.num_atoms)) 449 | if max_multiple > min_multiple: 450 | random_multiple = random.randint(min_multiple, max_multiple) 451 | bigger_composition = {} 452 | for element in random_composition: 453 | bigger_composition[element] = \ 454 | random_multiple*random_composition[element] 455 | random_composition = Composition(bigger_composition) 456 | 457 | # check the max number of atoms constraints (should be ok) 458 | if int(random_composition.num_atoms) > self.max_num_atoms: 459 | return None 460 | 461 | # check the composition - only allow endpoints if specified 462 | if not self.allow_endpoints: 463 | for endpoint in composition_space.endpoints: 464 | if endpoint.almost_equals( 465 | random_composition.reduced_composition): 466 | return None 467 | 468 | # save the element objects 469 | species = [] 470 | for specie in random_composition: 
471 | for _ in range(int(random_composition[specie])): 472 | species.append(specie) 473 | return species 474 | 475 | def get_random_endpoint_fractions(self, composition_space, random): 476 | """ 477 | Uniformly samples the composition space. Returns a list containing the 478 | fractions of each endpoint composition (that sum to 1). 479 | 480 | Args: 481 | composition_space: the CompositionSpace of the search 482 | 483 | random: a copy of Python's built-in PRNG 484 | 485 | Description: 486 | 487 | 1. Computes vectors that span the normalized composition space 488 | (e.g., the triangular facet for a ternary system) by 489 | subtracting the first composition fraction unit vector from the 490 | others. 491 | 492 | 2. Takes a random linear combination of these vectors by 493 | multiplying each one by a uniform random number and then taking 494 | their sum. 495 | 496 | 3. Adds the first composition unit vector to the result from step 2 497 | to obtain a vector with random fractions of each endpoint 498 | composition. 499 | 500 | 4. Checks that the vector from step 3 lies in the portion of the 501 | plane that corresponds to normalized amounts. This is done be 502 | checking that amount of the first endpoint composition is 503 | non-negative. If it's negative, calls itself recursively until 504 | a valid solution is found. 505 | """ 506 | 507 | # compute the vectors corresponding to the needed binary edges of the 508 | # phase diagram (w.r.t. to the first endpoint of the composition space) 509 | num_endpoints = len(composition_space.endpoints) 510 | bindary_edges = [] 511 | for i in range(1, num_endpoints): 512 | edge = [-1] 513 | for j in range(1, num_endpoints): 514 | if j == i: 515 | edge.append(1) 516 | else: 517 | edge.append(0) 518 | bindary_edges.append(np.array(edge)) 519 | 520 | # take a linear combination of the edge vectors, where the weight of 521 | # each vector is drawn from a uniform distribution 522 | weighted_average = random.random()*bindary_edges[0] 523 | for i in range(1, len(bindary_edges)): 524 | weighted_average = np.add(weighted_average, 525 | random.random()*bindary_edges[i]) 526 | 527 | # add the first unit vector to the weighted average of the edge 528 | # vectors to obtain the fractions of each endpoint 529 | endpoint_fracs = weighted_average.tolist() 530 | endpoint_fracs[0] = endpoint_fracs[0] + 1 531 | 532 | # check that the computed fraction of the first endpoint is not less 533 | # than zero. If it is, try again. 534 | if endpoint_fracs[0] < 0: 535 | return self.get_random_endpoint_fractions(composition_space, 536 | random) 537 | else: 538 | return endpoint_fracs 539 | 540 | def scale_volume(self, random_cell): 541 | """ 542 | Scales the volume of the random cell according the values in 543 | self.vpas. 544 | 545 | Returns a boolean indicating whether volume scaling was completed 546 | without errors. 
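        Example (illustrative, using values from the default volumes per
        atom): a random cell containing two Al atoms and one Cu atom would
        be scaled toward a target volume of roughly
        2*16.47 + 1*11.82 = 44.76 cubic Angstroms before scale_lattice is
        called on the cell.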
547 | 548 | Args: 549 | random_cell: the random Cell whose volume to possibly scale 550 | """ 551 | 552 | # compute the volume to scale to 553 | composition = random_cell.composition 554 | total_volume = 0 555 | for specie in composition: 556 | total_volume += composition[specie]*self.vpas[specie.symbol] 557 | 558 | # scale the volume 559 | with warnings.catch_warnings(): 560 | warnings.simplefilter('ignore') 561 | random_cell.scale_lattice(total_volume) 562 | if str(random_cell.lattice.a) == 'nan' or \ 563 | random_cell.lattice.a > 100: 564 | return False 565 | else: 566 | return True 567 | 568 | def update_status(self): 569 | ''' 570 | Increments num_made, and if necessary, updates is_finished. 571 | ''' 572 | self.num_made = self.num_made + 1 573 | print('Organisms left for {}: {} '.format( 574 | self.name, self.number - self.num_made)) 575 | if self.num_made == self.number: 576 | self.is_finished = True 577 | 578 | 579 | class FileOrganismCreator(object): 580 | """ 581 | Creates organisms from files (poscar or cif) for the initial population. 582 | """ 583 | 584 | def __init__(self, path_to_folder): 585 | """ 586 | Makes a FileOrganismCreator. 587 | 588 | Args: 589 | path_to_folder: the path to the folder containing the files from 590 | which to make organisms 591 | 592 | Precondition: the folder exists and contains files 593 | """ 594 | 595 | self.name = 'file organism creator' 596 | self.path_to_folder = path_to_folder 597 | self.files = [f for f in os.listdir(self.path_to_folder) if 598 | os.path.isfile(os.path.join(self.path_to_folder, f))] 599 | self.number = len(self.files) 600 | self.num_made = 0 # number of attempts (usually number of files given) 601 | self.is_successes_based = False # it's based on number attempted 602 | self.is_finished = False 603 | 604 | def create_organism(self, id_generator, composition_space, constraints, 605 | random): 606 | """ 607 | Creates an organism for the initial population from a poscar or cif 608 | file. 609 | 610 | Returns an organism, or None if one could not be created. 611 | 612 | Args: 613 | id_generator: the IDGenerator used to assign id numbers to all 614 | organisms 615 | 616 | composition_space: the CompositionSpace of the search 617 | 618 | constraints: the Constraints of the search 619 | 620 | random: a copy of Python's built in PRNG 621 | 622 | TODO: the last three arguments are never actually used in this method, 623 | but I included them so the method has the same arguments as 624 | RandomOrganismCreator.create_organism() to allow the 625 | create_organism method to be called on both RandomOrganismCreator 626 | and FileOrganismCreator without having to know in advance which one 627 | it is. Maybe there's a better way to deal with this... 
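        Example (illustrative): because both creator classes expose the
        same create_organism signature and the same bookkeeping attributes
        (number, num_made, is_successes_based, is_finished), the main loop
        in gasp/scripts/run.py can iterate over a mixed list of creators
        without type checks:

            for creator in organism_creators:
                new_organism = creator.create_organism(
                    id_generator, composition_space, constraints, random)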
628 | """ 629 | 630 | if self.files[self.num_made - 1].endswith('.cif') or self.files[ 631 | self.num_made - 1].startswith('POSCAR'): 632 | try: 633 | new_cell = Cell.from_file( 634 | str(self.path_to_folder) + '/' + str( 635 | self.files[self.num_made - 1])) 636 | new_org = Organism(new_cell, id_generator, self.name, 637 | composition_space) 638 | print('Making organism {} from file: {} '.format( 639 | new_org.id, self.files[self.num_made - 1])) 640 | self.update_status() 641 | return new_org 642 | except: 643 | print('Error reading structure from file: {} '.format( 644 | self.files[self.num_made - 1])) 645 | self.update_status() 646 | return None 647 | else: 648 | print('File {} has invalid extension - file must end with .cif or ' 649 | 'begin with POSCAR '.format(self.files[self.num_made - 1])) 650 | self.update_status() 651 | return None 652 | 653 | def get_cells(self): 654 | """ 655 | Creates cells from the files and puts them in a list. 656 | 657 | Returns the list of Cell objects. 658 | 659 | Used for checking if all the composition space endpoint are included 660 | for phase diagram searches. 661 | """ 662 | 663 | file_cells = [] 664 | for cell_file in self.files: 665 | if cell_file.endswith('.cif') or cell_file.startswith( 666 | 'POSCAR'): 667 | try: 668 | new_cell = Cell.from_file( 669 | str(self.path_to_folder) + "/" + str(cell_file)) 670 | file_cells.append(new_cell) 671 | except: 672 | pass 673 | return file_cells 674 | 675 | def update_status(self): 676 | """ 677 | Increments num_made, and if necessary, updates is_finished. 678 | """ 679 | 680 | self.num_made = self.num_made + 1 681 | print('Organisms left for {}: {} '.format( 682 | self.name, self.number - self.num_made)) 683 | if self.num_made == len(self.files): 684 | self.is_finished = True 685 | -------------------------------------------------------------------------------- /gasp/parameters_printer.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | 8 | """ 9 | Parameters Printer module: 10 | 11 | This module contains a function for printing the parameters 12 | of a structure search to a file for the user's reference. 13 | 14 | """ 15 | 16 | import os 17 | 18 | 19 | def print_parameters(objects_dict): 20 | """ 21 | Prints out the parameters for the search to a file called 'ga_parameters' 22 | inside the garun directory. 
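    Note: the file is written to the current working directory, so the
    caller is expected to have already changed into the garun directory,
    as gasp/scripts/run.py does before calling this function.

    Example (illustrative; condensed from the calls made in
    gasp/scripts/run.py):

        objects_dict = objects_maker.make_objects(parameters)
        os.chdir(garun_dir)   # print_parameters writes to the cwd
        parameters_printer.print_parameters(objects_dict)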
23 | 24 | Args: 25 | objects_dict: a dictionary of objects used by the algorithm, as 26 | returned by the make_objects method 27 | """ 28 | 29 | # get all the objects from the dictionary 30 | run_dir_name = objects_dict['run_dir_name'] 31 | organism_creators = objects_dict['organism_creators'] 32 | num_calcs_at_once = objects_dict['num_calcs_at_once'] 33 | composition_space = objects_dict['composition_space'] 34 | developer = objects_dict['developer'] 35 | constraints = objects_dict['constraints'] 36 | geometry = objects_dict['geometry'] 37 | redundancy_guard = objects_dict['redundancy_guard'] 38 | stopping_criteria = objects_dict['stopping_criteria'] 39 | energy_calculator = objects_dict['energy_calculator'] 40 | pool = objects_dict['pool'] 41 | variations = objects_dict['variations'] 42 | 43 | # make the file where the parameters will be printed 44 | with open(os.getcwd() + '/ga_parameters', 'w') as parameters_file: 45 | 46 | # write the title of the run 47 | run_title = run_dir_name 48 | if run_dir_name == 'garun': 49 | run_title = 'default' 50 | else: 51 | run_title = run_title[6:] 52 | parameters_file.write('RunTitle: ' + run_title + 53 | '\n') 54 | parameters_file.write('\n') 55 | 56 | # write the endpoints of the composition space 57 | parameters_file.write('CompositionSpace: \n') 58 | for endpoint in composition_space.endpoints: 59 | parameters_file.write(' - ' + 60 | endpoint.reduced_formula.replace(' ', '') + 61 | '\n') 62 | parameters_file.write('\n') 63 | 64 | # write the name of the energy code being used 65 | parameters_file.write('EnergyCode: \n') 66 | parameters_file.write(' ' + energy_calculator.name + ': \n') 67 | if energy_calculator.name == 'gulp': 68 | parameters_file.write(' header_file: ' + 69 | energy_calculator.header_path + '\n') 70 | parameters_file.write(' potential_file: ' + 71 | energy_calculator.potential_path + '\n') 72 | elif energy_calculator.name == 'lammps': 73 | parameters_file.write(' input_script: ' + 74 | energy_calculator.input_script + '\n') 75 | elif energy_calculator.name == 'vasp': 76 | parameters_file.write(' incar: ' + 77 | energy_calculator.incar_file + '\n') 78 | parameters_file.write(' kpoints: ' + 79 | energy_calculator.kpoints_file + '\n') 80 | parameters_file.write(' potcars: \n') 81 | for key in energy_calculator.potcar_files: 82 | parameters_file.write(' ' + key + ': ' + 83 | energy_calculator.potcar_files[key] + 84 | '\n') 85 | parameters_file.write('\n') 86 | 87 | # write the number of energy calculations to run at once 88 | parameters_file.write('NumCalcsAtOnce: ' + str(num_calcs_at_once) + 89 | '\n') 90 | parameters_file.write('\n') 91 | 92 | # write the methods used to create the initial population 93 | parameters_file.write('InitialPopulation: \n') 94 | for creator in organism_creators: 95 | if creator.name == 'random organism creator': 96 | parameters_file.write(' random: \n') 97 | parameters_file.write(' number: ' + 98 | str(creator.number) + '\n') 99 | parameters_file.write(' max_num_atoms: ' + 100 | str(creator.max_num_atoms) + '\n') 101 | parameters_file.write(' allow_endpoints: ' + 102 | str(creator.allow_endpoints) + '\n') 103 | parameters_file.write(' volumes_per_atom: ' + '\n') 104 | for vpa in creator.vpas: 105 | parameters_file.write(' ' + str(vpa) + ': ' + 106 | str(creator.vpas[vpa]) + '\n') 107 | elif creator.name == 'file organism creator': 108 | parameters_file.write(' from_files: \n') 109 | parameters_file.write(' number: ' + 110 | str(creator.number) + '\n') 111 | parameters_file.write(' path_to_folder: ' + 112 
| str(creator.path_to_folder) + '\n') 113 | parameters_file.write('\n') 114 | 115 | # write the pool info 116 | parameters_file.write('Pool: \n') 117 | parameters_file.write(' size: ' + str(pool.size) + '\n') 118 | parameters_file.write(' num_promoted: ' + str(pool.num_promoted) + 119 | '\n') 120 | parameters_file.write('\n') 121 | 122 | # write the selection probability distribution 123 | parameters_file.write('Selection: \n') 124 | parameters_file.write(' num_parents: ' + 125 | str(pool.selection.num_parents) + '\n') 126 | parameters_file.write(' power: ' + str(pool.selection.power) + '\n') 127 | parameters_file.write('\n') 128 | 129 | # write the composition fitness weight if phase diagram search 130 | if composition_space.objective_function == 'pd': 131 | parameters_file.write('CompositionFitnessWeight: \n') 132 | parameters_file.write(' max_weight: ' + 133 | str(pool.comp_fitness_weight.max_weight) + 134 | '\n') 135 | parameters_file.write(' power: ' + 136 | str(pool.comp_fitness_weight.power) + '\n') 137 | parameters_file.write('\n') 138 | 139 | # write the variations info 140 | parameters_file.write('Variations: \n') 141 | for variation in variations: 142 | if variation.fraction == 0: 143 | pass 144 | else: 145 | if variation.name == 'mating': 146 | parameters_file.write(' Mating: \n') 147 | parameters_file.write(' fraction: ' + 148 | str(variation.fraction) + '\n') 149 | parameters_file.write(' mu_cut_loc: ' + 150 | str(variation.mu_cut_loc) + '\n') 151 | parameters_file.write(' sigma_cut_loc: ' + 152 | str(variation.sigma_cut_loc) + '\n') 153 | parameters_file.write(' shift_prob: ' + 154 | str(variation.shift_prob) + '\n') 155 | parameters_file.write(' rotate_prob: ' + 156 | str(variation.rotate_prob) + '\n') 157 | parameters_file.write(' doubling_prob: ' + 158 | str(variation.doubling_prob) + '\n') 159 | parameters_file.write(' grow_parents: ' + 160 | str(variation.grow_parents) + '\n') 161 | parameters_file.write(' merge_cutoff: ' + 162 | str(variation.merge_cutoff) + '\n') 163 | 164 | elif variation.name == 'structure mutation': 165 | parameters_file.write(' StructureMut: \n') 166 | parameters_file.write(' fraction: ' + 167 | str(variation.fraction) + '\n') 168 | parameters_file.write(' frac_atoms_perturbed: ' + 169 | str(variation.frac_atoms_perturbed) + 170 | '\n') 171 | parameters_file.write( 172 | ' sigma_atomic_coord_perturbation: ' + 173 | str(variation.sigma_atomic_coord_perturbation) + '\n') 174 | parameters_file.write( 175 | ' max_atomic_coord_perturbation: ' + 176 | str(variation.max_atomic_coord_perturbation) + '\n') 177 | parameters_file.write( 178 | ' sigma_strain_matrix_element: ' + 179 | str(variation.sigma_strain_matrix_element) + '\n') 180 | 181 | elif variation.name == 'number of atoms mutation': 182 | parameters_file.write(' NumAtomsMut: \n') 183 | parameters_file.write(' fraction: ' + 184 | str(variation.fraction) + '\n') 185 | parameters_file.write(' mu_num_adds: ' + 186 | str(variation.mu_num_adds) + '\n') 187 | parameters_file.write(' sigma_num_adds: ' + 188 | str(variation.sigma_num_adds) + '\n') 189 | parameters_file.write(' scale_volume: ' + 190 | str(variation.scale_volume) + '\n') 191 | 192 | elif variation.name == 'permutation': 193 | parameters_file.write(' Permutation: \n') 194 | parameters_file.write(' fraction: ' + 195 | str(variation.fraction) + '\n') 196 | parameters_file.write(' mu_num_swaps: ' + 197 | str(variation.mu_num_swaps) + '\n') 198 | parameters_file.write(' sigma_num_swaps: ' + 199 | str(variation.sigma_num_swaps) + 200 | '\n') 201 
| parameters_file.write(' pairs_to_swap: \n') 202 | for pair in variation.pairs_to_swap: 203 | parameters_file.write(' - ' + pair + '\n') 204 | parameters_file.write('\n') 205 | 206 | # write the development info 207 | parameters_file.write('Development: \n') 208 | parameters_file.write(' niggli: ' + str(developer.niggli) + '\n') 209 | parameters_file.write(' scale_density: ' + 210 | str(developer.scale_density) + '\n') 211 | parameters_file.write('\n') 212 | 213 | # write the constraints info 214 | parameters_file.write('Constraints: \n') 215 | parameters_file.write(' min_num_atoms: ' + 216 | str(constraints.min_num_atoms) + '\n') 217 | parameters_file.write(' max_num_atoms: ' + 218 | str(constraints.max_num_atoms) + '\n') 219 | parameters_file.write(' min_lattice_length: ' + 220 | str(constraints.min_lattice_length) + '\n') 221 | parameters_file.write(' max_lattice_length: ' + 222 | str(constraints.max_lattice_length) + '\n') 223 | parameters_file.write(' min_lattice_angle: ' + 224 | str(constraints.min_lattice_angle) + '\n') 225 | parameters_file.write(' max_lattice_angle: ' + 226 | str(constraints.max_lattice_angle) + '\n') 227 | parameters_file.write(' allow_endpoints: ' + 228 | str(constraints.allow_endpoints) + '\n') 229 | parameters_file.write(' per_species_mids: \n') 230 | for pair in constraints.per_species_mids: 231 | parameters_file.write(' ' + pair + ': ' + 232 | str(float( 233 | constraints.per_species_mids[pair])) + 234 | '\n') 235 | parameters_file.write('\n') 236 | 237 | # write the redundancy guard info 238 | parameters_file.write('RedundancyGuard: \n') 239 | parameters_file.write(' lattice_length_tol: ' + 240 | str(redundancy_guard.lattice_length_tol) + '\n') 241 | parameters_file.write(' lattice_angle_tol: ' + 242 | str(redundancy_guard.lattice_angle_tol) + '\n') 243 | parameters_file.write(' site_tol: ' + 244 | str(redundancy_guard.site_tol) + '\n') 245 | parameters_file.write(' use_primitive_cell: ' + 246 | str(redundancy_guard.use_primitive_cell) + '\n') 247 | parameters_file.write(' attempt_supercell: ' + 248 | str(redundancy_guard.attempt_supercell) + '\n') 249 | parameters_file.write(' rmsd_tol: ' + 250 | str(redundancy_guard.rmsd_tol) + '\n') 251 | parameters_file.write(' epa_diff: ' + 252 | str(redundancy_guard.epa_diff) + '\n') 253 | parameters_file.write('\n') 254 | 255 | # write the geometry info 256 | parameters_file.write('Geometry: \n') 257 | parameters_file.write(' shape: ' + geometry.shape + '\n') 258 | parameters_file.write(' max_size: ' + str(geometry.max_size) + '\n') 259 | parameters_file.write(' min_size: ' + str(geometry.min_size) + '\n') 260 | parameters_file.write(' padding: ' + str(geometry.padding) + '\n') 261 | parameters_file.write('\n') 262 | 263 | # write the stopping criteria 264 | parameters_file.write('StoppingCriteria: \n') 265 | if stopping_criteria.num_energy_calcs is not None: 266 | parameters_file.write(' num_energy_calcs: ' + 267 | str(stopping_criteria.num_energy_calcs) + 268 | '\n') 269 | if stopping_criteria.epa_achieved is not None: 270 | parameters_file.write(' epa_achieved: ' + 271 | str(stopping_criteria.epa_achieved) + '\n') 272 | if stopping_criteria.found_cell is not None: 273 | parameters_file.write(' found_structure: ' + 274 | stopping_criteria.path_to_structure_file + 275 | '\n') 276 | parameters_file.write('\n') 277 | -------------------------------------------------------------------------------- /gasp/post_processing/__init__.py: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/henniggroup/GASP-python/0c8d993c82e0e1c69a05b3c34bbb2fcbbdbb7f07/gasp/post_processing/__init__.py -------------------------------------------------------------------------------- /gasp/post_processing/plotter.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | """ 8 | Plotter module: 9 | 10 | This module contains the Plotter class, which is used to plot various data 11 | from the genetic algorithm structure search. 12 | 13 | """ 14 | 15 | from pymatgen.core.composition import Composition 16 | from pymatgen.phasediagram.entries import PDEntry 17 | from pymatgen.phasediagram.maker import CompoundPhaseDiagram 18 | from pymatgen.phasediagram.plotter import PDPlotter 19 | 20 | import matplotlib.pyplot as plt 21 | import os 22 | 23 | 24 | class Plotter(object): 25 | """ 26 | Used to to plot various data from a structure search. 27 | """ 28 | 29 | def __init__(self, data_file_path): 30 | """ 31 | Makes a Plotter. 32 | 33 | Args: 34 | data_file_path: the path to file (called run_data) containing the 35 | data for the search 36 | """ 37 | 38 | # get the input file contents 39 | input_file = os.path.abspath(data_file_path) 40 | try: 41 | with open(input_file) as input_data: 42 | self.lines = input_data.readlines() 43 | except: 44 | print('Error reading data file.') 45 | print('Quitting...') 46 | quit() 47 | 48 | def get_progress_plot(self): 49 | """ 50 | Returns a plot of the best value versus the number of energy 51 | calculations, as a matplotlib plot object. 52 | """ 53 | 54 | # set the font to Times, rendered with Latex 55 | plt.rc('font', **{'family': 'serif', 'serif': ['Times']}) 56 | plt.rc('text', usetex=True) 57 | 58 | # parse the number of composition space endpoints 59 | endpoints_line = self.lines[0].split() 60 | endpoints = [] 61 | for word in endpoints_line[::-1]: 62 | if word == 'endpoints:': 63 | break 64 | else: 65 | endpoints.append(word) 66 | num_endpoints = len(endpoints) 67 | 68 | if num_endpoints == 1: 69 | y_label = r'Best value (eV/atom)' 70 | elif num_endpoints == 2: 71 | y_label = r'Area of convex hull' 72 | else: 73 | y_label = r'Volume of convex hull' 74 | 75 | # parse the best values and numbers of energy calculations 76 | best_values = [] 77 | num_calcs = [] 78 | for i in range(4, len(self.lines)): 79 | line = self.lines[i].split() 80 | num_calcs.append(int(line[4])) 81 | best_values.append(line[5]) 82 | 83 | # check for None best values 84 | none_indices = [] 85 | for value in best_values: 86 | if value == 'None': 87 | none_indices.append(best_values.index(value)) 88 | 89 | for index in none_indices: 90 | del best_values[index] 91 | del num_calcs[index] 92 | 93 | # make the plot 94 | plt.plot(num_calcs, best_values, color='blue', linewidth=2) 95 | plt.xlabel(r'Number of energy calculations', fontsize=22) 96 | plt.ylabel(y_label, fontsize=22) 97 | plt.tick_params(which='both', width=1, labelsize=18) 98 | plt.tick_params(which='major', length=8) 99 | plt.tick_params(which='minor', length=4) 100 | plt.xlim(xmin=0) 101 | plt.tight_layout() 102 | return plt 103 | 104 | def plot_progress(self): 105 | """ 106 | Plots the best value versus the number of energy calculations. 
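        Example (illustrative; the path to the run_data file is
        hypothetical):

            from gasp.post_processing.plotter import Plotter

            plotter = Plotter('/path/to/garun/run_data')
            plotter.plot_progress()

            # or save the figure to a file instead of displaying it
            progress_plot = plotter.get_progress_plot()
            progress_plot.savefig('progress.png')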
107 | """ 108 | 109 | self.get_progress_plot().show() 110 | 111 | def get_system_size_plot(self): 112 | """ 113 | Returns a plot of the system size versus the number of energy 114 | calculations, as a matplotlib plot object. 115 | """ 116 | 117 | # set the font to Times, rendered with Latex 118 | plt.rc('font', **{'family': 'serif', 'serif': ['Times']}) 119 | plt.rc('text', usetex=True) 120 | 121 | # parse the compositions and numbers of energy calculations 122 | compositions = [] 123 | num_calcs = [] 124 | for i in range(4, len(self.lines)): 125 | line = self.lines[i].split() 126 | compositions.append(line[1]) 127 | num_calcs.append(int(line[4])) 128 | 129 | # get the numbers of atoms from the compositions 130 | nums_atoms = [] 131 | for composition in compositions: 132 | comp = Composition(composition) 133 | nums_atoms.append(comp.num_atoms) 134 | 135 | # make the plot 136 | plt.plot(num_calcs, nums_atoms, 'D', markersize=5, 137 | markeredgecolor='blue', markerfacecolor='blue') 138 | plt.xlabel(r'Number of energy calculations', fontsize=22) 139 | plt.ylabel(r'Number of atoms in the cell', fontsize=22) 140 | plt.tick_params(which='both', width=1, labelsize=18) 141 | plt.tick_params(which='major', length=8) 142 | plt.tick_params(which='minor', length=4) 143 | plt.xlim(xmin=0) 144 | plt.ylim(ymin=0) 145 | plt.tight_layout() 146 | return plt 147 | 148 | def plot_system_size(self): 149 | """ 150 | Plots the system size versus the number of energy calculations. 151 | """ 152 | 153 | self.get_system_size_plot().show() 154 | 155 | def get_phase_diagram_plot(self): 156 | """ 157 | Returns a phase diagram plot, as a matplotlib plot object. 158 | """ 159 | 160 | # set the font to Times, rendered with Latex 161 | plt.rc('font', **{'family': 'serif', 'serif': ['Times']}) 162 | plt.rc('text', usetex=True) 163 | 164 | # parse the composition space endpoints 165 | endpoints_line = self.lines[0].split() 166 | endpoints = [] 167 | for word in endpoints_line[::-1]: 168 | if word == 'endpoints:': 169 | break 170 | else: 171 | endpoints.append(Composition(word)) 172 | 173 | if len(endpoints) < 2: 174 | print('There must be at least 2 endpoint compositions to make a ' 175 | 'phase diagram.') 176 | quit() 177 | 178 | # parse the compositions and total energies of all the structures 179 | compositions = [] 180 | total_energies = [] 181 | for i in range(4, len(self.lines)): 182 | line = self.lines[i].split() 183 | compositions.append(Composition(line[1])) 184 | total_energies.append(float(line[2])) 185 | 186 | # make a list of PDEntries 187 | pdentries = [] 188 | for i in range(len(compositions)): 189 | pdentries.append(PDEntry(compositions[i], total_energies[i])) 190 | 191 | # make a CompoundPhaseDiagram 192 | compound_pd = CompoundPhaseDiagram(pdentries, endpoints) 193 | 194 | # make a PhaseDiagramPlotter 195 | pd_plotter = PDPlotter(compound_pd, show_unstable=100) 196 | return pd_plotter.get_plot(label_unstable=False) 197 | 198 | def plot_phase_diagram(self): 199 | """ 200 | Plots the phase diagram. 201 | """ 202 | 203 | self.get_phase_diagram_plot().show() 204 | -------------------------------------------------------------------------------- /gasp/scripts/plot_phase_diagram.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | """ 8 | Utility script for making a phase diagram plot. 
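The run_data file is written to the garun directory during a search. A phase
diagram can only be constructed from a phase diagram search, i.e., the
run_data file must contain at least two endpoint compositions.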
9 | 10 | Usage: python plot_phase_diagram.py /path/to/run_data/file 11 | """ 12 | 13 | from gasp.post_processing.plotter import Plotter 14 | 15 | import sys 16 | 17 | plotter = Plotter(sys.argv[1]) 18 | plotter.plot_phase_diagram() 19 | -------------------------------------------------------------------------------- /gasp/scripts/plot_progress.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | """ 8 | Utility script for making a progress plot. 9 | 10 | Usage: python plot_progress.py /path/to/run_data/file 11 | """ 12 | 13 | from gasp.post_processing.plotter import Plotter 14 | 15 | import sys 16 | 17 | plotter = Plotter(sys.argv[1]) 18 | plotter.plot_progress() 19 | -------------------------------------------------------------------------------- /gasp/scripts/plot_system_size.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | """ 8 | Utility script for making a plot of the system size. 9 | 10 | Usage: python plot_system_size.py /path/to/run_data/file 11 | """ 12 | 13 | from gasp.post_processing.plotter import Plotter 14 | 15 | import sys 16 | 17 | plotter = Plotter(sys.argv[1]) 18 | plotter.plot_system_size() 19 | -------------------------------------------------------------------------------- /gasp/scripts/replace_tabs.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copywrite (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | """ 8 | Utility script for replacing each tab in a file with four spaces. 9 | 10 | Usage: python replace_tabs.py /path/to/file 11 | """ 12 | 13 | import sys 14 | 15 | with open(sys.argv[1], 'r') as in_file: 16 | in_file_lines = in_file.readlines() 17 | 18 | handled_lines = [] 19 | for line in in_file_lines: 20 | handled_lines.append(line.replace('\t', ' ')) 21 | 22 | with open(sys.argv[1], 'w') as in_file: 23 | for line in handled_lines: 24 | in_file.write(line) 25 | -------------------------------------------------------------------------------- /gasp/scripts/run.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | # Copyright (c) Henniggroup. 3 | # Distributed under the terms of the MIT License. 4 | 5 | from __future__ import division, unicode_literals, print_function 6 | 7 | """ 8 | Run module: 9 | 10 | This module is run to do a genetic algorithm structure search. 
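It can be invoked directly with Python or, if the package has been installed
with setup.py, through the run_gasp console script defined in setup.py's
entry_points, which points at the main() function of this module and takes
the same input file argument.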
11 | 12 | Usage: python run.py /path/to/gasp/input/file 13 | 14 | """ 15 | 16 | from gasp import general 17 | from gasp import population 18 | from gasp import objects_maker 19 | from gasp import parameters_printer 20 | 21 | import copy 22 | import threading 23 | import random 24 | import yaml 25 | import sys 26 | import os 27 | import datetime 28 | 29 | 30 | def main(): 31 | # get dictionaries from the input file (in yaml format) 32 | if len(sys.argv) < 2: 33 | print('No input file given.') 34 | print('Quitting...') 35 | quit() 36 | else: 37 | input_file = os.path.abspath(sys.argv[1]) 38 | 39 | try: 40 | with open(input_file, 'r') as f: 41 | parameters = yaml.load(f) 42 | except: 43 | print('Error reading input file.') 44 | print('Quitting...') 45 | quit() 46 | 47 | # make the objects needed by the algorithm 48 | objects_dict = objects_maker.make_objects(parameters) 49 | 50 | # get the objects from the dictionary for convenience 51 | run_dir_name = objects_dict['run_dir_name'] 52 | organism_creators = objects_dict['organism_creators'] 53 | num_calcs_at_once = objects_dict['num_calcs_at_once'] 54 | composition_space = objects_dict['composition_space'] 55 | constraints = objects_dict['constraints'] 56 | geometry = objects_dict['geometry'] 57 | developer = objects_dict['developer'] 58 | redundancy_guard = objects_dict['redundancy_guard'] 59 | stopping_criteria = objects_dict['stopping_criteria'] 60 | energy_calculator = objects_dict['energy_calculator'] 61 | pool = objects_dict['pool'] 62 | variations = objects_dict['variations'] 63 | id_generator = objects_dict['id_generator'] 64 | 65 | # get the path to the run directory - append date and time if 66 | # the given or default run directory already exists 67 | garun_dir = str(os.getcwd()) + '/' + run_dir_name 68 | if os.path.isdir(garun_dir): 69 | print('Directory {} already exists'.format(garun_dir)) 70 | time = datetime.datetime.now().time() 71 | date = datetime.datetime.now().date() 72 | current_date = str(date.month) + '_' + str(date.day) + '_' + \ 73 | str(date.year) 74 | current_time = str(time.hour) + '_' + str(time.minute) + '_' + \ 75 | str(time.second) 76 | garun_dir += '_' + current_date + '_' + current_time 77 | print('Setting the run directory to {}'.format(garun_dir)) 78 | 79 | # make the run directory and move into it 80 | os.mkdir(garun_dir) 81 | os.chdir(garun_dir) 82 | 83 | # make the temp subdirectory where the energy calculations will be done 84 | os.mkdir(garun_dir + '/temp') 85 | 86 | # print the search parameters to a file in the run directory 87 | parameters_printer.print_parameters(objects_dict) 88 | 89 | # make the data writer 90 | data_writer = general.DataWriter(garun_dir + '/run_data', 91 | composition_space) 92 | 93 | whole_pop = [] 94 | num_finished_calcs = 0 95 | threads = [] 96 | initial_population = population.InitialPopulation(run_dir_name) 97 | 98 | # To temporarily hold relaxed organisms. The key to each relaxed organism 99 | # is the index of the Thread in the list threads that did the energy 100 | # calculation. 
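    # The energy calculator fills in this dictionary from the worker thread:
    # when threads[i] finishes, relaxed_organisms[i] holds the relaxed
    # Organism (or None if no relaxed organism was produced), and the entry
    # is set back to None once the loop below has processed it.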
101 | relaxed_organisms = {} 102 | 103 | # populate the initial population 104 | for creator in organism_creators: 105 | print('Making {} organisms with {}'.format(creator.number, 106 | creator.name)) 107 | while not creator.is_finished and not stopping_criteria.are_satisfied: 108 | 109 | # start initial batch of energy calculations 110 | if len(threads) < num_calcs_at_once: 111 | # make a new organism - keep trying until we get one 112 | new_organism = creator.create_organism( 113 | id_generator, composition_space, constraints, random) 114 | while new_organism is None and not creator.is_finished: 115 | new_organism = creator.create_organism( 116 | id_generator, composition_space, constraints, random) 117 | if new_organism is not None: # loop above could return None 118 | geometry.unpad(new_organism.cell, constraints) 119 | if developer.develop(new_organism, composition_space, 120 | constraints, geometry, pool): 121 | redundant_organism = redundancy_guard.check_redundancy( 122 | new_organism, whole_pop, geometry) 123 | if redundant_organism is None: # no redundancy 124 | # add a copy to whole_pop so the organisms in 125 | # whole_pop don't change upon relaxation 126 | whole_pop.append(copy.deepcopy(new_organism)) 127 | geometry.pad(new_organism.cell) 128 | stopping_criteria.update_calc_counter() 129 | index = len(threads) 130 | thread = threading.Thread( 131 | target=energy_calculator.do_energy_calculation, 132 | args=[new_organism, relaxed_organisms, 133 | index, composition_space]) 134 | thread.start() 135 | threads.append(thread) 136 | 137 | # process finished calculations and start new ones 138 | else: 139 | for index, thread in enumerate(threads): 140 | if not thread.is_alive(): 141 | num_finished_calcs += 1 142 | relaxed_organism = relaxed_organisms[index] 143 | relaxed_organisms[index] = None 144 | 145 | # take care of relaxed organism 146 | if relaxed_organism is not None: 147 | geometry.unpad(relaxed_organism.cell, constraints) 148 | if developer.develop(relaxed_organism, 149 | composition_space, 150 | constraints, geometry, pool): 151 | redundant_organism = \ 152 | redundancy_guard.check_redundancy( 153 | relaxed_organism, whole_pop, geometry) 154 | if redundant_organism is not None: # redundant 155 | if redundant_organism.is_active and \ 156 | redundant_organism.epa > \ 157 | relaxed_organism.epa: 158 | initial_population.replace_organism( 159 | redundant_organism, 160 | relaxed_organism, 161 | composition_space) 162 | progress = \ 163 | initial_population.get_progress( 164 | composition_space) 165 | data_writer.write_data( 166 | relaxed_organism, 167 | num_finished_calcs, progress) 168 | print('Number of energy calculations ' 169 | 'so far: {} '.format( 170 | num_finished_calcs)) 171 | else: # not redundant 172 | stopping_criteria.check_organism( 173 | relaxed_organism, redundancy_guard, 174 | geometry) 175 | initial_population.add_organism( 176 | relaxed_organism, composition_space) 177 | whole_pop.append(relaxed_organism) 178 | progress = \ 179 | initial_population.get_progress( 180 | composition_space) 181 | data_writer.write_data( 182 | relaxed_organism, num_finished_calcs, 183 | progress) 184 | print('Number of energy calculations so ' 185 | 'far: {} '.format( 186 | num_finished_calcs)) 187 | if creator.is_successes_based and \ 188 | relaxed_organism.made_by == \ 189 | creator.name: 190 | creator.update_status() 191 | 192 | # make another organism for the initial population 193 | started_new_calc = False 194 | while not started_new_calc and not creator.is_finished: 195 | 
new_organism = creator.create_organism( 196 | id_generator, composition_space, 197 | constraints, random) 198 | while new_organism is None and not \ 199 | creator.is_finished: 200 | new_organism = creator.create_organism( 201 | id_generator, composition_space, 202 | constraints, random) 203 | if new_organism is not None: 204 | geometry.unpad(new_organism.cell, constraints) 205 | if developer.develop(new_organism, 206 | composition_space, 207 | constraints, geometry, 208 | pool): 209 | redundant_organism = \ 210 | redundancy_guard.check_redundancy( 211 | new_organism, whole_pop, geometry) 212 | if redundant_organism is None: # not redundant 213 | whole_pop.append( 214 | copy.deepcopy(new_organism)) 215 | geometry.pad(new_organism.cell) 216 | stopping_criteria.update_calc_counter() 217 | new_thread = threading.Thread( 218 | target=energy_calculator.do_energy_calculation, 219 | args=[new_organism, 220 | relaxed_organisms, index, 221 | composition_space]) 222 | new_thread.start() 223 | threads[index] = new_thread 224 | started_new_calc = True 225 | 226 | # depending on how the loop above exited, update bookkeeping 227 | if not stopping_criteria.are_satisfied: 228 | num_finished_calcs = num_finished_calcs - 1 229 | 230 | # process all the calculations that were still running when the last 231 | # creator finished 232 | num_to_get = num_calcs_at_once # number of threads left to handle 233 | handled_indices = [] # the indices of the threads we've already handled 234 | while num_to_get > 0: 235 | for index, thread in enumerate(threads): 236 | if not thread.is_alive() and index not in handled_indices: 237 | num_finished_calcs += 1 238 | relaxed_organism = relaxed_organisms[index] 239 | num_to_get = num_to_get - 1 240 | handled_indices.append(index) 241 | relaxed_organisms[index] = None 242 | 243 | # take care of relaxed organism 244 | if relaxed_organism is not None: 245 | geometry.unpad(relaxed_organism.cell, constraints) 246 | if developer.develop(relaxed_organism, composition_space, 247 | constraints, geometry, pool): 248 | redundant_organism = redundancy_guard.check_redundancy( 249 | relaxed_organism, whole_pop, geometry) 250 | if redundant_organism is not None: # redundant 251 | if redundant_organism.is_active and \ 252 | redundant_organism.epa > \ 253 | relaxed_organism.epa: 254 | initial_population.replace_organism( 255 | redundant_organism, relaxed_organism, 256 | composition_space) 257 | progress = initial_population.get_progress( 258 | composition_space) 259 | data_writer.write_data(relaxed_organism, 260 | num_finished_calcs, 261 | progress) 262 | print('Number of energy calculations so far: ' 263 | '{} '.format(num_finished_calcs)) 264 | else: # no redundancy 265 | stopping_criteria.check_organism( 266 | relaxed_organism, redundancy_guard, geometry) 267 | initial_population.add_organism(relaxed_organism, 268 | composition_space) 269 | whole_pop.append(relaxed_organism) 270 | progress = initial_population.get_progress( 271 | composition_space) 272 | data_writer.write_data(relaxed_organism, 273 | num_finished_calcs, 274 | progress) 275 | print('Number of energy calculations so far: ' 276 | '{} '.format(num_finished_calcs)) 277 | 278 | # check if the stopping criteria were already met when making the initial 279 | # population 280 | if stopping_criteria.are_satisfied: 281 | quit() 282 | 283 | # populate the pool with the initial population 284 | pool.add_initial_population(initial_population, composition_space) 285 | 286 | # To temporarily hold relaxed organisms. 
The key to each relaxed organism 287 | # is the index of the Thread in the list threads that did the energy 288 | # calculation. 289 | relaxed_organisms = {} 290 | 291 | offspring_generator = general.OffspringGenerator() 292 | threads = [] 293 | 294 | # create the initial batch of offspring organisms and submit them for 295 | # energy calculations 296 | for _ in range(num_calcs_at_once): 297 | unrelaxed_offspring = offspring_generator.make_offspring_organism( 298 | random, pool, variations, geometry, id_generator, whole_pop, 299 | developer, redundancy_guard, composition_space, constraints) 300 | whole_pop.append(copy.deepcopy(unrelaxed_offspring)) 301 | geometry.pad(unrelaxed_offspring.cell) 302 | stopping_criteria.update_calc_counter() 303 | index = len(threads) 304 | new_thread = threading.Thread( 305 | target=energy_calculator.do_energy_calculation, 306 | args=[unrelaxed_offspring, relaxed_organisms, index, 307 | composition_space]) 308 | new_thread.start() 309 | threads.append(new_thread) 310 | 311 | # process finished calculations and start new ones 312 | while not stopping_criteria.are_satisfied: 313 | for index, thread in enumerate(threads): 314 | if not thread.is_alive(): 315 | num_finished_calcs += 1 316 | relaxed_offspring = relaxed_organisms[index] 317 | relaxed_organisms[index] = None 318 | 319 | # take care of relaxed offspring organism 320 | if relaxed_offspring is not None: 321 | geometry.unpad(relaxed_offspring.cell, constraints) 322 | if developer.develop(relaxed_offspring, composition_space, 323 | constraints, geometry, pool): 324 | # check for redundancy with the the pool first 325 | redundant_organism = redundancy_guard.check_redundancy( 326 | relaxed_offspring, pool.to_list(), geometry) 327 | if redundant_organism is not None: # redundant 328 | if redundant_organism.epa > relaxed_offspring.epa: 329 | pool.replace_organism(redundant_organism, 330 | relaxed_offspring, 331 | composition_space) 332 | pool.compute_fitnesses() 333 | pool.compute_selection_probs() 334 | pool.print_summary(composition_space) 335 | progress = pool.get_progress(composition_space) 336 | data_writer.write_data(relaxed_offspring, 337 | num_finished_calcs, 338 | progress) 339 | print('Number of energy calculations so far: ' 340 | '{} '.format(num_finished_calcs)) 341 | # check for redundancy with all the organisms 342 | else: 343 | redundant_organism = \ 344 | redundancy_guard.check_redundancy( 345 | relaxed_offspring, whole_pop, geometry) 346 | if redundant_organism is None: # not redundant 347 | stopping_criteria.check_organism( 348 | relaxed_offspring, redundancy_guard, geometry) 349 | pool.add_organism(relaxed_offspring, 350 | composition_space) 351 | whole_pop.append(relaxed_offspring) 352 | 353 | # check if we've added enough new offspring 354 | # organisms to the pool that we can remove the 355 | # initial population organisms from the front 356 | # (right end) of the queue. 357 | if pool.num_adds == pool.size: 358 | print('Removing the initial population from ' 359 | 'the pool ') 360 | for _ in range(len( 361 | initial_population.initial_population)): 362 | removed_org = pool.queue.pop() 363 | removed_org.is_active = False 364 | print('Removing organism {} from the ' 365 | 'pool '.format(removed_org.id)) 366 | 367 | # if the initial population organisms have already 368 | # been removed from the pool's queue, then just 369 | # need to pop one organism from the front (right 370 | # end) of the queue. 
371 | elif pool.num_adds > pool.size: 372 | removed_org = pool.queue.pop() 373 | removed_org.is_active = False 374 | print('Removing organism {} from the ' 375 | 'pool '.format(removed_org.id)) 376 | 377 | pool.compute_fitnesses() 378 | pool.compute_selection_probs() 379 | pool.print_summary(composition_space) 380 | progress = pool.get_progress(composition_space) 381 | data_writer.write_data(relaxed_offspring, 382 | num_finished_calcs, 383 | progress) 384 | print('Number of energy calculations so far: ' 385 | '{} '.format(num_finished_calcs)) 386 | 387 | # make another offspring organism 388 | if not stopping_criteria.are_satisfied: 389 | unrelaxed_offspring = \ 390 | offspring_generator.make_offspring_organism( 391 | random, pool, variations, geometry, id_generator, 392 | whole_pop, developer, redundancy_guard, 393 | composition_space, constraints) 394 | whole_pop.append(copy.deepcopy(unrelaxed_offspring)) 395 | geometry.pad(unrelaxed_offspring.cell) 396 | stopping_criteria.update_calc_counter() 397 | new_thread = threading.Thread( 398 | target=energy_calculator.do_energy_calculation, 399 | args=[unrelaxed_offspring, relaxed_organisms, 400 | index, composition_space]) 401 | new_thread.start() 402 | threads[index] = new_thread 403 | 404 | # process all the calculations that were still running when the 405 | # stopping criteria were achieved 406 | num_to_get = num_calcs_at_once # how many threads we have left to handle 407 | handled_indices = [] # the indices of the threads we've already handled 408 | while num_to_get > 0: 409 | for index, thread in enumerate(threads): 410 | if not thread.is_alive() and index not in handled_indices: 411 | num_finished_calcs += 1 412 | relaxed_offspring = relaxed_organisms[index] 413 | num_to_get -= 1 414 | handled_indices.append(index) 415 | relaxed_organisms[index] = None 416 | 417 | # take care of relaxed offspring organism 418 | if relaxed_offspring is not None: 419 | geometry.unpad(relaxed_offspring.cell, constraints) 420 | if developer.develop(relaxed_offspring, composition_space, 421 | constraints, geometry, pool): 422 | # check for redundancy with the pool first 423 | redundant_organism = redundancy_guard.check_redundancy( 424 | relaxed_offspring, pool.to_list(), geometry) 425 | if redundant_organism is not None: # redundant 426 | if redundant_organism.epa > relaxed_offspring.epa: 427 | pool.replace_organism(redundant_organism, 428 | relaxed_offspring, 429 | composition_space) 430 | pool.compute_fitnesses() 431 | pool.compute_selection_probs() 432 | pool.print_summary(composition_space) 433 | progress = pool.get_progress(composition_space) 434 | data_writer.write_data(relaxed_offspring, 435 | num_finished_calcs, 436 | progress) 437 | print('Number of energy calculations so far: ' 438 | '{} '.format(num_finished_calcs)) 439 | # check for redundancy with all the organisms 440 | else: 441 | redundant_organism = \ 442 | redundancy_guard.check_redundancy( 443 | relaxed_offspring, whole_pop, geometry) 444 | if redundant_organism is None: # not redundant 445 | pool.add_organism(relaxed_offspring, 446 | composition_space) 447 | whole_pop.append(relaxed_offspring) 448 | removed_org = pool.queue.pop() 449 | removed_org.is_active = False 450 | print('Removing organism {} from the pool '.format( 451 | removed_org.id)) 452 | pool.compute_fitnesses() 453 | pool.compute_selection_probs() 454 | pool.print_summary(composition_space) 455 | progress = pool.get_progress(composition_space) 456 | data_writer.write_data(relaxed_offspring, 457 | num_finished_calcs, 458 | 
progress) 459 | print('Number of energy calculations so far: ' 460 | '{} '.format(num_finished_calcs)) 461 | 462 | 463 | if __name__ == "__main__": 464 | main() 465 | -------------------------------------------------------------------------------- /gasp/tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/henniggroup/GASP-python/0c8d993c82e0e1c69a05b3c34bbb2fcbbdbb7f07/gasp/tests/__init__.py -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import os 4 | import glob 5 | 6 | from setuptools import setup, find_packages 7 | 8 | module_dir = os.path.dirname(os.path.abspath(__file__)) 9 | 10 | if __name__ == "__main__": 11 | setup( 12 | name='GASP', 13 | version='0.1', 14 | description='Genetic algorithm for structure and phase prediction', 15 | long_description=open(os.path.join(module_dir, 'README.rst')).read(), 16 | url='https://github.com/henniggroup/GASP-python', 17 | author='Benjamin Revard', 18 | author_email='bcr48@cornell.edu', 19 | license='MIT', 20 | packages=find_packages(), 21 | package_data={}, 22 | zip_safe=False, 23 | install_requires=['pymatgen>=4.5.2'], 24 | classifiers=['Programming Language :: Python :: 2.7', 25 | "Programming Language :: Python :: 3", 26 | "Programming Language :: Python :: 3.5", 27 | 'Development Status :: 4 - Beta', 28 | 'Intended Audience :: Science/Research', 29 | 'Operating System :: OS Independent', 30 | 'Topic :: Other/Nonlisted Topic', 31 | 'Topic :: Scientific/Engineering'], 32 | test_suite='nose.collector', 33 | tests_require=['nose'], 34 | scripts=glob.glob(os.path.join(module_dir, "gasp", "scripts", "*")), 35 | entry_points={ 36 | 'console_scripts': ['run_gasp = gasp.scripts.run:main'] 37 | } 38 | ) 39 | --------------------------------------------------------------------------------