├── .gitignore
├── .readthedocs.yaml
├── LICENSE
├── README.md
├── bin
│   ├── config_example.yaml
│   └── pyrecon
├── doc
│   ├── Makefile
│   ├── api
│   │   └── api.rst
│   ├── conf.py
│   ├── developer
│   │   ├── changes.rst
│   │   ├── contributing.rst
│   │   ├── documentation.rst
│   │   └── tests.rst
│   ├── index.rst
│   ├── make.bat
│   ├── requirements.txt
│   └── user
│       └── building.rst
├── nb
│   ├── basic_examples.ipynb
│   └── e2e_examples.ipynb
├── pyproject.toml
├── pyrecon
│   ├── __init__.py
│   ├── _multigrid.pyx
│   ├── _multigrid_generics.h
│   ├── _multigrid_imp.h
│   ├── _version.py
│   ├── iterative_fft.py
│   ├── iterative_fft_particle.py
│   ├── mesh.py
│   ├── metrics.py
│   ├── mpi.py
│   ├── multigrid.py
│   ├── plane_parallel_fft.py
│   ├── recon.py
│   ├── tests
│   │   ├── config_iterativefft_particle.yaml
│   │   ├── config_iterativefft_particle_no_randoms.yaml
│   │   ├── config_multigrid.yaml
│   │   ├── config_multigrid_no_randoms.yaml
│   │   ├── test_iterative_fft.py
│   │   ├── test_iterative_fft_particle.py
│   │   ├── test_mesh.py
│   │   ├── test_metrics.py
│   │   ├── test_multigrid.py
│   │   ├── test_plane_parallel_fft.py
│   │   ├── test_utils.py
│   │   ├── test_zevolve.py
│   │   └── utils.py
│   ├── utils.h
│   └── utils.py
└── setup.py

/.gitignore:
--------------------------------------------------------------------------------
1 | **/__pycache__
2 | **/pytest_cache
3 | **/*.egg
4 | **/*.egg-info
5 | **/.cache
6 | **/.config
7 | **/.wget*
8 | **/dist
9 | **/eggs
10 | **/build
11 | **/.ipynb_checkpoints
12 | **/*.npy
13 | **/*.rdzw
14 | **/*.xyzw
15 | **/*.blg
16 | **/*.bbl
17 | **/*.pdf
18 | **/*.aux
19 | **/*.log
20 | **/*.out
21 | **/_build
22 | **/bak*
23 | **/lib
24 | **/wisdom*
25 | **/_tests
26 | **/_catalogs
27 | **/_codes
28 | **/*_multigrid.c
29 | **/*.so
30 |
31 | # Unit test / coverage reports
32 | **/.coverage
--------------------------------------------------------------------------------
/.readthedocs.yaml:
--------------------------------------------------------------------------------
1 | # .readthedocs.yml
2 | # Read the Docs configuration file
3 | # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
4 |
5 | # Required
6 | version: 2
7 |
8 | # Set the version of Python and other tools you might need
9 | build:
10 |   os: ubuntu-22.04
11 |   tools:
12 |     python: '3.10'
13 |   apt_packages:
14 |     - libopenmpi-dev
15 |   jobs:
16 |     pre_build:
17 |       - python setup.py build_ext --inplace  # compile Cython extensions so that they can be imported
18 |
19 | # Build documentation in the doc/ directory with Sphinx
20 | sphinx:
21 |   configuration: doc/conf.py
22 |
23 | python:
24 |   install:
25 |     - requirements: doc/requirements.txt
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | BSD 3-Clause License
2 |
3 | Copyright (c) 2021, cosmodesi
4 | All rights reserved.
5 |
6 | Redistribution and use in source and binary forms, with or without
7 | modification, are permitted provided that the following conditions are met:
8 |
9 | 1. Redistributions of source code must retain the above copyright notice, this
10 |    list of conditions and the following disclaimer.
11 |
12 | 2. Redistributions in binary form must reproduce the above copyright notice,
13 |    this list of conditions and the following disclaimer in the documentation
14 |    and/or other materials provided with the distribution.
15 |
16 | 3. Neither the name of the copyright holder nor the names of its
17 |    contributors may be used to endorse or promote products derived from
18 |    this software without specific prior written permission.
19 |
20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # pyrecon - Python reconstruction code
2 |
3 | ## Introduction
4 |
5 | **pyrecon** is a package to perform reconstruction within Python, using various algorithms; so far:
6 |
7 | - MultiGridReconstruction, based on Martin J. White's code https://github.com/martinjameswhite/recon_code
8 | - IterativeFFTParticleReconstruction, based on Julian E. Bautista's code https://github.com/julianbautista/eboss_clustering/blob/master/python/recon.py
9 | - IterativeFFTReconstruction, iterative algorithm of Burden et al. 2015 (https://arxiv.org/abs/1504.02591) at the field level (as opposed to IterativeFFTParticleReconstruction)
10 | - PlaneParallelFFTReconstruction, based on the algorithm of Eisenstein et al. 2007 (https://arxiv.org/pdf/astro-ph/0604362.pdf), in the plane-parallel approximation.
11 |
12 | With Python, a typical reconstruction run is (e.g. for MultiGridReconstruction; the same works for other algorithms):
13 | ```
14 | from pyrecon import MultiGridReconstruction
15 |
16 | # line-of-sight "los" can be local (None, default) or an axis, 'x', 'y', 'z', or a 3-vector
17 | # Instead of boxsize and boxcenter, one can provide a (N, 3) array of Cartesian positions: positions=
18 | recon = MultiGridReconstruction(f=0.8, bias=2.0, los=None, nmesh=512, boxsize=1000., boxcenter=2000.)
19 | recon.assign_data(data_positions, data_weights)  # data_positions is a (N, 3) array of Cartesian positions, data_weights a (N,) array
20 | # You can skip the following line if you assume a uniform selection function (no randoms)
21 | recon.assign_randoms(randoms_positions, randoms_weights)
22 | recon.set_density_contrast(smoothing_radius=15.)
23 | recon.run()
24 | # A shortcut for the above is:
25 | # recon = MultiGridReconstruction(f=0.8, bias=2.0, data_positions=data_positions, data_weights=data_weights, randoms_positions=randoms_positions, randoms_weights=randoms_weights, los=None, nmesh=512, boxsize=1000., boxcenter=2000.)
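# For example (hypothetical arrays), the box can instead be defined from the positions themselves:
# recon = MultiGridReconstruction(f=0.8, bias=2.0, los=None, nmesh=512, positions=randoms_positions)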
26 | # If you are using IterativeFFTParticleReconstruction, displacements are to be taken at the reconstructed data real-space positions;
27 | # in this case, do: data_positions_rec = recon.read_shifted_positions('data')
28 | data_positions_rec = recon.read_shifted_positions(data_positions)
29 | # RecSym = also remove large-scale RSD from randoms
30 | randoms_positions_rec = recon.read_shifted_positions(randoms_positions)
31 | # or RecIso
32 | # randoms_positions_rec = recon.read_shifted_positions(randoms_positions, field='disp')
33 | ```
34 | Also provided is a script to run reconstruction as a standalone:
35 | ```
36 | pyrecon [-h] config-fn [--data-fn []] [--randoms-fn []] [--output-data-fn []] [--output-randoms-fn []]
37 | ```
38 | An example configuration file is provided in [config](https://github.com/cosmodesi/pyrecon/blob/main/bin/config_example.yaml).
39 | data-fn and randoms-fn are the input data and randoms file names, overriding those in the configuration file.
40 | The same holds for the output files output-data-fn and output-randoms-fn.
41 |
42 | ## In progress
43 |
44 | Check algorithm details (see notes in docstrings).
45 |
46 | ## Documentation
47 |
48 | Documentation is hosted on Read the Docs, [pyrecon docs](https://pyrecon.readthedocs.io/).
49 |
50 | ## Requirements
51 |
52 | The only strict requirements are:
53 |
54 | - numpy
55 | - scipy
56 | - pmesh
57 |
58 | Extra requirements are:
59 |
60 | - mpytools, fitsio, h5py to run **pyrecon** as a standalone
61 | - pypower to evaluate reconstruction metrics (correlation, transfer function and propagator)
62 |
63 | ## Installation
64 |
65 | See [pyrecon docs](https://pyrecon.readthedocs.io/en/latest/user/building.html).
66 |
67 | ## License
68 |
69 | **pyrecon** is free software distributed under a BSD3 license. For details see the [LICENSE](https://github.com/cosmodesi/pyrecon/blob/main/LICENSE).
70 |
71 | ## Credits
72 |
73 | - Martin J. White for https://github.com/martinjameswhite/recon_code
74 | - Julian E. Bautista for https://github.com/julianbautista/eboss_clustering/blob/master/python/recon.py
75 | - Pedro Rangel Caetano for inspiration for the script bin/pyrecon
76 | - Sesh Nadathur for careful checks against Revolver https://github.com/seshnadathur/Revolver/blob/main/python_tools/recon.py
77 | - Enrique Paillas for bug reports
78 | - Grant Merz for propagator https://github.com/grantmerz/DESI_Recon
--------------------------------------------------------------------------------
/bin/config_example.yaml:
--------------------------------------------------------------------------------
1 | input:
2 |   dir: ./test  # input directory to use as prefix
3 |   data_fn: test_data.fits  # data file; fits and hdf5 files are accepted
4 |   randoms_fn: test_randoms.fits  # randoms file; fits and hdf5 files are accepted
5 |   rdz: [RA, DEC, Z_RSD]  # data columns to get positions from
6 |   rdz_randoms: [ra, dec, z]  # randoms columns to get positions from
7 |   position_type: rdz  # column position type (rdz, xyz, pos), required
8 |   weights: ${WEIGHT}  # formula to get weights from
9 |   weights_randoms:  # randoms columns to get weights from
10 |   mask: (${RA}>0.) & (${RA}<30.) & (${DEC}>0.) & (${DEC}<30.)  # mask for row selection
11 |   mask_randoms: (${RA}>0.) & (${RA}<30.) & (${DEC}>0.) & (${DEC}<30.)  # same for randoms (defaults to mask)
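  # For illustration (hypothetical columns), a mask formula may combine any columns read from the catalog, e.g.:
  # mask: (${Z_RSD}>0.4) & (${Z_RSD}<1.1)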
12 |
13 | output:
14 |   dir: ./test  # output directory to use as prefix
15 |   data_fn: data_rec_split.fits  # fits and hdf5 files are accepted
16 |   randoms_fn: rand_rec_split.fits  # fits and hdf5 files are accepted
17 |   rdz_rec: [RA_REC, DEC_REC, Z_RSD_REC]  # column names for reconstructed RA/DEC/Z
18 |   xyz_rec: POSITION_REC  # column name for reconstructed Cartesian position
19 |   xyz: POSITION  # column name for input Cartesian position
20 |   columns: [NZ]  # input columns to keep in the output file (if null or unprovided, keep them all)
21 |   columns_randoms: [nz]  # same for randoms (if null or unprovided, keep them all; defaults to columns)
22 |
23 | algorithm:
24 |   name: MultiGridReconstruction  # name of reconstruction algorithm (e.g. Julian's IterativeFFTParticleReconstruction, Martin's MultiGridReconstruction)
25 |   convention: RecSym  # RecSym = data and randoms shifted by 'disp+rsd', RecIso = randoms shifted by 'disp' only, RSD = data shifted by 'rsd', no shift on randoms
26 |   los: 'local'  # line-of-sight can be 'local' (default) or an axis ('x', 'y', 'z') or a 3-vector
27 |   # other algorithm-related parameters
28 |
29 | delta:
30 |   smoothing_radius: 15  # smoothing radius for reconstruction
31 |   selection_function: 'randoms'  # selection function, either from 'randoms', or 'uniform' (no input randoms required)
32 |
33 | cosmology:
34 |   bias: 1.4  # galaxy bias
35 |   f: 0.87  # growth rate
36 |   fiducial: DESI  # fiducial cosmology for rdz <=> Cartesian position conversion
37 |
38 | mesh:
39 |   nmesh: 512  # mesh size (int or list of 3 ints)
40 |   boxsize:  # box size (float or list of 3 floats)
41 |   boxcenter:  # box center
42 |   wrap: False  # whether to wrap positions using periodic boundary conditions over the box
43 |   dtype: f4  # mesh data type, f4 (float32) or f8 (float64)
44 |   fft_plan: 'estimate'  # FFT planning for FFTW engine
--------------------------------------------------------------------------------
/bin/pyrecon:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # coding: utf-8
3 |
4 | """
5 | **pyrecon** entry point for standalone runs.
6 | Not fully tested.
7 | """
8 |
9 | import os
10 | import re
11 | import logging
12 | import datetime
13 | import argparse
14 | from numbers import Number
15 |
16 | import numpy as np
17 | import yaml
18 |
19 | from mpytools import Catalog
20 | import pyrecon
21 | from pyrecon import __version__, setup_logging, utils, IterativeFFTParticleReconstruction
22 |
23 |
24 | ascii_art = r"""
25 |  _ __  _   _ _ __ ___  ___ ___  _ __
26 | | '_ \| | | | '__/ _ \/ __/ _ \| '_ \
27 | | |_) | |_| | | |  __/ (_| (_) | | | |
28 | | .__/ \__, |_|  \___|\___\___/|_| |_|
29 | | |     __/ |
30 | |_|    |___/ """ + """\n\n""" + \
31 | """version: {}     date: {}\n""".format(__version__, datetime.date.today())
32 |
33 |
34 | class YamlLoader(yaml.SafeLoader):
35 |     """
36 |     *yaml* loader that correctly parses numbers.
37 |     Taken from https://stackoverflow.com/questions/30458977/yaml-loads-5e-6-as-string-and-not-a-number.
38 |     """
39 |
40 |
41 | YamlLoader.add_implicit_resolver(u'tag:yaml.org,2002:float',
42 |                                  re.compile(u'''^(?:
43 |                                  [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?
44 |                                 |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)
45 |                                 |\\.[0-9_]+(?:[eE][-+][0-9]+)?
46 | |[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]* 47 | |[-+]?\\.(?:inf|Inf|INF) 48 | |\\.(?:nan|NaN|NAN))$''', re.X), 49 | list(u'-+0123456789.')) 50 | 51 | YamlLoader.add_implicit_resolver('!none', re.compile('None$'), first='None') 52 | 53 | 54 | def none_constructor(loader, node): 55 | return None 56 | 57 | 58 | YamlLoader.add_constructor('!none', none_constructor) 59 | 60 | 61 | class ConfigError(Exception): 62 | 63 | """Exception raised when issue with configuration.""" 64 | 65 | 66 | def main(args=None, **input_config): 67 | 68 | print(ascii_art) 69 | logger = logging.getLogger('Main') 70 | 71 | parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) 72 | parser.add_argument('config_fn', action='store', type=str, help='Name of configuration file') 73 | parser.add_argument('--data-fn', nargs='?', metavar='', help='Path(s) to input data file (overrides configuration file)') 74 | parser.add_argument('--randoms-fn', nargs='?', metavar='', help='Path(s) to input randoms file (overrides configuration file)') 75 | parser.add_argument('--output-data-fn', nargs='?', metavar='', help='Path to output data file (overrides configuration file)') 76 | parser.add_argument('--output-randoms-fn', nargs='?', metavar='', help='Path to output randoms file (overrides configuration file)') 77 | parser.add_argument('--growth-rate', nargs='?', type=float, help='Value of the logarithmic growth rate (overrides configuration file)') 78 | parser.add_argument('--galaxy-bias', nargs='?', type=float, help='Value of the linear galaxy bias (overrides configuration file)') 79 | 80 | opt = parser.parse_args(args=args) 81 | setup_logging() 82 | 83 | config = {} 84 | config['input'] = {} 85 | config['output'] = {} 86 | config['algorithm'] = {'name': 'MultiGridReconstruction', 'convention': 'RecSym'} 87 | config['delta'] = {'smoothing_radius': 15.} 88 | config['cosmology'] = {} 89 | config['mesh'] = {'nmesh': 512, 'dtype': 'f4'} 90 | 91 | def update_config(c1, c2): 92 | # Update (in-place) config dictionary c1 with c2 93 | for section, value in c2.items(): 94 | c1.setdefault(section, {}) 95 | c1[section].update(value) 96 | 97 | # Turn empty things (dict, list, str) to None 98 | for section in c1: 99 | for name, value in c1[section].items(): 100 | if not isinstance(value, Number) and not value: c1[section][name] = None 101 | 102 | update_config(config, input_config) 103 | 104 | if opt.config_fn is not None: 105 | logger.info('Loading config file {}.'.format(opt.config_fn)) 106 | with open(opt.config_fn, 'r') as file: 107 | update_config(config, yaml.safe_load(file)) 108 | 109 | # Override with command-line arguments 110 | if opt.data_fn: 111 | config['input']['data_fn'] = opt.data_fn 112 | if opt.randoms_fn: 113 | config['input']['randoms_fn'] = opt.randoms_fn 114 | if opt.output_data_fn: 115 | config['output']['data_fn'] = opt.output_data_fn 116 | if opt.output_randoms_fn: 117 | config['output']['randoms_fn'] = opt.output_randoms_fn 118 | if opt.growth_rate: 119 | config['cosmology']['f'] = opt.growth_rate 120 | if opt.galaxy_bias: 121 | config['cosmology']['bias'] = opt.galaxy_bias 122 | 123 | # Fetch reconstruction algorithm, e.g. 
MultiGridReconstruction 124 | ReconstructionAlgorithm = getattr(pyrecon, config['algorithm'].pop('name')) 125 | config_cosmo = {name: value for name, value in config['cosmology'].items() if name in ['f', 'bias']} 126 | config_cosmo['los'] = config['algorithm'].pop('los', None) 127 | convention = config['algorithm'].pop('convention').lower() 128 | allowed_conventions = ['recsym', 'reciso', 'rsd'] 129 | if convention not in allowed_conventions: 130 | raise ConfigError('Unknown convention {}. Choices are {}'.format(convention, allowed_conventions)) 131 | logger.info('Convention is {}.'.format(convention)) 132 | 133 | def get_comoving_distance(): 134 | # Return z -> distance callables 135 | from cosmoprimo import fiducial 136 | cosmo = getattr(fiducial, config['cosmology'].get('fiducial', 'DESI'))() 137 | return cosmo.comoving_radial_distance 138 | 139 | def make_list(cols): 140 | # Turn single column name to list of column names 141 | if cols is None: return [] 142 | if isinstance(cols, str): cols = [cols] 143 | return cols 144 | 145 | def remove_duplicates(cols): 146 | # Remove duplicate column names 147 | toret = [] 148 | for col in cols: 149 | if col not in toret: toret.append(col) 150 | return toret 151 | 152 | def decode_eval_str(s): 153 | # Change ${col} => col, and return list of columns 154 | if s is None: 155 | return '', [] 156 | toret = str(s) 157 | columns = [] 158 | for replace in re.finditer(r'(\${.*?})', s): 159 | value = replace.group(1) 160 | col = value[2:-1] 161 | toret = toret.replace(value, col) 162 | if col not in columns: columns.append(col) 163 | return toret, columns 164 | 165 | # Whether nbar is provided by randoms catalog, or nbar is assumed uniform 166 | allowed_selection_functions = ['uniform', 'randoms', ''] 167 | selection_function = config['delta'].pop('selection_function', '').lower() 168 | if selection_function not in allowed_selection_functions: 169 | raise ConfigError('Unknown input selection function {}. Choices are {}'.format(selection_function, allowed_selection_functions)) 170 | # First check what we have in input/output 171 | input_fns, output_fns = {}, {} 172 | for name in ['data', 'randoms']: 173 | tmp_fn = config['input'].get('{}_fn'.format(name), None) 174 | if tmp_fn is None: 175 | if name == 'randoms': 176 | if selection_function == 'randoms': 177 | raise ConfigError('Please provide randoms catalog.') 178 | # No randoms provided and no instruction on selection function, defaults to uniform nbar 179 | if not selection_function: 180 | logger.info('No randoms provided.') 181 | selection_function = 'uniform' 182 | else: 183 | raise ConfigError('Please provide data catalog.') 184 | else: # We've got a file name! 
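            # Register the input catalog; below, output catalogs are only accepted for catalogs that were provided as input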
185 |             input_fns[name] = tmp_fn
186 |         tmp_fn = config['output'].get('{}_fn'.format(name), None)
187 |         if tmp_fn is not None:
188 |             # Check that requested catalog can be supplied given input
189 |             if name not in input_fns:
190 |                 raise ConfigError('Cannot output {} catalog if not provided as input.'.format(name))
191 |             output_fns[name] = tmp_fn
192 |     # Randoms catalog provided and no instruction on selection function, defaults to nbar from randoms
193 |     if not selection_function:
194 |         selection_function = 'randoms'
195 |     logger.info('Using {} selection function.'.format(selection_function))
196 |
197 |     # Allowed inputs are RA, DEC, Z or X, Y, Z
198 |     allowed_position_types = ['rdz', 'xyz', 'pos']
199 |     catalogs = {}
200 |     positions = {}
201 |     weights = {}
202 |     input_position_type = {}
203 |     keep_input_columns = {}
204 |
205 |     for name in input_fns:
206 |         # First get format, xyz or rdz, from e.g. 'xyz', 'xyz_data', 'rdz_randoms'
207 |         input_position_type[name] = None
208 |         for position_type in allowed_position_types:
209 |             cols = config['input'].get('{}_{}'.format(position_type, name), None)
210 |             if cols is not None:
211 |                 # Check whether e.g. 'rdz_data' but 'xyz_data' has been specified previously
212 |                 if input_position_type[name] is not None:
213 |                     raise ConfigError('Cannot use two different input position types')
214 |                 input_position_type[name] = position_type
215 |                 position_columns = cols
216 |         # Fall back on non-differentiation between data and randoms, 'xyz', 'rdz'
217 |         if input_position_type[name] is None:
218 |             for position_type in allowed_position_types:
219 |                 cols = config['input'].get(position_type, None)
220 |                 if cols is not None:
221 |                     # Check whether e.g. 'rdz' but 'xyz' has been specified previously
222 |                     if input_position_type[name] is not None:
223 |                         raise ConfigError('Cannot use two different input position types')
224 |                     input_position_type[name] = position_type
225 |                     position_columns = cols
226 |         position_type = input_position_type[name]
227 |         # No format 'xyz', 'xyz_data', 'rdz_randoms', ... found
228 |         if position_type is None:
229 |             raise ConfigError('Unknown input position type. Choices are {}'.format(allowed_position_types))
230 |         position_columns = make_list(position_columns)
231 |
232 |         # Get catalog file name
233 |         fn = os.path.join(config['input'].get('dir', ''), input_fns[name])
234 |         # Optionally, mask to apply
235 |         mask_str = config['input'].get('mask_{}'.format(name), config['input'].get('mask', None))
236 |         mask_str, mask_columns = decode_eval_str(mask_str)
237 |         weight_str = config['input'].get('weights_{}'.format(name), config['input'].get('weights', None))
238 |         weight_str, weight_columns = decode_eval_str(weight_str)
239 |         # Input columns to keep for output
240 |         keep_input_columns[name] = make_list(config['output'].get('columns_{}'.format(name), config['output'].get('columns', None)))
241 |         columns = []
242 |         # All columns to actually read from catalogs (positions, weights, masks, and columns to be saved in output)
243 |         for cols in [position_columns, weight_columns, mask_columns, keep_input_columns[name]]: columns += cols
244 |         columns = remove_duplicates(columns)
245 |         catalog = {}
246 |         # Read in catalog, depending on file format
247 |         catalog = Catalog.read(fn)[columns]  # to load all columns
248 |
249 |         csize = catalog.csize
250 |         if catalog.mpicomm.rank == 0:
251 |             logger.info('Size of catalog {} is {:d}.'.format(name, csize))
252 |         # Apply masks
253 |         if mask_str:
254 |             dglobals = {'np': np}
255 |             dglobals.update(catalog.data)
256 |             mask = eval(mask_str, dglobals, {})
257 |             catalog = catalog[mask]
258 |             csize = catalog.csize
259 |             logger.info('Size of catalog {} after masking is {:d}.'.format(name, csize))
260 |
261 |         # Prepare Cartesian positions from input columns
262 |         if position_type == 'rdz':
263 |             if not len(position_columns) == 3:  # RA, DEC, Z
264 |                 raise ConfigError('Position type rdz requires 3 position columns')
265 |             comoving_distance = get_comoving_distance()
266 |             distance = comoving_distance(catalog[position_columns[2]])
267 |             positions[name] = utils.sky_to_cartesian(distance, catalog[position_columns[0]], catalog[position_columns[1]])
268 |             position_type = 'pos'
269 |         elif position_type == 'xyz':
270 |             if not len(position_columns) == 3:
271 |                 raise ConfigError('Position type xyz requires 3 position columns')
272 |             positions[name] = [catalog[col] for col in position_columns]
273 |         else:  # 'pos'
274 |             if not len(position_columns) == 1:
275 |                 raise ConfigError('Position type pos requires 1 position column')
276 |             positions[name] = catalog[position_columns[0]]
277 |
278 |         # Build up weights
279 |         weights.setdefault(name, None)
280 |         if weight_str:
281 |             dglobals = {'np': np}
282 |             dglobals.update(catalog.data)
283 |             weights[name] = eval(weight_str, dglobals, {})
284 |
285 |         # Remove all columns that are not requested in output catalogs
286 |         catalogs[name] = catalog[keep_input_columns[name]]
287 |
288 |     # Run reconstruction
289 |     recon = ReconstructionAlgorithm(**config_cosmo, **config['mesh'], positions=positions['randoms'] if selection_function == 'randoms' else None, position_type=position_type)
290 |     recon.assign_data(positions['data'], weights['data'])
291 |     if selection_function == 'randoms': recon.assign_randoms(positions['randoms'], weights['randoms'])
292 |     recon.set_density_contrast(**config['delta'])
293 |     recon.run(**config['algorithm'])
294 |
295 |     # Read shifts
296 |     positions_rec = {}
297 |     field = 'rsd' if convention == 'rsd' else 'disp+rsd'
298 |     if type(recon) is IterativeFFTParticleReconstruction:
299 |         positions_rec['data'] = recon.read_shifted_positions('data', field=field)
300 |     else:
301 |         positions_rec['data'] = recon.read_shifted_positions(positions['data'], field=field)
302 |     if 'randoms' in output_fns:
303 |         # RSD removal only: no shift applied to the randoms
304 |         if convention == 'rsd':
305 |             positions_rec['randoms'] = positions['randoms']
306 |         else:
307 |             # convention == recsym: move randoms by Zeldovich + RSD displacement
308 |             # convention == reciso: move randoms by Zeldovich displacement
309 |             field = 'disp+rsd' if convention == 'recsym' else 'disp'
310 |             # Note that if wrap is True, output reconstructed randoms positions will be wrapped, so they may differ from
311 |             # input positions (even for convention 'rsd') when input positions are not wrapped
312 |             positions_rec['randoms'] = recon.read_shifted_positions(positions['randoms'], field=field)
313 |
314 |     # Now dump reconstructed catalogs to disk
315 |     for name in output_fns:
316 |         catalog = catalogs[name]
317 |         columns = list(keep_input_columns[name])
318 |         # How do we save position columns?
319 |         for rec, positions_ in zip(['', '_rec'], [positions[name], positions_rec[name]]):
320 |             for position_type in allowed_position_types:
321 |                 # First look for e.g. 'xyz_data', 'rdz_randoms', 'xyz_rec_data'
322 |                 position_columns = config['output'].get('{}{}_{}'.format(position_type, rec, name), None)
323 |                 # Fall back on non-differentiation between data and randoms, 'xyz', 'rdz', 'xyz_rec'
324 |                 if position_columns is None:
325 |                     position_columns = config['output'].get('{}{}'.format(position_type, rec), None)
326 |                 if position_columns is not None:
327 |                     position_columns = make_list(position_columns)
328 |                     if position_type == 'rdz':
329 |                         if not len(position_columns) == 3:  # RA, DEC, Z columns
330 |                             raise ConfigError('Format rdz requires 3 position columns')
331 |                         distance, ra, dec = utils.cartesian_to_sky(positions_)
332 |                         distance_to_redshift = utils.DistanceToRedshift(get_comoving_distance())
333 |                         z = distance_to_redshift(distance)
334 |                         for col, value in zip(position_columns, [ra, dec, z]):
335 |                             catalog[col] = value
336 |                     elif position_type == 'xyz':
337 |                         if not len(position_columns) == 3:
338 |                             raise ConfigError('Position type xyz requires 3 position columns')
339 |                         for icol, col in enumerate(position_columns):
340 |                             catalog[col] = positions_[:, icol]
341 |                     else:  # pos
342 |                         if not len(position_columns) == 1:
343 |                             raise ConfigError('Position type pos requires 1 position column')
344 |                         catalog[position_columns[0]] = positions_
345 |                     columns += position_columns
346 |         columns = remove_duplicates(columns)
347 |         fn = os.path.join(config['output'].get('dir', ''), output_fns[name])
348 |         catalog = catalog[columns]
349 |         catalog.write(fn)
350 |
351 |
352 | if __name__ == '__main__':
353 |
354 |     main()
355 |
--------------------------------------------------------------------------------
/doc/Makefile:
--------------------------------------------------------------------------------
1 | # Minimal makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line, and also
5 | # from the environment for the first two.
6 | SPHINXOPTS    ?=
7 | SPHINXBUILD   ?= sphinx-build
8 | SOURCEDIR     = .
9 | BUILDDIR      = _build
10 |
11 | # Put it first so that "make" without argument is like "make help".
12 | help:
13 | 	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
14 |
15 | .PHONY: help Makefile
16 |
17 | # Catch-all target: route all unknown targets to Sphinx using the new
18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
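# The recipe below first rebuilds the Cython extensions in-place, so that Sphinx autodoc can import pyrecon.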
19 | %: Makefile 20 | @cd ..; python setup.py build_ext --inplace 21 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 22 | -------------------------------------------------------------------------------- /doc/api/api.rst: -------------------------------------------------------------------------------- 1 | API 2 | === 3 | 4 | Base reconstruction class 5 | ------------------------- 6 | 7 | .. automodule:: pyrecon.recon 8 | :members: 9 | :inherited-members: 10 | :show-inheritance: 11 | 12 | Multi grid reconstruction 13 | ------------------------- 14 | 15 | .. automodule:: pyrecon.multigrid 16 | :members: 17 | :inherited-members: 18 | :show-inheritance: 19 | 20 | Iterative FFT reconstruction 21 | ---------------------------- 22 | 23 | .. automodule:: pyrecon.iterative_fft 24 | :members: 25 | :inherited-members: 26 | :show-inheritance: 27 | 28 | Iterative FFT particle reconstruction 29 | ------------------------------------- 30 | 31 | .. automodule:: pyrecon.iterative_fft_particle 32 | :members: 33 | :inherited-members: 34 | :show-inheritance: 35 | 36 | Plane-parallel FFT reconstruction 37 | --------------------------------- 38 | 39 | .. automodule:: pyrecon.plane_parallel_fft 40 | :members: 41 | :inherited-members: 42 | :show-inheritance: 43 | 44 | Metrics 45 | ------- 46 | 47 | .. automodule:: pyrecon.metrics 48 | :members: 49 | :inherited-members: 50 | :show-inheritance: 51 | 52 | 53 | Utilities 54 | --------- 55 | 56 | .. automodule:: pyrecon.utils 57 | :members: 58 | :inherited-members: 59 | :show-inheritance: 60 | -------------------------------------------------------------------------------- /doc/conf.py: -------------------------------------------------------------------------------- 1 | # Configuration file for the Sphinx documentation builder. 2 | # 3 | # This file only contains a selection of the most common options. For a full 4 | # list see the documentation: 5 | # https://www.sphinx-doc.org/en/master/usage/configuration.html 6 | 7 | # -- Path setup -------------------------------------------------------------- 8 | 9 | # If extensions (or modules to document with autodoc) are in another directory, 10 | # add these directories to sys.path here. If the directory is relative to the 11 | # documentation root, use os.path.abspath to make it absolute, like shown here. 12 | # 13 | import os 14 | import sys 15 | sys.path.insert(0, os.path.abspath('..')) 16 | sys.path.insert(0, os.path.abspath(os.path.join('..', 'pyrecon'))) 17 | from _version import __version__ 18 | 19 | # -- General configuration ------------------------------------------------ 20 | 21 | # If your documentation needs a minimal Sphinx version, state it here. 22 | #needs_sphinx = '1.0' 23 | 24 | # Add any Sphinx extension module names here, as strings. They can be 25 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 26 | # ones. 27 | extensions = [ 28 | 'sphinx.ext.autodoc', 29 | 'sphinx.ext.intersphinx', 30 | 'sphinx.ext.extlinks', 31 | 'sphinx.ext.napoleon', 32 | 'sphinx_rtd_theme' 33 | ] 34 | 35 | # -- Project information ----------------------------------------------------- 36 | 37 | project = 'pyrecon' 38 | copyright = '2021, cosmodesi' 39 | 40 | # The full version, including alpha/beta/rc tags 41 | release = __version__ 42 | 43 | html_theme = 'sphinx_rtd_theme' 44 | 45 | autodoc_mock_imports = ['pyfftw'] 46 | 47 | # Add any paths that contain templates here, relative to this directory. 
48 | templates_path = ['_templates'] 49 | 50 | # List of patterns, relative to source directory, that match files and 51 | # directories to ignore when looking for source files. 52 | # This pattern also affects html_static_path and html_extra_path. 53 | exclude_patterns = ['build', '**.ipynb_checkpoints'] 54 | 55 | # -- Options for HTML output ------------------------------------------------- 56 | 57 | # The theme to use for HTML and HTML Help pages. See the documentation for 58 | # a list of builtin themes. 59 | 60 | # Add any paths that contain custom static files (such as style sheets) here, 61 | # relative to this directory. They are copied after the builtin static files, 62 | # so a file named "default.css" will overwrite the builtin "default.css". 63 | html_static_path = [] 64 | 65 | git_repo = 'https://github.com/cosmodesi/pyrecon.git' 66 | git_root = 'https://github.com/cosmodesi/pyrecon/blob/main/' 67 | 68 | extlinks = {'root': (git_root + '%s', '%s')} 69 | 70 | intersphinx_mapping = { 71 | 'numpy': ('https://docs.scipy.org/doc/numpy/', None) 72 | } 73 | 74 | # thanks to: https://github.com/sphinx-doc/sphinx/issues/4054#issuecomment-329097229 75 | def _replace(app, docname, source): 76 | result = source[0] 77 | for key in app.config.ultimate_replacements: 78 | result = result.replace(key, app.config.ultimate_replacements[key]) 79 | source[0] = result 80 | 81 | 82 | ultimate_replacements = { 83 | '{gitrepo}': git_repo 84 | } 85 | 86 | def setup(app): 87 | app.add_config_value('ultimate_replacements', {}, True) 88 | app.connect('source-read',_replace) 89 | 90 | 91 | autoclass_content = 'both' 92 | -------------------------------------------------------------------------------- /doc/developer/changes.rst: -------------------------------------------------------------------------------- 1 | .. _developer-changes: 2 | 3 | Change Log 4 | ========== 5 | 6 | 1.0.0 (2022-02-22) 7 | ------------------ 8 | 9 | * First version 10 | 11 | 0.0.1 (2021-09-07) 12 | ------------------ 13 | 14 | * Init git repo 15 | -------------------------------------------------------------------------------- /doc/developer/contributing.rst: -------------------------------------------------------------------------------- 1 | .. _developer-contributing: 2 | 3 | Contributing 4 | ============ 5 | 6 | Contributions to **pyrecon** are more than welcome! 7 | Please follow these guidelines before filing a pull request: 8 | 9 | * Please abide by `PEP8`_ as much as possible in your code writing, add docstrings and tests for each new functionality. 10 | 11 | * Check documentation compiles, with the expected result; see :ref:`developer-documentation`. 12 | 13 | * Submit your pull request. 14 | 15 | References 16 | ---------- 17 | 18 | .. target-notes:: 19 | 20 | .. _`prospector`: http://prospector.landscape.io/en/master/ 21 | 22 | .. _`PEP8`: https://www.python.org/dev/peps/pep-0008/ 23 | 24 | .. _`Codacy`: https://app.codacy.com/ 25 | -------------------------------------------------------------------------------- /doc/developer/documentation.rst: -------------------------------------------------------------------------------- 1 | .. _developer-documentation: 2 | 3 | Documentation 4 | ============= 5 | 6 | Please follow `Sphinx style guide`_ when writing the documentation (except for file names) and `PEP257`_ for docstrings. 
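
For instance, a (hypothetical) function docstring following these conventions could look like:

.. code-block:: python

    def smooth(delta, radius=15.):
        """
        Gaussian-smooth the input density contrast.

        Parameters
        ----------
        delta : array
            Density contrast mesh.

        radius : float, default=15.
            Smoothing radius.

        Returns
        -------
        smoothed : array
            Smoothed density contrast mesh.
        """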
7 |
8 | Building
9 | --------
10 |
11 | The documentation can be built as follows::
12 |
13 |     cd $HOME/pyrecon/doc
14 |     make html
15 |
16 | Documentation pages can be displayed by opening ``_build/html/index.html`` with your web browser.
17 |
18 | Finally, to push the documentation, see `Read the Docs`_.
19 |
20 |
21 | References
22 | ----------
23 |
24 | .. target-notes::
25 |
26 | .. _`Sphinx style guide`: https://documentation-style-guide-sphinx.readthedocs.io/en/latest/style-guide.html
27 |
28 | .. _`PEP257`: https://www.python.org/dev/peps/pep-0257/
29 |
30 | .. _`Read the Docs`: https://sphinx-rtd-tutorial.readthedocs.io/en/latest/read-the-docs.html
--------------------------------------------------------------------------------
/doc/developer/tests.rst:
--------------------------------------------------------------------------------
1 | .. _developer-tests:
2 |
3 | Tests
4 | =====
5 |
6 | Tests are located in :root:`pyrecon/tests`.
7 | To perform tests, run::
8 |
9 |     pytest
--------------------------------------------------------------------------------
/doc/index.rst:
--------------------------------------------------------------------------------
1 | .. title:: pyrecon docs
2 |
3 | ***********************************
4 | Welcome to pyrecon's documentation!
5 | ***********************************
6 |
7 | .. toctree::
8 |    :maxdepth: 1
9 |    :caption: User documentation
10 |
11 |    user/building
12 |    api/api
13 |
14 | .. toctree::
15 |    :maxdepth: 1
16 |    :caption: Developer documentation
17 |
18 |    developer/documentation
19 |    developer/tests
20 |    developer/contributing
21 |    developer/changes
22 |
23 | .. toctree::
24 |    :hidden:
25 |
26 | ************
27 | Introduction
28 | ************
29 |
30 | **pyrecon** is a Python package to perform reconstruction in BAO analyses, using various algorithms, with MPI support.
31 |
32 | A typical reconstruction run is (e.g. for MultiGridReconstruction; the same works for other algorithms):
33 |
34 | .. code-block:: python
35 |
36 |     from pyrecon import MultiGridReconstruction
37 |
38 |     # line-of-sight "los" can be local (None, default) or an axis, 'x', 'y', 'z', or a 3-vector
39 |     # Instead of boxsize and boxcenter, one can provide a (N, 3) array of Cartesian positions: positions=
40 |     recon = MultiGridReconstruction(f=0.8, bias=2.0, los=None, nmesh=512, boxsize=1000., boxcenter=2000.)
41 |     recon.assign_data(data_positions, data_weights)  # data_positions is a (N, 3) array of Cartesian positions, data_weights a (N,) array
42 |     # You can skip the following line if you assume a uniform selection function (no randoms)
43 |     recon.assign_randoms(randoms_positions, randoms_weights)
44 |     recon.set_density_contrast(smoothing_radius=15.)
45 |     recon.run()
46 |     # A shortcut for the above is:
47 |     # recon = MultiGridReconstruction(f=0.8, bias=2.0, data_positions=data_positions, data_weights=data_weights, randoms_positions=randoms_positions, randoms_weights=randoms_weights, los=None, nmesh=512, boxsize=1000., boxcenter=2000.)
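    # The mesh can also wrap positions with periodic boundary conditions: pass wrap=True (see the mesh section of bin/config_example.yaml)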
48 |     # If you are using IterativeFFTParticleReconstruction, displacements are to be taken at the reconstructed data real-space positions;
49 |     # in this case, do: data_positions_rec = recon.read_shifted_positions('data')
50 |     data_positions_rec = recon.read_shifted_positions(data_positions)
51 |     # RecSym = also remove large-scale RSD from randoms
52 |     randoms_positions_rec = recon.read_shifted_positions(randoms_positions)
53 |     # or RecIso
54 |     # randoms_positions_rec = recon.read_shifted_positions(randoms_positions, field='disp')
55 |
56 |
57 | **************
58 | Code structure
59 | **************
60 |
61 | The code structure is the following:
62 |
63 | - recon.py implements the base reconstruction class
64 | - mesh.py implements mesh utilities
65 | - utils.py implements various utilities
66 | - a module for each algorithm
67 | - metrics.py implements calculation of correlator, propagator and transfer function
68 |
69 |
70 | Changelog
71 | =========
72 |
73 | * :doc:`developer/changes`
74 |
75 | Indices and tables
76 | ==================
77 |
78 | * :ref:`genindex`
79 | * :ref:`modindex`
80 | * :ref:`search`
--------------------------------------------------------------------------------
/doc/make.bat:
--------------------------------------------------------------------------------
1 | @ECHO OFF
2 |
3 | pushd %~dp0
4 |
5 | REM Command file for Sphinx documentation
6 |
7 | if "%SPHINXBUILD%" == "" (
8 | 	set SPHINXBUILD=sphinx-build
9 | )
10 | set SOURCEDIR=.
11 | set BUILDDIR=_build
12 |
13 | if "%1" == "" goto help
14 |
15 | %SPHINXBUILD% >NUL 2>NUL
16 | if errorlevel 9009 (
17 | 	echo.
18 | 	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
19 | 	echo.installed, then set the SPHINXBUILD environment variable to point
20 | 	echo.to the full path of the 'sphinx-build' executable. Alternatively you
21 | 	echo.may add the Sphinx directory to PATH.
22 | 	echo.
23 | 	echo.If you don't have Sphinx installed, grab it from
24 | 	echo.http://sphinx-doc.org/
25 | 	exit /b 1
26 | )
27 |
28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
29 | goto end
30 |
31 | :help
32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
33 |
34 | :end
35 | popd
--------------------------------------------------------------------------------
/doc/requirements.txt:
--------------------------------------------------------------------------------
1 | setuptools
2 | sphinx>=4.3.0
3 | sphinx-rtd-theme>=0.5.1
4 | numpy
5 | scipy
6 | pyyaml
7 | git+https://github.com/MP-Gadget/pfft-python
8 | git+https://github.com/MP-Gadget/pmesh
9 | git+https://github.com/cosmodesi/pypower
--------------------------------------------------------------------------------
/doc/user/building.rst:
--------------------------------------------------------------------------------
1 | .. 
_user-building: 2 | 3 | Building 4 | ======== 5 | 6 | Requirements 7 | ------------ 8 | Only strict requirements are: 9 | 10 | - numpy 11 | - scipy 12 | - pmesh 13 | 14 | Extra requirements are: 15 | 16 | - mpytools, fitsio, h5py to run **pyrecon** as a standalone 17 | - pypower to evaluate reconstruction metrics (correlation, transfer function and propagator) 18 | 19 | pip 20 | --- 21 | To install **pyrecon**, simply run:: 22 | 23 | python -m pip install git+https://github.com/cosmodesi/pyrecon 24 | 25 | To run **pyrecon** as a standalone, a couple of extra dependencies are required (fitsio, h5py), which can be installed through:: 26 | 27 | python -m pip install git+https://github.com/cosmodesi/pyrecon#egg=pyrecon[extras] 28 | 29 | git 30 | --- 31 | First:: 32 | 33 | git clone https://github.com/cosmodesi/pyrecon.git 34 | 35 | To install the code:: 36 | 37 | python setup.py install --user 38 | 39 | Or in development mode (any change to Python code will take place immediately):: 40 | 41 | python setup.py develop --user 42 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ['setuptools', 'wheel', 'numpy', 'Cython'] 3 | build-backend = 'setuptools.build_meta' 4 | -------------------------------------------------------------------------------- /pyrecon/__init__.py: -------------------------------------------------------------------------------- 1 | """Implementation of reconstruction algorithms.""" 2 | 3 | from ._version import __version__ 4 | from .recon import ReconstructionError 5 | from .multigrid import MultiGridReconstruction 6 | from .iterative_fft import IterativeFFTReconstruction 7 | from .iterative_fft_particle import IterativeFFTParticleReconstruction 8 | from .plane_parallel_fft import PlaneParallelFFTReconstruction 9 | from .utils import setup_logging 10 | -------------------------------------------------------------------------------- /pyrecon/_multigrid.pyx: -------------------------------------------------------------------------------- 1 | #from cython cimport floating 2 | import numpy as np 3 | cimport numpy as np 4 | 5 | 6 | cdef extern from "_multigrid_imp.h": 7 | 8 | void _jacobi_float(float *v, const float *f, const size_t* nmesh, const size_t localnmeshx, const int offsetx, const double* boxsize, const double* boxcenter, 9 | const double beta, const double damping_factor, const double* los) 10 | 11 | void _jacobi_double(double *v, const double *f, const size_t* nmesh, const size_t localnmeshx, const int offsetx, const double* boxsize, const double* boxcenter, 12 | const double beta, const double damping_factor, const double* los) 13 | 14 | void _residual_float(const float* v, const float* f, float* r, const size_t* nmesh, const size_t localnmeshx, const int offsetx, const double* boxsize, const double* boxcenter, 15 | const double beta, const double* los) 16 | 17 | void _residual_double(const double* v, const double* f, double* r, const size_t* nmesh, const size_t localnmeshx, const int offsetx, const double* boxsize, const double* boxcenter, 18 | const double beta, const double* los) 19 | 20 | void _prolong_float(const float* v2h, float* v1h, const size_t* nmesh, const size_t localnmeshx, const int offsetx) 21 | 22 | void _prolong_double(const double* v2h, double* v1h, const size_t* nmesh, const size_t localnmeshx, const int offsetx) 23 | 24 | void _reduce_float(const float* v1h, float* v2h, const size_t* 
nmesh, const size_t localnmeshx, const int offsetx) 25 | 26 | void _reduce_double(const double* v1h, double* v2h, const size_t* nmesh, const size_t localnmeshx, const int offsetx) 27 | 28 | void _read_finite_difference_cic_float(const float* mesh, const size_t* nmesh, const size_t localnmeshx, const double* boxsize, const float* positions, float* shifts, size_t npositions) 29 | 30 | void _read_finite_difference_cic_double(const double* mesh, const size_t* nmesh, const size_t localnmeshx, const double* boxsize, const double* positions, double* shifts, size_t npositions) 31 | 32 | 33 | def cslice(mesh, cstart, cstop, concatenate=True): 34 | mpicomm = mesh.pm.comm 35 | cstart_out = mpicomm.allgather(cstart < 0) 36 | if any(cstart_out): 37 | cstart1, cstop1 = cstart, cstop 38 | cstart2, cstop2 = 0, 0 39 | if cstart_out[mpicomm.rank]: 40 | cstart1, cstop1 = cstart + mesh.pm.Nmesh[0], mesh.pm.Nmesh[0] 41 | cstart2, cstop2 = 0, cstop 42 | toret = cslice(mesh, cstart1, cstop1, concatenate=False) 43 | toret += cslice(mesh, cstart2, cstop2, concatenate=False) 44 | if concatenate: toret = np.concatenate(toret, axis=0) 45 | return toret 46 | cstop_out = mpicomm.allgather(cstop > mesh.pm.Nmesh[0] and cstop > cstart) # as above test may call with cstop = cstart 47 | if any(cstop_out): 48 | cstart1, cstop1 = 0, 0 49 | cstart2, cstop2 = cstart, cstop 50 | if cstop_out[mpicomm.rank]: 51 | cstart1, cstop1 = cstart, mesh.pm.Nmesh[0] 52 | cstart2, cstop2 = 0, cstop - mesh.pm.Nmesh[0] 53 | toret = cslice(mesh, cstart1, cstop1, concatenate=False) 54 | toret += cslice(mesh, cstart2, cstop2, concatenate=False) 55 | if concatenate: toret = np.concatenate(toret, axis=0) 56 | return toret 57 | 58 | mpicomm = mesh.pm.comm 59 | ranges = mpicomm.allgather((mesh.start[0], mesh.start[0] + mesh.shape[0])) 60 | argsort = np.argsort([start for start, stop in ranges]) 61 | # Send requested slices 62 | sizes, all_slices = [], [] 63 | for irank, (start, stop) in enumerate(ranges): 64 | lstart, lstop = max(cstart - start, 0), min(max(cstop - start, 0), stop - start) 65 | sizes.append(max(lstop - lstart, 0)) 66 | all_slices.append(mpicomm.allgather(slice(lstart, lstop))) 67 | assert sum(sizes) == cstop - cstart 68 | toret = [] 69 | for root in range(mpicomm.size): 70 | if mpicomm.rank == root: 71 | for rank in range(mpicomm.size): 72 | sl = all_slices[root][rank] 73 | if rank == root: 74 | tmp = mesh.value[sl] 75 | else: 76 | mpicomm.Send(np.ascontiguousarray(mesh.value[sl]), dest=rank, tag=43) 77 | #mpi.send(mesh.value[all_slices[root][irank]], dest=irank, tag=44, mpicomm=mpicomm) 78 | else: 79 | tmp = np.empty_like(mesh.value, shape=(sizes[root],) + mesh.shape[1:], order='C') 80 | mpicomm.Recv(tmp, source=root, tag=43) 81 | #tmp = mpi.recv(source=root, tag=44, mpicomm=mpicomm) 82 | toret.append(tmp) 83 | 84 | toret = [toret[ii] for ii in argsort] 85 | if concatenate: toret = np.concatenate(toret, axis=0) 86 | return toret 87 | 88 | 89 | def jacobi(v, f, np.ndarray[double, ndim=1, mode='c'] boxcenter, double beta, double damping_factor=0.4, int niterations=5, double[:] los=None): 90 | dtype = v.dtype 91 | 92 | start, stop = v.start[0], v.start[0] + v.shape[0] 93 | cdef np.ndarray bv 94 | cdef np.ndarray bf = cslice(f, start - 1, stop + 1) 95 | 96 | cdef np.ndarray[size_t, ndim=1, mode='c'] nmesh = np.array(v.pm.Nmesh, dtype=np.uint64) 97 | cdef np.ndarray[double, ndim=1, mode='c'] boxsize = v.pm.BoxSize 98 | cdef double * plos = NULL 99 | if los is not None: 100 | plos = &los[0] 101 | cdef int offsetx = v.start[0] - 1 102 | 103 
| for iter in range(niterations): 104 | bv = cslice(v, start - 1, stop + 1) 105 | if dtype.itemsize == 4: 106 | _jacobi_float( bv.data, bf.data, &nmesh[0], bv.shape[0], offsetx, &boxsize[0], &boxcenter[0], beta, damping_factor, plos) 107 | else: 108 | _jacobi_double( bv.data, bf.data, &nmesh[0], bv.shape[0], offsetx, &boxsize[0], &boxcenter[0], beta, damping_factor, plos) 109 | v.value = bv[1:-1] 110 | return v 111 | 112 | 113 | def residual(v, f, np.ndarray[double, ndim=1, mode='c'] boxcenter, double beta, double[:] los=None): 114 | dtype = v.dtype 115 | 116 | start, stop = v.start[0], v.start[0] + v.shape[0] 117 | cdef np.ndarray bv = cslice(v, start - 1, stop + 1) 118 | cdef np.ndarray bf = cslice(f, start - 1, stop + 1) 119 | cdef np.ndarray br = np.empty_like(bv, order='C') 120 | 121 | cdef np.ndarray[size_t, ndim=1, mode='c'] nmesh = np.array(v.pm.Nmesh, dtype=np.uint64) 122 | cdef np.ndarray[double, ndim=1, mode='c'] boxsize = v.pm.BoxSize 123 | cdef double * plos = NULL 124 | if los is not None: 125 | plos = &los[0] 126 | cdef int offsetx = v.start[0] - 1 127 | 128 | if dtype.itemsize == 4: 129 | _residual_float( bv.data, bf.data, br.data, &nmesh[0], bv.shape[0], offsetx, &boxsize[0], &boxcenter[0], beta, plos) 130 | else: 131 | _residual_double( bv.data, bf.data, br.data, &nmesh[0], bv.shape[0], offsetx, &boxsize[0], &boxcenter[0], beta, plos) 132 | 133 | toret = v.pm.create(type='real') 134 | toret.value = br[1:-1] 135 | return toret 136 | 137 | 138 | def prolong(v2h): 139 | dtype = v2h.dtype 140 | v1h = v2h.pm.reshape(v2h.pm.Nmesh * 2).create(type='real') 141 | start, stop = v1h.start[0], v1h.start[0] + v1h.shape[0] 142 | offsetx = int(v1h.start[0] % 2) 143 | cdef np.ndarray bv2h = cslice(v2h, start // 2, (stop - start + 1) // 2 + start // 2 + 1) 144 | cdef np.ndarray bv1h = np.ascontiguousarray(v1h.value) 145 | 146 | cdef np.ndarray[size_t, ndim=1, mode='c'] nmesh = np.array(v2h.pm.Nmesh, dtype=np.uint64) 147 | 148 | if dtype.itemsize == 4: 149 | _prolong_float( bv2h.data, bv1h.data, &nmesh[0], v1h.shape[0], offsetx) 150 | else: 151 | _prolong_double( bv2h.data, bv1h.data, &nmesh[0], v1h.shape[0], offsetx) 152 | 153 | v1h.value = bv1h 154 | return v1h 155 | 156 | 157 | def reduce(v1h): 158 | dtype = v1h.dtype 159 | v2h = v1h.pm.reshape(v1h.pm.Nmesh // 2).create(type='real') 160 | start, stop = v2h.start[0], v2h.start[0] + v2h.shape[0] 161 | cdef np.ndarray bv1h = cslice(v1h, 2*start - 1, 2*stop) 162 | cdef np.ndarray bv2h = np.ascontiguousarray(v2h.value) 163 | 164 | cdef np.ndarray[size_t, ndim=1, mode='c'] nmesh = np.array(v1h.pm.Nmesh, dtype=np.uint64) 165 | 166 | if dtype.itemsize == 4: 167 | _reduce_float( bv1h.data, bv2h.data, &nmesh[0], v2h.shape[0], 1) 168 | else: 169 | _reduce_double( bv1h.data, bv2h.data, &nmesh[0], v2h.shape[0], 1) 170 | 171 | v2h.value = bv2h 172 | return v2h 173 | 174 | 175 | def read_finite_difference_cic(mesh, np.ndarray positions, np.ndarray[double, ndim=1, mode='c'] boxcenter): 176 | dtype = mesh.dtype 177 | rdtype = positions.dtype 178 | offset = boxcenter - mesh.pm.BoxSize / 2. 
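    # Shift positions into the box frame and wrap them periodically, before domain decomposition across ranks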
179 | positions = (positions - offset) % mesh.pm.BoxSize 180 | cellsize = mesh.pm.BoxSize / mesh.pm.Nmesh 181 | layout = mesh.pm.decompose(positions, smoothing=4) 182 | positions = np.ascontiguousarray(layout.exchange(positions) / cellsize, dtype=dtype) 183 | positions[:, 0] = (positions[:, 0] - mesh.start[0]) % mesh.pm.Nmesh[0] 184 | 185 | cdef np.ndarray v = np.ascontiguousarray(mesh.value) 186 | cdef np.ndarray values = np.empty_like(positions) 187 | cdef np.ndarray[size_t, ndim=1, mode='c'] nmesh = np.array(mesh.pm.Nmesh, dtype=np.uint64) 188 | cdef np.ndarray[double, ndim=1, mode='c'] boxsize = mesh.pm.BoxSize 189 | 190 | if dtype.itemsize == 4: 191 | _read_finite_difference_cic_float( v.data, &nmesh[0], mesh.shape[0], &boxsize[0], positions.data, values.data, len(positions)) 192 | else: 193 | _read_finite_difference_cic_double( v.data, &nmesh[0], mesh.shape[0], &boxsize[0], positions.data, values.data, len(positions)) 194 | 195 | return layout.gather(values, mode='sum', out=None).astype(rdtype) 196 | -------------------------------------------------------------------------------- /pyrecon/_multigrid_generics.h: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include "utils.h" 4 | 5 | #if defined(_OPENMP) 6 | #include 7 | #endif 8 | 9 | // This is a readaptation of Martin J. White's code, available at https://github.com/martinjameswhite/recon_code 10 | // C++ dependencies have been removed, solver parameters e.g. niterations exposed 11 | // Grid can be non-cubic, with a cellsize different along each direction (but why would we want that?) 12 | // los can be global (to test the algorithm in the plane-parallel limit) 13 | 14 | // The multigrid code for solving our modified Poisson-like equation. 15 | // See the notes for details. 16 | // The only place that the equations appear explicitly is in 17 | // gauss_seidel, jacobi and residual 18 | // 19 | // 20 | // The finite difference equation we are solving is written schematically 21 | // as A.v=f where A is the matrix operator formed from the discretized 22 | // partial derivatives and f is the source term (density in our case). 23 | // The matrix, A, is never explicitly constructed. 24 | // The solution, phi, is in v. 25 | // 26 | // 27 | // Author: Martin White (UCB/LBNL) 28 | // Written: 20-Apr-2015 29 | // Modified: 20-Apr-2015 30 | // 31 | 32 | /* 33 | int get_num_threads() 34 | { 35 | //Calculate number of threads 36 | int num_threads=0; 37 | #pragma omp parallel 38 | { 39 | #pragma omp atomic 40 | num_threads++; 41 | } 42 | return num_threads; 43 | } 44 | */ 45 | 46 | 47 | void mkname(_jacobi)(FLOAT *v, const FLOAT *f, const size_t* nmesh, const size_t localnmeshx, const int offsetx, const double* boxsize, const double* boxcenter, 48 | const double beta, const double damping_factor, const double* los) { 49 | // Does an update using damped Jacobi. This, and in residual below, 50 | // is where the explicit equation we are solving appears. 51 | // See notes for more details. 
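// Schematically, the damped update is v <- v + w * (f - A.v) / diag(A), with w = damping_factor (A.v = f as defined above).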
52 | const size_t localsize = localnmeshx*nmesh[1]*nmesh[2]; 53 | const size_t nmeshz = nmesh[2]; 54 | const size_t nmeshyz = nmesh[2]*nmesh[1]; 55 | FLOAT* jac = (FLOAT *) my_malloc(localsize, sizeof(FLOAT)); 56 | FLOAT cellsize, cellsize2[NDIM], icellsize2[NDIM], offset[NDIM], losn[NDIM]; 57 | for (int idim=0; idim= 0) && (lix0 < localnmeshx)) { 329 | wt = (1-dx)*(1-dy)*(1-dz); 330 | py += (mesh[ix0+iyp+iz0]-mesh[ix0+iym+iz0])*wt; 331 | pz += (mesh[ix0+iy0+izp]-mesh[ix0+iy0+izm])*wt; 332 | 333 | wt = (1-dx)*dy*(1-dz); 334 | py += (mesh[ix0+iypp+iz0]-mesh[ix0+iy0+iz0])*wt; 335 | pz += (mesh[ix0+iyp+izp]-mesh[ix0+iyp+izm])*wt; 336 | 337 | wt = (1-dx)*(1-dy)*dz; 338 | py += (mesh[ix0+iyp+izp]-mesh[ix0+iym+izp])*wt; 339 | pz += (mesh[ix0+iy0+izpp]-mesh[ix0+iy0+iz0])*wt; 340 | 341 | wt = (1-dx)*dy*dz; 342 | py += (mesh[ix0+iypp+izp]-mesh[ix0+iy0+izp])*wt; 343 | pz += (mesh[ix0+iyp+izpp]-mesh[ix0+iyp+iz0])*wt; 344 | 345 | wt = dx*(1-dy)*(1-dz); 346 | px -= mesh[ix0+iy0+iz0]*wt; 347 | 348 | wt = dx*dy*(1-dz); 349 | px -= mesh[ix0+iyp+iz0]*wt; 350 | 351 | wt = dx*(1-dy)*dz; 352 | px -= mesh[ix0+iy0+izp]*wt; 353 | 354 | wt = dx*dy*dz; 355 | px -= mesh[ix0+iyp+izp]*wt; 356 | } 357 | if ((lixm >= 0) && (lixm < localnmeshx)) { 358 | wt = (1-dx)*(1-dy)*(1-dz); 359 | px -= mesh[ixm+iy0+iz0]*wt; 360 | 361 | wt = (1-dx)*dy*(1-dz); 362 | px -= mesh[ixm+iyp+iz0]*wt; 363 | 364 | wt = (1-dx)*(1-dy)*dz; 365 | px -= mesh[ixm+iy0+izp]*wt; 366 | 367 | wt = (1-dx)*dy*dz; 368 | px -= mesh[ixm+iyp+izp]*wt; 369 | } 370 | if ((lixp >= 0) && (lixp < localnmeshx)) { 371 | wt = (1-dx)*(1-dy)*(1-dz); 372 | px += mesh[ixp+iy0+iz0]*wt; 373 | 374 | wt = (1-dx)*dy*(1-dz); 375 | px += mesh[ixp+iyp+iz0]*wt; 376 | 377 | wt = (1-dx)*(1-dy)*dz; 378 | px += mesh[ixp+iy0+izp]*wt; 379 | 380 | wt = (1-dx)*dy*dz; 381 | px += mesh[ixp+iyp+izp]*wt; 382 | 383 | wt = dx*(1-dy)*(1-dz); 384 | py += (mesh[ixp+iyp+iz0]-mesh[ixp+iym+iz0])*wt; 385 | pz += (mesh[ixp+iy0+izp]-mesh[ixp+iy0+izm])*wt; 386 | 387 | wt = dx*dy*(1-dz); 388 | py += (mesh[ixp+iypp+iz0]-mesh[ixp+iy0+iz0])*wt; 389 | pz += (mesh[ixp+iyp+izp]-mesh[ixp+iyp+izm])*wt; 390 | 391 | wt = dx*(1-dy)*dz; 392 | py += (mesh[ixp+iyp+izp]-mesh[ixp+iym+izp])*wt; 393 | pz += (mesh[ixp+iy0+izpp]-mesh[ixp+iy0+iz0])*wt; 394 | 395 | wt = dx*dy*dz; 396 | py += (mesh[ixp+iypp+izp]-mesh[ixp+iy0+izp])*wt; 397 | pz += (mesh[ixp+iyp+izpp]-mesh[ixp+iyp+iz0])*wt; 398 | } 399 | if ((lixpp >= 0) && (lixpp < localnmeshx)) { 400 | wt = dx*(1-dy)*(1-dz); 401 | px += mesh[ixpp+iy0+iz0]*wt; 402 | 403 | wt = dx*dy*(1-dz); 404 | px += mesh[ixpp+iyp+iz0]*wt; 405 | 406 | wt = dx*(1-dy)*dz; 407 | px += mesh[ixpp+iy0+izp]*wt; 408 | 409 | wt = dx*dy*dz; 410 | px += mesh[ixpp+iyp+izp]*wt; 411 | } 412 | FLOAT *sh = &(shifts[ii*NDIM]); 413 | sh[0] = px/cellsize[0]; 414 | sh[1] = py/cellsize[1]; 415 | sh[2] = pz/cellsize[2]; 416 | } 417 | return 0; 418 | } 419 | -------------------------------------------------------------------------------- /pyrecon/_multigrid_imp.h: -------------------------------------------------------------------------------- 1 | #define FLOAT float 2 | #define mkname(a) a ## _ ## float 3 | #include "_multigrid_generics.h" 4 | #undef FLOAT 5 | #undef mkname 6 | #define mkname(a) a ## _ ## double 7 | #define FLOAT double 8 | #include "_multigrid_generics.h" 9 | #undef FLOAT 10 | #undef mkname 11 | 12 | 13 | #if defined(_OPENMP) 14 | #include 15 | 16 | void set_num_threads(int num_threads) 17 | { 18 | if (num_threads>0) omp_set_num_threads(num_threads); 19 | } 20 | #endif 21 | 
-------------------------------------------------------------------------------- /pyrecon/_version.py: -------------------------------------------------------------------------------- 1 | __version__ = '1.0.0' 2 | -------------------------------------------------------------------------------- /pyrecon/iterative_fft.py: -------------------------------------------------------------------------------- 1 | """Implementation of Burden et al. 2015 (https://arxiv.org/abs/1504.02591) algorithm.""" 2 | 3 | from .recon import BaseReconstruction 4 | from . import utils 5 | 6 | 7 | class IterativeFFTReconstruction(BaseReconstruction): 8 | """ 9 | Implementation of Burden et al. 2015 (https://arxiv.org/abs/1504.02591) 10 | field-level (as opposed to :class:`IterativeFFTParticleReconstruction`) algorithm. 11 | """ 12 | _compressed = True 13 | _f_z = True 14 | _bias_z = True 15 | 16 | def run(self, niterations=3): 17 | """ 18 | Run reconstruction, i.e. compute Zeldovich displacement fields :attr:`mesh_psi`. 19 | 20 | Parameters 21 | ---------- 22 | niterations : int, default=3 23 | Number of iterations. 24 | """ 25 | self._iter = 0 26 | self.mesh_delta_real = self.mesh_delta.copy() 27 | for iter in range(niterations): 28 | self._iterate() 29 | del self.mesh_delta 30 | self.mesh_psi = self._compute_psi() 31 | del self.mesh_delta_real 32 | 33 | def _iterate(self): 34 | if self.mpicomm.rank == 0: 35 | self.log_info('Running iteration {:d}.'.format(self._iter)) 36 | # This is an implementation of eq. 22 and 24 in https://arxiv.org/pdf/1504.02591.pdf 37 | # \delta_{g,\mathrm{real},n} is self.mesh_delta_real 38 | # \delta_{g,\mathrm{red}} is self.mesh_delta 39 | # First compute \delta(k)/k^{2} based on current \delta_{g,\mathrm{real},n} to estimate \phi_{\mathrm{est},n} (eq. 24) 40 | delta_k = self.mesh_delta_real.r2c() 41 | for kslab, slab in zip(delta_k.slabs.x, delta_k.slabs): 42 | utils.safe_divide(slab, sum(kk**2 for kk in kslab), inplace=True) 43 | 44 | self.mesh_delta_real = self.mesh_delta.copy() 45 | # Now compute \beta \nabla \cdot (\nabla \phi_{\mathrm{est},n} \cdot \hat{r}) \hat{r} 46 | # In the plane-parallel case (self.los is a given vector), this is simply \beta IFFT((\hat{k} \cdot \hat{\eta})^{2} \delta(k)) 47 | if self.los is not None: 48 | # global los 49 | disp_deriv_k = delta_k.copy() 50 | for kslab, slab in zip(disp_deriv_k.slabs.x, disp_deriv_k.slabs): 51 | slab[...] *= sum(kk * ll for kk, ll in zip(kslab, self.los))**2 # delta_k already divided by k^{2} 52 | factor = self.beta 53 | # remove RSD part 54 | if self._iter == 0: 55 | # Burden et al. 2015: 1504.02591, eq. 12 (flat sky approximation) 56 | factor /= (1. + self.beta) 57 | self.mesh_delta_real -= factor * disp_deriv_k.c2r() 58 | del disp_deriv_k 59 | else: 60 | # In the local los case, \beta \nabla \cdot (\nabla \phi_{\mathrm{est},n} \cdot \hat{r}) \hat{r} is: 61 | # \beta \partial_{i} \partial_{j} \phi_{\mathrm{est},n} \hat{r}_{j} \hat{r}_{i} 62 | # i.e. \beta IFFT(k_{i} k_{j} \delta(k) / k^{2}) \hat{r}_{i} \hat{r}_{j} => 6 FFTs 63 | for iaxis in range(delta_k.ndim): 64 | for jaxis in range(iaxis, delta_k.ndim): 65 | disp_deriv = delta_k.copy() 66 | for kslab, islab, slab in zip(disp_deriv.slabs.x, disp_deriv.slabs.i, disp_deriv.slabs): 67 | mask = (islab[iaxis] != self.nmesh[iaxis] // 2) & (islab[jaxis] != self.nmesh[jaxis] // 2) 68 | mask |= (islab[iaxis] == self.nmesh[iaxis] // 2) & (islab[jaxis] == self.nmesh[jaxis] // 2) 69 | slab[...] 
*= kslab[iaxis] * kslab[jaxis] * mask # delta_k already divided by k^{2} 70 | disp_deriv = disp_deriv.c2r() 71 | for rslab, slab in zip(disp_deriv.slabs.x, disp_deriv.slabs): 72 | rslab = self._transform_rslab(rslab) 73 | slab[...] *= utils.safe_divide(rslab[iaxis] * rslab[jaxis], sum(rr**2 for rr in rslab)) 74 | factor = (1. + (iaxis != jaxis)) * self.beta # we have j >= i and double-count j > i to account for j < i 75 | if self._iter == 0: 76 | # Burden et al. 2015: 1504.02591, eq. 12 (flat sky approximation) 77 | factor /= (1. + self.beta) 78 | # remove RSD part 79 | self.mesh_delta_real -= factor * disp_deriv 80 | self._iter += 1 81 | 82 | def _compute_psi(self): 83 | # Compute Zeldovich displacements given reconstructed real space density 84 | delta_k = self.mesh_delta_real.r2c() 85 | psis = [] 86 | for iaxis in range(delta_k.ndim): 87 | psi = delta_k.copy() 88 | for kslab, islab, slab in zip(psi.slabs.x, psi.slabs.i, psi.slabs): 89 | mask = islab[iaxis] != self.nmesh[iaxis] // 2 90 | slab[...] *= 1j * utils.safe_divide(kslab[iaxis], sum(kk**2 for kk in kslab)) * mask 91 | psis.append(psi.c2r()) 92 | del psi 93 | return psis 94 | -------------------------------------------------------------------------------- /pyrecon/iterative_fft_particle.py: -------------------------------------------------------------------------------- 1 | """Re-implementation of Bautista et al. 2018 (https://arxiv.org/pdf/1712.08064.pdf) algorithm.""" 2 | 3 | import numpy as np 4 | 5 | from .recon import BaseReconstruction, ReconstructionError, format_positions_wrapper, format_positions_weights_wrapper 6 | from . import utils 7 | 8 | 9 | class OriginalIterativeFFTParticleReconstruction(BaseReconstruction): 10 | """ 11 | Exact re-implementation of Bautista et al. 2018 (https://arxiv.org/pdf/1712.08064.pdf) algorithm 12 | at https://github.com/julianbautista/eboss_clustering/blob/master/python/recon.py. 13 | Numerical agreement in the Zeldovich displacements between original codes and this re-implementation is machine precision 14 | (absolute and relative difference of 1e-12). 15 | """ 16 | _compressed = True 17 | 18 | @format_positions_weights_wrapper 19 | def assign_data(self, positions, weights=None): 20 | """ 21 | Assign (paint) data to :attr:`mesh_data`. 22 | Keeps track of input positions (for :meth:`run`) and weights (for :meth:`set_density_contrast`). 23 | See :meth:`BaseReconstruction.assign_data` for parameters. 24 | """ 25 | if weights is None: 26 | weights = np.ones_like(positions, shape=(len(positions),)) 27 | if getattr(self, 'mesh_data', None) is None: 28 | self.mesh_data = self.pm.create(type='real', value=0.) 29 | self._positions_data = positions 30 | self._weights_data = weights 31 | else: 32 | self._positions_data = np.concatenate([self._positions_data, positions], axis=0) 33 | self._weights_data = np.concatenate([self._weights_data, weights], axis=0) 34 | self._paint(positions, weights=weights, out=self.mesh_data) 35 | 36 | def set_density_contrast(self, ran_min=0.01, smoothing_radius=15., check=False, kw_weights=None): 37 | r""" 38 | Set :math:`\delta` field :attr:`mesh_delta` from data and randoms fields :attr:`mesh_data` and :attr:`mesh_randoms`. 39 | 40 | Note 41 | ---- 42 | This method follows Julian's reconstruction code. 43 | :attr:`mesh_data` and :attr:`mesh_randoms` fields are assumed to be smoothed already. 
44 | 
45 |         Parameters
46 |         ----------
47 |         ran_min : float, default=0.01
48 |             :attr:`mesh_randoms` points below this threshold times mean random weights have their density contrast set to 0.
49 | 
50 |         smoothing_radius : float, default=15
51 |             Smoothing scale, see :meth:`RealMesh.smooth_gaussian`.
52 | 
53 |         check : bool, default=False
54 |             If ``True``, run some tests (printed in logger) to assess whether enough randoms have been used.
55 |         """
56 |         self.ran_min = ran_min
57 |         self.smoothing_radius = smoothing_radius
58 | 
59 |         self.mesh_delta = self.mesh_data.copy()
60 | 
61 |         if self.has_randoms:
62 | 
63 |             if check:
64 |                 nnonzero = self.mpicomm.allreduce(sum(np.sum(randoms > 0.) for randoms in self.mesh_randoms))
65 |                 if nnonzero < 2: raise ValueError('Very few randoms!')
66 | 
67 |             sum_data, sum_randoms = self.mesh_data.csum(), self.mesh_randoms.csum()
68 |             alpha = sum_data * 1. / sum_randoms
69 | 
70 |             for delta, randoms in zip(self.mesh_delta.slabs, self.mesh_randoms.slabs):
71 |                 delta[...] -= alpha * randoms
72 | 
73 |             threshold = ran_min * sum_randoms / self._size_randoms
74 | 
75 |             for delta, randoms in zip(self.mesh_delta.slabs, self.mesh_randoms.slabs):
76 |                 mask = randoms > threshold
77 |                 delta[mask] /= (self.bias * alpha * randoms[mask])
78 |                 delta[~mask] = 0.
79 | 
80 |             if check:
81 |                 mean_nran_per_cell = self.mpicomm.allreduce(sum(np.sum(randoms[randoms > 0]) for randoms in self.mesh_randoms)) / nnonzero
82 |                 std_nran_per_cell = (self.mpicomm.allreduce(sum(np.sum(randoms[randoms > 0]**2) for randoms in self.mesh_randoms)) / nnonzero - mean_nran_per_cell**2)**0.5
83 |                 if self.mpicomm.rank == 0:
84 |                     self.log_info('Mean smoothed random density in non-empty cells is {:.4f} (std = {:.4f}), threshold is (ran_min * mean weight) = {:.4f}.'.format(mean_nran_per_cell, std_nran_per_cell, threshold))
85 | 
86 |                 frac_nonzero_masked = 1. - self.mpicomm.allreduce(sum(np.sum(randoms > threshold) for randoms in self.mesh_randoms)) / nnonzero
87 | 
88 |                 if self.mpicomm.rank == 0:
89 |                     if frac_nonzero_masked > 0.1:
90 |                         self.log_warning('Masking a large fraction {:.4f} of non-empty cells. You should probably increase the number of randoms.'.format(frac_nonzero_masked))
91 |                     else:
92 |                         self.log_info('Masking a fraction {:.4f} of non-empty cells.'.format(frac_nonzero_masked))
93 |             if kw_weights:
94 |                 self._set_optimal_weights(**{'alpha': alpha, **kw_weights})
95 | 
96 |         else:
97 |             self.mesh_delta /= (self.mesh_delta.cmean() * self.bias)
98 |             self.mesh_delta -= 1. / self.bias
99 | 
100 |     def run(self, niterations=3):
101 |         """
102 |         Run reconstruction, i.e. compute reconstructed data real-space positions (:attr:`_positions_rec_data`)
103 |         and Zeldovich displacement fields :attr:`mesh_psi`.
104 | 
105 |         Parameters
106 |         ----------
107 |         niterations : int
108 |             Number of iterations.
109 |         """
110 |         self._iter = 0
111 |         # Gaussian smoothing before density contrast calculation
112 |         self.mesh_data = self._smooth_gaussian(self.mesh_data)
113 |         if self.has_randoms:
114 |             self.mesh_randoms = self._smooth_gaussian(self.mesh_randoms)
115 |         self._positions_rec_data = self._positions_data.copy()
116 |         for iter in range(niterations):
117 |             self.mesh_psi = self._iterate(return_psi=iter == niterations - 1)
118 |         del self.mesh_data
119 |         if self.has_randoms:
120 |             del self.mesh_randoms
121 | 
122 |     def _iterate(self, return_psi=False):
123 |         if self.mpicomm.rank == 0:
124 |             self.log_info('Running iteration {:d}.'.format(self._iter))
125 | 
126 |         if self._iter > 0:
127 |             self.mesh_data[...] = 0.
# to reset mesh values 128 | # Painting reconstructed data real-space positions 129 | # super in order not to save positions_rec_data 130 | super(OriginalIterativeFFTParticleReconstruction, self).assign_data(self._positions_rec_data, weights=self._weights_data, position_type='pos', mpiroot=None) 131 | # Gaussian smoothing before density contrast calculation 132 | self.mesh_data = self._smooth_gaussian(self.mesh_data) 133 | 134 | self.set_density_contrast(ran_min=self.ran_min, smoothing_radius=self.smoothing_radius) 135 | delta_k = self.mesh_delta.r2c() 136 | del self.mesh_delta 137 | 138 | for kslab, slab in zip(delta_k.slabs.x, delta_k.slabs): 139 | utils.safe_divide(slab, sum(kk**2 for kk in kslab), inplace=True) 140 | 141 | if self.mpicomm.rank == 0: 142 | self.log_info('Computing displacement field.') 143 | 144 | shifts = np.empty_like(self._positions_rec_data) 145 | psis = [] 146 | for iaxis in range(delta_k.ndim): 147 | # No need to compute psi on axis where los is 0 148 | if not return_psi and self.los is not None and self.los[iaxis] == 0: 149 | shifts[:, iaxis] = 0. 150 | continue 151 | 152 | psi = delta_k.copy() 153 | for kslab, islab, slab in zip(psi.slabs.x, psi.slabs.i, psi.slabs): 154 | mask = islab[iaxis] != self.nmesh[iaxis] // 2 155 | slab[...] *= 1j * kslab[iaxis] * mask 156 | 157 | psi = psi.c2r() 158 | # Reading shifts at reconstructed data real-space positions 159 | shifts[:, iaxis] = self._readout(psi, self._positions_rec_data) 160 | if return_psi: psis.append(psi) 161 | del psi 162 | # self.log_info('A few displacements values:') 163 | # for s in shifts[:3]: self.log_info('{}'.format(s)) 164 | if self.los is None: 165 | los = utils.safe_divide(self._positions_data, utils.distance(self._positions_data)[:, None]) 166 | else: 167 | los = self.los 168 | # Comments in Julian's code: 169 | # For first loop need to approximately remove RSD component from psi to speed up convergence 170 | # See Burden et al. 2015: 1504.02591v2, eq. 12 (flat sky approximation) 171 | if self._iter == 0: 172 | shifts -= self.beta / (1 + self.beta) * np.sum(shifts * los, axis=-1)[:, None] * los 173 | # Comments in Julian's code: 174 | # Remove RSD from original positions of galaxies to give new positions 175 | # these positions are then used in next determination of psi, 176 | # assumed to not have RSD. 177 | # The iterative procedure then uses the new positions as if they'd been read in from the start 178 | self._positions_rec_data = self._positions_data - self.f * np.sum(shifts * los, axis=-1)[:, None] * los 179 | self._iter += 1 180 | if return_psi: 181 | return psis 182 | 183 | @format_positions_wrapper(return_input_type=False) 184 | def read_shifts(self, positions, field='disp+rsd'): 185 | """ 186 | Read displacement at input positions. 187 | 188 | Note 189 | ---- 190 | Data shifts are read at the reconstructed real-space positions, 191 | while random shifts are read at the redshift-space positions, is that consistent? 192 | 193 | Parameters 194 | ---------- 195 | positions : array of shape (N, 3), string 196 | Cartesian positions. 197 | Pass string 'data' to get the displacements for the input data positions passed to :meth:`assign_data`. 198 | Note that in this case, shifts are read at the reconstructed data real-space positions. 199 | 200 | field : string, default='disp+rsd' 201 | Either 'disp' (Zeldovich displacement), 'rsd' (RSD displacement), or 'disp+rsd' (Zeldovich + RSD displacement). 202 | 203 | Returns 204 | ------- 205 | shifts : array of shape (N, 3) 206 | Displacements. 
207 |         """
208 |         field = field.lower()
209 |         allowed_fields = ['disp', 'rsd', 'disp+rsd']
210 |         if field not in allowed_fields:
211 |             raise ReconstructionError('Unknown field {}. Choices are {}'.format(field, allowed_fields))
212 | 
213 |         def _read_shifts(positions):
214 |             shifts = np.empty_like(positions)
215 |             for iaxis, psi in enumerate(self.mesh_psi):
216 |                 shifts[:, iaxis] = self._readout(psi, positions)
217 |             return shifts
218 | 
219 |         if isinstance(positions, str) and positions == 'data':
220 |             # _positions_rec_data already wrapped during iteration
221 |             shifts = _read_shifts(self._positions_rec_data)
222 |             if field == 'disp':
223 |                 return shifts
224 |             rsd = self._positions_data - self._positions_rec_data
225 |             if field == 'rsd':
226 |                 return rsd
227 |             # field == 'disp+rsd'
228 |             shifts += rsd
229 |             return shifts
230 | 
231 |         if self.wrap: positions = self._wrap(positions)  # wrap here for local los
232 |         shifts = _read_shifts(positions)  # already wrapped
233 | 
234 |         if field == 'disp':
235 |             return shifts
236 | 
237 |         if self.los is None:
238 |             los = utils.safe_divide(positions, utils.distance(positions)[:, None])
239 |         else:
240 |             los = self.los.astype(positions.dtype)
241 |         rsd = self.f * np.sum(shifts * los, axis=-1)[:, None] * los
242 | 
243 |         if field == 'rsd':
244 |             return rsd
245 | 
246 |         # field == 'disp+rsd'
247 |         # we follow the convention of the original algorithm: remove RSD first,
248 |         # then remove the Zeldovich displacement
249 |         real_positions = positions - rsd
250 |         diff = real_positions - self.offset
251 |         if (not self.wrap) and any(self.mpicomm.allgather(np.any((diff < 0) | (diff > self.boxsize - self.cellsize)))):
252 |             if self.mpicomm.rank == 0:
253 |                 self.log_warning('Some particles are out-of-bounds.')
254 |         shifts = _read_shifts(real_positions)
255 | 
256 |         return shifts + rsd
257 | 
258 |     @format_positions_wrapper(return_input_type=True)
259 |     def read_shifted_positions(self, positions, field='disp+rsd'):
260 |         """
261 |         Read shifted positions, i.e. the difference ``positions - self.read_shifts(positions, field=field)``.
262 |         Output (and input) positions are wrapped if :attr:`wrap`.
263 | 
264 |         Parameters
265 |         ----------
266 |         positions : array of shape (N, 3), string
267 |             Cartesian positions.
268 |             Pass string 'data' to get the shifted positions for the input data positions passed to :meth:`assign_data`.
269 |             Note that in this case, shifts are read at the reconstructed data real-space positions.
270 | 
271 |         field : string, default='disp+rsd'
272 |             Apply either 'disp' (Zeldovich displacement), 'rsd' (RSD displacement), or 'disp+rsd' (Zeldovich + RSD displacement).
273 | 
274 |         Returns
275 |         -------
276 |         positions : array of shape (N, 3)
277 |             Shifted positions.
278 |         """
279 |         shifts = self.read_shifts(positions, field=field, position_type='pos', mpiroot=None)
280 |         if isinstance(positions, str) and positions == 'data':
281 |             positions = self._positions_data
282 |         positions = positions - shifts
283 |         if self.wrap: positions = self._wrap(positions)
284 |         return positions
285 | 
286 | 
287 | class IterativeFFTParticleReconstruction(OriginalIterativeFFTParticleReconstruction):
288 | 
289 |     """Any update / test / improvement upon original algorithm."""
290 | 
--------------------------------------------------------------------------------
/pyrecon/mesh.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from pmesh.window import FindResampler, ResampleWindow
3 | 
4 | from .utils import _get_box, _make_array
5 | from .
import mpi
6 | 
7 | 
8 | def _get_resampler(resampler):
9 |     # Return :class:`ResampleWindow` from string or :class:`ResampleWindow` instance
10 |     if isinstance(resampler, ResampleWindow):
11 |         return resampler
12 |     conversions = {'ngp': 'nnb', 'cic': 'cic', 'tsc': 'tsc', 'pcs': 'pcs'}
13 |     if resampler not in conversions:
14 |         raise ValueError('Unknown resampler {}, choices are {}'.format(resampler, list(conversions.keys())))
15 |     resampler = conversions[resampler]
16 |     return FindResampler(resampler)
17 | 
18 | 
19 | def _get_resampler_name(resampler):
20 |     # Translate input :class:`ResampleWindow` instance to string
21 |     conversions = {'nearest': 'ngp', 'tunednnb': 'ngp', 'tunedcic': 'cic', 'tunedtsc': 'tsc', 'tunedpcs': 'pcs'}
22 |     return conversions[resampler.kind]
23 | 
24 | 
25 | def _wrap_positions(positions, boxsize, offset=0.):
26 |     return np.asarray((positions - offset) % boxsize + offset, dtype=positions.dtype)
27 | 
28 | 
29 | def _get_mesh_attrs(nmesh=None, boxsize=None, boxcenter=None, cellsize=None, positions=None, boxpad=1.5, check=True, select_nmesh=None, mpicomm=mpi.COMM_WORLD):
30 |     """
31 |     Compute enclosing box.
32 | 
33 |     Parameters
34 |     ----------
35 |     nmesh : array, int, default=None
36 |         Mesh size, i.e. number of mesh nodes along each axis.
37 |         If not provided, see ``cellsize``.
38 | 
39 |     boxsize : float, default=None
40 |         Physical size of the box.
41 |         If not provided, see ``positions``.
42 | 
43 |     boxcenter : array, float, default=None
44 |         Box center.
45 |         If not provided, see ``positions``.
46 | 
47 |     cellsize : array, float, default=None
48 |         Physical size of mesh cells.
49 |         If ``boxsize`` is ``None`` and ``nmesh`` is not ``None``, used to set ``boxsize`` to ``nmesh * cellsize``.
50 |         If ``nmesh`` is ``None``, it is set to the smallest even integer(s) larger than or equal to ``boxsize / cellsize``,
51 |         and ``boxsize`` is then reset to ``nmesh * cellsize``.
52 | 
53 |     positions : (list of) (N, 3) arrays, default=None
54 |         If ``boxsize`` and / or ``boxcenter`` is ``None``, use this (list of) position arrays
55 |         to determine ``boxsize`` and / or ``boxcenter``.
56 | 
57 |     boxpad : float, default=1.5
58 |         When ``boxsize`` is determined from ``positions``, take ``boxpad`` times the smallest box enclosing ``positions`` as ``boxsize``.
59 | 
60 |     check : bool, default=True
61 |         If ``True``, and input ``positions`` (if provided) are not contained in the box, raise a :class:`ValueError`.
62 | 
63 |     select_nmesh : callable, default=None
64 |         Function that takes in a 3-array ``nmesh``, and returns the 3-array ``nmesh``.
65 |         Used by :class:`MultiGridReconstruction` to select mesh sizes compatible with the algorithm.
66 | 
67 |     mpicomm : MPI communicator, default=MPI.COMM_WORLD
68 |         The MPI communicator.
69 | 
70 |     Returns
71 |     -------
72 |     nmesh : array of shape (3,)
73 |         Mesh size, i.e. number of mesh nodes along each axis.
74 | 
75 |     boxsize : array
76 |         Physical size of the box.
77 | 
78 |     boxcenter : array
79 |         Box center.
80 | """ 81 | if boxsize is None or boxcenter is None or check: 82 | if positions is None: 83 | raise ValueError('positions must be provided if boxsize and boxcenter are not specified, or check is True') 84 | if not isinstance(positions, (tuple, list)): 85 | positions = [positions] 86 | positions = [pos for pos in positions if pos is not None] 87 | # Find bounding coordinates 88 | if mpicomm.allreduce(sum(pos.shape[0] for pos in positions)) <= 1: 89 | raise ValueError('<= 1 particles found; cannot infer boxsize or boxcenter') 90 | pos_min, pos_max = _get_box(*positions) 91 | pos_min, pos_max = np.min(mpicomm.allgather(pos_min), axis=0), np.max(mpicomm.allgather(pos_max), axis=0) 92 | delta = np.abs(pos_max - pos_min) 93 | if boxcenter is None: boxcenter = 0.5 * (pos_min + pos_max) 94 | if boxsize is None: 95 | if cellsize is not None and nmesh is not None: 96 | boxsize = nmesh * cellsize 97 | else: 98 | boxsize = delta * boxpad 99 | if check and (boxsize < delta).any(): 100 | raise ValueError('boxsize {} too small to contain all data (max {})'.format(boxsize, delta)) 101 | 102 | boxsize = _make_array(boxsize, 3, dtype='f8') 103 | if nmesh is None: 104 | if cellsize is not None: 105 | cellsize = _make_array(cellsize, 3, dtype='f8') 106 | nmesh = boxsize / cellsize 107 | nmesh = np.ceil(nmesh).astype('i8') 108 | nmesh += nmesh % 2 # to make it even 109 | if select_nmesh is not None: nmesh = select_nmesh(nmesh) 110 | boxsize = nmesh * cellsize # enforce exact cellsize 111 | else: 112 | raise ValueError('nmesh (or cellsize) must be specified') 113 | nmesh = _make_array(nmesh, 3, dtype='i4') 114 | if select_nmesh is not None: 115 | recommended_nmesh = select_nmesh(nmesh) 116 | if not np.all(recommended_nmesh == nmesh): 117 | import warnings 118 | warnings.warn('Recommended nmesh is {}, provided nmesh is {}'.format(recommended_nmesh, nmesh)) 119 | boxcenter = _make_array(boxcenter, 3, dtype='f8') 120 | if np.any(nmesh % 2): 121 | raise NotImplementedError('Odd sizes not supported by pmesh for now') 122 | return nmesh, boxsize, boxcenter 123 | -------------------------------------------------------------------------------- /pyrecon/mpi.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | from mpi4py import MPI 4 | 5 | 6 | COMM_WORLD = MPI.COMM_WORLD 7 | COMM_SELF = MPI.COMM_SELF 8 | 9 | 10 | def gather(data, mpiroot=0, mpicomm=COMM_WORLD): 11 | """ 12 | Taken from https://github.com/bccp/nbodykit/blob/master/nbodykit/utils.py. 13 | Gather the input data array from all ranks to the specified ``mpiroot``. 14 | This uses ``Gatherv``, which avoids mpi4py pickling, and also 15 | avoids the 2 GB mpi4py limit for bytes using a custom datatype. 16 | 17 | Parameters 18 | ---------- 19 | data : array_like 20 | The data on each rank to gather. 21 | 22 | mpiroot : int, Ellipsis, default=0 23 | The rank number to gather the data to. If mpiroot is Ellipsis or None, 24 | broadcast the result to all ranks. 25 | 26 | mpicomm : MPI communicator, default=MPI.COMM_WORLD 27 | The MPI communicator. 28 | 29 | Returns 30 | ------- 31 | recvbuffer : array_like, None 32 | The gathered data on mpiroot, and `None` otherwise. 
33 | """ 34 | if mpiroot is None: mpiroot = Ellipsis 35 | 36 | if all(mpicomm.allgather(np.isscalar(data))): 37 | if mpiroot is Ellipsis: 38 | return np.array(mpicomm.allgather(data)) 39 | gathered = mpicomm.gather(data, root=mpiroot) 40 | if mpicomm.rank == mpiroot: 41 | return np.array(gathered) 42 | return None 43 | 44 | # Need C-contiguous order 45 | data = np.asarray(data) 46 | shape, dtype = data.shape, data.dtype 47 | data = np.ascontiguousarray(data) 48 | 49 | local_length = data.shape[0] 50 | 51 | # check dtypes and shapes 52 | shapes = mpicomm.allgather(data.shape) 53 | dtypes = mpicomm.allgather(data.dtype) 54 | 55 | # check for structured data 56 | if dtypes[0].char == 'V': 57 | 58 | # check for structured data mismatch 59 | names = set(dtypes[0].names) 60 | if any(set(dt.names) != names for dt in dtypes[1:]): 61 | raise ValueError('mismatch between data type fields in structured data') 62 | 63 | # check for 'O' data types 64 | if any(dtypes[0][name] == 'O' for name in dtypes[0].names): 65 | raise ValueError('object data types ("O") not allowed in structured data in gather') 66 | 67 | # compute the new shape for each rank 68 | newlength = mpicomm.allreduce(local_length) 69 | newshape = list(data.shape) 70 | newshape[0] = newlength 71 | 72 | # the return array 73 | if mpiroot is Ellipsis or mpicomm.rank == mpiroot: 74 | recvbuffer = np.empty(newshape, dtype=dtypes[0], order='C') 75 | else: 76 | recvbuffer = None 77 | 78 | for name in dtypes[0].names: 79 | d = gather(data[name], mpiroot=mpiroot, mpicomm=mpicomm) 80 | if mpiroot is Ellipsis or mpicomm.rank == mpiroot: 81 | recvbuffer[name] = d 82 | 83 | return recvbuffer 84 | 85 | # check for 'O' data types 86 | if dtypes[0] == 'O': 87 | raise ValueError('object data types ("O") not allowed in structured data in gather') 88 | 89 | # check for bad dtypes and bad shapes 90 | if mpiroot is Ellipsis or mpicomm.rank == mpiroot: 91 | bad_shape = any(s[1:] != shapes[0][1:] for s in shapes[1:]) 92 | bad_dtype = any(dt != dtypes[0] for dt in dtypes[1:]) 93 | else: 94 | bad_shape, bad_dtype = None, None 95 | 96 | if mpiroot is not Ellipsis: 97 | bad_shape, bad_dtype = mpicomm.bcast((bad_shape, bad_dtype), root=mpiroot) 98 | 99 | if bad_shape: 100 | raise ValueError('mismatch between shape[1:] across ranks in gather') 101 | if bad_dtype: 102 | raise ValueError('mismatch between dtypes across ranks in gather') 103 | 104 | shape = data.shape 105 | dtype = data.dtype 106 | 107 | # setup the custom dtype 108 | duplicity = np.prod(shape[1:], dtype='intp') 109 | itemsize = duplicity * dtype.itemsize 110 | dt = MPI.BYTE.Create_contiguous(itemsize) 111 | dt.Commit() 112 | 113 | # compute the new shape for each rank 114 | newlength = mpicomm.allreduce(local_length) 115 | newshape = list(shape) 116 | newshape[0] = newlength 117 | 118 | # the return array 119 | if mpiroot is Ellipsis or mpicomm.rank == mpiroot: 120 | recvbuffer = np.empty(newshape, dtype=dtype, order='C') 121 | else: 122 | recvbuffer = None 123 | 124 | # the recv counts 125 | counts = mpicomm.allgather(local_length) 126 | counts = np.array(counts, order='C') 127 | 128 | # the recv offsets 129 | offsets = np.zeros_like(counts, order='C') 130 | offsets[1:] = counts.cumsum()[:-1] 131 | 132 | # gather to mpiroot 133 | if mpiroot is Ellipsis: 134 | mpicomm.Allgatherv([data, dt], [recvbuffer, (counts, offsets), dt]) 135 | else: 136 | mpicomm.Gatherv([data, dt], [recvbuffer, (counts, offsets), dt], root=mpiroot) 137 | 138 | dt.Free() 139 | 140 | return recvbuffer 141 | 142 | 143 | def 
local_size(size, mpicomm=COMM_WORLD): 144 | """ 145 | Divide global ``size`` into local (process) size. 146 | 147 | Parameters 148 | ---------- 149 | size : int 150 | Global size. 151 | 152 | mpicomm : MPI communicator, default=MPI.COMM_WORLD 153 | The MPI communicator. 154 | 155 | Returns 156 | ------- 157 | localsize : int 158 | Local size. Sum of local sizes over all processes equals global size. 159 | """ 160 | start = mpicomm.rank * size // mpicomm.size 161 | stop = (mpicomm.rank + 1) * size // mpicomm.size 162 | return stop - start 163 | 164 | 165 | def scatter(data, size=None, mpiroot=0, mpicomm=COMM_WORLD): 166 | """ 167 | Taken from https://github.com/bccp/nbodykit/blob/master/nbodykit/utils.py 168 | Scatter the input data array across all ranks, assuming ``data`` is 169 | initially only on `mpiroot` (and `None` on other ranks). 170 | This uses ``Scatterv``, which avoids mpi4py pickling, and also 171 | avoids the 2 GB mpi4py limit for bytes using a custom datatype 172 | 173 | Parameters 174 | ---------- 175 | data : array_like or None 176 | On `mpiroot`, this gives the data to split and scatter. 177 | 178 | size : int 179 | Length of data on current rank. 180 | 181 | mpiroot : int, default=0 182 | The rank number that initially has the data. 183 | 184 | mpicomm : MPI communicator, default=MPI.COMM_WORLD 185 | The MPI communicator. 186 | 187 | Returns 188 | ------- 189 | recvbuffer : array_like 190 | The chunk of ``data`` that each rank gets. 191 | """ 192 | counts = None 193 | if size is not None: 194 | counts = np.asarray(mpicomm.allgather(size), order='C') 195 | 196 | if mpicomm.rank == mpiroot: 197 | # Need C-contiguous order 198 | data = np.ascontiguousarray(data) 199 | shape_and_dtype = (data.shape, data.dtype) 200 | else: 201 | shape_and_dtype = None 202 | 203 | # each rank needs shape/dtype of input data 204 | shape, dtype = mpicomm.bcast(shape_and_dtype, root=mpiroot) 205 | 206 | # object dtype is not supported 207 | fail = False 208 | if dtype.char == 'V': 209 | fail = any(dtype[name] == 'O' for name in dtype.names) 210 | else: 211 | fail = dtype == 'O' 212 | if fail: 213 | raise ValueError('"object" data type not supported in scatter; please specify specific data type') 214 | 215 | # initialize empty data on non-mpiroot ranks 216 | if mpicomm.rank != mpiroot: 217 | np_dtype = np.dtype((dtype, shape[1:])) 218 | data = np.empty(0, dtype=np_dtype) 219 | 220 | # setup the custom dtype 221 | duplicity = np.prod(shape[1:], dtype='intp') 222 | itemsize = duplicity * dtype.itemsize 223 | dt = MPI.BYTE.Create_contiguous(itemsize) 224 | dt.Commit() 225 | 226 | # compute the new shape for each rank 227 | newshape = list(shape) 228 | 229 | if counts is None: 230 | newshape[0] = newlength = local_size(shape[0], mpicomm=mpicomm) 231 | else: 232 | if counts.sum() != shape[0]: 233 | raise ValueError('The sum of the `size` needs to be equal to data length') 234 | newshape[0] = counts[mpicomm.rank] 235 | 236 | # the return array 237 | recvbuffer = np.empty(newshape, dtype=dtype, order='C') 238 | 239 | # the send counts, if not provided 240 | if counts is None: 241 | counts = mpicomm.allgather(newlength) 242 | counts = np.array(counts, order='C') 243 | 244 | # the send offsets 245 | offsets = np.zeros_like(counts, order='C') 246 | offsets[1:] = counts.cumsum()[:-1] 247 | 248 | # do the scatter 249 | mpicomm.Barrier() 250 | mpicomm.Scatterv([data, (counts, offsets), dt], [recvbuffer, dt], root=mpiroot) 251 | dt.Free() 252 | return recvbuffer 253 | 254 | 255 | def send(data, dest, tag=0, 
mpicomm=COMM_WORLD): 256 | """ 257 | Send input array ``data`` to process ``dest``. 258 | 259 | Parameters 260 | ---------- 261 | data : array 262 | Array to send. 263 | 264 | dest : int 265 | Rank of process to send array to. 266 | 267 | tag : int, default=0 268 | Message identifier. 269 | 270 | mpicomm : MPI communicator, default=None 271 | Communicator. Defaults to current communicator. 272 | """ 273 | data = np.asarray(data) 274 | shape, dtype = (data.shape, data.dtype) 275 | data = np.ascontiguousarray(data) 276 | 277 | fail = False 278 | if dtype.char == 'V': 279 | fail = any(dtype[name] == 'O' for name in dtype.names) 280 | else: 281 | fail = dtype == 'O' 282 | if fail: 283 | raise ValueError('"object" data type not supported in send; please specify specific data type') 284 | 285 | duplicity = np.prod(shape[1:], dtype='intp') 286 | itemsize = duplicity * dtype.itemsize 287 | dt = MPI.BYTE.Create_contiguous(itemsize) 288 | dt.Commit() 289 | 290 | mpicomm.send((shape, dtype), dest=dest, tag=tag) 291 | mpicomm.Send([data, dt], dest=dest, tag=tag) 292 | dt.Free() 293 | 294 | 295 | def recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, mpicomm=COMM_WORLD): 296 | """ 297 | Receive array from process ``source``. 298 | 299 | Parameters 300 | ---------- 301 | source : int, default=MPI.ANY_SOURCE 302 | Rank of process to receive array from. 303 | 304 | tag : int, default=MPI.ANY_TAG 305 | Message identifier. 306 | 307 | mpicomm : MPI communicator, default=None 308 | Communicator. Defaults to current communicator. 309 | 310 | Returns 311 | ------- 312 | data : array 313 | """ 314 | shape, dtype = mpicomm.recv(source=source, tag=tag) 315 | data = np.zeros(shape, dtype=dtype) 316 | 317 | duplicity = np.prod(shape[1:], dtype='intp') 318 | itemsize = duplicity * dtype.itemsize 319 | dt = MPI.BYTE.Create_contiguous(itemsize) 320 | dt.Commit() 321 | 322 | mpicomm.Recv([data, dt], source=source, tag=tag) 323 | dt.Free() 324 | return data 325 | -------------------------------------------------------------------------------- /pyrecon/multigrid.py: -------------------------------------------------------------------------------- 1 | """Re-implementation of Martin J. White's reconstruction code.""" 2 | 3 | import numpy as np 4 | 5 | from .recon import BaseReconstruction, ReconstructionError, format_positions_wrapper 6 | from . import _multigrid, utils, mpi 7 | 8 | 9 | class OriginalMultiGridReconstruction(BaseReconstruction): 10 | """ 11 | :mod:`ctypes`-based implementation for Martin J. White's reconstruction code, 12 | using full multigrid V-cycle based on damped Jacobi iteration. 13 | We re-implemented https://github.com/martinjameswhite/recon_code/blob/master/multigrid.cpp, allowing for non-cubic (rectangular) mesh. 14 | Numerical agreement in the Zeldovich displacements between original code and this re-implementation is numerical precision (absolute and relative difference of 1e-10). 15 | To test this, change float to double and increase precision in io.cpp/write_data in the original code. 
16 |     """
17 |     _compressed = True
18 | 
19 |     @staticmethod
20 |     def _select_nmesh(nmesh):
21 |         # Return mesh size, equal to or larger than nmesh, that can be written as 2**n, 3 * 2**n, 5 * 2**n or 7 * 2**n
22 |         toret = []
23 |         for n in nmesh:
24 |             nbits = int(n).bit_length() - 1
25 |             ntries = [2**nbits, 3 * 2**(nbits - 1), 5 * 2**(nbits - 2), 7 * 2**(nbits - 2), 2**(nbits + 1)]
26 |             mindiff, iclosest = n, None
27 |             for itry, ntry in enumerate(ntries):
28 |                 diff = ntry - n
29 |                 if diff >= 0 and diff < mindiff:
30 |                     mindiff, iclosest = diff, itry
31 |             toret.append(ntries[iclosest])
32 |         return np.array(toret, dtype='i8')
33 | 
34 |     def __init__(self, *args, mpicomm=mpi.COMM_WORLD, **kwargs):
35 |         # We require a split along axis x.
36 |         super(OriginalMultiGridReconstruction, self).__init__(*args, decomposition=(mpicomm.size, 1), mpicomm=mpicomm, **kwargs)
37 | 
38 |     def set_density_contrast(self, ran_min=0.75, smoothing_radius=15., **kwargs):
39 |         r"""
40 |         Set :math:`\delta` field :attr:`mesh_delta` from data and randoms fields :attr:`mesh_data` and :attr:`mesh_randoms`.
41 | 
42 |         Note
43 |         ----
44 |         This method follows Martin's reconstruction code: we are not satisfied with the ``ran_min`` prescription.
45 |         At least ``ran_min`` should depend on random weights. See also Martin's notes below.
46 | 
47 |         Parameters
48 |         ----------
49 |         ran_min : float, default=0.75
50 |             :attr:`mesh_randoms` points below this threshold have their density contrast set to 0.
51 | 
52 |         smoothing_radius : float, default=15
53 |             Smoothing scale, see :meth:`RealMesh.smooth_gaussian`.
54 | 
55 |         kwargs : dict
56 |             Optional arguments for :meth:`RealMesh.smooth_gaussian`.
57 |         """
58 |         self.smoothing_radius = smoothing_radius
59 |         if self.has_randoms:
60 |             # Martin's notes:
61 |             # We remove any points which have too few randoms for a decent
62 |             # density estimate -- this is "fishy", but it tames some of the
63 |             # worst swings due to 1/eps factors. Better would be an interpolation
64 |             # or a pre-smoothing (or many more randoms).
65 |             # alpha = np.sum(self.mesh_data[mask])/np.sum(self.mesh_randoms[mask])
66 |             # Following two lines are how things are done in original code
67 |             for data, randoms in zip(self.mesh_data.slabs, self.mesh_randoms.slabs):
68 |                 data[(randoms > 0) & (randoms < ran_min)] = 0.
69 |             alpha = self.mesh_data.csum() / self.mpicomm.allreduce(sum(np.sum(randoms[randoms >= ran_min]) for randoms in self.mesh_randoms))
70 |             for data, randoms in zip(self.mesh_data.slabs, self.mesh_randoms.slabs):
71 |                 mask = randoms >= ran_min
72 |                 data[mask] /= alpha * randoms[mask]
73 |                 data[...] -= 1.
74 |                 data[~mask] = 0.
75 |                 data[...] /= self.bias
76 |             self.mesh_delta = self.mesh_data
77 |             del self.mesh_data
78 |             del self.mesh_randoms
79 |             # At this stage also remove the mean, so the source is genuinely mean 0.
80 |             # So as to not disturb the padding regions, we only compute and subtract the mean for the regions with delta != 0.
81 |             mean = self.mesh_delta.csum() / self.mpicomm.allreduce(sum(np.sum(delta != 0.) for delta in self.mesh_delta))
82 |             for delta in self.mesh_delta.slabs:
83 |                 mask = delta != 0.
84 |                 delta[mask] -= mean
85 |         else:
86 |             self.mesh_delta = self.mesh_data / (self.mesh_data.cmean() * self.bias)
87 |             self.mesh_delta -= 1.
/ self.bias 88 | del self.mesh_data 89 | self.mesh_delta = self._smooth_gaussian(self.mesh_delta) 90 | 91 | def _vcycle(self, v, f): 92 | _multigrid.jacobi(v, f, self.boxcenter, self.beta, damping_factor=self.jacobi_damping_factor, niterations=self.jacobi_niterations, los=self.los) 93 | nmesh = v.pm.Nmesh 94 | recurse = np.all((nmesh > 4) & (nmesh % 2 == 0)) 95 | if recurse: 96 | f2h = _multigrid.reduce(_multigrid.residual(v, f, self.boxcenter, self.beta, los=self.los)) 97 | v2h = f2h.pm.create(type='real', value=0.) 98 | self._vcycle(v2h, f2h) 99 | v.value += _multigrid.prolong(v2h).value 100 | _multigrid.jacobi(v, f, self.boxcenter, self.beta, damping_factor=self.jacobi_damping_factor, niterations=self.jacobi_niterations, los=self.los) 101 | 102 | def _fmg(self, f1h): 103 | nmesh = f1h.pm.Nmesh 104 | recurse = np.all((nmesh > 4) & (nmesh % 2 == 0)) 105 | if recurse: 106 | # Recurse to a coarser grid 107 | v1h = _multigrid.prolong(self._fmg(_multigrid.reduce(f1h))) 108 | else: 109 | # Start with a guess of zeros 110 | v1h = f1h.pm.create(type='real', value=0.) 111 | for iter in range(self.vcycle_niterations): 112 | self._vcycle(v1h, f1h) 113 | return v1h 114 | 115 | def run(self, jacobi_damping_factor=0.4, jacobi_niterations=5, vcycle_niterations=6): 116 | """ 117 | Run reconstruction, i.e. set displacement potential attr:`mesh_phi` from :attr:`mesh_delta`. 118 | Default parameter values are the same as in Martin's code. 119 | 120 | Parameters 121 | ---------- 122 | jacobi_damping_factor : float, default=0.4 123 | Damping factor for Jacobi iterations. 124 | 125 | jacobi_niterations : int, default=5 126 | Number of Jacobi iterations. 127 | 128 | vcycle_niterations : int, default=6 129 | Number of V-cycle calls. 130 | """ 131 | self.jacobi_damping_factor = float(jacobi_damping_factor) 132 | self.jacobi_niterations = int(jacobi_niterations) 133 | self.vcycle_niterations = int(vcycle_niterations) 134 | if self.mpicomm.rank == 0: self.log_info('Computing displacement potential.') 135 | self.mesh_phi = self._fmg(self.mesh_delta) 136 | del self.mesh_delta 137 | 138 | @format_positions_wrapper(return_input_type=False) 139 | def read_shifts(self, positions, field='disp+rsd'): 140 | """ 141 | Read displacement at input positions by deriving the computed displacement potential :attr:`mesh_phi` (finite difference scheme). 142 | See :meth:`BaseReconstruction.read_shifts` for input parameters. 143 | """ 144 | field = field.lower() 145 | allowed_fields = ['disp', 'rsd', 'disp+rsd'] 146 | if field not in allowed_fields: 147 | raise ReconstructionError('Unknown field {}. 
Choices are {}'.format(field, allowed_fields))
148 |         shifts = _multigrid.read_finite_difference_cic(self.mesh_phi, positions, self.boxcenter)
149 |         if field == 'disp':
150 |             return shifts
151 |         if self.los is None:
152 |             los = utils.safe_divide(positions, utils.distance(positions)[:, None])
153 |         else:
154 |             los = self.los.astype(shifts.dtype)
155 |         rsd = self.f * np.sum(shifts * los, axis=-1)[:, None] * los
156 |         if field == 'rsd':
157 |             return rsd
158 |         # field == 'disp+rsd'
159 |         shifts += rsd
160 |         return shifts
161 | 
162 | 
163 | class MultiGridReconstruction(OriginalMultiGridReconstruction):
164 | 
165 |     """Any update / test / improvement upon original algorithm."""
166 | 
167 |     def set_density_contrast(self, *args, **kwargs):
168 |         """See :class:`BaseReconstruction.set_density_contrast`."""
169 |         BaseReconstruction.set_density_contrast(self, *args, **kwargs)
170 | 
--------------------------------------------------------------------------------
/pyrecon/plane_parallel_fft.py:
--------------------------------------------------------------------------------
1 | """Implementation of Eisenstein et al. 2007 (https://arxiv.org/pdf/astro-ph/0604362.pdf) algorithm."""
2 | 
3 | 
4 | from .recon import BaseReconstruction, ReconstructionError
5 | 
6 | 
7 | class PlaneParallelFFTReconstruction(BaseReconstruction):
8 |     """
9 |     Implementation of Eisenstein et al. 2007 (https://arxiv.org/pdf/astro-ph/0604362.pdf) algorithm.
10 |     Section 3, paragraph starting with 'Restoring in full the ...'
11 |     """
12 |     _compressed = True
13 | 
14 |     def __init__(self, los=None, **kwargs):
15 |         """
16 |         Initialize :class:`PlaneParallelFFTReconstruction`.
17 | 
18 |         Parameters
19 |         ----------
20 |         los : string, array, default=None
21 |             May be 'x', 'y' or 'z', for one of the Cartesian axes.
22 |             Else, a 3-vector.
23 | 
24 |         kwargs : dict
25 |             See :class:`BaseReconstruction` for parameters.
26 |         """
27 |         super(PlaneParallelFFTReconstruction, self).__init__(los=los, **kwargs)
28 | 
29 |     def set_los(self, los=None):
30 |         """
31 |         Set line-of-sight.
32 | 
33 |         Parameters
34 |         ----------
35 |         los : string, array_like
36 |             May be 'x', 'y' or 'z', for one of the Cartesian axes.
37 |             Else, a 3-vector.
38 |         """
39 |         super(PlaneParallelFFTReconstruction, self).set_los(los=los)
40 |         if self.los is None:
41 |             raise ReconstructionError('A (global) line-of-sight must be provided')
42 | 
43 |     def run(self):
44 |         """Run reconstruction, i.e. compute Zeldovich displacement fields :attr:`mesh_psi`."""
45 |         delta_k = self.mesh_delta.r2c()
46 |         del self.mesh_delta
47 |         psis = []
48 |         for iaxis in range(delta_k.ndim):
49 |             psi = delta_k.copy()
50 |             for kslab, islab, slab in zip(psi.slabs.x, psi.slabs.i, psi.slabs):
51 |                 k2 = sum(kk**2 for kk in kslab)
52 |                 k2[k2 == 0.] = 1.  # avoid dividing by zero
53 |                 mu2 = sum(kk * ll for kk, ll in zip(kslab, self.los))**2 / k2
54 |                 # i = N / 2 is pure complex, we can remove it safely
55 |                 # ... and we have to, because it is turned to real when hermitian symmetry is assumed?
56 |                 mask = islab[iaxis] != self.nmesh[iaxis] // 2
57 |                 slab[...] *= 1j * kslab[iaxis] / k2 / (1.
+ self.beta * mu2) * mask 58 | psis.append(psi.c2r()) 59 | del psi 60 | self.mesh_psi = psis 61 | -------------------------------------------------------------------------------- /pyrecon/tests/config_iterativefft_particle.yaml: -------------------------------------------------------------------------------- 1 | input: 2 | dir: ./_catalogs 3 | pos: Position 4 | data_fn: data.fits 5 | randoms_fn: randoms.fits 6 | weight: ${Weight} 7 | 8 | output: 9 | dir: ./_catalogs 10 | data_fn: data_rec_script.fits 11 | randoms_fn: randoms_rec_script.fits 12 | pos: Position 13 | pos_rec: Position_rec 14 | rdz_rec: [ra_rec, dec_rec, z_rec] 15 | columns: [NZ, Weight] 16 | 17 | algorithm: 18 | name: IterativeFFTParticleReconstruction 19 | convention: RecSym 20 | los: local 21 | # other algorithm-related parameters 22 | 23 | delta: 24 | smoothing_radius: 15 25 | 26 | cosmology: 27 | bias: 2.0 28 | f: 0.8 29 | 30 | mesh: 31 | nmesh: 128 32 | dtype: f8 33 | -------------------------------------------------------------------------------- /pyrecon/tests/config_iterativefft_particle_no_randoms.yaml: -------------------------------------------------------------------------------- 1 | input: 2 | dir: ./_catalogs 3 | pos: RSDPosition 4 | #rdz: [RA, DEC, Z] 5 | data_fn: box_data.fits 6 | 7 | output: 8 | dir: ./_catalogs 9 | data_fn: script_box_data_rec.fits 10 | pos_rec: Position_rec 11 | columns: [RSDPosition] 12 | 13 | algorithm: 14 | name: IterativeFFTParticleReconstruction 15 | convention: RecSym 16 | los: x 17 | 18 | delta: 19 | smoothing_radius: 15 20 | 21 | cosmology: 22 | bias: 2.0 23 | f: 0.8 24 | 25 | mesh: 26 | boxsize: 800. 27 | boxcenter: 400. 28 | nmesh: 128 29 | dtype: f8 30 | fft_plan: 'estimate' 31 | -------------------------------------------------------------------------------- /pyrecon/tests/config_multigrid.yaml: -------------------------------------------------------------------------------- 1 | input: 2 | dir: ./_catalogs 3 | pos: Position 4 | #rdz: [RA, DEC, Z] 5 | data_fn: data.fits 6 | randoms_fn: randoms.fits 7 | weight: ${Weight} 8 | 9 | output: 10 | dir: ./_catalogs 11 | data_fn: data_rec_script.fits 12 | randoms_fn: randoms_rec_script.fits 13 | pos: Position 14 | pos_rec: Position_rec 15 | rdz_rec: [ra_rec, dec_rec, z_rec] 16 | columns: [NZ, Weight] 17 | 18 | algorithm: 19 | name: MultiGridReconstruction 20 | #name: IterativeFFTReconstruction 21 | convention: RecSym 22 | los: local 23 | # other algorithm-related parameters 24 | 25 | delta: 26 | smoothing_radius: 15 27 | 28 | cosmology: 29 | bias: 2.0 30 | f: 0.8 31 | Omega_m: 0.3 32 | 33 | mesh: 34 | nmesh: 128 35 | dtype: f8 36 | -------------------------------------------------------------------------------- /pyrecon/tests/config_multigrid_no_randoms.yaml: -------------------------------------------------------------------------------- 1 | input: 2 | dir: ./_catalogs 3 | pos: Position 4 | #rdz: [RA, DEC, Z] 5 | data_fn: box_data.fits 6 | #randoms_fn: box_data.fits 7 | 8 | output: 9 | dir: ./_catalogs 10 | data_fn: script_box_data_rec.fits 11 | pos_rec: Position_rec 12 | columns: [Position] 13 | columns_randoms: [Position] 14 | 15 | algorithm: 16 | name: MultiGridReconstruction 17 | #name: IterativeFFTReconstruction 18 | convention: RecSym 19 | los: x 20 | # other algorithm-related parameters 21 | 22 | delta: 23 | smoothing_radius: 15 24 | selection_function: uniform 25 | 26 | cosmology: 27 | bias: 2.0 28 | f: 0.8 29 | 30 | mesh: 31 | boxsize: 800. 32 | boxcenter: 0. 
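  # Editor's note (comment added, not in the original config): with boxsize 800
  # and nmesh 128 below, the implied cell size is 800 / 128 = 6.25 in the same
  # length units, comfortably below the smoothing_radius of 15 set above.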
33 | nmesh: 128 34 | dtype: f8 35 | -------------------------------------------------------------------------------- /pyrecon/tests/test_iterative_fft.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | from pyrecon import IterativeFFTReconstruction 4 | from pyrecon.utils import MemoryMonitor 5 | from utils import get_random_catalog, Catalog, test_mpi 6 | 7 | 8 | def test_mem(): 9 | data = get_random_catalog(seed=42) 10 | randoms = get_random_catalog(seed=84) 11 | with MemoryMonitor() as mem: 12 | recon = IterativeFFTReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=256, dtype='f8') 13 | mem('init') 14 | recon.assign_data(data['Position'], data['Weight']) 15 | mem('data') 16 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 17 | mem('randoms') 18 | recon.set_density_contrast() 19 | mem('delta') 20 | recon.run() 21 | mem('recon') # 3 meshes 22 | 23 | 24 | def test_dtype(): 25 | data = get_random_catalog(seed=42) 26 | randoms = get_random_catalog(seed=81) 27 | for los in [None, 'x']: 28 | all_shifts = [] 29 | for dtype in ['f4', 'f8']: 30 | dtype = np.dtype(dtype) 31 | itemsize = np.empty(0, dtype=dtype).real.dtype.itemsize 32 | recon = IterativeFFTReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=64, los=los, dtype=dtype) 33 | recon.assign_data(data['Position'], data['Weight']) 34 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 35 | recon.set_density_contrast() 36 | assert recon.mesh_delta.dtype.itemsize == itemsize 37 | recon.run() 38 | assert recon.mesh_psi[0].dtype.itemsize == itemsize 39 | all_shifts2 = [] 40 | for dtype2 in ['f4', 'f8']: 41 | dtype2 = np.dtype(dtype2) 42 | shifts = recon.read_shifts(data['Position'].astype(dtype2), field='disp+rsd') 43 | assert shifts.dtype.itemsize == dtype2.itemsize 44 | all_shifts2.append(shifts) 45 | if dtype2 == dtype: all_shifts.append(shifts) 46 | assert np.allclose(*all_shifts2, atol=1e-2, rtol=1e-2) 47 | 48 | 49 | def test_wrap(): 50 | size = 100000 51 | boxsize = 1000 52 | for boxcenter in [-500, 0, 500]: 53 | data = get_random_catalog(size, boxsize, seed=42) 54 | # set one of the data positions to be outside the fiducial box by hand 55 | data['Position'][-1] = np.array([boxsize, boxsize, boxsize]) + 1 56 | data['Position'] += boxcenter 57 | randoms = get_random_catalog(size, boxsize, seed=42) 58 | # set one of the random positions to be outside the fiducial box by hand 59 | randoms['Position'][-1] = np.array([0, 0, 0]) - 1 60 | randoms['Position'] += boxcenter 61 | recon = IterativeFFTReconstruction(f=0.8, bias=2, los='z', boxsize=boxsize, boxcenter=boxcenter, nmesh=64, wrap=True) 62 | # following steps should run without error if wrapping is correctly implemented 63 | recon.assign_data(data['Position'], data['Weight']) 64 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 65 | recon.set_density_contrast() 66 | recon.run() 67 | 68 | # following steps test the implementation coded into standalone pyrecon code 69 | for field in ['rsd', 'disp', 'disp+rsd']: 70 | shifts = recon.read_shifts(data['Position'], field=field) 71 | diff = data['Position'] - shifts 72 | positions_rec = (diff - recon.offset) % recon.boxsize + recon.offset 73 | assert np.all(positions_rec >= boxcenter - boxsize / 2.) and np.all(positions_rec <= boxcenter + boxsize / 2.) 
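            # Editor's note (illustration, not in the original test): the wrap above
            # maps any x into [offset, offset + boxsize); e.g. with offset = -500 and
            # boxsize = 1000, x = 510 -> (510 + 500) % 1000 - 500 = -490.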
74 | assert np.allclose(recon.read_shifted_positions(data['Position'], field=field), positions_rec) 75 | 76 | 77 | def test_ref(data_fn, randoms_fn, data_fn_rec=None, randoms_fn_rec=None): 78 | boxsize = 1200. 79 | boxcenter = [1754, 0, 0] 80 | data = Catalog.read(data_fn) 81 | randoms = Catalog.read(randoms_fn) 82 | recon = IterativeFFTReconstruction(f=0.8, bias=2., los=None, boxcenter=boxcenter, boxsize=boxsize, nmesh=128, dtype='f8') 83 | recon.assign_data(data['Position'], data['Weight']) 84 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 85 | recon.set_density_contrast() 86 | recon.mesh_delta += 10. 87 | recon.run(niterations=3) 88 | 89 | from pypower import CatalogFFTPower 90 | from matplotlib import pyplot as plt 91 | 92 | for cat, fn in zip([data, randoms], [data_fn_rec, randoms_fn_rec]): 93 | rec = recon.read_shifted_positions(cat['Position']) 94 | if 'Position_rec' in cat: 95 | if recon.mpicomm.rank == 0: print('Checking...') 96 | assert np.allclose(rec, cat['Position_rec']) 97 | else: 98 | cat['Position_rec'] = rec 99 | if fn is not None: 100 | cat.write(fn) 101 | 102 | kwargs = dict(edges={'min': 0., 'step': 0.01}, ells=(0, 2, 4), boxsize=1000., nmesh=64, resampler='tsc', interlacing=3, position_type='pos') 103 | power = CatalogFFTPower(data_positions1=data['Position'], randoms_positions1=randoms['Position'], **kwargs) 104 | poles = power.poles 105 | power = CatalogFFTPower(data_positions1=data['Position_rec'], randoms_positions1=randoms['Position_rec'], **kwargs) 106 | poles_rec = power.poles 107 | 108 | for ill, ell in enumerate(poles.ells): 109 | plt.plot(poles.k, poles.k * poles(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='-') 110 | plt.plot(poles_rec.k, poles_rec.k * poles_rec(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='--') 111 | 112 | if power.mpicomm.rank == 0: 113 | plt.show() 114 | 115 | 116 | if __name__ == '__main__': 117 | 118 | from utils import data_fn, randoms_fn, catalog_rec_fn 119 | from pyrecon.utils import setup_logging 120 | 121 | setup_logging() 122 | # Run utils.py to generate catalogs needed for these tests 123 | # test_mem() 124 | test_dtype() 125 | test_wrap() 126 | test_mpi(IterativeFFTReconstruction) 127 | data_fn_rec, randoms_fn_rec = [catalog_rec_fn(fn, 'iterative_fft') for fn in [data_fn, randoms_fn]] 128 | # test_ref(data_fn, randoms_fn, data_fn_rec, randoms_fn_rec) 129 | test_ref(data_fn_rec, randoms_fn_rec, None, None) 130 | -------------------------------------------------------------------------------- /pyrecon/tests/test_iterative_fft_particle.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import subprocess 4 | import time 5 | import importlib 6 | 7 | import numpy as np 8 | 9 | from pyrecon.iterative_fft_particle import OriginalIterativeFFTParticleReconstruction, IterativeFFTParticleReconstruction 10 | from pyrecon.utils import distance 11 | from pyrecon import mpi 12 | from utils import get_random_catalog, Catalog, test_mpi 13 | 14 | 15 | def test_mem(): 16 | data = get_random_catalog(seed=42) 17 | randoms = get_random_catalog(seed=84) 18 | from pyrecon.utils import MemoryMonitor 19 | with MemoryMonitor() as mem: 20 | recon = IterativeFFTParticleReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=256, dtype='f8') 21 | mem('init') 22 | recon.assign_data(data['Position'], data['Weight']) 23 | mem('data') 24 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 25 | mem('randoms') 26 | 
recon.set_density_contrast() 27 | mem('delta') 28 | recon.run() 29 | mem('recon') # 3 meshes 30 | 31 | 32 | def test_dtype(): 33 | data = get_random_catalog(seed=42) 34 | randoms = get_random_catalog(seed=84) 35 | for los in [None, 'x']: 36 | all_shifts = [] 37 | for dtype in ['f4', 'f8']: 38 | dtype = np.dtype(dtype) 39 | itemsize = np.empty(0, dtype=dtype).real.dtype.itemsize 40 | recon = IterativeFFTParticleReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=64, los=los, dtype=dtype) 41 | positions_bak, weights_bak = data['Position'].copy(), data['Weight'].copy() 42 | recon.assign_data(data['Position'], data['Weight']) 43 | assert np.allclose(data['Position'], positions_bak) 44 | assert np.allclose(data['Weight'], weights_bak) 45 | positions_bak, weights_bak = randoms['Position'].copy(), randoms['Weight'].copy() 46 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 47 | assert np.allclose(randoms['Position'], positions_bak) 48 | assert np.allclose(randoms['Weight'], weights_bak) 49 | recon.set_density_contrast() 50 | assert recon.mesh_delta.dtype.itemsize == itemsize 51 | recon.run() 52 | assert recon.mesh_psi[0].dtype.itemsize == itemsize 53 | all_shifts2 = [] 54 | for dtype2 in ['f4', 'f8']: 55 | dtype2 = np.dtype(dtype2) 56 | shifts = recon.read_shifts(data['Position'].astype(dtype2), field='disp+rsd') 57 | assert shifts.dtype.itemsize == dtype2.itemsize 58 | all_shifts2.append(shifts) 59 | if dtype2 == dtype: all_shifts.append(shifts) 60 | assert np.allclose(*all_shifts2, atol=1e-2, rtol=1e-2) 61 | assert np.allclose(*all_shifts, atol=1e-2, rtol=1e-2) 62 | 63 | 64 | def test_wrap(): 65 | size = 100000 66 | boxsize = 1000 67 | for boxcenter in [-500, 0, 500]: 68 | data = get_random_catalog(size, boxsize, seed=42) 69 | # set one of the data positions to be outside the fiducial box by hand 70 | data['Position'][-1] = np.array([boxsize, boxsize, boxsize]) + 1 71 | data['Position'] += boxcenter 72 | randoms = get_random_catalog(size, boxsize, seed=42) 73 | # set one of the random positions to be outside the fiducial box by hand 74 | randoms['Position'][-1] = np.array([0, 0, 0]) - 1 75 | randoms['Position'] += boxcenter 76 | recon = IterativeFFTParticleReconstruction(f=0.8, bias=2, los='z', boxsize=boxsize, boxcenter=boxcenter, nmesh=64, wrap=True) 77 | # following steps should run without error if wrapping is correctly implemented 78 | recon.assign_data(data['Position'], data['Weight']) 79 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 80 | recon.set_density_contrast() 81 | recon.run() 82 | 83 | # following steps test the implementation coded into standalone pyrecon code 84 | for field in ['rsd', 'disp', 'disp+rsd']: 85 | shifts = recon.read_shifts('data', field=field) 86 | diff = data['Position'] - shifts 87 | positions_rec = (diff - recon.offset) % recon.boxsize + recon.offset 88 | assert np.all(positions_rec >= boxcenter - boxsize / 2.) and np.all(positions_rec <= boxcenter + boxsize / 2.) 89 | assert np.allclose(recon.read_shifted_positions('data', field=field), positions_rec) 90 | 91 | 92 | def test_no_nrandoms(): 93 | boxsize = 1000. 94 | data = get_random_catalog(boxsize=boxsize, seed=42) 95 | recon = IterativeFFTParticleReconstruction(f=0.8, bias=2., los='x', boxcenter=0., boxsize=boxsize, nmesh=8, dtype='f8', wrap=True) 96 | recon.assign_data(data['Position'], data['Weight']) 97 | assert not recon.has_randoms 98 | recon.set_density_contrast() 99 | assert np.allclose(recon.mesh_delta.cmean(), 0.) 
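    # Editor's note (comment added): without randoms, set_density_contrast computes
    # delta = data / (bias * cmean(data)) - 1 / bias, whose global mean is zero by
    # construction; that is exactly what the cmean() assertion above checks.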
100 | recon.run() 101 | assert np.all(np.abs(recon.read_shifts(data['Position'])) < 5.) 102 | 103 | 104 | def test_los(): 105 | boxsize = 1000. 106 | boxcenter = [0] * 3 107 | data = get_random_catalog(boxsize=boxsize, seed=42) 108 | randoms = get_random_catalog(boxsize=boxsize, seed=84) 109 | 110 | def get_shifts(los='x'): 111 | recon = IterativeFFTParticleReconstruction(f=0.8, bias=2., los='x', boxcenter=boxcenter, boxsize=boxsize, nmesh=64, dtype='f8', wrap=True) 112 | recon.assign_data(data['Position'], data['Weight']) 113 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 114 | recon.set_density_contrast() 115 | recon.run() 116 | return recon.read_shifts(data['Position'], field='disp+rsd') 117 | 118 | shifts_global = get_shifts(los='x') 119 | offset = 1e8 120 | boxcenter[0] += offset 121 | data['Position'][:, 0] += offset 122 | randoms['Position'][:, 0] += offset 123 | shifts_local = get_shifts(los=None) 124 | assert np.allclose(shifts_local, shifts_global, rtol=1e-3, atol=1e-3) 125 | 126 | 127 | def test_eboss(data_fn, randoms_fn): 128 | # here path to reference Julian's code: https://github.com/julianbautista/eboss_clustering/blob/master/python (python setup.py build_ext --inplace) 129 | sys.path.insert(0, '../../../../reconstruction/eboss_clustering/python') 130 | from cosmo import CosmoSimple 131 | from recon import Recon 132 | nmesh = 128 133 | smooth = 15. 134 | f = 0.81 135 | bias = 2.0 136 | Omega_m = 0.3 137 | boxpad = 200 138 | nthreads = 4 139 | niter = 3 140 | data = Catalog.read(data_fn) 141 | randoms = Catalog.read(randoms_fn) 142 | 143 | cosmo = CosmoSimple(omega_m=Omega_m) 144 | from pyrecon.utils import cartesian_to_sky 145 | for catalog in [data, randoms]: 146 | catalog['Z'], catalog['RA'], catalog['DEC'] = cartesian_to_sky(catalog['Position']) 147 | catalog['Z'] = cosmo.get_redshift(catalog['Z']) 148 | for field in catalog: 149 | if catalog[field].dtype.byteorder == '>': 150 | catalog[field] = catalog[field].byteswap().newbyteorder() 151 | data_radecz = [data.cget(name, mpiroot=0) for name in ['RA', 'DEC', 'Z', 'Weight']] 152 | randoms_radecz = [randoms.cget(name, mpiroot=0) for name in ['RA', 'DEC', 'Z', 'Weight']] 153 | data_weights, randoms_weights = data.cget('Weight', mpiroot=0), randoms.cget('Weight', mpiroot=0) 154 | data_positions, randoms_positions = None, None 155 | boxsize, boxcenter = None, None 156 | mpicomm = data.mpicomm 157 | if mpicomm.rank == 0: 158 | recon_ref = Recon(*data_radecz, *randoms_radecz, nbins=nmesh, smooth=smooth, f=f, bias=bias, padding=boxpad, nthreads=nthreads) 159 | data_positions = np.array([recon_ref.dat.x, recon_ref.dat.y, recon_ref.dat.z]).T 160 | randoms_positions = np.array([recon_ref.ran.x, recon_ref.ran.y, recon_ref.ran.z]).T 161 | for i in range(niter): 162 | recon_ref.iterate(i) 163 | recon_ref.apply_shifts_full() 164 | shifts_data_ref = np.array([getattr(recon_ref.dat, x) - getattr(recon_ref.dat, 'new{}'.format(x)) for x in 'xyz']).T 165 | shifts_randoms_ref = np.array([getattr(recon_ref.ran, x) - getattr(recon_ref.ran, 'new{}'.format(x)) for x in 'xyz']).T 166 | recon_ref.apply_shifts_rsd() 167 | recon_ref.apply_shifts_full() 168 | shifts_randoms_rsd_ref = np.array([getattr(recon_ref.ran, x) - getattr(recon_ref.ran, 'new{}'.format(x)) for x in 'xyz']).T 169 | # recon_ref.summary() 170 | boxsize = recon_ref.binsize * recon_ref.nbins 171 | boxcenter = np.array([getattr(recon_ref, '{}min'.format(x)) for x in 'xyz']) + boxsize / 2. 
172 |     print('')
173 |     print('#' * 50)
174 |     print('')
175 |     f, bias = mpicomm.bcast((recon_ref.f, recon_ref.bias) if mpicomm.rank == 0 else None, root=0)
176 |     boxsize, boxcenter = mpicomm.bcast((boxsize, boxcenter), root=0)
177 |     recon = OriginalIterativeFFTParticleReconstruction(f=f, bias=bias, boxsize=boxsize, boxcenter=boxcenter, nmesh=nmesh)
178 |     recon.assign_data(data_positions, data_weights, mpiroot=0)
179 |     recon.assign_randoms(randoms_positions, randoms_weights, mpiroot=0)
180 |     recon.set_density_contrast(smoothing_radius=smooth)
181 |     recon.run(niterations=niter)
182 |     shifts_data = recon.read_shifts('data', field='disp+rsd', mpiroot=0)
183 |     shifts_randoms = recon.read_shifts(randoms_positions, field='disp', mpiroot=0)
184 |     shifts_randoms_rsd = recon.read_shifts(randoms_positions, field='disp+rsd', mpiroot=0)
185 |     if recon.mpicomm.rank == 0:
186 |         # print(np.abs(np.diff(shifts_data-shifts_data_ref)).max(),np.abs(np.diff(shifts_randoms-shifts_randoms_ref)).max())
187 |         print('abs test - ref', np.max(distance(shifts_data - shifts_data_ref)))
188 |         print('rel test - ref', np.max(distance(shifts_data - shifts_data_ref) / distance(shifts_data_ref)))
189 |         assert np.allclose(shifts_data, shifts_data_ref, rtol=1e-7, atol=1e-7)
190 |         assert np.allclose(shifts_randoms, shifts_randoms_ref, rtol=1e-7, atol=1e-7)
191 |         assert np.allclose(shifts_randoms_rsd, shifts_randoms_rsd_ref, rtol=1e-7, atol=1e-7)
192 | 
193 | 
194 | def test_revolver(data_fn, randoms_fn=None):
195 | 
196 |     # path to Seshadri Nadathur's reference Revolver code: https://github.com/seshnadathur/Revolver (built with python python_tools/setup.py build_ext --inplace)
197 |     sys.path.insert(0, '../../../../reconstruction/Revolver')
198 |     spec = importlib.util.spec_from_file_location('name', '../../../../reconstruction/Revolver/parameters/default_params.py')
199 |     parms = importlib.util.module_from_spec(spec)
200 |     spec.loader.exec_module(parms)
201 |     parms.f = 0.8
202 |     parms.bias = 1.4
203 |     parms.verbose = False
204 |     parms.nbins = 256
205 |     isbox = parms.is_box = randoms_fn is None
206 |     parms.nthreads = 4
207 |     boxsize = 800.
208 |     parms.box_length = boxsize
209 |     niter = 3
210 | 
211 |     data = Catalog.read(data_fn)
212 | 
213 |     if isbox:
214 |         randoms = data
215 |     else:
216 |         randoms = Catalog.read(randoms_fn)
217 | 
218 |     for catalog in [data, randoms]:
219 |         for field in catalog:
220 |             if catalog[field].dtype.byteorder == '>':
221 |                 catalog[field] = np.array(catalog[field].byteswap().newbyteorder(), dtype='f8')
222 |         if isbox:
223 |             catalog['Position'] += boxsize / 2.
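            # positions are box-centered on zero here; the shift above and the wrap below
            # map them into [0, boxsize), presumably as assumed by the Revolver box setup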
224 | catalog['Position'] %= boxsize 225 | catalog['Distance'] = distance(catalog['Position']) 226 | catalog['Weight'] = catalog.get('Weight', catalog.ones()) 227 | 228 | class SimpleCatalog(object): 229 | 230 | def __init__(self, **kwargs): 231 | self.__dict__.update(kwargs) 232 | 233 | cdata = data.gather(mpiroot=0) 234 | crandoms = randoms.gather(mpiroot=0) 235 | mpicomm = data.mpicomm 236 | 237 | los, boxcenter, smooth = None, None, None 238 | if mpicomm.rank == 0: 239 | datacat = SimpleCatalog(**{axis: cdata['Position'][:, iaxis] for iaxis, axis in enumerate('xyz')}, 240 | **{'new{}'.format(axis): cdata['Position'][:, iaxis].copy() for iaxis, axis in enumerate('xyz')}, 241 | dist=cdata['Distance'], weight=cdata['Weight'], size=len(cdata['Position']), box_length=boxsize) 242 | 243 | rancat = SimpleCatalog(**{axis: crandoms['Position'][:, iaxis] for iaxis, axis in enumerate('xyz')}, 244 | **{'new{}'.format(axis): crandoms['Position'][:, iaxis].copy() for iaxis, axis in enumerate('xyz')}, 245 | dist=crandoms['Distance'], weight=crandoms['Weight'], size=len(crandoms['Position']), box_length=boxsize) 246 | 247 | from python_tools.recon import Recon as Revolver 248 | t0 = time.time() 249 | recon_ref = Revolver(datacat, ran=rancat, parms=parms) 250 | # if isbox: 251 | # recon_ref.ran = recon_ref.cat # fudge to prevent error in call to apply_shifts_full 252 | for i in range(niter): 253 | recon_ref.iterate(i, debug=True) 254 | 255 | # save the full shifted version of the catalogue 256 | recon_ref.apply_shifts_full() 257 | print('Revolver completed in {:.2f} s'.format(time.time() - t0), flush=True) 258 | if isbox: 259 | shifts_data_ref = np.array([getattr(recon_ref.cat, x) - getattr(recon_ref.cat, 'new{}'.format(x)) for x in 'xyz']).T % boxsize 260 | # shifts_data_ref = np.array([getattr(recon_ref.cat,'new{}'.format(x)) for x in 'xyz']).T 261 | else: 262 | shifts_data_ref = np.array([getattr(recon_ref.cat, x) - getattr(recon_ref.cat, 'new{}'.format(x)) for x in 'xyz']).T 263 | shifts_randoms_ref = np.array([getattr(recon_ref.ran, x) - getattr(recon_ref.ran, 'new{}'.format(x)) for x in 'xyz']).T 264 | 265 | if isbox: 266 | los = 'z' 267 | boxcenter = boxsize / 2. 268 | else: 269 | los = None 270 | boxsize = recon_ref.binsize * recon_ref.nbins 271 | boxcenter = np.array([getattr(recon_ref, '{}min'.format(x)) for x in 'xyz']) + boxsize / 2. 
272 | smooth = recon_ref.smooth 273 | 274 | print('') 275 | print('#' * 50) 276 | print('') 277 | 278 | f, bias = mpicomm.bcast((recon_ref.f, recon_ref.bias) if mpicomm.rank == 0 else None, root=0) 279 | nmesh = mpicomm.bcast(recon_ref.nbins if mpicomm.rank == 0 else None, root=0) 280 | boxsize, boxcenter, los, smooth = mpicomm.bcast((boxsize, boxcenter, los, smooth), root=0) 281 | t0 = time.time() 282 | recon = OriginalIterativeFFTParticleReconstruction(f=f, bias=bias, boxsize=boxsize, boxcenter=boxcenter, nmesh=nmesh, los=los, wrap=True) 283 | # recon = OriginalIterativeFFTParticleReconstruction(f=recon_ref.f, bias=recon_ref.bias, boxsize=boxsize, boxcenter=boxcenter, nmesh=recon_ref.nbins, los=los) 284 | recon.assign_data(data['Position'], data['Weight']) 285 | if not isbox: 286 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 287 | recon.set_density_contrast(smoothing_radius=smooth) 288 | recon.run(niterations=niter) 289 | 290 | if mpicomm.rank == 0: 291 | print('pyrecon completed in {:.2f} s'.format(time.time() - t0), flush=True) 292 | if isbox: 293 | shifts_data = mpi.gather(recon.read_shifts('data', field='disp+rsd') % boxsize, mpicomm=mpicomm, mpiroot=0) 294 | # shifts_data = (data['Position'] - shifts_data) #% boxsize 295 | else: 296 | shifts_data = mpi.gather(recon.read_shifts('data', field='disp+rsd'), mpicomm=mpicomm, mpiroot=0) 297 | shifts_randoms = mpi.gather(recon.read_shifts(randoms['Position'], field='disp'), mpicomm=mpicomm, mpiroot=0) 298 | 299 | if mpicomm.rank == 0: 300 | print(shifts_data_ref.min(), shifts_data_ref.max()) 301 | print(shifts_data.min(), shifts_data.max()) 302 | 303 | print('abs test - ref', np.max(distance(shifts_data - shifts_data_ref))) 304 | print('rel test - ref', np.max(distance(shifts_data - shifts_data_ref) / distance(shifts_data_ref))) 305 | assert np.allclose(shifts_data, shifts_data_ref, rtol=1e-7, atol=1e-7) 306 | if not isbox: 307 | assert np.allclose(shifts_randoms, shifts_randoms_ref, rtol=1e-7, atol=1e-7) 308 | 309 | 310 | def test_script(data_fn, randoms_fn, output_data_fn, output_randoms_fn): 311 | 312 | catalog_dir = '_catalogs' 313 | command = 'mpiexec -np 4 pyrecon config_iterativefft_particle.yaml --data-fn {} --randoms-fn {} --output-data-fn {} --output-randoms-fn {}'.format( 314 | os.path.relpath(data_fn, catalog_dir), os.path.relpath(randoms_fn, catalog_dir), 315 | os.path.relpath(output_data_fn, catalog_dir), os.path.relpath(output_randoms_fn, catalog_dir)) 316 | subprocess.call(command, shell=True) 317 | data = Catalog.read(data_fn) 318 | randoms = Catalog.read(randoms_fn) 319 | recon = IterativeFFTParticleReconstruction(positions=randoms['Position'], nmesh=128, dtype='f8') 320 | recon.set_cosmo(f=0.8, bias=2.) 
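    # f and bias may also be set after construction via set_cosmo(); this reference run
    # presumably mirrors the settings that the pyrecon script reads from the YAML configuration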
321 | recon.assign_data(data['Position'], data['Weight']) 322 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 323 | 324 | recon.set_density_contrast() 325 | recon.run() 326 | 327 | ref_positions_rec_data = data['Position'] - recon.read_shifts('data') 328 | ref_positions_rec_randoms = randoms['Position'] - recon.read_shifts(randoms['Position']) 329 | 330 | data = Catalog.read(output_data_fn) 331 | randoms = Catalog.read(output_randoms_fn) 332 | 333 | # print(ref_positions_rec_data, data['Position_rec'], ref_positions_rec_data-data['Position_rec']) 334 | assert np.allclose(ref_positions_rec_data, data['Position_rec']) 335 | assert np.allclose(ref_positions_rec_randoms, randoms['Position_rec']) 336 | 337 | 338 | def test_script_no_randoms(data_fn, output_data_fn): 339 | 340 | catalog_dir = '_catalogs' 341 | command = 'mpiexec -np 4 pyrecon config_iterativefft_particle_no_randoms.yaml --data-fn {} --output-data-fn {}'.format( 342 | os.path.relpath(data_fn, catalog_dir), os.path.relpath(output_data_fn, catalog_dir)) 343 | subprocess.call(command, shell=True) 344 | data = Catalog.read(data_fn) 345 | boxsize = 800 346 | boxcenter = boxsize / 2. 347 | recon = IterativeFFTParticleReconstruction(los='x', boxcenter=boxcenter, boxsize=boxsize, nmesh=128, dtype='f8') 348 | recon.set_cosmo(f=0.8, bias=2.) 349 | recon.assign_data(data['RSDPosition']) 350 | recon.set_density_contrast() 351 | recon.run() 352 | 353 | ref_positions_rec_data = data['RSDPosition'] - recon.read_shifts('data') 354 | data = Catalog.read(output_data_fn) 355 | assert np.allclose(ref_positions_rec_data, data['Position_rec']) 356 | 357 | 358 | def test_ref(data_fn, randoms_fn, data_fn_rec=None, randoms_fn_rec=None): 359 | boxsize = 1200. 360 | boxcenter = [1754, 0, 0] 361 | data = Catalog.read(data_fn) 362 | randoms = Catalog.read(randoms_fn) 363 | recon = IterativeFFTParticleReconstruction(f=0.8, bias=2., los=None, boxcenter=boxcenter, boxsize=boxsize, nmesh=128, dtype='f8') 364 | recon.assign_data(data['Position'], data['Weight']) 365 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 366 | recon.set_density_contrast() 367 | recon.mesh_delta += 10. 
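    # a constant added to the density contrast should only affect the k = 0 mode,
    # which the solver presumably discards, leaving the displacements unchanged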
368 | recon.run(niterations=3) 369 | 370 | from pypower import CatalogFFTPower 371 | from matplotlib import pyplot as plt 372 | 373 | for cat, fn in zip([data, randoms], [data_fn_rec, randoms_fn_rec]): 374 | rec = recon.read_shifted_positions(cat['Position']) 375 | if 'Position_rec' in cat: 376 | if recon.mpicomm.rank == 0: print('Checking...') 377 | assert np.allclose(rec, cat['Position_rec']) 378 | else: 379 | cat['Position_rec'] = rec 380 | if fn is not None: 381 | cat.write(fn) 382 | 383 | kwargs = dict(edges={'min': 0., 'step': 0.01}, ells=(0, 2, 4), boxsize=1000., nmesh=64, resampler='tsc', interlacing=3, position_type='pos') 384 | power = CatalogFFTPower(data_positions1=data['Position'], randoms_positions1=randoms['Position'], **kwargs) 385 | poles = power.poles 386 | power = CatalogFFTPower(data_positions1=data['Position_rec'], randoms_positions1=randoms['Position_rec'], **kwargs) 387 | poles_rec = power.poles 388 | 389 | for ill, ell in enumerate(poles.ells): 390 | plt.plot(poles.k, poles.k * poles(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='-') 391 | plt.plot(poles_rec.k, poles_rec.k * poles_rec(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='--') 392 | 393 | if power.mpicomm.rank == 0: 394 | plt.show() 395 | 396 | 397 | if __name__ == '__main__': 398 | 399 | from utils import box_data_fn, data_fn, randoms_fn, catalog_dir, catalog_rec_fn 400 | from pyrecon.utils import setup_logging 401 | 402 | setup_logging() 403 | # Run utils.py to generate catalogs needed for these tests 404 | 405 | script_output_box_data_fn = os.path.join(catalog_dir, 'script_box_data_rec.fits') 406 | script_output_data_fn = os.path.join(catalog_dir, 'script_data_rec.fits') 407 | script_output_randoms_fn = os.path.join(catalog_dir, 'script_randoms_rec.fits') 408 | ''' 409 | # test_mem() 410 | test_dtype() 411 | test_wrap() 412 | test_mpi(IterativeFFTParticleReconstruction) 413 | test_no_nrandoms() 414 | test_los() 415 | test_eboss(data_fn, randoms_fn) 416 | test_revolver(data_fn, randoms_fn) 417 | test_revolver(box_data_fn) 418 | ''' 419 | # To be run without MPI 420 | # test_script(data_fn, randoms_fn, script_output_data_fn, script_output_randoms_fn) 421 | # test_script_no_randoms(box_data_fn, script_output_box_data_fn) 422 | data_fn_rec, randoms_fn_rec = [catalog_rec_fn(fn, 'iterative_fft_particle') for fn in [data_fn, randoms_fn]] 423 | # test_ref(data_fn, randoms_fn, data_fn_rec, randoms_fn_rec) 424 | test_ref(data_fn_rec, randoms_fn_rec, None, None) 425 | -------------------------------------------------------------------------------- /pyrecon/tests/test_mesh.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from pmesh.pm import ParticleMesh 3 | 4 | from pyrecon.utils import MemoryMonitor 5 | 6 | 7 | def test_mesh(): 8 | nmesh = 4 9 | pm = ParticleMesh(BoxSize=[1. * nmesh] * 3, Nmesh=[nmesh] * 3, dtype='f8') 10 | field = pm.create('real', value=0) 11 | field = pm.paint(0.1 * np.array([1.] * 3)[None, :], resampler='cic', hold=True, out=field) 12 | print(field.value) 13 | 14 | with MemoryMonitor() as mem: 15 | nmesh = [256] * 3 16 | pm = ParticleMesh(BoxSize=[1.] * 3, Nmesh=nmesh) 17 | mem('init') 18 | v = np.zeros(shape=nmesh, dtype='f8') 19 | mesh = pm.create('real', value=v) 20 | v[...] = 1. 21 | mem('create') 22 | 23 | nmesh = 8 24 | pm = ParticleMesh(BoxSize=[1.] 
* 3, Nmesh=[nmesh] * 3, dtype='c16') 25 | field = pm.create('complex') 26 | ik = [] 27 | for iik in field.i: 28 | iik = np.ravel(iik) 29 | iik[iik >= nmesh // 2] -= nmesh 30 | ik.append(iik) 31 | print(ik) 32 | 33 | 34 | if __name__ == '__main__': 35 | 36 | test_mesh() 37 | -------------------------------------------------------------------------------- /pyrecon/tests/test_metrics.py: -------------------------------------------------------------------------------- 1 | import os 2 | import tempfile 3 | 4 | import numpy as np 5 | from matplotlib import pyplot as plt 6 | 7 | # For mockfactory installation, see https://github.com/cosmodesi/mockfactory 8 | from mockfactory import LagrangianLinearMock, setup_logging 9 | # For cosmoprimo installation see https://cosmoprimo.readthedocs.io/en/latest/user/building.html 10 | from cosmoprimo.fiducial import DESI 11 | 12 | from pyrecon import PlaneParallelFFTReconstruction 13 | from pyrecon.metrics import MeshFFTCorrelator, MeshFFTPropagator, MeshFFTTransfer, CatalogMesh 14 | 15 | 16 | def test_metrics(): 17 | z = 1. 18 | # Load DESI fiducial cosmology 19 | cosmo = DESI() 20 | power = cosmo.get_fourier().pk_interpolator().to_1d(z=z) 21 | f = cosmo.sigma8_z(z=z, of='theta_cb') / cosmo.sigma8_z(z=z, of='delta_cb') # growth rate 22 | 23 | bias, nbar, nmesh, boxsize, boxcenter, los = 2.0, 1e-3, 128, 1000., (10000., 0., 0.), (1., 0, 0) 24 | mock_los = los 25 | mock = LagrangianLinearMock(power, nmesh=nmesh, boxsize=boxsize, boxcenter=boxcenter, seed=42, unitary_amplitude=False) 26 | # This is Lagrangian bias, Eulerian bias - 1 27 | mock.set_real_delta_field(bias=bias - 1) 28 | mesh_real = mock.mesh_delta_r + 1. 29 | mock.set_analytic_selection_function(nbar=nbar) 30 | mock.poisson_sample(seed=43) 31 | mock.set_rsd(f=f, los=los) 32 | data = mock.to_catalog() 33 | offset = data.boxcenter - data.boxsize / 2. 34 | data['Position'] = (data['Position'] - offset) % data.boxsize + offset 35 | 36 | # recon = MultiGridReconstruction(f=f, bias=bias, los=los, nmesh=nmesh, boxsize=boxsize, boxcenter=boxcenter) 37 | recon = PlaneParallelFFTReconstruction(f=f, bias=bias, los=los, nmesh=nmesh, boxsize=boxsize, boxcenter=boxcenter) 38 | recon.assign_data(data['Position']) 39 | recon.set_density_contrast() 40 | # Run reconstruction 41 | recon.run() 42 | 43 | from mockfactory.make_survey import RandomBoxCatalog 44 | randoms = RandomBoxCatalog(nbar=nbar, boxsize=boxsize, boxcenter=boxcenter, seed=44) 45 | 46 | data['Position_rec'] = data['Position'] - recon.read_shifts(data['Position'], field='disp+rsd') 47 | randoms['Position_rec'] = randoms['Position'] - recon.read_shifts(randoms['Position'], field='disp') 48 | offset = data.boxcenter - data.boxsize / 2. 
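    # standard post-reconstruction convention (RecIso-like): data are shifted by
    # displacement + RSD, randoms by the displacement only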
49 | for catalog in [data, randoms]: 50 | catalog['Position_rec'] = (catalog['Position_rec'] - offset) % catalog.boxsize + offset 51 | 52 | kedges = np.arange(0.005, 0.4, 0.005) 53 | # kedges = np.arange(0.005, 0.4, 0.05) 54 | muedges = np.linspace(-1., 1., 5) 55 | dtype = 'f8' 56 | 57 | def get_correlator(los=los, kedges=kedges): 58 | mesh_recon = CatalogMesh(data['Position_rec'], shifted_positions=randoms['Position_rec'], 59 | boxsize=boxsize, boxcenter=boxcenter, nmesh=nmesh, resampler='cic', interlacing=2, position_type='pos', dtype=dtype) 60 | return MeshFFTCorrelator(mesh_recon, mesh_real, edges=(kedges, muedges), los=los) 61 | 62 | def get_propagator(los=los, growth=1.): 63 | mesh_recon = CatalogMesh(data['Position_rec'], shifted_positions=randoms['Position_rec'], 64 | boxsize=boxsize, boxcenter=boxcenter, nmesh=nmesh, resampler='cic', interlacing=2, position_type='pos', dtype=dtype) 65 | return MeshFFTPropagator(mesh_recon, mesh_real, edges=(kedges, muedges), los=los, growth=growth) 66 | 67 | def get_transfer(los=los, growth=1.): 68 | mesh_recon = CatalogMesh(data['Position_rec'], shifted_positions=randoms['Position_rec'], 69 | boxsize=boxsize, boxcenter=boxcenter, nmesh=nmesh, resampler='cic', interlacing=2, position_type='pos', dtype=dtype) 70 | return MeshFFTTransfer(mesh_recon, mesh_real, edges=(kedges, muedges), los=los, growth=growth) 71 | 72 | def get_propagator_ref(los=los): 73 | # Taken from https://github.com/cosmodesi/desi_cosmosim/blob/master/reconstruction/propagator_and_multipole/DESI_Recon/propagator_catalog_calc.py 74 | from nbodykit.lab import FFTPower 75 | from pmesh.pm import ParticleMesh 76 | for cat in [data, randoms]: 77 | cat['Position_rec_shifted'] = cat['Position_rec'] - boxcenter + boxsize / 2. 78 | meshp = data.to_nbodykit().to_mesh(position='Position_rec_shifted', Nmesh=nmesh, BoxSize=boxsize, resampler='cic', compensated=True, interlaced=True, dtype='c16') 79 | meshran = randoms.to_nbodykit().to_mesh(position='Position_rec_shifted', Nmesh=nmesh, BoxSize=boxsize, resampler='cic', compensated=True, interlaced=True, dtype='c16') 80 | # mesh_recon = ArrayMesh(meshp.compute() - meshran.compute(), BoxSize=boxsize) 81 | mesh_recon = meshp.compute() - meshran.compute() 82 | Nmu = len(muedges) - 1 83 | kmin, kmax, dk = kedges[0], kedges[-1] + 1e-9, kedges[1] - kedges[0] 84 | pm = ParticleMesh(BoxSize=mesh_real.pm.BoxSize, Nmesh=mesh_real.pm.Nmesh, dtype='c16', comm=mesh_real.pm.comm) 85 | mesh_complex = pm.create(type='real') 86 | mesh_complex[...] = mesh_real[...] 
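        # above: the initial real-space field is copied into a 'c16' mesh, presumably so
        # that FFTPower treats both fields at matching precision and layout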
87 | r_cross = FFTPower(mesh_complex, mode='2d', Nmesh=nmesh, Nmu=Nmu, dk=dk, second=mesh_recon, los=los, kmin=kmin, kmax=kmax) 88 | # r_auto = FFTPower(mesh_recon, mode='2d', Nmesh=nmesh, Nmu=Nmu, dk=dk, los=los, kmin=kmin, kmax=kmax) 89 | r_auto_init = FFTPower(mesh_complex, mode='2d', Nmesh=nmesh, Nmu=Nmu, dk=dk, los=los, kmin=kmin, kmax=kmax) 90 | # print(r_auto_init.power['modes']) 91 | return (r_cross.power['power'] / r_auto_init.power['power']).real / bias, r_cross.power['power'].real, r_auto_init.power['power'].real 92 | 93 | propagator_ref, cross_ref, auto_init_ref = get_propagator_ref(los=mock_los) 94 | correlator = get_correlator(los=mock_los) 95 | propagator = correlator.to_propagator(growth=bias) 96 | assert np.allclose(propagator.ratio, propagator_ref, atol=1e-6, rtol=1e-4, equal_nan=True) 97 | 98 | list_los = ['x', 'firstpoint'] 99 | assert np.allclose(*[get_correlator(los=los).ratio[2:] for los in list_los], atol=1e-6, rtol=0.3, equal_nan=True) 100 | 101 | for los in list_los: 102 | 103 | correlator = get_correlator(los=los) 104 | correlator_rebin = correlator.copy() 105 | correlator_rebin.rebin((2, 1)) 106 | assert correlator_rebin.ratio.shape[0] == correlator.ratio.shape[0] // 2 107 | correlator_rebin2 = correlator[::2] 108 | assert np.allclose(correlator_rebin2.ratio, correlator_rebin.ratio, equal_nan=True) 109 | correlator_rebin2.select((0., 0.1)) 110 | assert correlator_rebin2.k[0][-1] <= 0.1 111 | correlator_rebin2 = correlator[:, ::correlator.shape[1]] 112 | assert correlator_rebin2.shape[1] == 1 113 | k, c = correlator_rebin2.to_propagator(growth=bias)(mu=0., return_k=True) 114 | assert c.shape == k.shape and k.ndim == 1 115 | transfer = correlator.to_transfer(growth=bias) 116 | propagator = correlator.to_propagator(growth=bias) 117 | 118 | for complex in [False, True]: 119 | assert correlator(k=[0.1, 0.2], complex=complex).shape == (2, correlator.shape[1]) 120 | assert correlator(k=[0.1, 0.2], mu=[0.3], complex=complex).shape == (2, 1) 121 | assert correlator(k=[[0.1, 0.2]] * 3, mu=[[0.3]] * 2, complex=complex).shape == (3, 2, 2, 1) 122 | assert correlator(k=[0.1, 0.2], mu=0., complex=complex).shape == (2, ) 123 | assert correlator(k=0.1, mu=0., complex=complex).shape == () 124 | assert correlator(k=0.1, mu=[0., 0.1], complex=complex).shape == (2, ) 125 | assert np.allclose(correlator(k=[0.2, 0.1], mu=[0.2, 0.1], complex=complex), correlator(k=[0.1, 0.2], mu=[0.1, 0.2], complex=complex)[::-1, ::-1], atol=0) 126 | 127 | with tempfile.TemporaryDirectory() as tmp_dir: 128 | # tmp_dir = '_tests' 129 | 130 | fn = correlator.num.mpicomm.bcast(os.path.join(tmp_dir, 'tmp.npy'), root=0) 131 | fn_txt = correlator.num.mpicomm.bcast(os.path.join(tmp_dir, 'tmp.txt'), root=0) 132 | 133 | correlator.save(fn) 134 | correlator.save_txt(fn_txt) 135 | correlator.mpicomm.Barrier() 136 | test = np.loadtxt(fn_txt, unpack=True) 137 | mids = np.meshgrid(*(correlator.modeavg(axis=axis, method='mid') for axis in range(correlator.ndim)), indexing='ij') 138 | assert np.allclose([tt.reshape(correlator.shape) for tt in test], [correlator.nmodes, mids[0], correlator.modes[0], mids[1], correlator.modes[1], correlator.ratio.real], equal_nan=True) 139 | correlator.save_txt(fn_txt, complex=True) 140 | test = np.loadtxt(fn_txt, unpack=True, dtype=np.complex_) 141 | assert np.allclose([tt.reshape(correlator.shape) for tt in test], [correlator.nmodes, mids[0], correlator.modes[0], mids[1], correlator.modes[1], correlator.get_ratio(complex=True)], equal_nan=True) 142 | 143 | correlator = 
MeshFFTCorrelator.load(fn)
144 | 
145 |             propagator.save(fn)
146 |             propagator.save_txt(fn_txt)
147 |             propagator.mpicomm.Barrier()
148 |             propagator = MeshFFTPropagator.load(fn)
149 | 
150 |             transfer.save(fn)
151 |             transfer.save_txt(fn_txt)
152 |             transfer.mpicomm.Barrier()
153 |             transfer = MeshFFTTransfer.load(fn)
154 | 
155 |             fn = os.path.join(tmp_dir, 'tmp.npy')
156 |             correlator.save(fn)
157 |             propagator.save(fn)
158 |             transfer.save(fn)
159 | 
160 |     assert np.allclose(get_propagator(los=los, growth=bias).ratio, propagator.ratio, equal_nan=True)
161 |     assert np.allclose(get_transfer(los=los, growth=bias).ratio, transfer.ratio, equal_nan=True)
162 | 
163 |     correlator = get_correlator(los='firstpoint', kedges=np.linspace(0., 1., 61))
164 |     transfer = correlator.to_transfer(growth=bias)
165 |     propagator = correlator.to_propagator(growth=bias)
166 |     k, c = correlator(mu=0., return_k=True)
167 |     assert len(c) == len(k)
168 | 
169 |     fig, lax = plt.subplots(nrows=1, ncols=3, figsize=(14, 4))
170 |     fig.subplots_adjust(wspace=0.3)
171 |     lax = lax.flatten()
172 |     for imu, mu in enumerate(correlator.muavg[3:]):
173 |         k = correlator(mu=mu, return_k=True)[0]
174 |         mask = k < 0.6
175 |         k = k[mask]
176 |         lax[0].plot(k, correlator(k=k, mu=mu), label=r'$\mu = {:.2f}$'.format(mu))
177 |         lax[1].plot(k, transfer(k=k, mu=mu), label=r'$\mu = {:.2f}$'.format(mu))
178 |         lax[2].plot(k, propagator(k=k, mu=mu), label=r'$\mu = {:.2f}$'.format(mu))
179 |     for ax in lax:
180 |         ax.legend()
181 |         ax.grid(True)
182 |         ax.set_xlabel(r'$k$ [$\mathrm{Mpc}/h$]')
183 |     lax[0].set_ylabel(r'$r(k) = P_{\mathrm{rec}, \mathrm{init}}/\sqrt{P_{\mathrm{rec}}P_{\mathrm{init}}}$')
184 |     lax[1].set_ylabel(r'$t(k) = \sqrt{P_{\mathrm{rec}}/P_{\mathrm{init}}}$')
185 |     lax[2].set_ylabel(r'$g(k) = P_{\mathrm{rec}, \mathrm{init}}/P_{\mathrm{init}}$')
186 |     if correlator.mpicomm.rank == 0:
187 |         plt.show()
188 | 
189 |     correlator = get_correlator()
190 |     ax = plt.gca()
191 |     auto = correlator.auto_initial
192 |     auto.rebin((1, len(auto.edges[-1]) - 1))
193 |     ax.plot(auto.k[:, 0], auto.k[:, 0] * auto.power[:, 0].real * bias**2, label='initial')
194 |     auto = correlator.auto_reconstructed
195 |     auto.rebin((1, len(auto.edges[-1]) - 1))  # rebin the mu axis fully, as for the initial auto power above
196 |     ax.plot(auto.k[:, 0], auto.k[:, 0] * auto.power[:, 0].real, label='reconstructed')
197 |     ax.legend()
198 |     if correlator.mpicomm.rank == 0:
199 |         plt.show()
200 | 
201 | 
202 | if __name__ == '__main__':
203 |     # Set up logging
204 |     setup_logging()
205 | 
206 |     test_metrics()
207 | 
--------------------------------------------------------------------------------
/pyrecon/tests/test_multigrid.py:
--------------------------------------------------------------------------------
1 | import os
2 | import time
3 | import subprocess
4 | 
5 | import numpy as np
6 | 
7 | from pyrecon.multigrid import OriginalMultiGridReconstruction, MultiGridReconstruction
8 | from pyrecon.utils import distance, MemoryMonitor
9 | from pyrecon import mpi
10 | from utils import get_random_catalog, Catalog, test_mpi
11 | 
12 | 
13 | def test_mem():
14 |     data = get_random_catalog(seed=42)
15 |     randoms = get_random_catalog(seed=84)
16 | 
17 |     with MemoryMonitor() as mem:
18 |         recon = MultiGridReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=256, dtype='f8')
19 |         mem('init')
20 |         recon.assign_data(data['Position'], data['Weight'])
21 |         mem('data')
22 |         recon.assign_randoms(randoms['Position'], randoms['Weight'])
23 |         mem('randoms')
24 |         recon.set_density_contrast()
25 |         mem('delta')
26 |         recon.run()
27 |         mem('recon') # 1 mesh
28 | 
29 | 
30 | def 
test_random(): 31 | data = get_random_catalog(seed=42) 32 | randoms = get_random_catalog(seed=84) 33 | recon = MultiGridReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=8, dtype='f8') 34 | recon.assign_data(data['Position'], data['Weight']) 35 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 36 | recon.set_density_contrast() 37 | #recon.run(jacobi_niterations=1, vcycle_niterations=1) 38 | recon.run() 39 | # print(recon.read_shifts(data['Position'])) 40 | # print(np.abs(recon.read_shifts(data['Position'])).max()) 41 | # assert np.all(np.abs(recon.read_shifts(data['Position'])) < 10.) 42 | 43 | 44 | def test_no_nrandoms(): 45 | boxsize = 1000. 46 | data = get_random_catalog(boxsize=boxsize, seed=42) 47 | recon = MultiGridReconstruction(f=0.8, bias=2., los='x', boxcenter=0., boxsize=boxsize, nmesh=8, dtype='f8') 48 | recon.assign_data(data['Position'], data['Weight']) 49 | assert not recon.has_randoms 50 | recon.set_density_contrast() 51 | assert np.allclose(recon.mesh_delta.csum(), 0.) 52 | recon.run(jacobi_niterations=1, vcycle_niterations=1) 53 | # recon.run() 54 | assert np.all(np.abs(recon.read_shifts(data['Position'])) < 2.) 55 | 56 | 57 | def test_dtype(): 58 | # ran_min threshold in set_density_contrast() may not mask exactly the same number of cells in f4 and f8 cases, hence big difference in the end 59 | # With current seeds masks are the same in f4 and f8 cases 60 | data = get_random_catalog(seed=42) 61 | randoms = get_random_catalog(seed=81) 62 | for los in [None, 'x']: 63 | all_shifts = [] 64 | for dtype in ['f4', 'f8']: 65 | dtype = np.dtype(dtype) 66 | itemsize = np.empty(0, dtype=dtype).real.dtype.itemsize 67 | recon = MultiGridReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=64, los=los, dtype=dtype) 68 | recon.assign_data(data['Position'], data['Weight']) 69 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 70 | recon.set_density_contrast() 71 | assert recon.mesh_delta.dtype.itemsize == itemsize 72 | recon.run() 73 | assert recon.mesh_phi.dtype.itemsize == itemsize 74 | all_shifts2 = [] 75 | for dtype2 in ['f4', 'f8']: 76 | dtype2 = np.dtype(dtype2) 77 | shifts = recon.read_shifts(data['Position'].astype(dtype2), field='disp+rsd') 78 | assert shifts.dtype.itemsize == dtype2.itemsize 79 | all_shifts2.append(shifts) 80 | if dtype2 == dtype: all_shifts.append(shifts) 81 | assert np.allclose(*all_shifts2, atol=1e-2, rtol=1e-2) 82 | assert np.allclose(*all_shifts, atol=5e-2, rtol=5e-2) 83 | 84 | 85 | def test_nmesh(): 86 | randoms = get_random_catalog(seed=81) 87 | recon = MultiGridReconstruction(f=0.8, bias=2., positions=randoms['Position'], cellsize=[10, 8, 9]) 88 | assert np.all(recon.nmesh % 2 == 0) 89 | 90 | import pytest 91 | with pytest.warns(): 92 | recon = MultiGridReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=[12, 14, 18]) 93 | 94 | 95 | def test_wrap(): 96 | size = 100000 97 | boxsize = 1000 98 | for boxcenter in [-500, 0, 500]: 99 | data = get_random_catalog(size, boxsize, seed=42) 100 | # set one of the data positions to be outside the fiducial box by hand 101 | data['Position'][-1] = np.array([boxsize, boxsize, boxsize]) + 1 102 | data['Position'] += boxcenter 103 | randoms = get_random_catalog(size, boxsize, seed=42) 104 | # set one of the random positions to be outside the fiducial box by hand 105 | randoms['Position'][-1] = np.array([0, 0, 0]) - 1 106 | randoms['Position'] += boxcenter 107 | recon = MultiGridReconstruction(f=0.8, bias=2, los='z', boxsize=boxsize, 
boxcenter=boxcenter, nmesh=64, wrap=True) 108 | # following steps should run without error if wrapping is correctly implemented 109 | recon.assign_data(data['Position'], data['Weight']) 110 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 111 | recon.set_density_contrast() 112 | recon.run() 113 | 114 | # following steps test the implementation coded into standalone pyrecon code 115 | for field in ['rsd', 'disp', 'disp+rsd']: 116 | shifts = recon.read_shifts(data['Position'], field=field) 117 | diff = data['Position'] - shifts 118 | positions_rec = (diff - recon.offset) % recon.boxsize + recon.offset 119 | assert np.all(positions_rec >= boxcenter - boxsize / 2.) and np.all(positions_rec <= boxcenter + boxsize / 2.) 120 | assert np.allclose(recon.read_shifted_positions(data['Position'], field=field), positions_rec) 121 | 122 | 123 | def test_los(): 124 | boxsize = 1000. 125 | data = get_random_catalog(boxsize=boxsize, seed=42) 126 | randoms = get_random_catalog(boxsize=boxsize, seed=84) 127 | recon = MultiGridReconstruction(f=0.8, bias=2., los='x', boxcenter=0., boxsize=boxsize, nmesh=64, dtype='f8') 128 | recon.assign_data(data['Position'], data['Weight']) 129 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 130 | recon.set_density_contrast() 131 | recon.run() 132 | shifts_global = recon.read_shifts(data['Position'], field='disp+rsd') 133 | offset = 1e8 134 | data['Position'][:, 0] += offset 135 | randoms['Position'][:, 0] += offset 136 | recon = MultiGridReconstruction(f=0.8, bias=2., boxcenter=[offset, 0, 0], boxsize=boxsize, nmesh=64, dtype='f8') 137 | recon.assign_data(data['Position'], data['Weight']) 138 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 139 | recon.set_density_contrast() 140 | recon.run() 141 | shifts_local = recon.read_shifts(data['Position'], field='disp+rsd') 142 | assert np.allclose(shifts_local, shifts_global, rtol=1e-3, atol=1e-3) 143 | 144 | 145 | def compute_ref(data_fn, randoms_fn, output_data_fn, output_randoms_fn): 146 | 147 | from astropy.cosmology import FlatLambdaCDM 148 | cosmo = FlatLambdaCDM(H0=70, Om0=0.3) 149 | 150 | def comoving_distance(z): 151 | return cosmo.comoving_distance(z).value * cosmo.h 152 | 153 | input_fn = [fn.replace('.fits', '.rdzw') for fn in [data_fn, randoms_fn]] 154 | 155 | for fn, infn in zip([data_fn, randoms_fn], input_fn): 156 | catalog = Catalog.read(fn) 157 | # distance, ra, dec = cartesian_to_sky(catalog['Position']) 158 | # rdzw = [ra, dec, DistanceToRedshift(comoving_distance)(distance)] + [catalog['Weight']] 159 | rdzw = list(catalog['Position'].T) + [catalog['Weight']] 160 | np.savetxt(infn, np.array(rdzw).T) 161 | 162 | catalog_dir = os.path.dirname(infn) 163 | command = '{0} {1} {2} {2} 2 0.81 15'.format(recon_code, *[os.path.basename(infn) for infn in input_fn]) 164 | t0 = time.time() 165 | print(command) 166 | subprocess.call(command, shell=True, cwd=catalog_dir) 167 | print('recon_code completed in {:.2f} s'.format(time.time() - t0), flush=True) 168 | 169 | output_fn = [os.path.join(catalog_dir, base) for base in ['data_rec.xyzw', 'rand_rec.xyzw']] 170 | for infn, fn, outfn in zip([data_fn, randoms_fn], [output_data_fn, output_randoms_fn], output_fn): 171 | x, y, z, w = np.loadtxt(outfn, unpack=True) 172 | positions = np.array([x, y, z]).T 173 | catalog = Catalog.read(infn).gather() 174 | if catalog is not None: 175 | # print(np.mean(distance(positions - catalog['Position']))) 176 | catalog['Position_rec'] = positions 177 | catalog['Weight'] = w 178 | catalog.write(fn) 179 
| 180 | 181 | def test_recon(data_fn, randoms_fn, output_data_fn, output_randoms_fn): 182 | # boxsize = 1199.9995117188 in float32 183 | # boxcenter = [1753.8884277344, 400.0001831055, 400.0003662109] in float64 184 | boxsize = 1199.9988620158 185 | boxcenter = [1741.8557233434, -0.0002247471, 0.0001600799] 186 | recon = OriginalMultiGridReconstruction(boxsize=boxsize, boxcenter=boxcenter, nmesh=128, dtype='f8') 187 | mpicomm = recon.mpicomm 188 | recon.set_cosmo(f=0.81, bias=2.) 189 | 190 | # recon = OriginalMultiGridReconstruction(positions=fitsio.read(randoms_fn, columns=['Position'])['Position'], nmesh=128, dtype='f4') 191 | # recon.set_cosmo(f=0.81, bias=2.) 192 | # print(recon.mesh_data.boxsize, recon.mesh_data.boxcenter) 193 | 194 | nslabs = 1 195 | for fn, assign in zip([data_fn, randoms_fn], [recon.assign_data, recon.assign_randoms]): 196 | for islab in range(nslabs): 197 | data = Catalog.read(fn) 198 | start = islab * data.csize // nslabs 199 | stop = (islab + 1) * data.csize // nslabs 200 | data = data.cslice(start, stop) 201 | assign(data['Position'], data['Weight']) 202 | recon.set_density_contrast() 203 | # print(np.max(recon.mesh_delta)) 204 | t0 = time.time() 205 | recon.run() 206 | #recon.run(jacobi_niterations=5, vcycle_niterations=1) 207 | if mpicomm.rank == 0: 208 | print('pyrecon completed in {:.4f} s'.format(time.time() - t0)) 209 | # print(np.std(recon.mesh_phi)) 210 | # recon.f = recon.beta 211 | 212 | for input_fn, output_fn in zip([data_fn, randoms_fn], [output_data_fn, output_randoms_fn]): 213 | catalog = Catalog.read(input_fn) 214 | shifts = recon.read_shifts(catalog['Position'], field='disp+rsd') 215 | catalog['Position_rec'] = catalog['Position'] - shifts 216 | catalog.write(output_fn) 217 | shifts = mpi.gather(shifts, mpicomm=mpicomm, mpiroot=0) 218 | if mpicomm.rank == 0: 219 | print('RMS', (np.mean(np.sum(shifts**2, axis=-1)) / 3)**0.5) 220 | 221 | 222 | def compare_ref(data_fn, output_data_fn, ref_output_data_fn): 223 | positions = Catalog.read(data_fn)['Position'] 224 | output_positions = Catalog.read(output_data_fn)['Position_rec'] 225 | ref_output_positions = Catalog.read(ref_output_data_fn)['Position_rec'] 226 | 227 | print('abs test - ref', np.max(distance(output_positions - ref_output_positions))) 228 | print('rel test - ref', np.max(distance(output_positions - ref_output_positions) / distance(ref_output_positions - positions))) 229 | print('mean diff test', np.mean(distance(output_positions - positions))) 230 | print('mean diff ref', np.mean(distance(ref_output_positions - positions))) 231 | assert np.allclose(output_positions, ref_output_positions, rtol=1e-7, atol=1e-7) 232 | 233 | 234 | def test_script(data_fn, randoms_fn, output_data_fn, output_randoms_fn): 235 | 236 | catalog_dir = '_catalogs' 237 | command = 'pyrecon config_multigrid.yaml --data-fn {} --randoms-fn {} --output-data-fn {} --output-randoms-fn {}'.format( 238 | os.path.relpath(data_fn, catalog_dir), os.path.relpath(randoms_fn, catalog_dir), 239 | os.path.relpath(output_data_fn, catalog_dir), os.path.relpath(output_randoms_fn, catalog_dir)) 240 | subprocess.call(command, shell=True) 241 | data = Catalog.read(data_fn) 242 | randoms = Catalog.read(randoms_fn) 243 | recon = MultiGridReconstruction(positions=randoms['Position'], nmesh=128, dtype='f8') 244 | recon.set_cosmo(f=0.8, bias=2.) 
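    # the script run above and this direct API call should produce identical
    # reconstructed positions; the asserts below check exactly that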
245 | recon.assign_data(data['Position'], data['Weight']) 246 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 247 | recon.set_density_contrast() 248 | recon.run() 249 | 250 | ref_positions_rec_data = recon.read_shifted_positions(data['Position']) 251 | ref_positions_rec_randoms = recon.read_shifted_positions(randoms['Position']) 252 | 253 | data = Catalog.read(output_data_fn) 254 | randoms = Catalog.read(output_randoms_fn) 255 | 256 | # print(ref_positions_rec_data, data['Position_rec'], ref_positions_rec_data-data['Position_rec']) 257 | assert np.allclose(ref_positions_rec_data, data['Position_rec']) 258 | assert np.allclose(ref_positions_rec_randoms, randoms['Position_rec']) 259 | 260 | 261 | def test_script_no_randoms(data_fn, output_data_fn): 262 | 263 | catalog_dir = '_catalogs' 264 | command = 'pyrecon config_multigrid_no_randoms.yaml --data-fn {} --output-data-fn {}'.format( 265 | os.path.relpath(data_fn, catalog_dir), os.path.relpath(output_data_fn, catalog_dir)) 266 | subprocess.call(command, shell=True) 267 | data = Catalog.read(data_fn) 268 | boxsize = 800 269 | boxcenter = 0. 270 | recon = MultiGridReconstruction(nthreads=4, los=0, boxcenter=boxcenter, boxsize=boxsize, nmesh=128, dtype='f8') 271 | recon.set_cosmo(f=0.8, bias=2.) 272 | recon.assign_data(data['Position']) 273 | recon.set_density_contrast() 274 | recon.run() 275 | 276 | ref_positions_rec_data = data['Position'] - recon.read_shifts(data['Position']) 277 | data = Catalog.read(output_data_fn) 278 | 279 | # print(ref_positions_rec_data, data['Position_rec'], ref_positions_rec_data-data['Position_rec']) 280 | assert np.allclose(ref_positions_rec_data, data['Position_rec']) 281 | 282 | 283 | def test_ref(data_fn, randoms_fn, data_fn_rec=None, randoms_fn_rec=None): 284 | boxsize = 1200. 285 | boxcenter = [1754, 0, 0] 286 | data = Catalog.read(data_fn) 287 | randoms = Catalog.read(randoms_fn) 288 | recon = MultiGridReconstruction(f=0.8, bias=2., los=None, boxcenter=boxcenter, boxsize=boxsize, nmesh=128, dtype='f8') 289 | recon.assign_data(data['Position'], data['Weight']) 290 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 291 | recon.set_density_contrast() 292 | recon.mesh_delta += 10. 
293 | recon.run() 294 | 295 | from pypower import CatalogFFTPower 296 | from matplotlib import pyplot as plt 297 | 298 | for cat, fn in zip([data, randoms], [data_fn_rec, randoms_fn_rec]): 299 | rec = recon.read_shifted_positions(cat['Position']) 300 | if 'Position_rec' in cat: 301 | if recon.mpicomm.rank == 0: print('Checking...') 302 | assert np.allclose(rec, cat['Position_rec'], rtol=1e-4, atol=1e-4) 303 | else: 304 | cat['Position_rec'] = rec 305 | if fn is not None: 306 | cat.write(fn) 307 | return 308 | kwargs = dict(edges={'min': 0., 'step': 0.01}, ells=(0, 2, 4), boxsize=1000., nmesh=64, resampler='tsc', interlacing=3, position_type='pos') 309 | power = CatalogFFTPower(data_positions1=data['Position'], randoms_positions1=randoms['Position'], **kwargs) 310 | poles = power.poles 311 | power = CatalogFFTPower(data_positions1=data['Position_rec'], randoms_positions1=randoms['Position_rec'], **kwargs) 312 | poles_rec = power.poles 313 | 314 | for ill, ell in enumerate(poles.ells): 315 | plt.plot(poles.k, poles.k * poles(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='-') 316 | plt.plot(poles_rec.k, poles_rec.k * poles_rec(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='--') 317 | 318 | if power.mpicomm.rank == 0: 319 | plt.show() 320 | 321 | 322 | def test_finite_difference(): 323 | from pmesh.pm import ParticleMesh 324 | from pyrecon import mpi 325 | mpicomm = mpi.COMM_WORLD 326 | nmesh = 16 327 | pm = ParticleMesh(BoxSize=[1.] * 3, Nmesh=[nmesh] * 3, np=(mpicomm.size, 1), dtype='f8') 328 | mesh = pm.create('real', value=2.) 329 | from pyrecon import _multigrid 330 | size = 10 331 | positions = np.column_stack([np.linspace(0., 1., size)] * 3) 332 | boxcenter = np.array([0.5] * 3) 333 | values = _multigrid.read_finite_difference_cic(mesh, positions, boxcenter) 334 | print(values) 335 | print(np.abs(values).min(), np.abs(values).max()) 336 | #if mpicomm.rank == 0: print(values) 337 | 338 | 339 | if __name__ == '__main__': 340 | 341 | from utils import box_data_fn, data_fn, randoms_fn, catalog_dir, catalog_rec_fn 342 | from pyrecon.utils import setup_logging 343 | 344 | setup_logging() 345 | # Run utils.py to generate catalogs needed for these tests 346 | 347 | recon_code = os.path.join(os.path.abspath(os.path.dirname(__file__)), '_codes', 'recon') 348 | output_data_fn = os.path.join(catalog_dir, 'data_rec.fits') 349 | output_randoms_fn = os.path.join(catalog_dir, 'randoms_rec.fits') 350 | ref_output_data_fn = os.path.join(catalog_dir, 'ref_data_rec.fits') 351 | ref_output_randoms_fn = os.path.join(catalog_dir, 'ref_randoms_rec.fits') 352 | script_output_box_data_fn = os.path.join(catalog_dir, 'script_box_data_rec.fits') 353 | script_output_data_fn = os.path.join(catalog_dir, 'script_data_rec.fits') 354 | script_output_randoms_fn = os.path.join(catalog_dir, 'script_randoms_rec.fits') 355 | # test_mem() 356 | test_dtype() 357 | test_nmesh() 358 | test_wrap() 359 | test_mpi(MultiGridReconstruction) 360 | test_random() 361 | test_no_nrandoms() 362 | test_los() 363 | # test_finite_difference() 364 | #test_recon(data_fn, randoms_fn, output_data_fn, output_randoms_fn) 365 | #compute_ref(data_fn, randoms_fn, ref_output_data_fn, ref_output_randoms_fn) 366 | #compare_ref(data_fn, output_data_fn, ref_output_data_fn) 367 | #compare_ref(randoms_fn, output_randoms_fn, ref_output_randoms_fn) 368 | 369 | #test_script(data_fn, randoms_fn, script_output_data_fn, script_output_randoms_fn) 370 | #test_script_no_randoms(box_data_fn, script_output_box_data_fn) 371 | # 
compute_power_no_randoms([script_output_box_data_fn]*2, ['RSDPosition', 'Position_rec']) 372 | # compute_power((data_fn, randoms_fn), (output_data_fn, output_randoms_fn)) 373 | # compute_power((data_fn, randoms_fn), (ref_output_data_fn, ref_output_randoms_fn)) 374 | # compute_power((ref_output_data_fn, ref_output_randoms_fn), (output_data_fn, output_randoms_fn)) 375 | data_fn_rec, randoms_fn_rec = [catalog_rec_fn(fn, 'multigrid') for fn in [data_fn, randoms_fn]] 376 | # test_ref(data_fn, randoms_fn, data_fn_rec, randoms_fn_rec) 377 | test_ref(data_fn_rec, randoms_fn_rec, None, None) 378 | -------------------------------------------------------------------------------- /pyrecon/tests/test_plane_parallel_fft.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | from pyrecon import PlaneParallelFFTReconstruction 4 | from pyrecon.utils import MemoryMonitor 5 | from utils import get_random_catalog, Catalog, test_mpi 6 | 7 | 8 | def test_dtype(): 9 | data = get_random_catalog(seed=42) 10 | randoms = get_random_catalog(seed=81) 11 | for los in ['x']: 12 | all_shifts = [] 13 | for dtype in ['f4', 'f8']: 14 | dtype = np.dtype(dtype) 15 | itemsize = np.empty(0, dtype=dtype).real.dtype.itemsize 16 | recon = PlaneParallelFFTReconstruction(f=0.8, bias=2., positions=randoms['Position'], nmesh=64, los=los, dtype=dtype) 17 | recon.assign_data(data['Position'], data['Weight']) 18 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 19 | recon.set_density_contrast() 20 | assert recon.mesh_delta.dtype.itemsize == itemsize 21 | recon.run() 22 | assert recon.mesh_psi[0].dtype.itemsize == itemsize 23 | all_shifts2 = [] 24 | for dtype2 in ['f4', 'f8']: 25 | dtype2 = np.dtype(dtype2) 26 | shifts = recon.read_shifts(data['Position'].astype(dtype2), field='disp+rsd') 27 | assert shifts.dtype.itemsize == dtype2.itemsize 28 | all_shifts2.append(shifts) 29 | if dtype2 == dtype: all_shifts.append(shifts) 30 | assert np.allclose(*all_shifts2, atol=1e-2, rtol=1e-2) 31 | assert np.allclose(*all_shifts, atol=1e-2, rtol=1e-2) 32 | 33 | 34 | def test_wrap(): 35 | size = 100000 36 | boxsize = 1000 37 | for boxcenter in [-500, 0, 500]: 38 | data = get_random_catalog(size, boxsize, seed=42) 39 | # set one of the data positions to be outside the fiducial box by hand 40 | data['Position'][-1] = np.array([boxsize, boxsize, boxsize]) + 1 41 | data['Position'] += boxcenter 42 | randoms = get_random_catalog(size, boxsize, seed=42) 43 | # set one of the random positions to be outside the fiducial box by hand 44 | randoms['Position'][-1] = np.array([0, 0, 0]) - 1 45 | randoms['Position'] += boxcenter 46 | recon = PlaneParallelFFTReconstruction(f=0.8, bias=2, los='z', boxsize=boxsize, boxcenter=boxcenter, nmesh=64, wrap=True) 47 | # following steps should run without error if wrapping is correctly implemented 48 | recon.assign_data(data['Position'], data['Weight']) 49 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 50 | recon.set_density_contrast() 51 | recon.run() 52 | 53 | # following steps test the implementation coded into standalone pyrecon code 54 | for field in ['rsd', 'disp', 'disp+rsd']: 55 | shifts = recon.read_shifts(data['Position'], field=field) 56 | diff = data['Position'] - shifts 57 | positions_rec = (diff - recon.offset) % recon.boxsize + recon.offset 58 | assert np.all(positions_rec >= boxcenter - boxsize / 2.) and np.all(positions_rec <= boxcenter + boxsize / 2.) 
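            # the fold (x - offset) % boxsize + offset maps any position into
            # [boxcenter - boxsize / 2, boxcenter + boxsize / 2], hence the bounds check above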
59 | assert np.allclose(recon.read_shifted_positions(data['Position'], field=field), positions_rec) 60 | 61 | 62 | def test_mem(): 63 | data = get_random_catalog(seed=42) 64 | randoms = get_random_catalog(seed=84) 65 | with MemoryMonitor() as mem: 66 | recon = PlaneParallelFFTReconstruction(f=0.8, bias=2., nthreads=4, positions=randoms['Position'], nmesh=256, los='x', dtype='f8') 67 | mem('init') 68 | recon.assign_data(data['Position'], data['Weight']) 69 | mem('data') 70 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 71 | mem('randoms') 72 | recon.set_density_contrast() 73 | mem('delta') 74 | recon.run() 75 | mem('recon') # 3 meshes 76 | 77 | 78 | def test_ref(data_fn, randoms_fn, data_fn_rec=None, randoms_fn_rec=None): 79 | boxsize = 1200. 80 | boxcenter = [1754, 0, 0] 81 | data = Catalog.read(data_fn) 82 | randoms = Catalog.read(randoms_fn) 83 | recon = PlaneParallelFFTReconstruction(f=0.8, bias=2., los='x', boxcenter=boxcenter, boxsize=boxsize, nmesh=128, dtype='f8') 84 | recon.assign_data(data['Position'], data['Weight']) 85 | recon.assign_randoms(randoms['Position'], randoms['Weight']) 86 | recon.set_density_contrast() 87 | recon.mesh_delta += 10. 88 | recon.run() 89 | 90 | from pypower import CatalogFFTPower 91 | from matplotlib import pyplot as plt 92 | 93 | for cat, fn in zip([data, randoms], [data_fn_rec, randoms_fn_rec]): 94 | rec = recon.read_shifted_positions(cat['Position']) 95 | if 'Position_rec' in cat: 96 | if recon.mpicomm.rank == 0: print('Checking...') 97 | assert np.allclose(rec, cat['Position_rec']) 98 | else: 99 | cat['Position_rec'] = rec 100 | if fn is not None: 101 | cat.write(fn) 102 | 103 | kwargs = dict(edges={'min': 0., 'step': 0.01}, ells=(0, 2, 4), boxsize=1000., nmesh=64, resampler='tsc', interlacing=3, position_type='pos') 104 | power = CatalogFFTPower(data_positions1=data['Position'], randoms_positions1=randoms['Position'], **kwargs) 105 | poles = power.poles 106 | power = CatalogFFTPower(data_positions1=data['Position_rec'], randoms_positions1=randoms['Position_rec'], **kwargs) 107 | poles_rec = power.poles 108 | 109 | for ill, ell in enumerate(poles.ells): 110 | plt.plot(poles.k, poles.k * poles(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='-') 111 | plt.plot(poles_rec.k, poles_rec.k * poles_rec(ell=ell, complex=False), color='C{:d}'.format(ill), linestyle='--') 112 | 113 | if power.mpicomm.rank == 0: 114 | plt.show() 115 | 116 | 117 | if __name__ == '__main__': 118 | 119 | from utils import data_fn, randoms_fn, catalog_rec_fn 120 | from pyrecon.utils import setup_logging 121 | 122 | setup_logging() 123 | # Run utils.py to generate catalogs needed for these tests 124 | 125 | # test_mem() 126 | test_dtype() 127 | test_wrap() 128 | test_mpi(PlaneParallelFFTReconstruction, with_local_los=False) 129 | data_fn_rec, randoms_fn_rec = [catalog_rec_fn(fn, 'plane_parallel_fft') for fn in [data_fn, randoms_fn]] 130 | # test_ref(data_fn, randoms_fn, data_fn_rec, randoms_fn_rec) 131 | test_ref(data_fn_rec, randoms_fn_rec, None, None) 132 | -------------------------------------------------------------------------------- /pyrecon/tests/test_utils.py: -------------------------------------------------------------------------------- 1 | import re 2 | 3 | import numpy as np 4 | 5 | from pyrecon import utils 6 | from pyrecon.utils import DistanceToRedshift 7 | 8 | from test_multigrid import get_random_catalog 9 | 10 | 11 | def decode_eval_str(s): 12 | # change ${col} => col, and return list of columns 13 | toret = str(s) 14 | columns = [] 
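    # each ${col} placeholder below is matched lazily; columns are recorded once,
    # in order of first appearance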
15 | for replace in re.finditer(r'(\${.*?})', s): 16 | value = replace.group(1) 17 | col = value[2:-1] 18 | toret = toret.replace(value, col) 19 | if col not in columns: columns.append(col) 20 | return toret, columns 21 | 22 | 23 | def test_decode_eval_str(): 24 | s = '(${RA}>0.) & (${RA}<30.) & (${DEC}>0.) & (${DEC}<30.)' 25 | s, cols = decode_eval_str(s) 26 | print(s, cols) 27 | 28 | 29 | def test_distance_to_redshift(): 30 | 31 | def distance(z): 32 | return z**2 33 | 34 | d2z = DistanceToRedshift(distance) 35 | z = np.linspace(0., 20., 200) 36 | d = distance(z) 37 | assert np.allclose(d2z(d), z) 38 | for itemsize in [4, 8]: 39 | assert d2z(d.astype('f{:d}'.format(itemsize))).itemsize == itemsize 40 | 41 | 42 | def test_cartesian_to_sky(): 43 | for dtype in ['f4', 'f8']: 44 | dtype = np.dtype(dtype) 45 | positions = get_random_catalog(csize=100)['Position'].astype(dtype) 46 | drd = utils.cartesian_to_sky(positions) 47 | assert all(array.dtype.itemsize == dtype.itemsize for array in drd) 48 | positions2 = utils.sky_to_cartesian(*drd) 49 | assert positions2.dtype.itemsize == dtype.itemsize 50 | assert np.allclose(positions2, positions, rtol=1e-4 if dtype.itemsize == 4 else 1e-9) 51 | 52 | 53 | def cslice(mesh, cstart, cstop, concatenate=True): 54 | mpicomm = mesh.pm.comm 55 | cstart_out = mpicomm.allgather(cstart < 0) 56 | if any(cstart_out): 57 | cstart1, cstop1 = cstart, cstop 58 | cstart2, cstop2 = 0, 0 59 | if cstart_out[mpicomm.rank]: 60 | cstart1, cstop1 = cstart + mesh.pm.Nmesh[0], mesh.pm.Nmesh[0] 61 | cstart2, cstop2 = 0, cstop 62 | toret = cslice(mesh, cstart1, cstop1, concatenate=False) 63 | toret += cslice(mesh, cstart2, cstop2, concatenate=False) 64 | if concatenate: toret = np.concatenate(toret, axis=0) 65 | return toret 66 | cstop_out = mpicomm.allgather(cstop > mesh.pm.Nmesh[0] and cstop > cstart) # as above test may call with cstop = cstart 67 | if any(cstop_out): 68 | cstart1, cstop1 = 0, 0 69 | cstart2, cstop2 = cstart, cstop 70 | if cstop_out[mpicomm.rank]: 71 | cstart1, cstop1 = cstart, mesh.pm.Nmesh[0] 72 | cstart2, cstop2 = 0, cstop - mesh.pm.Nmesh[0] 73 | toret = cslice(mesh, cstart1, cstop1, concatenate=False) 74 | toret += cslice(mesh, cstart2, cstop2, concatenate=False) 75 | if concatenate: toret = np.concatenate(toret, axis=0) 76 | return toret 77 | 78 | mpicomm = mesh.pm.comm 79 | ranges = mpicomm.allgather((mesh.start[0], mesh.start[0] + mesh.shape[0])) 80 | argsort = np.argsort([start for start, stop in ranges]) 81 | # Send requested slices 82 | sizes, all_slices = [], [] 83 | for irank, (start, stop) in enumerate(ranges): 84 | lstart, lstop = max(cstart - start, 0), min(max(cstop - start, 0), stop - start) 85 | sizes.append(max(lstop - lstart, 0)) 86 | all_slices.append(mpicomm.allgather(slice(lstart, lstop))) 87 | assert sum(sizes) == cstop - cstart 88 | toret = [] 89 | for root in range(mpicomm.size): 90 | if mpicomm.rank == root: 91 | for rank in range(mpicomm.size): 92 | sl = all_slices[root][rank] 93 | if rank == root: 94 | tmp = mesh.value[sl] 95 | else: 96 | mpicomm.Send(np.ascontiguousarray(mesh.value[sl]), dest=rank, tag=43) 97 | #mpi.send(mesh.value[all_slices[root][irank]], dest=irank, tag=44, mpicomm=mpicomm) 98 | else: 99 | tmp = np.empty_like(mesh.value, shape=(sizes[root],) + mesh.shape[1:], order='C') 100 | mpicomm.Recv(tmp, source=root, tag=43) 101 | #tmp = mpi.recv(source=root, tag=44, mpicomm=mpicomm) 102 | toret.append(tmp) 103 | 104 | toret = [toret[ii] for ii in argsort] 105 | if concatenate: toret = np.concatenate(toret, axis=0) 106 | 
return toret 107 | 108 | 109 | def test_cslice(): 110 | from pmesh.pm import ParticleMesh 111 | from pyrecon import mpi 112 | mpicomm = mpi.COMM_WORLD 113 | nmesh = 16 114 | pm = ParticleMesh(BoxSize=[1.] * 3, Nmesh=[nmesh] * 3, np=(mpicomm.size, 1), dtype='f8') 115 | mesh = pm.create('real') 116 | cvalue = np.arange(pm.Nmesh.prod()).reshape(pm.Nmesh) 117 | mesh.value[...] = cvalue[mesh.start[0]:mesh.start[0] + mesh.shape[0]] 118 | array = cslice(mesh, -1, nmesh + 1) 119 | assert array.shape == (nmesh + 2, nmesh, nmesh) 120 | assert np.allclose(array[0], cvalue[-1]) 121 | assert np.allclose(array[-1], cvalue[0]) 122 | array = cslice(mesh, -3, nmesh + 3) 123 | assert array.shape == (nmesh + 6, nmesh, nmesh) 124 | array = cslice(mesh, -9, nmesh + 3) 125 | assert array.shape == (nmesh + 12, nmesh, nmesh) 126 | array = cslice(mesh, 2, 6) 127 | assert array.shape == (4, nmesh, nmesh) 128 | sl = [(0, 3), (-1, 2), (1, 4), (2, 5)][mpicomm.rank] 129 | array = cslice(mesh, *sl) 130 | assert array[1:-1].shape == (1, nmesh, nmesh) 131 | 132 | 133 | if __name__ == '__main__': 134 | 135 | test_decode_eval_str() 136 | test_distance_to_redshift() 137 | test_cartesian_to_sky() 138 | test_cslice() 139 | -------------------------------------------------------------------------------- /pyrecon/tests/test_zevolve.py: -------------------------------------------------------------------------------- 1 | import os 2 | from pathlib import Path 3 | import numpy as np 4 | 5 | from pyrecon import IterativeFFTReconstruction, mpi, setup_logging 6 | from pyrecon.utils import DistanceToRedshift, sky_to_cartesian, cartesian_to_sky 7 | from utils import data_fn, randoms_fn, catalog_rec_fn, Catalog 8 | from cosmoprimo.fiducial import DESI 9 | from scipy.interpolate import InterpolatedUnivariateSpline 10 | 11 | 12 | def get_clustering_positions_weights(data, distance): 13 | mask = (data['Z'] > 0.) & (data['Z'] < 10.) 14 | ra = data['RA'][mask] 15 | dec = data['DEC'][mask] 16 | dist = distance(data['Z'][mask]) 17 | positions = sky_to_cartesian(ra=ra, dec=dec, dist=dist) 18 | weights = data['Weight'][mask] 19 | return positions, weights, mask 20 | 21 | 22 | def bias_evolution(z, tracer='QSO'): 23 | """ 24 | Bias model fitted from DR1 unblinded data (the formula from Laurent et al. 
2016, arXiv:1705.04718).
25 |     """
26 |     if tracer == 'QSO':
27 |         alpha = 0.237
28 |         beta = 2.328
29 |     elif tracer == 'LRG':
30 |         alpha = 0.209
31 |         beta = 2.790
32 |     elif tracer == 'ELG_LOPnotqso':
33 |         alpha = 0.153
34 |         beta = 1.541
35 |     else:
36 |         raise NotImplementedError(f'{tracer} not implemented.')
37 |     return alpha * ((1+z)**2 - 6.565) + beta
38 | 
39 | 
40 | def interpolate_f_bias(cosmo, tracer, zdependent=False):
41 |     P0 = {'LRG': 8.9e3, 'QSO': 5.0e3}[tracer]
42 |     if zdependent:
43 |         z = np.linspace(0.0, 5.0, 10000)
44 |         growth_rate = cosmo.growth_rate(z)
45 |         bias = bias_evolution(z, tracer)
46 |         distance = cosmo.comoving_radial_distance
47 |         f_at_dist = InterpolatedUnivariateSpline(distance(z), growth_rate, k=3)
48 |         bias_at_dist = InterpolatedUnivariateSpline(distance(z), bias, k=3)
49 |         # both spline interpolators are returned below
50 |     else:
51 |         f_at_dist = {'LRG': 0.834, 'QSO': 0.928}[tracer]
52 |         bias_at_dist = {'LRG': 2.0, 'QSO': 2.1}[tracer]
53 |     return f_at_dist, bias_at_dist, P0
54 | 
55 | 
56 | def interpolate_nbar(data, randoms, distance):
57 |     from mockfactory import RedshiftDensityInterpolator
58 |     alpha = data['Weight'].csum() / randoms['Weight'].csum()
59 |     density = RedshiftDensityInterpolator(distance(randoms['Z']), weights=alpha * randoms.ones(), bins=distance(np.linspace(0., 3., 100)),
60 |                                           fsky=0.01)
61 |     return density
62 | 
63 | 
64 | def test_ref(data_fn, randoms_fn, data_fn_rec, randoms_fn_rec, tracer='LRG', recon_weights=True, fmesh=True, bmesh=True):
65 |     cosmo = DESI()
66 |     data = Catalog.read(data_fn)
67 |     randoms = Catalog.read(randoms_fn)
68 |     f, bias, P0 = interpolate_f_bias(cosmo, tracer, zdependent=False)
69 |     f_at_dist, bias_at_dist, P0 = interpolate_f_bias(cosmo, tracer, zdependent=True)
70 |     nbar_at_dist = interpolate_nbar(data, randoms, distance=cosmo.comoving_radial_distance)
71 |     f = f_at_dist if fmesh else f
72 |     bias = bias_at_dist if bmesh else bias
73 |     for mode in ['std', 'fast']:
74 |         if mode == 'std':
75 |             recon = IterativeFFTReconstruction(f=f, bias=bias, positions=randoms['Position'], cellsize=7.,
76 |                                                los='local', position_type='pos', dtype='f8')
77 |             recon.assign_data(data['Position'], data['Weight'])
78 |             recon.assign_randoms(randoms['Position'], randoms['Weight'])
79 |             #recon.set_density_contrast(smoothing_radius=15.)
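            # the commented line above and the one below sketch what is presumably an older
            # two-step interface; the active call below passes the smoothing radius and
            # optimal-weight parameters in a single step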
80 | #if recon_weights: recon.set_optimal_weights(**{'P0': P0, 'n_at_dist': nbar_at_dist}) 81 | recon.set_density_contrast(smoothing_radius=15., kw_weights={'P0': P0, 'nbar': nbar_at_dist} if recon_weights else None) 82 | recon.run() 83 | else: 84 | recon = IterativeFFTReconstruction(f=f, bias=bias, data_positions=data['Position'], 85 | randoms_positions=randoms['Position'], cellsize=7., 86 | smoothing_radius=15., kw_weights={'P0': P0, 'nbar': nbar_at_dist} if recon_weights else None, 87 | los='local', position_type='pos', dtype='f8') 88 | 89 | data['Position_rec'] = recon.read_shifted_positions(data['Position']) 90 | randoms['Position_rec'] = recon.read_shifted_positions(randoms['Position']) 91 | 92 | for cat, fn in zip([data, randoms], [data_fn_rec, randoms_fn_rec]): 93 | rec = recon.read_shifted_positions(cat['Position']) 94 | if 'Position_rec' in cat: 95 | if recon.mpicomm.rank == 0: print('Checking...') 96 | assert np.allclose(rec, cat['Position_rec']) 97 | else: 98 | cat['Position_rec'] = rec 99 | if fn is not None: 100 | cat.write(fn) 101 | 102 | 103 | if __name__ == '__main__': 104 | 105 | setup_logging() 106 | 107 | data_fn_rec, randoms_fn_rec = [catalog_rec_fn(fn, 'zevolve') for fn in [data_fn, randoms_fn]] 108 | #test_ref(data_fn, randoms_fn, data_fn_rec, randoms_fn_rec) 109 | test_ref(data_fn_rec, randoms_fn_rec, None, None) -------------------------------------------------------------------------------- /pyrecon/tests/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import numpy as np 4 | from mockfactory import LagrangianLinearMock, RandomBoxCatalog, Catalog, cartesian_to_sky, DistanceToRedshift, setup_logging 5 | 6 | 7 | catalog_dir = '_catalogs' 8 | box_data_fn = os.path.join(catalog_dir, 'box_data.fits') 9 | data_fn = os.path.join(catalog_dir, 'data.fits') 10 | randoms_fn = os.path.join(catalog_dir, 'randoms.fits') 11 | 12 | 13 | def catalog_rec_fn(fn, algorithm): 14 | base, ext = os.path.splitext(fn) 15 | return '{}_{}{}'.format(base, algorithm, ext) 16 | 17 | 18 | def mkdir(dirname): 19 | try: 20 | os.makedirs(dirname) 21 | except OSError: 22 | pass 23 | 24 | 25 | def get_random_catalog(csize=100000, boxsize=1000., seed=42): 26 | import mpytools as mpy 27 | catalog = RandomBoxCatalog(csize=csize, boxsize=boxsize, seed=seed) 28 | catalog['Weight'] = mpy.random.MPIRandomState(size=catalog.size, seed=seed).uniform(0.5, 1.) 29 | return catalog 30 | 31 | 32 | def save_box_lognormal_catalogs(data_fn, seed=42): 33 | from cosmoprimo.fiducial import DESI 34 | z, bias, nbar, nmesh, boxsize, boxcenter = 0.7, 2.0, 3e-4, 256, 800., 0. 35 | cosmo = DESI() 36 | pklin = cosmo.get_fourier().pk_interpolator().to_1d(z=z) 37 | f = cosmo.sigma8_z(z=z, of='theta_cb') / cosmo.sigma8_z(z=z, of='delta_cb') # growth rate 38 | mock = LagrangianLinearMock(pklin, nmesh=nmesh, boxsize=boxsize, boxcenter=boxcenter, seed=seed, unitary_amplitude=False) 39 | offset = boxcenter - boxsize / 2. 40 | # this is Lagrangian bias, Eulerian bias - 1 41 | mock.set_real_delta_field(bias=bias - 1) 42 | mock.set_analytic_selection_function(nbar=nbar) 43 | mock.poisson_sample(seed=seed + 1) 44 | mock.set_rsd(f=f, los='x') 45 | catalog = mock.to_catalog() 46 | catalog['Position'] = (catalog['Position'] - offset) % boxsize + offset 47 | catalog.write(data_fn) 48 | 49 | 50 | def save_lognormal_catalogs(data_fn, randoms_fn, seed=42): 51 | from cosmoprimo.fiducial import DESI 52 | z, bias, nbar, nmesh, boxsize = 0.7, 2.0, 3e-4, 256, 800.
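# note (added): z is the snapshot redshift, bias the Eulerian tracer bias, nbar the mean density (presumably in (Mpc/h)^-3), nmesh the mesh size and boxsize the box side (presumably in Mpc/h) of the lognormal mock drawn below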
53 | cosmo = DESI() 54 | d2z = DistanceToRedshift(cosmo.comoving_radial_distance) 55 | pklin = cosmo.get_fourier().pk_interpolator().to_1d(z=z) 56 | f = cosmo.sigma8_z(z=z, of='theta_cb') / cosmo.sigma8_z(z=z, of='delta_cb') # growth rate 57 | dist = cosmo.comoving_radial_distance(z) 58 | boxcenter = [dist, 0, 0] 59 | mock = LagrangianLinearMock(pklin, nmesh=nmesh, boxsize=boxsize, boxcenter=boxcenter, seed=seed, unitary_amplitude=False) 60 | # This is Lagrangian bias, Eulerian bias - 1 61 | mock.set_real_delta_field(bias=bias - 1) 62 | mock.set_analytic_selection_function(nbar=nbar) 63 | mock.poisson_sample(seed=seed + 1) 64 | mock.set_rsd(f=f, los=None) 65 | data = mock.to_catalog() 66 | 67 | # We've got data, now turn to randoms 68 | randoms = RandomBoxCatalog(nbar=10. * nbar, boxsize=boxsize, boxcenter=boxcenter, seed=seed + 2) 69 | # Add columns to test pyrecon script 70 | for cat in [data, randoms]: 71 | cat['Weight'] = cat.ones() 72 | cat['NZ'] = nbar * cat.ones() 73 | dist, cat['RA'], cat['DEC'] = cartesian_to_sky(cat['Position']) 74 | cat['Z'] = d2z(dist) 75 | 76 | data.write(data_fn) 77 | randoms.write(randoms_fn) 78 | 79 | 80 | def test_mpi(algorithm, with_local_los=True): 81 | from pyrecon.utils import cartesian_to_sky 82 | from pyrecon import mpi 83 | data, randoms = get_random_catalog(seed=42), get_random_catalog(seed=81) 84 | gathered_data, gathered_randoms = data.gather(mpiroot=0), randoms.gather(mpiroot=0) 85 | mpicomm = data.mpicomm 86 | 87 | def get_shifts(data, randoms, position_type='pos', weight_type=True, mpicomm=None, mpiroot=None, mode='std', los='x'): 88 | data_positions, data_weights = data['Position'], data['Weight'] 89 | randoms_positions, randoms_weights = randoms['Position'], randoms['Weight'] 90 | if mpiroot is not None: 91 | data_positions, data_weights = mpi.gather(data_positions, mpicomm=mpicomm), mpi.gather(data_weights, mpicomm=mpicomm) 92 | randoms_positions, randoms_weights = mpi.gather(randoms_positions, mpicomm=mpicomm), mpi.gather(randoms_weights, mpicomm=mpicomm) 93 | if mpiroot is None or mpicomm.rank == mpiroot: 94 | if position_type == 'xyz': 95 | data_positions = data_positions.T 96 | randoms_positions = randoms_positions.T 97 | if position_type == 'rdd': 98 | data_positions = cartesian_to_sky(data_positions) 99 | randoms_positions = cartesian_to_sky(randoms_positions) 100 | data_positions = list(data_positions[1:]) + [data_positions[0]] 101 | randoms_positions = list(randoms_positions[1:]) + [randoms_positions[0]] 102 | if not weight_type: 103 | data_weights = randoms_weights = None 104 | 105 | if mode == 'std': 106 | recon = algorithm(positions=data_positions, randoms_positions=randoms_positions, nmesh=64, position_type=position_type, los=los, dtype='f8', mpicomm=mpicomm, mpiroot=mpiroot) 107 | assert recon.f is None 108 | recon.set_cosmo(f=0.8, bias=2.)
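# note (added): the 'std' branch exercises the step-by-step API (paint data, paint randoms, build the density contrast, run the solver); the 'fast' branch below does the same through a single constructor call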
109 | recon.assign_data(data_positions, data_weights) 110 | recon.assign_randoms(randoms_positions, randoms_weights) 111 | recon.set_density_contrast() 112 | recon.run() 113 | else: 114 | recon = algorithm(f=0.8, bias=2., data_positions=data_positions, data_weights=data_weights, 115 | randoms_positions=randoms_positions, randoms_weights=randoms_weights, 116 | nmesh=64, position_type=position_type, los=los, dtype='f8', mpicomm=mpicomm, mpiroot=mpiroot) 117 | shifted_positions = recon.read_shifted_positions(data_positions, field='disp+rsd') 118 | if mpiroot is None or mpicomm.rank == mpiroot: 119 | assert np.array(shifted_positions).shape == np.array(data_positions).shape 120 | return recon.read_shifts(data_positions, field='disp+rsd') 121 | 122 | for weight_type in [True, False]: 123 | if mpicomm.rank == 0: 124 | shifts_ref = get_shifts(gathered_data, gathered_randoms, position_type='pos', weight_type=weight_type, mpicomm=gathered_data.mpicomm) 125 | 126 | for mpiroot in [None, 0]: 127 | for los in ['x'] + (['local'] if with_local_los else []): 128 | for mode in ['std', 'fast']: 129 | for position_type in ['pos', 'rdd', 'xyz']: 130 | shifts = get_shifts(data, randoms, position_type=position_type, weight_type=weight_type, mpicomm=mpicomm, mpiroot=mpiroot, mode=mode) 131 | if mpiroot is None: 132 | shifts = mpi.gather(shifts, mpicomm=mpicomm, mpiroot=0) 133 | if mpicomm.rank == 0: 134 | assert np.allclose(shifts, shifts_ref, rtol=1e-6) 135 | 136 | 137 | def main(): 138 | 139 | setup_logging() 140 | mkdir(catalog_dir) 141 | save_box_lognormal_catalogs(box_data_fn, seed=42) 142 | save_lognormal_catalogs(data_fn, randoms_fn, seed=42) 143 | 144 | 145 | if __name__ == '__main__': 146 | 147 | main() 148 | -------------------------------------------------------------------------------- /pyrecon/utils.h: -------------------------------------------------------------------------------- 1 | #ifndef _UTILS_H_ 2 | #define _UTILS_H_ 3 | 4 | #include <stdio.h> 5 | #include <stdlib.h> 6 | 7 | #define NDIM 3 8 | 9 | 10 | void* my_malloc(size_t N, size_t size) 11 | { 12 | void *x = malloc(N*size); 13 | if (x == NULL){ 14 | fprintf(stderr, "malloc for %zu elements with %zu bytes failed...\n", N, size); 15 | perror(NULL); 16 | } 17 | return x; 18 | } 19 | 20 | void* my_calloc(size_t N, size_t size) 21 | { 22 | void *x = calloc(N, size); 23 | if (x == NULL){ 24 | fprintf(stderr, "calloc for %zu elements with %zu bytes failed...\n", N, size); 25 | perror(NULL); 26 | } 27 | return x; 28 | } 29 | 30 | #endif 31 | -------------------------------------------------------------------------------- /pyrecon/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import time 4 | import logging 5 | import traceback 6 | 7 | import numpy as np 8 | 9 | lib_dir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'lib') 10 | 11 | 12 | def exception_handler(exc_type, exc_value, exc_traceback): 13 | """Print exception with a logger.""" 14 | # Do not print traceback if the exception has been handled and logged 15 | _logger_name = 'Exception' 16 | log = logging.getLogger(_logger_name) 17 | line = '=' * 100 18 | # log.critical(line[len(_logger_name) + 5:] + '\n' + ''.join(traceback.format_exception(exc_type, exc_value, exc_traceback)) + line) 19 | log.critical('\n' + line + '\n' + ''.join(traceback.format_exception(exc_type, exc_value, exc_traceback)) + line) 20 | if exc_type is KeyboardInterrupt: 21 | log.critical('Interrupted by the user.') 22 | else: 23 | log.critical('An error occurred.')
24 | 25 | 26 | def setup_logging(level=logging.INFO, stream=sys.stdout, filename=None, filemode='w', **kwargs): 27 | """ 28 | Set up logging. 29 | 30 | Parameters 31 | ---------- 32 | level : string, int, default=logging.INFO 33 | Logging level. 34 | 35 | stream : _io.TextIOWrapper, default=sys.stdout 36 | Where to stream. 37 | 38 | filename : string, default=None 39 | If not ``None``, stream to this file name. 40 | 41 | filemode : string, default='w' 42 | Mode to open file, only used if filename is not ``None``. 43 | 44 | kwargs : dict 45 | Other arguments for :func:`logging.basicConfig`. 46 | """ 47 | # Cannot provide stream and filename kwargs at the same time to logging.basicConfig, so handle different cases 48 | # Thanks to https://stackoverflow.com/questions/30861524/logging-basicconfig-not-creating-log-file-when-i-run-in-pycharm 49 | if isinstance(level, str): 50 | level = {'info': logging.INFO, 'debug': logging.DEBUG, 'warning': logging.WARNING}[level.lower()] 51 | for handler in logging.root.handlers: 52 | logging.root.removeHandler(handler) 53 | 54 | t0 = time.time() 55 | 56 | class MyFormatter(logging.Formatter): 57 | 58 | def format(self, record): 59 | self._style._fmt = '[%09.2f] ' % (time.time() - t0) + ' %(asctime)s %(name)-28s %(levelname)-8s %(message)s' 60 | return super(MyFormatter, self).format(record) 61 | 62 | fmt = MyFormatter(datefmt='%m-%d %H:%M ') 63 | if filename is not None: 64 | mkdir(os.path.dirname(filename)) 65 | handler = logging.FileHandler(filename, mode=filemode) 66 | else: 67 | handler = logging.StreamHandler(stream=stream) 68 | handler.setFormatter(fmt) 69 | logging.basicConfig(level=level, handlers=[handler], **kwargs) 70 | sys.excepthook = exception_handler 71 | 72 | 73 | def mkdir(dirname): 74 | """Try to create ``dirname`` and catch :class:`OSError`.""" 75 | try: 76 | os.makedirs(dirname) # MPI... 77 | except OSError: 78 | return 79 | 80 | 81 | class BaseMetaClass(type): 82 | 83 | """Meta class to add logging attributes to :class:`BaseClass` derived classes.""" 84 | 85 | def __new__(meta, name, bases, class_dict): 86 | cls = super().__new__(meta, name, bases, class_dict) 87 | cls.set_logger() 88 | return cls 89 | 90 | def set_logger(cls): 91 | """ 92 | Add attributes for logging: 93 | 94 | - logger 95 | - methods log_debug, log_info, log_warning, log_error, log_critical 96 | """ 97 | cls.logger = logging.getLogger(cls.__name__) 98 | 99 | def make_logger(level): 100 | 101 | @classmethod 102 | def logger(cls, *args, **kwargs): 103 | getattr(cls.logger, level)(*args, **kwargs) 104 | 105 | return logger 106 | 107 | for level in ['debug', 'info', 'warning', 'error', 'critical']: 108 | setattr(cls, 'log_{}'.format(level), make_logger(level)) 109 | 110 | 111 | class BaseClass(object, metaclass=BaseMetaClass): 112 | """ 113 | Base class that implements :meth:`copy`. 114 | To be used throughout this package. 115 | """ 116 | def __copy__(self): 117 | new = self.__class__.__new__(self.__class__) 118 | new.__dict__.update(self.__dict__) 119 | return new 120 | 121 | def copy(self): 122 | return self.__copy__() 123 | 124 | 125 | def distance(position): 126 | """Return Cartesian distance, taking coordinates along the last axis of ``position``.""" 127 | return np.sqrt((position**2).sum(axis=-1)) 128 | 129 | 130 | def safe_divide(x, y, inplace=False): 131 | """ 132 | Divide ``x`` by ``y`` after replacing 0 in ``y`` by 1. 133 | If ``inplace`` is ``True``, ``x`` and ``y`` are modified in-place. 134 | """ 135 | if not inplace: 136 | y = np.array(y) 137 | y[y == 0.] = 1.
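# note (added): zeros in y have been replaced by ones above, so the division below cannot divide by zero; with inplace=True, x itself is divided and returned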
138 | if inplace: 139 | toret = x 140 | toret /= y 141 | else: 142 | toret = x / y 143 | return toret 144 | 145 | 146 | def cartesian_to_sky(position, wrap=True, degree=True): 147 | r""" 148 | Transform Cartesian coordinates into distance, RA, Dec. 149 | 150 | Parameters 151 | ---------- 152 | position : array of shape (N, 3) 153 | Position in Cartesian coordinates. 154 | 155 | wrap : bool, default=True 156 | Whether to wrap RA in :math:`[0, 2 \pi)`. 157 | 158 | degree : bool, default=True 159 | Whether RA, Dec are in degrees (``True``) or radians (``False``). 160 | 161 | Returns 162 | ------- 163 | dist : array 164 | Distance. 165 | 166 | ra : array 167 | Right Ascension. 168 | 169 | dec : array 170 | Declination. 171 | """ 172 | dist = distance(position) 173 | ra = np.arctan2(position[:, 1], position[:, 0]) 174 | if wrap: ra %= 2. * np.pi 175 | dec = np.arcsin(position[:, 2] / dist) 176 | conversion = np.pi / 180. if degree else 1. 177 | return dist, ra / conversion, dec / conversion 178 | 179 | 180 | def sky_to_cartesian(dist, ra, dec, degree=True, dtype=None): 181 | """ 182 | Transform distance, RA, Dec into Cartesian coordinates. 183 | 184 | Parameters 185 | ---------- 186 | dist : array of shape (N,) 187 | Distance. 188 | 189 | ra : array of shape (N,) 190 | Right Ascension. 191 | 192 | dec : array of shape (N,) 193 | Declination. 194 | 195 | degree : bool, default=True 196 | Whether RA, Dec are in degrees (``True``) or radians (``False``). 197 | 198 | dtype : numpy.dtype, default=None 199 | :class:`numpy.dtype` for returned array. 200 | 201 | Returns 202 | ------- 203 | position : array of shape (N, 3) 204 | Position in Cartesian coordinates. 205 | """ 206 | conversion = np.pi / 180. if degree else 1. 207 | position = [None] * 3 208 | cos_dec = np.cos(dec * conversion) 209 | position[0] = cos_dec * np.cos(ra * conversion) 210 | position[1] = cos_dec * np.sin(ra * conversion) 211 | position[2] = np.sin(dec * conversion) 212 | return (dist * np.array(position, dtype=dtype)).T 213 | 214 | 215 | class DistanceToRedshift(object): 216 | 217 | """Class that holds a conversion distance -> redshift.""" 218 | 219 | def __init__(self, distance, zmax=100., nz=2048, interp_order=3): 220 | """ 221 | Initialize :class:`DistanceToRedshift`. 222 | Tabulates distance on a grid logarithmic in redshift and instantiates 223 | a spline interpolator for the inverse mapping, distance -> redshift. 224 | 225 | Parameters 226 | ---------- 227 | distance : callable 228 | Callable that provides distance as a function of redshift (array). 229 | 230 | zmax : float, default=100. 231 | Maximum redshift for redshift <-> distance mapping. 232 | 233 | nz : int, default=2048 234 | Number of points for redshift <-> distance mapping. 235 | 236 | interp_order : int, default=3 237 | Interpolation order, e.g. ``1`` for linear interpolation, ``3`` for cubic splines.
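Example (a minimal sketch, assuming cosmoprimo's fiducial DESI cosmology as used in the tests)::

    from cosmoprimo.fiducial import DESI
    d2z = DistanceToRedshift(DESI().comoving_radial_distance)
    z = d2z(1000.)  # redshift at comoving distance 1000, in the units of ``distance``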
238 | """ 239 | self.distance = distance 240 | self.zmax = zmax 241 | self.nz = nz 242 | zgrid = np.logspace(-8, np.log10(self.zmax), self.nz) 243 | self.zgrid = np.concatenate([[0.], zgrid]) 244 | self.rgrid = self.distance(self.zgrid) 245 | from scipy import interpolate 246 | self.interp = interpolate.UnivariateSpline(self.rgrid, self.zgrid, k=interp_order, s=0) 247 | 248 | def __call__(self, distance): 249 | """Return (interpolated) redshift at distance ``distance`` (scalar or array).""" 250 | distance = np.asarray(distance) 251 | return self.interp(distance).astype(distance.dtype, copy=False) 252 | 253 | 254 | def _make_array(value, shape, dtype='f8'): 255 | # Return numpy array filled with value 256 | toret = np.empty(shape, dtype=dtype) 257 | toret[...] = value 258 | return toret 259 | 260 | 261 | def _get_box(*positions): 262 | """Return minimal box containing input positions.""" 263 | pos_min, pos_max = _make_array(np.inf, 3, dtype='f8'), _make_array(-np.inf, 3, dtype='f8') 264 | for position in positions: 265 | if position.shape[0] > 0: pos_min, pos_max = np.min([pos_min, position.min(axis=0)], axis=0), np.max([pos_max, position.max(axis=0)], axis=0) 266 | return pos_min, pos_max 267 | 268 | 269 | class MemoryMonitor(object): 270 | """ 271 | Class that monitors memory usage and clock, useful to check for memory leaks. 272 | 273 | >>> with MemoryMonitor() as mem: 274 | '''do something''' 275 | mem() 276 | '''do something else''' 277 | """ 278 | def __init__(self, pid=None): 279 | """ 280 | Initalize :class:`MemoryMonitor` and register current memory usage. 281 | 282 | Parameters 283 | ---------- 284 | pid : int, default=None 285 | Process identifier. If ``None``, use the identifier of the current process. 286 | """ 287 | import psutil 288 | self.proc = psutil.Process(os.getpid() if pid is None else pid) 289 | self.mem = self.proc.memory_info().rss / 1e6 290 | self.time = time.time() 291 | msg = 'using {:.3f} [Mb]'.format(self.mem) 292 | print(msg, flush=True) 293 | 294 | def __enter__(self): 295 | """Enter context.""" 296 | return self 297 | 298 | def __call__(self, log=None): 299 | """Update memory usage.""" 300 | mem = self.proc.memory_info().rss / 1e6 301 | t = time.time() 302 | msg = 'using {:.3f} [Mb] (increase of {:.3f} [Mb]) after {:.3f} [s]'.format(mem, mem - self.mem, t - self.time) 303 | if log: 304 | msg = '[{}] {}'.format(log, msg) 305 | print(msg, flush=True) 306 | self.mem = mem 307 | self.time = t 308 | 309 | def __exit__(self, exc_type, exc_value, exc_traceback): 310 | """Exit context.""" 311 | self() 312 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | from setuptools import setup, Extension 4 | 5 | import numpy as np 6 | 7 | # base directory of package 8 | package_basedir = os.path.abspath(os.path.dirname(__file__)) 9 | package_basename = 'pyrecon' 10 | 11 | sys.path.insert(0, os.path.join(package_basedir, package_basename)) 12 | import _version 13 | version = _version.__version__ 14 | 15 | 16 | if __name__ == '__main__': 17 | 18 | setup(name=package_basename, 19 | version=version, 20 | author='cosmodesi', 21 | author_email='', 22 | description='Python wrapper for reconstruction codes', 23 | license='BSD3', 24 | url='http://github.com/cosmodesi/pyrecon', 25 | install_requires=['numpy', 'scipy', 'pmesh @ git+https://github.com/MP-Gadget/pmesh'], 26 | extras_require={'extras': ['mpytools', 'fitsio', 'h5py'], 
'metrics': ['pypower @ git+https://github.com/cosmodesi/pypower']}, 27 | ext_modules=[Extension(f'{package_basename}._multigrid', [f'{package_basename}/_multigrid.pyx'], 28 | depends=[f'{package_basename}/_multigrid_imp.h', f'{package_basename}/_multigrid_generics.h'], 29 | libraries=['m'], 30 | include_dirs=['./', np.get_include()])], 31 | packages=[package_basename], 32 | scripts=[f'bin/{package_basename}']) 33 | --------------------------------------------------------------------------------
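A quick self-check of the coordinate helpers defined in `pyrecon/utils.py` above. This is a minimal sketch with hypothetical values, assuming only numpy and an installed pyrecon; it verifies that `sky_to_cartesian` and `cartesian_to_sky` are inverses of each other:
```
import numpy as np
from pyrecon.utils import sky_to_cartesian, cartesian_to_sky

rng = np.random.RandomState(seed=42)
dist = rng.uniform(1000., 2000., 100)  # comoving distances
ra = rng.uniform(0., 360., 100)        # Right Ascension in degrees
dec = rng.uniform(-60., 60., 100)      # Declination in degrees
positions = sky_to_cartesian(dist, ra, dec)     # (100, 3) array of Cartesian positions
dist2, ra2, dec2 = cartesian_to_sky(positions)  # back to sky coordinates
assert np.allclose([dist2, ra2, dec2], [dist, ra, dec])
```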