├── LICENSE ├── README.md ├── blocks1.py ├── blocks2.py ├── bootstrap.py ├── common.py ├── compat_autoboot.py ├── compat_juliboots.py ├── compat_scalar_blocks.py ├── tutorial.py └── wrap.py /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 Connor Behan 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | 23 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # PyCFTBoot 2 | 3 | PyCFTBoot is an interface for the conformal bootstrap as discussed in [its 2008 revival](http://arxiv.org/abs/0807.0004). Starting from the analytic structure of conformal blocks, the code formulates semidefinite programs without any proprietary software. The code does NOT perform the actual optimization. 
It assumes you already have a program for that, namely [SDPB](https://github.com/davidsd/sdpb) by David Simmons-Duffin. 4 | 5 | PyCFTBoot supports bounds on scaling dimensions and OPE coefficients in an arbitrary number of spacetime dimensions. The four-point functions used for the bounds must contain only scalars but they may have any combination of scaling dimensions and transform in arbitrary representations of a global symmetry. 6 | 7 | ## Installation on Linux 8 | If you use one of the mainstream Linux distributions, the following instructions should help you install PyCFTBoot and everything it depends on. 9 | 10 | 1. Follow [the instructions](https://github.com/davidsd/sdpb/blob/master/Install.md#linux) for installing SDPB. When this is done, you will have [Boost](http://www.boost.org) and [GMP](https://gmplib.org) as well, so we will not need to discuss those further. 11 | 12 | 2. Additional run-time dependencies are: [Sympy](http://www.sympy.org) and [MPFR >= 4.0](http://www.mpfr.org/). The build-time dependencies are: [Cython](http://cython.org/) and [CMake >= 2.8](https://cmake.org/). You should install all of these. You will probably not need to compile them because most distros have these packages in their repositories. 13 | 14 | 3. There are two library dependencies left. One is [Symengine](https://github.com/symengine/symengine) which probably needs to be compiled. One commit that has been tested is ec460e7. An even better idea is to use the latest commit that has been [marked stable](https://github.com/symengine/symengine.py/blob/master/symengine_version.txt) for language bindings. To compile it with the recommended settings, run: 15 | 16 | mkdir build && cd build 17 | # WITH_PTHREAD and WITH_SYMENGINE_THREAD_SAFE might be helpful as well 18 | cmake .. -DWITH_MPFR:BOOL=ON 19 | make 20 | 21 | 4. Lastly, compile and install [Symengine.py](https://github.com/symengine/symengine.py). 22 | 23 | 5. 
Additionally, extracting the spectrum with PyCFTBoot will require the binary [unisolve](https://numpi.dm.unipi.it/mpsolve-2.2/). 24 | 25 | ## Installation on Mac 26 | Thanks to Jaehoon Lee for writing these instructions and testing them on OS X 10.11 (El Capitan). 27 | 28 | 1. Follow the instructions for [installing SDPB](https://github.com/davidsd/sdpb/blob/master/Install.md#mac-os-x) on Mac OS X. Installing gcc takes a long time, so be patient. Also, you don't need `sudo` for installing boost due to recent changes. After that, you will have homebrew, gcc, gmp, mpfr and boost installed. The default compilers should be renamed as `gcc` and `g++` following the instructions. 29 | 30 | 2. Build all the required packages (Cython, Numpy, Sympy and Mpmath). You might already have these packages installed. The following assumes that no package other than the system's Python is installed. 31 | 32 | # Install homebrew's python which comes with pip 33 | brew install python 34 | brew linkapps python 35 | pip install --upgrade pip setuptools 36 | 37 | # Numpy 38 | brew install homebrew/python/numpy 39 | 40 | # Cython 41 | pip install cython 42 | 43 | # Sympy 44 | pip install sympy 45 | 46 | # Mpmath - technically not required as it is included in sympy 47 | pip install mpmath 48 | 49 | 3. Install `cmake` using homebrew: 50 | 51 | brew install cmake 52 | 53 | 4. Download [Symengine](https://github.com/symengine/symengine) and compile it. If the build fails and you need to start over, remove the build folder and recreate it. Unpack the source file and, within the directory, run: 54 | 55 | mkdir build && cd build 56 | # Turning on the MPFR option is critical for using PyCFTBoot 57 | CC=gcc CXX=g++ cmake .. -DWITH_MPFR:BOOL=ON 58 | make 59 | # Test everything is built correctly 60 | ctest 61 | # Install files to default directories 62 | make install 63 | 64 | 5. Install the Python bindings [Symengine.py](https://github.com/symengine/symengine.py). 
Download the source and within the directory run: 65 | 66 | CC=gcc CXX=g++ python setup.py install 67 | 68 | ## Usage 69 | To test that PyCFTBoot is working, try to run: 70 | 71 | python 72 | import bootstrap 73 | 74 | If that doesn't work, you should check whether the dependencies import correctly. 75 | 76 | python 77 | import symengine 78 | 79 | Assuming that all of this works, `python tutorial.py` will enter a tutorial with four examples. There are two changes that you might want to make to `bootstrap.py`. One is changing `python2` to `python` in the first line, for systems that don't append a specific number. The other is setting the path of SDPB and related executables by searching for `/usr/bin/sdpb` and updating it. Have fun constraining CFTs and convincing cluster maintainers to install fairly new software! 80 | 81 | ## Attribution 82 | If PyCFTBoot is helpful in one of your publications, please cite: 83 | 84 | - C. Behan, "PyCFTBoot: A flexible interface for the conformal bootstrap", [arXiv:1602.02810 \[hep-th\]](http://arxiv.org/abs/1602.02810). 85 | -------------------------------------------------------------------------------- /blocks1.py: -------------------------------------------------------------------------------- 1 | def delta_pole(nu, k, l, series): 2 | """ 3 | Returns the pole of a meromorphic global conformal block given by the 4 | parameters in arXiv:1406.4858 by Kos, Poland and Simmons-Duffin. 5 | 6 | Parameters 7 | ---------- 8 | nu: `(d - 2) / 2` where d is the spatial dimension. 9 | k: The parameter k indexing the various poles. As described in 10 | arXiv:1406.4858, it may be any positive integer unless `series` is 3. 11 | l: The spin. 12 | series: The parameter i describing the three types of poles in arXiv:1406.4858. 
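For a concrete sense of these pole families, here is a float-precision sketch (the name `delta_pole_sketch` is illustrative; the actual function returns an `eval_mpfr` value at the working precision):

```python
# Float-precision sketch of the three pole families from arXiv:1406.4858.
def delta_pole_sketch(nu, k, l, series):
    if series == 1:
        return 1 - l - k
    elif series == 2:
        return 1 + nu - k
    else:
        return 1 + l + 2 * nu - k

# In d = 3 (nu = 0.5), the first series-2 pole for spin 2:
print(delta_pole_sketch(0.5, 1, 2, 2))  # 0.5
```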
13 | """ 14 | if series == 1: 15 | pole = 1 - l - k 16 | elif series == 2: 17 | pole = 1 + nu - k 18 | else: 19 | pole = 1 + l + 2 * nu - k 20 | 21 | return eval_mpfr(pole, prec) 22 | 23 | def delta_residue(nu, k, l, delta_12, delta_34, series): 24 | """ 25 | Returns the residue of a meromorphic global conformal block at a particular 26 | pole in `delta`. These residues were found by Kos, Poland and Simmons-Duffin 27 | in arXiv:1406.4858. 28 | 29 | Parameters 30 | ---------- 31 | nu: `(d - 2) / 2` where d is the spatial dimension. This must not be 32 | an integer. 33 | k: The parameter k indexing the various poles. As described in 34 | arXiv:1406.4858, it may be any positive integer unless `series` 35 | is 3. 36 | l: The spin. 37 | delta_12: The difference between the external scaling dimensions of operator 38 | 1 and operator 2. 39 | delta_34: The difference between the external scaling dimensions of operator 40 | 3 and operator 4. 41 | series: The parameter i describing the three types of poles in 42 | arXiv:1406.4858. 
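As an illustration of the series-1 formula used below, here is a self-contained float-precision sketch (`rf` is a plain rising factorial standing in for the arbitrary-precision version; the function name is illustrative and covers only generic `nu` with `l > 0`):

```python
from math import factorial

def rf(x, n):
    # Rising factorial (Pochhammer symbol): x (x + 1) ... (x + n - 1)
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def series1_residue(nu, k, l, d12, d34):
    # Float sketch of the series-1 residue for generic nu and l > 0
    return (-(k * (-4) ** k) / factorial(k) ** 2
            * rf((1 - k + d12) / 2, k) * rf((1 - k + d34) / 2, k)
            * rf(l + 2 * nu, k) / rf(l + nu, k))

# For identical external scalars, the odd-k series-1 residues vanish:
print(series1_residue(0.5, 1, 2, 0, 0))  # 0.0
```

The vanishing happens because `rf((1 - k) / 2, k)` contains a zero factor whenever `k` is odd, which is exactly the time-saving special case checked at the top of `delta_residue`.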
43 | """ 44 | # Time saving special case 45 | if series != 2 and k % 2 != 0 and delta_12 == 0 and delta_34 == 0: 46 | return 0 47 | 48 | if series == 1: 49 | ret = - ((k * (-4) ** k) / (factorial(k) ** 2)) * rf((1 - k + delta_12) / two, k) * rf((1 - k + delta_34) / two, k) 50 | if ret == 0: 51 | return ret 52 | elif l == 0 and nu == 0: 53 | # Take l to 0, then nu 54 | return ret * 2 55 | else: 56 | return ret * (rf(l + 2 * nu, k) / rf(l + nu, k)) 57 | elif series == 2: 58 | factors = [l + nu + 1 - delta_12, l + nu + 1 + delta_12, l + nu + 1 - delta_34, l + nu + 1 + delta_34] 59 | ret = ((k * rf(nu + 1, k - 1)) / (factorial(k) ** 2)) * ((l + nu - k) / (l + nu + k)) 60 | ret *= rf(-nu, k + 1) / ((rf((l + nu - k + 1) / 2, k) * rf((l + nu - k) / 2, k)) ** 2) 61 | 62 | for f in factors: 63 | ret *= rf((f - k) / two, k) 64 | return ret 65 | else: 66 | return - ((k * (-4) ** k) / (factorial(k) ** 2)) * (rf(1 + l - k, k) * rf((1 - k + delta_12) / two, k) * rf((1 - k + delta_34) / two, k) / rf(1 + nu + l - k, k)) 67 | 68 | class LeadingBlockVector: 69 | def __init__(self, dim, l, m_max, n_max, delta_12, delta_34): 70 | self.spin = l 71 | self.m_max = m_max 72 | self.n_max = n_max 73 | self.chunks = [] 74 | 75 | r = Symbol('r') 76 | eta = Symbol('eta') 77 | nu = (dim / Integer(2)) - 1 78 | derivative_order = m_max + 2 * n_max 79 | 80 | # With only a derivatives, we never need eta derivatives 81 | off_diag_order = derivative_order 82 | if n_max == 0: 83 | off_diag_order = 0 84 | 85 | # We cache derivatives as we go 86 | # This is because csympy can only compute them one at a time, but it's faster anyway 87 | old_expression = self.leading_block(nu, r, eta, l, delta_12, delta_34) 88 | 89 | for n in range(0, off_diag_order + 1): 90 | chunk = [] 91 | for m in range(0, derivative_order - n + 1): 92 | if n == 0 and m == 0: 93 | expression = old_expression 94 | elif m == 0: 95 | old_expression = old_expression.diff(eta) 96 | expression = old_expression 97 | else: 98 | expression = 
expression.diff(r) 99 | 100 | chunk.append(expression.subs({r : r_cross, eta : 1})) 101 | self.chunks.append(DenseMatrix(len(chunk), 1, chunk)) 102 | 103 | def leading_block(self, nu, r, eta, l, delta_12, delta_34): 104 | if self.n_max == 0: 105 | ret = 1 106 | elif nu == 0: 107 | ret = sympy.chebyshevt(l, eta) 108 | else: 109 | ret = factorial(l) * sympy.gegenbauer(l, nu, eta) / rf(2 * nu, l) 110 | 111 | # Time saving special case 112 | if delta_12 == delta_34: 113 | return ((-1) ** l) * ret / (((1 - r ** 2) ** nu) * sqrt((1 + r ** 2) ** 2 - 4 * (r * eta) ** 2)) 114 | else: 115 | return ((-1) ** l) * ret / (((1 - r ** 2) ** nu) * ((1 + r ** 2 + 2 * r * eta) ** ((one + delta_12 - delta_34) / two)) * ((1 + r ** 2 - 2 * r * eta) ** ((one - delta_12 + delta_34) / two))) 116 | 117 | class MeromorphicBlockVector: 118 | def __init__(self, leading_block): 119 | # A chunk is a set of r derivatives for one eta derivative 120 | # The matrix that should multiply a chunk is just R restricted to the right length 121 | self.chunks = [] 122 | 123 | for j in range(0, len(leading_block.chunks)): 124 | rows = leading_block.chunks[j].nrows() 125 | self.chunks.append(DenseMatrix(rows, 1, [0] * rows)) 126 | for n in range(0, rows): 127 | self.chunks[j].set(n, 0, leading_block.chunks[j].get(n, 0)) 128 | 129 | class ConformalBlockVector: 130 | def __init__(self, dim, l, delta_12, delta_34, derivative_order, kept_pole_order, s_matrix, leading_block, pol_list, res_list): 131 | self.large_poles = [] 132 | self.small_poles = [] 133 | self.chunks = [] 134 | 135 | nu = (dim / Integer(2)) - 1 136 | old_list = MeromorphicBlockVector(leading_block) 137 | for k in range(0, len(pol_list)): 138 | max_component = 0 139 | for j in range(0, len(leading_block.chunks)): 140 | for n in range(0, leading_block.chunks[j].nrows()): 141 | max_component = max(max_component, abs(float(res_list[k].chunks[j].get(n, 0)))) 142 | 143 | pole = delta_pole(nu, pol_list[k][1], l, pol_list[k][3]) 144 | if max_component < 
cutoff: 145 | self.small_poles.append(pole) 146 | else: 147 | self.large_poles.append(pole) 148 | 149 | matrix = [] 150 | if self.small_poles != []: 151 | for i in range(0, len(self.large_poles) // 2): 152 | for j in range(0, len(self.large_poles)): 153 | matrix.append(1 / ((cutoff + unitarity_bound(dim, l) - self.large_poles[j]) ** (i + 1))) 154 | for i in range(0, len(self.large_poles) - (len(self.large_poles) // 2)): 155 | for j in range(0, len(self.large_poles)): 156 | matrix.append(1 / (((1 / cutoff) - self.large_poles[j]) ** (i + 1))) 157 | matrix = DenseMatrix(len(self.large_poles), len(self.large_poles), matrix) 158 | 159 | for j in range(0, len(leading_block.chunks)): 160 | self.chunks.append(leading_block.chunks[j]) 161 | 162 | for p in self.small_poles: 163 | vector = [] 164 | for i in range(0, len(self.large_poles) // 2): 165 | vector.append(1 / ((unitarity_bound(dim, l) - p) ** (i + 1))) 166 | for i in range(0, len(self.large_poles) - (len(self.large_poles) // 2)): 167 | vector.append(1 / (((1 / cutoff) - p) ** (i + 1))) 168 | vector = DenseMatrix(len(self.large_poles), 1, vector) 169 | vector = matrix.solve(vector) 170 | 171 | k1 = self.get_pole_index(nu, l, pol_list, p) 172 | for i in range(0, len(self.large_poles)): 173 | k2 = self.get_pole_index(nu, l, pol_list, self.large_poles[i]) 174 | for j in range(0, len(self.chunks)): 175 | res_list[k2].chunks[j] = res_list[k2].chunks[j].add_matrix(res_list[k1].chunks[j].mul_scalar(vector.get(i, 0))) 176 | 177 | prod = 1 178 | for p in self.large_poles: 179 | k = self.get_pole_index(nu, l, pol_list, p) 180 | for j in range(0, len(self.chunks)): 181 | self.chunks[j] = self.chunks[j].mul_scalar(delta - p).add_matrix(res_list[k].chunks[j].mul_scalar(prod)) 182 | for i in range(0, self.chunks[j].nrows()): 183 | self.chunks[j].set(i, 0, self.chunks[j].get(i, 0).expand()) 184 | prod *= delta - p 185 | prod = prod.expand() 186 | 187 | for j in range(0, len(self.chunks)): 188 | s_sub = s_matrix[0:derivative_order - 
j + 1, 0:derivative_order - j + 1] 189 | self.chunks[j] = s_sub.mul_matrix(self.chunks[j]) 190 | 191 | def get_pole_index(self, nu, l, pol_list, p): 192 | for k in range(0, len(pol_list)): 193 | pole = delta_pole(nu, pol_list[k][1], l, pol_list[k][3]) 194 | if abs(float(pole - p)) < tiny: 195 | return k 196 | return -1 197 | 198 | class ConformalBlockTableSeed: 199 | """ 200 | A class which calculates tables of conformal block derivatives from scratch 201 | using the recursion relations with meromorphic versions of the blocks. 202 | Usually, it will not be necessary for the user to call it. Instead, 203 | `ConformalBlockTable` calls it automatically for `m_max = 3` and `n_max = 0`. 204 | For people wanting to call it with different values of `m_max` and `n_max`, 205 | the parameters and attributes are the same as those of `ConformalBlockTable`. 206 | It also supports the `dump` method. 207 | """ 208 | def __init__(self, dim, k_max, l_max, m_max, n_max, delta_12 = 0, delta_34 = 0, odd_spins = False): 209 | self.dim = dim 210 | self.k_max = k_max 211 | self.l_max = l_max 212 | self.m_max = m_max 213 | self.n_max = n_max 214 | self.delta_12 = delta_12 215 | self.delta_34 = delta_34 216 | self.odd_spins = odd_spins 217 | self.m_order = [] 218 | self.n_order = [] 219 | self.table = [] 220 | 221 | if odd_spins: 222 | step = 1 223 | else: 224 | step = 2 225 | 226 | derivative_order = m_max + 2 * n_max 227 | nu = (dim / Integer(2)) - 1 228 | 229 | # The matrix for how derivatives are affected when one multiplies by r 230 | r_powers = [] 231 | identity = [0] * ((derivative_order + 1) ** 2) 232 | lower_band = [0] * ((derivative_order + 1) ** 2) 233 | 234 | for i in range(0, derivative_order + 1): 235 | identity[i * (derivative_order + 1) + i] = 1 236 | for i in range(1, derivative_order + 1): 237 | lower_band[i * (derivative_order + 1) + i - 1] = i 238 | 239 | identity = DenseMatrix(derivative_order + 1, derivative_order + 1, identity) 240 | lower_band = 
DenseMatrix(derivative_order + 1, derivative_order + 1, lower_band) 241 | r_matrix = identity.mul_scalar(r_cross).add_matrix(lower_band) 242 | r_powers.append(identity) 243 | r_powers.append(r_matrix) 244 | 245 | conformal_blocks = [] 246 | leading_blocks = [] 247 | pol_list = [] 248 | res_list = [] 249 | pow_list = [] 250 | new_res_list = [] 251 | new_pow_list = [] 252 | 253 | # Find out which residues we will ever need to include 254 | for l in range(0, l_max + k_max + 1): 255 | lb = LeadingBlockVector(dim, l, m_max, n_max, delta_12, delta_34) 256 | leading_blocks.append(lb) 257 | current_pol_list = [] 258 | 259 | for k in range(1, k_max + 1): 260 | if l + k <= l_max + k_max: 261 | if delta_residue(nu, k, l, delta_12, delta_34, 1) != 0: 262 | current_pol_list.append((k, k, l + k, 1)) 263 | 264 | if k % 2 == 0: 265 | if delta_residue(nu, k // 2, l, delta_12, delta_34, 2) != 0: 266 | current_pol_list.append((k, k // 2, l, 2)) 267 | 268 | if k <= l: 269 | if delta_residue(nu, k, l, delta_12, delta_34, 3) != 0: 270 | current_pol_list.append((k, k, l - k, 3)) 271 | 272 | if l == 0: 273 | r_powers.append(r_powers[k].mul_matrix(r_powers[1])) 274 | 275 | # These are in the format (n, k, l, series) 276 | pol_list.append(current_pol_list) 277 | res_list.append([]) 278 | pow_list.append([]) 279 | new_res_list.append([]) 280 | new_pow_list.append([]) 281 | 282 | # Initialize the residues at the appropriate leading blocks 283 | for l in range(0, l_max + k_max + 1): 284 | for i in range(0, len(pol_list[l])): 285 | l_new = pol_list[l][i][2] 286 | res_list[l].append(MeromorphicBlockVector(leading_blocks[l_new])) 287 | pow_list[l].append(0) 288 | 289 | new_pow_list[l].append(pol_list[l][i][0]) 290 | new_res_list[l].append(0) 291 | 292 | for k in range(1, k_max + 1): 293 | for l in range(0, l_max + k_max + 1): 294 | for i in range(0, len(res_list[l])): 295 | if pow_list[l][i] >= k_max: 296 | continue 297 | 298 | res = delta_residue(nu, pol_list[l][i][1], l, delta_12, delta_34, 
pol_list[l][i][3]) 299 | pow_list[l][i] = new_pow_list[l][i] 300 | 301 | for j in range(0, len(res_list[l][i].chunks)): 302 | r_sub = r_powers[pol_list[l][i][0]][0:derivative_order - j + 1, 0:derivative_order - j + 1] 303 | res_list[l][i].chunks[j] = r_sub.mul_matrix(res_list[l][i].chunks[j]).mul_scalar(res) 304 | 305 | for l in range(0, l_max + k_max + 1): 306 | for i in range(0, len(res_list[l])): 307 | if pow_list[l][i] >= k_max: 308 | continue 309 | 310 | new_pow = k_max 311 | l_new = pol_list[l][i][2] 312 | new_res_list[l][i] = MeromorphicBlockVector(leading_blocks[l_new]) 313 | pole1 = delta_pole(nu, pol_list[l][i][1], l, pol_list[l][i][3]) + pol_list[l][i][0] 314 | 315 | for i_new in range(0, len(res_list[l_new])): 316 | new_pow = min(new_pow, pol_list[l_new][i_new][0]) 317 | pole2 = delta_pole(nu, pol_list[l_new][i_new][1], l_new, pol_list[l_new][i_new][3]) 318 | 319 | for j in range(0, len(new_res_list[l][i].chunks)): 320 | new_res_list[l][i].chunks[j] = new_res_list[l][i].chunks[j].add_matrix(res_list[l_new][i_new].chunks[j].mul_scalar(1 / eval_mpfr(pole1 - pole2, prec))) 321 | 322 | new_pow_list[l][i] = pow_list[l][i] + new_pow 323 | 324 | for l in range(0, l_max + k_max + 1): 325 | for i in range(0, len(res_list[l])): 326 | if pow_list[l][i] >= k_max: 327 | continue 328 | 329 | for j in range(0, len(res_list[l][i].chunks)): 330 | res_list[l][i].chunks[j] = new_res_list[l][i].chunks[j] 331 | 332 | # Perhaps poorly named, S keeps track of a linear combination of derivatives 333 | # We get this by including the essential singularity, then stripping it off again 334 | s_matrix = DenseMatrix(derivative_order + 1, derivative_order + 1, [0] * ((derivative_order + 1) ** 2)) 335 | for i in range(0, derivative_order + 1): 336 | new_element = 1 337 | for j in range(i, -1, -1): 338 | s_matrix.set(i, j, new_element) 339 | new_element *= (j / ((i - j + 1) * r_cross)) * (delta - (i - j)) 340 | 341 | for l in range(0, l_max + 1, step): 342 | conformal_block = 
ConformalBlockVector(dim, l, delta_12, delta_34, m_max + 2 * n_max, k_max, s_matrix, leading_blocks[l], pol_list[l], res_list[l]) 343 | conformal_blocks.append(conformal_block) 344 | self.table.append(PolynomialVector([], [l, 0], conformal_block.large_poles)) 345 | 346 | (rules1, rules2, self.m_order, self.n_order) = rules(m_max, n_max) 347 | # If b is always 0, then eta is always 1 348 | if n_max == 0: 349 | chain_rule_single(self.m_order, rules1, self.table, conformal_blocks, lambda l, i: conformal_blocks[l].chunks[0].get(i, 0)) 350 | else: 351 | chain_rule_double(self.m_order, self.n_order, rules1, rules2, self.table, conformal_blocks) 352 | 353 | def dump(self, name, form = None): 354 | if form == "juliboots": 355 | juliboots_write(self, name) 356 | elif form == "scalar_blocks": 357 | scalar_blocks_write(self, name) 358 | else: 359 | dump_table_contents(self, name) 360 | -------------------------------------------------------------------------------- /blocks2.py: -------------------------------------------------------------------------------- 1 | def convert_table(tab_short, tab_long): 2 | """ 3 | Converts a table with few poles into an equivalent table with many poles. 4 | When tables produced by different methods fail to look the same, it is often 5 | because their polynomials are being multiplied by different positive 6 | prefactors. This adjusts the prefactors so that they are the same. 7 | 8 | Parameters 9 | ---------- 10 | tab_short: A `ConformalBlockTable` where the blocks have a certain number of 11 | poles which is hopefully optimal. 12 | tab_long: A `ConformalBlockTable` with all of the poles that `tab_short` has 13 | plus more. 
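The pole-matching bookkeeping in the loop below can be sketched with plain floats (`match_poles` and its explicit tolerance are illustrative; the real code compares poles with `get_index_approx`):

```python
def match_poles(short_poles, long_poles, tol=1e-10):
    # Return the poles of the long table that the short table is missing,
    # i.e. the (delta - p) factors its polynomials must be multiplied by.
    remaining = list(short_poles)
    missing = []
    for p in long_poles:
        hits = [q for q in remaining if abs(q - p) < tol]
        if hits:
            # As in the real loop, a matched pole is cancelled only once,
            # so repeated poles are handled correctly.
            remaining.remove(hits[0])
        else:
            missing.append(p)
    return missing

print(match_poles([-1.0, -2.0], [-1.0, -2.0, -3.0]))  # [-3.0]
```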
14 | """ 15 | for l in range(0, len(tab_short.table)): 16 | pole_prod = 1 17 | small_list = tab_short.table[l].poles[:] 18 | 19 | for p in tab_long.table[l].poles: 20 | index = get_index_approx(small_list, p) 21 | 22 | if index == -1: 23 | pole_prod *= delta - p 24 | tab_short.table[l].poles.append(p) 25 | else: 26 | small_list.remove(small_list[index]) 27 | 28 | for n in range(0, len(tab_short.table[l].vector)): 29 | tab_short.table[l].vector[n] = tab_short.table[l].vector[n] * pole_prod 30 | tab_short.table[l].vector[n] = tab_short.table[l].vector[n].expand() 31 | 32 | def cancel_poles(polynomial_vector): 33 | """ 34 | Checks which roots of a conformal block denominator are also roots of the 35 | numerator. Whenever one is found, a simple factoring is applied. 36 | 37 | Parameters 38 | ---------- 39 | polynomial_vector: The `PolynomialVector` that will be modified in place if 40 | it has superfluous poles. 41 | """ 42 | poles = [] 43 | zero_poles = [] 44 | for p in polynomial_vector.poles: 45 | if abs(float(p)) > tiny: 46 | poles.append(p) 47 | else: 48 | zero_poles.append(p) 49 | poles = zero_poles + poles 50 | 51 | for p in poles: 52 | # We should really make sure the pole is a root of all numerators 53 | # However, this is automatic if it is a root before differentiating 54 | if abs(polynomial_vector.vector[0].subs(delta, p)) < tiny: 55 | polynomial_vector.poles.remove(p) 56 | 57 | # A factoring algorithm which works if the zeros are first 58 | for n in range(0, len(polynomial_vector.vector)): 59 | coeffs = coefficients(polynomial_vector.vector[n]) 60 | if abs(p) > tiny: 61 | new_coeffs = [coeffs[0] / eval_mpfr(-p, prec)] 62 | for i in range(1, len(coeffs) - 1): 63 | new_coeffs.append((new_coeffs[i - 1] - coeffs[i]) / eval_mpfr(p, prec)) 64 | else: 65 | coeffs.remove(coeffs[0]) 66 | new_coeffs = coeffs 67 | 68 | polynomial_vector.vector[n] = build_polynomial(new_coeffs) 69 | 70 | class ConformalBlockTableSeed2: 71 | """ 72 | A class which calculates tables of 
conformal block derivatives from scratch 73 | using a power series solution of their fourth order differential equation. 74 | Usually, it will not be necessary for the user to call it. Instead, 75 | `ConformalBlockTable` calls it automatically for `m_max = 3`. Note that there 76 | is no `n_max` for this method. 77 | """ 78 | def __init__(self, dim, k_max, l_max, m_max, delta_12 = 0, delta_34 = 0, odd_spins = False): 79 | self.dim = dim 80 | self.k_max = k_max 81 | self.l_max = l_max 82 | self.m_max = m_max 83 | self.delta_12 = delta_12 84 | self.delta_34 = delta_34 85 | self.odd_spins = odd_spins 86 | self.m_order = [] 87 | self.n_order = [] 88 | self.table = [] 89 | 90 | if odd_spins: 91 | step = 1 92 | else: 93 | step = 2 94 | 95 | pole_set = [] 96 | conformal_blocks = [] 97 | nu = eval_mpfr((dim / Integer(2)) - 1, prec) 98 | c_2 = (ell * (ell + 2 * nu) + delta * (delta - 2 * nu - 2)) / 2 99 | c_4 = ell * (ell + 2 * nu) * (delta - 1) * (delta - 2 * nu - 1) 100 | delta_prod = delta_12 * delta_34 / (eval_mpfr(-2, prec)) 101 | delta_sum = (delta_12 - delta_34) / (eval_mpfr(-2, prec)) 102 | if delta_12 == 0 and delta_34 == 0: 103 | effective_power = 2 104 | else: 105 | effective_power = 1 106 | 107 | for l in range(0, l_max + 1, step): 108 | poles = [] 109 | for k in range(effective_power, k_max + 1, effective_power): 110 | poles.append(eval_mpfr(1 - k - l, prec)) 111 | poles.append((2 + 2 * nu - k) / eval_mpfr(2, prec)) 112 | poles.append(1 - k + l + 2 * nu) 113 | pole_set.append(poles) 114 | 115 | l = 0 116 | while l <= l_max and effective_power == 1: 117 | frob_coeffs = [1] 118 | conformal_blocks.append([]) 119 | self.table.append(PolynomialVector([], [l, 0], pole_set[l // step])) 120 | 121 | for k in range(1, k_max + 1): 122 | # A good check is to force this code to run for identical scalars too 123 | # This should produce the same blocks as the shorter recursion coming up 124 | recursion_coeffs = [0, 0, 0, 0, 0, 0, 0] 125 | recursion_coeffs[0] += 2 * c_2 * (2 * 
nu + 1) * (4 * delta_sum + 1) - c_4 + 8 * delta_prod * nu * (2 * nu + 1) 126 | recursion_coeffs[0] -= 2 * (delta + k - 1) * (c_2 * (2 * nu + 1) + 2 * delta_prod * (6 * nu - 1) + 8 * delta_sum * (c_2 + nu - 2 * nu * nu)) 127 | recursion_coeffs[0] += 2 * (delta + k - 1) * (delta + k - 2) * (c_2 + nu - 2 * nu * nu + 4 * delta_prod + 12 * delta_sum * (1 - 2 * nu)) 128 | recursion_coeffs[0] += 2 * (delta + k - 1) * (delta + k - 2) * (delta + k - 3) * (2 * nu - 1 + 8 * delta_sum) 129 | recursion_coeffs[0] -= 1 * (delta + k - 1) * (delta + k - 2) * (delta + k - 3) * (delta + k - 4) 130 | recursion_coeffs[1] += 3 * c_4 + 2 * c_2 * (4 * delta_sum * (4 * delta_sum + 2 * nu + 1) + 2 * nu - 3) - 8 * delta_prod * (2 * delta_sum * (1 - 6 * nu) + 6 * nu * nu - 5 * nu) 131 | recursion_coeffs[1] -= 2 * (delta + k - 2) * (2 * delta_prod * (16 * delta_sum - 10 * nu + 4) + 8 * delta_sum * (c_2 + nu - 2 * nu * nu) + (1 - 2 * nu) * (c_2 + 2 * nu - 2 + 32 * delta_sum * delta_sum)) 132 | recursion_coeffs[1] -= 2 * (delta + k - 2) * (delta + k - 3) * (3 * c_2 + 7 * nu + 2 * nu * nu - 10 + 4 * delta_prod + 4 * delta_sum * (10 * delta_sum + 6 * nu - 3)) 133 | recursion_coeffs[1] += 2 * (delta + k - 2) * (delta + k - 3) * (delta + k - 4) * (7 - 2 * nu + 8 * delta_sum) 134 | recursion_coeffs[1] += 3 * (delta + k - 2) * (delta + k - 3) * (delta + k - 4) * (delta + k - 5) 135 | recursion_coeffs[2] += 3 * c_4 + 2 * c_2 * (16 * delta_sum * delta_sum + 2 * nu - 3) + 16 * delta_prod * delta_sum * (8 * delta_sum + 2 * nu + 5) 136 | recursion_coeffs[2] -= 2 * (delta + k - 3) * ((1 - 2 * nu) * (c_2 + 2 * nu - 2 + 32 * delta_sum * delta_sum - 4 * delta_prod) - 8 * delta_sum * (8 * delta_sum * delta_sum + 4 * delta_prod + 4 * nu * nu + 2 * c_2 - 5)) 137 | recursion_coeffs[2] -= 2 * (delta + k - 3) * (delta + k - 4) * (8 * delta_prod + 40 * delta_sum * delta_sum + 48 * delta_sum + 3 * c_2 + 2 * nu * nu + 7 * nu - 10) 138 | recursion_coeffs[2] += 2 * (delta + k - 3) * (delta + k - 4) * (delta + k - 5) * (7 
- 2 * nu - 16 * delta_sum) 139 | recursion_coeffs[2] += 3 * (delta + k - 3) * (delta + k - 4) * (delta + k - 5) * (delta + k - 6) 140 | recursion_coeffs[3] -= 3 * c_4 + 2 * c_2 * (16 * delta_sum * delta_sum + 2 * nu - 3) + 16 * delta_prod * delta_sum * (8 * delta_sum + 2 * nu + 5) 141 | recursion_coeffs[3] -= 2 * (delta + k - 4) * (12 + 4 * nu - 8 * nu * nu - c_2 * (2 * nu + 5) - 8 * delta_sum * (8 * delta_sum * delta_sum + 8 * delta_sum * nu + 6 * delta_sum + 4 * nu * nu + 2 * c_2 - 5) + 4 * delta_prod * (2 * nu - 5 - 8 * delta_sum)) 142 | recursion_coeffs[3] -= 2 * (delta + k - 4) * (delta + k - 5) * (3 * c_2 + 2 * nu * nu + 7 * nu - 10 + 8 * delta_prod + delta_sum * (40 * delta_sum - 18 * nu + 21)) 143 | recursion_coeffs[3] -= 2 * (delta + k - 4) * (delta + k - 5) * (delta + k - 6) * (16 * delta_sum + 2 * nu + 11) 144 | recursion_coeffs[3] -= 3 * (delta + k - 4) * (delta + k - 5) * (delta + k - 6) * (delta + k - 7) 145 | recursion_coeffs[4] -= 3 * c_4 + 2 * c_2 * (4 * delta_sum * (4 * delta_sum + 2 * nu + 1) + 2 * nu - 3) - 8 * delta_prod * (2 * delta_sum * (1 - 6 * nu) + 6 * nu * nu - 5 * nu) 146 | recursion_coeffs[4] -= 2 * (delta + k - 5) * (12 + 4 * nu - 8 * nu * nu - c_2 * (2 * nu + 5 - 8 * delta_sum) + 2 * delta_prod * (3 - 10 * nu + 16 * delta_sum) - 8 * delta_sum * (2 * nu * nu + 5 * nu + 3 + 8 * delta_sum * nu + 6 * delta_sum)) 147 | recursion_coeffs[4] -= 2 * (delta + k - 5) * (delta + k - 6) * (22 + 5 * nu - 2 * nu * nu - 3 * c_2 - 4 * delta_prod - 4 * delta_sum * (10 * delta_sum + 6 * nu + 9)) 148 | recursion_coeffs[4] += 2 * (delta + k - 5) * (delta + k - 6) * (delta + k - 7) * (8 * delta_sum - 2 * nu - 11) 149 | recursion_coeffs[4] -= 3 * (delta + k - 5) * (delta + k - 6) * (delta + k - 7) * (delta + k - 8) 150 | recursion_coeffs[5] -= 2 * c_2 * (2 * nu + 1) * (4 * delta_sum + 1) - c_4 + 8 * delta_prod * nu * (2 * nu + 1) 151 | recursion_coeffs[5] -= 2 * (delta + k - 6) * ((2 * nu + 3) * (c_2 - 2 * nu - 2) + 6 * delta_prod * (2 * nu + 1) + 8 * 
delta_sum * (c_2 - 2 * nu * nu - 5 * nu - 3)) 152 | recursion_coeffs[5] -= 2 * (delta + k - 6) * (delta + k - 7) * (c_2 + 4 * delta_prod - (2 * nu + 3) * (nu + 4 + 12 * delta_sum)) 153 | recursion_coeffs[5] += 2 * (delta + k - 6) * (delta + k - 7) * (delta + k - 8) * (2 * nu + 5 + 8 * delta_sum) 154 | recursion_coeffs[5] += 1 * (delta + k - 6) * (delta + k - 7) * (delta + k - 8) * (delta + k - 9) 155 | recursion_coeffs[6] = (k + 2 * nu - 5) * (2 * delta + k - 7) * (delta + k - l - 6) * (delta + k + l + 2 * nu - 6) 156 | recursion_coeffs[5] = recursion_coeffs[5].subs(ell, l) 157 | recursion_coeffs[4] = recursion_coeffs[4].subs(ell, l) 158 | recursion_coeffs[3] = recursion_coeffs[3].subs(ell, l) 159 | recursion_coeffs[2] = recursion_coeffs[2].subs(ell, l) 160 | recursion_coeffs[1] = recursion_coeffs[1].subs(ell, l) 161 | recursion_coeffs[0] = recursion_coeffs[0].subs(ell, l) 162 | 163 | pole_prod = 1 164 | frob_coeffs.append(0) 165 | for i in range(0, min(k, 7)): 166 | frob_coeffs[k] += recursion_coeffs[i] * pole_prod * frob_coeffs[k - i - 1] / eval_mpfr(2 * k, prec) 167 | frob_coeffs[k] = frob_coeffs[k].expand() 168 | if i + 1 < min(k, 7): 169 | pole_prod *= (delta - pole_set[l // step][3 * (k - i - 2)]) * (delta - pole_set[l // step][3 * (k - i - 2) + 1]) * (delta - pole_set[l // step][3 * (k - i - 2) + 2]) 170 | 171 | # We have solved for the Frobenius coefficients times products of poles 172 | # Fix them so that they all carry the same product 173 | pole_prod = 1 174 | for k in range(k_max, -1, -1): 175 | frob_coeffs[k] *= pole_prod 176 | frob_coeffs[k] = frob_coeffs[k].expand() 177 | if k > 0: 178 | pole_prod *= (delta - pole_set[l // step][3 * k - 1]) * (delta - pole_set[l // step][3 * k - 2]) * (delta - pole_set[l // step][3 * k - 3]) 179 | 180 | conformal_blocks[l // step] = [0] * (m_max + 1) 181 | for k in range(0, k_max + 1): 182 | prod = 1 183 | for m in range(0, m_max + 1): 184 | conformal_blocks[l // step][m] += prod * frob_coeffs[k] * (r_cross ** (k - 
m)) 185 | conformal_blocks[l // step][m] = conformal_blocks[l // step][m].expand() 186 | prod *= (delta + k - m) 187 | l += step 188 | 189 | l = 0 190 | while l <= l_max and effective_power == 2: 191 | frob_coeffs = [1] 192 | conformal_blocks.append([]) 193 | self.table.append(PolynomialVector([], [l, 0], pole_set[l // step])) 194 | 195 | for k in range(2, k_max + 1, 2): 196 | recursion_coeffs = [0, 0, 0] 197 | recursion_coeffs[0] += 3 * c_4 + 2 * c_2 * (2 * nu - 3) 198 | recursion_coeffs[0] += 2 * (delta + k - 2) * (2 * nu - 1) * (c_2 + 2 * nu - 2) 199 | recursion_coeffs[0] += 2 * (delta + k - 2) * (delta + k - 3) * (10 - 7 * nu - 2 * nu * nu - 3 * c_2) 200 | recursion_coeffs[0] += 2 * (delta + k - 2) * (delta + k - 3) * (delta + k - 4) * (7 - 2 * nu) 201 | recursion_coeffs[0] += 3 * (delta + k - 2) * (delta + k - 3) * (delta + k - 4) * (delta + k - 5) 202 | recursion_coeffs[1] += 2 * c_2 * (3 - 2 * nu) - 3 * c_4 203 | recursion_coeffs[1] += 2 * (delta + k - 4) * (c_2 * (2 * nu + 5) + 8 * nu * nu - 4 * nu - 12) 204 | recursion_coeffs[1] += 2 * (delta + k - 4) * (delta + k - 5) * (3 * c_2 + 2 * nu * nu - 5 * nu - 22) 205 | recursion_coeffs[1] -= 2 * (delta + k - 4) * (delta + k - 5) * (delta + k - 6) * (2 * nu + 11) 206 | recursion_coeffs[1] -= 3 * (delta + k - 4) * (delta + k - 5) * (delta + k - 6) * (delta + k - 7) 207 | recursion_coeffs[2] = (k + 2 * nu - 4) * (2 * delta + k - 6) * (delta + k - l - 5) * (delta + k + l + 2 * nu - 5) 208 | recursion_coeffs[1] = recursion_coeffs[1].subs(ell, l) 209 | recursion_coeffs[0] = recursion_coeffs[0].subs(ell, l) 210 | 211 | pole_prod = 1 212 | frob_coeffs.append(0) 213 | for i in range(0, min(k // 2, 3)): 214 | frob_coeffs[k // 2] += recursion_coeffs[i] * pole_prod * frob_coeffs[(k // 2) - i - 1] / eval_mpfr(2 * k, prec) 215 | frob_coeffs[k // 2] = frob_coeffs[k // 2].expand() 216 | if i + 1 < min(k // 2, 3): 217 | pole_prod *= (delta - pole_set[l // step][3 * ((k // 2) - i - 2)]) * (delta - pole_set[l // step][3 * ((k // 2) - i - 
2) + 1]) * (delta - pole_set[l // step][3 * ((k // 2) - i - 2) + 2]) 218 | 219 | pole_prod = 1 220 | for k in range(k_max // 2, -1, -1): 221 | frob_coeffs[k] *= pole_prod 222 | frob_coeffs[k] = frob_coeffs[k].expand() 223 | if k > 0: 224 | pole_prod *= (delta - pole_set[l // step][3 * k - 1]) * (delta - pole_set[l // step][3 * k - 2]) * (delta - pole_set[l // step][3 * k - 3]) 225 | 226 | conformal_blocks[l // step] = [0] * (m_max + 1) 227 | for k in range(0, (k_max // 2) + 1): 228 | prod = 1 229 | for m in range(0, m_max + 1): 230 | conformal_blocks[l // step][m] += prod * frob_coeffs[k] * (r_cross ** (2 * k - m)) 231 | conformal_blocks[l // step][m] = conformal_blocks[l // step][m].expand() 232 | prod *= (delta + 2 * k - m) 233 | l += step 234 | 235 | (rules1, rules2, self.m_order, self.n_order) = rules(m_max, 0) 236 | chain_rule_single(self.m_order, rules1, self.table, conformal_blocks, lambda l, i: conformal_blocks[l][i]) 237 | 238 | # Find the superfluous poles (including possible triple poles) to cancel 239 | for l in range(0, len(self.table)): 240 | cancel_poles(self.table[l]) 241 | -------------------------------------------------------------------------------- /bootstrap.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | """ 3 | PyCFTBoot is an interface for the numerical bootstrap in arbitrary dimension, 4 | a field that was initiated in 2008 by Rattazzi, Rychkov, Tonni and Vichi in 5 | arXiv:0807.0004. Starting from the analytic structure of conformal blocks, the 6 | code formulates semidefinite programs without any proprietary software. The 7 | actual optimization step must be performed by David Simmons-Duffin's program 8 | SDPB available at https://github.com/davidsd/sdpb. 9 | 10 | PyCFTBoot may be used to find bounds on OPE coefficients and allowed regions in 11 | the space of scaling dimensions for various CFT operators. 
All operators used in 12 | the explicit correlators must be scalars, but they may have different scaling 13 | dimensions and transform in arbitrary representations of a global symmetry. 14 | """ 15 | from __future__ import print_function 16 | import xml.dom.minidom 17 | import multiprocessing 18 | import subprocess 19 | import itertools 20 | import zipfile 21 | import json 22 | import time 23 | import re 24 | import os 25 | 26 | # Regular sympy is slow but we only use it for quick access to Gegenbauer polynomials 27 | # Even this could be removed since our conformal block code is needlessly general 28 | from symengine.lib.symengine_wrapper import * 29 | import sympy 30 | 31 | if have_mpfr == False: 32 | print("Symengine must be compiled with MPFR support") 33 | quit(1) 34 | 35 | # Relocate some self-contained classes to separate files 36 | # Importing them would not make sense because they refer back to things in this file 37 | exec(open("common.py").read()) 38 | exec(open("compat_autoboot.py").read()) 39 | exec(open("compat_juliboots.py").read()) 40 | exec(open("compat_scalar_blocks.py").read()) 41 | exec(open("blocks1.py").read()) 42 | exec(open("blocks2.py").read()) 43 | 44 | # MPFR has no trouble calling gamma_inc quickly when the first argument is zero 45 | # In case we need to go back to using non-zero values, the following might be faster 46 | """ 47 | import mpmath 48 | mpmath.mp.dps = dec_prec 49 | def uppergamma(x, a): 50 | return RealMPFR(str(mpmath.gammainc(mpmath.mpf(str(x)), a = mpmath.mpf(str(a)))), prec) 51 | """ 52 | 53 | class PolynomialVector: 54 | """ 55 | The main class for vectors on which the functionals being found by SDPB may act. 56 | 57 | Attributes 58 | ---------- 59 | vector: A list of the components, expected to be polynomials in `delta`. The 60 | number of components is dictated by the number of derivatives kept in 61 | the search space. 
62 | label: A two element list where the first element is the spin and the second 63 | is a user-defined label for the representation of some global symmetry 64 | (or 0 if none have been set yet). 65 | poles: A list of roots of the common denominator shared by all entries in 66 | `vector`. This allows one to go back to the original rational functions 67 | instead of the more convenient polynomials. 68 | """ 69 | def __init__(self, derivatives, spin_irrep, poles): 70 | if type(spin_irrep) == type(1): 71 | spin_irrep = [spin_irrep, 0] 72 | self.vector = derivatives 73 | self.label = spin_irrep 74 | self.poles = poles 75 | 76 | class ConformalBlockTable: 77 | """ 78 | A class which calculates tables of conformal block derivatives when initialized. 79 | This uses recursion relations on the diagonal found by Hogervorst, Osborn and 80 | Rychkov in arXiv:1305.1321. 81 | 82 | Parameters 83 | ---------- 84 | dim: The spatial dimension. If even dimensions are of interest, floating 85 | point numbers with small fractional parts are recommended. 86 | k_max: Number controlling the accuracy of the rational approximation. 87 | Specifically, it is the maximum power of the crossing symmetric value 88 | of the radial co-ordinate as described in arXiv:1406.4858. 89 | l_max: The maximum spin to include in the table. 90 | m_max: Number controlling how many `a` derivatives to include where the 91 | standard co-ordinates are expressed as `(a + sqrt(b)) / 2` and 92 | `(a - sqrt(b)) / 2`. As explained in arXiv:1412.4127, a value of 0 93 | does not necessarily eliminate all `a` derivatives. 94 | n_max: The number of `b` derivatives to include where the standard 95 | co-ordinates are expressed as `(a + sqrt(b)) / 2` and 96 | `(a - sqrt(b)) / 2`. 97 | delta_12: [Optional] The difference between the external scaling dimensions of 98 | operator 1 and operator 2. Defaults to 0. 99 | delta_34: [Optional] The difference between the external scaling dimensions of 100 | operator 3 and operator 4. 
Defaults to 0. 101 | odd_spins: [Optional] Whether to include 0, 1, 2, 3, ..., `l_max` instead of 102 | just 0, 2, 4, ..., `l_max`. Defaults to `False`. 103 | name: [Optional] The name of a file containing conformal blocks that have 104 | already been calculated. If this is specified and refers to a file 105 | produced by PyCFTBoot, all other parameters passed to the class are 106 | overwritten by the ones in the table. If it refers to one produced 107 | by JuliBoots, all parameters passed to the class except delta_12 and 108 | delta_34 are overwritten. If it refers to a directory, the contents 109 | will be treated as scalar_blocks output files. Parameters will 110 | only be overwritten in this case if they can easily be parsed from 111 | the filenames. 112 | 113 | Attributes 114 | ---------- 115 | table: A list of `PolynomialVector`s. A block's position in the table is 116 | equal to its spin if `odd_spins` is True. Otherwise it is equal to 117 | half of the spin. 118 | m_order: A list with the same number of components as the `PolynomialVector`s 119 | in `table`. Any `i`-th entry in a `PolynomialVector` is a particular 120 | derivative of a conformal block, but to remember which one, just look 121 | at the `i`-th entry of `m_order` which is the number of `a` 122 | derivatives. 123 | n_order: A list with the same number of components as the `PolynomialVector`s 124 | in `table`. Any `i`-th entry in a `PolynomialVector` is a particular 125 | derivative of a conformal block, but to remember which one, just look 126 | at the `i`-th entry of `n_order` which is the number of `b` 127 | derivatives. 
128 | """ 129 | def __init__(self, dim, k_max, l_max, m_max, n_max, delta_12 = 0, delta_34 = 0, odd_spins = False, name = None): 130 | self.dim = dim 131 | self.k_max = k_max 132 | self.l_max = l_max 133 | self.m_max = m_max 134 | self.n_max = n_max 135 | self.delta_12 = delta_12 136 | self.delta_34 = delta_34 137 | self.odd_spins = odd_spins 138 | 139 | if name != None: 140 | try: 141 | if os.path.isdir(name): 142 | scalar_blocks_read(self, name) 143 | else: 144 | dump_file = open(name, 'r') 145 | token = next(dump_file)[:4] 146 | dump_file.close() 147 | 148 | if token == "self": 149 | dump_file = open(name, 'r') 150 | command = dump_file.read() 151 | exec(command) 152 | dump_file.close() 153 | else: 154 | juliboots_read(self, name) 155 | return 156 | except: 157 | print("Table " + name + " not present, generating another", flush = True) 158 | 159 | if type(dim) == type(1) and dim % 2 == 0: 160 | small_table = ConformalBlockTableSeed2(dim, k_max, l_max, min(m_max + 2 * n_max, 3), delta_12, delta_34, odd_spins) 161 | else: 162 | small_table = ConformalBlockTableSeed(dim, k_max, l_max, min(m_max + 2 * n_max, 3), 0, delta_12, delta_34, odd_spins) 163 | self.m_order = small_table.m_order 164 | self.n_order = small_table.n_order 165 | self.table = small_table.table 166 | 167 | a = Symbol('a') 168 | nu = RealMPFR(str(dim - 2), prec) / 2 169 | c_2 = (ell * (ell + 2 * nu) + delta * (delta - 2 * nu - 2)) / 2 170 | c_4 = ell * (ell + 2 * nu) * (delta - 1) * (delta - 2 * nu - 1) 171 | polys = [0, 0, 0, 0, 0] 172 | poly_derivs = [[], [], [], [], []] 173 | delta_prod = -delta_12 * delta_34 / two 174 | delta_sum = -(delta_12 - delta_34) / two 175 | 176 | # Polynomial 0 goes with the lowest order derivative on the right hand side 177 | # Polynomial 3 goes with the highest order derivative on the right hand side 178 | # Polynomial 4 goes with the derivative for which we are solving 179 | polys[0] += (a ** 0) * (16 * c_2 * (2 * nu + 1) - 8 * c_4) 180 | polys[0] += (a ** 1) * (4 * 
(c_4 + 2 * (2 * nu + 1) * (c_2 * delta_sum - c_2 + nu * delta_prod))) 181 | polys[0] += (a ** 2) * (2 * (delta_sum - nu) * (c_2 * (2 * delta_sum - 1) + delta_prod * (6 * nu - 1))) 182 | polys[0] += (a ** 3) * (2 * delta_prod * (delta_sum - nu) * (delta_sum - nu + 1)) 183 | polys[1] += (a ** 1) * (-16 * c_2 * (2 * nu + 1)) 184 | polys[1] += (a ** 2) * (4 * delta_prod - 24 * nu * delta_prod + 8 * nu * (2 * nu - 1) * (2 * delta_sum + 1) + 4 * c_2 * (1 - 4 * delta_sum + 6 * nu)) 185 | polys[1] += (a ** 3) * (2 * c_2 * (4 * delta_sum - 2 * nu + 1) + 4 * (2 * nu - 1) * (2 * delta_sum + 1) * (delta_sum - nu + 1) + 2 * delta_prod * (10 * nu - 5 - 4 * delta_sum)) 186 | polys[1] += (a ** 4) * ((delta_sum - nu + 1) * (4 * delta_prod + (2 * delta_sum + 1) * (delta_sum - nu + 2))) 187 | polys[2] += (a ** 2) * (16 * c_2 + 16 * nu - 32 * nu * nu) 188 | polys[2] += (a ** 3) * (8 * delta_prod - 8 * (3 * delta_sum - nu + 3) * (2 * nu - 1) - 16 * c_2 - 8 * nu + 16 * nu * nu) 189 | polys[2] += (a ** 4) * (4 * (c_2 - delta_prod + (3 * delta_sum - nu + 3) * (2 * nu - 1)) - 4 * delta_prod - 2 * (delta_sum - nu + 2) * (5 * delta_sum - nu + 5)) 190 | polys[2] += (a ** 5) * (2 * delta_prod + (delta_sum - nu + 2) * (5 * delta_sum - nu + 5)) 191 | polys[3] += (a ** 3) * (32 * nu - 16) 192 | polys[3] += (a ** 4) * (16 - 32 * nu + 4 * (4 * delta_sum - 2 * nu + 7)) 193 | polys[3] += (a ** 5) * (4 * (2 * nu - 1) - 4 * (4 * delta_sum - 2 * nu + 7)) 194 | polys[3] += (a ** 6) * (4 * delta_sum - 2 * nu + 7) 195 | polys[4] += (a ** 7) - 6 * (a ** 6) + 12 * (a ** 5) - 8 * (a ** 4) 196 | 197 | # Store all possible derivatives of these polynomials 198 | for i in range(0, 5): 199 | for j in range(0, i + 4): 200 | poly_derivs[i].append(polys[i].subs(a, 1)) 201 | polys[i] = polys[i].diff(a) 202 | 203 | for m in range(self.m_order[-1] + 1, m_max + 2 * n_max + 1): 204 | for l in range(0, len(small_table.table)): 205 | new_deriv = 0 206 | for i in range(m - 1, max(m - 8, -1), -1): 207 | coeff = 0 208 | index 
= max(m - i - 4, 0) 209 | 210 | prefactor = one 211 | for k in range(0, index): 212 | prefactor *= (m - 4 - k) 213 | prefactor /= k + 1 214 | 215 | k = max(4 + i - m, 0) 216 | while k <= 4 and index <= (m - 4): 217 | coeff += prefactor * poly_derivs[k][index] 218 | prefactor *= (m - 4 - index) 219 | prefactor /= index + 1 220 | index += 1 221 | k += 1 222 | 223 | if type(coeff) != type(1): 224 | coeff = coeff.subs(ell, small_table.table[l].label[0]) 225 | new_deriv -= coeff * self.table[l].vector[i] 226 | 227 | new_deriv = new_deriv / poly_derivs[4][0] 228 | self.table[l].vector.append(new_deriv.expand()) 229 | 230 | self.m_order.append(m) 231 | self.n_order.append(0) 232 | 233 | # This is just an alternative to storing derivatives as a doubly-indexed list 234 | index = m_max + 2 * n_max + 1 235 | index_map = [range(0, m_max + 2 * n_max + 1)] 236 | 237 | for n in range(1, n_max + 1): 238 | index_map.append([]) 239 | for m in range(0, 2 * (n_max - n) + m_max + 1): 240 | index_map[n].append(index) 241 | 242 | coeff1 = m * (-1) * (2 - 4 * n - 4 * nu) 243 | coeff2 = m * (m - 1) * (2 - 4 * n - 4 * nu) 244 | coeff3 = m * (m - 1) * (m - 2) * (2 - 4 * n - 4 * nu) 245 | coeff4 = 1 246 | coeff5 = (-6 + m + 4 * n - 2 * nu - 2 * delta_sum) 247 | coeff6 = (-1) * (4 * c_2 + m * m + 8 * m * n - 5 * m + 4 * n * n - 2 * n - 2 - 4 * nu * (1 - m - n) + 4 * delta_sum * (m + 2 * n - 2) + 2 * delta_prod) 248 | coeff7 = m * (-1) * (m * m + 12 * m * n - 13 * m + 12 * n * n - 34 * n + 22 - 2 * nu * (2 * n - m - 1) + 2 * delta_sum * (m + 4 * n - 5) + 2 * delta_prod) 249 | coeff8 = (1 - n) 250 | coeff9 = (1 - n) * (-6 + 3 * m + 4 * n - 2 * nu + 2 * delta_sum) 251 | 252 | for l in range(0, len(small_table.table)): 253 | new_deriv = 0 254 | 255 | if m > 0: 256 | new_deriv += coeff1 * self.table[l].vector[index_map[n][m - 1]] 257 | if m > 1: 258 | new_deriv += coeff2 * self.table[l].vector[index_map[n][m - 2]] 259 | if m > 2: 260 | new_deriv += coeff3 * self.table[l].vector[index_map[n][m - 3]] 
261 | 262 | new_deriv += coeff4 * self.table[l].vector[index_map[n - 1][m + 2]] 263 | new_deriv += coeff5 * self.table[l].vector[index_map[n - 1][m + 1]] 264 | new_deriv += coeff6.subs(ell, small_table.table[l].label[0]) * self.table[l].vector[index_map[n - 1][m]] 265 | new_deriv += coeff7 * self.table[l].vector[index_map[n - 1][m - 1]] 266 | 267 | if n > 1: 268 | new_deriv += coeff8 * self.table[l].vector[index_map[n - 2][m + 2]] 269 | new_deriv += coeff9 * self.table[l].vector[index_map[n - 2][m + 1]] 270 | 271 | new_deriv = new_deriv / (2 - 4 * n - 4 * nu) 272 | self.table[l].vector.append(new_deriv.expand()) 273 | 274 | self.m_order.append(m) 275 | self.n_order.append(n) 276 | index += 1 277 | 278 | def dump(self, name, form = None): 279 | """ 280 | Saves a table of conformal block derivatives to a file. Unless overridden, 281 | the file is valid Python code which manually populates the entries of 282 | `table` when executed. 283 | 284 | Parameters 285 | ---------- 286 | name: The path to use for output. 287 | form: [Optional] A string indicating that the file should be saved in 288 | another program's format if it is equal to "scalar_blocks" or 289 | "juliboots". Any other value will be ignored. Defaults to `None`. 290 | """ 291 | if form == "juliboots": 292 | juliboots_write(self, name) 293 | elif form == "scalar_blocks": 294 | scalar_blocks_write(self, name) 295 | else: 296 | dump_table_contents(self, name) 297 | 298 | class ConvolvedBlockTable: 299 | """ 300 | A class which produces the functions that need to be linearly dependent in a 301 | crossing symmetric CFT. If a `ConformalBlockTable` does not need to be changed 302 | after a change to the external dimensions, a `ConvolvedBlockTable` does not 303 | either. This is because external dimensions only appear symbolically through a 304 | symbol called `delta_ext`. 305 | 306 | Parameters 307 | ---------- 308 | block_table: A `ConformalBlockTable` from which to produce the convolved blocks. 
309 | odd_spins: [Optional] A parameter telling the class to keep odd spins, which is 310 | only used if `odd_spins` is True for `block_table`. Defaults to 311 | `True`. 312 | symmetric: [Optional] Whether to add blocks in two different channels instead 313 | of subtracting them. Defaults to `False`. 314 | content: [Optional] A list of ordered triples that are used to produce 315 | user-defined linear combinations of convolved conformal blocks 316 | instead of just individual convolved conformal blocks where all the 317 | coefficients are 1. Elements of a triple are taken to be the 318 | coefficient, the dimension shift and the spin shift respectively. 319 | It should always make sense to include a triple whose second and 320 | third entries are 0 and 0 since this corresponds to a convolved 321 | conformal block with scaling dimension `delta` and spin `ell`. 322 | However, if other blocks in the multiplet have `delta + 1` and 323 | `ell - 1` relative to this, another triple should be included whose 324 | second and third entries are 1 and -1. The coefficient (first 325 | entry) may be a polynomial in `delta` with coefficients depending 326 | on `ell`. 327 | 328 | Attributes 329 | ---------- 330 | dim: The spatial dimension, inherited from `block_table`. 331 | k_max: Number controlling the accuracy of the rational approximation, 332 | inherited from `block_table`. 333 | l_max: The highest spin kept in the convolved block table. This is at most 334 | the `l_max` of `block_table`. 335 | m_max: Number controlling how many `a` derivatives there are where the 336 | standard co-ordinates are expressed as `(a + sqrt(b)) / 2` and 337 | `(a - sqrt(b)) / 2`. This is at most the `m_max` of `block_table`. 338 | n_max: The number of `b` derivatives there are where the standard 339 | co-ordinates are expressed as `(a + sqrt(b)) / 2` and 340 | `(a - sqrt(b)) / 2`. This is at most the `n_max` of `block_table`. 
341 | delta_12: The difference between the external scaling dimensions of operator 342 | 1 and operator 2, inherited from `block_table`. 343 | delta_34: The difference between the external scaling dimensions of operator 344 | 3 and operator 4, inherited from `block_table`. 345 | table: A list of `PolynomialVector`s. A block's position in the table is 346 | equal to its spin if `odd_spins` is `True`. Otherwise it is equal 347 | to half of the spin. 348 | m_order: A list stating how many `a` derivatives are being described by the 349 | corresponding entry in a `PolynomialVector` in `table`. Different 350 | from the `m_order` of `block_table` because some derivatives vanish 351 | by symmetry. 352 | n_order: A list stating how many `b` derivatives are being described by the 353 | corresponding entry in a `PolynomialVector` in `table`. 354 | """ 355 | def __init__(self, block_table, odd_spins = True, symmetric = False, spins = [], content = [[1, 0, 0]]): 356 | # Copying everything but the unconvolved table is fine from a memory standpoint 357 | self.dim = block_table.dim 358 | self.k_max = block_table.k_max 359 | self.l_max = block_table.l_max 360 | self.m_max = block_table.m_max 361 | self.n_max = block_table.n_max 362 | self.delta_12 = block_table.delta_12 363 | self.delta_34 = block_table.delta_34 364 | 365 | self.m_order = [] 366 | self.n_order = [] 367 | self.table = [] 368 | 369 | max_spin_shift = 0 370 | for trip in content: 371 | max_spin_shift = max(max_spin_shift, trip[2]) 372 | self.l_max -= max_spin_shift 373 | 374 | # We can restrict to even spin when the provided table has odd spin but not vice-versa 375 | if odd_spins == False and block_table.odd_spins == True: 376 | self.odd_spins = False 377 | else: 378 | self.odd_spins = block_table.odd_spins 379 | if block_table.odd_spins == True: 380 | step = 1 381 | else: 382 | step = 2 383 | if len(spins) > 0: 384 | spin_list = spins 385 | elif self.odd_spins: 386 | spin_list = range(0, self.l_max + 1, 1) 387 | 
else: 388 | spin_list = range(0, self.l_max + 1, 2) 389 | 390 | symbol_array = [] 391 | for n in range(0, block_table.n_max + 1): 392 | symbol_list = [] 393 | for m in range(0, 2 * (block_table.n_max - n) + block_table.m_max + 1): 394 | symbol_list.append(Symbol('g_' + n.__str__() + '_' + m.__str__())) 395 | symbol_array.append(symbol_list) 396 | 397 | derivatives = [] 398 | for n in range(0, block_table.n_max + 1): 399 | for m in range(0, 2 * (block_table.n_max - n) + block_table.m_max + 1): 400 | # Skip the ones that will vanish 401 | if (symmetric == False and m % 2 == 0) or (symmetric == True and m % 2 == 1): 402 | continue 403 | 404 | self.m_order.append(m) 405 | self.n_order.append(n) 406 | 407 | expression = 0 408 | old_coeff = RealMPFR("0.25", prec) ** delta_ext 409 | for j in range(0, n + 1): 410 | coeff = old_coeff 411 | for i in range(0, m + 1): 412 | expression += coeff * symbol_array[n - j][m - i] 413 | coeff *= (i + 2 * j - 2 * delta_ext) * (m - i) / (i + 1) 414 | old_coeff *= (j - delta_ext) * (n - j) / (j + 1) 415 | 416 | deriv = expression / RealMPFR(str(factorial(m) * factorial(n)), prec) 417 | derivatives.append(deriv) 418 | 419 | combined_block_table = [] 420 | for spin in spin_list: 421 | vector = [] 422 | l = spin // step 423 | 424 | # Different blocks in the linear combination may be divided by different poles 425 | all_poles = [] 426 | pole_dict = {} 427 | for trip in content: 428 | del_shift = trip[1] 429 | ell_shift = trip[2] // step 430 | if l + ell_shift >= 0: 431 | gathered_poles = gather(block_table.table[l + ell_shift].poles) 432 | for p in gathered_poles.keys(): 433 | ind = get_index_approx(pole_dict.keys(), p - del_shift) 434 | if ind == -1: 435 | pole_dict[p - del_shift] = gathered_poles[p] 436 | else: 437 | pole_dict_index = index_iter(pole_dict.keys(), ind) 438 | num = pole_dict[pole_dict_index] 439 | pole_dict[pole_dict_index] = max(num, gathered_poles[p]) 440 | for p in pole_dict.keys(): 441 | all_poles += [p] * pole_dict[p] 
442 | 443 | for i in range(0, len(block_table.table[l].vector)): 444 | entry = 0 445 | for trip in content: 446 | if "subs" in dir(trip[0]): 447 | coeff = trip[0].subs(ell, spin) 448 | else: 449 | coeff = trip[0] 450 | del_shift = trip[1] 451 | ell_shift = trip[2] // step 452 | 453 | coeff *= r_cross ** del_shift 454 | if l + ell_shift >= 0: 455 | coeff *= omit_all(all_poles, block_table.table[l + ell_shift].poles, delta, del_shift) 456 | entry += coeff * block_table.table[l + ell_shift].vector[i].subs(delta, delta + del_shift) 457 | vector.append(entry.expand()) 458 | combined_block_table.append(PolynomialVector(vector, [spin, 0], all_poles)) 459 | 460 | for l in range(0, len(combined_block_table)): 461 | new_derivs = [] 462 | for i in range(0, len(derivatives)): 463 | deriv = derivatives[i] 464 | for j in range(len(combined_block_table[l].vector) - 1, 0, -1): 465 | deriv = deriv.subs(symbol_array[block_table.n_order[j]][block_table.m_order[j]], combined_block_table[l].vector[j]) 466 | new_derivs.append(2 * deriv.subs(symbol_array[0][0], combined_block_table[l].vector[0])) 467 | self.table.append(PolynomialVector(new_derivs, combined_block_table[l].label, combined_block_table[l].poles)) 468 | 469 | class SDP: 470 | """ 471 | A class where convolved conformal blocks are augmented by crossing equations 472 | which allow numerical bounds to be derived. All calls to `SDPB` happen through 473 | this class. 474 | 475 | Parameters 476 | ---------- 477 | dim_list: A list of all scaling dimensions that appear in the external 478 | operators of the four-point functions being considered. If 479 | there is only one, this may be a float instead of a list. 480 | conv_table_list: A list of all types of convolved conformal block tables that 481 | appear in the crossing equations. If there is only one type, 482 | this may be a `ConvolvedBlockTable` instance instead of a list. 483 | vector_types: [Optional] A list of triples, one for each type of operator in 484 | the sum rule. 
The third element of each triple is the arbitrary 485 | label for that representation (something used to label 486 | `PolynomialVector`s that are generated). The second element is 487 | an even integer for even spin operators and an odd integer for 488 | odd spin operators. The first element is everything else. 489 | Specifically, it is a list of matrices of ordered quadruples 490 | where a matrix is a list of lists. If the sum rule involves no 491 | matrices, it may simply be a list of ordered quadruples. In a 492 | quadruple, the first entry is a numerical coefficient and the 493 | second entry is an index stating which element of 494 | `conv_table_list` that coefficient should multiply. The third 495 | and fourth entries (which may be omitted if `dim_list` has only 496 | one entry) specify the external dimensions that should replace 497 | `delta_ext` in a `ConvolvedBlockTable` as positions in 498 | `dim_list`. They are the "inner two" dimensions `j` and `k` if 499 | convolved conformal blocks are given `i`, `j`, `k`, `l` labels 500 | as in arXiv:1406.4858. The first triple must describe the even 501 | spin singlet channel (where the identity can be found). After 502 | this, the order of the triples is not important. 503 | prototype: [Optional] A previous instance of `SDP` which may speed up the 504 | allocation of this one. The idea is that if a bound on any 505 | operator does not need to change from one table to the next, 506 | the bilinear basis corresponding to it (which requires a 507 | Cholesky decomposition and a matrix inversion to calculate) 508 | might simply be copied. 509 | 510 | Attributes 511 | ---------- 512 | dim: The spatial dimension, inherited from `conv_table_list`. 513 | k_max: The corresponding attribute from `conv_table_list`. 514 | l_max: The corresponding attribute from `conv_table_list`. 515 | m_max: The corresponding attribute from `conv_table_list`. 
516 | n_max: The corresponding attribute from `conv_table_list`. 517 | odd_spins: Whether any element of `conv_table_list` has odd spins. 518 | table: A list of matrices of `PolynomialVector`s where the number of 519 | rows and columns is determined from `vector_types`. They are 520 | ordered first by the type of representation and then by spin. 521 | Each `PolynomialVector` may be longer than a `PolynomialVector` 522 | from a single entry of `conv_table_list`. They represent 523 | the concatenation of several such `PolynomialVector`s, one for 524 | each row of a vectorial sum rule. 525 | m_order: Analogous to `m_order` in `ConformalBlockTable` or 526 | `ConvolvedBlockTable`, this keeps track of the number of `a` 527 | derivatives in these longer `PolynomialVector`s. 528 | n_order: Analogous to `n_order` in `ConformalBlockTable` or 529 | `ConvolvedBlockTable`, this keeps track of the number of `b` 530 | derivatives in these longer `PolynomialVector`s. 531 | options: A list of strings where each string is a command line option 532 | that will be passed when `SDPB` is run from this `SDP`. This 533 | list should be touched with `set_option` and not directly. 534 | points: In addition to `PolynomialVector`s whose entries allow `delta` 535 | to take any positive value, the user may also include in the 536 | sum rule `PolynomialVector`s whose entries are pure numbers. 537 | In other words, she may evaluate some of them once and for all 538 | at particular values of `delta` to force certain operators to 539 | appear in the spectrum. This list should be touched with 540 | `add_point` and not directly. 541 | unit: A list which gives the `PolynomialVector` corresponding to the 542 | identity. This is obtained simply by plugging `delta = 0` into 543 | the zero spin singlet channel. 
If such a channel involves 544 | matrices, the sum of all elements is taken since the conformal 545 | blocks are normalized under the convention that all OPE 546 | coefficients involving the identity are 1. It should not be 547 | necessary to change this. 548 | irrep_set: A list of ordered pairs, one for each type of operator in 549 | `vector_types`. The second element of each is a label for the 550 | representation. The first is a modified version of the first 551 | matrix. The ordered quadruples do not correspond to the 552 | prefactors and list positions anymore but to the four external 553 | operator dimensions that couple to the block in this position. 554 | It should not be necessary to change this. 555 | basis: A list of matrices which has as many matrices as `table`. 556 | Each triangular matrix stores a set of orthogonal polynomials 557 | in the monomial basis. It should not be necessary to change 558 | this. 559 | """ 560 | def __init__(self, dim_list, conv_table_list, vector_types = [[[[[[1, 0, 0, 0]]]], 0, 0]], prototype = None): 561 | # If a user is looking at single correlators, we will not punish 562 | # her for only passing one dimension 563 | if type(dim_list) != type([]): 564 | dim_list = [dim_list] 565 | if type(conv_table_list) != type([]): 566 | conv_table_list = [conv_table_list] 567 | 568 | # Same story here 569 | self.dim = 0 570 | self.k_max = 0 571 | self.l_max = 0 572 | self.m_max = 0 573 | self.n_max = 0 574 | self.odd_spins = False 575 | 576 | # Just in case these are different 577 | for tab in conv_table_list: 578 | self.dim = max(self.dim, tab.dim) 579 | self.k_max = max(self.k_max, tab.k_max) 580 | self.l_max = max(self.l_max, tab.l_max) 581 | self.m_max = max(self.m_max, tab.m_max) 582 | self.n_max = max(self.n_max, tab.n_max) 583 | 584 | self.points = [] 585 | self.m_order = [] 586 | self.n_order = [] 587 | self.table = [] 588 | self.unit = [] 589 | self.irrep_set = [] 590 | 591 | # Turn any "raw elements" from the vectorial sum 
rule into 1x1 matrices 592 | for i in range(0, len(vector_types)): 593 | for j in range(0, len(vector_types[i][0])): 594 | if type(vector_types[i][0][j][0]) != type([]): 595 | vector_types[i][0][j] = [[vector_types[i][0][j]]] 596 | 597 | # Again, fill in arguments that need not be specified for single correlators 598 | for i in range(0, len(vector_types)): 599 | for j in range(0, len(vector_types[i][0])): 600 | for k in range(0, len(vector_types[i][0][j])): 601 | for l in range(0, len(vector_types[i][0][j][k])): 602 | if len(vector_types[i][0][j][k][l]) == 2: 603 | vector_types[i][0][j][k][l].append(0) 604 | vector_types[i][0][j][k][l].append(0) 605 | 606 | # We must assume the 0th element put in vector_types corresponds to the singlet channel 607 | # This is because we must harvest the identity from it 608 | for matrix in vector_types[0][0]: 609 | chosen_tab = conv_table_list[matrix[0][0][1]] 610 | 611 | for i in range(0, len(chosen_tab.table[0].vector)): 612 | unit = 0 613 | m = chosen_tab.m_order[i] 614 | n = chosen_tab.n_order[i] 615 | for r in range(0, len(matrix)): 616 | for s in range(0, len(matrix[r])): 617 | quad = matrix[r][s] 618 | param = RealMPFR("0.5", prec) * (dim_list[quad[2]] + dim_list[quad[3]]) 619 | #tab = conv_table_list[quad[1]] 620 | #factor = self.shifted_prefactor(tab.table[0].poles, r_cross, 0, 0) 621 | #unit += factor * quad[0] * tab.table[0].vector[i].subs(delta, 0).subs(delta_ext, (dim_list[quad[2]] + dim_list[quad[3]]) / 2.0) 622 | unit += 2 * quad[0] * (RealMPFR("0.25", prec) ** param) * rf(-param, n) * rf(2 * n - 2 * param, m) / (factorial(m) * factorial(n)) 623 | 624 | self.m_order.append(m) 625 | self.n_order.append(n) 626 | self.unit.append(unit) 627 | 628 | # Looping over types and spins gives "0 - S", "0 - T", "1 - A" and so on 629 | for vec in vector_types: 630 | # Instead of specifying even or odd spins, the user can specify a list of spins 631 | if type(vec[1]) == type([]): 632 | spin_list = vec[1] 633 | elif (vec[1] % 2) == 
1: 634 | self.odd_spins = True 635 | spin_list = range(1, self.l_max, 2) 636 | else: 637 | spin_list = range(0, self.l_max, 2) 638 | 639 | for l in spin_list: 640 | size = len(vec[0][0]) 641 | 642 | outer_list = [] 643 | for r in range(0, size): 644 | inner_list = [] 645 | for s in range(0, size): 646 | derivatives = [] 647 | large_poles = [] 648 | for matrix in vec[0]: 649 | quad = matrix[r][s] 650 | tab = conv_table_list[quad[1]] 651 | 652 | if tab.odd_spins: 653 | index = l 654 | else: 655 | index = l // 2 656 | if quad[0] != 0: 657 | large_poles = tab.table[index].poles 658 | 659 | for i in range(0, len(tab.table[index].vector)): 660 | derivatives.append(quad[0] * tab.table[index].vector[i].subs(delta_ext, (dim_list[quad[2]] + dim_list[quad[3]]) / 2.0)) 661 | inner_list.append(PolynomialVector(derivatives, [l, vec[2]], large_poles)) 662 | outer_list.append(inner_list) 663 | self.table.append(outer_list) 664 | 665 | # We are done with vector_types now so we can change it 666 | for vec in vector_types: 667 | matrix = deepcopy(vec[0][0]) 668 | for r in range(0, len(matrix)): 669 | for s in range(0, len(matrix)): 670 | quad = matrix[r][s] 671 | dim2 = dim_list[quad[2]] 672 | dim3 = dim_list[quad[3]] 673 | dim1 = dim2 + conv_table_list[quad[1]].delta_12 674 | dim4 = dim3 - conv_table_list[quad[1]].delta_34 675 | matrix[r][s] = [dim1, dim2, dim3, dim4] 676 | self.irrep_set.append([matrix, vec[2]]) 677 | 678 | self.bounds = [0.0] * len(self.table) 679 | self.options = [] 680 | 681 | if prototype == None: 682 | self.basis = [0] * len(self.table) 683 | self.set_bound(reset_basis = True) 684 | else: 685 | self.basis = [] 686 | for mat in prototype.basis: 687 | self.basis.append(mat) 688 | self.set_bound(reset_basis = False) 689 | 690 | def add_point(self, spin_irrep = -1, dimension = -1, extra = []): 691 | """ 692 | Tells the `SDP` that a particular fixed operator should be included in the 693 | sum rule. 
If called with one argument, all points with that label will be 694 | removed. If called with no arguments, all points with any label will be 695 | removed. 696 | 697 | Parameters 698 | ---------- 699 | spin_irrep: [Optional] An ordered pair used to label the `PolynomialVector` 700 | for the operator. The first entry is the spin, the second is the 701 | label which must be found in `vector_types` or 0 if not present. 702 | Defaults to -1 which means all operators. 703 | dimension: [Optional] The scaling dimension of the operator being added. 704 | Defaults to -1 which means the point should be removed. 705 | extra: [Optional] A list of quintuples specifying information about 706 | other operators that should be packaged with this operator. The 707 | first two elements of a quintuple are the `spin_irrep` and 708 | `dimension`, but for a companion operator which is not being added 709 | separately because its presence is tied to this one. The next 710 | two elements of a quintuple are ordered pairs giving positions 711 | in the crossing equation matrices. The operator described by the 712 | first two quintuple elements should have its contribution in the 713 | position given by the first ordered pair added to that of the 714 | operator described by `spin_irrep` and `dimension` in the 715 | position given by the second ordered pair. The final element of 716 | the quintuple is a coefficient that should multiply whatever is 717 | added. The purpose of this is to enforce OPE coefficient 718 | relations as in arXiv:1603.04436.
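For illustration, here is a hypothetical sketch of the quintuple format described above. All labels, dimensions and the coefficient are made up, and the commented call assumes an existing `SDP` instance named `sdp`:

```python
# Hypothetical sketch: package a spin-0 companion of dimension 3.0 with a
# spin-2 operator of dimension 5.0 whose OPE coefficient is tied to it.
companion = [
    [0, 0],      # spin_irrep of the companion operator
    3.0,         # its scaling dimension
    (0, 0),      # position in the crossing equation matrices to read from
    (1, 1),      # position whose contribution the companion is added to
    0.25,        # coefficient multiplying the companion's contribution
]

# The call would then look like:
# sdp.add_point([2, 0], 5.0, extra = [companion])

# Sanity checks on the shape that add_point expects
assert len(companion) == 5
assert len(companion[0]) == 2 and len(companion[2]) == 2 and len(companion[3]) == 2
```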
719 | """ 720 | if spin_irrep == -1: 721 | self.points = [] 722 | return 723 | 724 | if type(spin_irrep) == type(1): 725 | spin_irrep = [spin_irrep, 0] 726 | if dimension != -1: 727 | self.points.append((spin_irrep, dimension, extra)) 728 | else: 729 | for p in self.points: 730 | if p[0] == spin_irrep: 731 | self.points.remove(p) 732 | 733 | def get_bound(self, gapped_spin_irrep): 734 | """ 735 | Returns the minimum scaling dimension of a given operator in this `SDP`. 736 | This will return the unitarity bound until the user starts calling 737 | `set_bound`. 738 | 739 | Parameters 740 | ---------- 741 | gapped_spin_irrep: An ordered pair used to label the `PolynomialVector` 742 | whose bound should be read. The first entry is the spin 743 | and the second is the label found in `vector_types` or 744 | 0 if not present. 745 | """ 746 | if type(gapped_spin_irrep) == type(1): 747 | gapped_spin_irrep = [gapped_spin_irrep, 0] 748 | for l in range(0, len(self.table)): 749 | if self.table[l][0][0].label == gapped_spin_irrep: 750 | return self.bounds[l] 751 | 752 | def set_bound(self, gapped_spin_irrep = -1, delta_min = -1, reset_basis = True): 753 | """ 754 | Sets the minimum scaling dimension of a given operator in the sum rule. If 755 | called with one argument, the operator with that label will be assigned the 756 | unitarity bound. If called with no arguments, all operators will be assigned 757 | the unitarity bound. 758 | 759 | Parameters 760 | ---------- 761 | gapped_spin_irrep: [Optional] An ordered pair used to label the 762 | `PolynomialVector` whose bound should be set. The first 763 | entry is the spin and the second is the label found in 764 | `vector_types` or 0 if not present. Defaults to -1 which 765 | means all operators. 766 | delta_min: [Optional] The minimum scaling dimension to set. Also 767 | accepts oo to indicate that a continuum should not be 768 | included. Defaults to -1 which means unitarity. 
769 | reset_basis: [Optional] An internal parameter which may be used to 770 | prevent the orthogonal polynomials which improve the 771 | numerical stability of `SDPB` from being recalculated. 772 | Defaults to `True`. 773 | """ 774 | if gapped_spin_irrep == -1: 775 | for l in range(0, len(self.table)): 776 | spin = self.table[l][0][0].label[0] 777 | self.bounds[l] = unitarity_bound(self.dim, spin) 778 | 779 | if reset_basis: 780 | self.set_basis(l) 781 | else: 782 | if type(gapped_spin_irrep) == type(1): 783 | gapped_spin_irrep = [gapped_spin_irrep, 0] 784 | 785 | l = self.get_table_index(gapped_spin_irrep) 786 | spin = gapped_spin_irrep[0] 787 | 788 | if delta_min == -1: 789 | self.bounds[l] = unitarity_bound(self.dim, spin) 790 | else: 791 | self.bounds[l] = delta_min 792 | 793 | if reset_basis and delta_min != oo: 794 | self.set_basis(l) 795 | 796 | def get_option(self, key): 797 | """ 798 | Returns the string representation of a value that `SDPB` will use, whether 799 | or not it has been explicitly set. 800 | 801 | Parameters 802 | ---------- 803 | key: The name of the `SDPB` parameter without any "--" at the beginning or 804 | "=" at the end. 805 | """ 806 | if key in sdpb_options: 807 | ret = sdpb_defaults[sdpb_options.index(key)] 808 | opt_string = "--" + key + "=" 809 | for i in range(0, len(self.options)): 810 | if self.options[i][:len(opt_string)] == opt_string: 811 | ret = self.options[i][len(opt_string):] 812 | break 813 | return ret 814 | 815 | def set_option(self, key = None, value = None): 816 | """ 817 | Sets the value of a switch that should be passed to `SDPB` on the command 818 | line. `SDPB` options that do not take a parameter are handled by other 819 | methods so it should not be necessary to pass them. 820 | 821 | Parameters 822 | ---------- 823 | key: [Optional] The name of the `SDPB` parameter being set without any 824 | "--" at the beginning or "=" at the end. 
Defaults to `None` which 825 | means all parameters will be reset to their default values. 826 | value: [Optional] The string or numerical value that should accompany `key`. 827 | Defaults to `None` which means that the parameter for `key` will be 828 | reset to its default value. 829 | """ 830 | if key == None: 831 | self.options = [] 832 | elif key in sdpb_options: 833 | found = False 834 | opt_string = "--" + key + "=" 835 | for i in range(0, len(self.options)): 836 | if self.options[i][:len(opt_string)] == opt_string: 837 | found = True 838 | break 839 | if found == True and value == None: 840 | self.options = self.options[:i] + self.options[i + 1:] 841 | elif found == True and value != None: 842 | self.options[i] = opt_string + str(value) 843 | elif found == False and value != None: 844 | self.options.append(opt_string + str(value)) 845 | else: 846 | print("Unknown option", flush = True) 847 | 848 | def get_table_index(self, spin_irrep): 849 | """ 850 | Searches for the label of a `PolynomialVector` and returns its position in 851 | `table` or -1 if not found. 852 | 853 | Parameters 854 | ---------- 855 | spin_irrep: An ordered pair of the type passed to `set_bound`. Used to 856 | label the spin and representation being searched. 857 | """ 858 | if type(spin_irrep) == type(1): 859 | spin_irrep = [spin_irrep, 0] 860 | for l in range(0, len(self.table)): 861 | if self.table[l][0][0].label == spin_irrep: 862 | return l 863 | return -1 864 | 865 | def set_basis(self, index): 866 | """ 867 | Calculates a basis of polynomials that are orthogonal with respect to the 868 | positive measure prefactor that turns a `PolynomialVector` into a rational 869 | approximation to a conformal block. It should not be necessary to explicitly 870 | call this. 871 | 872 | Parameters 873 | ---------- 874 | index: The position of the matrix in `table` whose basis needs updating. 
875 | """ 876 | poles = self.table[index][0][0].poles 877 | delta_min = self.bounds[index] 878 | delta_min = float(delta_min) 879 | delta_min = RealMPFR(str(delta_min), prec) 880 | bands = [] 881 | matrix = [] 882 | 883 | degree = 0 884 | size = len(self.table[index]) 885 | for r in range(0, size): 886 | for s in range(0, size): 887 | polynomial_vector = self.table[index][r][s].vector 888 | 889 | for n in range(0, len(polynomial_vector)): 890 | expression = polynomial_vector[n].expand() 891 | degree = max(degree, len(coefficients(expression)) - 1) 892 | 893 | # Separate the poles and associate each with an uppergamma function 894 | # This avoids computing these same functions for each d in the loop below 895 | gathered_poles = gather(poles) 896 | poles = [] 897 | orders = [] 898 | gammas = [] 899 | for p in gathered_poles: 900 | if p < delta_min: 901 | poles.append(p - delta_min) 902 | orders.append(gathered_poles[p]) 903 | gammas.append(uppergamma(zero, (p - delta_min) * log(r_cross))) 904 | 905 | for d in range(0, 2 * (degree // 2) + 1): 906 | result = (r_cross ** delta_min) * self.integral(d, poles, orders, gammas) 907 | bands.append(result) 908 | for r in range(0, (degree // 2) + 1): 909 | new_entries = [] 910 | for s in range(0, (degree // 2) + 1): 911 | new_entries.append(bands[r + s]) 912 | matrix.append(new_entries) 913 | 914 | matrix = DenseMatrix(matrix) 915 | matrix = matrix.cholesky() 916 | matrix = matrix.inv() 917 | self.basis[index] = matrix 918 | 919 | def reshuffle_with_normalization(self, vector, norm): 920 | """ 921 | Converts between the Mathematica definition and the bootstrap definition of 922 | an SDP. As explained in arXiv:1502.02033, it is natural to normalize the 923 | functionals being found by demanding that they give 1 when acting on a 924 | particular `PolynomialVector`. `SDPB` on the other hand works with 925 | functionals that have a fixed leading component. This is an equivalent 926 | problem after a trivial reshuffling. 
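The reshuffling described above can be sketched as a standalone function on plain floats. The actual method operates on symengine expressions, but the index juggling is the same:

```python
def reshuffle_sketch(vector, norm):
    # Find the component of norm with the largest magnitude
    max_index = max(range(len(norm)), key = lambda i: abs(norm[i]))
    # Subtract a multiple of norm so that this component drops out
    const = vector[max_index] / norm[max_index]
    ret = [vector[i] - const * norm[i] for i in range(len(norm))]
    # The subtracted multiple becomes the new leading component
    return [const] + ret[:max_index] + ret[max_index + 1:]

# max_index is 1, const is 3.0 / 4.0, and component 1 drops out
print(reshuffle_sketch([2.0, 3.0, 5.0], [1.0, 4.0, 2.0]))  # [0.75, 1.25, 3.5]
```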
927 | 928 | Parameters 929 | ---------- 930 | vector: The `vector` part of the `PolynomialVector` needing to be shuffled. 931 | norm: The `vector` part of the `PolynomialVector` constrained to have 932 | unit action under the functional before the reshuffling. 933 | """ 934 | norm_hack = [] 935 | for el in norm: 936 | norm_hack.append(float(el)) 937 | 938 | max_index = norm_hack.index(max(norm_hack, key = abs)) 939 | const = vector[max_index] / norm[max_index] 940 | ret = [] 941 | 942 | for i in range(0, len(norm)): 943 | ret.append(vector[i] - const * norm[i]) 944 | 945 | ret = [const] + ret[:max_index] + ret[max_index + 1:] 946 | return ret 947 | 948 | def short_string(self, num): 949 | """ 950 | Returns the string representation of a number except with an attempt to trim 951 | superfluous zeros if the number is too small. 952 | 953 | Parameters 954 | ---------- 955 | num: The number. 956 | """ 957 | if abs(num) < tiny: 958 | return "0" 959 | else: 960 | return str(num) 961 | 962 | def make_laguerre_points(self, degree): 963 | """ 964 | Returns a list of convenient sample points for the XML files of `SDPB`. 965 | 966 | Parameters 967 | ---------- 968 | degree: The maximum degree of all polynomials in a `PolynomialVector`. 969 | """ 970 | ret = [] 971 | for d in range(0, degree + 1): 972 | point = -(pi.n(prec) ** 2) * ((4 * d - 1) ** 2) / (64 * (degree + 1) * log(r_cross)) 973 | ret.append(point) 974 | return ret 975 | 976 | def shifted_prefactor(self, poles, base, x, shift): 977 | """ 978 | Returns the positive measure prefactor that turns a `PolynomialVector` into 979 | a rational approximation to a conformal block. Evaluating this at a sample 980 | point produces a sample scaling needed by the XML files of `SDPB`. 981 | 982 | Parameters 983 | ---------- 984 | poles: The roots of the prefactor's denominator, often from the `poles` 985 | attribute of a `PolynomialVector`. 
986 | base: The base of the exponential in the numerator, often the crossing 987 | symmetric value of the radial co-ordinate. 988 | x: The argument of the function, often `delta`. 989 | shift: An amount by which to shift `x`. This should match one of the minimal 990 | values assigned by `set_bound`. 991 | """ 992 | product = 1 993 | for p in poles: 994 | product *= x - (p - shift) 995 | return (base ** (x + shift)) / product 996 | 997 | def basic_integral(self, pos, pole, order, gamma_val): 998 | """ 999 | Returns the inner product of two monic monomials with respect to a more 1000 | basic positive measure prefactor which has just a single pole. 1001 | 1002 | Parameters 1003 | ---------- 1004 | pos: The sum of the degrees of the two monomials. 1005 | pole: The root of the prefactor's denominator. 1006 | order: The multiplicity of this root. 1007 | gamma_val: The associated incomplete gamma function. Note that it is no 1008 | longer uppergamma(0, pole * log(r_cross)) because we are 1009 | performing a change of variables. 1010 | """ 1011 | if order == 1: 1012 | ret = exp(-pole) * (pole ** pos) * gamma_val 1013 | for i in range(0, pos): 1014 | ret += factorial(pos - i - 1) * (pole ** i) 1015 | return ret 1016 | elif pos == 0: 1017 | return ((-pole) ** (1 - order) / (order - 1)) - (one / (order - 1)) * self.basic_integral(pos, pole, order - 1, gamma_val) 1018 | else: 1019 | return (one / (order - 1)) * (pos * self.basic_integral(pos - 1, pole, order - 1, gamma_val) - self.basic_integral(pos, pole, order - 1, gamma_val)) 1020 | 1021 | def integral(self, pos, poles, orders, gammas): 1022 | """ 1023 | Returns the inner product of two monic monomials with respect to the 1024 | positive measure prefactor that turns a `PolynomialVector` into a rational 1025 | approximation to a conformal block. 1026 | 1027 | Parameters 1028 | ---------- 1029 | pos: The sum of the degrees of the two monomials. 1030 | poles: A list of the roots of the prefactor's denominator. 
1031 | orders: The multiplicities of those poles. 1032 | gammas: A list representing the image of `poles` under the map which sends 1033 | x to uppergamma(0, x * log(r_cross)). 1034 | """ 1035 | ret = zero 1036 | if len(poles) == 0: 1037 | return factorial(pos) / ((-log(r_cross)) ** (pos + 1)) 1038 | 1039 | for i in range(0, len(poles)): 1040 | pole = poles[i] 1041 | order = orders[i] 1042 | gamma_val = gammas[i] 1043 | other_poles = poles[:i] + poles[i + 1:] 1044 | other_orders = orders[:i] + orders[i + 1:] 1045 | exponents = {} 1046 | exponents[tuple(other_orders)] = 1 1047 | # For an order 3 pole, 0, 1 and 2 derivatives are needed in the Laurent series 1048 | for j in range(0, order): 1049 | for term in exponents: 1050 | prod = factorial(j) 1051 | for k in range(0, len(term)): 1052 | prod *= (pole - other_poles[k]) ** term[k] 1053 | ret += (one / prod) * exponents[term] * ((-log(r_cross)) ** (order - j - 1 - pos)) * self.basic_integral(pos, -pole * log(r_cross), order - j, gamma_val) 1054 | if j == order - 1: 1055 | break 1056 | # Update exponents to move onto the next derivative 1057 | new_exponents = {} 1058 | for term in exponents: 1059 | for k in range(0, len(term)): 1060 | new_term = list(term) 1061 | new_term[k] += 1 1062 | new_term = tuple(new_term) 1063 | if new_term in new_exponents: 1064 | new_exponents[new_term] += -term[k] * exponents[term] 1065 | else: 1066 | new_exponents[new_term] = -term[k] * exponents[term] 1067 | exponents = new_exponents 1068 | 1069 | return ret 1070 | 1071 | def table_extension(self, points): 1072 | """ 1073 | Returns a list of matrices of `PolynomialVector`s which should be appended 1074 | to `table` directly before the SDP is written. The caller is responsible 1075 | for shortening `table` again afterwards. There should be no need to call 1076 | this directly.
1077 | 1078 | Parameters 1079 | ---------- 1080 | points: A list of points discretely added to the SDP in the format prepared 1081 | by `add_point` (often the `points` attribute itself). 1082 | """ 1083 | extra_vectors = [] 1084 | 1085 | for p in points: 1086 | l = self.get_table_index(p[0]) 1087 | size = len(self.table[l]) 1088 | 1089 | outer_list = [] 1090 | for r in range(0, size): 1091 | inner_list = [] 1092 | for s in range(0, size): 1093 | new_vector = [] 1094 | for i in range(0, len(self.table[l][r][s].vector)): 1095 | addition = self.table[l][r][s].vector[i].subs(delta, p[1]) 1096 | for quint in p[2]: 1097 | if quint[3][0] != r or quint[3][1] != s: 1098 | continue 1099 | l_new = self.get_table_index(quint[0]) 1100 | r_new = quint[2][0] 1101 | s_new = quint[2][1] 1102 | coeff = quint[4] 1103 | coeff *= self.shifted_prefactor(self.table[l_new][0][0].poles, r_cross, quint[1], 0) 1104 | coeff /= self.shifted_prefactor(self.table[l][0][0].poles, r_cross, p[1], 0) 1105 | addition += coeff * self.table[l_new][r_new][s_new].vector[i].subs(delta, quint[1]) 1106 | new_vector.append(addition) 1107 | inner_list.append(PolynomialVector(new_vector, p[0], self.table[l][r][s].poles)) 1108 | outer_list.append(inner_list) 1109 | extra_vectors.append(outer_list) 1110 | 1111 | return extra_vectors 1112 | 1113 | def write_zip(self, obj, norm, name = "mySDP"): 1114 | """ 1115 | Outputs a PKZIP archive containing JSON files which describe the `table`, 1116 | `bounds`, `points` and `basis` for this SDP to be read by the Elemental 1117 | version of SDPB. The result should be equivalent to calling `write_xml` and 1118 | then `pvm2sdp`. 1119 | 1120 | Parameters 1121 | ---------- 1122 | obj: Objective vector (often the `vector` part of a `PolynomialVector`) 1123 | whose action under the found functional should be maximized. 1124 | norm: Normalization vector (often the `vector` part of a `PolynomialVector`) 1125 | which should have unit action under the functionals.
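The archive layout described here can be sketched with the standard library alone. The member names mirror the ones written by `write_zip` (newer SDPB 2.x naming); the JSON contents below are minimal placeholders, not a real SDP:

```python
import io
import json
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, mode = 'w') as doc:
    # One control file and one objectives file per SDP
    doc.writestr("control.json", json.dumps({"num_blocks": 1, "command": "python"}, indent = 2))
    doc.writestr("objectives.json", json.dumps({"constant": "0", "b": []}, indent = 2))
    # One pair of block files per positivity constraint
    doc.writestr("block_info_0.json", json.dumps({"dim": 1, "num_points": 1}, indent = 2))
    doc.writestr("block_data_0.json", json.dumps({"bilinear_bases_even": [], "bilinear_bases_odd": [], "c": [], "B": []}, indent = 2))

# Read the archive back to confirm the layout
with zipfile.ZipFile(buf) as doc:
    names = doc.namelist()
    control = json.loads(doc.read("control.json"))

print(sorted(names))            # four JSON members
print(control["num_blocks"])    # 1
```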
1126 | name: [Optional] Name of the PKZIP file to produce. If a ".zip" extension 1127 | is desired, the user needs to add it. Defaults to "mySDP". 1128 | """ 1129 | doc = zipfile.ZipFile(name, mode = 'w') 1130 | obj = self.reshuffle_with_normalization(obj, norm) 1131 | self.table += self.table_extension(self.points) 1132 | laguerre_points = [] 1133 | laguerre_degrees = [] 1134 | degree_sum = 0 1135 | 1136 | control_dict = {"num_blocks": len(self.table) - self.bounds.count(oo), "command": "python"} 1137 | control_str = json.dumps(control_dict, indent = 2) 1138 | doc.writestr("control.json", control_str) 1139 | 1140 | # Here, we use indices that match the SDPB specification 1141 | objectives_dict = {"constant": self.short_string(obj[0]), "b": []} 1142 | for n in range(1, len(obj)): 1143 | objectives_dict["b"].append(self.short_string(obj[n])) 1144 | objectives_str = json.dumps(objectives_dict, indent = 2) 1145 | doc.writestr("objectives.json", objectives_str) 1146 | 1147 | current_block = -1 1148 | for j in range(0, len(self.table)): 1149 | if j >= len(self.bounds): 1150 | delta_min = 0 1151 | else: 1152 | delta_min = self.bounds[j] 1153 | if delta_min == oo: 1154 | continue 1155 | else: 1156 | current_block += 1 1157 | size = len(self.table[j]) 1158 | degree = 0 1159 | 1160 | # A first pass is needed to get the maximum degree 1161 | for r in range(0, size): 1162 | for s in range(0, size): 1163 | polynomial_vector = self.table[j][r][s].vector 1164 | 1165 | for n in range(0, len(polynomial_vector)): 1166 | coeff_list = coefficients(polynomial_vector[n].expand()) 1167 | degree = max(degree, len(coeff_list) - 1) 1168 | 1169 | block_info_dict = {"dim": size, "num_points": degree + 1} 1170 | block_data_dict = {"bilinear_bases_even": [], "bilinear_bases_odd": [], "c": [], "B": []} 1171 | poles = self.table[j][0][0].poles 1172 | index = get_index(laguerre_degrees, degree) 1173 | degree_sum += degree + 1 1174 | 1175 | if j >= len(self.bounds): 1176 | points = [self.points[j 
- len(self.bounds)][1]] 1177 | elif index == -1: 1178 | points = self.make_laguerre_points(degree) 1179 | laguerre_points.append(points) 1180 | laguerre_degrees.append(degree) 1181 | else: 1182 | points = laguerre_points[index] 1183 | 1184 | scalings = [] 1185 | for d in range(0, degree + 1): 1186 | scalings.append(self.shifted_prefactor(poles, r_cross, points[d], eval_mpfr(delta_min, prec))) 1187 | 1188 | matrix = [] 1189 | if j >= len(self.bounds): 1190 | result = self.shifted_prefactor(poles, r_cross, points[0], zero) 1191 | result = one / sqrt(result) 1192 | matrix = DenseMatrix([[result]]) 1193 | else: 1194 | matrix = self.basis[j] 1195 | 1196 | for n in range(0, (degree // 2) + 1): 1197 | expression = build_polynomial(list(matrix.row(n))) 1198 | 1199 | even_parts = [] 1200 | odd_parts = [] 1201 | for d in range(0, degree + 1): 1202 | even_parts.append(sqrt(scalings[d]) * expression.subs(delta, points[d])) 1203 | odd_parts.append(sqrt(points[d]) * even_parts[-1]) 1204 | even_parts[-1] = str(even_parts[-1]) 1205 | odd_parts[-1] = str(odd_parts[-1]) 1206 | block_data_dict["bilinear_bases_even"].append(even_parts) 1207 | if degree % 2 == 0 and n == degree // 2: 1208 | break 1209 | block_data_dict["bilinear_bases_odd"].append(odd_parts) 1210 | 1211 | # Now we can evaluate everything at the points above 1212 | for r in range(0, size): 1213 | for s in range(0, size): 1214 | polynomial_vector = self.reshuffle_with_normalization(self.table[j][r][s].vector, norm) 1215 | 1216 | for d in range(0, degree + 1): 1217 | first = polynomial_vector[0].subs(delta, eval_mpfr(delta_min, prec) + points[d]) 1218 | block_data_dict["c"].append(str(scalings[d] * first)) 1219 | 1220 | rest = [] 1221 | for n in range(1, len(polynomial_vector)): 1222 | expression = polynomial_vector[n].subs(delta, eval_mpfr(delta_min, prec) + points[d]) 1223 | rest.append(str(-scalings[d] * expression)) 1224 | block_data_dict["B"].append(rest) 1225 | 1226 | if sdpb_version_major == 2 and 
sdpb_version_minor <= 5: 1227 | block_dict = {} 1228 | block_dict.update(block_info_dict) 1229 | block_dict.update(block_data_dict) 1230 | block_str = json.dumps(block_dict, indent = 2) 1231 | doc.writestr("block_" + str(current_block) + ".json", block_str) 1232 | else: 1233 | block_info_str = json.dumps(block_info_dict, indent = 2) 1234 | block_data_str = json.dumps(block_data_dict, indent = 2) 1235 | doc.writestr("block_info_" + str(current_block) + ".json", block_info_str) 1236 | doc.writestr("block_data_" + str(current_block) + ".json", block_data_str) 1237 | 1238 | # Recognize an SDP that looks overdetermined 1239 | if degree_sum < len(self.unit): 1240 | print("Crossing equations have too many derivative components", flush = True) 1241 | 1242 | self.table = self.table[:len(self.bounds)] 1243 | doc.close() 1244 | 1245 | def write_xml(self, obj, norm, name = "mySDP"): 1246 | """ 1247 | Outputs an XML file describing the `table`, `bounds`, `points` and `basis` 1248 | for this `SDP` in a format that `SDPB` can use to check for solvability. 1249 | If the user has the Elemental version of `SDPB` then the `pvm2sdp` utility 1250 | (assumed to be in the same directory) is also run. 1251 | 1252 | Parameters 1253 | ---------- 1254 | obj: Objective vector (often the `vector` part of a `PolynomialVector`) 1255 | whose action under the found functional should be maximized. 1256 | norm: Normalization vector (often the `vector` part of a `PolynomialVector`) 1257 | which should have unit action under the functionals. 1258 | name: [Optional] Name of the XML file to produce without any ".xml" at the 1259 | end. Defaults to "mySDP".
1260 | """ 1261 | obj = self.reshuffle_with_normalization(obj, norm) 1262 | self.table += self.table_extension(self.points) 1263 | laguerre_points = [] 1264 | laguerre_degrees = [] 1265 | degree_sum = 0 1266 | 1267 | doc = xml.dom.minidom.Document() 1268 | root_node = doc.createElement("sdp") 1269 | doc.appendChild(root_node) 1270 | 1271 | objective_node = doc.createElement("objective") 1272 | matrices_node = doc.createElement("polynomialVectorMatrices") 1273 | root_node.appendChild(objective_node) 1274 | root_node.appendChild(matrices_node) 1275 | 1276 | # Here, we use indices that match the SDPB specification 1277 | for n in range(0, len(obj)): 1278 | elt_node = doc.createElement("elt") 1279 | elt_node.appendChild(doc.createTextNode(self.short_string(obj[n]))) 1280 | objective_node.appendChild(elt_node) 1281 | 1282 | for j in range(0, len(self.table)): 1283 | if j >= len(self.bounds): 1284 | delta_min = 0 1285 | else: 1286 | delta_min = self.bounds[j] 1287 | if delta_min == oo: 1288 | continue 1289 | size = len(self.table[j]) 1290 | degree = 0 1291 | 1292 | matrix_node = doc.createElement("polynomialVectorMatrix") 1293 | rows_node = doc.createElement("rows") 1294 | cols_node = doc.createElement("cols") 1295 | elements_node = doc.createElement("elements") 1296 | sample_point_node = doc.createElement("samplePoints") 1297 | sample_scaling_node = doc.createElement("sampleScalings") 1298 | bilinear_basis_node = doc.createElement("bilinearBasis") 1299 | rows_node.appendChild(doc.createTextNode(size.__str__())) 1300 | cols_node.appendChild(doc.createTextNode(size.__str__())) 1301 | 1302 | for r in range(0, size): 1303 | for s in range(0, size): 1304 | polynomial_vector = self.reshuffle_with_normalization(self.table[j][r][s].vector, norm) 1305 | vector_node = doc.createElement("polynomialVector") 1306 | 1307 | for n in range(0, len(polynomial_vector)): 1308 | expression = polynomial_vector[n].expand() 1309 | # Impose unitarity bounds and the specified gap 1310 | 
expression = expression.subs(delta, delta + delta_min).expand() 1311 | coeff_list = coefficients(expression) 1312 | degree = max(degree, len(coeff_list) - 1) 1313 | 1314 | polynomial_node = doc.createElement("polynomial") 1315 | for coeff in coeff_list: 1316 | coeff_node = doc.createElement("coeff") 1317 | coeff_node.appendChild(doc.createTextNode(self.short_string(coeff))) 1318 | polynomial_node.appendChild(coeff_node) 1319 | vector_node.appendChild(polynomial_node) 1320 | elements_node.appendChild(vector_node) 1321 | 1322 | poles = self.table[j][0][0].poles 1323 | index = get_index(laguerre_degrees, degree) 1324 | 1325 | if j >= len(self.bounds): 1326 | points = [self.points[j - len(self.bounds)][1]] 1327 | elif index == -1: 1328 | points = self.make_laguerre_points(degree) 1329 | laguerre_points.append(points) 1330 | laguerre_degrees.append(degree) 1331 | else: 1332 | points = laguerre_points[index] 1333 | 1334 | for d in range(0, degree + 1): 1335 | elt_node = doc.createElement("elt") 1336 | elt_node.appendChild(doc.createTextNode(points[d].__str__())) 1337 | sample_point_node.appendChild(elt_node) 1338 | damped_rational = self.shifted_prefactor(poles, r_cross, points[d], eval_mpfr(delta_min, prec)) 1339 | elt_node = doc.createElement("elt") 1340 | elt_node.appendChild(doc.createTextNode(damped_rational.__str__())) 1341 | sample_scaling_node.appendChild(elt_node) 1342 | 1343 | matrix = [] 1344 | if j >= len(self.bounds): 1345 | result = self.shifted_prefactor(poles, r_cross, points[0], zero) 1346 | result = one / sqrt(result) 1347 | matrix = DenseMatrix([[result]]) 1348 | else: 1349 | matrix = self.basis[j] 1350 | 1351 | for d in range(0, (degree // 2) + 1): 1352 | polynomial_node = doc.createElement("polynomial") 1353 | for q in range(0, d + 1): 1354 | coeff_node = doc.createElement("coeff") 1355 | coeff_node.appendChild(doc.createTextNode(matrix[d, q].__str__())) 1356 | polynomial_node.appendChild(coeff_node) 1357 | 
bilinear_basis_node.appendChild(polynomial_node) 1358 | 1359 | matrix_node.appendChild(rows_node) 1360 | matrix_node.appendChild(cols_node) 1361 | matrix_node.appendChild(elements_node) 1362 | matrix_node.appendChild(sample_point_node) 1363 | matrix_node.appendChild(sample_scaling_node) 1364 | matrix_node.appendChild(bilinear_basis_node) 1365 | matrices_node.appendChild(matrix_node) 1366 | degree_sum += degree + 1 1367 | 1368 | # Recognize an SDP that looks overdetermined 1369 | if degree_sum < len(self.unit): 1370 | print("Crossing equations have too many derivative components", flush = True) 1371 | 1372 | self.table = self.table[:len(self.bounds)] 1373 | xml_file = open(name + ".xml", 'w') 1374 | doc.writexml(xml_file, addindent = " ", newl = '\n') 1375 | xml_file.close() 1376 | doc.unlink() 1377 | 1378 | if sdpb_version_major == 1: 1379 | return 1380 | elif sdpb_version_major == 2 and sdpb_version_minor <= 6: 1381 | pvm2sdp_path = os.path.dirname(sdpb_path) + "/pvm2sdp" 1382 | subprocess.check_call([mpirun_path, "-n", "1", pvm2sdp_path, "json", str(prec), name + ".xml", name]) 1383 | else: 1384 | pmp2sdp_path = os.path.dirname(sdpb_path) + "/pmp2sdp" 1385 | subprocess.check_call([mpirun_path, "-n", "1", pmp2sdp_path, "-f", "json", "-i", name + ".xml", "-o", name, "-p", str(prec)]) 1386 | 1387 | def read_output(self, name = "mySDP"): 1388 | """ 1389 | Reads an `SDPB` output file and returns a dictionary in which all entries 1390 | have been converted to their respective Python types. 1391 | 1392 | Parameters 1393 | ---------- 1394 | name: [Optional] The name of the file without any ".out" at the end. 1395 | Defaults to "mySDP". 
1396 | """ 1397 | ret = {} 1398 | if sdpb_version_major == 1: 1399 | out_file = open(name + ".out", 'r') 1400 | else: 1401 | out_file = open(name + "_out/out.txt", 'r') 1402 | for line in out_file: 1403 | (key, delimiter, value) = line.partition(" = ") 1404 | value = value.replace('\n', '') 1405 | value = value.replace(';', '') 1406 | value = value.replace('{', '[') 1407 | value = value.replace('}', ']') 1408 | value = re.sub(r"([0-9]+\.[0-9]+e?-?[0-9]+)", r"RealMPFR('\1', prec)", value) 1409 | command = "ret['" + key.strip() + "'] = " + value 1410 | exec(command) 1411 | out_file.close() 1412 | 1413 | if sdpb_version_major > 1: 1414 | y = [] 1415 | outfile = open(name + "_out/y.txt", 'r') 1416 | lines = outfile.readlines() 1417 | for i in range(1, len(lines)): 1418 | value = lines[i].replace('\n', '') 1419 | value = re.sub(r"([0-9]+\.[0-9]+e?-?[0-9]+)", r"RealMPFR('\1', prec)", value) 1420 | exec("y.append(" + value + ")") 1421 | outfile.close() 1422 | ret["y"] = y 1423 | return ret 1424 | 1425 | def iterate(self, name = "mySDP"): 1426 | """ 1427 | Returns `True` if this `SDP` with its current gaps represents an allowed CFT 1428 | and `False` otherwise. 1429 | 1430 | Parameters 1431 | ---------- 1432 | name: [Optional] The name of the XML file generated in the process 1433 | without any ".xml" at the end. Defaults to "mySDP". 
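The conversion performed by `read_output` above can be sketched on a fabricated output fragment, with plain `float` standing in for `RealMPFR` (the method itself also maps `{`/`}` to list brackets for vector-valued entries):

```python
import re

# A fabricated SDPB output fragment, not real solver output
sample = 'terminateReason = "found primal feasible solution";\nprimalObjective = 1.234567890e-10;\n'

ret = {}
for line in sample.splitlines():
    # Split each "key = value;" line and strip the trailing semicolon
    key, delimiter, value = line.partition(" = ")
    value = value.replace(';', '')
    # Wrap decimal literals so evaluation yields numbers; the package
    # substitutes RealMPFR('...', prec) here instead of float('...')
    value = re.sub(r"([0-9]+\.[0-9]+e?-?[0-9]+)", r"float('\1')", value)
    ret[key.strip()] = eval(value)

print(ret["terminateReason"])   # found primal feasible solution
print(ret["primalObjective"])   # a small float
```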
1434 | """ 1435 | obj = [0.0] * len(self.table[0][0][0].vector) 1436 | self.write_xml(obj, self.unit, name) 1437 | 1438 | if sdpb_version_major == 1: 1439 | subprocess.check_call([sdpb_path, "-s", name + ".xml", "--precision=" + str(prec), "--findPrimalFeasible", "--findDualFeasible", "--noFinalCheckpoint"] + self.options) 1440 | elif sdpb_version_major == 2 and 0 <= sdpb_version_minor <= 6: 1441 | ppn = self.get_option("procsPerNode") 1442 | ppn = str(max(1, int(ppn))) 1443 | self.set_option("procsPerNode", ppn) 1444 | subprocess.check_call([mpirun_path, "-n", ppn, sdpb_path, "-s", name, "--precision=" + str(prec), "--findPrimalFeasible", "--findDualFeasible"] + self.options) 1445 | else: 1446 | ppn = str(multiprocessing.cpu_count() // 2) 1447 | subprocess.check_call([mpirun_path, "-n", ppn, sdpb_path, "-s", name, "--precision=" + str(prec), "--findPrimalFeasible", "--findDualFeasible"] + self.options) 1448 | output = self.read_output(name = name) 1449 | 1450 | terminate_reason = output["terminateReason"] 1451 | return terminate_reason == "found primal feasible solution" 1452 | 1453 | def bisect(self, lower, upper, threshold, spin_irrep, isolated = False, reverse = False, bias = None, name = "mySDP"): 1454 | """ 1455 | Uses a binary search to find the maximum allowed gap in a particular type 1456 | of operator before the CFT stops existing. The allowed value closest to the 1457 | boundary is returned. 1458 | 1459 | Parameters 1460 | ---------- 1461 | lower: A scaling dimension for the operator known to be allowed. 1462 | upper: A scaling dimension for the operator known to be disallowed. 1463 | threshold: How accurate the bisection needs to be before returning. 1464 | spin_irrep: An ordered pair of the type passed to `set_bound`. Used to 1465 | label the spin and representation of the operator whose 1466 | dimension is being bounded. 
isolated: [Optional] Whether to bisect the position of an isolated 1468 | operator rather than the gap where the continuum starts. 1469 | Defaults to `False`. 1470 | reverse: [Optional] Whether we are looking for a lower bound instead of 1471 | an upper bound. This should only be used when `isolated` is 1472 | `True`. Defaults to `False`. 1473 | bias: [Optional] The ratio between the expected time needed to rule 1474 | out a CFT and the expected time needed to conclude that it 1475 | cannot be ruled out. Defaults to `None` which means that this 1476 | will be measured as the binary search progresses. 1477 | """ 1478 | x = 0.5 1479 | d_time = 0 1480 | p_time = 0 1481 | bias_found = False 1482 | checkpoints = False 1483 | old = self.get_bound(spin_irrep) 1484 | if bias != None: 1485 | bias = min(bias, 1.0 / bias) 1486 | 1487 | while abs(upper - lower) > threshold: 1488 | if bias == None and d_time != 0 and p_time != 0: 1489 | bias = p_time / d_time 1490 | if bias != None and bias_found == False: 1491 | # Bisection within a bisection 1492 | u = 0.5 1493 | l = 0.0 1494 | while abs(u - l) > 0.01: 1495 | x = (u + l) / 2.0 1496 | frac = log((x ** x) * ((1 - x) ** (1 - x))) / log(x / (1 - x)) 1497 | test = (frac - x) / (frac - x + 1) 1498 | if test > bias: 1499 | u = x 1500 | else: 1501 | l = x 1502 | bias_found = True 1503 | 1504 | test = lower + x * (upper - lower) 1505 | print("Trying " + test.__str__(), flush = True) 1506 | if isolated == True: 1507 | self.add_point(spin_irrep, test) 1508 | else: 1509 | self.set_bound(spin_irrep, test) 1510 | 1511 | # Using the same name twice in a row is only dangerous if the runs are really long 1512 | start = time.time() 1513 | if checkpoints and sdpb_version_major == 1: 1514 | result = self.iterate(name = str(start)) 1515 | else: 1516 | result = self.iterate(name = name) 1517 | end = time.time() 1518 | if int(end - start) > int(self.get_option("checkpointInterval")): 1519 | checkpoints = True 1520 | if isolated == True: 1521 |
self.points = self.points[:-1] 1522 | 1523 | if result == False: 1524 | if reverse == False: 1525 | upper = test 1526 | else: 1527 | lower = test 1528 | d_time = end - start 1529 | else: 1530 | if reverse == False: 1531 | lower = test 1532 | else: 1533 | upper = test 1534 | p_time = end - start 1535 | 1536 | self.set_bound(spin_irrep, old) 1537 | if reverse == False: 1538 | return lower 1539 | else: 1540 | return upper 1541 | 1542 | def opemax(self, dimension, spin_irrep, reverse = False, vector = None, name = "mySDP"): 1543 | """ 1544 | Minimizes or maximizes the squared length of the vector of OPE coefficients 1545 | involving an operator with a prescribed scaling dimension, spin and global 1546 | symmetry representation. This results in a matrix produced by the action of 1547 | the functional found by `SDPB`. If a direction in OPE space has been passed 1548 | then the corresponding matrix element is returned. Otherwise, the matrix is 1549 | returned and it is up to the user to find the unconstrained minimum or 1550 | maximum value by diagonalizing it. 1551 | 1552 | Parameters 1553 | ---------- 1554 | dimension: The scaling dimension of the operator whose OPE coefficient 1555 | vector is being bounded. 1556 | spin_irrep: An ordered pair of the type passed to `set_bound`. Used to label 1557 | the spin and representation of the operator whose OPE 1558 | coefficient vector is being bounded. 1559 | reverse: [Optional] Whether to minimize a squared OPE coefficient vector 1560 | instead of maximizing it. This only has a chance of working if 1561 | the bounds are such that the specified operator is isolated. 1562 | Defaults to `False`. 1563 | vector: [Optional] A unit vector specifying the direction in OPE space 1564 | being scanned if applicable. In a 2x2 scan, for instance, which 1565 | is specified by one angle, the components of this vector will 1566 | be the sine and the cosine. Defaults to `None`.
1567 | name: [Optional] Name of the XML file generated in the process without 1568 | any ".xml" at the end. Defaults to "mySDP". 1569 | """ 1570 | l = self.get_table_index(spin_irrep) 1571 | size = len(self.table[l]) 1572 | if reverse: 1573 | sign = -1 1574 | else: 1575 | sign = 1 1576 | prod = self.shifted_prefactor(self.table[l][0][0].poles, r_cross, dimension, 0) * sign 1577 | 1578 | if vector == None or len(vector) != size: 1579 | vec = [0] * size 1580 | vec[0] = 1 1581 | else: 1582 | vector_length = 0 1583 | for r in range(0, size): 1584 | vector_length += vector[r] ** 2 1585 | vector_length = sqrt(vector_length) 1586 | # Build vec directly: it was previously assigned to before being defined in this branch 1587 | vec = [vector[s] / vector_length for s in range(0, size)] 1588 | 1589 | norm = [] 1590 | for i in range(0, len(self.unit)): 1591 | el = 0 1592 | for r in range(0, size): 1593 | for s in range(0, size): 1594 | el += vec[r] * vec[s] * self.table[l][r][s].vector[i].subs(delta, dimension) 1595 | norm.append(el * prod) 1596 | functional = self.solution_functional(self.get_bound(spin_irrep), spin_irrep, self.unit, norm, name) 1597 | output = self.read_output(name = name) 1598 | primal_value = output["primalObjective"] 1599 | if size == 1 or vector != None: 1600 | return float(primal_value) * (-1) 1601 | 1602 | # This primal value will be divided by 1 or something different if the matrix is not 1x1 1603 | outer_list = [] 1604 | for r in range(0, size): 1605 | inner_list = [] 1606 | for s in range(0, size): 1607 | inner_product = 0.0 1608 | polynomial_vector = self.reshuffle_with_normalization(self.table[l][r][s].vector, norm) 1609 | 1610 | for i in range(0, len(self.table[l][r][s].vector)): 1611 | inner_product += functional[i] * polynomial_vector[i] 1612 | inner_product = inner_product.subs(delta, dimension) 1613 | 1614 | inner_list.append(float(inner_product)) 1615 | outer_list.append(inner_list) 1616 | if reverse: 1617 | print("Divide " + str(float(primal_value)) + " by the maximum eigenvalue") 1618 | else: 1619 | print("Divide " +
str(float(primal_value)) + " by the minimum eigenvalue") 1620 | return DenseMatrix(outer_list) 1621 | 1622 | def solution_functional(self, dimension, spin_irrep, obj = None, norm = None, name = "mySDP"): 1623 | """ 1624 | Returns a functional (list of numerical components) that serves as a 1625 | solution to the `SDP`. Like `iterate`, this sets a bound, generates an XML 1626 | file and calls `SDPB`. However, rather than stopping after it determines 1627 | that the `SDP` is indeed solvable, it will finish the computation to find 1628 | the actual functional. 1629 | 1630 | Parameters 1631 | ---------- 1632 | dimension: The minimum value of the scaling dimension to test. 1633 | spin_irrep: An ordered pair of the type passed to `set_bound`. Used to label 1634 | the spin / representation of the operator being given a minimum 1635 | scaling dimension of `dimension`. 1636 | obj: [Optional] The objective vector whose action under the found 1637 | functional should be maximized. Defaults to `None` which means 1638 | it will be determined automatically just like it is in 1639 | `iterate`. 1640 | norm: [Optional] Normalization vector which should have unit action 1641 | under the functional. Defaults to `None` which means it will be 1642 | determined automatically just like it is in `iterate`. 1643 | name: [Optional] The name of the XML file generated in the process 1644 | without any ".xml" at the end. Defaults to "mySDP". 
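Example (illustrative only): the list returned here is meant to be contracted with `PolynomialVector` components, as `opemax` and `extremal_dimensions` do. A minimal stand-in for that contraction, with plain floats and lambdas in place of `RealMPFR` and symengine expressions:

```python
# Acting with a functional on a vector of polynomials in delta.
def act(functional, poly_vector, delta_value):
    return sum(a * p(delta_value) for a, p in zip(functional, poly_vector))

alpha = [1.0, 0.5, -0.25]                            # functional components
vec = [lambda d: 1.0, lambda d: d, lambda d: d * d]  # toy polynomial vector
value = act(alpha, vec, 2.0)                         # 1 + 0.5*2 - 0.25*4 = 1.0
```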
1645 | """ 1646 | if obj == None: 1647 | obj = [0.0] * len(self.table[0][0][0].vector) 1648 | if norm == None: 1649 | norm = self.unit 1650 | 1651 | old = self.get_bound(spin_irrep) 1652 | self.set_bound(spin_irrep, dimension) 1653 | self.write_xml(obj, norm, name) 1654 | self.set_bound(spin_irrep, old) 1655 | 1656 | if sdpb_version_major == 1: 1657 | subprocess.check_call([sdpb_path, "-s", name + ".xml", "--precision=" + str(prec), "--noFinalCheckpoint"] + self.options) 1658 | elif sdpb_version_major == 2 and 0 <= sdpb_version_minor <= 6: 1659 | ppn = self.get_option("procsPerNode") 1660 | ppn = str(max(1, int(ppn))) 1661 | self.set_option("procsPerNode", ppn) 1662 | subprocess.check_call([mpirun_path, "-n", ppn, sdpb_path, "-s", name, "--precision=" + str(prec), "--noFinalCheckpoint"] + self.options) 1663 | else: 1664 | ppn = str(multiprocessing.cpu_count() // 2) 1665 | subprocess.check_call([mpirun_path, "-n", ppn, sdpb_path, "-s", name, "--precision=" + str(prec), "--noFinalCheckpoint"] + self.options) 1666 | output = self.read_output(name = name) 1667 | return [one] + output["y"] 1668 | 1669 | def convert_spectrum_file(self, input_path, output_path, rescaling = 4 ** delta): 1670 | """ 1671 | Reads a spectrum produced by the arXiv:1603.04444 script and outputs a file 1672 | with physical dimensions and OPE coefficients. Instead of a scaling 1673 | dimension, the original file reports the difference between the scaling 1674 | dimension and the gap. Instead of an OPE coefficient, the original file 1675 | reports the factor relating the OPE coefficient to the positive prefactor. 1676 | Note that this only works if `set_bound` has not been called since the 1677 | XML file was generated. 1678 | 1679 | Parameters 1680 | ---------- 1681 | input_path: The path to the spectrum in Mathematica-like format. 1682 | output_path: The path desired for the file after the additive and 1683 | multiplicative corrections have been performed. 
rescaling: [Optional] An expression, which may depend on `delta` and 1685 | `ell`, for changing the convention used for OPE coefficients. 1686 | Defaults to 4 ** delta. 1687 | """ 1688 | in_file = open(input_path, 'r') 1689 | out_file = open(output_path, 'w') 1690 | 1691 | out_file.write('{') 1692 | for j in range(0, len(self.table) + len(self.points)): 1693 | if j >= len(self.table): 1694 | shift = self.points[j - len(self.table)][1] 1695 | spin = self.points[j - len(self.table)][0][0] 1696 | else: 1697 | shift = self.bounds[j] 1698 | spin = self.table[j][0][0].label[0] 1699 | out_file.write(str(j) + " -> ") 1700 | line = next(in_file)[:-2].split("->")[1] 1701 | line = line.replace('{', '[').replace('}', ']') 1702 | line = re.sub(r"([0-9]+\.[0-9]+e?-?[0-9]+)", r"RealMPFR('\1', prec)", line) 1703 | ops = eval(line) # eval instead of exec: exec() cannot rebind a local in Python 3 1704 | for o in range(0, len(ops)): 1705 | ops[o][0] = ops[o][0] + shift 1706 | if j >= len(self.table): 1707 | prod = 1 1708 | else: 1709 | prod = self.shifted_prefactor(self.table[j][0][0].poles, r_cross, ops[o][0], 0) 1710 | if "subs" in dir(rescaling): 1711 | prod *= rescaling.subs(delta, ops[o][0]).subs(ell, spin) 1712 | else: 1713 | prod *= rescaling 1714 | for t in range(0, len(ops[o][1])): 1715 | ops[o][1][t] = ops[o][1][t] / sqrt(prod) 1716 | ops_str = str(ops).replace('[', '{').replace(']', '}') 1717 | out_file.write(ops_str + ",\n") 1718 | # Copy the objective at the end 1719 | out_file.write(next(in_file)) 1720 | 1721 | in_file.close() 1722 | out_file.close() 1723 | 1724 | def extremal_dimensions(self, functional, spin_irrep, zero_threshold): 1725 | """ 1726 | When a functional acts on `PolynomialVector`s, this finds approximate zeros 1727 | of the resulting expression with the `unisolve` executable. When the sum 1728 | rule has matrices of `PolynomialVector`s, sufficiently small local minima 1729 | of their determinants are returned. The list consists of dimensions for a 1730 | given spin and representation.
The logic is a subset of that used in the 1731 | arXiv:1603.04444 script. 1732 | 1733 | Parameters 1734 | ---------- 1735 | functional: A list of functional components of the type returned by 1736 | `solution_functional`. 1737 | spin_irrep: An ordered pair used to label the type of operator whose 1738 | extremal dimensions are being found. The first entry is the 1739 | spin and the second entry is the representation label found 1740 | in `vector_types`. 1741 | zero_threshold: The threshold for identifying a real zero. The determinant 1742 | over its second derivative must be less than this value. 1743 | """ 1744 | unisolve_path = find_executable("unisolve") 1745 | 1746 | zeros = [] 1747 | entries = [] 1748 | l = self.get_table_index(spin_irrep) 1749 | 1750 | size = len(self.table[l]) 1751 | for r in range(0, size): 1752 | for s in range(0, size): 1753 | inner_product = 0.0 1754 | polynomial_vector = self.reshuffle_with_normalization(self.table[l][r][s].vector, self.unit) 1755 | 1756 | for i in range(0, len(self.table[l][r][s].vector)): 1757 | inner_product += functional[i] * polynomial_vector[i] 1758 | inner_product = inner_product.expand() 1759 | 1760 | entries.append(inner_product) 1761 | 1762 | matrix = DenseMatrix(size, size, entries) 1763 | det0 = matrix.det().expand() 1764 | det1 = det0.diff(delta) 1765 | det2 = det1.diff(delta) 1766 | coeffs = coefficients(det1) 1767 | # Pass output to unisolve 1768 | pol_file = open("tmp.pol", 'w') 1769 | pol_file.write("drf\n") 1770 | pol_file.write(str(prec) + "\n") 1771 | pol_file.write(str(len(coeffs) - 1) + "\n") 1772 | for c in coeffs: 1773 | pol_file.write(str(c) + "\n") 1774 | pol_file.close() 1775 | spec = subprocess.check_output([unisolve_path, "-H1", "-o" + str(prec), "-Oc", "-Ga", "tmp.pol"]).decode('utf-8') # decode: check_output returns bytes under Python 3 1776 | spec_lines = spec.split('\n')[:-1] 1777 | for line in spec_lines: 1778 | pair = line.replace('(', '').replace(')', '').split(',') 1779 | real = RealMPFR(pair[0], prec) 1780 | imag = RealMPFR(pair[1], prec) 1781 | if
imag < tiny and det0.subs(delta, real) / det2.subs(delta, real) < zero_threshold: 1782 | zeros.append(real) 1783 | return zeros 1784 | 1785 | def extremal_coefficients(self, dimensions, spin_irreps, nullity = 1): 1786 | """ 1787 | Once the full extremal spectrum is known, one can reconstruct the OPE 1788 | coefficients that cause those convolved conformal blocks to sum to the 1789 | `SDP`'s `unit`. This outputs a vector of squared OPE coefficients 1790 | determined in this way. 1791 | 1792 | Parameters 1793 | ---------- 1794 | dimensions: A list of dimensions in the spectrum as returned by 1795 | `extremal_dimensions`. However, it must be the union of such 1796 | scaling dimensions over all possible `spin_irrep` inputs to 1797 | `extremal_dimensions`. 1798 | spin_irreps: A list of ordered pairs of the type passed to 1799 | `extremal_dimensions` used to label the spin and global 1800 | symmetry representations of all operators that 1801 | `extremal_dimensions` can find. This list must be in the same 1802 | order used for `dimensions`. 1803 | nullity: [Optional] The number of extra equations to use beyond the 1804 | number of unknown variables. If this is non-zero, a positivity 1805 | constraint will be placed on the optimal OPE coefficients. 1806 | Defaults to 1. 
1807 | """ 1808 | # Builds an auxiliary table to store the specific vectors in this sum rule 1809 | extremal_table = [] 1810 | zeros = min(len(dimensions), len(spin_irreps)) 1811 | for j in range(0, zeros): 1812 | if type(spin_irreps[j]) == type(1): 1813 | spin_irreps[j] = [spin_irreps[j], 0] 1814 | l = self.get_table_index(spin_irreps[j]) 1815 | factor = self.shifted_prefactor(self.table[l][0][0].poles, r_cross, dimensions[j], 0) 1816 | size = len(self.table[l]) 1817 | outer_list = [] 1818 | for r in range(0, size): 1819 | inner_list = [] 1820 | for s in range(0, size): 1821 | extremal_entry = [] 1822 | for i in range(0, len(self.unit)): 1823 | extremal_entry.append(self.table[l][r][s].vector[i].subs(delta, dimensions[j]) * factor) 1824 | inner_list.append(extremal_entry) 1825 | outer_list.append(inner_list) 1826 | extremal_table.append(outer_list) 1827 | 1828 | # Determines the crossing equations where OPE coefficients only enter diagonally 1829 | good_rows = [] 1830 | for i in range(0, len(self.unit)): 1831 | j = 0 1832 | good_row = True 1833 | while j < zeros and good_row == True: 1834 | size = len(extremal_table[j]) 1835 | for r in range(0, size): 1836 | for s in range(0, size): 1837 | if abs(extremal_table[j][r][s][i]) > tiny and r != s: 1838 | good_row = False 1839 | j += 1 1840 | if good_row == True: 1841 | good_rows.append(i) 1842 | 1843 | fail = False 1844 | known_ops = [] 1845 | # We go through the good rows, each time removing a chunk of them that uniformly includes an OPE coefficient that is known 1846 | # On the first iteration, when we do not know any, we pull out the ones that are inhomogeneous due to the identity 1847 | while len(good_rows) > 0 and fail == False: 1848 | other_rows = [] 1849 | current_rows = [] 1850 | current_coeffs = [] 1851 | new_dimensions = [] 1852 | new_spin_irreps = [] 1853 | 1854 | current_target = [0, -1, -1] 1855 | for i in good_rows: 1856 | potential_coeffs = [] 1857 | if len(known_ops) == 0 and abs(self.unit[i]) < tiny:
1858 | other_rows.append(i) 1859 | elif len(known_ops) == 0: 1860 | current_rows.append(i) 1861 | elif current_target[0] == 0: 1862 | j = 0 1863 | found = False 1864 | while j < zeros and found == False: 1865 | size = len(extremal_table[j]) 1866 | for vec in self.irrep_set: 1867 | if vec[1] == spin_irreps[j][1]: 1868 | break 1869 | r = 0 1870 | while r < size and found == False: 1871 | dim_set1 = [vec[0][0][r][r][0], vec[0][0][r][r][1], dimensions[j]] 1872 | dim_set1 = sorted(dim_set1) 1873 | for c in known_ops: 1874 | dim_set2 = [c[1], c[2], c[3]] 1875 | dim_set2 = sorted(dim_set2) 1876 | if abs(dim_set1[0] - dim_set2[0]) < 0.01 and abs(dim_set1[1] - dim_set2[1]) < 0.01 and abs(dim_set1[2] - dim_set2[2]) < 0.01: 1877 | # OPE coefficient symmetry only holds with a particular normalization 1878 | current_target = [(4.0 ** (dimensions[j] - c[3])) * c[0], j, r] 1879 | found = True 1880 | break 1881 | r += 1 1882 | j += 1 1883 | if found == False: 1884 | # This could happen if the SDP given to us does not correspond to the bootstrap of a physical theory 1885 | print("Leads exhausted") 1886 | fail = True 1887 | if current_target[0] != 0: 1888 | j = current_target[1] 1889 | r = current_target[2] 1890 | if abs(extremal_table[j][r][r][i]) < tiny: 1891 | other_rows.append(i) 1892 | else: 1893 | current_rows.append(i) 1894 | good_rows = other_rows 1895 | 1896 | # Determine all the OPE coefficients that could possibly be solved using these rows 1897 | for i in current_rows: 1898 | for j in range(0, zeros): 1899 | size = len(extremal_table[j]) 1900 | for r in range(0, size): 1901 | if abs(extremal_table[j][r][r][i]) < tiny: 1902 | continue 1903 | if j == current_target[1] and r == current_target[2]: 1904 | continue 1905 | found_one = False 1906 | found_both = False 1907 | for c in current_coeffs: 1908 | if c[0] == j and c[1] == r: 1909 | found_one = True 1910 | found_both = True 1911 | break 1912 | elif c[0] == j: 1913 | found_one = True 1914 | if found_both == False: 1915 | 
current_coeffs.append((j, r)) 1916 | if found_one == False: 1917 | new_dimensions.append(dimensions[j]) 1918 | new_spin_irreps.append(spin_irreps[j]) 1919 | 1920 | # If there are more operators than crossing equations, we must remove those of highest dimension 1921 | if len(current_coeffs) + nullity > len(current_rows): 1922 | refine = True 1923 | kept_coeffs = [] 1924 | 1925 | while refine == True: 1926 | index_new = new_dimensions.index(min(new_dimensions)) 1927 | # Allow for different operators of the same dimension 1928 | target_dimension = new_dimensions[index_new] 1929 | target_spin_irrep = new_spin_irreps[index_new] 1930 | for index_old in range(0, len(dimensions)): 1931 | if abs(dimensions[index_old] - target_dimension) < tiny and spin_irreps[index_old] == target_spin_irrep: 1932 | break 1933 | new_coeffs = [] 1934 | for pair in current_coeffs: 1935 | if pair[0] == index_old: 1936 | new_coeffs.append(pair) 1937 | if len(new_coeffs) + len(kept_coeffs) + nullity <= len(current_rows): 1938 | kept_coeffs = kept_coeffs + new_coeffs 1939 | new_dimensions = new_dimensions[:index_new] + new_dimensions[index_new + 1:] 1940 | new_spin_irreps = new_spin_irreps[:index_new] + new_spin_irreps[index_new + 1:] 1941 | refine = (len(new_dimensions) > 0) 1942 | else: 1943 | refine = False 1944 | current_coeffs = kept_coeffs 1945 | 1946 | # If there are more crossing equations than operators, we must omit the ones corresponding to high derivatives 1947 | # The last case might land us in this one as well if some OPE coefficients show up in pairs 1948 | if len(current_rows) > len(current_coeffs) + nullity: 1949 | current_rows = sorted(current_rows, key = lambda i: self.m_order[i] + self.n_order[i]) 1950 | current_rows = current_rows[:len(current_coeffs) + nullity] 1951 | 1952 | # Solve our system now that it is square 1953 | identity = [] 1954 | extremal_blocks = [] 1955 | size = len(current_coeffs) 1956 | if current_target[0] != 0: 1957 | j_id = current_target[1] 1958 | r_id = 
current_target[2] 1959 | for i in current_rows: 1960 | if current_target[0] == 0: 1961 | identity.append(self.unit[i]) 1962 | else: 1963 | identity.append(extremal_table[j_id][r_id][r_id][i]) 1964 | for pair in current_coeffs: 1965 | (j, r) = pair 1966 | extremal_blocks.append(float(extremal_table[j][r][r][i])) 1967 | identity = DenseMatrix(size + nullity, 1, identity) 1968 | extremal_matrix = DenseMatrix(size + nullity, size, extremal_blocks) 1969 | if nullity == 0: 1970 | solution = extremal_matrix.solve(identity) 1971 | else: 1972 | solution = self.least_absolute_distance(extremal_matrix, identity) 1973 | 1974 | # Add these coefficients, along with other things we know, to the list of operators 1975 | for i in range(0, len(current_coeffs)): 1976 | (j, r) = current_coeffs[i] 1977 | ope_coeff = solution.get(i, 0) 1978 | for vec in self.irrep_set: 1979 | if vec[1] == spin_irreps[j][1]: 1980 | break 1981 | dim1 = vec[0][r][r][0] 1982 | dim2 = vec[0][r][r][1] 1983 | known_ops.append([ope_coeff, dim1, dim2, dimensions[j], spin_irreps[j]]) 1984 | return known_ops 1985 | 1986 | def least_absolute_distance(self, matrix, vector): 1987 | """ 1988 | This returns the vector which is closest in the 1-norm to being a solution 1989 | of an inhomogeneous linear system. It is convenient to overload `SDPB` as a 1990 | linear program solver here. 1991 | 1992 | Parameters 1993 | ---------- 1994 | matrix: A matrix having more rows than columns. 1995 | vector: A vector whose length is the row dimension of `matrix`. 
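Example (illustrative only, invented values): the reformulation used below is the standard one in which each absolute residual of `A x - b` is replaced by a slack variable bounded from both sides, making the 1-norm objective linear. The quantity being minimized is:

```python
# The 1-norm of the residual of an overdetermined linear system.
def one_norm_residual(matrix, x, vector):
    total = 0.0
    for row, b in zip(matrix, vector):
        r = sum(a * xi for a, xi in zip(row, x)) - b
        total += abs(r)
    return total

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # 3 equations, 2 unknowns
b = [1.0, 2.0, 2.5]
cost = one_norm_residual(A, [1.0, 2.0], b)  # residuals (0, 0, 0.5)
```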
1996 | """ 1997 | zeros = matrix.ncols() 1998 | nullity = matrix.nrows() - zeros 1999 | 2000 | # The initial zero is the b_0 ignored by SDPB 2001 | obj = [0] 2002 | for i in range(0, zeros + nullity): 2003 | obj.append(-1) 2004 | for i in range(0, zeros): 2005 | obj.append(0) 2006 | 2007 | constraint_vector = [] 2008 | for i in range(0, zeros + nullity): 2009 | constraint_vector.append(vector.get(i, 0) * (-1)) 2010 | for i in range(0, zeros + nullity): 2011 | constraint_vector.append(vector.get(i, 0)) 2012 | 2013 | constraint_matrix = [] 2014 | for i in range(0, 2 * (zeros + nullity)): 2015 | constraint_matrix.append([zero] * (2 * zeros + nullity)) 2016 | for i in range(0, zeros + nullity): 2017 | constraint_matrix[i][i] = one 2018 | constraint_matrix[zeros + nullity + i][i] = one 2019 | for i in range(0, zeros + nullity): 2020 | for j in range(0, zeros): 2021 | constraint_matrix[i][zeros + nullity + j] = matrix.get(i, j) * (-1) 2022 | constraint_matrix[zeros + nullity + i][zeros + nullity + j] = matrix.get(i, j) 2023 | 2024 | # To solve this with scipy, one could stop here 2025 | #return linprog(-obj[1:], -constraint_matrix, -constraint_vector) 2026 | 2027 | extra = [] 2028 | for i in range(0, 2 * zeros + nullity): 2029 | extra.append([zero] * (2 * zeros + nullity)) 2030 | for i in range(0, 2 * zeros + nullity): 2031 | extra[i][i] = one 2032 | constraint_matrix = extra + constraint_matrix 2033 | constraint_vector = [zero] * (2 * zeros + nullity) + constraint_vector 2034 | 2035 | # Now that the functional components are positive, make a toy SDP for this 2036 | aux_table1 = ConformalBlockTable(1, 0, 0, 0, 0) 2037 | aux_table2 = ConvolvedBlockTable(aux_table1) 2038 | aux_sdp = SDP(0, aux_table2) 2039 | aux_sdp.bounds = [0] * len(constraint_vector) 2040 | aux_sdp.basis = [DenseMatrix([[1]])] * len(constraint_vector) 2041 | for i in range(0, len(constraint_vector)): 2042 | block = [constraint_vector[i]] + constraint_matrix[i] 2043 | 
aux_sdp.table.append([[PolynomialVector(block, [0, 0], [])]]) 2044 | 2045 | norm = [0] * len(obj) 2046 | norm[0] = -1 2047 | aux_sdp.write_xml(obj, norm, name = "tmp") 2048 | 2049 | # SDPB should now run quickly with default options 2050 | if sdpb_version_major == 1: 2051 | subprocess.check_call([sdpb_path, "-s", "tmp.xml", "--noFinalCheckpoint"]) 2052 | else: 2053 | subprocess.check_call([mpirun_path, "-n", "1", sdpb_path, "-s", "tmp.xml", "--noFinalCheckpoint"]) 2054 | output = self.read_output(name = "tmp") 2055 | solution = output["y"] 2056 | solution = solution[zeros + nullity:] 2057 | return DenseMatrix(zeros, 1, solution) 2058 | -------------------------------------------------------------------------------- /common.py: -------------------------------------------------------------------------------- 1 | cutoff = 0 2 | prec = 660 3 | dec_prec = int((3.0 / 10.0) * prec) 4 | tiny = RealMPFR("1e-" + str(dec_prec // 2), prec) 5 | 6 | zero = zero.n(prec) 7 | one = one.n(prec) 8 | two = 2 * one 9 | r_cross = 3 - 2 * sqrt(2).n(prec) 10 | 11 | ell = Symbol('ell') 12 | delta = Symbol('delta') 13 | delta_ext = Symbol('delta_ext') 14 | 15 | # Default paths, used as first priority if they exist 16 | sdpb_path = "/usr/bin/sdpb" 17 | mpirun_path = "/usr/bin/mpirun" 18 | 19 | def find_executable(name): 20 | if os.path.isfile(name): 21 | return name 22 | else: 23 | for path in os.environ["PATH"].split(os.pathsep): 24 | test = os.path.join(path, name) 25 | if os.path.isfile(test): 26 | return test 27 | else: 28 | raise EnvironmentError("%s was not found on path."
% name) 29 | 30 | # If default path doesn't apply, look for SDPB on user's PATH 31 | if not os.path.isfile(sdpb_path): 32 | sdpb_path = find_executable("sdpb") 33 | 34 | # Determine major and minor version of SDPB 35 | proc = subprocess.Popen([sdpb_path, "--version"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) 36 | (stdout, _) = proc.communicate() 37 | if proc.returncode != 0: 38 | # Assume that this is version 1.x, which didn't support --version 39 | sdpb_version_major = 1 40 | sdpb_version_minor = 0 41 | else: 42 | # Otherwise parse the output of --version 43 | m = re.search(r"SDPB ([0-9]+)\.([0-9]+)", str(stdout)) # escape the dot and allow multi-digit version numbers 44 | if m is None: 45 | raise RuntimeError("Failed to retrieve SDPB version.") 46 | sdpb_version_major = int(m.group(1)) 47 | sdpb_version_minor = int(m.group(2)) 48 | 49 | sdpb_options = ["checkpointInterval", "maxIterations", "maxRuntime", "dualityGapThreshold", "primalErrorThreshold", "dualErrorThreshold", "initialMatrixScalePrimal", "initialMatrixScaleDual", "feasibleCenteringParameter", "infeasibleCenteringParameter", "stepLengthReduction", "maxComplementarity"] 50 | sdpb_defaults = ["3600", "500", "86400", "1e-30", "1e-30", "1e-30", "1e+20", "1e+20", "0.1", "0.3", "0.7", "1e+100"] 51 | if sdpb_version_major == 1: 52 | sdpb_options = ["maxThreads", "choleskyStabilizeThreshold"] + sdpb_options 53 | sdpb_defaults = ["4", "1e-40"] + sdpb_defaults 54 | if sdpb_version_minor > 1: 55 | sdpb_options = ["verbosity"] + sdpb_options 56 | sdpb_defaults = ["1"] + sdpb_defaults 57 | if sdpb_version_major == 2 and 0 <= sdpb_version_minor <= 6: 58 | sdpb_options = ["procsPerNode"] + sdpb_options 59 | sdpb_defaults = ["0"] + sdpb_defaults 60 | if sdpb_version_major == 2 and 1 <= sdpb_version_minor <= 7: 61 | sdpb_options = ["procGranularity"] + sdpb_options 62 | sdpb_defaults = ["1"] + sdpb_defaults 63 | if sdpb_version_major > 2 or (sdpb_version_major == 2 and sdpb_version_minor >= 5): 64 | sdpb_options = ["minPrimalStep", "minDualStep"] + sdpb_options 65 |
sdpb_defaults = ["0", "0"] + sdpb_defaults 66 | if sdpb_version_major > 2: 67 | sdpb_options = ["maxSharedMemory"] + sdpb_options 68 | sdpb_defaults = ["0"] + sdpb_defaults 69 | if sdpb_version_major > 1: 70 | if not os.path.isfile(mpirun_path): 71 | mpirun_path = find_executable("mpirun") 72 | 73 | def rf(x, n): 74 | """ 75 | Implements the rising factorial or Pochhammer symbol. 76 | """ 77 | ret = 1 78 | if n < 0: 79 | return rf(x - abs(n), abs(n)) ** (-1) 80 | for k in range(0, n): 81 | ret *= x + k 82 | return ret 83 | 84 | def deepcopy(array): 85 | """ 86 | Copies a list of a list so that entries can be changed non-destructively. 87 | """ 88 | ret = [] 89 | for el in array: 90 | ret.append(list(el)) 91 | return ret 92 | 93 | def index_iter(iter, n): 94 | """ 95 | Returns the nth element of an iterator. 96 | """ 97 | return next(itertools.islice(iter, n, None)) 98 | 99 | def get_index(array, element, start = 0): 100 | """ 101 | Finds where an element occurs in an array or -1 if not present. 102 | """ 103 | for i, v in itertools.islice(enumerate(array), start, None): 104 | if v == element: 105 | return i 106 | return -1 107 | 108 | def get_index_approx(array, element, start = 0): 109 | """ 110 | Finds where an element numerically close to the one given occurs in an array 111 | or -1 if not present. 112 | """ 113 | for i, v in itertools.islice(enumerate(array), start, None): 114 | if abs(v - element) < tiny: 115 | return i 116 | return -1 117 | 118 | def gather(array): 119 | """ 120 | Finds (approximate) duplicates in a list and returns a dictionary that counts 121 | the number of appearances. 
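Example (standalone stand-in, with plain floats and a fixed tolerance in place of `tiny`):

```python
# Count approximately-equal entries, mirroring what gather() computes.
def gather_approx(array, tol=1e-9):
    counts = {}
    remaining = list(array)
    while remaining:
        head = remaining[0]
        counts[head] = sum(1 for x in remaining if abs(x - head) < tol)
        remaining = [x for x in remaining if abs(x - head) >= tol]
    return counts

counts = gather_approx([1.0, 2.0, 1.0 + 1e-12, 3.0, 2.0])
# counts is {1.0: 2, 2.0: 2, 3.0: 1}
```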
122 | """ 123 | ret = {} 124 | backup = list(array) 125 | while len(backup) > 0: 126 | i = 0 127 | hits = [] 128 | while i >= 0: 129 | hits.append(i) 130 | i = get_index_approx(backup, backup[i], i + 1) 131 | ret[backup[0]] = len(hits) 132 | hits.reverse() 133 | for i in hits: 134 | backup = backup[:i] + backup[i + 1:] 135 | return ret 136 | 137 | def extract_power(term): 138 | """ 139 | Returns the degree of a single term in a polynomial. Symengine stores these 140 | as (coefficient, (delta, exponent)). This is helpful for sorting polynomials 141 | which are not sorted by default. 142 | """ 143 | if not "args" in dir(term): 144 | return 0 145 | 146 | if term.args == (): 147 | return 0 148 | elif term.args[1].args == (): 149 | return 1 150 | else: 151 | return int(term.args[1].args[1]) 152 | 153 | def coefficients(polynomial): 154 | """ 155 | Returns a sorted list of all coefficients in a polynomial starting with the 156 | constant term. Zeros are automatically added so that the length of the list 157 | is always one more than the degree. 158 | """ 159 | if not "args" in dir(polynomial): 160 | return [polynomial] 161 | if polynomial.args == (): 162 | return [polynomial] 163 | 164 | coeff_list = sorted(polynomial.args, key = extract_power) 165 | degree = extract_power(coeff_list[-1]) 166 | 167 | pos = 0 168 | ret = [] 169 | for d in range(0, degree + 1): 170 | if extract_power(coeff_list[pos]) == d: 171 | if d == 0: 172 | ret.append(RealMPFR(str(coeff_list[0]), prec)) 173 | else: 174 | ret.append(RealMPFR(str(coeff_list[pos].args[0]), prec)) 175 | pos += 1 176 | else: 177 | ret.append(0) 178 | return ret 179 | 180 | def build_polynomial(coefficients): 181 | """ 182 | Returns a polynomial in `delta` from a list of coefficients. The first one is 183 | expected to be the constant term. 
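For example, with this convention the list `[1, 2, 3]` stands for 1 + 2*delta + 3*delta**2. A numeric stand-in (floats instead of symengine objects) evaluating such a list:

```python
# Evaluate a coefficient list (constant term first) at a numeric point,
# mirroring the accumulation loop below.
def eval_poly(coefficients, x):
    value, power = 0.0, 1.0
    for c in coefficients:
        value += c * power
        power *= x
    return value

v = eval_poly([1.0, 2.0, 3.0], 2.0)   # 1 + 2*2 + 3*4 = 17.0
```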
184 | """ 185 | ret = 0 186 | prod = 1 187 | for d in range(0, len(coefficients)): 188 | ret += coefficients[d] * prod 189 | prod *= delta 190 | return ret 191 | 192 | def unitarity_bound(dim, spin): 193 | """ 194 | Returns the lower bound for conformal dimensions in a unitary theory for a 195 | given spatial dimension and spin. 196 | """ 197 | if spin == 0: 198 | return (dim / Integer(2)) - 1 199 | else: 200 | return dim + spin - 2 201 | 202 | def omit_all(poles, special_poles, var, shift = 0): 203 | """ 204 | Instead of returning a product of poles where each pole is not in a special 205 | list, this returns a product where each pole is subtracted from some variable. 206 | """ 207 | expression = 1 208 | gathered1 = gather(poles) 209 | gathered0 = gather(special_poles) 210 | for p in gathered1.keys(): 211 | ind = get_index_approx(gathered0.keys(), p + shift) 212 | if ind == -1: 213 | power = 0 214 | else: 215 | power = gathered0[index_iter(gathered0.keys(), ind)] 216 | expression *= (var - p) ** (gathered1[p] - power) 217 | return expression 218 | 219 | def dump_table_contents(block_table, name, delta_ext_sub = 0): 220 | """ 221 | This is called by `ConformalBlockTable` and `ConformalBlockTableSeed`. It 222 | writes executable Python code to a file designed to recreate the full set of 223 | the table's attributes as quickly as possible. 
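Example (the regex in isolation): the key trick in the dump is the regular expression that wraps every decimal literal so the dumped file rebuilds it at full precision rather than as a double. The output below is just the transformed string; `RealMPFR` and `prec` only appear inside it as text:

```python
import re

# Wrap decimal literals the way the dump does.
def wrap_floats(poly_string):
    return re.sub(r"([0-9]+\.[0-9]+e?-?[0-9]+)", r"RealMPFR('\1', prec)", poly_string)

wrapped = wrap_floats("3.14159*delta + 2.75")
```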
224 | """ 225 | dump_file = open(name, 'w') 226 | 227 | dump_file.write("self.dim = " + block_table.dim.__str__() + "\n") 228 | dump_file.write("self.k_max = " + block_table.k_max.__str__() + "\n") 229 | dump_file.write("self.l_max = " + block_table.l_max.__str__() + "\n") 230 | dump_file.write("self.m_max = " + block_table.m_max.__str__() + "\n") 231 | dump_file.write("self.n_max = " + block_table.n_max.__str__() + "\n") 232 | dump_file.write("self.delta_12 = " + block_table.delta_12.__str__() + "\n") 233 | dump_file.write("self.delta_34 = " + block_table.delta_34.__str__() + "\n") 234 | dump_file.write("self.odd_spins = " + block_table.odd_spins.__str__() + "\n") 235 | dump_file.write("self.m_order = " + block_table.m_order.__str__() + "\n") 236 | dump_file.write("self.n_order = " + block_table.n_order.__str__() + "\n") 237 | dump_file.write("self.table = []\n") 238 | 239 | for l in range(0, len(block_table.table)): 240 | dump_file.write("derivatives = []\n") 241 | for i in range(0, len(block_table.table[0].vector)): 242 | poly_string = block_table.table[l].vector[i].subs(delta_ext, delta_ext_sub).expand().__str__() 243 | poly_string = re.sub(r"([0-9]+\.[0-9]+e?-?[0-9]+)", r"RealMPFR('\1', prec)", poly_string) 244 | dump_file.write("derivatives.append(" + poly_string + ")\n") 245 | dump_file.write("self.table.append(PolynomialVector(derivatives, " + block_table.table[l].label.__str__() + ", " + block_table.table[l].poles.__str__() + "))\n") 246 | 247 | dump_file.close() 248 | 249 | def rules(m_max, n_max): 250 | """ 251 | This takes the radial and angular co-ordinates, defined by Hogervorst and 252 | Rychkov in arXiv:1303.1111, and differentiates them with respect to the 253 | diagonal `a` and off-diagonal `b`. It returns a quadruple where the first 254 | two entries store radial and angular derivatives respectively evaluated at 255 | the crossing symmetric point. 
The third entry is a list stating the number of 256 | `a` derivatives to which a given position corresponds and the fourth entry 257 | does the same for `b` derivatives. 258 | """ 259 | a = Symbol('a') 260 | b = Symbol('b') 261 | hack = Symbol('hack') 262 | 263 | rules1 = [] 264 | rules2 = [] 265 | m_order = [] 266 | n_order = [] 267 | old_expression1 = sqrt(a ** 2 - b) / (hack + sqrt((hack - a) ** 2 - b) + hack * sqrt(hack - a + sqrt((hack - a) ** 2 - b))) 268 | old_expression2 = (hack - sqrt((hack - a) ** 2 - b)) / sqrt(a ** 2 - b) 269 | 270 | if n_max == 0: 271 | old_expression1 = old_expression1.subs(b, 0) 272 | old_expression2 = b 273 | 274 | for n in range(0, n_max + 1): 275 | for m in range(0, 2 * (n_max - n) + m_max + 1): 276 | if n == 0 and m == 0: 277 | expression1 = old_expression1 278 | expression2 = old_expression2 279 | elif m == 0: 280 | old_expression1 = old_expression1.diff(b) 281 | old_expression2 = old_expression2.diff(b) 282 | expression1 = old_expression1 283 | expression2 = old_expression2 284 | else: 285 | expression1 = expression1.diff(a) 286 | expression2 = expression2.diff(a) 287 | 288 | rules1.append(expression1.subs({hack : RealMPFR("2", prec), a : 1, b : 0})) 289 | rules2.append(expression2.subs({hack : RealMPFR("2", prec), a : 1, b : 0})) 290 | m_order.append(m) 291 | n_order.append(n) 292 | 293 | return (rules1, rules2, m_order, n_order) 294 | 295 | def chain_rule_single_symengine(m_order, rules, table, conformal_blocks, accessor): 296 | """ 297 | This reads a conformal block list where each spin's entry is a list of radial 298 | derivatives. It converts these to diagonal `a` derivatives using the rules 299 | given. Once these are calculated, the passed `table` is populated. Here, 300 | `accessor` is a hack to get around the fact that different parts of the code 301 | like to index in different ways. 
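Conceptually, these chain-rule routines turn derivatives of g with respect to r into derivatives with respect to `a`. A numeric stand-in (a hypothetical `compose_series` helper, illustration only) does the analogous job with truncated Taylor coefficients instead of symbolic derivatives, so the factorials are absorbed into the coefficients:

```python
def compose_series(g, r, order):
    # Taylor coefficients of g(r(a)) about a = a0, given coefficients
    # g[k] of g expanded in (x - r(a0)) and r[k] of r expanded in (a - a0).
    s = [0.0] + list(r[1:order + 1])      # r - r(a0): no constant term
    out = [0.0] * (order + 1)
    out[0] = g[0]
    power = [1.0] + [0.0] * order         # running truncated powers s**k
    for k in range(1, order + 1):
        power = [sum(power[j] * s[i - j] for j in range(i + 1))
                 for i in range(order + 1)]
        for i in range(order + 1):
            out[i] += g[k] * power[i]
    return out
```

With g(x) = x^2 about r0 = 1 (coefficients [1, 2, 1]) and r(a) = 1 + a, this recovers (1 + a)^2 = 1 + 2a + a^2.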
302 | """ 303 | _x = Symbol('_x') 304 | a = Symbol('a') 305 | r = function_symbol('r', a) 306 | g = function_symbol('g', r) 307 | m_max = max(m_order) 308 | 309 | for m in range(0, m_max + 1): 310 | if m == 0: 311 | old_expression = g 312 | g = function_symbol('g', _x) 313 | else: 314 | old_expression = old_expression.diff(a) 315 | 316 | expression = old_expression 317 | for i in range(1, m + 1): 318 | expression = expression.subs(Derivative(r, [a] * m_order[i]), rules[i]) 319 | 320 | for l in range(0, len(conformal_blocks)): 321 | new_deriv = expression 322 | for i in range(1, m + 1): 323 | new_deriv = new_deriv.subs(Subs(Derivative(g, [_x] * i), [_x], [r]), accessor(l, i)) 324 | if m == 0: 325 | new_deriv = accessor(l, 0) 326 | table[l].vector.append(new_deriv.expand()) 327 | 328 | def chain_rule_single(m_order, rules, table, conformal_blocks, accessor): 329 | """ 330 | This implements the same thing except in Python which should not be faster 331 | but it is. 332 | """ 333 | a = Symbol('a') 334 | r = function_symbol('r', a) 335 | m_max = max(m_order) 336 | 337 | old_coeff_grid = [0] * (m_max + 1) 338 | old_coeff_grid[0] = 1 339 | order = 0 340 | 341 | for m in range(0, m_max + 1): 342 | if m == 0: 343 | coeff_grid = old_coeff_grid[:] 344 | else: 345 | for i in range(m - 1, -1, -1): 346 | coeff = coeff_grid[i] 347 | if type(coeff) == type(1): 348 | coeff_deriv = 0 349 | else: 350 | coeff_deriv = coeff.diff(a) 351 | coeff_grid[i + 1] += coeff * r.diff(a) 352 | coeff_grid[i] = coeff_deriv 353 | 354 | deriv = coeff_grid[:] 355 | for l in range(order, 0, -1): 356 | for i in range(0, m + 1): 357 | if type(deriv[i]) != type(1): 358 | deriv[i] = deriv[i].subs(Derivative(r, [a] * m_order[l]), rules[l]) 359 | 360 | for l in range(0, len(conformal_blocks)): 361 | new_deriv = 0 362 | for i in range(0, m + 1): 363 | new_deriv += deriv[i] * accessor(l, i) 364 | table[l].vector.append(new_deriv.expand()) 365 | order += 1 366 | 367 | def chain_rule_double_symengine(m_order, 
n_order, rules1, rules2, table, conformal_blocks): 368 | """ 369 | This reads a conformal block list where each spin has a chunk for a given 370 | number of angular derivatives and different radial derivatives within each 371 | chunk. It converts these to diagonal and off-diagonal `a` and `b` derivatives 372 | using the two sets of rules given. Once these are calculated, the passed 373 | `table` is populated. 374 | """ 375 | _x = Symbol('_x') 376 | __x = Symbol('__x') 377 | a = Symbol('a') 378 | b = Symbol('b') 379 | r = function_symbol('r', a, b) 380 | eta = function_symbol('eta', a, b) 381 | g = function_symbol('g', r, eta) 382 | n_max = max(n_order) 383 | m_max = max(m_order) - 2 * n_max 384 | order = 0 385 | 386 | for n in range(0, n_max + 1): 387 | for m in range(0, 2 * (n_max - n) + m_max + 1): 388 | if n == 0 and m == 0: 389 | old_expression = g 390 | expression = old_expression 391 | g0 = function_symbol('g', __x, _x) 392 | g1 = function_symbol('g', _x, __x) 393 | g2 = function_symbol('g', _x, eta) 394 | g3 = function_symbol('g', r, _x) 395 | g4 = function_symbol('g', r, eta) 396 | elif m == 0: 397 | old_expression = old_expression.diff(b) 398 | expression = old_expression 399 | else: 400 | expression = expression.diff(a) 401 | 402 | deriv = expression 403 | for l in range(order, 0, -1): 404 | deriv = deriv.subs(Derivative(r, [a] * m_order[l] + [b] * n_order[l]), rules1[l]) 405 | deriv = deriv.subs(Derivative(r, [b] * n_order[l] + [a] * m_order[l]), rules1[l]) 406 | deriv = deriv.subs(Derivative(eta, [a] * m_order[l] + [b] * n_order[l]), rules2[l]) 407 | deriv = deriv.subs(Derivative(eta, [b] * n_order[l] + [a] * m_order[l]), rules2[l]) 408 | 409 | for l in range(0, len(conformal_blocks)): 410 | new_deriv = deriv 411 | for i in range(1, m + n + 1): 412 | for j in range(1, m + n - i + 1): 413 | new_deriv = new_deriv.subs(Subs(Derivative(g1, [_x] * i + [__x] * j), [_x, __x], [r, eta]), conformal_blocks[l].chunks[j].get(i, 0)) 414 | new_deriv = 
new_deriv.subs(Subs(Derivative(g0, [_x] * j + [__x] * i), [_x, __x], [eta, r]), conformal_blocks[l].chunks[j].get(i, 0)) 415 | for i in range(1, m + n + 1): 416 | new_deriv = new_deriv.subs(Subs(Derivative(g2, [_x] * i), [_x], [r]), conformal_blocks[l].chunks[0].get(i, 0)) 417 | for j in range(1, m + n + 1): 418 | new_deriv = new_deriv.subs(Subs(Derivative(g3, [_x] * j), [_x], [eta]), conformal_blocks[l].chunks[j].get(0, 0)) 419 | new_deriv = new_deriv.subs(g4, conformal_blocks[l].chunks[0].get(0, 0)) 420 | table[l].vector.append(new_deriv.expand()) 421 | order += 1 422 | 423 | def chain_rule_double(m_order, n_order, rules1, rules2, table, conformal_blocks): 424 | """ 425 | This implements the same thing except in Python which should not be faster 426 | but it is. 427 | """ 428 | a = Symbol('a') 429 | b = Symbol('b') 430 | r = function_symbol('r', a, b) 431 | eta = function_symbol('eta', a, b) 432 | n_max = max(n_order) 433 | m_max = max(m_order) - 2 * n_max 434 | 435 | old_coeff_grid = [] 436 | for n in range(0, m_max + 2 * n_max + 1): 437 | old_coeff_grid.append([0] * (m_max + 2 * n_max + 1)) 438 | old_coeff_grid[0][0] = 1 439 | order = 0 440 | 441 | for n in range(0, n_max + 1): 442 | for m in range(0, 2 * (n_max - n) + m_max + 1): 443 | # Hack implementation of the g(r(a, b), eta(a, b)) chain rule 444 | if n == 0 and m == 0: 445 | coeff_grid = deepcopy(old_coeff_grid) 446 | elif m == 0: 447 | for i in range(m + n - 1, -1, -1): 448 | for j in range(m + n - i - 1, -1, -1): 449 | coeff = old_coeff_grid[i][j] 450 | if type(coeff) == type(1): 451 | coeff_deriv = 0 452 | else: 453 | coeff_deriv = coeff.diff(b) 454 | old_coeff_grid[i + 1][j] += coeff * r.diff(b) 455 | old_coeff_grid[i][j + 1] += coeff * eta.diff(b) 456 | old_coeff_grid[i][j] = coeff_deriv 457 | coeff_grid = deepcopy(old_coeff_grid) 458 | else: 459 | for i in range(m + n - 1, -1, -1): 460 | for j in range(m + n - i - 1, -1, -1): 461 | coeff = coeff_grid[i][j] 462 | if type(coeff) == type(1): 463 | 
coeff_deriv = 0 464 | else: 465 | coeff_deriv = coeff.diff(a) 466 | coeff_grid[i + 1][j] += coeff * r.diff(a) 467 | coeff_grid[i][j + 1] += coeff * eta.diff(a) 468 | coeff_grid[i][j] = coeff_deriv 469 | 470 | # Replace r and eta derivatives with the rules found above 471 | deriv = deepcopy(coeff_grid) 472 | for l in range(order, 0, -1): 473 | for i in range(0, m + n + 1): 474 | for j in range(0, m + n - i + 1): 475 | if type(deriv[i][j]) != type(1): 476 | deriv[i][j] = deriv[i][j].subs(Derivative(r, [a] * m_order[l] + [b] * n_order[l]), rules1[l]) 477 | deriv[i][j] = deriv[i][j].subs(Derivative(r, [b] * n_order[l] + [a] * m_order[l]), rules1[l]) 478 | deriv[i][j] = deriv[i][j].subs(Derivative(eta, [a] * m_order[l] + [b] * n_order[l]), rules2[l]) 479 | deriv[i][j] = deriv[i][j].subs(Derivative(eta, [b] * n_order[l] + [a] * m_order[l]), rules2[l]) 480 | 481 | # Replace conformal block derivatives similarly for each spin 482 | for l in range(0, len(conformal_blocks)): 483 | new_deriv = 0 484 | for i in range(0, m + n + 1): 485 | for j in range(0, m + n - i + 1): 486 | new_deriv += deriv[i][j] * conformal_blocks[l].chunks[j].get(i, 0) 487 | table[l].vector.append(new_deriv.expand()) 488 | order += 1 489 | -------------------------------------------------------------------------------- /compat_autoboot.py: -------------------------------------------------------------------------------- 1 | def admissible_min(x, y): 2 | """ 3 | Returns the minimum of the set obtained by taking the two arguments and 4 | discarding those that are negative. 5 | 6 | Parameters 7 | ---------- 8 | x: The first number. 9 | y: The second number. 10 | """ 11 | if x < 0: 12 | return y 13 | if y < 0: 14 | return x 15 | return min(x, y) 16 | 17 | def find_table(tab_list, symmetric, delta_12, delta_34): 18 | """ 19 | Searches a list of `ConvolvedBlockTable` instances for criteria given by the 20 | last three arguments and returns the index of the first match or -1 if absent. 
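`admissible_min` exists because `str.find` uses -1 as a "not found" sentinel, so a plain `min` of two find results would let -1 win. Restated as a self-contained sketch together with the `str.find` use case it is designed for:

```python
def admissible_min(x, y):
    # Minimum of the non-negative arguments; -1 acts as "absent".
    if x < 0:
        return y
    if y < 0:
        return x
    return min(x, y)

# The position of whichever substring actually occurs:
pos = admissible_min("abc".find("b"), "abc".find("z"))
```

Here `pos` is 1 rather than the -1 that `min` alone would return.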
21 | 22 | Parameters 23 | ---------- 24 | tab_list: The list to be searched. 25 | symmetric: Whether the desired table should involve the sum of blocks in two 26 | channels as opposed to the difference. 27 | delta_12: The `delta_12` attribute of the desired table. 28 | delta_34: The `delta_34` attribute of the desired table. 29 | """ 30 | for i in range(0, len(tab_list)): 31 | if abs(delta_12 - tab_list[i].delta_12) > tiny: 32 | continue 33 | if abs(delta_34 - tab_list[i].delta_34) > tiny: 34 | continue 35 | if (symmetric == True and tab_list[i].m_order[0] % 2 == 0) or (symmetric == False and tab_list[i].m_order[0] % 2 == 1): 36 | return i 37 | return -1 38 | 39 | def autoboot_sdp(dim, k_max, l_max, m_max, n_max, operators, equations): 40 | """ 41 | Returns an `SDP` generated by parsing strings which encode crossing equations. 42 | These are assumed to be in the format of the Mathematica package autoboot by 43 | Mocho Go and Yuji Tachikawa. Single blocks not part of a sum are currently 44 | skipped and must be put into the `SDP` with `add_point`. 45 | 46 | Parameters 47 | ---------- 48 | dim: The spatial dimension for the crossing equations. 49 | k_max: Accuracy parameter for each `ConvolvedBlockTable` to be generated. 50 | l_max: Maximum spin to include in each generated `ConvolvedBlockTable`. 51 | m_max: First derivative cutoff parameter for each table. 52 | n_max: Second derivative cutoff parameter for each table. 53 | operators: A dictionary whose keys are the strings which have been used to 54 | represent external operators in autoboot. The corresponding values 55 | should be their scaling dimensions. 56 | equations: A crossing equation string encoding the output of autoboot's 57 | `bootAll` function. It must be expressed in input form and not 58 | contain any new lines. 
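The parsing in `autoboot_sdp` begins by splitting the equation string at top-level commas only, tracking square-bracket depth. A self-contained sketch of that loop (a hypothetical `split_top_level` helper; like the original, it assumes `", "` separates top-level entries, and it ignores the outer wrapper that the real code strips first):

```python
def split_top_level(s):
    # Split at commas that sit at bracket depth zero, so commas inside
    # [...] argument lists do not break an entry apart.
    depth = 0
    marker = 0
    parts = []
    for i in range(len(s)):
        if s[i] == '[':
            depth += 1
        elif s[i] == ']':
            depth -= 1
        elif s[i] == ',' and depth == 0:
            parts.append(s[marker:i])
            marker = i + 2
    parts.append(s[marker:])
    return parts
```

For instance, `split_top_level("a[1, 2], b, c[3]")` yields `["a[1, 2]", "b", "c[3]"]`.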
59 | """ 60 | vector_names = [] 61 | vector_types = [] 62 | possible_opes = [] 63 | conv_table_list = [] 64 | dim_list = list(operators.values()) 65 | # Hopefully this works in python2 as well 66 | ope_symbol = "β" 67 | 68 | depth = 0 69 | marker = 5 70 | eq_list = [] 71 | for i in range(5, len(equations) - 2): 72 | if equations[i] == '[': 73 | depth += 1 74 | elif equations[i] == ']': 75 | depth -= 1 76 | elif equations[i] == ',' and depth == 0: 77 | eq_list.append(equations[marker:i]) 78 | marker = i + 2 79 | eq_list.append(equations[marker:-2]) 80 | 81 | # Get all types of exchanged operators 82 | for eq in eq_list: 83 | data = eq[:] 84 | while ope_symbol in data: 85 | m = re.search(r'\]\[[0-9]*\]\^2, |\]\[[0-9]*\], ', data) 86 | pos0 = m.start() 87 | pos1 = m.end() 88 | pos2 = data.index("]]", pos1) 89 | current_name = data[pos1:pos2 + 1] 90 | ind1 = get_index(vector_names, current_name) 91 | if ind1 == -1: 92 | vector_names.append(current_name) 93 | possible_opes.append([]) 94 | if "^2" in data[pos0:pos1]: 95 | ind2 = data.rindex(ope_symbol, 0, pos0) 96 | first = data[ind2:pos1 - 4] 97 | second = first 98 | else: 99 | ind2 = data.rindex(ope_symbol, 0, pos0) 100 | first = data[ind2:pos1 - 2] 101 | ind3 = data.rindex(ope_symbol, 0, ind2 - 1) 102 | second = data[ind3:ind2 - 1] 103 | # This assumes autoboot sticks to a single convention for how the three operators are ordered 104 | if first not in possible_opes[ind1]: 105 | possible_opes[ind1].append(first) 106 | if second not in possible_opes[ind1]: 107 | possible_opes[ind1].append(second) 108 | data = data[pos2 + 1:] 109 | 110 | # Convert them to the format we use 111 | for i in range(0, len(vector_names)): 112 | name = vector_names[i] 113 | size = len(possible_opes[i]) 114 | if "-1]" in name: 115 | vector_types.append([[], 1, name]) 116 | else: 117 | vector_types.append([[], 0, name]) 118 | for j in range(0, len(eq_list)): 119 | outer_list = [] 120 | for r in range(0, size): 121 | inner_list = [] 122 | for s in 
range(0, size): 123 | inner_list.append([0, 0, 0, 0]) 124 | outer_list.append(inner_list) 125 | vector_types[-1][0].append(outer_list) 126 | 127 | for i in range(0, len(vector_types)): 128 | for j in range(0, len(possible_opes[i])): 129 | for k in range(j, len(possible_opes[i])): 130 | for l in range(0, len(eq_list)): 131 | # This assumes like terms have already been collected 132 | if j == k: 133 | ind = eq_list[l].find(possible_opes[i][j] + "^2") 134 | else: 135 | ind1 = eq_list[l].find(possible_opes[i][j] + "*" + possible_opes[i][k]) 136 | ind2 = eq_list[l].find(possible_opes[i][k] + "*" + possible_opes[i][j]) 137 | ind = admissible_min(ind1, ind2) 138 | if ind == -1: 139 | continue 140 | if eq_list[l].index("]") >= ind - 2: 141 | pos0 = eq_list[l].index("sum") 142 | coeff = eq_list[l][:pos0].replace('*', '').replace(' ', '') 143 | else: 144 | data = eq_list[l][ind:0:-1] 145 | m = re.search(r"mus[*]?[0-9]* [\+|\-]", data) 146 | coeff = m.group()[-1:3:-1] 147 | coeff = coeff.replace('*', '').replace(' ', '') 148 | if '/' in coeff: 149 | parts = coeff.split('/') 150 | coeff = Rational(parts[0], parts[1]) 151 | #coeff = float(parts[0]) / float(parts[1]) 152 | elif coeff == '' or coeff == '+': 153 | coeff = 1 154 | elif coeff == '-': 155 | coeff = -1 156 | else: 157 | coeff = float(coeff) 158 | vector_types[i][0][l][j][k][0] += 0.5 * coeff 159 | vector_types[i][0][l][k][j][0] += 0.5 * coeff 160 | pos1 = eq_list[l].rindex('[', 0, ind) 161 | block_string = eq_list[l][pos1 + 1:ind - 2].replace(' ', '') 162 | parts = block_string.split(',') 163 | vector_types[i][0][l][j][k][2] = get_index_approx(dim_list, operators[parts[1]]) 164 | vector_types[i][0][l][k][j][2] = get_index_approx(dim_list, operators[parts[1]]) 165 | vector_types[i][0][l][j][k][3] = get_index_approx(dim_list, operators[parts[2]]) 166 | vector_types[i][0][l][k][j][3] = get_index_approx(dim_list, operators[parts[2]]) 167 | delta_12 = operators[parts[0]] - operators[parts[1]] 168 | delta_34 = 
operators[parts[2]] - operators[parts[3]] 169 | if eq_list[l][pos1 - 1] == 'H': 170 | symmetric = True 171 | else: 172 | symmetric = False 173 | pos2 = find_table(conv_table_list, symmetric, delta_12, delta_34) 174 | if pos2 == -1: 175 | # This is slightly wasteful because a few tables will only be used for even spins 176 | # Also because not every table appears with both symmetric and antisymmetric versions 177 | tab = ConformalBlockTable(dim, k_max, l_max, m_max, n_max, delta_12, delta_34, True) 178 | conv_table_list.append(ConvolvedBlockTable(tab, symmetric = False)) 179 | conv_table_list.append(ConvolvedBlockTable(tab, symmetric = True)) 180 | pos2 = find_table(conv_table_list, symmetric, delta_12, delta_34) 181 | vector_types[i][0][l][j][k][1] = pos2 182 | vector_types[i][0][l][k][j][1] = pos2 183 | 184 | return SDP(dim_list, conv_table_list, vector_types) 185 | -------------------------------------------------------------------------------- /compat_juliboots.py: -------------------------------------------------------------------------------- 1 | def juliboots_read(block_table, name): 2 | """ 3 | This reads in a block table produced by JuliBoots, the program by Miguel 4 | Paulos. Whether to call it is determined by `ConformalBlockTable` 5 | automatically. The two attributes of `ConformalBlockTable` that do not appear 6 | in the JuliBoots specification are delta_12 and delta_34. The user just has to 7 | remember them. 
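`juliboots_read` rebuilds each derivative by adding `coeff / (delta - p)` over all poles while keeping everything over a common denominator. The running-numerator trick it uses can be checked numerically with a plain-float sketch (hypothetical `sum_pole_terms` helper, not part of the module):

```python
def sum_pole_terms(poles, coeffs, delta):
    # Evaluates sum_i coeffs[i] / (delta - poles[i]) by accumulating one
    # numerator over the common denominator prod_i (delta - poles[i]),
    # exactly as the single_pole_term / prod1 loop does symbolically.
    num = 0.0
    prod = 1.0
    for p, c in zip(poles, coeffs):
        num = num * (delta - p) + c * prod
        prod *= (delta - p)
    return num / prod
```

For poles [1, 2] with coefficients [3, 4] at delta = 4, this gives 3/3 + 4/2 = 3.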
8 | """ 9 | tab_file = open(name, 'r') 10 | nu = float(next(tab_file)) 11 | 12 | block_table.n_max = int(next(tab_file)) 13 | block_table.m_max = int(next(tab_file)) 14 | block_table.l_max = int(next(tab_file)) 15 | odds = int(next(tab_file)) 16 | prec = int(next(tab_file)) 17 | comp = int(next(tab_file)) 18 | 19 | block_table.dim = 2 * nu + 2 20 | if odds == 0: 21 | step = 2 22 | block_table.odd_spins = False 23 | else: 24 | step = 1 25 | block_table.odd_spins = True 26 | 27 | block_table.m_order = [] 28 | block_table.n_order = [] 29 | for n in range(0, block_table.n_max + 1): 30 | for m in range(0, 2 * (block_table.n_max - n) + block_table.m_max + 1): 31 | block_table.m_order.append(m) 32 | block_table.n_order.append(n) 33 | 34 | block_table.table = [] 35 | for l in range(0, block_table.l_max + 1, step): 36 | artifact = float(next(tab_file)) 37 | degree = int(next(tab_file)) - 1 38 | 39 | derivatives = [] 40 | for i in range(0, comp): 41 | poly = 0 42 | for k in range(0, degree + 1): 43 | coeff = RealMPFR(next(tab_file)[:-1], prec) 44 | poly += coeff * (delta ** k) 45 | derivatives.append(poly.expand()) 46 | 47 | single_poles = [0] * int(next(tab_file)) 48 | for p in range(0, len(single_poles)): 49 | single_poles[p] = RealMPFR(next(tab_file)[:-1], prec) 50 | 51 | # We add coeff / (delta - p) summed over all poles 52 | # This just puts it over a common denominator automatically 53 | for i in range(0, len(derivatives)): 54 | prod1 = 1 55 | single_pole_term = 0 56 | for p in single_poles: 57 | coeff = RealMPFR(next(tab_file)[:-1], prec) 58 | single_pole_term = single_pole_term * (delta - p) + coeff * prod1 59 | single_pole_term = single_pole_term.expand() 60 | prod1 *= (delta - p) 61 | prod1 = prod1.expand() 62 | derivatives[i] = derivatives[i] * prod1 + single_pole_term 63 | derivatives[i] = derivatives[i].expand() 64 | 65 | double_poles = [0] * int(next(tab_file)) 66 | for p in range(0, len(double_poles)): 67 | double_poles[p] = RealMPFR(next(tab_file)[:-1], prec) 
68 | 69 | # Doing this for double poles is the same if we remember to square everything 70 | # We also need the product of single poles to come in at the end 71 | for i in range(0, len(derivatives)): 72 | prod2 = 1 73 | double_pole_term = 0 74 | for p in double_poles: 75 | coeff = RealMPFR(next(tab_file)[:-1], prec) 76 | double_pole_term = double_pole_term * ((delta - p) ** 2) + coeff * prod2 77 | double_pole_term = double_pole_term.expand() 78 | prod2 *= (delta - p) ** 2 79 | prod2 = prod2.expand() 80 | derivatives[i] = derivatives[i] * prod2 + double_pole_term * prod1 81 | derivatives[i] = derivatives[i] / (two ** (block_table.m_order[i] + 2 * block_table.n_order[i])) 82 | derivatives[i] = derivatives[i].expand() 83 | 84 | poles = single_poles + (double_poles * 2) 85 | block_table.table.append(PolynomialVector(derivatives, [l, 0], poles)) 86 | block_table.k_max = len(poles) 87 | tab_file.close() 88 | 89 | def juliboots_write(block_table, name): 90 | """ 91 | This writes out a block table in the format expected by JuliBoots. It is 92 | triggered when a `ConformalBlockTable` is dumped with the right format string. 
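`juliboots_write` (and `scalar_blocks_write` below) split the pole list into single and double poles using `gather`, which counts approximately equal entries. A compact sketch of that split (a hypothetical `split_poles` helper; the real `gather` matches within a tolerance via `get_index_approx`):

```python
def split_poles(poles, tol=1e-10):
    # Count multiplicities, then separate poles appearing once from those
    # appearing more than once.
    counts = []                      # list of [value, multiplicity] pairs
    for p in poles:
        for entry in counts:
            if abs(entry[0] - p) < tol:
                entry[1] += 1
                break
        else:
            counts.append([p, 1])
    singles = [v for v, n in counts if n == 1]
    doubles = [v for v, n in counts if n > 1]
    return singles, doubles
```

For example, `split_poles([0.5, 1.5, 0.5])` returns `([1.5], [0.5])`.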
93 | """ 94 | tab_file = open(name, 'w') 95 | tab_file.write(str((block_table.dim / Integer(2)) - 1) + "\n") 96 | tab_file.write(str(block_table.n_max) + "\n") 97 | tab_file.write(str(block_table.m_max) + "\n") 98 | tab_file.write(str(block_table.l_max) + "\n") 99 | 100 | alternate = 1 101 | if block_table.odd_spins: 102 | tab_file.write("1\n") 103 | else: 104 | tab_file.write("0\n") 105 | tab_file.write(str(prec) + "\n") 106 | tab_file.write(str(len(block_table.table[0].vector)) + "\n") 107 | 108 | # Print delta_12 or delta_34 when we get the chance 109 | # If the file is going to have unused bits, we might as well use them 110 | for l in range(0, len(block_table.table)): 111 | if alternate == 1: 112 | tab_file.write(str(block_table.delta_12) + "\n") 113 | else: 114 | tab_file.write(str(block_table.delta_34) + "\n") 115 | 116 | max_degree = 0 117 | for poly in block_table.table[l].vector: 118 | coeff_list = sorted(poly.args, key = extract_power) 119 | degree = extract_power(coeff_list[-1]) 120 | max_degree = max(max_degree, degree - len(block_table.table[l].poles)) 121 | tab_file.write(str(max_degree + 1) + "\n") 122 | 123 | series = 1 124 | for p in block_table.table[l].poles: 125 | term = build_polynomial([1] * (max_degree + 1)) 126 | term = term.subs(delta, p * delta) 127 | series *= term 128 | series = series.expand() 129 | series = build_polynomial(coefficients(series)[:max_degree + 1]) 130 | 131 | # Above, delta functions as 1 / delta 132 | # We need to multiply by the numerator with reversed coefficients to get the entire part 133 | for i in range(0, len(block_table.table[l].vector)): 134 | poly = block_table.table[l].vector[i] 135 | coeff_list = coefficients(poly) 136 | coeff_list.reverse() 137 | poly = build_polynomial(coeff_list) 138 | poly = poly * series 139 | poly = poly.expand() 140 | coeff_list = coefficients(poly) 141 | # We get the numerator degree by subtracting the degree of series 142 | # The difference between this and the number of poles is 
the degree of the polynomial we write 143 | degree = len(coeff_list) - max_degree - len(block_table.table[l].poles) - 1 144 | factor = two ** (block_table.m_order[i] + 2 * block_table.n_order[i]) 145 | for k in range(0, max_degree + 1): 146 | index = degree - k 147 | if index >= 0: 148 | tab_file.write(str(factor * coeff_list[index]) + "\n") 149 | else: 150 | tab_file.write("0\n") 151 | 152 | single_poles = [] 153 | double_poles = [] 154 | gathered_poles = gather(block_table.table[l].poles) 155 | for p in gathered_poles.keys(): 156 | if gathered_poles[p] == 1: 157 | single_poles.append(p) 158 | else: 159 | double_poles.append(p) 160 | 161 | # The single pole part of the partial fraction decomposition is easier 162 | tab_file.write(str(len(single_poles)) + "\n") 163 | for p in single_poles: 164 | tab_file.write(str(p) + "\n") 165 | 166 | for i in range(0, len(block_table.table[l].vector)): 167 | poly = block_table.table[l].vector[i] 168 | factor = two ** (block_table.m_order[i] + 2 * block_table.n_order[i]) 169 | for p in single_poles: 170 | num = poly.subs(delta, p) 171 | denom = omit_all(block_table.table[l].poles, [p, p], p) 172 | tab_file.write(str(factor * num / denom) + "\n") 173 | 174 | # The double pole part is identical 175 | tab_file.write(str(len(double_poles)) + "\n") 176 | for p in double_poles: 177 | tab_file.write(str(p) + "\n") 178 | 179 | for i in range(0, len(block_table.table[l].vector)): 180 | poly = block_table.table[l].vector[i] 181 | factor = two ** (block_table.m_order[i] + 2 * block_table.n_order[i]) 182 | for p in double_poles: 183 | num = poly.subs(delta, p) 184 | denom = omit_all(block_table.table[l].poles, [p, p], p) 185 | tab_file.write(str(factor * num / denom) + "\n") 186 | 187 | alternate *= -1 188 | tab_file.close() 189 | -------------------------------------------------------------------------------- /compat_scalar_blocks.py: -------------------------------------------------------------------------------- 1 | def 
scalar_blocks_read(block_table, name): 2 | """ 3 | This reads in a block table produced by scalar_blocks, the program by Walter 4 | Landry. Whether to call it is determined by `ConformalBlockTable` 5 | automatically. 6 | """ 7 | files1 = os.listdir(name) 8 | files0 = sorted(files1) 9 | files = sorted(files0, key = len) 10 | # A cheap way to get alphanumeric sort 11 | info = files[0] 12 | 13 | # The convolution functions to support both can be found in the git history 14 | if info[:13] == "zzbDerivTable": 15 | print("Please rerun scalar_blocks with --output-ab") 16 | return 17 | elif info[:12] != "abDerivTable": 18 | print("Unknown convention for derivatives") 19 | return 20 | 21 | # Parsing is annoying because '-' is used in the numbers and the delimiters 22 | delta12_negative = info.split("-delta12--") 23 | delta12_positive = info.split("-delta12-") 24 | if len(delta12_negative) > 1: 25 | block_table.delta_12 = -float(delta12_negative[1].split('-')[0]) 26 | info = info.replace("-delta12--", "-delta12-") 27 | else: 28 | block_table.delta_12 = float(delta12_positive[1].split('-')[0]) 29 | delta34_negative = info.split("-delta34--") 30 | delta34_positive = info.split("-delta34-") 31 | if len(delta34_negative) > 1: 32 | block_table.delta_34 = -float(delta34_negative[1].split('-')[0]) 33 | info = info.replace("-delta34--", "-delta34-") 34 | else: 35 | block_table.delta_34 = float(delta34_positive[1].split('-')[0]) 36 | 37 | info = info.split('-') 38 | block_table.dim = RealMPFR(info[1][1:], prec) 39 | block_table.k_max = int(info[8][13:]) 40 | block_table.n_max = int(info[7][4:]) - 1 41 | block_table.m_max = 1 42 | block_table.l_max = len(files) - 1 43 | block_table.odd_spins = False 44 | block_table.m_order = [] 45 | block_table.n_order = [] 46 | for n in range(0, block_table.n_max + 1): 47 | for m in range(0, 2 * (block_table.n_max - n) + 2): 48 | block_table.m_order.append(m) 49 | block_table.n_order.append(n) 50 | 51 | block_table.table = [] 52 | for f in files: 53 
| sign = 1 54 | remove_zero = 0 55 | info = f.replace('--', '-') 56 | full = name + "/" + f 57 | l = int(info.split('-')[6][1:]) 58 | if l % 2 == 1: 59 | block_table.odd_spins = True 60 | sign = -1 61 | if l > block_table.l_max: 62 | block_table.l_max = l 63 | 64 | derivatives = [] 65 | vector_with_poles = open(full, 'r').read().split(" shiftedPoles -> ") 66 | if len(vector_with_poles) < 2: 67 | print("Please rerun scalar_blocks with --output-poles") 68 | return 69 | vector = vector_with_poles[0].replace('{', '').replace('}', '') 70 | vector = re.sub(r"abDeriv\[[0-9]+,[0-9]+\]", "", vector).split(',\n')[:-1] 71 | for el in vector: 72 | poly = 0 73 | poly_lines = el.split('\n') 74 | for k in range(0, len(poly_lines)): 75 | if k == 0: 76 | coeff = poly_lines[k].split('->')[1] 77 | else: 78 | coeff = poly_lines[k].split('*')[0][5:] 79 | poly += sign * RealMPFR(coeff, prec) * (delta ** k) 80 | # It turns out that the scalars come with a shift of d - 2 which is not the unitarity bound 81 | # All shifts, scalar or not, are undone here as we prefer to handle this step during XML writing 82 | derivatives.append(poly.subs(delta, delta - block_table.dim - l + 2).expand()) 83 | 84 | poles = [] 85 | passed_halfway = False 86 | shifted_poles = vector_with_poles[1].split(',\n') 87 | for p in range(0, len(shifted_poles)): 88 | line = shifted_poles[p].strip().replace(',', '') 89 | if p > 0 and '{' in line: 90 | passed_halfway = True 91 | line = line.replace('{', '').replace('}', '') 92 | pole = RealMPFR(line, prec) + RealMPFR(str(block_table.dim + l - 2), prec) 93 | if passed_halfway: 94 | if l == 0 and abs(pole) < tiny and abs(block_table.delta_12) < tiny and abs(block_table.delta_34) < tiny: 95 | remove_zero = 2 96 | continue 97 | poles.append(pole) 98 | poles.append(pole) 99 | else: 100 | if l == 0 and abs(pole) < tiny and abs(block_table.delta_12) < tiny and abs(block_table.delta_34) < tiny: 101 | remove_zero = 1 102 | continue 103 | poles.append(pole) 104 | 105 | # The block 
for scalar exchange should not give zero for the identity 106 | if remove_zero > 0: 107 | for i in range(0, len(derivatives)): 108 | poly = 0 109 | coeffs = coefficients(derivatives[i]) 110 | for c in range(remove_zero, len(coeffs)): 111 | poly += coeffs[c] * (delta ** (c - remove_zero)) 112 | derivatives[i] = poly 113 | block_table.table.append(PolynomialVector(derivatives, [l, 0], poles)) 114 | 115 | def scalar_blocks_write(block_table, name): 116 | """ 117 | This writes out a block table in the format that scalar_blocks uses. It is 118 | triggered when a `ConformalBlockTable` is dumped with the right format string. 119 | """ 120 | os.makedirs(name) 121 | name_prefix = "abDerivTable-d" + str(block_table.dim) + "-delta12-" + str(block_table.delta_12) + "-delta34-" + str(block_table.delta_34) + "-L" 122 | name_suffix = "-nmax" + str(block_table.n_max + 1) + "-keptPoleOrder" + str(block_table.k_max) + "-order" + str(block_table.k_max) + ".m" 123 | for l in range(0, len(block_table.table)): 124 | full = name + "/" + name_prefix + str(block_table.table[l].label[0]) + name_suffix 125 | block_file = open(full, 'w') 126 | block_file.write('{') 127 | for i in range(0, len(block_table.table[l].vector)): 128 | poly = block_table.table[l].vector[i].subs(delta, delta + block_table.dim + l - 2).expand() 129 | coeffs = coefficients(poly) 130 | block_file.write("abDeriv[" + str(block_table.m_order[i]) + "," + str(block_table.n_order[i]) + "] -> " + str(coeffs[0]) + "\n ") 131 | for c in range(1, len(coeffs)): 132 | block_file.write(" + " + str(coeffs[c]) + "*x^" + str(c)) 133 | if c == len(coeffs) - 1: 134 | block_file.write(",\n ") 135 | else: 136 | block_file.write("\n ") 137 | 138 | single_poles = [] 139 | double_poles = [] 140 | gathered_poles = gather(block_table.table[l].poles) 141 | for p in gathered_poles.keys(): 142 | if gathered_poles[p] == 1: 143 | single_poles.append(p) 144 | else: 145 | double_poles.append(p) 146 | 147 | block_file.write("shiftedPoles -> {{") 148 | 
for p in range(0, len(single_poles)):
149 |             block_file.write(str(single_poles[p]))
150 |             if p < len(single_poles) - 1:
151 |                 block_file.write(",\n ")
152 |         block_file.write("},\n {")
153 |         for p in range(0, len(double_poles)):
154 |             block_file.write(str(double_poles[p]))
155 |             if p < len(double_poles) - 1:
156 |                 block_file.write(",\n ")
157 |         block_file.write('}}}')
158 |         block_file.close()
159 |
--------------------------------------------------------------------------------
/tutorial.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | """
3 | This is a tutorial that applies conformal bootstrap logic with PyCFTBoot to make
4 | non-trivial statements about four different conformal field theories. These are
5 | taken from the examples in section 4 of arXiv:1602.02810. More information can
6 | be found by reading the other sections or the documentation strings of PyCFTBoot.
7 | These can be accessed by running `pydoc -g`.
8 | """
9 | # Imports the package
10 | import bootstrap
11 | # The conformal blocks needed for a given run are calculated as a sum over poles.
12 | # Demand that poles with small residues are approximated by poles with large ones.
13 | bootstrap.cutoff = 1e-10
14 |
15 | def cprint(message):
16 |     print("\033[94m" + message + "\033[0m")
17 |
18 | print("Welcome to the PyCFTBoot tutorial!")
19 | print("Please read the comments and watch for coloured text.")
20 | print("Which theory would you like to study?")
21 | print("These take increasingly long amounts of time.")
22 | print("1. 3D Ising model (even sector only).")
23 | print("2. 3D O(3) model.")
24 | print("3. 3D Ising model (odd sector as well).")
25 | print("4. 4D N = 1 supersymmetric model.")
26 | choice = int(input("Choice: "))
27 |
28 | if choice == 1:
29 |     # Concentrates on external scalars of 0.518, close to the 3D Ising value.
30 |     dim_phi = 0.518
31 |     cprint("Finding basic bound at external dimension " + str(dim_phi) + "...")
32 |     # Spatial dimension.
33 |     dim = 3
34 |     # Dictates the number of poles to keep and therefore the accuracy of a conformal block.
35 |     k_max = 20
36 |     # Says that conformal blocks for spin-0 up to and including spin-14 should be computed.
37 |     l_max = 14
38 |     # Conformal blocks are functions of (a, b) and many derivatives of each should be kept for strong bounds.
39 |     # This says to keep derivatives up to fourth order in b.
40 |     n_max = 4
41 |     # For a given n, this states how many a derivatives should be included beyond 2 * (n_max - n).
42 |     m_max = 2
43 |     # Generates the table.
44 |     table1 = bootstrap.ConformalBlockTable(dim, k_max, l_max, m_max, n_max)
45 |     # Computes the convolution.
46 |     table2 = bootstrap.ConvolvedBlockTable(table1)
47 |     # Sets up a semidefinite program that we can use to study this.
48 |     sdp = bootstrap.SDP(dim_phi, table2)
49 |     # We think it is perfectly fine for all internal scalars coupling to our external one to have dimension above 0.7.
50 |     lower = 0.7
51 |     # Conversely, we think it is a problem for crossing symmetry if they all have dimension above 1.7.
52 |     upper = 1.7
53 |     # The boundary between these regions will be found within an error of 0.01.
54 |     tol = 0.01
55 |     # The 0.7 and 1.7 are our guesses for scalars, not some other type of operator.
56 |     channel = 0
57 |     # Calls SDPB to compute the bound.
58 |     result = sdp.bisect(lower, upper, tol, channel, name = "tutorial_1a")
59 |     cprint("If crossing symmetry and unitarity hold, the maximum gap we can have for Z2-even scalars is: " + str(result))
60 |     cprint("Checking if (" + str(dim_phi) + ", " + str(result) + ") is still allowed if we require only one relevant Z2-even scalar...")
61 |     # States that the continuum of internal scalars being checked starts at 3.
62 |     sdp.set_bound(channel, float(dim))
63 |     # States that the point near the boundary that we found should be the one exception.
64 |     sdp.add_point(channel, result)
65 |     # Calls SDPB to check.
66 |     allowed = sdp.iterate(name = "tutorial_1b")
67 |     if (allowed):
68 |         cprint("Yes")
69 |     else:
70 |         cprint("No")
71 |     cprint("Checking if (" + str(dim_phi) + ", 1.2) is allowed under the same conditions...")
72 |     # Removes the point we previously set by calling this with one missing argument.
73 |     sdp.add_point(channel)
74 |     # Adds a different one at a smaller dimension.
75 |     sdp.add_point(channel, 1.2)
76 |     # Checks again.
77 |     allowed = sdp.iterate(name = "tutorial_1c")
78 |     if (allowed):
79 |         cprint("Yes")
80 |     else:
81 |         cprint("No")
82 |
83 | if choice == 2:
84 |     # The 0.52 value turns out to be close to a kink.
85 |     dim_phi1 = 0.52
86 |     cprint("Finding basic bound on singlets at external dimension " + str(dim_phi1) + "...")
87 |     # Parameters like those in example 1.
88 |     dim = 3
89 |     k_max = 20
90 |     l_max = 15
91 |     m_max = 1
92 |     n_max = 3
93 |     # This time we need to keep odd spins because O(N) models have antisymmetric tensors.
94 |     table1 = bootstrap.ConformalBlockTable(dim, k_max, l_max, m_max, n_max, odd_spins = True)
95 |     # Computes the two convolutions needed by the sum rule.
96 |     table2 = bootstrap.ConvolvedBlockTable(table1, symmetric = True)
97 |     table3 = bootstrap.ConvolvedBlockTable(table1)
98 |     # Specializes to N = 3.
99 |     N = 3.0
100 |     # First vector: 0 * table3, 1 * table3, 1 * table2
101 |     vec1 = [[0, 1], [1, 1], [1, 0]]
102 |     # Second vector: 1 * table3, (1 - (2 / N)) * table3, -(1 + (2 / N)) * table2
103 |     vec2 = [[1, 1], [1.0 - (2.0 / N), 1], [-(1.0 + (2.0 / N)), 0]]
104 |     # Third vector: 1 * table3, -1 * table3, 1 * table2
105 |     vec3 = [[1, 1], [-1, 1], [1, 0]]
106 |     # The spins of these irreps (with arbitrary names) are even, even, odd.
107 |     info = [[vec1, 0, "singlet"], [vec2, 0, "symmetric"], [vec3, 1, "antisymmetric"]]
108 |     # Sets up an SDP here.
109 |     sdp1 = bootstrap.SDP(dim_phi1, [table2, table3], vector_types = info)
110 |     # This time channel needs two labels.
111 |     channel1 = [0, "singlet"]
112 |     result = sdp1.bisect(0.7, 1.8, 0.01, channel1, name = "tutorial_2a")
113 |     cprint("If crossing symmetry and unitarity hold, the maximum gap we can have for singlet scalars is: " + str(result))
114 |     cprint("Bounding the OPE coefficient for the stress-energy tensor...")
115 |     # The spin is now 2 and the dimension is 3.
116 |     channel2 = [2, "singlet"]
117 |     dim_t = dim
118 |     # Calls SDPB to return a squared OPE coefficient bound.
119 |     result1 = sdp1.opemax(dim_t, channel2, name = "tutorial_2b")
120 |     cprint("Bounding the same coefficient in the free theory to get a point of comparison...")
121 |     # Sets up a new SDP where this time, the external scalar has a dimension very close to unitarity.
122 |     dim_phi2 = 0.5001
123 |     sdp2 = bootstrap.SDP(dim_phi2, [table2, table3], vector_types = info)
124 |     result2 = sdp2.opemax(dim_t, channel2, name = "tutorial_2c")
125 |     # Uses the central charge formula which follows from the Ward identity to compute the ratio.
126 |     ratio = ((result2 / result1) * (dim_phi1 / dim_phi2)) ** 2
127 |     cprint("The central charge of the theory at " + str(dim_phi1) + " is " + str(ratio) + " times the free one.")
128 |
129 | # A function used for the multi-correlator 3D Ising example.
130 | def convolved_table_list(tab1, tab2, tab3):
131 |     f_tab1a = bootstrap.ConvolvedBlockTable(tab1)
132 |     f_tab1s = bootstrap.ConvolvedBlockTable(tab1, symmetric = True)
133 |     f_tab2a = bootstrap.ConvolvedBlockTable(tab2)
134 |     f_tab2s = bootstrap.ConvolvedBlockTable(tab2, symmetric = True)
135 |     f_tab3 = bootstrap.ConvolvedBlockTable(tab3)
136 |     return [f_tab1a, f_tab1s, f_tab2a, f_tab2s, f_tab3]
137 |
138 | if choice == 3:
139 |     cprint("Generating the tables needed to test two points...")
140 |     dim = 3
141 |     # Poles would be too approximate otherwise.
142 |     bootstrap.cutoff = 0
143 |     # First odd scalar, first even scalar.
144 |     pair1 = [0.518, 1.412]
145 |     pair2 = [0.53, 1.412]
146 |     # Generates three tables, two of which depend on the dimension differences.
147 |     g_tab1 = bootstrap.ConformalBlockTable(dim, 20, 20, 2, 4)
148 |     g_tab2 = bootstrap.ConformalBlockTable(dim, 20, 20, 2, 4, pair1[1] - pair1[0], pair1[0] - pair1[1], odd_spins = True)
149 |     g_tab3 = bootstrap.ConformalBlockTable(dim, 20, 20, 2, 4, pair1[0] - pair1[1], pair1[0] - pair1[1], odd_spins = True)
150 |     # Uses the function above to return the convolved tables we need.
151 |     tab_list1 = convolved_table_list(g_tab1, g_tab2, g_tab3)
152 |     # One of the three tables above does not need to be regenerated for the next point.
153 |     g_tab4 = bootstrap.ConformalBlockTable(dim, 20, 20, 2, 4, pair2[1] - pair2[0], pair2[0] - pair2[1], odd_spins = True)
154 |     g_tab5 = bootstrap.ConformalBlockTable(dim, 20, 20, 2, 4, pair2[0] - pair2[1], pair2[0] - pair2[1], odd_spins = True)
155 |     tab_list2 = convolved_table_list(g_tab1, g_tab4, g_tab5)
156 |     # Saves and deletes tables that are no longer needed and might take up a lot of memory.
157 |     for tab in [g_tab1, g_tab2, g_tab3, g_tab4, g_tab5]:
158 |         # A somewhat descriptive name.
159 |         tab.dump("tab_" + str(tab.delta_12) + "_" + str(tab.delta_34))
160 |     del tab, g_tab1, g_tab2, g_tab3, g_tab4, g_tab5  # deleting just the loop variable would leave the named references alive
161 |     # Third vector: 0, 0, 1 * table4 with one of each dimension, -1 * table2 with only pair[0] dimensions, 1 * table3 with only pair[0] dimensions
162 |     vec3 = [[0, 0, 0, 0], [0, 0, 0, 0], [1, 4, 1, 0], [-1, 2, 0, 0], [1, 3, 0, 0]]
163 |     # Second vector: 0, 0, 1 * table4 with one of each dimension, 1 * table2 with only pair[0] dimensions, -1 * table3 with only pair[0] dimensions
164 |     vec2 = [[0, 0, 0, 0], [0, 0, 0, 0], [1, 4, 1, 0], [1, 2, 0, 0], [-1, 3, 0, 0]]
165 |     # The first vector has five components as well but they are matrices of quads, not just the quads themselves.
166 |     m1 = [[[1, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [0, 0, 0, 0]]]
167 |     m2 = [[[0, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [1, 0, 1, 1]]]
168 |     m3 = [[[0, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [0, 0, 0, 0]]]
169 |     m4 = [[[0, 0, 0, 0], [0.5, 0, 0, 1]], [[0.5, 0, 0, 1], [0, 0, 0, 0]]]
170 |     m5 = [[[0, 1, 0, 0], [0.5, 1, 0, 1]], [[0.5, 1, 0, 1], [0, 1, 0, 0]]]
171 |     vec1 = [m1, m2, m3, m4, m5]
172 |     # Spins for these again go even, even, odd.
173 |     info = [[vec1, 0, "z2-even-l-even"], [vec2, 0, "z2-odd-l-even"], [vec3, 1, "z2-odd-l-odd"]]
174 |     cprint("Checking if (" + str(pair1[0]) + ", " + str(pair1[1]) + ") is allowed if we require only one relevant Z2-odd scalar...")
175 |     sdp1 = bootstrap.SDP(pair1, tab_list1, vector_types = info)
176 |     # The pair[1] scalar is Z2-even so have the corresponding channel start here.
177 |     sdp1.set_bound([0, "z2-even-l-even"], pair1[1])
178 |     # The Z2-odd scalars should start at 3 instead and just have pair[0] as a point given our assumption.
179 |     sdp1.set_bound([0, "z2-odd-l-even"], dim)
180 |     sdp1.add_point([0, "z2-odd-l-even"], pair1[0])
181 |     # In this problem, a ruled out point may have primal error smaller than dual error unless we run for much longer.
182 |     sdp1.set_option("dualErrorThreshold", 1e-15)
183 |     allowed = sdp1.iterate(name = "tutorial_3a")
184 |     if (allowed):
185 |         cprint("Yes")
186 |     else:
187 |         cprint("No")
188 |     cprint("Checking if (" + str(pair2[0]) + ", " + str(pair2[1]) + ") is allowed under the same conditions...")
189 |     # All bounds / points changed in the first SDP will be changed again so we may use it as a prototype.
190 |     sdp2 = bootstrap.SDP(pair2, tab_list2, vector_types = info, prototype = sdp1)
191 |     # Does the exact same testing for the second point.
192 |     sdp2.set_bound([0, "z2-even-l-even"], pair2[1])
193 |     sdp2.set_bound([0, "z2-odd-l-even"], dim)
194 |     sdp2.add_point([0, "z2-odd-l-even"], pair2[0])
195 |     sdp2.set_option("dualErrorThreshold", 1e-15)
196 |     allowed = sdp2.iterate(name = "tutorial_3b")
197 |     if (allowed):
198 |         cprint("Yes")
199 |     else:
200 |         cprint("No")
201 |
202 | if choice == 4:
203 |     # This is where a kink begins to appear.
204 |     dim_phi = 1.4
205 |     cprint("Finding a bound on the singlets...")
206 |     # Generates a fairly demanding table in 3.99 dimensions.
207 |     k_max = 25
208 |     l_max = 26
209 |     m_max = 3
210 |     n_max = 5
211 |     g_tab = bootstrap.ConformalBlockTable(3.99, k_max, l_max, m_max, n_max, odd_spins = True)
212 |     # Bring reserved symbols into our namespace to avoid typing "bootstrap" in what follows.
213 |     delta = bootstrap.delta
214 |     ell = bootstrap.ell
215 |     # Four coefficients that show up in the 4D N = 1 expression for superconformal blocks.
216 |     c1 = (delta + ell + 1) * (delta - ell - 1) * (ell + 1)
217 |     c2 = -(delta + ell) * (delta - ell - 1) * (ell + 2)
218 |     c3 = -(delta - ell - 2) * (delta + ell + 1) * ell
219 |     c4 = (delta + ell) * (delta - ell - 2) * (ell + 1)
220 |     # We have c1 beside (delta + 0, ell + 0), c2 beside (delta + 1, ell + 1), c3 beside (delta + 1, ell - 1) and c4 beside (delta + 2, ell).
221 |     combo1 = [[c1, 0, 0], [c2, 1, 1], [c3, 1, -1], [c4, 2, 0]]
222 |     # The second linear combination has signs flipped on the parts with odd spin shift.
223 |     combo2 = [row[:] for row in combo1]  # copy each row so flipping signs does not modify combo1 as well
224 |     combo2[1][0] *= -1
225 |     combo2[2][0] *= -1
226 |     # This makes all of the convolved block tables we need.
227 |     f_tab1a = bootstrap.ConvolvedBlockTable(g_tab)
228 |     f_tab1s = bootstrap.ConvolvedBlockTable(g_tab, symmetric = True)
229 |     f_tab2a = bootstrap.ConvolvedBlockTable(g_tab, content = combo1)
230 |     f_tab2s = bootstrap.ConvolvedBlockTable(g_tab, content = combo1, symmetric = True)
231 |     f_tab3 = bootstrap.ConvolvedBlockTable(g_tab, content = combo2)
232 |     tab_list = [f_tab1a, f_tab1s, f_tab2a, f_tab2s, f_tab3]
233 |     # Sets up a vectorial sum rule just like in example 2.
234 |     vec1 = [[1, 4], [1, 2], [1, 3]]
235 |     vec2 = [[-1, 4], [1, 2], [1, 3]]
236 |     vec3 = [[0, 0], [1, 0], [-1, 1]]
237 |     info = [[vec1, 0, "singlet"], [vec2, 1, "antisymmetric"], [vec3, 0, "symmetric"]]
238 |     # Allocates an SDP and makes it easier for a problem to be recognized as dual feasible.
239 |     sdp = bootstrap.SDP(dim_phi, tab_list, vector_types = info)
240 |     sdp.set_option("dualErrorThreshold", 1e-22)
241 |     # Goes through all the spins and tells the symmetric channel to contain a BPS operator and then a gap.
242 |     for l in range(0, l_max + 1, 2):
243 |         sdp.add_point([l, "symmetric"], 2 * dim_phi + l)
244 |         sdp.set_bound([l, "symmetric"], abs(2 * dim_phi - 3) + 3 + l)
245 |     # Does a long test.
246 |     result = sdp.bisect(3.0, 4.25, 0.01, [0, "singlet"], name = "tutorial_4")
247 |     cprint("If crossing symmetry and unitarity hold, the maximum gap we can have for singlet scalars is: " + str(result))
248 |
--------------------------------------------------------------------------------
/wrap.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | """
3 | This concatenates all of the PyCFTBoot files.
The idea is that only one command is then needed
4 | to send it to a cluster.
5 | """
6 | from __future__ import print_function
7 | import shutil
8 | import sys
9 |
10 | i = 0
11 | files = ["common.py", "compat_autoboot.py", "compat_juliboots.py", "compat_scalar_blocks.py", "blocks1.py", "blocks2.py"]
12 | main_file = open("bootstrap.py", 'r')
13 |
14 | for line in main_file:
15 |     if line[:4] != "exec":
16 |         print(line, end = "")
17 |     else:
18 |         print("")
19 |         f = open(files[i], 'r')
20 |         shutil.copyfileobj(f, sys.stdout)
21 |         f.close()
22 |         i += 1
23 | main_file.close()
24 |
--------------------------------------------------------------------------------
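tutorial.py calls `sdp.bisect(lower, upper, tol, channel)` at every step without spelling out the search it performs. The sketch below is a minimal, self-contained illustration of that bisection strategy; `bisect_gap` and the toy `allowed` oracle (with its arbitrary 1.2 threshold) are hypothetical stand-ins for an SDP plus an SDPB feasibility check, not part of PyCFTBoot.

```python
def bisect_gap(allowed, lower, upper, tol):
    """Narrow the boundary between a gap assumption known to be allowed
    (at `lower`) and one known to be excluded (at `upper`) to within `tol`.
    `allowed` plays the role that an SDPB run plays in sdp.bisect."""
    while upper - lower > tol:
        middle = (lower + upper) / 2.0
        if allowed(middle):
            # This gap is still consistent with crossing: the boundary is higher.
            lower = middle
        else:
            # This gap is ruled out: the boundary is lower.
            upper = middle
    return lower

# Toy oracle: pretend any gap below 1.2 is consistent with crossing symmetry.
result = bisect_gap(lambda gap: gap < 1.2, 0.7, 1.7, 0.01)
```

Each call to the oracle halves the search interval, so reaching a tolerance of 0.01 from the (0.7, 1.7) window of example 1 costs only seven SDPB runs.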