├── CodeRed.png ├── CodeRedSmall.png ├── LICENSE.txt ├── README.md ├── coderedlib.cpp ├── compile_cpp_core.sh ├── example.py ├── experiment_LBB_distrib.py ├── experiment_LBB_perf.py ├── experiment_LLL_profile.py ├── experiment_LLL_time.py ├── middleware.py ├── paper.pdf └── weights.py /CodeRed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lducas/CodeRed/03ef4a3013537252dc8ff92f376cf6879ed3249f/CodeRed.png -------------------------------------------------------------------------------- /CodeRedSmall.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lducas/CodeRed/03ef4a3013537252dc8ff92f376cf6879ed3249f/CodeRedSmall.png -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020: 4 | Leo Ducas 5 | Cryptology Group 6 | Centrum Wiskunde & Informatica 7 | P.O. Box 94079, 1090 GB Amsterdam, Netherlands 8 | ducas@cwi.nl 9 | 10 | Wessel van Woerden 11 | Cryptology Group 12 | Centrum Wiskunde & Informatica 13 | P.O. Box 94079, 1090 GB Amsterdam, Netherlands 14 | ducas@cwi.nl 15 | 16 | Permission is hereby granted, free of charge, to any person obtaining a copy 17 | of this software and associated documentation files (the "Software"), to deal 18 | in the Software without restriction, including without limitation the rights 19 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 20 | copies of the Software, and to permit persons to whom the Software is 21 | furnished to do so, subject to the following conditions: 22 | 23 | The above copyright notice and this permission notice shall be included in all 24 | copies or substantial portions of the Software. 
25 | 26 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 27 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 28 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 29 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 30 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 31 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 32 | SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ![CodeRed](CodeRedSmall.png) CodeRed 2 | Basis Reduction Algorithms for Codes (LLL and more) 3 | 4 | --- 5 | 6 | This small library is a research artefact, paired with the article 7 | 8 | **An Algorithmic Reduction Theory for Binary Codes: LLL and more** 9 | *Thomas Debris--Alazard, Léo Ducas and Wessel P.J. van Woerden* 10 | 11 | The article is available in this [repository](paper.pdf) and on the [IACR eprint](https://eprint.iacr.org/2020/869). 12 | 13 | It provides a C++ core implementing reduction algorithms for codes (LLL, Size-Reduction, Lee-Brickell, Lee-Brickell-Babai), together with a Python interface. 14 | 15 | #### Acknowledgments 16 | 17 | This work was supported by the grant EPSRC EP/S02087X/1, by the European Union Horizon 2020 Research and Innovation Program Grant 780701 (PROMETHEUS) and by the ERC Advanced Grant 740972 (ALGSTRONGCRYPTO). 18 | 19 | #### License: 20 | MIT License, see [LICENSE.txt](LICENSE.txt). 21 | 22 | #### How to cite: 23 | 24 | ``` 25 | @misc{DDW20, 26 | author = {Thomas {Debris-Alazard} and Léo Ducas and Wessel P.J.
van Woerden}, 27 | title = {An Algorithmic Reduction Theory for Binary Codes: LLL and more}, 28 | howpublished = {Cryptology ePrint Archive, Report 2020/869}, 29 | year = {2020}, 30 | note = {\url{https://eprint.iacr.org/2020/869}}, 31 | } 32 | ``` 33 | 34 | --- 35 | ## Installation 36 | 37 | #### Requirements 38 | gcc 39 | python 40 | numpy 41 | 42 | #### Compilation 43 | 44 | To compile the C++ core before reproducing the experiments, simply run 45 | `bash compile_cpp_core.sh`. 46 | 47 | A maximal code length (`n`, a multiple of 64) is hardcoded at compile time, and the above creates binaries for `n=256,384,512,768...` If you use this library for purposes other than reproducing the experiments, please adjust `compile_cpp_core.sh` to your needs. 48 | 49 | --- 50 | ## Reproducing Experiments from the paired article 51 | 52 | Figure 4: `python experiment_LLL_time.py` 53 | Figure 5: `python experiment_LLL_profile.py` 54 | Figure 6: `python experiment_LBB_distrib.py` 55 | Figure 7: `python experiment_LBB_perf.py` 56 | 57 | To reach very large dimensions, the last experiment may be run on several cores by editing the python script. 58 | 59 | --- 60 | ## Usage from Python 61 | 62 | All reduction algorithms are available through the class `CodeRedLib` and are applied to the internally stored basis. All inputs/outputs are `numpy` arrays. 63 | 64 | #### Example (Code and execution of [example.py](example.py)): 65 | 66 | ``` python 67 | >>> from numpy import array, random 68 | >>> from middleware import CodeRedLib 69 | >>> 70 | >>> B = random.randint(0,2, size=(5, 16), dtype="bool") # Create a random basis for a [16,5]-code 71 | >>> red = CodeRedLib(B) # Load it into a fresh CodeRedLib object 72 | >>> # The above fails in the unlucky case of the code C(B) not being of full rank or of full length. 73 | >>> 74 | >>> def niceprint(B): 75 | ... for v in B: 76 | ... print("".join(["1" if x else "." for x in v])) 77 | ... print() 78 | ... 
79 | >>> 80 | >>> niceprint(red.B) # Print current basis 81 | 1..1.1.1...1.1.. 82 | 11..1..1111..1.1 83 | 11.11.1.1...111. 84 | .111.11.1....1.1 85 | 11....1.1.1.1... 86 | >>> niceprint(red.E) # Print current Epipodal matrix 87 | 1..1.1.1...1.1.. 88 | .1..1...111....1 89 | ......1.....1.1. 90 | ..1............. 91 | ................ 92 | >>> print(red.l) # Print current Profile 93 | [6 6 3 1 0] 94 | >>> 95 | >>> red.LLL() # Apply LLL 96 | >>> 97 | >>> niceprint(red.B) # Print current basis 98 | 1..1.1.1...1.1.. 99 | ..1....1..111..1 100 | .111.11.1....1.1 101 | 1.111111.11..... 102 | 11.11.1.1...111. 103 | >>> niceprint(red.E) # Print current Epipodal matrix 104 | 1..1.1.1...1.1.. 105 | ..1.......1.1..1 106 | .1....1.1....... 107 | ....1....1...... 108 | ..............1. 109 | >>> print(red.l) # Print current Profile 110 | [6 4 3 2 1] 111 | ``` 112 | 113 | #### API: 114 | 115 | Function names match those of the paper. 116 | 117 | Basis processing functions: 118 | ``` python 119 | Randomize() 120 | Systematize() 121 | LLL() 122 | EpiSort() 123 | SizeRedBasis() 124 | SemiSystematize() 125 | KillTwos() 126 | ``` 127 | 128 | Functions for finding short words in the code C (or in the coset t+C): 129 | ``` python 130 | SizeRed(t) 131 | LB(w2, goal_w=None, t=None, stats=False) 132 | LBB(k1, w2, goal_w=None, t=None, stats=False) 133 | ``` 134 | Parameters: 135 | - The default value for the target `t=None` is interpreted as the zero vector of length n (*i.e.* LB/LBB search for a short codeword rather than a close codeword). 136 | - Parameters `k1` and `w2` are integers, whose roles are described in the paper. 137 | - Leave the default value `goal_w=None` to get the shortest visited codeword as output. Set `goal_w` to an integer to return as soon as a codeword of weight at most `goal_w` is found (`None` is returned if the goal is not met). 138 | - Set `stats=True` to instead get as return value the counts of visited codewords of each weight.
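To sanity-check the example output above without compiling the C++ core, the epipodal matrix and profile can be recomputed in plain `numpy`. This is only an illustrative sketch of the update rule used by `UpdateEP` in `coderedlib.cpp` (`E[i] = B[i] & P[i]`, `P[i+1] = P[i] & ~B[i]`); the helper name `epipodal` is ours, not part of the library:

``` python
import numpy as np

def epipodal(B):
    """Recompute the epipodal matrix E and profile l of a boolean basis B.

    Mirrors UpdateEP from coderedlib.cpp: P starts all-ones,
    E[i] = B[i] & P, l[i] = |E[i]|, then P &= ~B[i].
    """
    k, n = B.shape
    P = np.ones(n, dtype=bool)   # current projector (still-available positions)
    E = np.zeros_like(B)
    l = np.zeros(k, dtype=int)
    for i in range(k):
        E[i] = B[i] & P          # project B[i] away from previous supports
        l[i] = E[i].sum()        # epipodal length
        P &= ~B[i]               # remove the support of B[i] from the projector
    return E, l
```

Applied to the first basis printed above, this reproduces the epipodal matrix and the profile `[6 6 3 1 0]`.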
139 | 140 | #### Fundamental Domain and Probabilities 141 | 142 | Functions for computing the distributions of weights of words in the fundamental domain W(l) for a given profile l are provided in [weights.py](weights.py). 143 | 144 | --- 145 | ## Extending the C++ Core 146 | 147 | The "plumbing" from C++ to Python is done via the `ctypes` library. This requires boilerplate both in the `middleware.py` file defining the `CodeRedLib` class, and within the 148 | ``` c++ 149 | extern "C" { 150 | ... 151 | } 152 | ``` 153 | statement at the end of the `coderedlib.cpp` file. 154 | -------------------------------------------------------------------------------- /coderedlib.cpp: -------------------------------------------------------------------------------- 1 | #include <cstdint> 2 | #include <cstdlib> 3 | #include <ctime> 4 | #include <cassert> 5 | #include <iostream> 6 | #include <vector> 7 | #include <bitset> 8 | #include <array> 9 | #include <algorithm> 10 | #include <utility> 11 | #include <unistd.h> 12 | 13 | using namespace std; 14 | 15 | #define max_n maxn // Hardcode maximal code length as a compilation parameter maxn 16 | 17 | typedef bitset<max_n> binvec; 18 | typedef vector<binvec> binmat; 19 | 20 | int skip = 0; 21 | 22 | size_t n, k; // The length n and dimension k of the code.
23 | 24 | binmat B; // The basis of the code 25 | binmat E; // The epipodal matrix 26 | binmat P; // The cumulative projector matrix P[i] = &_{j<i} ~B[j] 27 | 28 | vector<size_t> l; // The profile of the basis (list of epipodal lengths) 29 | 30 | inline int64_t popcnt(binvec& t) 31 | { 32 | int64_t ham1 = 0; 33 | int64_t ham2 = 0; 34 | uint64_t * t_ = (uint64_t *) &t; 35 | 36 | for (int j = 0; j < maxn/64; j+=4) 37 | { 38 | ham1 +=__builtin_popcountll(t_[j+0]); 39 | ham2 +=__builtin_popcountll(t_[j+1]); 40 | ham1 +=__builtin_popcountll(t_[j+2]); 41 | ham2 +=__builtin_popcountll(t_[j+3]); 42 | } 43 | return ham1 + ham2; 44 | } 45 | 46 | inline int64_t AND_popcnt(binvec& t, binvec& e) 47 | { 48 | int64_t ham1 = 0; 49 | int64_t ham2 = 0; 50 | uint64_t * t_ = (uint64_t *) &t; 51 | uint64_t * e_ = (uint64_t *) &e; 52 | 53 | for (int j = 0; j < maxn/64; j+=4) 54 | { 55 | ham1 +=__builtin_popcountll(t_[j+0] & e_[j+0]); 56 | ham2 +=__builtin_popcountll(t_[j+1] & e_[j+1]); 57 | ham1 +=__builtin_popcountll(t_[j+2] & e_[j+2]); 58 | ham2 +=__builtin_popcountll(t_[j+3] & e_[j+3]); 59 | } 60 | return ham1 + ham2; 61 | } 62 | 63 | // Update the epipodal vectors from beg to end, assuming it is up to date up to beg already.
64 | void UpdateEP(size_t beg, size_t end) 65 | { 66 | assert(beg <= end); 67 | assert(end <= k); 68 | 69 | P[0].set(); 70 | for (int i = beg; i < end; ++i) 71 | { 72 | E[i] = B[i] & P[i]; 73 | l[i] = E[i].count(); 74 | P[i+1] = P[i] & ~B[i]; 75 | } 76 | } 77 | 78 | void UpdateEP() 79 | { 80 | UpdateEP(0, k); 81 | } 82 | 83 | 84 | 85 | // Size-reduce the basis from beg to end. 86 | void SizeRedBasis(size_t beg, size_t end) 87 | { 88 | assert(beg <= end); 89 | assert(end <= k); 90 | 91 | for (int j = end-1; j >= (int) beg; --j) 92 | { 93 | for (int i = j-1; i >= (int) beg; --i) 94 | { 95 | if ((B[j] & P[i]).count() > ((B[j]^B[i]) & P[i]).count()) B[j] ^= B[i]; 96 | } 97 | } 98 | } 99 | 100 | // Apply a random transformation to the basis 101 | void Randomize(bool light=true) 102 | { 103 | size_t steps = light ? 3*k : k*k; 104 | 105 | for (size_t t = 0; t < steps; ++t) 106 | { 107 | size_t i = rand() % k; 108 | size_t j = rand() % k; 109 | if (i==j) continue; 110 | B[i] ^= B[j]; 111 | } 112 | UpdateEP(0, k); 113 | } 114 | 115 | // Put the basis in systematic form, according to a random information set. 116 | void Systematize() 117 | { 118 | for (int i = 0; i < k; ++i) 119 | { 120 | size_t pivot = rand() % n; 121 | while (!B[i][pivot]) pivot = rand() % n; 122 | for (int j = 0; j < k; ++j) 123 | { 124 | if (i==j) continue; 125 | if (B[j][pivot]) B[j] ^= B[i]; 126 | } 127 | } 128 | UpdateEP(0, k); 129 | } 130 | 131 | void EpiSort() 132 | { 133 | binvec p; 134 | p.set(); 135 | 136 | for (int i = 0; i < k; ++i) 137 | { 138 | size_t best_w=n, best_j=i; 139 | for (int j = i; j < k; ++j) 140 | { 141 | int w = (p&B[j]).count(); 142 | 143 | if (w < best_w) 144 | { 145 | best_w = w; 146 | best_j = j; 147 | } 148 | } 149 | if (i != best_j) swap(B[i], B[best_j]); 150 | p &= ~B[i]; 151 | } 152 | 153 | UpdateEP(0, k); 154 | } 155 | 156 | 157 | 158 | 159 | // Put the basis B into semi-systematic form, only permuting its Epipodal matrix E.
160 | // E and P are not maintained during the computation but simply recomputed at the end. 161 | void SemiSystematize() 162 | { 163 | SizeRedBasis(0, k); 164 | for (int i = 0; i < k; ++i) 165 | { 166 | for (int j = 0; j < k-1; ++j) 167 | { 168 | if ((l[j]==1) & (l[j+1]>1)) 169 | { 170 | swap(l[j], l[j+1]); 171 | swap(B[j], B[j+1]); 172 | } 173 | } 174 | } 175 | UpdateEP(); 176 | } 177 | 178 | void export_mat(/*output*/ char* M_, /*input*/ binmat& M) 179 | { 180 | size_t i = 0; 181 | for (auto& v : M) 182 | { 183 | for (size_t k = 0; k < n; ++k) 184 | { 185 | M_[i] = v[k]; 186 | i++; 187 | } 188 | } 189 | } 190 | 191 | 192 | void LLL(size_t beg, size_t end) 193 | { 194 | assert(end <= k); 195 | size_t i = beg; 196 | binvec p; 197 | 198 | // Loop invariant: the basis is LLL-reduced from beg to i. 199 | while(i+1 < end) 200 | { 201 | // define the projection 202 | p = P[i]; 203 | 204 | // Local size-reduction 205 | if (((B[i+1]^B[i]) & p).count() < ((B[i+1]) & p).count()) B[i+1] ^= B[i]; 206 | 207 | // Lovasz condition 208 | if ((B[i+1] & p).count() < (B[i] & p).count()) 209 | { 210 | swap(B[i+1], B[i]); 211 | 212 | // Update auxiliary data 213 | E[i] = B[i] & P[i]; 214 | l[i] = E[i].count(); 215 | P[i+1] = P[i] & ~B[i]; 216 | 217 | E[i+1] = B[i+1] & P[i+1]; 218 | l[i+1] = E[i+1].count(); 219 | P[i+2] = P[i+1] & ~B[i+1]; 220 | 221 | if (i > beg) 222 | { 223 | --i; 224 | continue; 225 | } 226 | } 227 | ++i; 228 | } 229 | } 230 | 231 | 232 | 233 | void KillTwos() 234 | { 235 | for (int i = 0; i < k; ++i) 236 | { 237 | if (l[i] != 2) continue; 238 | for (int j = i+1; j < k; ++j) 239 | { 240 | if (l[j] != 2) continue; 241 | if ((B[j] & P[i]).count() != 3) continue; 242 | swap(B[i], B[j]); 243 | UpdateEP(i, j+1); 244 | LLL(i+1, k); 245 | break; 246 | } 247 | } 248 | SizeRedBasis(0, k); 249 | } 250 | 251 | 252 | // Set up auxiliary data for enumeration. 253 | // The first and last elements are only helpers for streamlining iteration.
254 | vector<int> start(size_t beg, size_t end, size_t w) 255 | { 256 | vector<int> res; 257 | res.push_back((int) beg - 1); 258 | for (size_t i = beg; i < beg+w; ++i) res.push_back(-1); 259 | res.push_back(end); 260 | return res; 261 | } 262 | 263 | // A helper function to enumerate targets of weight up to w, as follows. 264 | // Example: enumerating subsets of size at most 3 out of 5 positions 265 | // 0 1 2 3 4 266 | // 01 02 12 03 13 23 04 14 24 34 267 | // 012 013 023 123 014 024 124 034 134 234 268 | inline bool next(binvec& t, vector<int>& e) 269 | { 270 | for (size_t i = 1; i < e.size()-1; ++i) 271 | { 272 | if (e[i] >= 0) 273 | { 274 | // clear codeword from target 275 | t ^= B[e[i]]; 276 | ++e[i]; 277 | if (i > 1) e[i] += skip; // Only search a fraction of the space for other indices 278 | } 279 | else 280 | { 281 | e[i] = e[i-1]+1; 282 | } 283 | 284 | if ((e[i] < e[i+1]) | ((e[i+1] < 0) & (e[i] < e[e.size()-1])) ) 285 | { 286 | // add the next codeword 287 | t ^= B[e[i]]; 288 | return true; 289 | } 290 | else 291 | { 292 | // reset the coordinate 293 | e[i] = e[i-1]+1; 294 | // add the codeword 295 | t ^= B[e[i]]; 296 | // move on to the next coordinate 297 | } 298 | } 299 | return false; 300 | } 301 | 302 | void TestEnum(int p, int w) 303 | { 304 | vector<int> enumerator = start(p, k, w); 305 | binvec t; 306 | t.reset(); 307 | 308 | while(next(t, enumerator)) 309 | { 310 | cerr << t << endl; 311 | } 312 | 313 | } 314 | 315 | bool LB(binvec& tt, size_t w2, int goal_w, uint64_t* stats) 316 | { 317 | binvec t = tt; 318 | vector<int> enumerator = start(0, k, w2); 319 | 320 | // If no goal set, just return the best visited solution 321 | int best_w = goal_w > 0 ?
goal_w + 1 : tt.count(); 322 | if (best_w==0) best_w=n; 323 | 324 | while(next(t, enumerator)) 325 | { 326 | size_t w = popcnt(t); 327 | if (stats) stats[w]++; 328 | if (w >= best_w) continue; 329 | if (w == 0) continue; 330 | tt = t; 331 | if (goal_w > 0) return true; 332 | best_w = w; 333 | } 334 | return (goal_w==0); 335 | } 336 | 337 | 338 | // Size-reduce the target word t with respect to a (segment of) the basis B 339 | inline void SizeRed(binvec& t, size_t beg, size_t end) 340 | { 341 | for (int i = end-1; i >= (int) beg; --i) 342 | { 343 | // This is the most critical piece: helping the compiler. 344 | // For some reason using (t & P[i]).count() gets slow for n > 1024. 345 | int64_t ham = AND_popcnt(t, E[i]); 346 | if (2*ham > l[i]) t ^= B[i]; 347 | } 348 | } 349 | 350 | 351 | 352 | // Making the critical data contiguous 353 | vector<binvec> stream_SR; 354 | inline void StreamSizeRed(array<binvec, 4>& ts, size_t k1) 355 | { 356 | for (int i = 0; i < k1; ++i) 357 | { 358 | // This is the most critical loop: helping the compiler. 359 | // For some reason using (t & P[i]).count() gets slow for n > 1024. 360 | for (int j = 0; j < 4; ++j) 361 | { 362 | int64_t ham = AND_popcnt(ts[j], stream_SR[2*i]); 363 | if (2*ham > l[k1 -1 - i]) ts[j] ^= stream_SR[2*i+1]; 364 | } 365 | } 366 | } 367 | 368 | // Lee-Brickell-Babai 369 | bool LBB(binvec& tt, size_t k1, size_t w2, int goal_w, uint64_t* stats) 370 | { 371 | binvec t = tt; 372 | array<binvec, 4> ts; 373 | vector<int> enumerator = start(k1, k, w2); 374 | stream_SR.resize(2 * k1); 375 | for (int i = 0; i < k1; ++i) 376 | { 377 | stream_SR[2*i] = E[k1 - 1 - i]; 378 | stream_SR[2*i+1] = B[k1 - 1 - i]; 379 | } 380 | 381 | 382 | // If no goal set, just return the best visited solution 383 | int best_w = goal_w > 0 ?
goal_w + 1 : tt.count(); 384 | if (best_w==0) best_w=n; 385 | 386 | bool notover = true; 387 | while(notover) 388 | { 389 | 390 | notover &= next(t, enumerator); 391 | ts[0] = t; 392 | notover &= next(t, enumerator); 393 | ts[1] = t; 394 | notover &= next(t, enumerator); 395 | ts[2] = t; 396 | notover &= next(t, enumerator); 397 | ts[3] = t; 398 | 399 | StreamSizeRed(ts, k1); 400 | 401 | for (int i = 0; i < 4; ++i) 402 | { 403 | size_t w = popcnt(ts[i]); 404 | if (stats) stats[w]++; 405 | if (w >= best_w) continue; 406 | if (w == 0) continue; 407 | tt = ts[i]; 408 | if (goal_w > 0) return true; 409 | best_w = w; 410 | } 411 | } 412 | return (goal_w==0); 413 | } 414 | 415 | 416 | 417 | 418 | extern "C" 419 | { 420 | void _setup(/*input*/ size_t k_, size_t n_, char* B_, long seed=0) 421 | { 422 | k = k_; 423 | n = n_; 424 | assert(n <= max_n); 425 | if (seed==0) seed = time(NULL)+99997*getpid()+123*clock(); 426 | srand(seed); 427 | 428 | B.clear(); 429 | P.clear(); 430 | E.clear(); 431 | l.clear(); 432 | 433 | binvec v, zero; 434 | zero.reset(); 435 | 436 | l.resize(k); 437 | P.push_back(zero); 438 | 439 | size_t i = 0; 440 | for (size_t j = 0; j < k; ++j) 441 | { 442 | v.reset(); 443 | for (size_t k = 0; k < n; ++k) 444 | { 445 | v[k] = B_[i]; 446 | i++; 447 | } 448 | B.push_back(v); 449 | E.push_back(zero); 450 | P.push_back(zero); 451 | } 452 | UpdateEP(); 453 | } 454 | 455 | void _export_all(/*output*/ char* B_, char* E_, char* P_, long* l_) 456 | { 457 | export_mat(B_, B); 458 | export_mat(E_, E); 459 | export_mat(P_, P); 460 | 461 | for (int i = 0; i < k; ++i) 462 | { 463 | l_[i] = l[i]; 464 | } 465 | } 466 | 467 | void _LLL() 468 | { 469 | LLL(0, k); 470 | } 471 | 472 | bool _LBB(char* tt_, size_t k1, size_t w2, int goal_w, uint64_t* stats) 473 | { 474 | binvec tt; 475 | for (int i = 0; i < n; ++i) tt[i] = tt_[i]; 476 | bool res = LBB(tt, k1, w2, goal_w, stats); 477 | for (int i = 0; i < n; ++i) tt_[i] = tt[i]; 478 | return res; 479 | } 480 | 481 | bool 
_LB(char* tt_, size_t w2, int goal_w, uint64_t* stats) 482 | { 483 | binvec tt; 484 | for (int i = 0; i < n; ++i) tt[i] = tt_[i]; 485 | bool res = LB(tt, w2, goal_w, stats); 486 | for (int i = 0; i < n; ++i) tt_[i] = tt[i]; 487 | return res; 488 | } 489 | 490 | void _TestEnum(int p, int w) 491 | { 492 | TestEnum(p, w); 493 | } 494 | 495 | void _SizeRedBasis() 496 | { 497 | SizeRedBasis(0, k); 498 | } 499 | void _SizeRed(char* tt_) 500 | { 501 | binvec tt; 502 | for (int i = 0; i < n; ++i) tt[i] = tt_[i]; 503 | SizeRed(tt, 0, k); 504 | for (int i = 0; i < n; ++i) tt_[i] = tt[i]; 505 | } 506 | 507 | void _Systematize() 508 | { 509 | Systematize(); 510 | } 511 | 512 | void _EpiSort() 513 | { 514 | EpiSort(); 515 | } 516 | 517 | 518 | void _SemiSystematize() 519 | { 520 | SemiSystematize(); 521 | } 522 | 523 | void _KillTwos() 524 | { 525 | KillTwos(); 526 | } 527 | 528 | void _Randomize(int light=1) 529 | { 530 | Randomize(light); 531 | } 532 | void _set_skip(int skip_) 533 | { 534 | skip = skip_; 535 | } 536 | } -------------------------------------------------------------------------------- /compile_cpp_core.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | rm -rf bin 4 | mkdir bin 5 | 6 | for i in 256 384 512 768 1024 1280 1536 2048 3072 4096 6144 8192 10240 12288 16384 24576 32768 49152 65536 7 | do 8 | g++ -fPIC -O3 -march=native -funroll-loops -std=c++14 -c -Dmaxn=$i coderedlib.cpp -o bin/coderedlib-$i.o # -march=native might not be supported on arm 9 | g++ -shared -O3 -march=native -funroll-loops -std=c++14 bin/coderedlib-$i.o -o bin/coderedlib-$i.so 10 | done 11 | -------------------------------------------------------------------------------- /example.py: -------------------------------------------------------------------------------- 1 | from numpy import array, random 2 | from middleware import CodeRedLib 3 | 4 | B = random.randint(0,2, size=(5, 16), dtype="bool") # Create a random Basis for a 
[16,5]-code 5 | red = CodeRedLib(B) # Load it into a fresh CodeRedLib object 6 | 7 | def niceprint(B): 8 | for v in B: 9 | print("".join(["1" if x else "." for x in v])) 10 | print() 11 | 12 | 13 | niceprint(red.B) # Print current basis 14 | niceprint(red.E) # Print current Epipodal matrix 15 | print(red.l) # Print current Profile 16 | 17 | red.LLL() # Apply LLL 18 | 19 | niceprint(red.B) # Print current basis 20 | niceprint(red.E) # Print current Epipodal matrix 21 | print(red.l) # Print current Profile 22 | -------------------------------------------------------------------------------- /experiment_LBB_distrib.py: -------------------------------------------------------------------------------- 1 | from numpy import zeros, matrix, array, random 2 | from middleware import CodeRedLib 3 | from time import time 4 | from math import floor 5 | from weights import * 6 | 7 | def experiment(n, k, samples): 8 | curves = zeros((4, n+1)) 9 | G = random.randint(0,2, size=(k, n), dtype="bool") 10 | red = CodeRedLib(G) 11 | 12 | for s in range(samples): 13 | 14 | T0 = time() 15 | red.Randomize() 16 | red.Systematize() 17 | T1 = time() 18 | print("Produced Systematic form in %.4fs"%(T1 - T0)) 19 | 20 | T0 = time() 21 | stats = red.LB(3, stats = True) 22 | curves[0] += stats 23 | T1 = time() 24 | print("Ran LB(3) in %.4fs"%(T1 -T0)) 25 | print("Visited %.4f codewords"%sum(stats)) 26 | 27 | stats = weights_LB(n, k, 3) 28 | print("Predicted %.4f codewords"%sum(stats)) 29 | curves[1] += array(stats+(n+1)*[0])[:n+1] 30 | 31 | print() 32 | T0 = time() 33 | red.Randomize() 34 | red.Systematize() 35 | red.EpiSort() 36 | red.LLL() 37 | red.KillTwos() 38 | k1 = red.SemiSystematize() 39 | 40 | T1 = time() 41 | print("Produced Semisystematic form in %.4fs"%(T1 - T0)) 42 | print("k1 = %d"%k1) 43 | print("profile = ", red.l[:k1+1]) 44 | assert(sum(red.l[:k1]) == k+k1) 45 | assert(min(red.l[k1:]) == 1) 46 | assert(max(red.l[k1:]) == 1) 47 | 48 | T0 = time() 49 | stats = red.LBB(k1, 3, 
stats=True) 50 | curves[2] += stats 51 | T1 = time() 52 | print("Ran LBB(k1, 3) in %.4fs"%(T1 -T0)) 53 | 54 | print("Visited %.4f codewords"%sum(stats)) 55 | stats = weights_LBB(red.l, k1, 3) 56 | stats = array(stats +(n+1)*[0]) 57 | print("Predicted %.4f codewords"%sum(stats)) 58 | curves[3] += stats[:n+1] 59 | 60 | return curves/samples 61 | 62 | 63 | n = 1280 64 | k = 640 65 | 66 | curves = experiment(n, k, 1) 67 | 68 | print("w, LB_exp, LB_pred, LBB_exp, LBB_pred") 69 | C = curves.transpose() 70 | for i in range(230, 340): 71 | print(i, "\t %.3e \t%.3e \t%.3e \t%.3e \t"%tuple(C[i])) 72 | -------------------------------------------------------------------------------- /experiment_LBB_perf.py: -------------------------------------------------------------------------------- 1 | from numpy import zeros, matrix, array, random 2 | from middleware import CodeRedLib 3 | from time import time 4 | from math import floor, ceil, sqrt, log 5 | 6 | from multiprocessing import Pool 7 | from random import randint 8 | from weights import * 9 | 10 | def one_experiment(par): 11 | n, k, seed = par 12 | 13 | h = int(ceil(0.115 * n)) 14 | 15 | data = zeros(9) 16 | G = random.randint(0,2, size=(k, n), dtype="bool") 17 | red = CodeRedLib(G, seed=seed) 18 | 19 | # Define the skip parameter to only explore 20 | # a portion of the space and get relevant data in reasonable time. 21 | # This speeds up things by (1+skip)^{w2-1} where w2=3 is the LB/LBB parameter 22 | 23 | skip = floor(sqrt(k/256.)) 24 | red.set_skip(skip) 25 | 26 | T0 = time() 27 | red.Randomize() 28 | red.Systematize() 29 | red.LB(3) 30 | T1 = time() 31 | predicted_distr, denom = weights_LB_absolute(n, k, 3) 32 | data[0] += log(sum(predicted_distr[:h+1]), 2) - log(denom,2) 33 | data[1] += T1 - T0 34 | 35 | T0 = time() 36 | red.Randomize() 37 | red.Systematize() 38 | red.EpiSort() 39 | red.LLL() 40 | red.KillTwos() 41 | k1 = red.SemiSystematize() 42 | red.LBB(k1, 3) 43 | 44 | T1 = time() 45 | predicted_distr, denom = 
weights_LBB_absolute(red.l, k1, 3) 46 | data[2] += log(sum(predicted_distr[:h+1]), 2) - log(denom,2) 47 | data[3] += T1 - T0 48 | data[7] += k1 49 | return data 50 | 51 | 52 | def experiment(n, k, samples, cores=1): 53 | if cores == 1: 54 | res = [one_experiment((n, k, randint(0,2**63))) for i in range(samples)] 55 | else: 56 | p = Pool(cores) 57 | res = p.map(one_experiment, [(n, k, randint(0,2**63)) for i in range(samples)]) 58 | 59 | return sum(res)/samples 60 | 61 | print("n, \t LB_logP, \tLB_time, \tLBB_logP, \tLBB_time, \t ProbaGain, \tTimeLoss, \tGain, \t\tk1, \tCGain") 62 | for n in [128, 192, 256, 384, 512, 768, 1024, 1280, 1536, 2048, 3072, 4096, 6144, 8192, 12288, 16384]: 63 | k = n//2 64 | C = experiment(n, k, 12, cores=4) 65 | C[4] = 2**(C[2]-C[0]) # Proba gain LBB/LB 66 | C[5] = C[3]/C[1] # time loss LBB/LB 67 | C[6] = C[4]/C[5] # overall gain LBB/LB 68 | C[8] = C[4]/C[7] # overall Corrected gain LBB/LB 69 | print(n, ",\t %.1f, \t%.3f, \t%.1f, \t%.3f, \t%.3f, \t%.3f, \t%.3f, \t%.1f, \t%.3f"%tuple(C)) 70 | -------------------------------------------------------------------------------- /experiment_LLL_profile.py: -------------------------------------------------------------------------------- 1 | from numpy import zeros, matrix, array, random 2 | from middleware import CodeRedLib 3 | from time import time 4 | from math import floor 5 | 6 | def experiment(n, k, samples): 7 | profiles = zeros((6, k)) 8 | G = random.randint(0,2, size=(k, n), dtype="bool") 9 | red = CodeRedLib(G) 10 | 11 | for s in range(samples): 12 | 13 | red.Randomize() 14 | red.LLL() 15 | profiles[0] += array(sorted(list(red.l), reverse=True)) / (1.*samples) 16 | 17 | 18 | red.Randomize() 19 | red.Systematize() 20 | profiles[1] += array(sorted(list(red.l), reverse=True)) / (1.*samples) 21 | red.LLL() 22 | profiles[2] += array(sorted(list(red.l), reverse=True)) / (1.*samples) 23 | 24 | red.Randomize() 25 | red.Systematize() 26 | red.EpiSort() 27 | profiles[3] += 
array(sorted(list(red.l), reverse=True)) / (1.*samples) 28 | red.LLL() 29 | profiles[4] += array(sorted(list(red.l), reverse=True)) / (1.*samples) 30 | 31 | red.SizeRedBasis() 32 | red.KillTwos() 33 | profiles[5] += array(sorted(list(red.l), reverse=True)) / (1.*samples) 34 | 35 | return profiles 36 | 37 | n = 1280 38 | k = int(n/2) 39 | samples = 100 40 | M = experiment(n, k, samples) 41 | C = M.transpose() 42 | 43 | print("index, pLLL_raw, pSys, pLLL_Sys, pSort, pLLL_Sort, pLLL_Sort_K2") 44 | for i in range(25): 45 | print(i+1, ", \t %.2f,\t %.2f,\t %.2f,\t %.2f,\t %.2f, \t %.2f \t"%tuple(C[i])) 46 | 47 | -------------------------------------------------------------------------------- /experiment_LLL_time.py: -------------------------------------------------------------------------------- 1 | from numpy import zeros, array, random 2 | from middleware import CodeRedLib 3 | from time import time 4 | 5 | 6 | def experiment(n, k, samples): 7 | times = zeros(5) 8 | G = random.randint(0,2, size=(k, n), dtype="bool") 9 | red = CodeRedLib(G) 10 | 11 | for s in range(samples): 12 | 13 | 14 | T0 = time() 15 | red.Randomize() 16 | red.LLL() 17 | T1 = time() 18 | 19 | 20 | red.Randomize() 21 | red.Systematize() 22 | T2 = time() 23 | red.LLL() 24 | T3 = time() 25 | 26 | red.Randomize() 27 | red.Systematize() 28 | red.EpiSort() 29 | T4 = time() 30 | red.LLL() 31 | 32 | T5 = time() 33 | times += array([T1 - T0, T3 - T1, T3 - T2, T5 - T3, T5 - T4]) 34 | 35 | return times/samples 36 | 37 | 38 | print("n, tLLL_raw, tLLL_Sys, tLLL_afterSys, tLLL_Sort, tLLL_afterSort") 39 | for n in [128, 192, 256, 384, 512, 768, 1024, 1280, 1536, 2048, 3072, 4096, 6144, 8192, 12288, 16384]: 40 | k = int(n/2) 41 | v = experiment(n, k, 10) 42 | print (n, ",\t %.4e,\t %.4e,\t %.4e,\t %.4e,\t %.4e"%tuple(v)) -------------------------------------------------------------------------------- /middleware.py: -------------------------------------------------------------------------------- 1 | from numpy 
import zeros, float64, int64, array, random 2 | import ctypes 3 | import _ctypes 4 | from math import ceil, floor 5 | 6 | import sys 7 | if sys.version_info[0] < 3: 8 | raise Exception("Must be using Python 3") 9 | 10 | def c_char_ptr(x): 11 | return x.ctypes.data_as(ctypes.POINTER(ctypes.c_char)) 12 | 13 | def c_long_ptr(x): 14 | return x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)) 15 | 16 | def ham(v): 17 | return sum(v) 18 | 19 | 20 | 21 | # CodeRedLib: a python wrapper for the c++ coderedlib.cpp 22 | 23 | # coderedlib is compiled many times with various values of maxn 24 | # make sure the values you want to use are listed in compile_cpp_core.sh 25 | 26 | # Function names match those in the paper. They all act on the internal state: 27 | # self.B : the basis 28 | # self.E : The epipodal matrix 29 | # self.P : The cumulative projector matrix P[i] = &_{j<i} ~B[j] 90 | if self.l[k1] > 1: 91 | return k1+1 92 | return 0 93 | 94 | def KillTwos(self): 95 | self.lib._KillTwos() 96 | self.update() 97 | 98 | 99 | # Used to speed up LB/LBB experiments in large dimensions by only 100 | # visiting a (1+skip)^{1-w2} fraction of the enumerated space.
101 | def set_skip(self, skip): 102 | return self.lib._set_skip(int(floor(skip))) 103 | 104 | 105 | def SizeRed(self, t): 106 | return self.lib._SizeRed(c_char_ptr(t)) 107 | 108 | 109 | def LB(self, w2, goal_w=None, t=None, stats=False): 110 | tt = zeros(self.n, dtype='bool') if t is None else 1 * t 111 | _stats = zeros(self.n+1, dtype='int64') if stats else None 112 | 113 | success = self.lib._LB(c_char_ptr(tt), w2, 114 | 0 if goal_w is None else goal_w, 115 | c_long_ptr(_stats) if stats else None) 116 | 117 | if stats: 118 | return _stats 119 | if success or goal_w is None: 120 | return tt 121 | 122 | 123 | def LBB(self, k1, w2, goal_w=None, t=None, stats=False): 124 | tt = zeros(self.n, dtype='bool') if t is None else 1 * t 125 | _stats = zeros(self.n+1, dtype='int64') if stats else None 126 | 127 | success = self.lib._LBB(c_char_ptr(tt), k1, w2, 128 | 0 if goal_w is None else goal_w, 129 | c_long_ptr(_stats) if stats else None) 130 | 131 | if stats: 132 | return _stats 133 | if success or goal_w is None: 134 | return tt -------------------------------------------------------------------------------- /paper.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lducas/CodeRed/03ef4a3013537252dc8ff92f376cf6879ed3249f/paper.pdf -------------------------------------------------------------------------------- /weights.py: -------------------------------------------------------------------------------- 1 | import warnings 2 | import operator as op 3 | import numpy as np 4 | from functools import reduce 5 | 6 | import sys 7 | if sys.version_info[0] < 3: 8 | raise Exception("Must be using Python 3") 9 | 10 | 11 | # n choose r 12 | def comb(n, r): 13 | r = min(r, n-r) 14 | numer = reduce(op.mul, range(n, n-r, -1), 1) 15 | denom = reduce(op.mul, range(1, r+1), 1) 16 | return numer // denom 17 | 18 | # convolution of two distributions/measures with support {0, 1, ..., n} 19 | def convol(A, B): 20 | C = [0 
for i in range(len(A)+len(B))] 21 | for x in range(len(A)): 22 | for y in range(len(B)): 23 | C[x+y] += A[x]*B[y] 24 | return C 25 | 26 | # returns the weight enumerator of a ball of radius r in F_2^n 27 | def weights_ball(n, r): 28 | return [comb(n, int(i)) for i in range(r+1)] 29 | 30 | def volume_ball(n,r): 31 | return sum( weights_ball(n,r) ) 32 | 33 | # returns the weight enumerator of a fundamental ball of length n 34 | def weights_fundamental_ball(n): 35 | if n%2 > 0: 36 | L = [comb(n, i) for i in range((n+1)//2)] 37 | else: 38 | L = [comb(n, i) for i in range((n+1)//2)] + [comb(n, (n+1)//2)//2] 39 | return L 40 | 41 | # returns the weight enumerator of the SizeRed fundamental domain 42 | def weights_fundamental_domain(profile): 43 | fundamental_balls = [weights_fundamental_ball(int(l)) for l in profile] 44 | n = sum(profile) 45 | k = len(profile) 46 | C = fundamental_balls[0] 47 | for i in range(1, k): 48 | C = convol(C, fundamental_balls[i]) 49 | return C 50 | 51 | # return the expected weights of visited codewords by Lee-Brickell 52 | def weights_LB(n, k, w2): 53 | a,b = weights_LB_absolute(n, k, w2) 54 | return [x/float(b) for x in a] 55 | 56 | # returns integer list a and integer b such that a/b = weights_LB(n,k,w2) 57 | # prevents overflow 58 | def weights_LB_absolute(n, k, w2): 59 | A = weights_ball(n-k, n-k) 60 | B = weights_ball(k, w2) 61 | return convol(A, B), 2**int(n-k) 62 | 63 | # returns the expected weights of visited codewords by Lee-Brickell-Babai 64 | def weights_LBB(profile, k1, w2): 65 | a,b = weights_LBB_absolute(profile, k1, w2) 66 | return [x/float(b) for x in a] 67 | 68 | # returns integer list a and integer b such that a/b = weights_LBB(profile,k1,w2) 69 | # prevents overflow 70 | def weights_LBB_absolute(profile, k1, w2): 71 | n = sum(profile) 72 | k = len(profile) 73 | A = weights_fundamental_domain(profile[:k1]) 74 | B = weights_ball(k - k1, w2) 75 | return convol(A, B), 2**int(sum(profile[:k1])-k1) 76 | 77 | 
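As a sanity check on `weights_LB`: Lee-Brickell over a systematic basis enumerates every information pattern with at most `w2` ones on the `k` information positions, and each resulting codeword is heuristically modeled as having uniform bits on the `n-k` redundancy positions, which is exactly the convolution computed above, normalized by `2^(n-k)`. A self-contained sketch under that heuristic (the helpers duplicate `comb`, `convol` and `weights_ball` from this file, so it can be run on its own):

``` python
import operator as op
from functools import reduce

def comb(n, r):
    # n choose r
    r = min(r, n - r)
    numer = reduce(op.mul, range(n, n - r, -1), 1)
    denom = reduce(op.mul, range(1, r + 1), 1)
    return numer // denom

def convol(A, B):
    # convolution of two measures supported on {0, 1, ..., len-1}
    C = [0] * (len(A) + len(B))
    for x in range(len(A)):
        for y in range(len(B)):
            C[x + y] += A[x] * B[y]
    return C

def weights_ball(n, r):
    # weight enumerator of a ball of radius r in F_2^n
    return [comb(n, i) for i in range(r + 1)]

def weights_LB(n, k, w2):
    # <= w2 ones on the k information positions, heuristically uniform
    # bits on the n-k redundancy positions: convolve, normalize by 2^(n-k)
    a = convol(weights_ball(n - k, n - k), weights_ball(k, w2))
    return [x / float(2 ** (n - k)) for x in a]
```

A quick consistency check: the total mass of `weights_LB(n, k, w2)` equals the number of enumerated information patterns, i.e. the volume of the radius-`w2` ball in `F_2^k`.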
--------------------------------------------------------------------------------