├── .gitignore
├── README.md
├── fastronWrapper
│   ├── fastron.cpp
│   ├── fastron.h
│   ├── fastronWrapper.cpp
│   ├── fastronWrapper.cpython-37m-x86_64-linux-gnu.so
│   ├── fastronWrapper.pyx
│   └── fastronWrapper_h.pxd
├── requirements.txt
└── setup.py

/.gitignore:
--------------------------------------------------------------------------------
# ignore test_env
fastronWrapper/test_env/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Python Implementation of Fastron

A Cython interface to the Fastron library with a rational quadratic kernel, for use in a Python environment. This implementation is based on the paper [Learning-Based Proxy Collision Detection for Robot Motion Planning Applications](https://ieeexplore.ieee.org/abstract/document/9023003). If you use this work in your research, please cite

    @article{das2020learning,
      title={Learning-based Proxy Collision Detection for Robot Motion Planning Applications},
      author={Das, Nikhil and Yip, Michael},
      journal={IEEE Transactions on Robotics},
      year={2020},
      publisher={IEEE}
    }


## Use Fastron in Python
The Cython wrapper is already set up, so Fastron can be imported directly.
Initialize the Fastron class with training data (a 2D NumPy array):

```python
import numpy as np
import fastronWrapper

...

fastron = fastronWrapper.PyFastron(data)
```

## Compile the Code

Before compiling the Fastron Cython code, make sure you have Eigency installed (Eigency bundles its own copy of the Eigen C++ library):
```bash
pip install eigency
```
If using Windows, you may need to install/update MS Visual C++ 14.0.

**NOTE: before compiling, please read the Issues section below.**

To compile, run:

```bash
python setup.py build_ext --inplace
```

## Issues
When setting up the Cython wrapper, compilation errors may occur on some Eigen functions. This is because Eigency, the bridge between NumPy and Eigen, ships its own copy of the Eigen library, and that copy is outdated. To fix this issue, replace the bundled copy with the latest Eigen release:

```bash
cd /tmp
curl -L -o eigen-3.3.7.tar.bz2 https://bitbucket.org/eigen/eigen/get/3.3.7.tar.bz2
tar -xvf eigen-3.3.7.tar.bz2

# The extracted directory must replace Eigency's bundled copy and keep the
# name eigen_3.2.8; adjust the site-packages path for your environment:
mv <extracted-eigen-directory> /lib/python3.7/site-packages/eigency/eigen_3.2.8
```

On Windows, replace Eigency's version of Eigen 3.2.8 (in site-packages) with Eigen 3.3.7 by selecting the contents of the Eigen 3.3.7 folder and pasting them into Eigency's Eigen folder.

## Usage
Import the fastronWrapper.pyx module (e.g., via pyximport) to allow instantiation of PyFastron objects.
```python
import pyximport
pyximport.install()

import numpy as np
from fastronWrapper.fastronWrapper import PyFastron
```

The PyFastron constructor takes only a dataset of configurations (as a NumPy matrix) as its argument. Data labels (as a NumPy array) and the kernel and model parameters can be set after construction.

The `updateModel()` method learns the weights for the model, and `eval()` predicts the collision status values with the trained model.
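For reference, the proxy check implemented in `fastron.cpp` scores a query configuration `x` against the stored support points `x_i` using a rational quadratic kernel:

    pred(x) = sign( sum_i alpha_i * k(x_i, x) ),    k(x_i, x) = 1 / (1 + (g/2) * ||x_i - x||^2)^2

where `g` is the kernel width and `alpha` holds the learned weights; `eval()` returns +1.0 for predicted in-collision and -1.0 for predicted collision-free.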

### Example
```python
# Initialize PyFastron
fastron = PyFastron(data) # where data.shape = (N, d)

fastron.y = y # labels in {-1, +1}; the wrapper expects a 2D float array, i.e. y.shape = (N, 1)

fastron.g = 10
fastron.maxUpdates = 10000
fastron.maxSupportPoints = 1500
fastron.beta = 100

# Train model
fastron.updateModel()

# Predict values for a test set
pred = fastron.eval(data_test) # where data_test.shape = (N_test, d)
```

## Requirements
* Python
* Cython
* NumPy
* Eigen
* Eigency

## Credits
* Mingwei Xu - [InspireX96](https://github.com/InspireX96)
* Nikhil Das - [nkhldas](https://github.com/nkhldas)
--------------------------------------------------------------------------------
/fastronWrapper/fastron.cpp:
--------------------------------------------------------------------------------
#include "fastron.h"

#include <random>
#include <chrono>

// return the indices of nonzero elements
template <typename T>
Eigen::ArrayXi find(T vec)
{
    Eigen::ArrayXi idx = Eigen::ArrayXi::Zero(vec.count());
    int ii = 0;
    for (int i = 0; i < vec.size(); ++i)
        if (vec(i))
            idx(ii++) = i;
    return idx;
}

// create m x n matrix of 0-mean, unit-variance normal distribution samples
Eigen::MatrixXd randn(int m, int n)
{
    Eigen::MatrixXd mat = Eigen::MatrixXd::Zero(m, n);

    static std::default_random_engine generator(std::chrono::system_clock::now().time_since_epoch().count());
    std::normal_distribution<double> distribution(0.0, 1.0);

    for (int j = 0; j < n; ++j)
        for (int i = 0; i < m; ++i)
            mat(i, j) = distribution(generator);
    return mat;
}

// keep only the selected rows, removing the rest
template <typename T>
void keepSelectRows(T *matPtr, Eigen::ArrayXi rowsToRetain)
{
    // Use pointer to prevent having to return a copy
    int ii = 0;
    for (int i = 0; i < rowsToRetain.size(); ++i)
        (*matPtr).row(ii++) = (*matPtr).row(rowsToRetain(i));
    (*matPtr).conservativeResize(ii, Eigen::NoChange);
}

// keep only the selected columns, removing the rest
template <typename T>
void keepSelectCols(T *matPtr, Eigen::ArrayXi colsToRetain)
{
    // Use pointer to prevent having to return a copy
    int ii = 0;
    for (int i = 0; i < colsToRetain.size(); ++i)
        (*matPtr).col(ii++) = (*matPtr).col(colsToRetain(i));
    (*matPtr).conservativeResize(Eigen::NoChange, ii);
}

// keep only the selected rows and columns; if shiftOnly, do not resize matrix
template <typename T>
void keepSelectRowsCols(T *matPtr, Eigen::ArrayXi rowsToRetain, Eigen::ArrayXi colsToRetain, bool shiftOnly)
{
    for (int j = 0; j < colsToRetain.size(); ++j)
        for (int i = 0; i < rowsToRetain.size(); ++i)
            (*matPtr)(i, j) = (*matPtr)(rowsToRetain(i), colsToRetain(j));
    if (!shiftOnly)
        (*matPtr).conservativeResize(rowsToRetain.size(), colsToRetain.size());
}

// Default constructor
Fastron::Fastron() {}

Fastron::Fastron(Eigen::MatrixXd input_data)
{
    data = input_data;

    N = data.rows();
    d = data.cols();

    // Initialization
    y = Eigen::ArrayXd::Zero(N);
    alpha = F = Eigen::VectorXd::Zero(N);
    gramComputed.setConstant(N, 0);
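    // The full N x N Gram matrix G (allocated below) is never filled up front:
    // computeGramMatrixCol() fills one column on demand during training, and
    // gramComputed(i) == 1 records that column i is valid.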
    G.resize(N, N);

    // std::cout << "C++ data:\n" << data << std::endl;
    // std::cout << "C++ G:\n" << G << std::endl;
    // std::cout << "C++ y:\n" << y << std::endl;
}

Fastron::~Fastron()
{
    alpha.resize(0);
    F.resize(0);
    y.resize(0);
    //gramComputed.resize(0);
    data.resize(0, 0);
    G.resize(0, 0);
}

void Fastron::computeGramMatrixCol(int idx, int startIdx = 0)
{
    // rational quadratic kernel: k = 1 / (1 + g/2 * r^2)^2
    Eigen::ArrayXd r2;
    r2.setOnes(N - startIdx);
    for (int j = 0; j < d; ++j)
        r2 += g / 2 * (data.block(startIdx, j, N - startIdx, 1).array() - data(idx, j)).square();
    G.block(startIdx, idx, N - startIdx, 1) = 1 / (r2 * r2);
    gramComputed(idx) = 1;
}

// largest margin that would remain if a support point's weight were removed;
// the corresponding index is written to *idx
double Fastron::calculateMarginRemoved(int *idx)
{
    double max = 0, removed;
    for (int i = 0; i < N; ++i)
        if (alpha(i))
        {
            removed = y(i) * (F(i) - alpha(i));
            if (removed > max)
            {
                max = removed;
                *idx = i;
            }
        }
    return max;
}

void Fastron::updateModel()
{
    Eigen::ArrayXd margin = y * F;

    int idx;

    double delta;

    for (int i = 0; i < maxUpdates; ++i)
    {
        margin = y * F;

        if (margin.minCoeff(&idx) <= 0)
        {
            if (!gramComputed(idx))
                computeGramMatrixCol(idx);
            delta = (y(idx) < 0 ? -1.0 : beta) - F(idx);
            if (alpha(idx)) // already a support point, doesn't hurt to modify it
            {
                alpha(idx) += delta;
                F += G.block(0, idx, N, 1).array() * delta;
                continue;
            }
            else if (numberSupportPoints < maxSupportPoints) // add a new support point
            {
                alpha(idx) = delta;
                F += G.block(0, idx, N, 1).array() * delta;
                ++numberSupportPoints;
                continue;
            }
            // else: a point needs correcting, but the support point limit prevents adding it
        }

        // Remove redundant points
        if (calculateMarginRemoved(&idx) > 0)
        {
            F -= G.block(0, idx, N, 1).array() * alpha(idx);
            alpha(idx) = 0;
            margin = y * F;
            --numberSupportPoints;
            continue;
        }

        // No update was possible: either every margin is positive (converged),
        // or a misclassified point could not be fixed at the support point limit
        if (margin.minCoeff() <= 0)
        {
            std::cout << "Fail: Hit support point limit in " << std::setw(4) << i << " iterations!" << std::endl;
            sparsify();
            return;
        }
        else
        {
            std::cout << "Success: Model update complete in " << std::setw(4) << i << " iterations!" << std::endl;
            sparsify();
            return;
        }
    }

    std::cout << "Failed to converge after " << maxUpdates << " iterations!" << std::endl;
    sparsify();
    return;
}

void Fastron::sparsify()
{
    // keep only the support points (nonzero weights)
    Eigen::ArrayXi retainIdx = find(alpha);

    N = retainIdx.size();
    numberSupportPoints = N;

    // sparsify model
    keepSelectRows(&data, retainIdx);

    keepSelectRows(&alpha, retainIdx);
    keepSelectRows(&gramComputed, retainIdx);

    keepSelectRowsCols(&G, retainIdx, retainIdx, true);

    // sparsify arrays needed for updating
    keepSelectRows(&F, retainIdx);
    keepSelectRows(&y, retainIdx);
}

Eigen::ArrayXd Fastron::eval(Eigen::MatrixXd *query_points)
{
    // returns -1.0 for collision-free, otherwise +1.0
    Eigen::ArrayXd acc(query_points->rows());
    Eigen::ArrayXd temp(N);

    for (int i = 0; i < query_points->rows(); ++i)
    {
        // rat. quad. kernel; loop unrolling is faster.
        switch (d)
        {
        case 7:
            temp = 2.0 / g + (data.col(0).array() - (*query_points)(i, 0)).square() + (data.col(1).array() - (*query_points)(i, 1)).square() + (data.col(2).array() - (*query_points)(i, 2)).square() + (data.col(3).array() - (*query_points)(i, 3)).square() + (data.col(4).array() - (*query_points)(i, 4)).square() + (data.col(5).array() - (*query_points)(i, 5)).square() + (data.col(6).array() - (*query_points)(i, 6)).square();
            break;
        case 6:
            temp = 2.0 / g + (data.col(0).array() - (*query_points)(i, 0)).square() + (data.col(1).array() - (*query_points)(i, 1)).square() + (data.col(2).array() - (*query_points)(i, 2)).square() + (data.col(3).array() - (*query_points)(i, 3)).square() + (data.col(4).array() - (*query_points)(i, 4)).square() + (data.col(5).array() - (*query_points)(i, 5)).square();
            break;
        case 5:
            temp = 2.0 / g + (data.col(0).array() - (*query_points)(i, 0)).square() + (data.col(1).array() - (*query_points)(i, 1)).square() + (data.col(2).array() - (*query_points)(i, 2)).square() + (data.col(3).array() - (*query_points)(i, 3)).square() + (data.col(4).array() - (*query_points)(i, 4)).square();
            break;
        case 4:
            temp = 2.0 / g + (data.col(0).array() - (*query_points)(i, 0)).square() + (data.col(1).array() - (*query_points)(i, 1)).square() + (data.col(2).array() - (*query_points)(i, 2)).square() + (data.col(3).array() - (*query_points)(i, 3)).square();
            break;
        default:
            temp = 2.0 / g + (data.col(0).array() - (*query_points)(i, 0)).square();
            for (int j = 1; j < d; ++j)
                temp += (data.col(j).array() - (*query_points)(i, j)).square();
        }
        acc(i) = (alpha / (temp * temp)).sum();
    }

    temp.resize(0);
    return acc.sign();
}

// Overload
Eigen::ArrayXd Fastron::eval(Eigen::MatrixXd query_points)
{
    // returns -1.0 for collision-free, otherwise +1.0
    Eigen::ArrayXd acc(query_points.rows());
    Eigen::ArrayXd temp(N);

    for (int i = 0; i < query_points.rows(); ++i)
    {
        // rat. quad. kernel; loop unrolling is faster.
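        // temp(k) = 2/g + ||x_k - q_i||^2 over all support points x_k, so
        // (alpha / (temp * temp)).sum() below is proportional to
        // sum_k alpha_k * k(x_k, q_i) under the rational quadratic kernel;
        // only the sign of the sum is used.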
        switch (d)
        {
        case 7:
            temp = 2.0 / g + (data.col(0).array() - (query_points)(i, 0)).square() + (data.col(1).array() - (query_points)(i, 1)).square() + (data.col(2).array() - (query_points)(i, 2)).square() + (data.col(3).array() - (query_points)(i, 3)).square() + (data.col(4).array() - (query_points)(i, 4)).square() + (data.col(5).array() - (query_points)(i, 5)).square() + (data.col(6).array() - (query_points)(i, 6)).square();
            break;
        case 6:
            temp = 2.0 / g + (data.col(0).array() - (query_points)(i, 0)).square() + (data.col(1).array() - (query_points)(i, 1)).square() + (data.col(2).array() - (query_points)(i, 2)).square() + (data.col(3).array() - (query_points)(i, 3)).square() + (data.col(4).array() - (query_points)(i, 4)).square() + (data.col(5).array() - (query_points)(i, 5)).square();
            break;
        case 5:
            temp = 2.0 / g + (data.col(0).array() - (query_points)(i, 0)).square() + (data.col(1).array() - (query_points)(i, 1)).square() + (data.col(2).array() - (query_points)(i, 2)).square() + (data.col(3).array() - (query_points)(i, 3)).square() + (data.col(4).array() - (query_points)(i, 4)).square();
            break;
        case 4:
            temp = 2.0 / g + (data.col(0).array() - (query_points)(i, 0)).square() + (data.col(1).array() - (query_points)(i, 1)).square() + (data.col(2).array() - (query_points)(i, 2)).square() + (data.col(3).array() - (query_points)(i, 3)).square();
            break;
        default:
            temp = 2.0 / g + (data.col(0).array() - (query_points)(i, 0)).square();
            for (int j = 1; j < d; ++j)
                temp += (data.col(j).array() - (query_points)(i, j)).square();
        }
        acc(i) = (alpha / (temp * temp)).sum();
    }

    temp.resize(0);
    return acc.sign();
}
/*
void Fastron::activeLearning()
{
    int N_prev = N;
    N += allowance;

    y.conservativeResize(N);
    y.tail(allowance) = 0;

    // make room for new data
    data.conservativeResize(N, Eigen::NoChange);

    // START ACTIVE LEARNING
    std::cout << "Before active learning\n" << data << std::endl;
    // copy support points as many times as possible
    int k;
    if (allowance / N_prev)
    {
        // Exploitation
        for (k = 0; k < std::min(kNS, allowance / N_prev); ++k)
            data.block((k + 1) * N_prev, 0, N_prev, d) = (data.block(0, 0, N_prev, d) + sigma * randn(N_prev, d)).cwiseMin(1.0).cwiseMax(-1.0);

        // Exploration
        data.bottomRows(allowance - k * N_prev) = Eigen::MatrixXd::Random(allowance - k * N_prev, d);
    }
    else
    {
        std::vector<int> idx;
        for (int i = 0; i < N_prev; ++i)
            idx.push_back(i);
        std::random_shuffle(idx.begin(), idx.end());

        for (int i = 0; i < allowance; i++)
            data.row(i + N_prev) = (data.row(idx[i]) + sigma * randn(1, d)).cwiseMin(1.0).cwiseMax(-1.0);
    }
    // END ACTIVE LEARNING
    std::cout << "After active learning\n" << data << std::endl;
    // Update Gram matrix
    if (G.cols() < N)
        G.conservativeResize(N, N);
    gramComputed.conservativeResize(N);
    gramComputed.tail(allowance) = 0;

    // Update hypothesis vector and Gram matrix
    F.conservativeResize(N);
    Eigen::ArrayXi idx = find(alpha);
    for (int i = 0; i < idx.size(); ++i)
        computeGramMatrixCol(idx(i), N_prev); // this is the slowest part of this function

    F.tail(allowance) = (G.block(N_prev, 0, allowance, N_prev) * alpha.matrix()).array();

    alpha.conservativeResize(N);
    alpha.tail(allowance) = 0;
}
*/

// overload of activeLearning() that returns a double so it can be called through the Cython wrapper
double Fastron::activeLearning()
{
    int N_prev = N;
    N += allowance;

    y.conservativeResize(N);
    y.tail(allowance) = 0;

    // make room for new data
    data.conservativeResize(N, Eigen::NoChange);

    // START ACTIVE LEARNING
    // copy support points as many times as possible
    int k;
    if (allowance / N_prev)
    {
        // Exploitation: perturb each support point with Gaussian noise, clamped to [-1, 1]
        for (k = 0; k < std::min(kNS, allowance / N_prev); ++k)
            data.block((k + 1) * N_prev, 0, N_prev, d) = (data.block(0, 0, N_prev, d) + sigma * randn(N_prev, d)).cwiseMin(1.0).cwiseMax(-1.0);

        // Exploration: fill the rest with uniform random samples
        data.bottomRows(allowance - k * N_prev) = Eigen::MatrixXd::Random(allowance - k * N_prev, d);
    }
    else
    {
        std::vector<int> idx;
        for (int i = 0; i < N_prev; ++i)
            idx.push_back(i);
        std::random_shuffle(idx.begin(), idx.end());

        for (int i = 0; i < allowance; i++)
            data.row(i + N_prev) = (data.row(idx[i]) + sigma * randn(1, d)).cwiseMin(1.0).cwiseMax(-1.0);
    }
    // END ACTIVE LEARNING

    // Update Gram matrix
    if (G.cols() < N)
        G.conservativeResize(N, N);
    gramComputed.conservativeResize(N);
    gramComputed.tail(allowance) = 0;

    // Update hypothesis vector and Gram matrix
    F.conservativeResize(N);
    Eigen::ArrayXi idx = find(alpha);
    for (int i = 0; i < idx.size(); ++i)
        computeGramMatrixCol(idx(i), N_prev); // this is the slowest part of this function

    F.tail(allowance) = (G.block(N_prev, 0, allowance, N_prev) * alpha.matrix()).array();

    alpha.conservativeResize(N);
    alpha.tail(allowance) = 0;
    return 1;
}


double Fastron::kcd(Eigen::RowVectorXd query_point, int colDetector = 0)
{
    std::cerr << "Error: No kinematics collision detector defined! Labeling as in-collision by default!" << std::endl;
    return 1;
}

void Fastron::updateLabels(int colDetector = 0)
{
    for (int i = 0; i < N; ++i)
        y(i) = kcd(data.row(i), colDetector);
}

void Fastron::updateLabels()
{
    for (int i = 0; i < N; ++i)
        y(i) = kcd(data.row(i));
}

// overload of updateLabels that takes precomputed labels instead of calling the virtual kcd()
void Fastron::updateLabels(Eigen::ArrayXd yKcd)
{
    for (int i = 0; i < N; ++i)
        y(i) = yKcd(i);
}
--------------------------------------------------------------------------------
/fastronWrapper/fastron.h:
--------------------------------------------------------------------------------
#include <iostream>
#include <iomanip>
#include <vector>
#include <algorithm>

#include <Eigen/Dense>

#ifndef FASTRON_H
#define FASTRON_H

// Helper functions
template <typename T>
Eigen::ArrayXi find(T vec);

template <typename T>
void keepSelectRows(T *matPtr, Eigen::ArrayXi rowsToRetain);

template <typename T>
void keepSelectCols(T *matPtr, Eigen::ArrayXi colsToRetain);

template <typename T>
void keepSelectRowsCols(T *matPtr, Eigen::ArrayXi rowsToRetain, Eigen::ArrayXi colsToRetain, bool shiftOnly = false);

Eigen::MatrixXd randn(int m, int n);

// Fastron class. When using this class, use the following pipeline:
//
//             +----------------+     +---------------+
//   data ---->+ updateLabels() +---->+ updateModel() +----> model (eval())
//             +----------------+     +---------------+
//                 ^        +------------------+       |
//                 +--------+ activeLearning() +<------+
//                          +------------------+
class Fastron {
public:
    // default constructor
    Fastron();

    // constructor. initialize using dataset
    Fastron(Eigen::MatrixXd data);

    // destructor
    ~Fastron();

    // model update parameters: gamma (kernel width), beta (conditional bias)
    double g = 10, beta = 1;

    // max update iterations, max number of support points
    int maxUpdates = 1000, maxSupportPoints = 0;

    // Gram matrix and dataset of configurations
    Eigen::MatrixXd G, data;

    // number of datapoints and dimensionality
    int N, d;

    // weights, hypothesis, and true labels
    Eigen::ArrayXd alpha, F, y;

    // functions and variables for model update
    void updateModel();
    double calculateMarginRemoved(int *idx);
    Eigen::ArrayXi gramComputed;
    void computeGramMatrixCol(int idx, int startIdx);
    void sparsify();

    // perform proxy check
    Eigen::ArrayXd eval(Eigen::MatrixXd *query_points);
    Eigen::ArrayXd eval(Eigen::MatrixXd query_points); // overload

    // active learning parameters: allowance (number of new samples), kNS (number of points near supports), sigma (Gaussian sampling std), exploitP (proportion of exploitation samples)
    int allowance = 800, kNS = 4;
    double sigma, exploitP = 0.5;
    // void activeLearning();
    double activeLearning(); // overload

    // kinematics-based collision detector. Not defined here, as its implementation depends on the robot/environment
    virtual double kcd(Eigen::RowVectorXd query_point, int colDetector);

    // update all labels in y
    void updateLabels();
    void updateLabels(int colDetector);
    // overload
    void updateLabels(Eigen::ArrayXd yKcd);
private:
    // count of points with nonzero weights
    int numberSupportPoints = 0;
};

#endif
--------------------------------------------------------------------------------
/fastronWrapper/fastronWrapper.cpython-37m-x86_64-linux-gnu.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ucsdarclab/fastron_python/9aa14b5de5fa7b61dd8849e2ddb0776f519d0dbf/fastronWrapper/fastronWrapper.cpython-37m-x86_64-linux-gnu.so
--------------------------------------------------------------------------------
/fastronWrapper/fastronWrapper.pyx:
--------------------------------------------------------------------------------
# distutils: language = c++
# cython: language_level=3
import numpy
cimport numpy as np
from fastronWrapper_h cimport Fastron
from eigency.core cimport *

from cpython.ref cimport PyObject

cdef class PyFastron:
    cdef Fastron c_fastron

    def __cinit__(self, np.ndarray[np.float64_t, ndim=2] data):
        # type matters: must be a float64 array; copied to Fortran order for Eigen
        #self.c_fastron = Fastron()
        #self.c_fastron = Fastron(Map[MatrixXd](data))
        self.c_fastron = Fastron(Map[MatrixXd](numpy.array(data, order='F')))

    # def __dealloc__(self):
    #     del self.c_fastron


    # Attribute access
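    # Each property below mirrors a public field of the C++ Fastron object;
    # array-valued setters copy the NumPy input into Eigen storage via eigency.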
    # model update parameters: gamma (kernel width), beta (conditional bias)
    @property
    def g(self):
        return self.c_fastron.g
    @g.setter
    def g(self, g):
        self.c_fastron.g = g

    @property
    def beta(self):
        return self.c_fastron.beta
    @beta.setter
    def beta(self, beta):
        self.c_fastron.beta = beta


    # max update iterations, max number of support points
    @property
    def maxUpdates(self):
        return self.c_fastron.maxUpdates
    @maxUpdates.setter
    def maxUpdates(self, maxUpdates):
        assert type(maxUpdates) == int
        self.c_fastron.maxUpdates = maxUpdates

    @property
    def maxSupportPoints(self):
        return self.c_fastron.maxSupportPoints
    @maxSupportPoints.setter
    def maxSupportPoints(self, maxSupportPoints):
        assert type(maxSupportPoints) == int
        self.c_fastron.maxSupportPoints = maxSupportPoints

    # Gram matrix and dataset of configurations
    @property
    def G(self):
        return ndarray(self.c_fastron.G)
    @G.setter
    def G(self, np.ndarray[np.float64_t, ndim=2] G):
        # type matters: must be a float64 array
        self.c_fastron.G = Map[MatrixXd](numpy.array(G, order='F'))

    @property
    def data(self):
        return ndarray(self.c_fastron.data)
    @data.setter
    def data(self, np.ndarray[np.float64_t, ndim=2] data):
        # type matters: must be a float64 array
        self.c_fastron.data = Map[MatrixXd](numpy.array(data, order='F'))

    # number of datapoints and dimensionality
    @property
    def N(self):
        return self.c_fastron.N
    @N.setter
    def N(self, N):
        assert type(N) == int
        self.c_fastron.N = N

    @property
    def d(self):
        return self.c_fastron.d
    @d.setter
    def d(self, d):
        assert type(d) == int
        self.c_fastron.d = d

    # weights, hypothesis, and true labels
    @property
    def alpha(self):
        return ndarray(self.c_fastron.alpha)
    @alpha.setter
    def alpha(self, np.ndarray[np.float64_t, ndim=2] alpha):
        # type matters: must be a float64 array
        self.c_fastron.alpha = Map[ArrayXd](alpha)

    @property
    def F(self):
        return ndarray(self.c_fastron.F)
    @F.setter
    def F(self, np.ndarray[np.float64_t, ndim=2] F):
        # type matters: must be a float64 array
        self.c_fastron.F = Map[ArrayXd](F)

    @property
    def y(self):
        return ndarray(self.c_fastron.y)
    @y.setter
    def y(self, np.ndarray[np.float64_t, ndim=2] y):
        # type matters: must be a float64 array
        self.c_fastron.y = Map[ArrayXd](y)


    # functions and variables for model update
    def updateModel(self):
        self.c_fastron.updateModel()

    # def calculateMarginRemoved(self, idx):
    #     cdef int* idx_ptr = idx
    #     #cdef double max
    #     self.c_fastron.calculateMarginRemoved(idx_ptr)
    #     #return max
    # def computeGramMatrixCol(self, int idx, int startIdx):
    #     self.c_fastron.computeGramMatrixCol(idx, startIdx)

    # If compilation fails with:
    #   fastronWrapper/fastron.cpp:244:16: error: 'Eigen::ArrayXd {aka class Eigen::Array}' has no member named 'sign'; did you mean 'sin'?
    #     return acc.sign();
    # put the latest Eigen under eigency (see the README Issues section).

    # # perform proxy check
    # # TODO: the pointer version does not work
    # def eval(self, np.ndarray[np.float64_t, ndim=2] query_points):
    #     cdef Map[MatrixXd]* ptr_query_points
    #     ptr_query_points = (&Map[MatrixXd](query_points))
    #     # cdef PyObject* ptr_query_points = Map[MatrixXd](query_points)
    #     return ndarray(self.c_fastron.eval(ptr_query_points))

    def eval(self, np.ndarray[np.float64_t, ndim=2] query_points):
        return ndarray(self.c_fastron.eval(Map[MatrixXd](numpy.array(query_points, order='F'))))

    # active learning parameters: allowance (number of new samples), kNS (number of points near supports), sigma (Gaussian sampling std), exploitP (proportion of exploitation samples)
    @property
    def allowance(self):
        return self.c_fastron.allowance
    @allowance.setter
    def allowance(self, allowance):
        assert type(allowance) == int
        self.c_fastron.allowance = allowance

    @property
    def kNS(self):
        return self.c_fastron.kNS
    @kNS.setter
    def kNS(self, kNS):
        assert type(kNS) == int
        self.c_fastron.kNS = kNS

    @property
    def sigma(self):
        return self.c_fastron.sigma
    @sigma.setter
    def sigma(self, sigma):
        self.c_fastron.sigma = sigma

    @property
    def exploitP(self):
        return self.c_fastron.exploitP
    @exploitP.setter
    def exploitP(self, exploitP):
        self.c_fastron.exploitP = exploitP

    def activeLearning(self):
        # self.c_fastron.activeLearning()
        return self.c_fastron.activeLearning()

    # kinematics-based collision detector:
    # the virtual function cannot be overridden from here, so call kcd externally

    # update all labels in y
    def updateLabels(self, np.ndarray[np.float64_t, ndim=2] yKcd):
        self.c_fastron.updateLabels(Map[ArrayXd](yKcd))
--------------------------------------------------------------------------------
/fastronWrapper/fastronWrapper_h.pxd:
--------------------------------------------------------------------------------
# distutils: language = c++
# cython: language_level=3

from eigency.core cimport *

from cpython.ref cimport PyObject

# cdef extern from "../fastron/src/fastron.cpp":
cdef extern from "fastron.cpp":
    pass

# Declare the class with cdef
# cdef extern from "../fastron/include/fastron.h":
cdef extern from "fastron.h":
    cdef cppclass Fastron:
        # default constructor
        Fastron() except +  # must have a nullary constructor to be stack allocated

        # overload constructor
        Fastron(Map[MatrixXd]) except +

        # model update parameters: gamma (kernel width), beta (conditional bias)
        double g, beta  # cannot assign default values to fields in cdef classes

        # max update iterations, max number of support points
        int maxUpdates, maxSupportPoints

        # Gram matrix and dataset of configurations
        # TODO
        Map[MatrixXd] G, data

        # number of datapoints and dimensionality
        int N, d

        # weights, hypothesis, and true labels
        Map[ArrayXd] alpha, F, y

        # functions and variables for model update
        void updateModel()
        # double calculateMarginRemoved(int *idx)
        # Map[ArrayXi] gramComputed
        # void computeGramMatrixCol(int idx, int startIdx)
        # void sparsify()

        # perform proxy check
        # Map[ArrayXd] eval(Map[MatrixXd] *ptr_query_points)  # does not work
        Map[ArrayXd] eval(Map[MatrixXd])  # overload

        # active learning parameters: allowance (number of new samples), kNS (number of points near supports), sigma (Gaussian sampling std), exploitP (proportion of exploitation samples)
        int allowance, kNS
        double sigma, exploitP
        # void activeLearning()
        double activeLearning()

        # kinematics-based collision detector:
        # the virtual function cannot be used through Cython, so call kcd externally

        # update all labels in y
        void updateLabels(Map[ArrayXd])
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
# required packages
Cython
numpy
eigency
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize

import eigency


extensions = [
    Extension("fastronWrapper.fastronWrapper", ["fastronWrapper/fastronWrapper.pyx"],
              include_dirs=[".", "fastronWrapper"] + eigency.get_includes()
              ),
]

dist = setup(
    ext_modules=cythonize(extensions),
)
--------------------------------------------------------------------------------