├── .dockerignore ├── .gitignore ├── CMakeLists.txt ├── Dockerfile ├── LICENSE ├── README.md ├── WORKSPACE ├── benchmark ├── CMakeLists.txt ├── benchmark_add_operation ├── benchmark_add_operation.cpp ├── benchmark_linear_regression.cpp ├── benchmark_liner_regression ├── benchmark_multiple_operation └── benchmark_multiple_operation.cpp ├── hplearn ├── BUILD ├── CMakeLists.txt ├── graph.cpp ├── graph.h ├── main.cpp ├── op.cpp ├── op.h ├── optimizer.cpp ├── optimizer.h ├── session.cpp └── session.h ├── images ├── benchmark_add_operation.png ├── benchmark_linear_regression.png └── benchmark_multiple_operation.png └── models └── README.md /.dockerignore: -------------------------------------------------------------------------------- 1 | # Copyright 2017 The Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | .git/ 16 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Copyright 2017 The Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # IDE 16 | .idea/ 17 | 18 | # Prerequisites 19 | *.d 20 | 21 | # Compiled Object files 22 | *.slo 23 | *.lo 24 | *.o 25 | *.obj 26 | 27 | # Precompiled Headers 28 | *.gch 29 | *.pch 30 | 31 | # Compiled Dynamic libraries 32 | *.so 33 | *.dylib 34 | *.dll 35 | 36 | # Fortran module files 37 | *.mod 38 | *.smod 39 | 40 | # Compiled Static libraries 41 | *.lai 42 | *.la 43 | *.a 44 | *.lib 45 | 46 | # Executables 47 | *.exe 48 | *.out 49 | *.app 50 | -------------------------------------------------------------------------------- /CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.7) 2 | project(hplearn) 3 | 4 | set(CMAKE_CXX_STANDARD 11) 5 | 6 | # set(SOURCE_FILES src/CMakeLists.txt) 7 | # add_executable(hplearn ${SOURCE_FILES}) 8 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:17.04 2 | 3 | RUN apt-get update -y 4 | 5 | # Install dependencies 6 | RUN apt-get install -y g++ 7 | RUN apt-get install -y cmake 8 | 9 | # Add source code 10 | ADD . /root/hplearn 11 | 12 | WORKDIR /root/hplearn 13 | 14 | CMD bash 15 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # HPLearn 2 | 3 | ## Introduction 4 | 5 | HPLearn is the high performance machine learning system in pure C++. 6 | 7 | - [x] Basic mathematics operations 8 | - [x] Automatic partial derivative with chain rule 9 | - [x] Imperative and declarative computations 10 | - [x] Override +-*/ operations for graph building 11 | - [x] CMake integration for Linux/Mac/Windows 12 | - [x] Bazel integration for build and run 13 | - [x] Op/Graph/Session TensorFlow-like APIs 14 | - [ ] GPU integration with Nividia CUDA library 15 | 16 | ## Installation 17 | 18 | Build from scratch with `cmake`. 19 | 20 | ``` 21 | cmake . 22 | 23 | make 24 | ``` 25 | 26 | Or compile code with `bazel`. 27 | 28 | ``` 29 | bazel build hplearn:main 30 | ``` 31 | 32 | Or run directly with `docker`. 33 | 34 | ``` 35 | docker run -it tobegit3hub/hplearn bash 36 | ``` 37 | 38 | ## Usage 39 | 40 | ### Import hplearn libraries 41 | 42 | ``` 43 | #include "op.h" 44 | #include "graph.h" 45 | #include "session.h" 46 | #include "optimizer.h" 47 | 48 | using namespace hplearn; 49 | ``` 50 | 51 | ### Basic operations 52 | 53 | ``` 54 | VariableOp* firstOp = new VariableOp(20.2); 55 | VariableOp* secondOp = new VariableOp(10.1); 56 | AddOp* addOp = new AddOp(firstOp, secondOp); 57 | 58 | cout << "Basic operation result: " << addOp->forward() << endl; 59 | ``` 60 | 61 | ### Overrided operations 62 | 63 | ``` 64 | Graph* graph = new Graph(); 65 | VariableOp* variableOp1 = new VariableOp(20.2); 66 | VariableOp* variableOp2 = new VariableOp(10.1); 67 | AddOp* addOp = (AddOp*) (*variableOp1 + *variableOp2); 68 | MinusOp* minusOp = (MinusOp*) (*variableOp1 - *variableOp2); 69 | MultipleOp* multipleOp = (MultipleOp*) (*variableOp1 * *variableOp2); 70 | DivideOp* divideOp = (DivideOp*) (*variableOp1 / *variableOp2); 71 | 72 | Session* session = new Session(graph); 73 | cout << "Overrided + operator result: " << to_string(session->run(addOp->getName())) << endl; 74 | cout << "Overrided - operator result: " << to_string(session->run(minusOp->getName())) << endl; 75 | cout << "Overrided * operator result: " << to_string(session->run(multipleOp->getName())) << endl; 76 | cout << "Overrided / operator result: " << to_string(session->run(divideOp->getName())) << endl; 77 | ``` 78 | 79 | ### Use placeholder 80 | 81 | ``` 82 | Graph* graph = new Graph(); 83 | PlaceholderOp* placeholderOp1 = new PlaceholderOp(); 84 | PlaceholderOp* placeholderOp2 = new PlaceholderOp(); 85 | AddOp* addOp = new AddOp(placeholderOp1, placeholderOp2); 86 | 87 | Session* session = new Session(graph); 88 | map feedDict; 89 | feedDict[placeholderOp1->getName()] = 20.2; 90 | feedDict[placeholderOp2->getName()] = 10.1; 91 | double result = session->run(addOp->getName(), feedDict); 92 | cout << "Use placeholder result: " << to_string(result) << endl; 93 | ``` 94 | 95 | ### Linear model 96 | 97 | ``` 98 | // Define graph 99 | Graph* graph = new Graph(); 100 | VariableOp* weights = new VariableOp(0.0); 101 | VariableOp* bias = new VariableOp(0.0); 102 | PlaceholderOp* x = new PlaceholderOp(); 103 | PlaceholderOp* y = new PlaceholderOp(); 104 | 105 | Op* multipleOp = *weights * *x; 106 | Op* predictOp = *multipleOp + *bias; 107 | Op* minusOp = *y - *predictOp; 108 | SquareOp* lossOp = new SquareOp(minusOp); 109 | GradientDescentOptimizer* optimizer = new GradientDescentOptimizer(graph, learningRate); 110 | OptimizerMinimizeOp* trainOp = (OptimizerMinimizeOp*) optimizer->minimize(lossOp); 111 | 112 | // Define session 113 | Session* sess = new Session(graph); 114 | map feedDict; 115 | 116 | for (int i=0; igetName()] = feature; 119 | feedDict[y->getName()] = label; 120 | sess->run(trainOp->getName(), feedDict); 121 | 122 | // Print statistics 123 | double lossValue = sess->run(lossOp->getName(), feedDict); 124 | double weightValue = sess->run(weights->getName()); 125 | double biasValue = sess->run(bias->getName()); 126 | cout << "Epoch: " << to_string(i) << ", loss: " << to_string(lossValue) << ", weight: " 127 | << to_string(weightValue) << ", bias: " << to_string(biasValue) << endl; 128 | ``` 129 | 130 | ## Performance 131 | 132 | Benchmark with [TensorFlow](https://github.com/tensorflow/tensorflow) while running 100000 epochs of [add operation](./benchmark/benchmark_add_operation.cpp), [multiple operation](./benchmark/benchmark_multiple_operation.cpp) and [linear regression](benchmark/benchmark_linear_regression.cpp). 133 | 134 | ![](./images/benchmark_add_operation.png) 135 | 136 | ![](./images/benchmark_multiple_operation.png) 137 | 138 | ![](./images/benchmark_linear_regression.png) 139 | 140 | ## Contribution 141 | 142 | The HPLearn project is mostly inspired by [TensorFlow](https://github.com/tensorflow/tensorflow) and [MiniFlow](https://github.com/tobegit3hub/miniflow). 143 | 144 | GitHub issues and pull-requests are highly appreciated and feel free to make your contribution. 145 | -------------------------------------------------------------------------------- /WORKSPACE: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/WORKSPACE -------------------------------------------------------------------------------- /benchmark/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | # Copyright 2017 The Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | cmake_minimum_required(VERSION 3.7) 16 | project(hplearn) 17 | 18 | set(CMAKE_CXX_STANDARD 11) 19 | 20 | set(SOURCE_FILES ../hplearn/op.h ../hplearn/op.cpp ../hplearn/graph.h ../hplearn/graph.cpp ../hplearn/session.h ../hplearn/session.cpp ../hplearn/optimizer.h ../hplearn/optimizer.cpp) 21 | 22 | add_executable(benchmark_linear_regression benchmark_linear_regression.cpp ${SOURCE_FILES}) 23 | 24 | add_executable(benchmark_add_operation benchmark_add_operation.cpp ${SOURCE_FILES}) 25 | 26 | add_executable(benchmark_multiple_operation benchmark_multiple_operation.cpp ${SOURCE_FILES}) 27 | -------------------------------------------------------------------------------- /benchmark/benchmark_add_operation: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/benchmark/benchmark_add_operation -------------------------------------------------------------------------------- /benchmark/benchmark_add_operation.cpp: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #include 18 | 19 | #include "../hplearn/op.h" 20 | #include "../hplearn/graph.h" 21 | #include "../hplearn/session.h" 22 | #include "../hplearn/optimizer.h" 23 | 24 | using namespace std; 25 | using namespace hplearn; 26 | 27 | 28 | void benchmarkAddOperation() { 29 | int epochNumber = 1000000; 30 | 31 | // Create graph 32 | Graph* graph = new Graph(); 33 | ConstantOp* constantOp1 = new ConstantOp(10.0); 34 | ConstantOp* constantOp2 = new ConstantOp(32.0); 35 | AddOp* addOp = new AddOp(constantOp1, constantOp2); 36 | 37 | graph->addToGraph(constantOp1); 38 | graph->addToGraph(constantOp2); 39 | graph->addToGraph(addOp); 40 | 41 | // Create session 42 | Session* sess = new Session(graph); 43 | 44 | // Start training 45 | for (int i=0; irun(addOp->getName()); 47 | } 48 | } 49 | 50 | 51 | int main(int argc,char* argv[]) { 52 | cout << "Benchmark scenario: add operation, epoch: 1000000" << endl; 53 | benchmarkAddOperation(); 54 | return 0; 55 | } 56 | -------------------------------------------------------------------------------- /benchmark/benchmark_linear_regression.cpp: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #include 18 | #include 19 | #include 20 | 21 | #include "../hplearn/op.h" 22 | #include "../hplearn/graph.h" 23 | #include "../hplearn/session.h" 24 | #include "../hplearn/optimizer.h" 25 | 26 | using namespace std; 27 | using namespace hplearn; 28 | 29 | 30 | void benchmarkLinearRegression() { 31 | // Define train data 32 | double learningRate = 0.01; 33 | int epochNumber = 1000000; 34 | vector trainFeatureList = {1.0, 2.0, 3.0, 4.0, 5.0}; 35 | vector trainLabelList = {10.0, 20.0, 30.0, 40.0, 50.0}; 36 | int instanceNumber = trainFeatureList.size(); 37 | 38 | // Create graph 39 | Graph* graph = new Graph(); 40 | 41 | VariableOp* weights = new VariableOp(0.0); 42 | VariableOp* bias = new VariableOp(0.0); 43 | PlaceholderOp* x = new PlaceholderOp(); 44 | PlaceholderOp* y = new PlaceholderOp(); 45 | 46 | Op* multipleOp = *weights * *x; 47 | Op* predictOp = *multipleOp + *bias; 48 | Op* minusOp = *y - *predictOp; 49 | SquareOp* lossOp = new SquareOp(minusOp); 50 | GradientDescentOptimizer* optimizer = new GradientDescentOptimizer(graph, learningRate); 51 | OptimizerMinimizeOp* trainOp = (OptimizerMinimizeOp*) optimizer->minimize(lossOp); 52 | 53 | graph->addToGraph(weights); 54 | graph->addToGraph(bias); 55 | graph->addToGraph(x); 56 | graph->addToGraph(y); 57 | graph->addToGraph(multipleOp); 58 | graph->addToGraph(predictOp); 59 | graph->addToGraph(minusOp); 60 | graph->addToGraph(lossOp); 61 | graph->addToGraph(trainOp); 62 | 63 | // Create session 64 | Session* sess = new Session(graph); 65 | map feedDict; 66 | 67 | for (int i=0; igetName()] = feature; 74 | feedDict[y->getName()] = label; 75 | sess->run(trainOp->getName(), feedDict); 76 | } 77 | 78 | } 79 | 80 | 81 | int main(int argc,char* argv[]) { 82 | cout << "Benchmark scenario: linear regression, epoch: 1000000" << endl; 83 | benchmarkLinearRegression(); 84 | return 0; 85 | } 86 | -------------------------------------------------------------------------------- /benchmark/benchmark_liner_regression: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/benchmark/benchmark_liner_regression -------------------------------------------------------------------------------- /benchmark/benchmark_multiple_operation: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/benchmark/benchmark_multiple_operation -------------------------------------------------------------------------------- /benchmark/benchmark_multiple_operation.cpp: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #include 18 | 19 | #include "../hplearn/op.h" 20 | #include "../hplearn/graph.h" 21 | #include "../hplearn/session.h" 22 | #include "../hplearn/optimizer.h" 23 | 24 | using namespace std; 25 | using namespace hplearn; 26 | 27 | 28 | void benchmarkAddOperation() { 29 | int epochNumber = 1000000; 30 | 31 | // Create graph 32 | Graph* graph = new Graph(); 33 | ConstantOp* constantOp1 = new ConstantOp(10.0); 34 | ConstantOp* constantOp2 = new ConstantOp(32.0); 35 | MultipleOp* multipleOp = new MultipleOp(constantOp1, constantOp2); 36 | 37 | graph->addToGraph(constantOp1); 38 | graph->addToGraph(constantOp2); 39 | graph->addToGraph(multipleOp); 40 | 41 | // Create session 42 | Session* sess = new Session(graph); 43 | 44 | // Start training 45 | for (int i=0; irun(multipleOp->getName()); 47 | } 48 | } 49 | 50 | 51 | int main(int argc,char* argv[]) { 52 | cout << "Benchmark scenario: multiple operation, epoch: 1000000" << endl; 53 | benchmarkAddOperation(); 54 | return 0; 55 | } 56 | -------------------------------------------------------------------------------- /hplearn/BUILD: -------------------------------------------------------------------------------- 1 | 2 | cc_binary( 3 | name = "main", 4 | srcs = ["main.cpp", "op.h", "op.cpp", "graph.h", "graph.cpp", "session.h", "session.cpp", "optimizer.h", "optimizer.cpp"], 5 | deps = [] 6 | ) 7 | -------------------------------------------------------------------------------- /hplearn/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | # Copyright 2017 The Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | cmake_minimum_required(VERSION 3.7) 16 | project(hplearn) 17 | 18 | set(CMAKE_CXX_STANDARD 11) 19 | 20 | set(SOURCE_FILES main.cpp op.h op.cpp graph.h graph.cpp session.h session.cpp optimizer.h optimizer.cpp) 21 | add_executable(main ${SOURCE_FILES}) 22 | -------------------------------------------------------------------------------- /hplearn/graph.cpp: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #include 18 | #include 19 | 20 | #include "graph.h" 21 | 22 | namespace hplearn { 23 | 24 | 25 | Graph::Graph() { 26 | this->name = ""; 27 | } 28 | 29 | Graph::Graph(string name) : name(name) { 30 | 31 | } 32 | 33 | map Graph::getNameOpMap() { 34 | return this->nameOpMap; 35 | } 36 | 37 | map Graph::getTrainableNameOpMap() { 38 | return this->trainableNameOpMap; 39 | } 40 | 41 | 42 | string Graph::getUniqueName(string inputName) { 43 | string outputName = inputName; 44 | int index = 0; 45 | 46 | while (this->nameOpMap.count(outputName) > 0 ) { 47 | outputName = outputName + "_" + to_string(index); 48 | index += 1; 49 | } 50 | 51 | // cout << "Unique op name is " << outputName << endl; 52 | return outputName; 53 | } 54 | 55 | void Graph::addNameOpMap(string opName, Op* op) { 56 | this->nameOpMap[opName] = op; 57 | } 58 | 59 | void Graph::addTrainableNameOpMap(string opName, Op* variableOp) { 60 | this->trainableNameOpMap[opName] = variableOp; 61 | } 62 | 63 | void Graph::addToGraph(Op *op) { 64 | // Get unique name and set in the op 65 | string opName = this->getUniqueName(op->getName()); 66 | if (opName != op->getName()) { 67 | op->setName(opName); 68 | } 69 | 70 | // Add to the map 71 | this->addNameOpMap(opName, op); 72 | 73 | // Add to the trainable map 74 | if(VariableOp* variableOp = dynamic_cast(op)) { 75 | if (variableOp->getIsTrainable()) { 76 | this->addTrainableNameOpMap(opName, variableOp); 77 | } 78 | } 79 | } 80 | 81 | 82 | 83 | } // namespace hplearn 84 | -------------------------------------------------------------------------------- /hplearn/graph.h: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #ifndef HPLEARN_GRAPH_H 18 | #define HPLEARN_GRAPH_H 19 | 20 | #include 21 | #include 22 | 23 | #include "op.h" 24 | 25 | using namespace std; 26 | 27 | namespace hplearn { 28 | 29 | 30 | class Graph { 31 | private: 32 | string name; 33 | map nameOpMap; 34 | map trainableNameOpMap; 35 | 36 | public: 37 | Graph(); 38 | Graph(string name); 39 | map getNameOpMap(); 40 | void addNameOpMap(string opName, Op* op); 41 | map getTrainableNameOpMap(); 42 | void addTrainableNameOpMap(string opName, Op* variableOp); 43 | string getUniqueName(string inputName); 44 | void addToGraph(Op* op); 45 | 46 | // TODO: Add toString method 47 | }; 48 | 49 | 50 | 51 | } // namespace hplearn 52 | 53 | #endif //HPLEARN_GRAPH_H -------------------------------------------------------------------------------- /hplearn/main.cpp: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #include 18 | #include 19 | #include 20 | 21 | #include "op.h" 22 | #include "graph.h" 23 | #include "session.h" 24 | #include "optimizer.h" 25 | 26 | using namespace std; 27 | using namespace hplearn; 28 | 29 | 30 | void testOp() { 31 | // Test ConstantOp 32 | ConstantOp* constantOp = new ConstantOp(10.5); 33 | 34 | cout << "ConstantOp name: " << constantOp->getName() << endl; 35 | cout << "ConstantOp forward: " << constantOp->forward() << endl; 36 | cout << "ConstantOp backward: " << constantOp->backward() << endl; 37 | 38 | // Test PlaceholderOp 39 | PlaceholderOp* placeholderOp = new PlaceholderOp(); 40 | placeholderOp->setValue(10.5); 41 | 42 | cout << "PlaceholderOp name: " << placeholderOp->getName() << endl; 43 | cout << "PlaceholderOp forward: " << placeholderOp->forward() << endl; 44 | cout << "PlaceholderOp backward: " << placeholderOp->backward() << endl; 45 | 46 | // Test VariableOp 47 | VariableOp* variableOp = new VariableOp(10.5); 48 | 49 | cout << "VariableOp name: " << variableOp->getName() << endl; 50 | cout << "VariableOp forward: " << variableOp->forward() << endl; 51 | cout << "VariableOp backward: " << variableOp->backward() << endl; 52 | 53 | // Test PowerOp 54 | PowerOp* powerOp = new PowerOp(variableOp, 3); 55 | 56 | cout << "PowerOp name: " << powerOp->getName() << endl; 57 | cout << "PowerOp forward: " << powerOp->forward() << endl; 58 | cout << "PowerOp backward: " << powerOp->backward() << endl; 59 | 60 | powerOp = new PowerOp(10, 3); 61 | 62 | cout << "PowerOp name: " << powerOp->getName() << endl; 63 | cout << "PowerOp forward: " << powerOp->forward() << endl; 64 | cout << "PowerOp backward: " << powerOp->backward() << endl; 65 | 66 | // Test SquareOp 67 | SquareOp* squareOp = new SquareOp(variableOp); 68 | 69 | cout << "SquareOp name: " << squareOp->getName() << endl; 70 | cout << "SquareOp forward: " << squareOp->forward() << endl; 71 | cout << "SquareOp backward: " << squareOp->backward() << endl; 72 | 73 | // Test AddOp 74 | VariableOp* firstOp = new VariableOp(20.2); 75 | VariableOp* secondOp = new VariableOp(10.1); 76 | AddOp* addOp = new AddOp(firstOp, secondOp); 77 | 78 | cout << "AddOp name: " << addOp->getName() << endl; 79 | cout << "AddOp forward: " << addOp->forward() << endl; 80 | cout << "AddOp backward: " << addOp->backward() << endl; 81 | 82 | addOp = new AddOp(addOp, 100.0); 83 | 84 | cout << "AddOp name: " << addOp->getName() << endl; 85 | cout << "AddOp forward: " << addOp->forward() << endl; 86 | cout << "AddOp backward: " << addOp->backward() << endl; 87 | 88 | // Test MinusOp 89 | MinusOp* minusOp = new MinusOp(firstOp, secondOp); 90 | 91 | cout << "MinusOp name: " << minusOp->getName() << endl; 92 | cout << "MinusOp forward: " << minusOp->forward() << endl; 93 | cout << "MinusOp backward: " << minusOp->backward() << endl; 94 | 95 | // Test MultipleOp 96 | MultipleOp* multipleOp = new MultipleOp(firstOp, secondOp); 97 | 98 | cout << "MultipleOp name: " << multipleOp->getName() << endl; 99 | cout << "MultipleOp forward: " << multipleOp->forward() << endl; 100 | cout << "MultipleOp backward: " << multipleOp->backward() << endl; 101 | 102 | cout << "MultipleOp backward(first): " << multipleOp->backward(firstOp->getName()) << endl; 103 | cout << "MultipleOp backward(second): " << multipleOp->backward(secondOp->getName()) << endl; 104 | 105 | // Test DivideOp 106 | DivideOp* divideOp = new DivideOp(firstOp, secondOp); 107 | 108 | cout << "DivideOp name: " << divideOp->getName() << endl; 109 | cout << "DivideOp forward: " << divideOp->forward() << endl; 110 | cout << "DivideOp backward: " << divideOp->backward() << endl; 111 | 112 | cout << "DivideOp backward(first): " << divideOp->backward(firstOp->getName()) << endl; 113 | cout << "DivideOp backward(second): " << divideOp->backward(secondOp->getName()) << endl; 114 | 115 | } 116 | 117 | void testGraphAndSession() { 118 | // Create Graph 119 | Graph* graph = new Graph(); 120 | 121 | ConstantOp* constantOp1 = new ConstantOp(1.0); 122 | ConstantOp* constantOp2 = new ConstantOp(2.5); 123 | AddOp* addOp = new AddOp(constantOp1, constantOp2); 124 | 125 | graph->addToGraph(constantOp1); 126 | graph->addToGraph(constantOp2); 127 | graph->addToGraph(addOp); 128 | 129 | // Create session 130 | Session* session = new Session(graph); 131 | 132 | double result = session->run(addOp->getName()); 133 | cout << "Session run result: " << to_string(result) << endl; 134 | 135 | } 136 | 137 | void testPlaceholder() { 138 | // Create Graph 139 | Graph* graph = new Graph(); 140 | 141 | PlaceholderOp* placeholderOp1 = new PlaceholderOp(); 142 | PlaceholderOp* placeholderOp2 = new PlaceholderOp(); 143 | AddOp* addOp = new AddOp(placeholderOp1, placeholderOp2); 144 | 145 | graph->addToGraph(placeholderOp1); 146 | graph->addToGraph(placeholderOp2); 147 | graph->addToGraph(addOp); 148 | 149 | // Create session 150 | Session* session = new Session(graph); 151 | 152 | map feedDict; 153 | feedDict[placeholderOp1->getName()] = 10.1; 154 | feedDict[placeholderOp2->getName()] = 20.3; 155 | double result = session->run(addOp->getName(), feedDict); 156 | cout << "Run with placeholder result: " << to_string(result) << endl; 157 | } 158 | 159 | void testOptimizer() { 160 | // Create Graph 161 | Graph* graph = new Graph(); 162 | 163 | VariableOp* variableOp1 = new VariableOp(10.0); 164 | VariableOp* variableOp2 = new VariableOp(20.5); 165 | AddOp* addOp = new AddOp(variableOp1, variableOp2); 166 | Op* lossOp = addOp; 167 | 168 | GradientDescentOptimizer* optimizer = new GradientDescentOptimizer(graph); 169 | Op* trainOp = (OptimizerMinimizeOp*) optimizer->minimize(lossOp); 170 | 171 | graph->addToGraph(variableOp1); 172 | graph->addToGraph(variableOp2); 173 | graph->addToGraph(addOp); 174 | graph->addToGraph(trainOp); 175 | 176 | // Create session 177 | Session* session = new Session(graph); 178 | 179 | // Run training 180 | session->run(trainOp->getName()); 181 | double loss = session->run(lossOp->getName()); 182 | cout << "Loss is " << to_string(loss) << endl; 183 | 184 | session->run(trainOp->getName()); 185 | loss = session->run(lossOp->getName()); 186 | cout << "Loss is " << to_string(loss) << endl; 187 | 188 | } 189 | 190 | 191 | void testOverridedOperator() { 192 | // Create Graph 193 | Graph* graph = new Graph(); 194 | 195 | VariableOp* variableOp1 = new VariableOp(20.2); 196 | VariableOp* variableOp2 = new VariableOp(10.1); 197 | AddOp* addOp = (AddOp*) (*variableOp1 + *variableOp2); 198 | MinusOp* minusOp = (MinusOp*) (*variableOp1 - *variableOp2); 199 | MultipleOp* multipleOp = (MultipleOp*) (*variableOp1 * *variableOp2); 200 | DivideOp* divideOp = (DivideOp*) (*variableOp1 / *variableOp2); 201 | 202 | graph->addToGraph(variableOp1); 203 | graph->addToGraph(variableOp2); 204 | graph->addToGraph(addOp); 205 | graph->addToGraph(minusOp); 206 | graph->addToGraph(multipleOp); 207 | graph->addToGraph(divideOp); 208 | 209 | // Create session 210 | Session* session = new Session(graph); 211 | cout << "Overrided + operator result is " << to_string(session->run(addOp->getName())) << endl; 212 | cout << "Overrided - operator result is " << to_string(session->run(minusOp->getName())) << endl; 213 | cout << "Overrided * operator result is " << to_string(session->run(multipleOp->getName())) << endl; 214 | cout << "Overrided / operator result is " << to_string(session->run(divideOp->getName())) << endl; 215 | } 216 | 217 | 218 | void testLinearRegression() { 219 | // Define train data 220 | double learningRate = 0.01; 221 | int epochNumber = 20; 222 | vector trainFeatureList = {1.0, 2.0, 3.0, 4.0, 5.0}; 223 | vector trainLabelList = {10.0, 20.0, 30.0, 40.0, 50.0}; 224 | int instanceNumber = trainFeatureList.size(); 225 | 226 | // Create graph 227 | Graph* graph = new Graph(); 228 | 229 | VariableOp* weights = new VariableOp(0.0); 230 | VariableOp* bias = new VariableOp(0.0); 231 | PlaceholderOp* x = new PlaceholderOp(); 232 | PlaceholderOp* y = new PlaceholderOp(); 233 | 234 | Op* multipleOp = *weights * *x; 235 | Op* predictOp = *multipleOp + *bias; 236 | Op* minusOp = *y - *predictOp; 237 | SquareOp* lossOp = new SquareOp(minusOp); 238 | GradientDescentOptimizer* optimizer = new GradientDescentOptimizer(graph, learningRate); 239 | OptimizerMinimizeOp* trainOp = (OptimizerMinimizeOp*) optimizer->minimize(lossOp); 240 | 241 | graph->addToGraph(weights); 242 | graph->addToGraph(bias); 243 | graph->addToGraph(x); 244 | graph->addToGraph(y); 245 | graph->addToGraph(multipleOp); 246 | graph->addToGraph(predictOp); 247 | graph->addToGraph(minusOp); 248 | graph->addToGraph(lossOp); 249 | graph->addToGraph(trainOp); 250 | 251 | // Create session 252 | Session* sess = new Session(graph); 253 | map feedDict; 254 | 255 | for (int i=0; igetName()] = feature; 262 | feedDict[y->getName()] = label; 263 | sess->run(trainOp->getName(), feedDict); 264 | 265 | // Print loss and model 266 | double lossValue = sess->run(lossOp->getName(), feedDict); 267 | double weightValue = sess->run(weights->getName()); 268 | double biasValue = sess->run(bias->getName()); 269 | cout << "Epoch: " << to_string(i) << ", loss: " << to_string(lossValue) << ", weight: " 270 | << to_string(weightValue) << ", bias: " << to_string(biasValue) << endl; 271 | } 272 | 273 | } 274 | 275 | 276 | int main(int argc,char* argv[]) { 277 | cout<<"Start main"< 18 | 19 | #include "op.h" 20 | 21 | namespace hplearn { 22 | 23 | // Op 24 | Op::Op() : name("") { 25 | 26 | } 27 | 28 | Op::Op(string name) : name(name) { 29 | 30 | } 31 | 32 | string Op::getName() { 33 | return this->name; 34 | } 35 | 36 | void Op::setName(string name) { 37 | this->name = name; 38 | } 39 | 40 | Op* Op::operator+(Op& op) { 41 | return new AddOp(this, &op); 42 | } 43 | 44 | Op* Op::operator-(Op& op) { 45 | return new MinusOp(this, &op); 46 | } 47 | 48 | Op* Op::operator*(Op& op) { 49 | return new MultipleOp(this, &op); 50 | } 51 | 52 | Op* Op::operator/(Op& op) { 53 | return new DivideOp(this, &op); 54 | } 55 | 56 | // ConstantOp 57 | ConstantOp::ConstantOp() : Op("ConstantOp") { 58 | 59 | } 60 | 61 | ConstantOp::ConstantOp(double value) : Op("ConstantOp"), value(value) { 62 | 63 | } 64 | 65 | double ConstantOp::getValue() { 66 | return this->value; 67 | } 68 | 69 | void ConstantOp::setValue(double value) { 70 | this->value = value; 71 | } 72 | 73 | double ConstantOp::forward() { 74 | return this->getValue(); 75 | } 76 | 77 | double ConstantOp::backward(string partialDerivativeOpname) { 78 | return 0; 79 | } 80 | 81 | 82 | // PlaceholderOp 83 | PlaceholderOp::PlaceholderOp() : Op("PlaceholderOp") { 84 | 85 | } 86 | 87 | PlaceholderOp::PlaceholderOp(double value) : Op("PlaceholderOp"), value(value) { 88 | 89 | } 90 | 91 | double PlaceholderOp::getValue() { 92 | return this->value; 93 | } 94 | 95 | void PlaceholderOp::setValue(double value) { 96 | this->value = value; 97 | } 98 | 99 | double PlaceholderOp::forward() { 100 | return this->getValue(); 101 | } 102 | 103 | double PlaceholderOp::backward(string partialDerivativeOpname) { 104 | return 0; 105 | } 106 | 107 | // VariableOp 108 | VariableOp::VariableOp() : Op("VariableOp"), isTrainable(true) { 109 | 110 | } 111 | 112 | VariableOp::VariableOp(double value) : Op("VariableOp"), value(value), isTrainable(true) { 113 | 114 | } 115 | 116 | VariableOp::VariableOp(double value, bool isTrainable) : Op("VariableOp"), value(value), isTrainable(isTrainable) { 117 | 118 | }; 119 | 120 | double VariableOp::getValue() { 121 | return this->value; 122 | } 123 | 124 | void VariableOp::setValue(double value) { 125 | this->value = value; 126 | } 127 | 128 | double VariableOp::forward() { 129 | return this->getValue(); 130 | } 131 | 132 | double VariableOp::backward(string partialDerivativeOpname) { 133 | double grad; 134 | 135 | if (partialDerivativeOpname == "") { 136 | grad = 1; 137 | } else if (this->getName() == partialDerivativeOpname) { 138 | grad = 1; 139 | } else { 140 | grad = 0; 141 | } 142 | 143 | return grad; 144 | } 145 | 146 | bool VariableOp::getIsTrainable() { 147 | return this->isTrainable; 148 | } 149 | 150 | void VariableOp::setIsTrainable(bool isTrainable) { 151 | this->isTrainable = isTrainable; 152 | } 153 | 154 | // PowerOp 155 | PowerOp::PowerOp(Op* inputOp, int power) : Op("PowerOp"), inputOp(inputOp), power(power) { 156 | 157 | } 158 | 159 | PowerOp::PowerOp(double inputValue, int power) : Op("PowerOp"), power(power) { 160 | this->inputOp = new ConstantOp(inputValue); 161 | } 162 | 163 | double PowerOp::forward() { 164 | double x = this->inputOp->forward(); 165 | return pow(x, this->power); 166 | } 167 | 168 | double PowerOp::backward(string partialDerivativeOpname) { 169 | double x = this->inputOp->forward(); 170 | double grad; 171 | 172 | if (PlaceholderOp* op = dynamic_cast(this->inputOp)) { 173 | grad = 0; 174 | } else if(ConstantOp* op = dynamic_cast(this->inputOp)) { 175 | grad = 0; 176 | } else if(VariableOp* op = dynamic_cast(this->inputOp)) { 177 | grad = this->power * pow(x, this->power - 1); 178 | } else { 179 | grad = this->power * pow(x, this->power - 1) * this->inputOp->backward(partialDerivativeOpname); 180 | } 181 | 182 | return grad; 183 | } 184 | 185 | 186 | // SquareOp 187 | SquareOp::SquareOp(Op* inputOp) : PowerOp(inputOp, 2) { 188 | 189 | } 190 | 191 | // AddOp 192 | AddOp::AddOp(Op* firstInputOp, Op* secondInputOp) : Op("AddOp"), firstInputOp(firstInputOp), secondInputOp(secondInputOp) { 193 | 194 | } 195 | 196 | AddOp::AddOp(Op* firstInputOp, double secondInputValue) : Op("AddOp"), firstInputOp(firstInputOp) { 197 | this->secondInputOp = new ConstantOp(secondInputValue); 198 | } 199 | 200 | AddOp::AddOp(double firstInputValue, Op* secondInputOp) : Op("AddOp"), secondInputOp(secondInputOp) { 201 | this->firstInputOp = new ConstantOp(firstInputValue); 202 | } 203 | 204 | AddOp::AddOp(double firstInputValue, double secondInputValue) : Op("AddOp") { 205 | this->firstInputOp = new ConstantOp(firstInputValue); 206 | this->secondInputOp = new ConstantOp(secondInputValue); 207 | } 208 | 209 | double AddOp::forward() { 210 | return this->firstInputOp->forward() + this->secondInputOp->forward(); 211 | } 212 | 213 | double AddOp::backward(string partialDerivativeOpname) { 214 | double grad = this->firstInputOp->backward(partialDerivativeOpname) + this->secondInputOp->backward(partialDerivativeOpname); 215 | return grad; 216 | } 217 | 218 | // MinusOp 219 | MinusOp::MinusOp(Op* firstInputOp, Op* secondInputOp) : Op("MinusOp"), firstInputOp(firstInputOp), secondInputOp(secondInputOp) { 220 | 221 | } 222 | 223 | MinusOp::MinusOp(Op* firstInputOp, double secondInputValue) : Op("MinusOp"), firstInputOp(firstInputOp) { 224 | this->secondInputOp = new ConstantOp(secondInputValue); 225 | } 226 | 227 | MinusOp::MinusOp(double firstInputValue, Op* secondInputOp) : Op("MinusOp"), secondInputOp(secondInputOp) { 228 | this->firstInputOp = new ConstantOp(firstInputValue); 229 | } 230 | 231 | MinusOp::MinusOp(double firstInputValue, double secondInputValue) : Op("MinusOp") { 232 | this->firstInputOp = new ConstantOp(firstInputValue); 233 | this->secondInputOp = new ConstantOp(secondInputValue); 234 | } 235 | 236 | double MinusOp::forward() { 237 | return this->firstInputOp->forward() - this->secondInputOp->forward(); 238 | } 239 | 240 | double MinusOp::backward(string partialDerivativeOpname) { 241 | double grad = this->firstInputOp->backward(partialDerivativeOpname) - this->secondInputOp->backward(partialDerivativeOpname); 242 | return grad; 243 | } 244 | 245 | // MultipleOp 246 | MultipleOp::MultipleOp(Op* firstInputOp, Op* secondInputOp) : Op("MultipleOp"), firstInputOp(firstInputOp), secondInputOp(secondInputOp) { 247 | 248 | } 249 | 250 | MultipleOp::MultipleOp(Op* firstInputOp, double secondInputValue) : Op("MultipleOp"), firstInputOp(firstInputOp) { 251 | this->secondInputOp = new ConstantOp(secondInputValue); 252 | } 253 | 254 | MultipleOp::MultipleOp(double firstInputValue, Op* secondInputOp) : Op("MultipleOp"), secondInputOp(secondInputOp) { 255 | this->firstInputOp = new ConstantOp(firstInputValue); 256 | } 257 | 258 | MultipleOp::MultipleOp(double firstInputValue, double secondInputValue) : Op("MultipleOp") { 259 | this->firstInputOp = new ConstantOp(firstInputValue); 260 | this->secondInputOp = new ConstantOp(secondInputValue); 261 | } 262 | 263 | double MultipleOp::forward() { 264 | return this->firstInputOp->forward() * this->secondInputOp->forward(); 265 | } 266 | 267 | double MultipleOp::backward(string partialDerivativeOpname) { 268 | double grad; 269 | double firstInputOpValue = this->firstInputOp->forward(); 270 | double secondInputOpValue = this->secondInputOp->forward(); 271 | double firstInputOpGrad = this->firstInputOp->backward(partialDerivativeOpname); 272 | double secondInputOpGrad = this->secondInputOp->backward(partialDerivativeOpname); 273 | 274 | grad = firstInputOpGrad * secondInputOpValue + firstInputOpValue * secondInputOpGrad; 275 | return grad; 276 | } 277 | 278 | // DivideOp 279 | DivideOp::DivideOp(Op* firstInputOp, Op* secondInputOp) : Op("DivideOp"), firstInputOp(firstInputOp), secondInputOp(secondInputOp) { 280 | 281 | } 282 | 283 | DivideOp::DivideOp(Op* firstInputOp, double secondInputValue) : Op("DivideOp"), firstInputOp(firstInputOp) { 284 | this->secondInputOp = new ConstantOp(secondInputValue); 285 | } 286 | 287 | DivideOp::DivideOp(double firstInputValue, Op* secondInputOp) : Op("DivideOp"), secondInputOp(secondInputOp) { 288 | this->firstInputOp = new ConstantOp(firstInputValue); 289 | } 290 | 291 | DivideOp::DivideOp(double firstInputValue, double secondInputValue) : Op("DivideOp") { 292 | this->firstInputOp = new ConstantOp(firstInputValue); 293 | this->secondInputOp = new ConstantOp(secondInputValue); 294 | } 295 | 296 | double DivideOp::forward() { 297 | return this->firstInputOp->forward() / this->secondInputOp->forward(); 298 | } 299 | 300 | double DivideOp::backward(string partialDerivativeOpname) { 301 | double grad; 302 | double firstInputOpValue = this->firstInputOp->forward(); 303 | double secondInputOpValue = this->secondInputOp->forward(); 304 | double firstInputOpGrad = this->firstInputOp->backward(partialDerivativeOpname); 305 | double secondInputOpGrad = this->secondInputOp->backward(partialDerivativeOpname); 306 | 307 | if (secondInputOpGrad == 0) { 308 | // TODO: Throw exception because of no reasonable grad 309 | grad = 0; 310 | } else { 311 | grad = firstInputOpGrad * secondInputOpValue - firstInputOpValue * secondInputOpGrad; 312 | grad = grad / pow(secondInputOpValue, 2); 313 | } 314 | return grad; 315 | } 316 | 317 | 318 | 319 | 320 | } // namespace hplearn 321 | -------------------------------------------------------------------------------- /hplearn/op.h: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #ifndef HPLEARN_OP_H 18 | #define HPLEARN_OP_H 19 | 20 | #include 21 | 22 | 23 | using namespace std; 24 | 25 | namespace hplearn { 26 | 27 | 28 | /* 29 | * The abstract operation. 30 | */ 31 | class Op { 32 | protected: 33 | string name; 34 | 35 | public: 36 | Op(); 37 | Op(string name); 38 | 39 | string getName(); 40 | void setName(string name); 41 | 42 | virtual double forward() = 0; 43 | virtual double backward(string partialDerivativeOpname="") = 0; 44 | 45 | // TODO: Change to AddOp after resolving circular dependencies 46 | Op* operator+(Op& op); 47 | Op* operator-(Op& op); 48 | Op* operator*(Op& op); 49 | Op* operator/(Op& op); 50 | 51 | }; 52 | 53 | /* 54 | * The constant operation. 55 | */ 56 | class ConstantOp : public Op { 57 | private: 58 | double value; 59 | 60 | public: 61 | ConstantOp(); 62 | ConstantOp(double value); 63 | double getValue(); 64 | void setValue(double value); 65 | double forward(); 66 | double backward(string partialDerivativeOpname=""); 67 | }; 68 | 69 | /* 70 | * The placeholder operation. 71 | */ 72 | class PlaceholderOp : public Op { 73 | private: 74 | double value; 75 | 76 | public: 77 | PlaceholderOp(); 78 | PlaceholderOp(double value); 79 | 80 | double getValue(); 81 | void setValue(double value); 82 | 83 | double forward(); 84 | double backward(string partialDerivativeOpname=""); 85 | }; 86 | 87 | /* 88 | * The variable operation. 89 | */ 90 | class VariableOp : public Op { 91 | private: 92 | double value; 93 | bool isTrainable; 94 | 95 | public: 96 | VariableOp(); 97 | VariableOp(double value); 98 | VariableOp(double value, bool isTrainable); 99 | double getValue(); 100 | void setValue(double value); 101 | double forward(); 102 | double backward(string partialDerivativeOpname=""); 103 | bool getIsTrainable(); 104 | void setIsTrainable(bool isTrainable); 105 | }; 106 | 107 | /* 108 | * The power operation. 109 | */ 110 | class PowerOp : public Op { 111 | private: 112 | Op* inputOp; 113 | int power; 114 | 115 | public: 116 | PowerOp(Op* inputOp, int power); 117 | PowerOp(double inputValue, int power); 118 | double forward(); 119 | double backward(string partialDerivativeOpname=""); 120 | }; 121 | 122 | /* 123 | * The square operation. 124 | */ 125 | class SquareOp : public PowerOp { 126 | public: 127 | SquareOp(Op* inputOp); 128 | }; 129 | 130 | 131 | /* 132 | * The add operation. 133 | */ 134 | class AddOp : public Op { 135 | private: 136 | Op* firstInputOp; 137 | Op* secondInputOp; 138 | 139 | public: 140 | AddOp(Op* firstInputOp, Op* secondInputOp); 141 | AddOp(Op* firstInputOp, double secondInputValue); 142 | AddOp(double firstInputValue, Op* secondInputOp); 143 | AddOp(double firstInputValue, double secondInputValue); 144 | double forward(); 145 | double backward(string partialDerivativeOpname=""); 146 | }; 147 | 148 | /* 149 | * The minus operation. 150 | */ 151 | class MinusOp : public Op { 152 | private: 153 | Op* firstInputOp; 154 | Op* secondInputOp; 155 | 156 | public: 157 | MinusOp(Op* firstInputOp, Op* secondInputOp); 158 | MinusOp(Op* firstInputOp, double secondInputValue); 159 | MinusOp(double firstInputValue, Op* secondInputOp); 160 | MinusOp(double firstInputValue, double secondInputValue); 161 | double forward(); 162 | double backward(string partialDerivativeOpname=""); 163 | }; 164 | 165 | /* 166 | * The multiple operation. 167 | */ 168 | class MultipleOp : public Op { 169 | private: 170 | Op* firstInputOp; 171 | Op* secondInputOp; 172 | 173 | public: 174 | MultipleOp(Op* firstInputOp, Op* secondInputOp); 175 | MultipleOp(Op* firstInputOp, double secondInputValue); 176 | MultipleOp(double firstInputValue, Op* secondInputOp); 177 | MultipleOp(double firstInputValue, double secondInputValue); 178 | double forward(); 179 | double backward(string partialDerivativeOpname=""); 180 | }; 181 | 182 | /* 183 | * The divide operation. 184 | */ 185 | class DivideOp : public Op { 186 | private: 187 | Op* firstInputOp; 188 | Op* secondInputOp; 189 | 190 | public: 191 | DivideOp(Op* firstInputOp, Op* secondInputOp); 192 | DivideOp(Op* firstInputOp, double secondInputValue); 193 | DivideOp(double firstInputValue, Op* secondInputOp); 194 | DivideOp(double firstInputValue, double secondInputValue); 195 | double forward(); 196 | double backward(string partialDerivativeOpname=""); 197 | }; 198 | 199 | 200 | 201 | 202 | 203 | } // namespace hplearn 204 | 205 | #endif //HPLEARN_OP_H 206 | 207 | -------------------------------------------------------------------------------- /hplearn/optimizer.cpp: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #include "optimizer.h" 18 | 19 | namespace hplearn { 20 | 21 | 22 | // Optimizer 23 | Optimizer::Optimizer() : name("Optimizer") { 24 | 25 | } 26 | 27 | Optimizer::Optimizer(string name) : name(name) { 28 | 29 | } 30 | 31 | string Optimizer::getName() { 32 | return this->name; 33 | } 34 | 35 | void Optimizer::setName(string name) { 36 | this->name = name; 37 | } 38 | 39 | 40 | // OptimizerMinimizeOp 41 | OptimizerMinimizeOp::OptimizerMinimizeOp(Graph *graph, Optimizer *optimizer, Op *lossOp) : Op("OptimizerMinimizeOp"), graph(graph), optimizer(optimizer), lossOp(lossOp) { 42 | 43 | 44 | } 45 | 46 | Graph * OptimizerMinimizeOp::getGraph() { 47 | return this->graph; 48 | } 49 | 50 | void OptimizerMinimizeOp::setGraph(Graph *graph) { 51 | this->graph = graph; 52 | } 53 | 54 | Optimizer * OptimizerMinimizeOp::getOptimizer() { 55 | return this->optimizer; 56 | } 57 | 58 | void OptimizerMinimizeOp::setOptimizer(Optimizer *optimizer) { 59 | this->optimizer = optimizer; 60 | } 61 | 62 | Op* OptimizerMinimizeOp::getLossOp() { 63 | return this->lossOp; 64 | } 65 | 66 | void OptimizerMinimizeOp::setLossOp(Op *lossOp) { 67 | this->lossOp = lossOp; 68 | } 69 | 70 | double OptimizerMinimizeOp::forward() { 71 | 72 | map variablenameGradMap = this->optimizer->computeGradients(this->lossOp); 73 | this->optimizer->applyGradients(variablenameGradMap); 74 | 75 | // TODO: Should return nothing 76 | return 0.0; 77 | } 78 | 79 | double OptimizerMinimizeOp::backward(string partialDerivativeOpname) { 80 | // TODO: Unimplement and throw exception if needed 81 | return 0.0; 82 | } 83 | 84 | 85 | // GradientDescentOptimizer 86 | GradientDescentOptimizer::GradientDescentOptimizer(Graph* graph) : Optimizer("GradientDescentOptimizer"), graph(graph), learningRate(0.01) { 87 | 88 | } 89 | 90 | GradientDescentOptimizer::GradientDescentOptimizer(Graph* graph, double learningRate): Optimizer("GradientDescentOptimizer"), graph(graph), learningRate(learningRate) { 91 | 92 | } 93 | 94 | Graph* GradientDescentOptimizer::getGraph() { 95 | return this->graph; 96 | } 97 | 98 | 99 | void GradientDescentOptimizer::setGraph(Graph *graph) { 100 | this->graph = graph; 101 | } 102 | 103 | double GradientDescentOptimizer::getLearningRate() { 104 | return this->learningRate; 105 | } 106 | 107 | void GradientDescentOptimizer::setLearningRate(double learningRate) { 108 | this->learningRate = learningRate; 109 | } 110 | 111 | void* GradientDescentOptimizer::minimize(Op* lossOp) { 112 | OptimizerMinimizeOp* optimizerMinimizeOp = new OptimizerMinimizeOp(this->graph, this, lossOp); 113 | return optimizerMinimizeOp; 114 | 115 | } 116 | 117 | map GradientDescentOptimizer::computeGradients(Op* lossOp) { 118 | 119 | map nameOpMap = this->graph->getTrainableNameOpMap(); 120 | 121 | map variablenameGradMap; 122 | 123 | map::iterator item; 124 | for(item=nameOpMap.begin(); item!=nameOpMap.end(); ++item) { 125 | string opName = item->first; 126 | // Op* op = item->second; 127 | double grad = lossOp->backward(opName); 128 | variablenameGradMap[opName] = grad; 129 | } 130 | 131 | return variablenameGradMap; 132 | 133 | } 134 | 135 | 136 | void GradientDescentOptimizer::applyGradients(map variablenameGradMap) { 137 | 138 | map nameOpMap = this->graph->getTrainableNameOpMap(); 139 | 140 | map::iterator item; 141 | for(item=nameOpMap.begin(); item!=nameOpMap.end(); ++item) { 142 | string opName = item->first; 143 | // TODO: Type check before converting 144 | VariableOp* variableOp = (VariableOp*) item->second; 145 | 146 | double grad = variablenameGradMap[opName]; 147 | double finalGrad = this->learningRate * grad; 148 | 149 | variableOp->setValue(variableOp->getValue() - finalGrad); 150 | } 151 | 152 | } 153 | 154 | 155 | } // namespace hplearn 156 | -------------------------------------------------------------------------------- /hplearn/optimizer.h: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #ifndef HPLEARN_OPTIMIZER_H 18 | #define HPLEARN_OPTIMIZER_H 19 | 20 | #include 21 | 22 | #include "op.h" 23 | #include "graph.h" 24 | 25 | using namespace std; 26 | 27 | namespace hplearn { 28 | 29 | class Optimizer { 30 | protected: 31 | string name; 32 | 33 | public: 34 | Optimizer(); 35 | Optimizer(string name); 36 | 37 | string getName(); 38 | void setName(string name); 39 | 40 | // TODO: Change to the class pointer after resolving circular dependencies 41 | // virtual OptimizerMinimizeOp* minimize(Op* lossOp) = 0; 42 | virtual void* minimize(Op* lossOp) = 0; 43 | virtual map computeGradients(Op* lossOp) = 0; 44 | virtual void applyGradients(map variablenameGradMap) = 0; 45 | }; 46 | 47 | 48 | /** 49 | * The minimize operation for optimizer. 50 | */ 51 | class OptimizerMinimizeOp : public Op { 52 | private: 53 | Graph* graph; 54 | Optimizer* optimizer; 55 | Op* lossOp; 56 | 57 | public: 58 | OptimizerMinimizeOp(Graph* graph, Optimizer* optimizer, Op* lossOp); 59 | 60 | Graph* getGraph(); 61 | void setGraph(Graph* graph); 62 | Optimizer* getOptimizer(); 63 | void setOptimizer(Optimizer* optimizer); 64 | Op* getLossOp(); 65 | void setLossOp(Op* lossOp); 66 | 67 | double forward(); 68 | double backward(string partialDerivativeOpname=""); 69 | 70 | }; 71 | 72 | 73 | class GradientDescentOptimizer : public Optimizer { 74 | private: 75 | Graph* graph; 76 | double learningRate; 77 | 78 | 79 | public: 80 | GradientDescentOptimizer(Graph* graph); 81 | GradientDescentOptimizer(Graph* graph, double learningRate); 82 | 83 | Graph* getGraph(); 84 | void setGraph(Graph* graph); 85 | double getLearningRate(); 86 | void setLearningRate(double learningRate); 87 | 88 | void* minimize(Op* lossOp); 89 | map computeGradients(Op* lossOp); 90 | void applyGradients(map variablenameGradMap); 91 | }; 92 | 93 | 94 | 95 | } // namespace hplearn 96 | 97 | #endif //HPLEARN_OPTIMIZER_H -------------------------------------------------------------------------------- /hplearn/session.cpp: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #include "session.h" 18 | 19 | namespace hplearn { 20 | 21 | 22 | Session::Session(Graph* graph) : graph(graph) { 23 | 24 | } 25 | 26 | string Session::getName() { 27 | return this->name; 28 | } 29 | 30 | void Session::setName(string name) { 31 | this->name = name; 32 | } 33 | 34 | Graph * Session::getGraph() { 35 | return this->graph; 36 | } 37 | 38 | void Session::setGraph(Graph *graph) { 39 | this->graph = graph; 40 | } 41 | 42 | double Session::run(string opName) { 43 | map nameOpMap = this->graph->getNameOpMap(); 44 | Op* op = nameOpMap[opName]; 45 | double result = op->forward(); 46 | 47 | return result; 48 | } 49 | 50 | double Session::run(string opName, map feedDict) { 51 | 52 | map nameOpMap = this->graph->getNameOpMap(); 53 | 54 | map::iterator item; 55 | for(item=feedDict.begin(); item!=feedDict.end(); ++item) { 56 | string feedOpName = item->first; 57 | double value = item->second; 58 | Op* feedOp = nameOpMap[feedOpName]; 59 | if (PlaceholderOp* placeholderOp = dynamic_cast(feedOp)) { 60 | placeholderOp->setValue(value); 61 | } 62 | } 63 | 64 | Op* op = nameOpMap[opName]; 65 | double result = op->forward(); 66 | 67 | return result; 68 | } 69 | 70 | 71 | } // namespace hplearn 72 | -------------------------------------------------------------------------------- /hplearn/session.h: -------------------------------------------------------------------------------- 1 | /* ===================================================================== 2 | Copyright 2017 The Authors. All Rights Reserved. 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | ========================================================================*/ 16 | 17 | #ifndef HPLEARN_SESSION_H 18 | #define HPLEARN_SESSION_H 19 | 20 | #include 21 | #include 22 | 23 | #include "graph.h" 24 | 25 | using namespace std; 26 | 27 | namespace hplearn { 28 | 29 | 30 | class Session { 31 | private: 32 | string name; 33 | Graph *graph; 34 | 35 | public: 36 | Session(Graph* graph); 37 | string getName(); 38 | void setName(string name); 39 | Graph *getGraph(); 40 | void setGraph(Graph* graph); 41 | double run(string opName); 42 | double run(string opName, map opnameValueMap); 43 | }; 44 | 45 | }// namespace hplearn 46 | 47 | #endif //HPLEARN_SESSION_H -------------------------------------------------------------------------------- /images/benchmark_add_operation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/images/benchmark_add_operation.png -------------------------------------------------------------------------------- /images/benchmark_linear_regression.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/images/benchmark_linear_regression.png -------------------------------------------------------------------------------- /images/benchmark_multiple_operation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/images/benchmark_multiple_operation.png -------------------------------------------------------------------------------- /models/README.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tobegit3hub/hplearn/edbdb839540abc188200b920dc437b0c270bb54b/models/README.md --------------------------------------------------------------------------------