├── articles
├── includes
│ ├── cntk-short.md
│ ├── cntk.md
│ ├── versionadded-2.1.md
│ ├── deprecated-2.1.md
│ ├── versionchanged-2.1.md
│ ├── vs2015u3-md.md
│ ├── versionadded-2.1-block.md
│ ├── deprecated-2.1-block.md
│ ├── versionchanged-2.1-block.md
│ └── Index-Caching.md
├── figures
│ ├── bm.jpg
│ ├── bmcompare.png
│ └── asgd-cifar-compare.png
├── Articles2
│ ├── eq1.png
│ ├── eq2.png
│ ├── eq3.png
│ ├── eq4.png
│ ├── eq5.png
│ ├── eq6.png
│ ├── 071316_1312_RecurrentNe1.png
│ ├── 071316_1312_RecurrentNe2.png
│ ├── 071316_1312_RecurrentNe3.jpg
│ ├── 071316_1312_RecurrentNe4.png
│ ├── 071316_1315_GRUsBSando1.png
│ ├── 071316_1315_GRUsBSando2.png
│ ├── 071316_1315_GRUsBSando3.png
│ └── 071316_1315_GRUsBSando4.png
├── Tutorial2
│ ├── cnn.png
│ ├── loss.png
│ ├── convLayer.PNG
│ ├── Max_pooling.png
│ └── mnist_examples.png
├── pictures
│ ├── graph.png
│ ├── setup
│ │ ├── uwp-arp.png
│ │ ├── uwp-tests.png
│ │ ├── UnblockZip70.jpg
│ │ ├── uwp-vs-setup.png
│ │ ├── VS2017Workloads.jpg
│ │ ├── VS2015InstallCustom.jpg
│ │ ├── VS2017VCTools14.11.jpg
│ │ ├── VS2015InstallCustom70.jpg
│ │ ├── VS2015InstallFeatures.jpg
│ │ └── VS2015InstallFeatures70.jpg
│ ├── ONNX_logo_main.png
│ ├── TensorBoard
│ │ ├── tensorboard_graph.png
│ │ ├── tensorboard_images.png
│ │ └── tensorboard_scalars.png
│ ├── EvaluateWebApiEvalDll
│ │ ├── nuget_manager.png
│ │ ├── publishing_step.png
│ │ ├── publishing_webapp.png
│ │ ├── setting_64_bits_in_vs.png
│ │ ├── local_webapi_evaluation.png
│ │ ├── remote_webapi_evaluation.png
│ │ └── setting_64_bits_in_portal.png
│ ├── ImageAutoEncoder
│ │ ├── imageAutoEncoder_16x.png
│ │ └── imageAutoEncoder_cmp.png
│ └── EvaluateWebApiCntkLibrary
│ │ ├── nuget_manager.png
│ │ ├── publishing_step.png
│ │ ├── publishing_webapp.png
│ │ ├── setting_64_bits_in_vs.png
│ │ ├── local_webapi_evaluation.png
│ │ ├── remote_webapi_evaluation.png
│ │ └── setting_64_bits_in_portal.png
├── Tutorial
│ ├── logistic.png
│ ├── softmax.png
│ ├── synth_data.png
│ ├── synth_data_3.png
│ ├── 3class_decision.png
│ ├── softmaxformula.png
│ └── decision_boundary.png
├── Articles3
│ ├── residual.png
│ ├── deepCrossing.png
│ └── DeepCrossing.cntk
├── Articles4
│ └── SequencetoS2.png
├── Tutorial_FastRCNN
│ ├── AP.jpg
│ ├── bus_01.jpg
│ ├── p_interp.jpg
│ ├── rcnnPipeline.JPG
│ ├── WIN_20160803_11_29_07_Pro.roi.jpg
│ ├── nn_0WIN_20160803_11_28_42_Pro.jpg
│ ├── nn_1WIN_20160803_11_42_36_Pro.jpg
│ ├── nn_2WIN_20160803_11_46_03_Pro.jpg
│ ├── nn_3WIN_20160803_11_48_26_Pro.jpg
│ ├── nn_4WIN_20160803_12_37_07_Pro.jpg
│ ├── svm_4WIN_20160803_12_37_07_Pro.jpg
│ ├── nn_noNms4WIN_20160803_12_37_07_Pro.jpg
│ ├── WIN_20160803_11_29_07_Pro.noGrid.roi.jpg
│ └── WIN_20160803_11_29_07_Pro.noGridNoFiltering.roi.jpg
├── Tutorial_TL
│ ├── Weaver_bird.jpg
│ ├── image_08058.jpg
│ ├── image_08081.jpg
│ ├── image_08084.jpg
│ ├── image_08093.jpg
│ ├── quetzal-bird.jpg
│ ├── Swaledale_sheep.jpg
│ ├── Icelandic_breed_sheep.jpg
│ ├── Canis_lupus_occidentalis.jpg
│ ├── The_white_wolf_by_Lunchi.jpg
│ └── Bird_in_flight_wings_spread.jpg
├── breadcrumb
│ └── toc.yml
├── 404.md
├── CNTK-Python-known-issues-and-limitations.md
├── How-do-I-Adapt-models-in-Python.md
├── Setup-CNTK-from-source.md
├── ConvertDBN-command.md
├── docfx.json
├── ReleaseNotes
│ ├── CNTK_2_0_Beta_2_Release_Notes.md
│ ├── CNTK_1_7_2_Release_Notes.md
│ ├── CNTK_2_0_Beta_3_Release_Notes.md
│ ├── CNTK_2_0_Beta_5_Release_Notes.md
│ ├── CNTK_2_0_Beta_4_Release_Notes.md
│ ├── CNTK_2_0_Beta_1_Release_Notes.md
│ ├── CNTK_2_0_Beta_7_Release_Notes.md
│ ├── CNTK_1_6_Release_Notes.md
│ ├── CNTK_2_0_RC_2_Release_Notes.md
│ ├── CNTK_2_0_Beta_9_Release_Notes.md
│ ├── CNTK_2_0_Beta_10_Release_Notes.md
│ └── CNTK_1_7_1_Release_Notes.md
├── Setup-development-environment.md
├── How-do-I-Read-Things-in-Python.md
├── Archive
│ ├── CNTK-Evaluate-Multiple-Models.md
│ ├── EvalDLL-Evaluation-on-Linux.md
│ ├── EvalDll-Examples.md
│ ├── EvalDLL-Evaluation-Overview.md
│ ├── CNTK-Evaluation-using-cntk.exe.md
│ └── CNTK-Evaluate-Hidden-Layers.md
├── getting-started.md
├── CNTK-1bit-SGD-License.md
├── How-To-Overview.md
├── Unary-Operations.md
├── Compatible-dimensions-in-reader-and-config.md
├── CNTK-Library-Evaluation-Overview.md
├── Feedback-Channels.md
├── Developing-and-Testing.md
├── Conference-Appearances.md
├── BrainScript-Reader-block.md
├── Update-1bit-SGD-Submodule-Location.md
├── Special-Nodes.md
├── Debugging-CNTK-source-code-in-Visual-Studio.md
├── Setup-Test-Python.md
├── Variables.md
├── If-Operation.md
├── CNTK-Library-Evaluation-on-UWP.md
├── Blog.md
├── ROIPooling.md
├── CNTK-model-format.md
├── Using-CNTK-with-BrainScript.md
├── CNTK-on-Azure.md
├── Deploy-Model-to-AKS.md
├── Debugging-CNTKs-GPU-source-code-in-Visual-Studio.md
├── Loading-Data-Overview.md
├── How-do-I-Deal-with-Errors-in-Python.md
├── CNTK-CSharp-Examples.md
├── CNTK-Evaluation-Overview.md
├── index.md
├── Post-Batch-Normalization-Statistics.md
├── Test-Configurations.md
├── archive.md
├── Plot-command.md
├── Records.md
├── Linux-Environment-Variables.md
├── Dropout.md
├── Setup-CNTK-Python-Tools-For-Windows.md
├── Setup-CNTK-on-your-machine.md
├── How-do-I-Read-Things-in-BrainScript.md
├── BrainScript-epochSize-and-Python-epoch_size-in-CNTK.md
├── How-do-I-Evaluate-models-in-Python.md
├── Reduction-Operations.md
├── Setup-UWP-Build-on-Windows.md
├── Tutorials.md
├── BrainScript-LM-sequence-reader.md
├── Sequential.md
├── NuGet-Package.md
├── BrainScript-Activation-Functions.md
├── CNTK-Library-Evaluation-on-Linux.md
├── Pooling.md
├── Using-CNTK-with-Keras.md
├── BrainScript-and-Python-Performance-Profiler.md
├── Contributing-to-CNTK.md
└── OptimizedRNNStack.md
├── .gitignore
├── .vscode
└── settings.json
├── README.md
├── .openpublishing.build.ps1
├── LICENSE-CODE
├── ThirdPartyNotices
├── .openpublishing.publish.config.json
├── .openpublishing.redirection.json
└── .spelling
/articles/includes/cntk-short.md:
--------------------------------------------------------------------------------
CNTK
--------------------------------------------------------------------------------
/articles/includes/cntk.md:
--------------------------------------------------------------------------------
Microsoft Cognitive Toolkit (CNTK)
--------------------------------------------------------------------------------
/articles/includes/versionadded-2.1.md:
--------------------------------------------------------------------------------
_New in CNTK version 2.1._
--------------------------------------------------------------------------------
/articles/includes/deprecated-2.1.md:
--------------------------------------------------------------------------------
_Deprecated in CNTK version 2.1._
--------------------------------------------------------------------------------
/articles/includes/versionchanged-2.1.md:
--------------------------------------------------------------------------------
_Changed in CNTK version 2.1._
--------------------------------------------------------------------------------
/articles/includes/vs2015u3-md.md:
--------------------------------------------------------------------------------
Microsoft Visual Studio 2015 Update 3
--------------------------------------------------------------------------------
/articles/includes/versionadded-2.1-block.md:
--------------------------------------------------------------------------------
> [!NOTE]
> New in CNTK version 2.1.
--------------------------------------------------------------------------------
/articles/includes/deprecated-2.1-block.md:
--------------------------------------------------------------------------------
> [!NOTE]
> Deprecated in CNTK version 2.1.
--------------------------------------------------------------------------------
/articles/includes/versionchanged-2.1-block.md:
--------------------------------------------------------------------------------
> [!NOTE]
> Changed in CNTK version 2.1.
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
log/
obj/
_site/
.optemp/
_themes*/

.openpublishing.buildcore.ps1
--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------
// Place your settings in this file to overwrite default and user settings.
{
}
--------------------------------------------------------------------------------
[Binary image assets omitted: the individual .png/.jpg entries listed in the tree above are not repeated here; each one resolves to the same path under https://raw.githubusercontent.com/options/cognitive-toolkit-docs/master/]
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# cognitive-toolkit-docs-pr

Sources for Microsoft Cognitive Toolkit documentation hosted at https://docs.microsoft.com/en-us/cognitive-toolkit/.
--------------------------------------------------------------------------------
[Further binary image assets omitted; see the note above for the raw URL pattern.]
--------------------------------------------------------------------------------
/articles/breadcrumb/toc.yml:
--------------------------------------------------------------------------------
- name: Docs
  href: /
  homepage: /
  items:
  - name: Cognitive Toolkit
    tocHref: /cognitive-toolkit/
    topicHref: /cognitive-toolkit/index
--------------------------------------------------------------------------------
[Further binary image assets omitted; see the note above for the raw URL pattern.]
--------------------------------------------------------------------------------
/articles/404.md:
--------------------------------------------------------------------------------
# The page you requested could not be found

The URL might be misspelled, or the page you are looking for is no longer available.

[Back to Cognitive Toolkit Home](https://docs.microsoft.com/en-us/cognitive-toolkit/index)
--------------------------------------------------------------------------------
/articles/CNTK-Python-known-issues-and-limitations.md:
--------------------------------------------------------------------------------
---
title: CNTK Python known issues and limitations
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 04/03/2017
ms.custom: cognitive-toolkit
ms.topic: get-started-article
ms.service: Cognitive-services
ms.devlang: python, cpp, csharp, dotnet
---

# CNTK Python known issues and limitations

- The core API itself is implemented in C++ for speed and efficiency, and the Python bindings are created through SWIG. We are increasingly adding thin Python wrappers to the APIs so that docstrings can be attached, but this is a work in progress; for some APIs you may still encounter the SWIG-generated definitions directly (which are not the prettiest to read).
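As a quick illustration, here is a minimal sketch (assuming only a standard `cntk` 2.x Python installation, nothing version-specific) showing both layers from the interpreter:

```python
import cntk

# The wrapped API carries a readable docstring...
print(cntk.ops.times.__doc__[:200])

# ...while the underlying SWIG-generated module (cntk.cntk_py) is what you
# may occasionally hit directly; its names and signatures are
# machine-generated and much terser.
import cntk.cntk_py as cntk_py
print([n for n in dir(cntk_py) if n.startswith("Function")][:5])
```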
15 | -------------------------------------------------------------------------------- /articles/How-do-I-Adapt-models-in-Python.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: How do I adapt models in Python 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/05/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: python 10 | --- 11 | 12 | # How do I adapt models in Python 13 | 14 | [Read and modify the training weights from Python](./How-do-I-Adapt-models-in-Python.md#read-and-modify-the-training-weights-from-python) 15 | 16 | ## Read and modify the training weights from Python 17 | 18 | ```python 19 | from cntk import * 20 | p=parameter(5, init=glorot_uniform()) 21 | 22 | p.value 23 | >>>result: array([-0.7146188 , 0.59619093, 0.95851505, 0.29351783, 0.13692594], dtype=float32) 24 | 25 | p.value = np.ones(5) 26 | ``` 27 | -------------------------------------------------------------------------------- /.openpublishing.build.ps1: -------------------------------------------------------------------------------- 1 | param( 2 | [string]$buildCorePowershellUrl = "https://opbuildstorageprod.blob.core.windows.net/opps1container/.openpublishing.buildcore.ps1", 3 | [string]$parameters 4 | ) 5 | # Main 6 | $errorActionPreference = 'Stop' 7 | 8 | # Step-1: Download buildcore script to local 9 | echo "download build core script to local with source url: $buildCorePowershellUrl" 10 | $repositoryRoot = Split-Path -Parent $MyInvocation.MyCommand.Definition 11 | $buildCorePowershellDestination = "$repositoryRoot\.openpublishing.buildcore.ps1" 12 | Invoke-WebRequest $buildCorePowershellUrl -OutFile "$buildCorePowershellDestination" 13 | 14 | # Step-2: Run build core 15 | echo "run build core script with parameters: $parameters" 16 | & "$buildCorePowershellDestination" "$parameters" 17 | exit $LASTEXITCODE 18 | -------------------------------------------------------------------------------- /articles/Setup-CNTK-from-source.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Source Code and Development 3 | author: wolfma61 4 | ms.author: wolfma 5 | ms.date: 06/12/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: setup from source 8 | ms.service: Cognitive-services 9 | ms.devlang: 10 | --- 11 | 12 | # CNTK Source Code and Development 13 | 14 | The complete source code of the Microsoft Cognitive Toolkit is published on [GitHub](https://www.github.com/Microsoft/CNTK). If you plan on building CNTK yourself you will need to [setup a local CNTK development environment](./Setup-development-environment.md). Information about **developing and testing** is [available here](./Developing-and-Testing.md). Finally, if you want to **contribute** your development efforts back into the CNTK community, you will find this [page](./Contributing-to-CNTK.md) useful. 15 | -------------------------------------------------------------------------------- /articles/ConvertDBN-command.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: ConvertDBN command 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 01/20/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # ConvertDBN command 13 | 14 | This command is used to convert a model generated by Microsoft’s dbn.exe tool to a CNTK model. 
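The following is a minimal sketch of how such a command section could look in a cntk.exe config file; the block name, the lowercase action string, and the file paths are illustrative assumptions, not documented defaults:

```
# invoked e.g. via: cntk.exe configFile=convert.cntk command=convertdbn
convertdbn = [
    action = "convertdbn"
    modelPath = "converted.dnn"    # output: the CNTK model to generate
    dbnModelPath = "legacy.dbn"    # input: the dbn.exe model to convert
]
```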
--------------------------------------------------------------------------------
/articles/docfx.json:
--------------------------------------------------------------------------------
{
  "build": {
    "content": [
      {
        "files": [
          "**/*.md",
          "**/*.yml"
        ],
        "exclude": [
          "**/obj/**",
          "**/includes/**",
          "README.md",
          "LICENSE",
          "LICENSE-CODE",
          "ThirdPartyNotices"
        ]
      }
    ],
    "resource": [
      {
        "files": [
          "**/*.png",
          "**/*.jpg"
        ],
        "exclude": [
          "**/obj/**",
          "**/includes/**"
        ]
      }
    ],
    "overwrite": [],
    "externalReference": [],
    "globalMetadata": {
      "uhfHeaderId": "MSDocsHeader-cognitive-toolkit",
      "breadcrumb_path": "/cognitive-toolkit/breadcrumb/TOC.json",
      "searchScope": ["Cognitive Toolkit"]
    },
    "fileMetadata": {},
    "template": [],
    "dest": "cntk"
  }
}
--------------------------------------------------------------------------------
/articles/ReleaseNotes/CNTK_2_0_Beta_2_Release_Notes.md:
--------------------------------------------------------------------------------
---
title: CNTK_2_0_Beta_2_Release_Notes
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 11/04/2016
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: NA
---

# CNTK_2_0_Beta_2_Release_Notes

In this release we have further tuned different features based on the overwhelming feedback on Beta 1 and have fixed several bugs. You are not required to adapt your code or models to take advantage of these improvements.

## Improvements in the Examples and Tutorials and new Tutorial on Reinforcement Learning

Based on your feedback we have made some changes in the examples and tutorials. We have added a new [Tutorial on Reinforcement Learning](https://github.com/Microsoft/CNTK/blob/v2.0.beta2.0/bindings/python/tutorials/CNTK_203_Reinforcement_Learning_Basics.ipynb) implemented as a Jupyter Notebook.
--------------------------------------------------------------------------------
/articles/Setup-development-environment.md:
--------------------------------------------------------------------------------
---
title: CNTK Development Environment Setup
author: wolfma61
ms.author: wolfma
ms.date: 06/12/2017
ms.custom: cognitive-toolkit
ms.topic: setup from source
ms.service: Cognitive-services
ms.devlang:
---

# CNTK Development Environment Setup

If you want to take a look at the CNTK source code, compile CNTK yourself, make changes to the CNTK codebase, or contribute these changes back to the community, these pages describe how to set up the development environment on your local system:

| Windows | Linux |
|:------------------------|:------------------------|
| [Script-driven development setup](./Setup-CNTK-with-script-on-Windows.md) | |
| [Manual development setup](./Setup-CNTK-on-Windows.md) | [Manual development setup](./Setup-CNTK-on-Linux.md) |

On Windows we recommend the script-driven development setup!
--------------------------------------------------------------------------------
/articles/How-do-I-Read-Things-in-Python.md:
--------------------------------------------------------------------------------
---
title: How do I read things in Python
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 04/05/2017
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: python
---

# How do I read things in Python

[Load model and access network weights (parameters)](#load-model-and-access-network-weights-parameters)

## Load model and access network weights (parameters)

You have trained and saved a model file. Now you want to load it elsewhere and get the parameters. Here are the steps you need to follow:

```python
from cntk import load_model, combine

loaded_model = load_model("model_file_path")
# For a model trained with BrainScript (is_BrainScript is a flag you set
# yourself), wrap the first output so the loaded model behaves like a
# regular CNTK function.
if is_BrainScript:
    loaded_model = combine([loaded_model.outputs[0]])

parameters = loaded_model.parameters
for parameter in parameters:
    print(parameter.name, parameter.shape, "\n", parameter.value)
```
--------------------------------------------------------------------------------
/articles/Archive/CNTK-Evaluate-Multiple-Models.md:
--------------------------------------------------------------------------------
---
title: CNTK Evaluate Multiple Models
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 07/31/2017
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: NA
---

# CNTK Evaluate Multiple Models

## Overview
The CNTK EvalDll library (`Cntk.Eval` and its .NET wrapper `Cntk.Eval.Wrapper` on Windows, `libCntk.Eval` on Linux) enables programmatic single-threaded evaluation of CNTK models (concurrent evaluations of a single model instance are not supported).
However, it is possible to load multiple instances of a model and evaluate each instance with its own single thread. This way multiple models can be evaluated in parallel, while each individual model is still evaluated single-threaded.

## Example
Refer to the `EvaluateMultipleModels` method in the [CSEvalClient](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/LegacyEvalDll/CSEvalClient) program for an example of a possible implementation for evaluating multiple models concurrently.
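The shape of that pattern, sketched in C#. This is not the CSEvalClient code itself; `LoadModelInstance` and `Evaluate` are hypothetical stand-ins for the actual `Cntk.Eval.Wrapper` calls:

```csharp
using System.Linq;

// One independent model instance per parallel task; each instance is
// only ever touched by the single thread that owns it.
var outputs = Enumerable.Range(0, 4)
    .AsParallel()
    .Select(i =>
    {
        var model = LoadModelInstance("model.dnn"); // hypothetical: loads a fresh instance
        return Evaluate(model, i);                  // hypothetical: single-threaded evaluation
    })
    .ToList();
```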
--------------------------------------------------------------------------------
/LICENSE-CODE:
--------------------------------------------------------------------------------
The MIT License (MIT)
Copyright (c) Microsoft Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/articles/includes/Index-Caching.md:
--------------------------------------------------------------------------------
[!INCLUDE[versionadded-2.1-block](versionadded-2.1-block.md)]

Index caching can significantly (by a factor of 2-3x) reduce start-up times, especially when working with large input files. Setting the `cacheIndex` flag to `true` signals the reader to write the indexing metadata to disk (in the same directory as the input file) if the cache file is not available or if it is stale (older than the input file). The writing is best-effort and is carried out on a separate thread so that it does not affect reader performance. If the cache file is present and up to date, the reader no longer skims the input file to build the index; instead, it loads the index from the cache file. Please note that certain reader configuration parameters have a direct impact on indexing (for instance, different values of `frameMode` could potentially result in indices with different numbers of sequences). For that reason, a cache file could be ignored by a reader with a configuration different from the one that produced the cache. To see the full benefit of caching, the configuration should not be modified on subsequent reruns.
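For orientation, a sketch of where the flag goes in a reader section; the reader type, file name, and stream definitions are placeholders, not part of the feature:

```
reader = {
    readerType = "CNTKTextFormatReader"
    file = "train.ctf"   # placeholder input file
    # Persist the index next to the input file and reuse it on later runs.
    cacheIndex = true
    input = {
        features = { dim = 784 ; format = "dense" }
        labels   = { dim = 10  ; format = "dense" }
    }
}
```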
--------------------------------------------------------------------------------
/articles/getting-started.md:
--------------------------------------------------------------------------------
---
title: Getting Started With the Microsoft Cognitive Toolkit
author: wolfma
ms.author: cbasoglu
ms.date: 12/06/2017
ms.custom: cognitive-toolkit
ms.topic: getting started

ms.service: Cognitive-services
ms.devlang: NA
---
# Getting Started With the Microsoft Cognitive Toolkit

This section explains the different ways to [install CNTK from precompiled binaries](./Setup-CNTK-on-your-machine.md). If you want to build CNTK from source code, the required steps are described [here](./Setup-CNTK-from-source.md).

Once you have installed CNTK, we recommend you try the [tutorial](./Tutorials.md) or [example](./Examples.md) sections of the documentation.

You may use CNTK via the Microsoft Azure Virtual Machine offering (Windows and Linux) or install it as a Docker container (Linux). See the corresponding sections:

* [CNTK on Azure](./CNTK-on-Azure.md)
* [CNTK Docker Containers](./CNTK-Docker-Containers.md)

If you want to use Keras together with CNTK, you will find the required information [here](./Using-CNTK-with-Keras.md).
--------------------------------------------------------------------------------
/ThirdPartyNotices:
--------------------------------------------------------------------------------
## Legal Notices
Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode), see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the [LICENSE-CODE](LICENSE-CODE) file.

Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.
--------------------------------------------------------------------------------
/articles/ReleaseNotes/CNTK_1_7_2_Release_Notes.md:
--------------------------------------------------------------------------------
---
title: CNTK_1_7_2_Release_Notes
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 09/30/2016
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: NA
---

# CNTK_1_7_2_Release_Notes

This is a summary of what's new in the CNTK 1.7.2 binary release.

**This is a hotfix release. It affects all users of the Model Evaluation Library.**

If you are NOT using the Model Evaluation Library, you may skip this release.
If you ARE using the Model Evaluation Library, we **strongly recommend** installing version 1.7.2 instead of **any** previous version you might be using.

## Detailed Description

We have identified a bug in the **CNTK Model Evaluation Library**. In particular, it relates to the ```EvalExtended``` interface and could cause unexpected exceptions and crashes due to memory access violations.

The bug was present in **all** previous versions of the CNTK Model Evaluation Library.

The bug mainly affects Windows users, but we strongly recommend that users of **both Windows and Linux** install v.1.7.2.
--------------------------------------------------------------------------------
/.openpublishing.publish.config.json:
--------------------------------------------------------------------------------
{
  "docsets_to_publish": [
    {
      "docset_name": "cntk",
      "build_source_folder": "articles",
      "build_output_subfolder": "cntk",
      "locale": "en-us",
      "monikers": [],
      "open_to_public_contributors": true,
      "type_mapping": {
        "Conceptual": "Content",
        "ManagedReference": "Content",
        "RestApi": "Content"
      },
      "build_entry_point": "docs",
      "template_folder": "_themes",
      "version": 0
    }
  ],
  "notification_subscribers": [
    "lyn@microsoft.com"
  ],
  "branches_to_filter": [],
  "skip_source_output_uploading": false,
  "need_preview_pull_request": true,
  "contribution_branch_mappings": {},
  "dependent_repositories": [
    {
      "path_to_root": "_themes",
      "url": "https://github.com/Microsoft/templates.docs.msft",
      "branch": "master",
      "branch_mapping": {}
    }
  ],
  "branch_target_mapping": {},
  "need_generate_pdf_url_template": false,
  "need_generate_pdf": false,
  "need_generate_intellisense": false
}
--------------------------------------------------------------------------------
/articles/CNTK-1bit-SGD-License.md:
--------------------------------------------------------------------------------
---
title: CNTK 1bit-SGD license
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 01/25/2017
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: NA
---

# CNTK 1bit-SGD license

We offer two licensing options for Microsoft 1-bit Stochastic Gradient Descent for the Microsoft Cognitive Toolkit (CNTK 1bit-SGD):

**[CNTK 1bit-SGD General License](https://github.com/CNTK-components/CNTK1bitSGD/blob/master/LICENSE-GENERAL.md)**
This license is applicable to any organization or user and explains the rights of usage of CNTK 1bit-SGD, including the scenarios of commercial applications and services. This license does not provide any rights for the modification of CNTK 1bit-SGD. If you and/or your organization would like to contribute to CNTK 1bit-SGD, please use the CNTK 1bit-SGD Non-Commercial Usage License (see below).

**[CNTK 1bit-SGD Non-Commercial Usage License](https://github.com/CNTK-components/CNTK1bitSGD/blob/master/LICENSE-NON-COMMERCIAL.md)**
This license is valid for non-commercial usage of CNTK only. It provides more flexible usage rights and specifies the terms for contributing to the CNTK 1bit-SGD component.
--------------------------------------------------------------------------------
/articles/How-To-Overview.md:
--------------------------------------------------------------------------------
---
title: How To
author: mx-iao
ms.author: minxia
ms.date: 12/04/2017
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: python
---

# How To

This how-to section features answers to commonly asked questions about programming in CNTK. The questions covered here are specific in scope, and the answers are fairly concise. For more detailed and comprehensive guides to CNTK, please refer to the [Tutorials & Examples](/cognitive-toolkit/tutorials) and the content under the Train/Develop and Evaluate/Deploy sections.
15 | 16 | ## Covered topics 17 | * [Express things](/cognitive-toolkit/How-do-I-Express-Things-In-Python) 18 | * [Train models](/cognitive-toolkit/How-do-I-Train-models-in-Python) 19 | * [Evaluate models](/cognitive-toolkit/How-do-I-Evaluate-models-in-Python) 20 | * [Adapt models](/cognitive-toolkit/How-do-I-Adapt-models-in-Python) 21 | * [Read things](/cognitive-toolkit/How-do-I-Read-Things-in-Python) 22 | * [Deal with errors](/cognitive-toolkit/How-do-I-Deal-with-Errors-in-Python) 23 | 24 | For how-to guides in BrainScript, refer to the [How To](/cognitive-toolkit/How-do-I-Express-Things-in-BrainScript) section under BrainScript. 25 | -------------------------------------------------------------------------------- /articles/Unary-Operations.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Unary operations 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 08/15/2016 6 | ms.topic: conceptual 7 | ms.service: Cognitive-services 8 | ms.devlang: NA 9 | --- 10 | 11 | # Unary operations 12 | 13 | Common unary elementwise functions and operations. 14 | 15 | Abs (x) 16 | Ceil (x) 17 | Cosine (x) 18 | Clip (x, minValue, maxValue) 19 | Exp (x) 20 | Floor (x) 21 | Log (x) 22 | Negate (x) 23 | -x 24 | BS.Boolean.Not (b) 25 | !b 26 | Reciprocal (x) 27 | Round (x) 28 | Sin (x) 29 | Sqrt (x) 30 | 31 | ## Parameters 32 | 33 | * `x`: argument to apply the function or operation to 34 | 35 | Sparse values are currently not supported. 36 | 37 | `Clip():` 38 | 39 | * `minValue`: inputs less than this value are replaced by this value 40 | * `maxValue`: likewise, inputs greater than this value are replaced by this value 41 | 42 | ## Return Value 43 | 44 | Result of applying the function or operation. The output's tensor shape is the same as the input's. 45 | 46 | ## Description 47 | 48 | These are common functions and unary operations. 49 | 50 | Note that `BS.Boolean.Not()` expects inputs to be 0 or 1. 51 | 52 | ## Example 53 | 54 | MySoftmax (z) = Exp (LogSoftmax (z)) 55 | -------------------------------------------------------------------------------- /articles/Compatible-dimensions-in-reader-and-config.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Compatible dimensions in reader and config 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 10/04/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Compatible dimensions in reader and config 13 | 14 | There are two typical cases where you might see this error message. The underlying reason is that the reader config and the training config do not agree. 15 | 16 | The first typical case occurs when you modify a working config file to change the dimensionality of the output (e.g., taking a config that works for a 7-class problem and making a new one that should work for a 5-class problem); you may then encounter the error message 17 | ``` 18 | NotifyFunctionValuesMBSizeModified: labels InputValue operation had its row dimension 7 changed by the reader to 5 19 | ``` 20 | This is most likely due to updating only the network definition part of your config file but not your reader definition.
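The same consistency requirement applies when you feed data through the Python API instead of a BrainScript reader block. Below is a minimal, hedged sketch; the file name, stream names, and dimensions are illustrative assumptions for this example, not values from the error message above:

```python
# Sketch: reader stream dims and network input dims must agree (Python API).
# 'train.ctf' and the dims 784/5 are illustrative assumptions.
import cntk as C
from cntk.io import MinibatchSource, CTFDeserializer, StreamDef, StreamDefs

source = MinibatchSource(CTFDeserializer('train.ctf', StreamDefs(
    features=StreamDef(field='features', shape=784, is_sparse=False),
    labels=StreamDef(field='labels', shape=5, is_sparse=False))))

# The model must declare its inputs with the same dimensions as the streams;
# declaring labels with dimension 7 here would reproduce the same mismatch.
features = C.input_variable(784)
labels = C.input_variable(5)
```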
21 | Please double-check: 22 | 23 | * all [`Input{}`](./Inputs.md) nodes have corresponding streams in the reader section with the same name 24 | * the dimensions match 25 | 26 | A second case is when you provide a wrong hint to the reader by saying 27 | ``` 28 | featureNodes = (z) 29 | ``` 30 | and `z` is not an actual input but an expression. 31 | -------------------------------------------------------------------------------- /articles/CNTK-Library-Evaluation-Overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Library Evaluation Overview 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 06/22/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK Library Evaluation Overview 13 | 14 | The [CNTK Library API](./CNTK-Library-API.md) allows evaluating both the CNTK model-v1 and model-v2 [formats](./CNTK-model-format.md). The CNTK Library can be consumed on Windows and Linux by C++, Python, C#, Java, and other .NET languages. 15 | 16 | New features of the CNTK Library for Evaluation include: 17 | * Support for both CPU and GPU devices. 18 | * [Support for multiple evaluation requests in parallel](./CNTK-Eval-Examples.md#examples-for-evaluating-multiple-requests-in-parallel). 19 | * Optimized memory usage through sharing the parameters of the same model between multiple threads. This significantly reduces memory usage when running evaluation in a service environment. 20 | 21 | The following pages provide detailed information about model evaluation using the CNTK Library. 22 | * [CNTK-library evaluation on Windows](./CNTK-Library-Evaluation-on-Windows.md) 23 | * [CNTK-library evaluation on Linux](./CNTK-Library-Evaluation-on-Linux.md) 24 | * [CNTK-library evaluation with Python](./How-do-I-Evaluate-models-in-Python.md) 25 | * [NuGet Packages](./NuGet-Package.md) 26 | * [Evaluation in Azure](./Evaluate-a-model-in-an-Azure-WebApi.md) 27 | 28 | -------------------------------------------------------------------------------- /articles/Feedback-Channels.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Feedback Channels 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/12/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Feedback Channels 13 | 14 | As a development team we want to be present on many channels. We ask CNTK users to utilize the appropriate channel to post issues or questions regarding the Microsoft Cognitive Toolkit. We hope to avoid answering similar questions on different channels. This page describes how the Microsoft team intends to use the different channels. 15 | 16 | ## GitHub issues 17 | 18 | [GitHub issues](https://github.com/Microsoft/CNTK/issues) should be used for bugs and feature requests. We will therefore redirect general 'how-to' questions to [Stack Overflow](http://stackoverflow.com/questions/tagged/cntk). Release announcements, roadmaps, etc. will be posted on GitHub. 19 | 20 | ## Stack Overflow 21 | 22 | [Stack Overflow](https://stackoverflow.com/questions/tagged/cntk) is the best place for general questions. Many community members are already on Stack Overflow and provide great answers to 'how-to' questions. Please use the [`cntk` tag](https://stackoverflow.com/questions/tagged/cntk) when posting your CNTK-related question.
23 | 24 | ## Gitter Chat 25 | 26 | [Gitter Chat](https://gitter.im/Microsoft/CNTK) can be used for general questions and shorter programming questions. 27 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_Beta_3_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_Beta_3_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 11/14/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_2_0_Beta_3_Release_Notes 13 | 14 | This is a summary of new features delivered with the Beta 3 release of CNTK V.2.0. 15 | 16 | ## Integration with NVIDIA NCCL 17 | 18 | This release introduces the integration of CNTK and [NVIDIA NCCL](https://github.com/NVIDIA/nccl), a stand-alone library of standard collective communication routines, such as all-gather, reduce, broadcast, etc., that have been optimized to achieve high bandwidth over PCIe. 19 | 20 | The NCCL library supports Linux systems only. NCCL can be enabled by those who build CNTK from the source code. See how to enable NCCL [here](../Setup-CNTK-on-Linux.md#cudnn). 21 | 22 | ## CNTK Evaluation library. NuGet package 23 | 24 | This release features a NuGet package for the CNTK Evaluation library. **IMPORTANT!** In the Visual Studio *Manage NuGet Packages* window, change the default option *Stable Only* to *Include Prerelease*. Otherwise the package will not be visible. The package version should be ```2.0-beta3```. 25 | 26 | ## Stability Improvements 27 | We continue fine-tuning new features and fixing bugs - thank you once again for the constant feedback. You are not required to adapt your code or models to take advantage of these improvements. 28 | -------------------------------------------------------------------------------- /articles/Developing-and-Testing.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Developing and testing 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 01/19/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: cpp 10 | --- 11 | 12 | # Developing and testing 13 | 14 | Once you have set up the CNTK sources on your machine, you should follow the CNTK development practices to change CNTK on your machine and contribute to the project. 15 | 16 | If you make modifications to the code, please follow the coding guidelines and make sure tests are still passing with your changes in place. 17 | 18 | Our source code doesn't include TAB characters for formatting. If you use Visual Studio as your editor, go to Tools|Options|Text Editor|C/C++|Tabs and make sure Indenting is set to Smart, Tab Size and Indent Size are set to 4, and the "Insert Spaces" option is selected. You can also load the CppCntk.vssettings file (in the CNTK home directory), which contains settings for the C++ editor. To import/export the settings, use the Tools -> Import and Export Settings... Visual Studio menu option. 19 | 20 | Please do not auto-format existing code (Edit -> Advanced -> Format Document/Ctrl+E,D). 21 | 22 | For code you write, you can use CLANG-FORMAT (see http://clang.llvm.org/) to perform an initial formatting step.
A format specification for CLANG-FORMAT (version 3.7) is available in the root of the CNTK repository (.clang-format). 23 | 24 | [Coding Guidelines](./Coding-Guidelines.md) 25 | 26 | [How to Test](./How-to-Test.md) 27 | 28 | -------------------------------------------------------------------------------- /articles/Conference-Appearances.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Conference appearances 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 11/11/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Conference appearances 13 | 14 | * [KDD -- ACM Conference on Knowledge Discovery and Data 2016](http://www.kdd.org/kdd2016) 15 | * [SIIM -- Society for Imaging Informatics in Medicine Conference on Machine Intelligence in Medical Imaging 2016](http://siim.org/page/2016CMIMI) 16 | * [GTC China 2016](http://www.gputechconf.cn/page/home.html) 17 | * [O'Reilly AI](http://conferences.oreilly.com/artificial-intelligence/ai-deep-learning-bots-ny) 18 | * Microsoft Machine Learning Data Science Summit at Ignite 19 | * [GTC Europe 2016](https://www.gputechconf.eu/Home.aspx) 20 | * [GTC Japan 2016](http://www.gputechconf.jp) 21 | * [GTC India 2016](http://www.gputechconf.in) 22 | * [China AI Conference](http://ccai.caai.cn) 23 | * [China High Performance Computing User Forum](http://www.asc-events.org/HPCUF/2016) 24 | * [China Information Hiding and Multimedia Security Workshop](http://www.cihw.org.cn) 25 | * [Asian Machine Learning Conference](http://acml-conf.org/2016) 26 | * AAAI 2017 27 | * The BIG CUP competition at WWW 2017 28 | * The HongKong-Macau Student Contest on Big Ideas 29 | * [Microsoft Ignite China](https://www.microsoft.com/china/ignite/2016/) 30 | * ODSC West 2016 31 | * NIPS 2016 32 | * CVPR 2017 33 | * [WWW 2017](./WWW-2017-Tutorial.md) 34 | * GTC 2017 35 | -------------------------------------------------------------------------------- /articles/BrainScript-Reader-block.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: BrainScript Reader Block 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 03/15/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # BrainScript Reader Block 13 | 14 | The reader block is used for all types of readers, and the `readerType` parameter determines which reader to use. Each reader implements the same IDataReader interface. Many parameters in the reader block are shared across different types of readers; some are specific to a particular reader. A simple reader block using the CNTKTextFormatReader could look like this: 15 | 16 | reader = [ 17 | readerType = "CNTKTextFormatReader" 18 | file = "$DataDir$/Train-28x28_cntk_text.txt" 19 | input = [ 20 | features = [ 21 | dim = 784 22 | format = "dense" 23 | ] 24 | labels = [ 25 | dim = 10 26 | format = "dense" 27 | ] 28 | ] 29 | ] 30 | 31 | You can explore different reader settings in the configurations of the [Examples](./Examples.md).
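For comparison, a roughly equivalent input pipeline in the CNTK Python API uses `MinibatchSource` with a `CTFDeserializer`. The sketch below mirrors the reader block above; treat it as an illustration only (the relative file path is an assumption):

```python
# Python-API sketch mirroring the BrainScript reader block above.
from cntk.io import MinibatchSource, CTFDeserializer, StreamDef, StreamDefs

reader = MinibatchSource(CTFDeserializer(
    'Train-28x28_cntk_text.txt',                                           # assumed path
    StreamDefs(
        features=StreamDef(field='features', shape=784, is_sparse=False),  # dense, dim 784
        labels=StreamDef(field='labels', shape=10, is_sparse=False))))     # dense, dim 10

mb = reader.next_minibatch(64)  # fetch one minibatch of 64 samples
```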
For details regarding specific readers, see the corresponding pages: 32 | 33 | * [CNTK Text Format Reader](./BrainScript-CNTKTextFormat-Reader.md) 34 | * [UCI Fast Reader (deprecated)](./BrainScript-UCI-Fast-Reader.md) 35 | * [HTKMLF Reader](./BrainScript-HTKMLF-Reader.md) 36 | * [LM Sequence Reader](./BrainScript-LM-Sequence-Reader.md) 37 | * [LU Sequence Reader](./BrainScript-LU-Sequence-Reader.md) 38 | -------------------------------------------------------------------------------- /articles/Update-1bit-SGD-Submodule-Location.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Update 1bit SGD Submodule Location 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 01/25/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Update 1bit SGD Submodule Location 13 | 14 | Effective January 25, 2017 CNTK [1-bit Stochastic Gradient Descent (1bit-SGD)](./Enabling-1bit-SGD.md) and [BlockMomentumSGD](./Multiple-GPUs-and-machines.md) code is moved to a new Repository in GitHub. 15 | 16 | If you cloned the CNTK Repository with [1bit-SGD enabled](./Enabling-1bit-SGD.md) *prior to January 25, 2017*, you need to update the git submodule configuration as described below. 17 | 18 | If you cloned the CNTK Repository after January 25, 2017, or if you do not use the 1bit-SGD Repository code, you may safely ignore the information in this article. 19 | 20 | ## Updating 1bit-SGD Submodule information 21 | 22 | In the instructions below we assume that the local copy of the CNTK Repository is located at: 23 | 24 | * `~/Repos/cntk` (Linux) 25 | * `c:\repos\cntk` (Windows) 26 | 27 | To update the Submodule information for 1bit-SGD, do the following: 28 | 29 | * Go to the root of your local Repository: 30 | 31 | ```cd ~/Repos/cntk``` (Linux) 32 | 33 | or 34 | 35 | ```cd c:\repos\cntk``` (Windows) 36 | 37 | * Check out the master branch and ensure it's up to date 38 | ``` 39 | git checkout master 40 | git fetch 41 | git pull 42 | ``` 43 | 44 | * Update the Submodule information: 45 | 46 | ```git submodule sync``` 47 | 48 | Your code should now be in sync with the new 1bit-SGD location at GitHub. 49 | -------------------------------------------------------------------------------- /articles/Special-Nodes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Special nodes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 09/15/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Special nodes 13 | 14 | CNTK uses *special nodes* for the automatic backpropagation updates of learnable parameters and for the proper identification of inputs. Special nodes can be specified in two different ways: 15 | * Node arrays 16 | * Special Tags 17 | If both methods are used, the values are combined. 18 | 19 | ## Node Arrays 20 | 21 | CNTK supports multiple nodes of each type, so the values are arrays. However, in many cases there is only a single node for each node type. The array syntax (parentheses) must be used when setting special nodes, even if there is only one element. If more than one element is included, the entries are comma-separated and surrounded by parentheses.
For example: 22 | ``` 23 | FeatureNodes=(features) 24 | LabelNodes=(labels) 25 | CriterionNodes=(CrossEntropy) 26 | EvalNodes=(ErrPredictOutputNodes, Plus2) 27 | OutputNodes=(ScaledLogLikelihood) 28 | ``` 29 | 30 | ## Special Tags 31 | 32 | You can use a special *Optional Parameter* named ```tag``` to easily identify special values in the network. For example: 33 | ``` 34 | F1=Input(SDim, tag=feature) 35 | L1=Input(LDim, tag=label) 36 | ``` 37 | 38 | The table below contains the acceptable tag names and their correspondence to the respective node types: 39 | 40 | |Tag name |Meaning | 41 | |:----------|:-------| 42 | |feature |feature input | 43 | |label |label input | 44 | |criterion |criterion node, top level node | 45 | |eval |evaluation node | 46 | |output |output node | 47 | 48 | -------------------------------------------------------------------------------- /articles/Debugging-CNTK-source-code-in-Visual-Studio.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Debugging CNTK source code in Visual Studio 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 08/18/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Debugging CNTK source code in Visual Studio 13 | 14 | To debug CNTK's mainline CPU source code, follow the steps below. To additionally debug the **CUDA code for GPUs** in CNTK, follow the steps below first, and then click [here](./Debugging-CNTKs-GPU-source-code-in-Visual-Studio.md) for further steps. 15 | 16 | Launch Visual Studio and load the cntk.sln solution. 17 | In the **Solution Explorer**, find the CNTK project and make sure it is the startup project (it should be bolded). If it is not, right-click on the project in the **Solution Explorer** and choose **Set as StartUp Project**. 18 | 19 | In the **Solution Explorer**, find the CNTK project and right-click on **Properties**. From the **Properties** dialog, click on **Configuration Properties** and then on **Debugging**. 20 | 21 | Assuming you have your CNTK source at `C:\src` and you want to debug with the config file `lr_bs.cntk` from the tutorial 22 | [HelloWorld-LogisticRegression](https://github.com/Microsoft/CNTK/tree/release/latest/Tutorials/HelloWorld-LogisticRegression), set the 23 | Command Arguments as follows: 24 | 25 | `configFile=lr_bs.cntk deviceId=auto makeMode=false` 26 | 27 | In addition, set the Working Directory field as follows: 28 | 29 | `C:/src/cntk/Tutorials/HelloWorld-LogisticRegression` 30 | 31 | If you have your CNTK source somewhere else or you want to debug a different config file, make the appropriate changes. 32 | 33 | Set your build target to "Debug". 34 | 35 | Build and run. 36 | 37 | -------------------------------------------------------------------------------- /articles/Setup-Test-Python.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Setup Test Python 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 03/03/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: python 10 | --- 11 | 12 | # Setup Test Python 13 | 14 | ## Testing your CNTK install with Python 15 | 16 | We assume you have Python and CNTK installed on your machine, as well as the CNTK samples and tutorials.
The Python installation must be included in your PATH environment variable; if you have installed CNTK in a Python environment, you will need to activate that environment before you run the samples. 17 | 18 | - Open a standard command prompt 19 | - Activate the Python environment if necessary 20 | - Change into the sample directory 21 | 22 | ### Run an example 23 | 24 | Change into the `Tutorials/NumpyInterop` directory and run the FeedForward example: 25 | ``` 26 | cd Tutorials 27 | cd NumpyInterop 28 | python FeedForwardNet.py 29 | ``` 30 | You will see the following output on the console: 31 | ``` 32 | Minibatch[ 1- 128]: loss = 0.564038 * 3200 33 | Minibatch[ 129- 256]: loss = 0.308571 * 3200 34 | Minibatch[ 257- 384]: loss = 0.295577 * 3200 35 | Minibatch[ 385- 512]: loss = 0.270765 * 3200 36 | Minibatch[ 513- 640]: loss = 0.252143 * 3200 37 | Minibatch[ 641- 768]: loss = 0.234520 * 3200 38 | Minibatch[ 769- 896]: loss = 0.231275 * 3200 39 | Minibatch[ 897-1024]: loss = 0.215522 * 3200 40 | Finished Epoch [1]: loss = 0.296552 * 25600 41 | error rate on an unseen minibatch 0.040000 42 | ``` 43 | 44 | ### Run Jupyter notebooks 45 | 46 | CNTK contains several tutorials based on Jupyter notebooks. To use them, execute the following commands: 47 | ``` 48 | cd Tutorials 49 | jupyter notebook 50 | ``` 51 | This will start a browser with the available notebooks. 52 | 53 | -------------------------------------------------------------------------------- /articles/Variables.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Variables 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 05/18/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Variables 13 | 14 | Variables are defined in NDL when they appear on the left of an equal sign ```=```. From that point on, that variable name is associated with the value it was assigned. Variables are **immutable**; assigning new values to an existing variable is not supported. 15 | 16 | Variable names may be any alphanumeric sequence that starts with a letter. A variable can contain a matrix or scalar value. 17 | 18 | ## Reserved words 19 | 20 | Any name that is a function name is reserved and cannot be used for a variable. The special node names below are also reserved and cannot be used as variable names. 21 | ``` 22 | FeatureNodes 23 | LabelNodes 24 | CriteriaNodes 25 | EvalNodes 26 | OutputNodes 27 | ``` 28 | 29 | ## Variable names with dots 30 | 31 | When it is necessary to access a variable that is defined in a macro, it can be accessed using dot syntax. If the Macro ```FF``` is called from the following code: 32 | ``` 33 | L1 = FF(features, HDim, SDim) 34 | ``` 35 | and the Macro ```FF``` is defined as below: 36 | ``` 37 | FF(X1, W1, B1) 38 | { 39 | T=Times(W1,X1); 40 | FF=Plus(T, B1); 41 | } 42 | ``` 43 | then it is possible to get the return value of the ```Times(W1,X1)``` Function call *before* the Function ```Plus(T, B1)``` was called. To do that, one needs to get the value of the ```L1.T``` variable. 44 | 45 | Dot syntax uses variable names defined within Macro definitions; thus it is important to have all required Macro definitions available to use this technique. 46 | 47 | For nested Macros it is possible to use multi-layer dot syntax.
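For users of the CNTK Python API, a comparable way to reach an intermediate value is to name the operation and retrieve it later with `find_by_name`. This is only a loose analogue of NDL dot syntax; all names and dimensions below are illustrative assumptions:

```python
# Loose Python-API analogue of dot syntax: name intermediate nodes,
# then retrieve them from the composed function. Illustrative only.
import numpy as np
import cntk as C

x = C.input_variable(4)
W = C.parameter((3, 4), init=C.glorot_uniform())
b = C.parameter((3,), init=0)

t = C.times(W, x, name='T')        # analogous to T = Times(W1, X1)
model = C.plus(t, b, name='FF')    # analogous to FF = Plus(T, B1)

# Recover the intermediate 'T' node, much like reading L1.T in NDL.
t_fn = C.combine([model.find_by_name('T')])
print(t_fn.eval({x: np.ones(4, dtype=np.float32)}))
```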
48 | 49 | 50 | 51 | 52 | -------------------------------------------------------------------------------- /articles/Archive/EvalDLL-Evaluation-on-Linux.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: EvalDLL evaluation on Linux 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: csharp, cpp 10 | --- 11 | 12 | # EvalDLL evaluation on Linux 13 | 14 | The `EvalDll` library on Linux is provided as a C++ library. 15 | 16 | The usage pattern for evaluation is the following: 17 | 18 | 1. Get an instance of the evaluation engine, either using GetEvalF() (for the `float` data type) or GetEvalD() (for the `double` data type). 19 | 2. Load the model (or create the network) in the evaluation engine. 20 | 3. Evaluate some input against the model and obtain the corresponding output. 21 | 4. Dispose of the model when done. 22 | 23 | The evaluation library, `Cntk.Eval`, can be found under `cntk/lib` in the CNTK binary package. If you build CNTK from source code, the shared library `Cntk.Eval` is available in the `lib` folder of the build directory. 24 | 25 | Any program using the evaluation library needs to link the libraries `Cntk.Eval` and `Cntk.Math`, e.g. 26 | ``` 27 | -lCntk.Eval- -lCntk.Math- 28 | ``` 29 | and set the appropriate search path for these libraries. Please use the same build flavor (Debug/Release) and [the same compiler version](../Setup-CNTK-on-Linux.md#c-compiler) as the one used to create the libraries. The [CPPEvalClient](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/LegacyEvalDll/CPPEvalClient) in the CNTK source code illustrates the usage pattern on Linux. The [Makefile](https://github.com/Microsoft/CNTK/tree/release/latest/Makefile) contains the target EVAL_SAMPLE_CLIENT showing how to build the example. 30 | 31 | For details on the C++ API provided by EvalDll, please refer to the [EvalDll C++ API](./EvalDll-Native-API.md) page. 32 | -------------------------------------------------------------------------------- /articles/If-Operation.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: If operation 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 08/27/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # If operation 13 | 14 | Elementwise selection of one of two inputs given a condition. 15 | 16 | BS.Boolean.If (condition, thenValue, elseValue) 17 | 18 | ## Parameters 19 | 20 | * `condition`: condition according to which element values are selected 21 | * `thenValue`: element value selected if the `condition` element is not 0 22 | * `elseValue`: element value selected if the `condition` element is 0 23 | 24 | Sparse values are currently not supported. 25 | 26 | ## Return Value 27 | 28 | A tensor of the dimension of the inputs. If any of the inputs have dimensions of 1, 29 | broadcasting is applied; in that case, the output dimension becomes the maximum over the corresponding three arguments' dimensions. 30 | 31 | ## Description 32 | 33 | `If()` selects elements from two inputs based on a condition, in an elementwise fashion. 34 | For every input element where `condition` is non-0, the corresponding element from `thenValue` 35 | is chosen; and where `condition` is 0, the corresponding `elseValue` element is chosen.
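For reference, the CNTK Python API exposes the same elementwise selection as `element_select`. A minimal sketch, with values chosen purely for illustration:

```python
# element_select: the Python-API counterpart of BS.Boolean.If (sketch only).
import numpy as np
import cntk as C

cond     = C.constant(np.array([1, 0, 1], dtype=np.float32))
then_val = C.constant(np.array([10, 20, 30], dtype=np.float32))
else_val = C.constant(np.array([-1, -2, -3], dtype=np.float32))

# Where cond is non-zero take then_val, elsewhere take else_val.
result = C.element_select(cond, then_val, else_val)
print(result.eval())  # expected: [10. -2. 30.]
```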
36 | 37 | This function supports broadcasting. For example, it is possible that the condition 38 | is a scalar, or that one of the inputs is a constant tensor without a time dimension. 39 | 40 | ## Example 41 | The elementwise maximum of two inputs can be computed as a combination of [`Greater()`](./Binary-Operations.md) and `If()`: 42 | 43 | MyElementwiseMax (a, b) = BS.Boolean.If (Greater (a, b), a, b) 44 | 45 | This also works with broadcasting. For example, the linear rectifier can be written with this using 46 | a scalar constant as the second input: 47 | 48 | MyReLU (x) = MyElementwiseMax (x, Constant(0)) 
-------------------------------------------------------------------------------- /articles/CNTK-Library-Evaluation-on-UWP.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Model Evaluation on Universal Windows Platform 3 | author: zhouwangzw 4 | ms.author: zhouwang 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: csharp, cpp 10 | --- 11 | 12 | # Model Evaluation on Universal Windows Platform 13 | 14 | [!INCLUDE[versionadded-2.1-block](includes/versionadded-2.1-block.md)] 15 | 16 | Starting from v2.1, CNTK supports loading and evaluating models in applications running on the Universal Windows Platform (UWP). The CNTK Library for UWP is provided as a NuGet package, `CNTK.UWP.CPUOnly`, for model evaluation on CPU devices using C++. There is also an example showing how to use the CNTK UWP library in C# via a C++/CX wrapper. 17 | 18 | ## Installation 19 | 20 | The NuGet package `CNTK.UWP.CPUOnly` can be installed through the NuGet Package Manager inside Visual Studio by searching for "CNTK", or downloaded directly from nuget.org: 21 | [https://www.nuget.org/packages/CNTK.UWP.CPUOnly](https://www.nuget.org/packages/CNTK.UWP.CPUOnly). 22 | 23 | Please refer to the [NuGet Package](./NuGet-Package.md) page for details on how to install CNTK Library NuGet packages. 24 | 25 | ## Programming Guide 26 | 27 | The CNTK UWP library currently provides C++ APIs for model evaluation on UWP. It implements the same APIs as the CNTK evaluation library for desktop applications. Please refer to the [model evaluation for Windows overview page](./CNTK-Library-Evaluation-on-Windows.md#using-c) and the [C++ Eval API](./CNTK-Library-Native-Eval-Interface.md) for details. 28 | 29 | ## Examples 30 | 31 | The [UWPImageRecognition folder](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/UWPImageRecognition) contains an example of using the CNTK UWP library for model evaluation. It also shows how to use the library in C# via a C++/CX wrapper. 
32 | -------------------------------------------------------------------------------- /articles/Blog.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK blog 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/18/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK blog 13 | 14 | * [Recurrent Neural Networks with CNTK and applications to the world of ranking](https://www.microsoft.com/en-us/cognitive-toolkit/blog/2016/08/recurrent-neural-networks-with-cntk-and-applications-to-the-world-of-ranking/) 15 | * [GRUs on CNTK with BrainScript](https://www.microsoft.com/en-us/cognitive-toolkit/blog/2016/08/grus-on-cntk-with-brainscript/) 16 | * [Deep Crossing on CNTK](https://www.microsoft.com/en-us/cognitive-toolkit/blog/2016/08/deep-crossing-on-cntk/) 17 | * [Sequence to Sequence – Deep Recurrent Neural Networks in CNTK – Part 1](https://www.microsoft.com/en-us/cognitive-toolkit/blog/2016/11/sequence-to-sequence-deep-recurrent-neural-networks-in-cntk-part-1/) 18 | * [Sequence to Sequence – Deep Recurrent Neural Networks in CNTK – Part 2 – Machine Translation](https://www.microsoft.com/en-us/cognitive-toolkit/blog/2016/11/sequence-to-sequence-deep-recurrent-neural-networks-in-cntk-part-2-machine-translation/) 19 | * [Announcing CNTK 2.0 Beta](https://blogs.microsoft.com/next/2016/10/25/microsoft-releases-beta-microsoft-cognitive-toolkit-deep-learning-advances/#sm.000004x6ch0g27el9uqyncbkkl6ct) 20 | * [Exploring Machine Learning with CNTK](https://msdn.microsoft.com/en-us/magazine/mt791798.aspx) 21 | * [Building a Deep Handwritten Digits Classifier using Microsoft Cognitive Toolkit CNTK and CNN](https://medium.com/@tuzzer/building-a-deep-handwritten-digits-classifier-using-microsoft-cognitive-toolkit-6ae966caec69#.tveftb27o) 22 | * [Installing CNTK v2.0 beta](https://jamesmccaffrey.wordpress.com/2017/02/25/installing-cntk-v2-0-beta/) 23 | * [Video object tagging tool](https://www.microsoft.com/reallifecode/2017/04/10/end-end-object-detection-box/) 24 | * [General object detection with CNTK](https://www.microsoft.com/reallifecode/2017/04/10/object-detection-using-cntk/) 25 | -------------------------------------------------------------------------------- /articles/ROIPooling.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: ROIPooling 3 | author: pkranen 4 | ms.author: pkranen 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # ROIPooling 13 | ``` 14 | ROIPooling (input, 15 |             ROIs, 16 |             {ROI output shape}, 17 |             spatialScale = {spatial scale wrt image (float)}) 18 | ``` 19 | The ROI-pooling operation computes a new matrix by selecting the maximum (max pooling) value in the pooling input for each region of interest (ROI). 20 | The regions of interest are given as the second input to the operator as the top left and bottom right corners of the regions in absolute pixels of the original image. 21 | The pooling input is computed per ROI by projecting the coordinates onto the input feature map (first input to the operator) and considering all overlapping positions. 22 | The projection uses the 'spatial scale', which is the size ratio of the input feature map over the input image size.
23 | The spatial scale can be computed by multiplying all strides that occur before the ROI-pooling and taking the inverse; 24 | e.g., a network that has four pooling layers with stride two would have a spatial scale of 1/16. 25 | The output shape's width and height are determined by the third argument; the output depth (number of filters) is the same as the input depth. 26 | 27 | * `input` - pooling input for the entire image 28 | * `ROIs` - ROI coordinates as absolute pixel coordinates `(x_min, y_min, x_max, y_max)` 29 | * `{roi output shape}` - dimensions (width, height) of the ROI output, as a BrainScript vector, e.g. `(4:4)`. 30 | * `spatialScale` - the scale of the operand relative to the original image size. The default is 1/16, which matches, for example, the AlexNet and VGG16 networks. 31 | 32 | > [!NOTE] 33 | > [!INCLUDE[versionchanged-2.1](includes/versionchanged-2.1.md)] 34 | > In CNTK 2.1 the spatial scale parameter was added, and the coordinates of the ROIs are now passed as absolute pixel values rather than relative values as in previous versions. 35 | -------------------------------------------------------------------------------- /articles/CNTK-model-format.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK model format 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript,cpp, csharp,dotnet,python 10 | --- 11 | 12 | # CNTK model format 13 | 14 | CNTK allows users to save a model into a file for future use. This can be done by 15 | * specifying "modelPath" in the config file when using BrainScript/cntk.exe, or 16 | * [save()](https://cntk.ai/pythondocs/cntk.ops.functions.html#cntk.ops.functions.Function.save) in Python, or 17 | * [Save()](https://github.com/Microsoft/CNTK/tree/release/latest/Source/CNTKv2LibraryDll/API/CNTKLibrary.h) in C++ when using the [CNTK Library API](./CNTK-Library-API.md). 18 | 19 | There are two different file formats to store the model in. 20 | * **The model-v1 format**. This format was originally used prior to the CNTK2 version. A model is stored in the model-v1 format when it is saved by BrainScript/cntk.exe. 21 | 22 | * **The model-v2 format**. With CNTK2, a Protobuf-based format is introduced, which is now known as the model-v2 format. A model is saved in this format only when using the [CNTK Library API](./CNTK-Library-API.md) 23 | * by [save()](https://cntk.ai/pythondocs/cntk.ops.functions.html#cntk.ops.functions.Function.save) in Python, or 24 | * by [Save()](https://github.com/Microsoft/CNTK/tree/release/latest/Source/CNTKv2LibraryDll/API/CNTKLibrary.h) in C++. 25 | 26 | The following table gives an overview of which model format is created and consumed by which CNTK binary.
27 | 28 | | | Model Creation | Model Evaluation | Language Support | 29 | |:----------------------|:---------------|:-----------------|:-----------------| 30 | | model-v1 format | cntk.exe | [cntk.exe](./CNTK-Evaluation-using-cntk.exe.md), [EvalDll](./EvalDll-Evaluation-Overview.md), [CNTK Library](./CNTK-Library-Evaluation-Overview.md) | BrainScript, C++, C#/.NET | 31 | | model-v2 format | CNTK Library | [CNTK Library](./CNTK-Library-Evaluation-Overview.md) | C++, C#/.NET, Java, Python | 32 | 33 | -------------------------------------------------------------------------------- /articles/Using-CNTK-with-BrainScript.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Using CNTK with BrainScript 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 03/15/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # Using CNTK with BrainScript 13 | 14 | ## BrainScript Basics 15 | 16 | * [Overview](./BrainScript-Config-File-Overview.md) 17 | * [Getting Started](./CNTK-usage-overview.md) 18 | * [Train, Test, Eval](./BrainScript-Train-Test-Eval.md) 19 | * [Top-level Configurations](./BrainScript-Top-level-configurations.md) 20 | * [SimpleNetworkBuilder](./Simple-Network-Builder.md) 21 | * [Performance Profiler](./BrainScript-and-Python-Performance-Profiler.md) 22 | 23 | ## Network Definition with BrainScript 24 | 25 | * [BrainScriptNetworkBuilder](./BrainScript-Network-Builder.md) 26 | * [BrainScript Walkthrough](./BrainScript-Basic-Concepts.md) 27 | * [Expressions](./BrainScript-Expressions.md) 28 | * [Defining Functions](./BrainScript-Functions.md) 29 | * [Model Editing](./BrainScript-Model-Editing.md) 30 | * [Full Function Reference](./BrainScript-Full-Function-Reference.md) 31 | * [Layers Reference](./BrainScript-Layers-Reference.md) 32 | * [Activation Functions](./BrainScript-Activation-Functions.md) 33 | 34 | ## Specifying Readers in BrainScript 35 | 36 | * [Reader Configuration](./BrainScript-Reader-Block.md) 37 | * [CNTK Text Format Reader](./BrainScript-CNTKTextFormat-Reader.md) 38 | * [HTKMLF Reader](./BrainScript-HTKMLF-Reader.md) 39 | * [LM Sequence Reader](./BrainScript-LM-Sequence-Reader.md) 40 | * [LU Sequence Reader](./BrainScript-LU-Sequence-Reader.md) 41 | * [Image Reader](./BrainScript-Image-Reader.md) 42 | * [CNTK Binary Reader](./BrainScript-CNTKBinary-Reader.md) 43 | * [Understanding and Extending Readers](./BrainScript-and-Python---Understanding-and-Extending-Readers.md) 44 | * [UCI Fast Reader (deprecated)](./BrainScript-UCI-Fast-Reader.md) 45 | 46 | ## Setting up Learners with BrainScript 47 | 48 | * [SGD Configuration](./BrainScript-SGD-Block.md) 49 | 50 | -------------------------------------------------------------------------------- /articles/CNTK-on-Azure.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK on Azure 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 03/22/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK on Azure 13 | 14 | CNTK is available through [Microsoft Azure](http://azure.microsoft.com/) Virtual Machine offerings.
15 | 16 | ## Host a CNTK Model through Web APIs in Azure 17 | [Deploy CNTK Azure Web API](./Evaluate-a-model-in-an-Azure-WebApi.md) 18 | 19 | ## Training with CNTK on Azure with Windows 20 | 21 | * [Data Science Virtual Machine - Windows Server 2016](http://aka.ms/dsvm/win2016) 22 | * [Data Science Virtual Machine (Windows) Documentation](https://aka.ms/dsvm/win2016/docs) 23 | 24 | 25 | ## Training with CNTK on Azure with Linux 26 | 27 | * [Data Science Virtual Machine - Linux (Ubuntu)](http://aka.ms/dsvm/ubuntu) 28 | * [Data Science Virtual Machine (Linux) Documentation](http://aka.ms/dsvm/ubuntu/docs) 29 | * We recommend getting familiar with [Azure Batch Shipyard](https://github.com/Azure/batch-shipyard), which can provision Dockerized CNTK on [Azure Batch](https://azure.microsoft.com/en-us/services/batch/) compute nodes with the ability to execute on CPU, GPU, multinode CPU, and multinode GPU with simple configuration files. CNTK recipes can be found [here](https://github.com/Azure/batch-shipyard/tree/master/recipes). 30 | 31 | **NOTE:** The Data Science Virtual Machine (DSVM) can be deployed on either CPU-only or GPU-based VM instances. The NVIDIA drivers, CUDA, and cuDNN are built into the DSVM. 32 | 33 | ### Installing GPU Drivers for Azure 34 | If you are not using the DSVM, you also need to ensure that you have NVIDIA drivers installed on your Virtual Machine. See OS-specific details in the following articles: 35 | * [Windows](https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-n-series-driver-setup?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) 36 | * [Linux](https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-n-series-driver-setup?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) 37 | -------------------------------------------------------------------------------- /articles/Deploy-Model-to-AKS.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Deploy a Model to Azure Container Service 3 | author: mx-iao 4 | ms.author: minxia 5 | ms.date: 12/04/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: 10 | --- 11 | 12 | # Deploy a Model to Azure Container Service 13 | The following resources from the [Cortana Intelligence and Machine Learning](https://blogs.technet.microsoft.com/machinelearning/) team go over tutorials for deploying CNTK models to [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/). 14 | 15 | ## Deploy Pretrained Models to Azure Container Service 16 | Tutorial: [Deployment of Pretrained Models to Azure Container Service](https://blogs.technet.microsoft.com/machinelearning/2017/05/25/deployment-of-pre-trained-models-on-azure-container-services/) 17 | | [Github](https://github.com/Azure/ACS-Deployment-Tutorial) 18 | Authors: Mathew Salvaris, Ilia Karmanov 19 | 20 | This end-to-end tutorial goes over the process of building a simple image classification web application, using a pretrained CNTK ResNet 152 model, and deploying the app via ACS.
21 | 22 | Sections: 23 | * Create a Docker image of the application 24 | * Test the application locally 25 | * Create an ACS cluster and deploy the web app 26 | * Test the web app 27 | 28 | ## Train and Serve Models using Kubernetes on Azure 29 | Tutorial: [How to Serve and Train Deep Learning Models at Scale, using Cognitive Toolkit with Kubernetes on Azure](https://blogs.technet.microsoft.com/machinelearning/2017/09/06/how-to-use-cognitive-toolkit-cntk-with-kubernetes-on-azure/) | [Github](https://github.com/weehyong/k8scntkSamples) 30 | Author: Wee Hyong Tok 31 | 32 | This tutorial explains how to deploy a Kubernetes cluster on Azure using the [ACS Engine](https://github.com/Azure/acs-engine) and how to autoscale the cluster. Once the Kubernetes cluster is up and running, the tutorial [repo](https://github.com/weehyong/k8scntkSamples) provides sample CNTK Docker images for training on the CIFAR-10 dataset and serving a pretrained ResNet model (based on the [above](#deploy-a-model-to-azure-container-service) tutorial). 33 | -------------------------------------------------------------------------------- /articles/Debugging-CNTKs-GPU-source-code-in-Visual-Studio.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Debugging CNTK's GPU source code in Visual Studio 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 11/14/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Debugging CNTK's GPU source code in Visual Studio 13 | 14 | The steps for debugging CUDA kernels: 15 | 16 | 1. Install NVIDIA Nsight following the directions from [here](http://docs.nvidia.com/gameworks/index.html#developertools/desktop/nsight/install_debug_monitor.htm%3FTocPath%3DDeveloper%2520Tools%7CDesktop%2520Developer%2520Tools%7CNVIDIA%2520Nsight%2520Visual%2520Studio%2520Edition%7CNVIDIA%2520Nsight%2520Visual%2520Studio%2520Edition%25205.2%7CInstallation%2520and%2520Setup%2520Essentials%7C_____2) 17 | 1. Follow the directions for “Local debugging”. 18 | 1. Set the environment variable NSIGHT_CUDA_DEBUGGER = 1. 19 | 1. Run Visual Studio and the Nsight monitor as administrator. 20 | 1. In Nsight Monitor->Options->CUDA, set “Use this monitor for CUDA attach” to True. You may have to restart Nsight. Run as admin again. 21 | 1. In Visual Studio, go to Nsight->Options and make sure the options match up with your options in the Nsight monitor (e.g. the ports are the same). Especially make sure ”Establish secure connection” is the same in both. 22 | 1. Right-click on the MathCUDA project in the solution explorer and go to Properties. 23 | 1. Go to Configuration Properties -> CUDA C/C++ -> Device and set Generate GPU Debug Information to Yes. 24 | 1. Go to Configuration Properties -> CUDA Linker -> General and set Generate GPU Debug Information to Yes. 25 | 1. Add your breakpoints in your kernel, rebuild CNTK, and get ready to run whatever you’re trying to debug. 26 | 1. In VS, go to Debug -> Attach to Process, set Transport to Nsight GPU Debugger, and set Qualifier to localhost. 27 | 1. Start CNTK. 28 | 1. Click refresh and find CNTK in the process list, then attach. When it hits a breakpoint, you should be able to see all of your local variables from the kernel. If you only see CUDA globals like threadIdx and blockIdx, you haven’t properly set the GPU Debug flags in the MathCUDA properties. 
29 | -------------------------------------------------------------------------------- /articles/Loading-Data-Overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Loading Data 3 | ms.author: minxia 4 | ms.date: 11/30/2017 5 | ms.topic: conceptual 6 | ms.service: Cognitive-services 7 | ms.devlang: python, brainscript 8 | --- 9 | 10 | # Loading Data 11 | 12 | This section provides information on the processes for building input data pipelines for training in CNTK. 13 | 14 | ## Python Manuals 15 | We include the following Python manuals in this section: 16 | * ['Read and Feed Data to CNTK Trainer'](https://cntk.ai/pythondocs/Manual_How_to_feed_data.html) describes the different options for feeding data into the CNTK model trainers. 17 | * ['Write a Custom Deserializer'](https://cntk.ai/pythondocs/Manual_How_to_write_a_custom_deserializer.html) goes over the steps for writing custom deserializers to load data that is in some custom format not supported by CNTK's built-in deserializers. 18 | * ['Create User Minibatch Sources'](https://cntk.ai/pythondocs/Manual_How_to_create_user_minibatch_sources.html) explains how to create a user minibatch source for cases where the data does not fit into memory and the user wants fine-grained control over minibatch creation. 19 | 20 | ## Additional Resources 21 | * For more detailed examples on reading data into CNTK in **Python**, refer to the Python [tutorials](https://cntk.ai/pythondocs/tutorials.html). Some tutorials to get started include: 22 | * [CNTK 103D](https://cntk.ai/pythondocs/CNTK_103D_MNIST_ConvolutionalNeuralNetwork.html): Reading data in the CNTK Text Format (CTF). 23 | * [CNTK 201B](https://cntk.ai/pythondocs/CNTK_201B_CIFAR-10_ImageHandsOn.html): Reading image data using the CNTK image deserializer. 24 | * [CNTK 208](https://cntk.ai/pythondocs/CNTK_208_Speech_Connectionist_Temporal_Classification.html): Reading audio data in HTK/MLF format using the HTKFeatureDeserializer/HTKMLFDeserializer. 25 | * [CNTK 200](https://cntk.ai/pythondocs/CNTK_200_GuidedTour.html): An overview of the different ways to feed data to the CNTK trainer. 26 | * For some additional information on CNTK Readers, refer to these articles on the [CTF Reader](/cognitive-toolkit/BrainScript-CNTKTextFormat-Reader) and [CNTK Binary Reader](/cognitive-toolkit/BrainScript-CNTKBinary-Reader). 27 | -------------------------------------------------------------------------------- /articles/How-do-I-Deal-with-Errors-in-Python.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: How do I deal with errors in Python 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/05/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: python 10 | --- 11 | 12 | # How do I deal with errors in Python 13 | 14 | * [Debug a Python notebook](#debug-a-python-notebook)? 15 | * [Get things to work correctly when I take the last element of a sequence](#get-things-to-work-correctly-when-i-take-the-last-element-of-a-sequence)? 16 | 17 | ## Debug a Python notebook 18 | 19 | You will need to install ipdb via 20 | ``` 21 | pip install ipdb 22 | ``` 23 | Make sure the above installation happens for the correct Python environment (the one used to run the notebook). 24 | Afterwards, in your notebook you should add this line to import the debugger. 
25 | ```python 26 | from IPython.core.debugger import Tracer 27 | ``` 28 | Finally, at the point where you want to debug, add this line: 29 | ```python 30 | Tracer()() 31 | ``` 32 | When you run the cell with `Tracer()()`, execution will drop into ipdb at the next line. Then you can start inspecting things with the usual [pdb](https://docs.python.org/2/library/pdb.html) commands. Don't forget to quit the debugger when you are done! 33 | 34 | ## Get things to work correctly when I take the last element of a sequence 35 | 36 | There are two issues here. Suppose that you have a sequence `seq`. You want to take the last element and further process it with a few layers. 37 | ```python 38 | last = C.sequence.last(seq) 39 | z = a_few_layers(last) 40 | ``` 41 | Now you want to plug `z` into your loss, but you get an error about dynamic axes. Input variables are created with some default dynamic axes, but `last` (and `z`) has its dynamic axes determined by `sequence.last`. So one possibility is to define the label variable at this point and have it copy its dynamic axes from `z`. Typically: 42 | ```python 43 | y = C.input_variable(z.shape, dynamic_axes=z.dynamic_axes) 44 | loss = C.squared_error(y, z) 45 | ``` 46 | Finally, when training, the data labels must have a dynamic axis of size one, i.e., each element in the batch should have a shape of (1,)+y.shape. 47 | -------------------------------------------------------------------------------- /articles/CNTK-CSharp-Examples.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Model evaluation examples 3 | author: liqun 4 | ms.author: liqun 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: csharp 10 | --- 11 | 12 | # CNTK C#/.NET API training examples 13 | 14 | ## Overview 15 | The CNTK repository contains [examples](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/TrainingCSharp) that use the CNTK C# API to build, train, and evaluate CNTK neural network models. 16 | 17 | #### [LogisticRegression](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/TrainingCSharp/Common/LogisticRegression.cs) 18 | A hello-world example to train and evaluate a logistic regression model using the C#/.NET API. See [CNTK 101: Logistic Regression and ML Primer](https://github.com/Microsoft/CNTK/tree/release/latest/Tutorials/CNTK_101_LogisticRegression.ipynb) for more details. 19 | #### [MNISTClassifier](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/TrainingCSharp/Common/MNISTClassifier.cs) 20 | This class shows how to build and train a classifier for handwriting data (MNIST). 21 | #### [CifarResNetClassifier](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/TrainingCSharp/Common/CifarResNetClassifier.cs) 22 | This class shows how to do image classification using ResNet. 23 | The model being built is a lite version of [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385). See the [Python Tutorials](https://github.com/Microsoft/CNTK/tree/release/latest/Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb) for more details. 24 | #### [TransferLearning](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/TrainingCSharp/Common/TransferLearning.cs) 25 | This class demonstrates transfer learning using a pretrained ResNet model.
26 | See the [Python Tutorials](https://github.com/Microsoft/CNTK/tree/release/latest/Tutorials/CNTK_301_Image_Recognition_with_Deep_Transfer_Learning.ipynb) for more details. 27 | #### [LSTMSequenceClassifier](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/TrainingCSharp/Common/LSTMSequenceClassifier.cs) 28 | This class shows how to build a recurrent neural network model from the ground up and how to train the model. 29 | 30 | 31 | -------------------------------------------------------------------------------- /articles/CNTK-Evaluation-Overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Evaluation Overview 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK Evaluation Overview 13 | 14 | Once you have trained a model, you can use the CNTK Eval library to evaluate the model in your own application. CNTK supports model evaluation from C++, Python, C#/.NET, and Java. Starting from v2.1, CNTK also supports the Universal Windows Platform (UWP). 15 | 16 | Features of CNTK Evaluation include: 17 | * Support for both CPU and GPU devices. 18 | * [Support for multiple evaluation requests in parallel](./CNTK-Eval-Examples.md#examples-for-evaluating-multiple-requests-in-parallel). 19 | * Optimized memory usage through sharing the parameters of the same model between multiple threads. This significantly reduces memory usage when running evaluation in a service environment. 20 | 21 | The following pages provide detailed information about model evaluation using the CNTK Library. 22 | * [CNTK-library evaluation on Windows](./CNTK-Library-Evaluation-on-Windows.md) 23 | * [CNTK-library evaluation on Linux](./CNTK-Library-Evaluation-on-Linux.md) 24 | * [CNTK-library evaluation with Python](./How-do-I-Evaluate-models-in-Python.md) 25 | * [NuGet-Packages](./NuGet-Package.md) 26 | * [Evaluation in Azure](./Evaluate-a-model-in-an-Azure-WebApi.md) 27 | * [Evaluation on Universal Windows Platform (UWP)](./CNTK-Library-Evaluation-on-UWP.md) 28 | 29 | ### Legacy Applications using CNTK 1.0 30 | 31 | Prior to the CNTK 2.0 version, the CNTK EvalDLL was used to evaluate models trained using cntk.exe with BrainScript. The EvalDLL 32 | is still supported, but works only for models created by cntk.exe with BrainScript. It cannot be used to evaluate models created by CNTK 2.0 or later using Python. We strongly recommend using the latest CNTK libraries for evaluation, as they support both model formats and provide more features. 33 | 34 | For more details on the different model formats, refer to the [CNTK model format](./CNTK-model-format.md) page. 35 | For legacy applications that use the EvalDll interface, please refer to the [CNTK EvalDLL Overview](./EvalDll-Evaluation-Overview.md) page. 
-------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_Beta_5_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_Beta_5_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 11/25/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_2_0_Beta_5_Release_Notes 13 | 14 | This is a summary of new features delivered with the Beta 5 release of CNTK V.2.0.
15 | 16 | ## Windows: The Microsoft Cognitive Toolkit and NVIDIA Cuda 8 17 | 18 | With this Beta release, the Windows version of the Cognitive Toolkit supports NVIDIA Cuda 8. The binary beta 5 packages are built using the NVIDIA Cuda 8 toolkit. If you are a developer building CNTK on your own system, you can continue using Cuda 7.5. This will change soon; please read the details [here](../CNTK-move-to-Cuda8.md). 19 | 20 | ## Linux: The Microsoft Cognitive Toolkit and NVIDIA Cuda 8 21 | 22 | In this Beta, the Microsoft Cognitive Toolkit supports only NVIDIA Cuda 7.5 on Linux. We expect to move the Linux version of the toolkit to Cuda 8 shortly. 23 | 24 | ## CNTK Evaluation library. NuGet package 25 | 26 | A new NuGet package with the latest eval DLL (managed and native) is [available](../NuGet-Package.md). The `EvaluateRgbImage` function in the [managed Eval API](../EvalDll-Managed-API.md) improves the speed of image evaluation. 27 | 28 | **IMPORTANT!** In the Visual Studio *Manage NuGet Packages* window, change the default option *Stable Only* to *Include Prerelease*. Otherwise the package will not be visible. The package version should be ```2.0-beta5```. 29 | 30 | ## Performance 31 | 32 | The memory footprint of the CNTK Text Format deserializer is reduced significantly. This improves performance and scalability. 33 | 34 | ## Stability Improvements 35 | 36 | We continue fine-tuning features and fixing bugs - thank you once again for the constant feedback! 37 | 38 | ## Documentation and Tutorials 39 | 40 | We improve and update the documentation continuously. We are also adding new tutorials as quickly as possible. If you have a tutorial you want to contribute, please let us know! 41 | 42 | The latest tutorial we added covers [Sequence-to-Sequence](https://github.com/Microsoft/CNTK/blob/v2.0.beta5.0/Tutorials/CNTK_204_Sequence_To_Sequence.ipynb). 
-------------------------------------------------------------------------------- /articles/index.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: The Microsoft Cognitive Toolkit 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 01/22/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: landing-page 8 | 9 | ms.service: Cognitive-services 10 | ms.devlang: NA 11 | --- 12 | 13 | # The Microsoft Cognitive Toolkit 14 | 15 | The Microsoft Cognitive Toolkit (CNTK) is an open-source toolkit for commercial-grade distributed deep learning. It describes neural networks as a series of computational steps via a directed graph. CNTK allows the user to easily realize and combine popular model types such as feed-forward DNNs, convolutional neural networks (CNNs) and recurrent neural networks (RNNs/LSTMs). CNTK implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. 16 | 17 | [This video](https://youtu.be/9gDDO5ldT-4) provides a high-level overview of the toolkit. In addition, Microsoft offers an introductory course on deep learning with CNTK, [Deep Learning Explained](https://www.edx.org/course/deep-learning-explained-microsoft-dat236x-0). 18 | 19 | The latest release of CNTK is [2.4](./ReleaseNotes/CNTK_2_4_Release_Notes.md). 20 | 21 | CNTK can be included as a library in your Python, C#, or C++ programs, or used as a standalone machine-learning tool through its own model description language (BrainScript).
In addition, you can use the CNTK model evaluation functionality from your Java programs. 22 | 23 | CNTK supports 64-bit Linux or 64-bit Windows operating systems. To install, you can either choose precompiled binary packages or compile the toolkit from the source provided on [GitHub](https://github.com/Microsoft/CNTK). 24 | 25 |
26 | 27 | 28 | CNTK is also one of the first deep-learning toolkits to support the Open Neural Network Exchange [ONNX](https://onnx.ai) format, an open-source shared model representation for framework interoperability and shared optimization. Co-developed by Microsoft and supported by many others, ONNX allows developers to move models between frameworks such as CNTK, Caffe2, MXNet, and PyTorch. 29 | 30 | 31 | The latest release of CNTK supports ONNX v1.0. 32 | 33 | Learn more about ONNX [here](https://github.com/onnx/onnx). 34 | 35 | -------------------------------------------------------------------------------- /articles/Post-Batch-Normalization-Statistics.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Post Batch Normalization Statistics 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 09/09/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Post Batch Normalization Statistics 13 | 14 | Post batch normalization statistics (PBN) is the CNTK approach to evaluating the population mean and variance of batch normalization for use in inference (see the [original paper](https://arxiv.org/pdf/1502.03167v3.pdf)). 15 | 16 | ## Why is PBN needed? 17 | 18 | Because the parameters keep updating during training, the accuracy of the mean and variance estimated from running statistics is affected as well. PBN, however, is calculated after training, when all parameters are frozen, so it is not influenced by these factors. For ResNet, PBN achieves better performance than running statistics. 19 | 20 | ## Characteristics of PBN 21 | 22 | PBN has two characteristics that ensure its performance: 23 | * `based on frozen parameters`: PBN begins after training, meaning all other parameters have been fixed. The mean and variance of each batch normalization layer are the only undecided parameters in the network. This ensures that the statistics are not influenced by parameter updates. 24 | * `calculate layer by layer`: PBN uses a bottom-to-top updating method: the mean and variance of a batch normalization layer are not calculated until the means and variances of all lower batch normalization layers are fixed. This ensures that, for the layer currently being calculated, all network parameters below it are fixed, including the earlier layers' means and variances. 25 | 26 | ## Flow 27 | 28 | PBN executes the following steps: 29 | * `1. Load the trained model` 30 | * `2. Find all the batch normalization layers` 31 | * `3. Sort them in evaluation order` 32 | * `4. From the first layer to the last layer, do:` 33 | * `5. Set the mean and variance of the current batch normalization layer to 0` 34 | * `6. Run a partial forward pass from the bottom to the current node n times, and average the resulting means and variances.` 35 | * `7. Set the mean and variance of the current batch normalization layer to the averaged values` 36 | * `8. 
Move to the next layer and jump to step 5` 37 | 38 | ## PBN usage 39 | 40 | The usage of PBN can be found in the [BNStat command](./Top-level-commands.md#bnstat-command). 41 | 42 | 43 | -------------------------------------------------------------------------------- /articles/Test-Configurations.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Production Test Configurations 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 08/07/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | # CNTK Production Test Configurations 12 | 13 | This page lists the configurations that are used for automated testing of the Microsoft Cognitive Toolkit. It doesn't distinguish explicitly between the build and test environments; for a detailed overview of the individual requirements, consult the [development environment setup page](./Setup-development-environment.md) for a build environment, and the [setup from precompiled binaries page](./Setup-CNTK-on-your-machine.md) for the test environment. 14 | 15 | The presented set of product versions is not restrictive, i.e., CNTK may well work in many other configurations. 16 | 17 | ## Windows 18 | 19 | ### Operating System 20 | 21 | * Windows 8.1 Pro (64 bit) 22 | * Windows 10 (64 bit) 23 | * Windows Server 2012 R2 Standard and later 24 | 25 | ### Compiler 26 | 27 | * Visual Studio Enterprise 2017 28 | 29 | ### MPI 30 | 31 | * Microsoft MPI v. 7.0 32 | 33 | ### Math Library 34 | 35 | * Intel® MKLML library 36 | 37 | ### NVIDIA Components 38 | 39 | * NVIDIA CUDA 9.0 40 | * NVIDIA cuDNN 7.0 for CUDA 9.0 41 | * NVIDIA CUB 1.7.4 42 | 43 | ### OpenCV 44 | 45 | * OpenCV v.3.1.0 46 | 47 | ### zlib Library 48 | 49 | * zlib v.1.2.8 50 | 51 | ### libzip Library 52 | 53 | * libzip v.1.1.3 54 | 55 | ### Java 56 | 57 | * Java SE Development Kit 8 v1.8.0\_131, 64-bit 58 | 59 | ### Anaconda Python for Windows 60 | 61 | * Anaconda3 4.1.1 (64 bit) 62 | 63 | ## Linux 64 | 65 | ### Operating System 66 | 67 | * Ubuntu 16.04 LTS (64 bit) 68 | 69 | ### Compiler 70 | 71 | * GNU C++ 5.4.0 72 | 73 | ### MPI 74 | 75 | * Open MPI v. 1.10.7 76 | 77 | ### Math Library 78 | 79 | * Intel® MKLML library 80 | 81 | ### NVIDIA Components 82 | 83 | * NVIDIA CUDA 9.0 84 | * NVIDIA cuDNN 7.0 for CUDA 9.0 85 | * NVIDIA CUB 1.7.4 86 | 87 | ### OpenCV 88 | 89 | * OpenCV v.3.1.0 90 | 91 | ### zlib Library 92 | 93 | * zlib v.1.2.8 94 | 95 | ### libzip Library 96 | 97 | * libzip v.1.1.2 98 | 99 | ### Java 100 | 101 | * OpenJDK 7, 64-bit 102 | 103 | ### Anaconda Python for Linux 104 | 105 | * Anaconda3 4.1.1 (64 bit) -------------------------------------------------------------------------------- /articles/archive.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Archive Page 3 | author: wolfma61 4 | ms.author: wolfma 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: archive 8 | ms.service: cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK Archive Page 13 | 14 | This page lists documentation pages that have been moved out of the main documentation flow. They might contain additional information that is already presented in a different location, or information that is only relevant to interim releases (beta versions) of the product.
15 | 16 | * [Breaking Changes between CNTK2-beta15 and CNTK2-beta16](./Breaking-changes-in-Master-compared-to-beta15.md) 17 | * [CNTK moves to Cuda 8](./CNTK-move-to-Cuda8.md) 18 | * [Migrate from VisualStudio 2013 to VisualStudio 2015](./Setup-Migrate-VS13-to-VS15.md) 19 | * [News 2016](./News-2016.md) 20 | * [Shared library naming convention](./CNTK-Shared-Libraries-Naming-Format.md) 21 | * [Build the Protobuf library for CNTK with VisualStudio 2015](./Setup-BuildProtobuf-VS15.md) 22 | * [Build the Zlib/LibZip library for CNTK with VisualStudio 2015](./Setup-BuildZlib-VS15.md) 23 | 24 | ## Model Evaluation using EvalDll 25 | 26 | Prior to CNTK 2.0, the CNTK EvalDLL was used to evaluate models trained using cntk.exe with BrainScript. The EvalDLL 27 | is still supported, but works only for models created by cntk.exe with BrainScript. It cannot be used to evaluate models created 28 | by CNTK 2.0 or later using Python. We strongly recommend using the latest CNTK libraries for evaluation, as they support both model formats and provide more features. 29 | 30 | More information about using EvalDll can be found in the following pages. 31 | 32 | * [Model Evaluation using cntk.exe](./CNTK-Evaluation-using-cntk.exe.md) 33 | * [EvalDLL Evaluation Overview](./EvalDLL-Evaluation-Overview.md) 34 | * [EvalDLL evaluation on Windows](./EvalDLL-Evaluation-on-Windows.md) 35 | * [EvalDLL evaluation on Linux](./EvalDLL-Evaluation-on-Linux.md) 36 | * [EvalDLL evaluation in Azure](./Evaluate-a-model-in-an-Azure-WebApi-using-EvalDll.md) 37 | * [EvalDLL C# API](./EvalDll-Managed-API.md) 38 | * [EvalDLL C++ API](./EvalDll-Native-API.md) 39 | * [EvalDLL Evaluate Hidden Layers](./CNTK-Evaluate-Hidden-Layers.md) 40 | * [EvalDLL Evaluate Image Transforms](./CNTK-Evaluate-Image-Transforms.md) 41 | * [EvalDLL Evaluate Multiple Models](./CNTK-Evaluate-Multiple-Models.md) 42 | 43 | -------------------------------------------------------------------------------- /articles/Plot-command.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Plot command 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 01/20/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: reference 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # Plot command 13 | 14 | This command loads a given computation network and describes the network topology (usually a DAG) using the DOT (http://www.graphviz.org/doc/info/lang.html) language. It can also optionally call a third-party tool (http://www.graphviz.org/Documentation/dotguide.pdf) to render the network topology. Note that many DOT rendering tools are optimized for DAGs, and the rendered graph becomes quite messy if the network structure is not a DAG. A non-DAG structure is usually caused by the use of delay nodes. In this case, this command will treat all the delay nodes as leaf nodes. 15 | 16 | The related parameters of this command are: 17 | * `modelPath`: the path to the computation network. 18 | 19 | * `outputDOTFile`: the path to the output DOT file. If the user does not specify it, `${modelPath}.dot` is assumed. 20 | 21 | Optionally, if users want to convert the DOT description to a figure, they need to specify the following parameters: 22 | 23 | * `outputFile`: the path to the output figure. 24 | 25 | * `renderCmd`: a string that indicates how the third-party tool is used to convert a DOT file to a figure. 
For example, if graphviz is used, the RenderCmd can be written as `RenderCmd="d:\Tools\graphviz\bin\dot.exe -Tjpg <IN> -o <OUT>"` where the plot command will substitute `<IN>` with the `outputDOTFile` command parameter and `<OUT>` with the `outputFile` command parameter; `-Tjpg` indicates that the JPG file format is selected. 26 | 27 | Here is a complete example: 28 | 29 | command = topoplot 30 | topoplot = [ 31 | action = "plot" 32 | modelPath = "train\lstm.model.0" 33 | 34 | # outputdotFile specifies the dot file to output 35 | # if the user does not specify this, it will be ${modelPath}.dot 36 | outputdotFile = "train\lstm.model.dot" 37 | 38 | # outputFile specifies the rendered image 39 | outputFile="train\lstm.model.jpg" 40 | 41 | # if RenderCmd is specified, CNTK will call the plot command after replacing 42 | # <IN> with ${outputdotFile} and <OUT> with ${outputfile} 43 | renderCmd="d:\Tools\graphviz\bin\dot.exe -Tjpg <IN> -o <OUT>" 44 | ] 45 | 46 | -------------------------------------------------------------------------------- /articles/Records.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Records 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 05/18/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Records 13 | 14 | Optional parameters are a feature that allows additional parameters to be specified for NDL Functions and NDL Macros. While optional parameters can be specified for any Function or Macro, they are limited to constant values, and the underlying Function must support the passed optional parameters, or there is no effect on the network. When used with a Macro, the Macro will have a local variable defined that matches the optional parameter name and value. 15 | 16 | ### Parameter initialization 17 | 18 | In the network definition given in NDL Basic concepts the following parameter matrices are defined: 19 | ``` 20 | B0=Parameter(HDim) 21 | W0=Parameter(HDim, SDim) 22 | ``` 23 | where ```W0``` is the weight matrix and ```B0``` is the bias matrix. 24 | 25 | Optional parameters can be used to specify how the functions will be initialized. For instance: 26 | ``` 27 | B0=Parameter(HDim, init=zero) 28 | W0=Parameter(HDim, SDim, init=uniform) 29 | ``` 30 | Here the bias matrix will be zero-initialized, and the weight matrix will be initialized with uniform random numbers. 31 | 32 | ### Tagging special values 33 | 34 | As an alternative to providing an array of special nodes that are used as features, labels, criteria, etc., optional parameters can be used. So instead of using: 35 | ``` 36 | FeatureNodes = (features) 37 | LabelNodes = (labels) 38 | CriterionNodes = (CrossEntropy) 39 | EvalNodes = (ErrPredict) 40 | OutputNodes = (Plus2) 41 | ``` 42 | the same network can be defined as 43 | ``` 44 | SDim = 784 45 | HDim = 256 46 | LDim = 10 47 | 48 | features = Input(SDim, tag="feature") 49 | labels = Input(LDim, tag="label") 50 | 51 | L1 = RBFF(features, HDim, SDim) 52 | L2 = RBFF(L1, HDim, HDim) 53 | L3 = RBFF(L2, HDim, HDim) 54 | CE = SMBFF(L3, LDim, HDim, labels, tag="criterion") 55 | Err = ErrorPrediction(labels, CE.F, tag="eval") 56 | 57 | OutputNodes = (CE.F) 58 | ``` 59 | This approach avoids adding elements to the node arrays. Instead it sets the ```tag``` optional parameter on the functions or macros that return the value that fits into the specified category. 
In this case, since the output node is actually computed inside a macro, we must specify it explicitly. 60 | -------------------------------------------------------------------------------- /articles/Archive/EvalDll-Examples.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: EvalDll C++/C# Examples 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript,cpp, csharp,dotnet,python 10 | --- 11 | 12 | # EvalDll C++/C# Examples 13 | 14 | Prior to CNTK 2.0, the CNTK EvalDLL was used to evaluate models trained using cntk.exe with BrainScript. The EvalDLL 15 | is still supported, but works only for models created by cntk.exe with BrainScript. It cannot be used to evaluate models created 16 | by CNTK 2.0 or later using Python. We strongly recommend using the latest CNTK libraries for evaluation, as they support both model formats and provide more features. 17 | 18 | For legacy applications that are still using EvalDll, the [EvalClients.sln](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/LegacyEvalDll/EvalClients.sln) solution contains the following examples: 19 | - [`CPPEvalClient`](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/LegacyEvalDll/CPPEvalClient): this sample uses the C++ `EvalDll`. 20 | - [`CPPEvalExtendedClient`](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/LegacyEvalDll/CPPEvalExtendedClient): this sample uses the C++ extended Eval interface in `EvalDll` to evaluate an RNN model. 21 | - [`CSEvalClient`](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/LegacyEvalDll/CSEvalClient): this sample uses the C# `EvalDll` (only for Windows). It uses the [CNTK EvalDll NuGet Package](https://www.nuget.org/packages/Microsoft.Research.CNTK.CpuEval-mkl/). 22 | 23 | On Windows, the solution file [EvalClients.sln](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/LegacyEvalDll/EvalClients.sln) is used to build and run the samples. Please note: 24 | - You need Visual Studio 2015 Update 3 to use these samples. 25 | - The samples should be built for the 64-bit target platform. Otherwise some issues arise when calling the library. Please also refer to the [Troubleshoot CNTK](../Troubleshoot-CNTK.md) page for more information. 26 | - After a successful build, the executable is saved under the $(SolutionDir)..\..\$(Platform)\$(ProjectName).$(Configuration)\ folder, e.g. ..\..\X64\CPPEvalClient.Release\CppEvalClient.exe. 27 | 28 | On Linux, please refer to the `Makefile` for building the samples. The target names EVAL_CLIENT and EVAL_EXTENDED_CLIENT are used to build these projects. 29 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_Beta_4_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_Beta_4_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 11/22/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_2_0_Beta_4_Release_Notes 13 | 14 | This is a summary of new features delivered with the Beta 4 release of CNTK V.2.0. 15 | 16 | ## ASGD/Hogwild! 
training using Microsoft’s Parameter Server (Project Multiverso) 17 | 18 | This release introduces Asynchronous Stochastic Gradient Descent (ASGD)/Hogwild! training parallelization support using Microsoft’s Parameter Server ([Project Multiverso](https://github.com/Microsoft/multiverso)). 19 | 20 | ## CNTK Python API. Distributed Scenarios support 21 | 22 | This release adds support for distributed scenarios. See the sections on distributed scenarios in the [ConvNet](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Image/Classification/ConvNet/Python/README.md) and [ResNet](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Image/Classification/ResNet/Python/README.md) examples. (All examples are also available as part of the CNTK binary packages.) 23 | 24 | ## Memory compression 25 | 26 | This release introduces *memory compression*, the ability to trade off memory usage against compute. See how to enable the feature on the [Top level configurations](../BrainScript-Top-level-configurations.md#forcedeterministicalgorithms) page (```hyperCompressMemory``` variable). 27 | 28 | ## Reorganizing location of Examples and Tutorials 29 | 30 | We continue re-organizing and re-arranging CNTK Examples and Tutorials. Please be aware that, as part of the process, we are gradually removing outdated Python examples. 31 | 32 | ## CNTK Docker image with 1bit-SGD support 33 | 34 | You may now compile and install CNTK with 1bit-SGD support as a Docker container. Thus all three CNTK configurations may be implemented as Docker containers. See more on [CNTK Docker containers here](https://github.com/Microsoft/CNTK/tree/release/latest/Tools/docker/README.md). 35 | 36 | ## CNTK Evaluation library 37 | 38 | **Please note that in this release there will be NO update of the CNTK NuGet package.** The latest package is available in CNTK V.2.0 Beta 3. 39 | 40 | 41 | ## Stability Improvements 42 | 43 | We continue fine-tuning new features and fixing bugs - thank you once again for the constant feedback. You are not required to adapt your code or models to take advantage of these improvements. 44 | -------------------------------------------------------------------------------- /articles/Linux-Environment-Variables.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Linux Environment Variables 3 | author: wolfma61 4 | ms.author: wolfma 5 | ms.date: 07/12/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | # Linux Environment Variables 12 | 13 | If you are building the Microsoft Cognitive Toolkit on your own machine, the `configure` and `Makefile` scripts support a limited set of installation paths for all dependent components. In addition, the build process and the Microsoft Cognitive Toolkit use environment variables to locate components. Add the following environment variables as required to your current session and your .bashrc profile (prepending the new path, to ensure that the component required by CNTK is used as opposed to a default version available through the OS). 14 | 15 | This page lists the environment variables that are used by the CNTK build process and CNTK itself to find required components. It also lists the *preferred* location for these components. The preferred locations mirror the configuration of our internal automated build and test machines. 
The preferred location is also the location used in the documentation to describe the installation process. 16 | 17 | |Environment Variable | Preferred Location | 18 | |:--------|:------------| 19 | | **[OpenMPI](./Setup-CNTK-on-Linux.md#open-mpi) (required)** | 20 | | PATH | /usr/local/mpi/bin 21 | | LD_LIBRARY_PATH | /usr/local/mpi/lib 22 | | | 23 | | **[LIBZIP](./Setup-CNTK-on-Linux.md#libzip) (required)** | 24 | | LD_LIBRARY_PATH | /usr/local/lib 25 | | | 26 | | **[CUDA 9](./Setup-CNTK-on-Linux.md#cuda-9) (required)** | 27 | | PATH | /usr/local/cuda-9.0/bin 28 | | LD_LIBRARY_PATH | /usr/local/cuda-9.0/lib64 29 | | | 30 | | **[cuDNN](./Setup-CNTK-on-Linux.md#cudnn)** | 31 | | LD_LIBRARY_PATH | /usr/local/cudnn-7.0/cuda/lib64 | 32 | 33 | 34 | 35 | 36 | # Additional Environment Variables 37 | 38 | There are additional environment variables that can influence the compilation process: 39 | 40 | |Environment Variable | | 41 | |:------------|:-------------| 42 | |CNTK_CUDA_CODEGEN_DEBUG CNTK_CUDA_CODEGEN_RELEASE | With these environment variables you define the NVIDIA compiler target architectures. For example, setting a variable to `-gencode arch=compute_52,code=\"sm_52,compute_52\"` will only build level 5.2 compatible cubin and PTX information. For detailed information about this, refer to the NVIDIA compiler documentation. 43 | 44 | **More information** 45 | 46 | * [Setup CNTK on your machine](./Setup-CNTK-on-your-machine.md) 47 | -------------------------------------------------------------------------------- /articles/Dropout.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Dropout 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 08/15/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Dropout 13 | 14 | Dropout function. 15 | 16 | Dropout (x) 17 | 18 | ## Parameters 19 | 20 | * `x`: the input to apply the dropout function to 21 | 22 | Note: the dropout rate is not a parameter to this function, but is instead specified in the `SGD` section. 23 | 24 | ## Return Value 25 | 26 | `Dropout()` will return the result of the dropout operation applied to the input. 27 | The result has the same tensor dimensions as the input. 28 | 29 | ## Description 30 | 31 | The `Dropout()` operation randomly selects elements of the input with a given probability called the *dropout rate*, 32 | and sets them to 0. 33 | This has been shown to improve the generalizability of models. 34 | 35 | In CNTK's implementation, 36 | the remaining values that are not set to 0 will instead be multiplied by (1 / (1 - dropout rate)). 37 | This way, the model parameters learned with dropout are directly applicable in inference. 38 | (If this were not done, the user would have to manually scale them before inference.) 39 | 40 | To enable dropout in your training, you also 41 | need to add a parameter `dropoutRate` to the `SGD` section to define the dropout rate. 42 | This is done in the `SGD` section, instead of as a parameter to `Dropout()` itself, 43 | to allow starting a training run without dropout and then enabling it after a few epochs, 44 | which is a common scenario. 45 | For this, the `dropoutRate` is specified as a vector, where each 46 | value is for a specific epoch. 47 | 48 | When running inference, the `Dropout()` operation passes its input unmodified (it is a no-op). 
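To make the rescaling concrete, here is a minimal NumPy sketch of the inverted-dropout scheme described above (an illustration only, not CNTK's internal implementation):

```python
import numpy as np

# Inverted dropout: zero elements with probability `rate`, scale survivors
# by 1 / (1 - rate) so the expected activation matches the no-dropout case.
def dropout(x, rate, rng=np.random.default_rng(0)):
    mask = rng.random(x.shape) >= rate   # keep each element with prob. (1 - rate)
    return x * mask / (1.0 - rate)       # scale the surviving elements

x = np.ones((1000,), dtype=np.float32)
y = dropout(x, rate=0.5)
print(y.mean())   # close to 1.0 in expectation, so inference needs no extra scaling
```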
49 | 50 | ## Example 51 | 52 | The following is a simple convolutional network with a dropout layer towards the end: 53 | 54 | features = Input{...} 55 | c = ConvolutionalLayer {32, (5:5), activation=ReLU} (features) 56 | p = MaxPoolingLayer {(3:3), stride = (2:2)} (c) 57 | h = DenseLayer {64, activation = ReLU} (p) 58 | d = Dropout (h) 59 | z = LinearLayer {10} (d) 60 | 61 | In addition, you need a corresponding entry in the `SGD` section. 62 | The following example specifies no dropout for the first 3 epochs, 63 | and then continues with a dropout rate of 50%. 64 | For convenience, this example uses the asterisk (`*`) syntax to denote repetition: 65 | 66 | SGD = { 67 | ... 68 | dropoutRate = 0*3:0.5 69 | ... 70 | } 71 | -------------------------------------------------------------------------------- /articles/Setup-CNTK-Python-Tools-For-Windows.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Setup CNTK Python Tools For Windows 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: python 10 | --- 11 | 12 | # Setup CNTK Python Tools For Windows 13 | 14 | If you want to use Python Tools for Visual Studio (PTVS): 15 | * First determine your path to Visual Studio (like: "c:\Program Files (x86)\Microsoft Visual Studio 14.0"). Take that path and create an environment variable named MYVSPATH: `set MYVSPATH=thePathToVSDescribedAbove`. 16 | * If you have a source install, make sure you have set PYTHONPATH from [here](./Setup-CNTK-on-Windows.md#pythonpath). 17 | * Next, get the path to your CNTK installation: 18 | * If you have a binary install (take a look in c:\local): for example, if you installed CNTK 2.4, you would have a path like: c:\local\CNTK-2-4-Windows-64bit-GPU. Take that path and create an environment variable named MYCNTKPATH: `set MYCNTKPATH=thePathToCNTKDescribedAbove`. 19 | * If you have a source install, find the path just above where you cloned. That is just above the cntk dir. Take that path and create an environment variable named MYCNTKPATH: `set MYCNTKPATH=thePathToCNTKDescribedAbove`. 20 | * Next, set up your environment with `%MYVSPATH%\vc\vcvarsall.bat amd64`. 21 | * Next, depending on your install type: 22 | * If you have a binary install, update your PATH environment with `set PATH=%MYCNTKPATH%\cntk\cntk;%PATH%`. 23 | * If you have a source install and built Release, do `set PATH=%MYCNTKPATH%\cntk\x64\Release;%PATH%`. If you built something other than Release, specify that instead. 24 | * Then open Visual Studio with `%MYVSPATH%\Common7\IDE\devenv.exe` 25 | * In VS, go to Tools -> Python Tools -> Python Environments and create a new environment (by clicking on the "+Custom" button). 26 | * Select Configure from the dropdown menu and set the prefix path to the environment dir inside Anaconda. If you did a binary install, this path is likely to be: 27 | `C:\local\Anaconda3-4.1.1-Windows-x86_64\envs\cntk-py35\`. 28 | * Afterwards, click Auto Detect and the rest of the entries will be filled out automatically. 29 | * In Visual Studio, create a new Empty Python project 30 | * Add in your Python file(s). 
31 | * If you have a source install, to get IntelliSense, add in the full Python lib dir as a part of your project at: %MYCNTKPATH%\CNTK\bindings\python 32 | 33 | **Links** 34 | * [Setup CNTK on Windows](./Setup-CNTK-on-Windows.md) 35 | * [Examples](./Examples.md) 36 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_Beta_1_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_Beta_1_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 11/16/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_2_0_Beta_1_Release_Notes 13 | 14 | With this release we start delivering CNTK V2 - a major upgrade of the Microsoft Cognitive Toolkit (formerly the Microsoft Computational Network Toolkit). 15 | 16 | This is a summary of new features delivered with the first Beta release of CNTK V.2.0. 17 | 18 | ## CNTK as a Library. Python and C++ API 19 | 20 | From now on you can use CNTK not only by running the CNTK executable and programming networks with BrainScript, but also as a software library. 21 | 22 | The following interfaces are supported: 23 | 24 | * Python 25 | * C++ 26 | 27 | Read more about the CNTK Library API [here](../CNTK-Library-API.md). 28 | A detailed description of the Python API can be found in the [CNTK Python API Documentation](https://cntk.ai/pythondocs/). 29 | 30 | ## Python Examples and Tutorials (Jupyter Notebooks) 31 | 32 | We have prepared a set of Python Examples and Tutorials (the latter implemented as Jupyter Notebooks). You will find all the information at these locations: 33 | 34 | * [Python Examples](https://cntk.ai/pythondocs/examples.html) 35 | * [Python Tutorials (Jupyter Notebooks)](https://cntk.ai/pythondocs/tutorials.html) 36 | 37 | ## Protocol Buffers serialization 38 | 39 | CNTK now supports Google Protocol Buffers serialization for model and checkpoint saving. 40 | 41 | ## New automated installation procedures 42 | 43 | With this release we introduce automated environment configuration and installation of all CNTK prerequisites through the script-driven binary install option. 44 | 45 | See more on the new installation options [here](../Setup-CNTK-on-your-machine.md). 46 | 47 | ## Fast R-CNN algorithm 48 | 49 | CNTK now supports the [object recognition using Fast R-CNN](../Object-Detection-using-Fast-R-CNN.md) algorithm. 50 | 51 | 52 | ## CNTK Evaluation library 53 | 54 | **Please note that in this release there will be NO update of the CNTK NuGet package.** We will update the NuGet package in forthcoming releases. 55 | 56 | The following features have been added to the CNTK Evaluation library: 57 | * C++ evaluation now uses the CNTK API. Examples are available under Examples\Evaluation\CPPEvalV2Client. 58 | * Support for multiple threads running evaluation requests in parallel. 59 | * Support for sharing model parameters with the CNTK API. 60 | * Support for evaluation on a GPU device with the CNTK API. 
61 | -------------------------------------------------------------------------------- /articles/Setup-CNTK-on-your-machine.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Setup CNTK on your machine 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 12/04/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: python, brainscript 10 | --- 11 | 12 | # Setup CNTK on your machine 13 | 14 | The Microsoft Cognitive Toolkit (CNTK) supports both 64-bit Windows and 64-bit Linux platforms. Upon completing the installation, you can [test your installation from Python](./Setup-Test-Python.md) or try the [tutorials](./Tutorials.md) or [examples](./Examples.md) section of the documentation. 15 | 16 | It is recommended you install CNTK from precompiled binaries. If you want to build CNTK from source code, the required steps are described [here](./Setup-CNTK-from-source.md). 17 | 18 | ## Install CNTK from Precompiled Binaries 19 | 20 | To install the latest precompiled binaries to your machine, follow the instructions here: 21 | 22 | |Windows | Linux | 23 | |:------------------------|:------------------------| 24 | |[Python-only installation](./Setup-Windows-Python.md)
Simple pip install of CNTK lib for use in Python | [Python-only installation](./Setup-Linux-Python.md)
Simple pip install of CNTK lib for use in Python | 25 | |[Script-driven installation](./Setup-Windows-Binary-Script.md)
Script that installs CNTK Python lib and CNTK.exe for BrainScript | [Script-driven installation](./Setup-Linux-Binary-Script.md)
Script that installs CNTK Python lib and CNTK.exe for BrainScript 26 | |[Manual installation](./Setup-Windows-Binary-Manual.md)
Manually install CNTK Python lib, CNTK.exe for BrainScript, and dependencies | [Manual installation](./Setup-Linux-Binary-Manual.md)
Manually install CNTK Python lib, CNTK.exe for BrainScript, and dependencies | 27 | | | [Docker installation](./CNTK-Docker-Containers.md) 28 | 29 | ## CNTK Versions: CPU, GPU, 1bit-SGD 30 | 31 | CNTK offers three different build versions. The CPU-only build uses the optimized Intel MKLML; MKLML is released with Intel MKL-DNN as a trimmed version of Intel MKL for MKL-DNN. The GPU implementation uses highly optimized NVIDIA libraries (such as CUB and cuDNN) and supports distributed training across multiple GPUs and multiple machines. The 1bit-SGD version is a special GPU build of CNTK that enables the MSR-developed 1bit-quantized SGD and block-momentum SGD parallel training algorithms, which allow for even faster distributed training in CNTK. Note that the 1bit-SGD package is not necessary for performing parallel training in CNTK; the GPU build will suffice. 32 | -------------------------------------------------------------------------------- /articles/How-do-I-Read-Things-in-BrainScript.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: How do I read things in BrainScript 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/12/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # How do I read things in BrainScript 13 | 14 | * [Specify multiple label streams with the HTKMLFReader](#specify-multiple-label-streams-with-the-htkmlfreader)? 15 | * [Use the built-in readers to train a network model using multiple input files](#use-built-in-readers-with-multiple-inputs)? 16 | * [Put labels and features in separate files with CNTKTextFormatReader](#put-labels-and-features-in-separate-files-with-cntktextformatreader)? 17 | 18 | ## Specify multiple label streams with the HTKMLFReader 19 | 20 | The HTKMLFReader (the reader for Master Label Files (MLF) of the Hidden Markov Model Toolkit (HTK)) 21 | can be configured to read multiple label streams. The example below is taken from 22 | [TIMIT_TrainMultiTask_ndl_deprecated.cntk](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Speech/Miscellaneous/TIMIT/config/TIMIT_TrainMultiTask_ndl_deprecated.cntk) 23 | in the Examples directory: 24 | 25 | reader = { 26 | readerType = "HTKMLFReader" 27 | ... 28 | labels = { 29 | mlfFile = "$MlfDir$/TIMIT.train.align_cistate.mlf.cntk" 30 | labelMappingFile = "$MlfDir$/TIMIT.statelist" 31 | labelDim = 183 32 | labelType = "category" 33 | } 34 | regions = { 35 | mlfFile = "$MlfDir$/TIMIT.train.align_dr.mlf.cntk" 36 | labelDim = 8 37 | labelType = "category" 38 | } 39 | } 40 | 41 | ## Use built-in readers with multiple inputs 42 | 43 | See the description at [Understanding and Extending Readers](./BrainScript-and-Python---Understanding-and-Extending-Readers.md) and look for the section describing how to "compose several data deserializers". 44 | 45 | ## Put labels and features in separate files with CNTKTextFormatReader 46 | 47 | Use the composite reader to specify the two files, one for labels and one for features, and make sure to match the sequence IDs in the labels file and the features file. 
48 | 49 | ``` 50 | reader = [ 51 | … 52 | deserializers = ( 53 | [ 54 | type = "CNTKTextFormatDeserializer" ; module = "CNTKTextFormatReader" 55 | file = "$RootDir$/features.txt" 56 | input = [ features = [...]] 57 | ]:[ 58 | type = "CNTKTextFormatDeserializer" ; module = "CNTKTextFormatReader" 59 | file = "$RootDir$/labels.txt" 60 | input = [ labels = [...]] 61 | ] 62 | ) 63 | ] 64 | ``` 65 | 66 | -------------------------------------------------------------------------------- /articles/BrainScript-epochSize-and-Python-epoch_size-in-CNTK.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: BrainScript epochSize in CNTK 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 03/10/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # BrainScript epochSize in CNTK 13 | 14 | For Python users, see [here](./Interpreting-epoch_size-and-minibatch_size_in_samples-and-MinibatchSource.next_minibatch-in-CNTK.md). 15 | 16 | The number of **label** samples (tensors along a dynamic axis) in each epoch. The `epochSize` in CNTK is the number of **label** samples after which specific additional actions are taken, including 17 | * saving a checkpoint model (training can be restarted from here) 18 | * cross-validation 19 | * learning-rate control 20 | * minibatch-scaling 21 | 22 | Note that the definition of the number of label samples is similar to the number of samples used for [minibatchSize (minibatch_size_in_samples)](./BrainScript-minibatchSize-and-Python-minibatch_size_in_samples-in-CNTK.md). The definition of `epochSize` differs from the definition of `minibatchSize` in the sense that `epochSize` counts **label** samples, not input samples. 23 | 24 | So, importantly, for sequential data, a sample is an individual item of a sequence. 25 | Hence, CNTK's `epochSize` does *not* refer to a number of *sequences*, 26 | but to the number of sequence *items* across the sequence **labels** that constitute the epoch. 27 | 28 | Equally important, it is **label** samples, not input samples, and the number of labels per sequence is not necessarily the number of input samples. It is possible, for example, to have one label per sequence and for each sequence to have many samples (in which case `epochSize` acts like the number of sequences), and it is possible to have one label per sample in a sequence, in which case `epochSize` acts exactly like `minibatchSize` in that every sample (not sequence) is counted. 29 | 30 | For smaller dataset sizes, `epochSize` is often set equal to the dataset size. In BrainScript you can specify 0 to denote that. In Python you can specify `cntk.io.INFINITELY_REPEAT` for that. In Python only, you can also set it to `cntk.io.FULL_DATA_SWEEP`, where processing will stop after one pass over the whole data set. 31 | 32 | For large datasets, you may want to guide your choice of `epochSize` by checkpointing. For example, if you want to lose at most 30 minutes of computation in case of a power outage or network glitch, you would want a checkpoint to be created about every 30 minutes (from which the training can be resumed). Choose `epochSize` to be the number of samples that take about 30 minutes to compute. 
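As a back-of-the-envelope illustration (the throughput figure below is a made-up assumption; measure your own setup):

```python
# Hypothetical sizing of epochSize for ~30-minute checkpoints.
samples_per_second = 20000                 # assumed label-sample throughput
checkpoint_interval = 30 * 60              # desired checkpoint interval in seconds
epoch_size = samples_per_second * checkpoint_interval
print(epoch_size)                          # 36000000 -> set epochSize = 36000000
```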
33 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_Beta_7_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_Beta_7_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 12/22/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_2_0_Beta_7_Release_Notes 13 | 14 | This is a summary of new features delivered with the Beta 7 release of CNTK V.2.0. 15 | 16 | ## Python API 17 | 18 | In the past, the Python API was very permissive in what it accepted as data input types. This caused a lot of confusion (e.g., non-sequence data being wrongly interpreted as sequences). Therefore, we have made the Python API stricter. In particular: 19 | 20 | * Sequences now have to be provided as single NumPy or SciPy sparse arrays. That is, it is no longer allowed to provide a sequence as a list of NumPy arrays or SciPy sparse arrays. 21 | * The helper function ```cntk_device``` has been hidden in the testing modules. For device selection, use the module ```cntk.device```. 22 | * Python default initialization is set to *normal* (```std=1```) and *uniform* (```[-1,1]```). 23 | * ```gaussian``` is renamed to ```normal```. 24 | 25 | ## New Examples and Tutorials 26 | 27 | We have prepared the following Examples and Tutorials: 28 | 29 | * [Artistic Style Transfer](https://github.com/Microsoft/CNTK/blob/v2.0.beta7.0/Tutorials/CNTK_205_Artistic_Style_Transfer.ipynb) 30 | * [GoogLeNet (Inception V3)](https://github.com/Microsoft/CNTK/tree/v2.0.beta7.0/Examples/Image/Classification/GoogLeNet) 31 | 32 | ## C++ / Backend 33 | 34 | We have made some changes and improvements in the CNTK backend: 35 | 36 | * Changed data distribution in multi-GPU training for the composite reader in frame mode 37 | * Improved performance in the composite reader to use all available CPU threads during deserialization/transformation 38 | * The random number generator and distributions were updated in many places to be the same across supported platforms (Linux and Windows) 39 | * Backward compatibility note: training (in particular initialization) may not be exactly reproducible with respect to previous versions 40 | * Implemented an optimization to elide the initial zeroing and subsequent accumulation into the gradients for nodes with just one parent/ancestor node 41 | 42 | ## CNTK Evaluation library. NuGet package 43 | 44 | A new NuGet package with the latest eval DLL (managed and native) is [available](../NuGet-Package.md). 45 | 46 | **IMPORTANT!** In the Visual Studio *Manage NuGet Packages* window, change the default option *Stable Only* to *Include Prerelease*. Otherwise the package will not be visible. The package version should be ```2.0-beta7```. 
47 | -------------------------------------------------------------------------------- /articles/How-do-I-Evaluate-models-in-Python.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: How do I evaluate models in Python 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/05/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: python 10 | --- 11 | # How do I evaluate models in Python 12 | 13 | * [Evaluate a saved convolutional network](#evaluate-a-saved-convolutional-network) 14 | * [Extract features from a specific layer using a trained model](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Image/FeatureExtraction) 15 | 16 | ## Evaluate a saved convolutional network 17 | 18 | There are a few things to consider with models trained on images. At this point, the transformations are not part of the model, so subtracting the mean has to be done manually. Another issue is that PIL loads images in a different channel order than what was used during training, so a transposition is required. 19 | 20 | Assuming that: 21 | 22 | * during training you subtracted 128 from all channels 23 | * the image you want to predict on is "foo.jpg" 24 | * you saved your model in Python using `z.save("mycnn.dnn")` 25 | 26 | then you can do the following: 27 | 28 | ```python 29 | from cntk.ops.functions import load_model 30 | from PIL import Image 31 | import numpy as np 32 | 33 | z = load_model("mycnn.dnn") 34 | rgb_image = np.asarray(Image.open("foo.jpg"), dtype=np.float32) - 128 35 | bgr_image = rgb_image[..., [2, 1, 0]] 36 | pic = np.ascontiguousarray(np.rollaxis(bgr_image, 2)) 37 | 38 | predictions = np.squeeze(z.eval({z.arguments[0]:[pic]})) 39 | top_class = np.argmax(predictions) 40 | ``` 41 | 42 | If you are loading an old model trained by NDL or BrainScript, then you will need to find the model output node as follows: 43 | 44 | ```python 45 | for index in range(len(z.outputs)): 46 | print("Index {} for output: {}.".format(index, z.outputs[index].name)) 47 | 48 | ... 49 | Index 0 for output: CE_output. 50 | Index 1 for output: Err_output. 51 | Index 2 for output: OutputNodes.z_output. 52 | ... 53 | ``` 54 | 55 | We care only about 'z_output', which has index 2. So, to get the real model output, do the following: 56 | 57 | ```python 58 | import cntk 59 | 60 | z_out = cntk.combine([z.outputs[2].owner]) 61 | predictions = np.squeeze(z_out.eval({z_out.arguments[0]:[pic]})) 62 | top_class = np.argmax(predictions) 63 | ``` 64 | 65 | The reason for the above is that in old models the training information is saved in addition to the actual model parameters. 66 | 67 | ## Extract features from a specific layer using a trained model 68 | 69 | There is an example [here](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Image/FeatureExtraction). 
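For orientation, the following is a condensed sketch of what the linked example does; the node name `"pool5"` and the input shape are placeholders that you must adapt to your own model:

```python
import numpy as np
from cntk import combine, load_model

model = load_model("mycnn.dnn")
node = model.find_by_name("pool5")   # locate the hidden layer by its name
extractor = combine([node.owner])    # re-root the graph so that layer is the output

img = np.zeros((3, 224, 224), dtype=np.float32)   # a preprocessed input (assumed shape)
features = np.squeeze(extractor.eval({extractor.arguments[0]: [img]}))
print(features.shape)
```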
70 | -------------------------------------------------------------------------------- /articles/Archive/EvalDLL-Evaluation-Overview.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: EvalDLL Evaluation Overview 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/03/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # EvalDLL Evaluation Overview 13 | 14 | The EvalDLL library provides methods to evaluate pre-trained CNTK models that are saved in the CNTK [model-v1 format](../CNTK-model-format.md). It is available in C++ (on Windows and Linux) and C# (on Windows only). 15 | 16 | * [EvalDll evaluation on Windows](./EvalDll-Evaluation-on-Windows.md) 17 | * [EvalDll evaluation on Linux](./EvalDll-Evaluation-on-Linux.md) 18 | * [EvalDll evaluation in Azure](./Evaluate-a-model-in-an-Azure-WebApi-using-EvalDll.md) 19 | 20 | ## Evaluating different data types and layers 21 | Currently the Eval library supports vectors for input and output. That means that the input vector must match the input nodes in the model (features). Some models are trained with images (e.g., CIFAR-10); however, these images are vectorized first and then fed into the network. For example, the CIFAR-10 data set is composed of small images (32 pixels by 32 pixels) of RGB values. Although each image is 3-dimensional (width, height, color), the data is vectorized into a 1-dimensional vector. It is thus important to convert the raw data to the vector format prior to evaluation. This conversion *should* be done in the same manner as when the data was fed to the network for training. 22 | 23 | Please refer to the [Evaluate Image Transforms](./CNTK-Evaluate-Image-Transforms.md) page for more information, particularly when dealing with images. 24 | 25 | Although an already trained model has a specific set of output nodes, it is sometimes desirable to obtain the values of other nodes during evaluation (e.g., hidden layers). This is possible using the programmatic interface; please refer to the [Evaluate Hidden Layers](./CNTK-Evaluate-Hidden-Layers.md) page for more information. 26 | 27 | 28 | ## Current limitations 29 | - Single-threaded evaluation. 30 | The CNTK evaluation EvalDll library, and by extension the managed EvalWrapper library, are single-threaded and not re-entrant. Concurrent evaluation of a single model instance is not supported. However, it is possible to load multiple instances of a model and evaluate each model with a single thread. This enables multiple models to be evaluated in parallel, yet each model with a single thread. 31 | - Any program that links the pre-built evaluation libraries (`Cntk.Eval` and `Cntk.Eval.Wrapper` DLLs in Windows, and `libCntk.Eval` in Linux) of the CNTK binary package should use the same compiler version as the one used to build the pre-built libraries. 32 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_1_6_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_1_6_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/29/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_1_6_Release_Notes 13 | 14 | This is a summary of what's new in the CNTK 1.6 binary release. 
Apart from many bug fixes, we have the following new features. 15 | 16 | ## "With 1bit-SGD: no" message in cntk.exe output on Windows for 1bit-SGD edition 17 | 18 | The 1bit-SGD Windows edition of CNTK v.1.6 produces the message "With 1bit-SGD: no" in the "Build info" section of the output (usually the first section of the console output). The easiest way to produce this output is to call cntk.exe from the Windows command prompt without supplying any additional parameters. 19 | 20 | IMPACT: There is NO impact on CNTK 1bit-SGD functionality. That means that the 1bit-SGD edition should function as expected with all 1bit-SGD and related functionality enabled. 21 | 22 | The reason for the message is a misconfiguration in the CNTK internal build system. It affects only the console output described above and does NOT affect any other functionality. The wrong output message will be fixed in the next release of CNTK. 23 | 24 | ## MKL Support 25 | 26 | The Intel Math Kernel Library (MKL) is now the main math library for CNTK. Starting from v1.6, CNTK binary releases are MKL-based. 27 | 28 | You can still compile CNTK code with ACML, but please be aware that we plan to phase out support for it. 29 | 30 | ## CNTK Model Evaluation library 31 | 32 | V.1.6 features the CNTK evaluator for both Windows and Linux. 33 | 34 | For Windows, you can get CNTK Eval as a NuGet package or directly from the binary distribution. Please note that this time the NuGet package is MKL-based. 35 | 36 | The Linux CNTK evaluator is included in the Linux binary distribution. 37 | 38 | See more on CNTK model evaluation in the [corresponding section](../CNTK-Evaluation-Overview.md). 39 | 40 | ## Deconvolution and Unpooling 41 | 42 | CNTK now supports Deconvolution and Unpooling. See the usage example in [Network number 4 in the MNIST Sample](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Image/MNIST/README.md). (All examples are included in the binary distribution.) 43 | 44 | ## Support of OpenCV 3.1 45 | 46 | In this version, the ImageReader was compiled against OpenCV v3.1. This fixed some race conditions at the beginning of the first epoch caused by an invalid implementation of static initializers in OpenCV v3.0 on Windows. 47 | 48 | ## Python support 49 | 50 | Starting from v.1.5, you can use the preview of the Python API. See the [CNTK v.1.5 Release Notes](./CNTK_1_5_Release_Notes.md) for further instructions. 51 | -------------------------------------------------------------------------------- /articles/Reduction-Operations.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Reduction operations 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 02/10/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Reduction operations 13 | 14 | Reduce an input, e.g., compute the sum or mean over elements. 15 | 16 | ReduceSum (x, axis=None) 17 | ReduceLogSum (x, axis=None) 18 | ReduceMean (x, axis=None) 19 | ReduceMax (x, axis=None) 20 | ReduceMin (x, axis=None) 21 | 22 | ## Parameters 23 | 24 | * `x`: data to reduce 25 | * `axis` (default: `None`): if specified, perform reduction along this axis only. This value is 1-based; i.e., 1 stands for the first static axis of `x`. 26 | 27 | ## Return value 28 | 29 | Reduced value. For `axis=None` (the default), this is a scalar. If an axis is specified, that axis 30 | is reduced to have dimension 1. 
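In NumPy terms (an analogy only; note that BrainScript axes are 1-based while NumPy axes are 0-based), the shape behavior looks like this:

```python
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(3, 4)      # a [3 x 4] tensor

print(np.sum(x))                                       # no axis: a scalar
print(np.sum(x, axis=1, keepdims=True).shape)          # BrainScript axis=2 -> (3, 1)
print(np.logaddexp.reduce(x, axis=1, keepdims=True))   # ReduceLogSum analogue
```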
31 | 32 | ## Description 33 | 34 | These functions compute aggregates (sum, mean, etc.) over all values of an input vector or tensor. 35 | Available aggregations are: 36 | * `ReduceSum()`: the sum over the elements 37 | * `ReduceLogSum()`: the sum over elements in log representation (`logC = log (exp (logA) + exp (logB))`) 38 | * `ReduceMean()`: the mean over the elements 39 | * `ReduceMax()`: the maximum value of the elements 40 | * `ReduceMin()`: the minimum value 41 | 42 | By default, aggregation is done over all elements. 43 | In the case of a tensor with rank > 1, the optional `axis` parameter specifies a single axis 44 | that the reduction is performed over. 45 | For example, `axis=2` applied to a `[M x N]`-dimensional matrix would aggregate over all columns, 46 | yielding a `[M x 1]` result. 47 | 48 | ### Reducing over sequences 49 | If the input is a sequence, reduction is performed separately for every sequence item. 50 | These operations do not support reduction over sequences. 51 | Instead, you can achieve this with a recurrence. 52 | For example, to sum up all elements of a sequence `x`, you can say: 53 | 54 | sum = x + PastValue (0, sum, defaultHiddenActivation=0) 55 | 56 | and for max pooling, you can use 57 | 58 | max = Max(x, PastValue (0, max, defaultHiddenActivation=0)) 59 | 60 | ## Examples 61 | 62 | Normalize a value by subtracting the mean of its elements (e.g., as part of [layer normalization](./BrainScript-Layers-Reference.md#batchnormalizationlayer-layernormalizationlayer-stabilizerlayer)): 63 | 64 | mean = ReduceMean (x) 65 | xNorm = x - mean 66 | 67 | Or, the [cross-entropy with softmax](./Loss-Functions-and-Metrics.md#crossentropy-crossentropywithsoftmax) criterion can be 68 | manually defined using `ReduceLogSum()`: 69 | 70 | myCrossEntropyWithSoftmax (y/*label*/, z/*logit*/) = ReduceLogSum (z) - ReduceSum (y .* z) 71 | -------------------------------------------------------------------------------- /articles/Archive/CNTK-Evaluation-using-cntk.exe.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Evaluation using cntk.exe 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/03/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK Evaluation using cntk.exe 13 | 14 | ## Evaluating a model using cntk.exe 15 | 16 | Evaluating a model using the CNTK executable itself, i.e. cntk.exe, is similar to the training process. But instead of using the "train" command, the "eval" command is placed in the configuration file. 17 | 18 | ### Using the CNTK executable for evaluation has the following advantages: 19 | 20 | #### CPU/GPU capability 21 | Like training, CNTK can leverage the GPU during evaluation. Refer to the [Config File Overview](../BrainScript-Config-file-overview.md) page for more details. 22 | 23 | #### Readers (and their transformations) 24 | Similar to model training, the reader plugins (e.g. ImageReader) may perform some data transforms on the input data prior to feeding it to the network during training. These transforms are not part of CNTK (per se), but of the readers. In order to feed the same *transformed* data during evaluation, the transformations need to occur prior to feeding. When evaluating using the CNTK executable, the same reader (as used during training) can be used, and thus the same transformation can be applied. 
As we will cover later in this page, when using the programmatic approach, these transforms will need to be performed programmatically outside of the evaluation engine prior to submitting the data for evaluation (assuming the model was trained with transformed data). 25 | #### Model tweaking 26 | When using CNTK for evaluation, there is a possibility of modifying the model's layout using BrainScript. This enables additional capabilities, such as exposing hidden layers for evaluation. Refer to the BrainScript page for more information. 27 | 28 | ### Using the CNTK executable for evaluation has the following disadvantages: 29 | 30 | #### Process spin-up time 31 | The CNTK executable (by nature) runs as a process, and thus will take some time to spin up. For services where many requests need to be dynamically processed, the better option would be to use the Evaluation Library in a service. 32 | #### File-based input/output 33 | The CNTK executable reads the input data from file(s) and writes the output data to a file. For services running in the cloud, this may cause some performance issues. 34 | 35 | **Note: If you do go the route of evaluating a CNTK model with the CNTK executable, make sure your parameters are adequate for the evaluation. In particular, specify an appropriate value for minibatchSize. Please refer to the [Troubleshoot CNTK](../Troubleshoot-CNTK.md) page for more information.** 36 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_RC_2_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_RC_2_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 05/24/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | ## CNTK v.2.0 RC 2 Release Notes 13 | 14 | With Release Candidate 2 we ship one of the last previews before the Microsoft Cognitive Toolkit V2 final release at the end of May. There is still time to incorporate your feedback - so keep it coming! 15 | 16 | ### CNTK Core: new and improved features 17 | 18 | * New operators added: 19 | [pow](https://cntk.ai/pythondocs/cntk.ops.html#cntk.ops.pow), 20 | [sequence.reduce_max](https://cntk.ai/pythondocs/cntk.ops.sequence.html#cntk.ops.sequence.reduce_max), and 21 | [sequence.softmax](https://cntk.ai/pythondocs/cntk.ops.sequence.html#cntk.ops.sequence.softmax). 22 | * New features for Linux source builds: 23 | * GPU Direct RDMA support in distributed gradient aggregation (enabled through `./configure --gdr=yes`). 24 | * NCCL support for Python in V2 gradient aggregation 25 | * Additional bug fixes and improvements, including: 26 | * Added functionality to set and get the global tracing verbosity level. 27 | * Added a mechanism to preserve the local state of a distributed worker in the checkpoint. 28 | * The initial random seed can now be specified for dropout and random sample nodes; the auto-generated seed values for these nodes are now worker-specific. 29 | * Progress writers now log changes in learning rates and momentum values. 30 | 31 | ### CNTK Python API 32 | 33 | #### New and improved features 34 | 35 | * Support for Python 3.6 for source and binary installation; see the 36 | [setup instructions](../Setup-CNTK-on-your-machine.md). 37 | * `UserMinibatchSource` now allows writing custom minibatch sources and passing them to the CNTK core. 
38 | See [here](https://cntk.ai/pythondocs/extend.html#user-defined-minibatch-sources) for documentation. 39 | * CNTK deterministic mode (`cntk.cntk_py.force_deterministic_algorithms()`) can be enabled in all configurations, except for overlapped max pooling. 40 | * The training session switched to distributed test evaluation and cross validation. 41 | 42 | ### CNTK C#/.NET Managed API 43 | 44 | #### New and improved features 45 | 46 | * New APIs: `class NDArrayView` and methods, `SetMaxNumCPUThreads()`, `GetMaxNumCPUThreads()`, `SetTraceLevel()`, `GetTraceLevel()` 47 | * Memory and performance optimization. 48 | 49 | The updated APIs are described [here](../CNTK-Library-Managed-API.md). 50 | 51 | ### CNTK NuGet package 52 | 53 | A new set of NuGet Packages is provided with this Release. 54 | 55 | **IMPORTANT!** For Visual Studio: In the *Manage NuGet Packages* window, change the default option *Stable Only* to *Include Prerelease*. Otherwise, the release candidate package of CNTK will not be visible. This Package version is ```2.0.0-rc2```. 56 | 57 | -------------------------------------------------------------------------------- /articles/Archive/CNTK-Evaluate-Hidden-Layers.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Evaluate Hidden Layers 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 04/03/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK Evaluate Hidden Layers 13 | 14 | This page describes how to expose the values of a trained model's hidden layers. 15 | 16 | ## Overview 17 | A CNTK model is built on interconnected layers. Some of these layers can be evaluated using the `EvalDll` because they are tagged as being 'output' layers. In order to expose other layers through the `EvalDll`, these layers must be tagged as output layers by adding them to the `outputNodes` property. 18 | 19 | For example, the `01_OneHidden_ndl_deprecated.cntk` configuration file refers to the `01_OneHidden.ndl` file for the network definition. In this network description file, we have two layers defined: 20 | 21 | h1 = DNNSigmoidLayer (featDim, hiddenDim, featScaled, 1) 22 | ol = DNNLayer (hiddenDim, labelDim, h1, 1) 23 | 24 | But only one layer is marked as an output: 25 | 26 | outputNodes = (ol) 27 | 28 | Thus, the `EvalDll` will only return values pertaining to the `ol` layer during evaluation. 29 | 30 | In order to be able to evaluate the `h1` hidden layer, we first need to expose it as an output node. There are three possible ways: 31 | 32 | ## 1. Training the model with hidden layers exposed 33 | To output the `h1` layer, just add it as an output in the network description (`01_OneHidden.bs` file) when training it, and that layer will be available for reading during evaluation: 34 | 35 | outputNodes = (h1:ol) 36 | 37 | **However, this implies the model would need to be (re)trained with this configuration.** 38 | 39 | ## 2. Modifying an already trained model 40 | Models can be modified on the fly when being loaded using BrainScript expressions. 41 | This will be documented in a future update of this documentation. 42 | 43 | ## 3.
Changing the output-node set of an already trained model while loading it for evaluation using the `EvalDll`/`EvalDllWrapper` modules 44 | If a trained model will be evaluated using the `EvalDll`/`EvalDllWrapper` modules, you can add the `outputNodeNames` property with a colon-separated list of nodes to the network definition: 45 | 46 | outputNodeNames = "h1.z:ol.z" 47 | 48 | When loading the network, the Eval engine will recognize the `outputNodeNames` property and replace the model's output nodes with the list of nodes specified in the `outputNodeNames` property. 49 | 50 | Looking at the code inside the `CPPEvalClient` example project shows the (uncommented) line specifying the `outputNodeNames` property: 51 | 52 | networkConfiguration += "outputNodeNames=\"h1.z:ol.z\"\n"; 53 | networkConfiguration += "modelPath=\"" + modelFilePath + "\""; 54 | model->CreateNetwork(networkConfiguration); 55 | 56 | Running the program shows the corresponding output for the `h1.z` layer. 57 | -------------------------------------------------------------------------------- /articles/Setup-UWP-Build-on-Windows.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK Development Environment for Universal Windows Platform (UWP) 3 | author: wolfma61 4 | ms.author: wolfma 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: get-started-article 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | # CNTK Development Environment for Universal Windows Platform (UWP) 12 | 13 | [!INCLUDE[versionadded-2.1-block](includes/versionadded-2.1-block.md)] 14 | 15 | To build the CNTK configurations `Release_UWP` and `Debug_UWP` (for x64) in the CNTK Visual Studio solution file, you need to do the following: 16 | 17 | ## Install Workload: Universal Windows Platform development 18 | 19 | Open the Control Panel, then navigate to Programs -> Programs and Features. Select Visual Studio 2017 and click 'Change'. When the Visual Studio setup starts, select the Workloads option `Universal Windows Platform Development` 20 | 21 | ![VS Setup](pictures\setup\VS2017Workloads.jpg) 22 | 23 | This will take a few minutes to install. 24 | 25 | ## Install OpenBLAS 26 | 27 | OpenBLAS is used as an alternative math library for CNTK UWP. The source code for OpenBLAS can be found on [GitHub](https://github.com/xianyi/OpenBLAS). You can either use the pre-built version of OpenBLAS provided by the Microsoft Cognitive Toolkit team (the recommended installation path), or build it yourself. 28 | 29 | ### Using the pre-built OpenBLAS library 30 | 31 | Create a directory on your machine, e.g.: 32 | 33 | ``` 34 | mkdir c:\local\CNTKopenBLAS 35 | ``` 36 | 37 | Set the environment variable `CNTK_OPENBLAS_PATH` to point to this directory: 38 | 39 | ``` 40 | setx CNTK_OPENBLAS_PATH c:\local\CNTKopenBLAS 41 | ``` 42 | 43 | Download the file [CNTKopenBLAS-Windows-2.zip](https://www.microsoft.com/en-us/cognitive-toolkit/download-openblas-uwp-library/). Unzip it into your CNTK openBLAS path, creating a numbered subdirectory within. For example, if you are on the latest master, download and extract its contents to `c:\local\CNTKopenBLAS\2` (the top-level folder inside the ZIP archive is called `2`). 44 | 45 | To validate, the file `%CNTK_OPENBLAS_PATH%\2\cblas.h` must exist. 46 | 47 | ### Build OpenBLAS from source 48 | 49 | This is an alternative to using the pre-built OpenBLAS library.
Follow the instructions from [here](https://github.com/xianyi/OpenBLAS/wiki/How-to-use-OpenBLAS-in-Microsoft-Visual-Studio#build-openblas-for-universal-windows-platform), then copy the resulting files into a local directory as described above, setting the `CNTK_OPENBLAS_PATH` environment variable. 50 | 51 | # Build UWP configurations 52 | 53 | Now restart Visual Studio and build the `Release_UWP` or `Debug_UWP` configurations. 54 | 55 | # Running tests 56 | 57 | UWP-specific tests are located in the `Tests\EndToEndTests\EvalClientTests\CNTKLibraryCPPUWPEvalExamplesTests` directory. 58 | 59 | Open the Test Explorer window in Visual Studio. You should see a list of tests like this: 60 | 61 | ![tests](pictures\setup\uwp-tests.png) 62 | -------------------------------------------------------------------------------- /articles/Tutorials.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Tutorials & Examples 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.topic: conceptual 7 | ms.service: Cognitive-services 8 | ms.devlang: python 9 | --- 10 | 11 | 12 | # Tutorials & Examples 13 | 14 | ## Tutorials 15 | 16 | ### Python Jupyter Notebook (Recommended) 17 | Assuming you have completed [Getting Started](https://www.cntk.ai/pythondocs/gettingstarted.html), use the 18 | CNTK Python Jupyter notebook [tutorials](https://cntk.ai/pythondocs/tutorials.html) to gain familiarity with the toolkit. You may want to start with the CNTK 100 series tutorials before trying out higher series that cover a range of different applications including image classification, language understanding, reinforcement learning, and others. 19 | 20 | ### Additional Python recipes: 21 | * ['Build your own image classifier using Transfer Learning'](./Build-your-own-image-classifier-using-Transfer-Learning.md) provides two examples for custom image classifiers using transfer learning. 22 | * ['Object detection using Fast R-CNN'](./Object-Detection-using-Fast-R-CNN.md) describes how to train Fast R-CNN on PASCAL VOC data and custom data for object detection. 23 | * ['Object-Detection-using-Faster-R-CNN'](./Object-Detection-using-Faster-R-CNN.md) describes how to train Faster R-CNN on PASCAL VOC data and custom data for object detection. 24 | 25 | You can also try out the tutorials live with pre-installed CNTK in [Azure Notebooks](https://notebooks.azure.com/CNTK/libraries/tutorials) for free. 26 | 27 | ## Examples 28 | Refer to [Examples](./Examples.md) to find examples of building networks in CNTK using the supported APIs. 29 | 30 | 40 | 44 | 45 | 49 | -------------------------------------------------------------------------------- /articles/BrainScript-LM-sequence-reader.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: BrainScript LM Sequence Reader 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 03/15/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # BrainScript LM Sequence Reader 13 | 14 | Note: if you are a newcomer, please consider using the [CNTK Text Format Reader](./BrainScript-CNTKTextFormat-Reader.md). In the future, LMSequenceReader will be deprecated and eventually not supported. 15 | 16 | LMSequenceReader is a reader that reads text strings. It is most often used for language modeling tasks.
An example of its setup is as follows 17 | 18 | reader = [ 19 | readerType = "LMSequenceReader" 20 | randomize = false 21 | nbruttsineachrecurrentiter = 10 22 | unk = "<unk>" 23 | wordclass = "$DataDir$\wordclass.txt" 24 | file = "$DataDir$\penntreebank.train.txt" 25 | labelIn = [ 26 | labelDim = 10000 27 | beginSequence = "<s>" 28 | endSequence = "</s>" 29 | ] 30 | ] 31 | 32 | The LMSequenceReader has the following parameters: 33 | * `randomize`: it is either `None` or `Auto`. This specifies whether sentence randomization is performed over the whole corpus. 34 | 35 | * `nbruttsineachrecurrentiter`: this specifies the limit of the number of sentences in a minibatch. The reader arranges same-length input sentences, up to the specified limit, into each minibatch. For recurrent networks, the trainer resets hidden layer activities only at the beginning of sentences. Activities of hidden layers are carried over to the next minibatch if an end of sentence is not reached. Using multiple sentences in a minibatch can speed up training processes. 36 | 37 | * `unk`: this specifies the symbol to represent unseen input symbols. Usually, this symbol is `<unk>`. Unseen words will be mapped to this symbol. 38 | 39 | * `wordclass`: this specifies the word class information. This is used for class-based language modeling. An example of the class information is below. The first column is the word index. The second column is the number of occurrences, the third column is the word, and the last column is the class id of the word. 40 | 41 | `0 42068 </s> 0` 42 | 43 | `1 50770 the 0` 44 | 45 | `2 45020 <unk> 0` 46 | 47 | `3 32481 N 0` 48 | 49 | `4 24400 of 0` 50 | 51 | `5 23638 to 0` 52 | 53 | `6 21196 a 0` 54 | 55 | `7 18000 in 1` 56 | 57 | `8 17474 and 1` 58 | 59 | * `file`: the file that contains the text strings. An example is below. In this example you can also notice one sub-block named `labelIn`. 60 | 61 | pierre <unk> N years old will join the board as a nonexecutive director nov. N 62 | mr. <unk> is chairman of <unk> n.v. the dutch publishing group 63 | 64 | * `labelIn`: the section for input labels. It contains the following setups: 65 | * `beginSequence` – the sentence beginning symbol 66 | * `endSequence` – the sentence ending symbol 67 | * `labelDim` – the dimension of labels. This usually means the vocabulary size. 68 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_Beta_9_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_Beta_9_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 01/25/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_2_0_Beta_9_Release_Notes 13 | 14 | This is a summary of new features delivered with the Beta 9 release of CNTK V.2.0. 15 | 16 | ## Breaking changes 17 | 18 | There is a **breaking change** in this release: 19 | 20 | * Image Reader is updated with a random area option (useful for Inception-style networks) and a color transform option. 21 | 22 | ## New and updated features 23 | 24 | * Lambda rank and NDCG at 1 are accessible from Python for real this time. 25 | * Changes in the Learner API: for learners that use momentum (`momentum_sgd`, `nesterov_sgd` and `adam_sgd`) it is now possible to specify if the momentum should be applied in the regular fashion or as a unit-gain filter (default).
For more details on the unit-gain momentum, please refer to this [section](../BrainScript-SGD-Block.md#converting-learning-rate-and-momentum-parameters-from-other-toolkits). 26 | * No more partial minibatches at the sweep boundary. Now minibatches are allowed to transparently cross the sweep boundary. The `epoch_size` parameter for all hyperparameter schedules now defaults to a full data sweep (i.e., by default, hyperparameters change their values on a sweep-by-sweep basis). 27 | 28 | ## New Examples and Tutorials 29 | 30 | * Deconvolution layer and image auto encoder example using deconvolution and unpooling ([Example **07_Deconvolution** in *Image - Getting Started*](https://github.com/Microsoft/CNTK/tree/v2.0.beta9.0/Examples/Image/GettingStarted)). 31 | * [Basic autoencoder with MNIST data](https://github.com/Microsoft/CNTK/blob/v2.0.beta9.0/Tutorials/CNTK_105_Basic_Autoencoder_for_Dimensionality_Reduction.ipynb). 32 | * [LSTM Timeseries with Simulated Data (Part A)](https://github.com/Microsoft/CNTK/blob/v2.0.beta9.0/Tutorials/CNTK_106A_LSTM_Timeseries_with_Simulated_Data.ipynb). (More will come in future releases.) 33 | 34 | ## Python API 35 | 36 | The following updates are introduced to the Python API: 37 | 38 | * The default Python version for the binary installation script was changed to **3.5** for both Windows and Linux. As before, you can manually select version 2.7, 3.4, or 3.5 during the installation. Please see the [binary and source setup](../Setup-CNTK-on-your-machine.md) instructions to find out how to select the Python version. 39 | * [Docker Hub Runtime Image](../CNTK-Docker-Containers.md) will also contain Python v. 3.5. 40 | * Lambda rank and NDCG at 1 are now accessible from Python. 41 | * Preliminary version of the training session API for distributed learning is exposed in Python. 42 | 43 | ## CNTK Evaluation library. NuGet package 44 | 45 | A new set of NuGet Packages is provided with this Release. 46 | 47 | **IMPORTANT!** In the Visual Studio *Manage NuGet Packages* window, change the default option *Stable Only* to *Include Prerelease*. Otherwise, the packages will not be visible. The Package version should be ```2.0-beta9```. 48 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_2_0_Beta_10_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_2_0_Beta_10_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 02/01/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_2_0_Beta_10_Release_Notes 13 | 14 | This is a summary of new features delivered with the Beta 10 release of CNTK V.2.0. 15 | 16 | ## New Examples and Tutorials 17 | 18 | * A Python version of the deconvolution layer and image auto encoder example was added ([Example **07_Deconvolution** in *Image - Getting Started*](https://github.com/Microsoft/CNTK/tree/v2.0.beta10.0/Examples/Image/GettingStarted)).
19 | * [Python distributed training example for image classification using AlexNet](https://github.com/Microsoft/CNTK/tree/v2.0.beta10.0/Examples/Image/Classification/AlexNet/Python) 20 | * [Basic implementation of Generative Adversarial Networks (GAN) networks](https://github.com/Microsoft/CNTK/blob/v2.0.beta10.0/Tutorials/CNTK_206_Basic_GAN.ipynb) 21 | * [Training with Sampled Softmax](https://github.com/Microsoft/CNTK/blob/v2.0.beta10.0/Tutorials/CNTK_207_Training_with_Sampled_Softmax.ipynb) 22 | 23 | ## Python API 24 | 25 | The following updates are introduced to the Python API: 26 | 27 | * Operators can now be implemented in pure Python by means of UserFunctions, 28 | cf. [here](https://cntk.ai/pythondocs/extend.html) 29 | * The API is still experimental and subject to change. 30 | 31 | * Plotting the CNTK graph 32 | Plotting a CNTK graph is now as easy as calling 33 | 34 | `cntk.graph.plot(node, 'node.png')`. 35 | 36 | Prerequisites are that the Python module `pydot_ng` is installed (via `pip`) 37 | and that GraphViz has been installed from graphviz.org and its executable is 38 | in the system's path. 39 | 40 | * API support for object detection using Fast R-CNN was added 41 | * See the 42 | [description](../Object-Detection-using-Fast-R-CNN.md) 43 | and 44 | [code](https://github.com/Microsoft/CNTK/blob/v2.0.beta10.0/Examples/Image/Detection/FastRCNN/A2_RunCntk_py3.py) 45 | for the Fast R-CNN example. 46 | 47 | * TensorBoard Event Generation 48 | * Initial support for generating TensorBoard events from the ProgressPrinter 49 | class was added, cf. 50 | [here](../Using-TensorBoard-for-Visualization.md) 51 | 52 | ## Bug fixes 53 | * Speed up TimesNodeBase for sparse by avoiding unroll. This improves the speed of 54 | CrossEntropyWithSoftmax and ClassificationError for sparse labels. The 55 | language understanding example 56 | ([LanguageUnderstanding.py](https://github.com/Microsoft/CNTK/blob/v2.0.beta10.0/Examples/LanguageUnderstanding/ATIS/Python/LanguageUnderstanding.py)) 57 | is now four times faster. 58 | 59 | ## CNTK Evaluation library. NuGet package 60 | 61 | A new set of NuGet Packages is provided with this Release. 62 | 63 | **IMPORTANT!** In the Visual Studio *Manage NuGet Packages* window, change the default option *Stable Only* to *Include Prerelease*. 64 | After selecting the appropriate NuGet package to install, use the version selector on the right to explicitly select package version `2.0.0-beta10`. 65 | -------------------------------------------------------------------------------- /articles/Sequential.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Sequential 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 08/26/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # Sequential 13 | 14 | Composes an array of functions into a new function that calls these functions one after another ("forward function composition"). 15 | 16 | Sequential (arrayOfFunctions) 17 | 18 | ## Parameters 19 | 20 | `arrayOfFunctions`: a BrainScript array of functions, e.g. constructed with the `:` operator: `(LinearLayer{1024} : Sigmoid)` 21 | 22 | ## Return value 23 | 24 | This function returns another function. That returned function takes one argument, and returns the result 25 | of applying all given functions in sequence to the input.
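For instance, a minimal sketch (the layer sizes here are made up for illustration):

    model = Sequential (DenseLayer{512, activation=ReLU} : DenseLayer{10})
    z = model (features)   # applies DenseLayer{512, ...} first, then DenseLayer{10}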
26 | 27 | ## Description 28 | 29 | `Sequential()` is a powerful operation that allows you to compactly express a very common situation in neural networks 30 | where an input is processed by propagating it through a progression of layers. 31 | You may be familiar with it from other neural-network toolkits. 32 | 33 | `Sequential()` takes an array of functions as its argument, 34 | and returns a *new* function that invokes these functions in order, 35 | each time passing the output of one to the next. 36 | Consider this example: 37 | 38 | FGH = Sequential (F:G:H) 39 | y = FGH (x) 40 | 41 | Here, the colon (`:`) is BrainScript's syntax for expressing arrays. For example, 42 | `(F:G:H)` is an array with three elements, `F`, `G`, and `H`. 43 | In Python, for example, this would be written as `[ F, G, H ]`. 44 | 45 | The `FGH` function defined above means the same as 46 | 47 | y = H(G(F(x))) 48 | 49 | This is known as ["function composition"](https://en.wikipedia.org/wiki/Function_composition), 50 | and is especially convenient for expressing neural networks, which often have this form: 51 | 52 | +-------+ +-------+ +-------+ 53 | x -->| F |-->| G |-->| H |--> y 54 | +-------+ +-------+ +-------+ 55 | 56 | which is perfectly expressed by `Sequential (F:G:H)`. 57 | 58 | Lastly, please be aware that the following expression: 59 | 60 | layer1 = DenseLayer{1024} 61 | layer2 = DenseLayer{1024} 62 | z = Sequential (layer1 : layer2) (x) 63 | 64 | means something different from: 65 | 66 | layer = DenseLayer{1024} 67 | z = Sequential (layer : layer) (x) 68 | 69 | In the latter form, the same function *with the same shared set of parameters* is applied twice, 70 | while in the former, the two layers have separate sets of parameters. 71 | 72 | ## Example 73 | 74 | A standard 4-hidden-layer feed-forward network as used in the earlier deep-neural-network 75 | work on speech recognition: 76 | 77 | myModel = Sequential ( 78 | DenseLayer{2048, activation=Sigmoid} : # four hidden layers 79 | DenseLayer{2048, activation=Sigmoid} : 80 | DenseLayer{2048, activation=Sigmoid} : 81 | DenseLayer{2048, activation=Sigmoid} : 82 | DenseLayer{9000, activation=Softmax} # note: last layer is a Softmax 83 | ) 84 | features = Input{40} 85 | p = myModel (features) 86 | -------------------------------------------------------------------------------- /articles/NuGet-Package.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: NuGet Package 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: csharp, cpp 10 | --- 11 | 12 | # NuGet Package 13 | 14 | ## Overview 15 | 16 | The CNTK NuGet package contains the necessary libraries and assemblies to enable .NET and Windows C++ applications to perform CNTK model evaluation. There are 3 NuGet packages: 17 | 18 | * **CNTK.CPUOnly**: provides [CNTK C#/.NET Managed Library](./CNTK-Library-Managed-API.md) and [C++ Library](./CNTK-Library-Native-Eval-Interface.md) for CPU only machines. 19 | * **CNTK.GPU**: provides [CNTK C#/.NET Managed Library](./CNTK-Library-Managed-API.md) and [C++ Library](./CNTK-Library-Native-Eval-Interface.md) for GPU enabled machines. 20 | * **CNTK.UWP.CPUOnly**: provides [CNTK C++ UWP Eval Library](./CNTK-Library-Native-Eval-Interface.md) for applications using Universal Windows Platform (UWP) on CPU only machines.
21 | 22 | ## Installation 23 | The package may be obtained through the NuGet Package Manager inside Visual Studio by searching for "CNTK", or downloaded directly from nuget.org: 24 | 25 | * [https://www.nuget.org/packages/CNTK.CPUOnly](https://www.nuget.org/packages/CNTK.CPUOnly) 26 | * [https://www.nuget.org/packages/CNTK.GPU](https://www.nuget.org/packages/CNTK.GPU) 27 | * [https://www.nuget.org/packages/CNTK.UWP.CPUOnly](https://www.nuget.org/packages/CNTK.UWP.CPUOnly) 28 | 29 | The current version is `2.4.0`. 30 | 31 | The CNTK NuGet packages may be installed in Visual C++, .NET (C#, VB.NET, F#, ...), or UWP projects. The NuGet package contains both debug and release versions of the C++ library and DLLs, and the release version of the C# assembly and its dependent DLLs. Once installed, the project will contain a reference to the managed DLL, and the required dependent binary libraries will be copied to the output directory after building the project. 32 | 33 | For instructions on how to install a NuGet package, please refer to the NuGet documentation at: 34 | [https://docs.nuget.org/consume/installing-nuget](https://docs.nuget.org/consume/installing-nuget) 35 | 36 | ## Current Release 37 | The current release of the CNTK Eval NuGet Packages supports the following interfaces: 38 | * [CNTK Library Managed Eval Interface](./CNTK-Library-Managed-API.md) 39 | * [CNTK Library Managed Training Interface](./Using-CNTK-with-CSharp.md) 40 | * [CNTK Library C++ Eval Interface](./CNTK-Library-Native-Eval-Interface.md) 41 | 42 | ## Linux 43 | There is an equivalent set of Linux libraries (albeit not available through NuGet) that enables CNTK model evaluation on Linux using C++. Refer to the [CNTK Evaluation on Linux](./CNTK-Library-Evaluation-on-Linux.md) page for details. 44 | 45 | ## Legacy applications using CNTK EvalDLL interface 46 | For applications that are still using the CNTK EvalDLL interface, which only supports the [model-v1 format](./CNTK-model-format.md), please use the **Microsoft.Research.CNTK.CpuEval-mkl** NuGet package: 47 | * [https://www.nuget.org/packages/Microsoft.Research.CNTK.CpuEval-mkl](https://www.nuget.org/packages/Microsoft.Research.CNTK.CpuEval-mkl): supports CPU Only, implements [EvalDll C# Interface](./EvalDll-Managed-API.md) and [EvalDll C++ Interface](./EvalDll-Native-API.md). 48 | -------------------------------------------------------------------------------- /articles/BrainScript-Activation-Functions.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Activation Functions with BrainScript 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 03/09/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # Activation Functions with BrainScript 13 | 14 | ## Sigmoid(), Tanh(), ReLU(), Softmax(), LogSoftmax(), Hardmax() 15 | 16 | Non-linear activation functions for neural networks. 17 | 18 | Sigmoid (x) 19 | Tanh (x) 20 | ReLU (x) 21 | Softmax (x) 22 | LogSoftmax (x) 23 | Hardmax (x) 24 | 25 | ### Parameters 26 | 27 | * `x`: argument to apply the non-linearity to 28 | 29 | ### Return Value 30 | 31 | Result of applying the non-linearity. The output's tensor shape is the same as the input's. 32 | 33 | ### Description 34 | 35 | These are the popular activation functions of neural networks. 36 | All of these except the `Softmax()` family and `Hardmax()` are applied elementwise.
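Since the `Softmax()` family is not elementwise (it involves a normalizing denominator, as discussed below), it can be instructive to sketch it with an explicit reduction. The following is an illustration only, not CNTK's actual implementation:

    # sketch: subtract the log-denominator computed by ReduceLogSum(), then exponentiate
    MySoftmax (z) = Exp (z - ReduceLogSum (z))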
37 | 38 | Note that for efficiency, when using the cross-entropy training criterion, 39 | it is often desirable to not apply a Softmax operation at the end, 40 | but instead pass the *input* of the Softmax to `CrossEntropyWithSoftmax()`. 41 | 42 | The `Hardmax()` operation determines the element with the highest value 43 | and represents its location as a one-hot vector/tensor. 44 | This is used for performing classification. 45 | 46 | #### Expressing Other Non-Linearities in BrainScript 47 | 48 | If your needed non-linearity is not one of the above, 49 | it may be composable as a BrainScript expression. 50 | For example, a leaky ReLU with a slope of 0.1 for the negative part could just be written as 51 | 52 | LeakyReLU (x) = 0.1 * x + 0.9 * ReLU (x) 53 | 54 | #### Softmax Along Axes 55 | 56 | The Softmax family is special in that it involves the computation of a denominator. 57 | This denominator is computed over all values of the input vector. 58 | 59 | In some scenarios, however, the input is a tensor with rank>1, where axes should be treated separately. 60 | Consider, for example, an input tensor of shape `[10000 x 20]` that stores 20 different distributions, 61 | where each column represents the probability distribution of a distinct input item. 62 | Hence, the Softmax operation should compute 20 separate denominators. 63 | This operation is not supported by the built-in `(Log)Softmax()` functions, but can be realized 64 | in BrainScript using an elementwise reduction operation as follows: 65 | 66 | ColumnwiseLogSoftmax (z) = z - ReduceLogSum (z, axis=1) 67 | 68 | Here, [`ReduceLogSum()`](./Reduction-Operations.md) computes (the log of) the denominator, resulting in a tensor 69 | with dimension 1 for the reduced axis; `[1 x 20]` in the above example. Subtracting this from the 70 | `[10000 x 20]`-dimensional input vector is a valid operation--as usual, the `1` will automatically "broadcast", 71 | that is, it is duplicated to match the input dimension.
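Following the same pattern, a column-wise (non-log) Softmax can be obtained by exponentiating the expression above (again a sketch, not a built-in function):

    ColumnwiseSoftmax (z) = Exp (ColumnwiseLogSoftmax (z))   # one probability distribution per column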
72 | 73 | ### Example 74 | 75 | A simple MLP that performs a 10-way classification of 40-dimensional feature vectors: 76 | 77 | features = Input{40} 78 | h = Sigmoid (ParameterTensor{256:0} * features + ParameterTensor{256}) 79 | z = ParameterTensor{10:0} * h + ParameterTensor{10} # input to Softmax 80 | labels = Input{10} 81 | ce = CrossEntropyWithSoftmax (labels, z) 82 | -------------------------------------------------------------------------------- /.openpublishing.redirection.json: -------------------------------------------------------------------------------- 1 | { 2 | "redirections": [ 3 | { 4 | "source_path": "articles/project-a-1D-input-of-dim-inputDim-to-a-1D-output-of-dim-outputDim.md", 5 | "redirect_url": "/cognitive-toolkit/How-do-I-Express-Things-In-Python#port-projection-of-1d-input-to-1d-output-from-python-api-to-c-api" 6 | }, 7 | { 8 | "source_path": "articles/Breaking-changes-in-Master-compared-to-beta15.md", 9 | "redirect_url": "/cognitive-toolkit/Archive/Breaking-changes-in-Master-compared-to-beta15" 10 | }, 11 | { 12 | "source_path": "articles/CNTK-move-to-Cuda8.md", 13 | "redirect_url": "/cognitive-toolkit/Archive/CNTK-move-to-Cuda8" 14 | }, 15 | { 16 | "source_path": "articles/Setup-Migrate-VS13-to-VS15.md", 17 | "redirect_url": "/cognitive-toolkit/Archive/Setup-Migrate-VS13-to-VS15" 18 | }, 19 | { 20 | "source_path": "articles/News-2016.md", 21 | "redirect_url": "/cognitive-toolkit/Archive/News-2016" 22 | }, 23 | { 24 | "source_path": "articles/CNTK-Shared-Libraries-Naming-Format.md", 25 | "redirect_url": "/cognitive-toolkit/Archive/CNTK-Shared-Libraries-Naming-Format" 26 | }, 27 | { 28 | "source_path": "articles/CNTK-Evaluation-using-cntk.exe.md", 29 | "redirect_url": "/cognitive-toolkit/Archive/CNTK-Evaluation-using-cntk.exe" 30 | }, 31 | { 32 | "source_path": "articles//EvalDLL-Evaluation-Overview.md", 33 | "redirect_url": "/cognitive-toolkit/Archive//EvalDLL-Evaluation-Overview" 34 | }, 35 | { 36 | "source_path": "articles/EvalDLL-Evaluation-on-Windows.md", 37 | "redirect_url": "/cognitive-toolkit/Archive/EvalDLL-Evaluation-on-Windows" 38 | }, 39 | { 40 | "source_path": "articles/EvalDLL-Evaluation-on-Linux.md", 41 | "redirect_url": "/cognitive-toolkit/Archive/EvalDLL-Evaluation-on-Linux" 42 | }, 43 | { 44 | "source_path": "articles/Evaluate-a-model-in-an-Azure-WebApi-using-EvalDll.md", 45 | "redirect_url": "/cognitive-toolkit/Archive/Evaluate-a-model-in-an-Azure-WebApi-using-EvalDll" 46 | }, 47 | { 48 | "source_path": "articles/EvalDll-Managed-API.md", 49 | "redirect_url": "/cognitive-toolkit/Archive/EvalDll-Managed-API" 50 | }, 51 | { 52 | "source_path": "articles/EvalDll-Native-API.md", 53 | "redirect_url": "/cognitive-toolkit/Archive/EvalDll-Native-API" 54 | }, 55 | { 56 | "source_path": "articles/CNTK-Evaluate-Hidden-Layers.md", 57 | "redirect_url": "/cognitive-toolkit/Archive/CNTK-Evaluate-Hidden-Layers" 58 | }, 59 | { 60 | "source_path": "articles/CNTK-Evaluate-Image-Transforms.md", 61 | "redirect_url": "/cognitive-toolkit/Archive/CNTK-Evaluate-Image-Transforms" 62 | }, 63 | { 64 | "source_path": "articles/CNTK-Evaluate-Multiple-Models.md", 65 | "redirect_url": "/cognitive-toolkit/Archive/CNTK-Evaluate-Multiple-Models" 66 | }, 67 | { 68 | "source_path": "articles/Setup-BuildProtobuf-VS15.md", 69 | "redirect_url": "/cognitive-toolkit/Archive/Setup-BuildProtobuf-VS15" 70 | }, 71 | { 72 | "source_path": "articles/Setup-Buildzlib-VS15.md", 73 | "redirect_url": "/cognitive-toolkit/Archive/Setup-Buildzlib-VS15" 74 | } 75 | ] 76 | } 77 |
-------------------------------------------------------------------------------- /articles/CNTK-Library-Evaluation-on-Linux.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Model Evaluation on Linux 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 07/31/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: cpp 10 | --- 11 | 12 | # Model Evaluation on Linux 13 | 14 | The CNTK Library on Linux is available in C++, Python, and Java. 15 | 16 | ## Using C++ 17 | The usage pattern on Linux is the same as that on Windows. 18 | 19 | The evaluation library, `libCntk.Core-<version>.so`, can be found under `cntk/lib` in the CNTK binary package. If you build CNTK from source code, the `libCntk.Core-<version>.so` is available in the `lib` folder of the build directory. 20 | 21 | Any program using the evaluation library needs to link the libraries `libCntk.Core` and `libCntk.Math`, and set the appropriate search path for these libraries. 22 | ``` 23 | -lCntk.Core-<version> -lCntk.Math-<version> 24 | ``` 25 | Please use the same build flavor (Debug/Release) and [the same compiler version](./Setup-CNTK-on-Linux.md#c-compiler) as the one used to create the libraries. The [Examples/Evaluation/CNTKLibraryCPPEvalCPUOnlyExamples](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/CNTKLibraryCPPEvalCPUOnlyExamples) and [Examples/Evaluation/CNTKLibraryCPPEvalGPUExamples](https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Evaluation/CNTKLibraryCPPEvalGPUExamples) in the CNTK source code illustrate the usage pattern on Linux. The [Makefile](https://github.com/Microsoft/CNTK/tree/release/latest/Makefile) contains the target CNTKLIBRARY_CPP_EVAL_EXAMPLES showing how to build the example. 26 | 27 | Please refer to the [CNTK Library C++ Evaluation Interface](./CNTK-Library-Native-Eval-Interface.md) page for APIs in the CNTK C++ Library. 28 | 29 | ## Using Python 30 | You can use Python to evaluate a pre-trained model. Examples can be found [here](./How-do-I-Evaluate-models-in-Python.md). 31 | 32 | ## Using Java 33 | CNTK also provides APIs for evaluating models in Java applications. Please note that the CNTK Java API is still experimental and subject to change. 34 | 35 | The [Java example](https://github.com/Microsoft/CNTK/tree/release/latest/Tests/EndToEndTests/EvalClientTests/JavaEvalTest/src/Main.java) shows how to evaluate a CNN model using the Java API. 36 | 37 | To use the CNTK Java library, add the `cntk.jar` file to the `classpath` of your Java project. If you are working with an IDE you should add this as an unmanaged jar. The cntk.jar file can be found in the CNTK binary release package (in the folder cntk/cntk/lib/java). You can also build cntk.jar from CNTK source. Please also set `java.library.path` to the directory containing `libCntk.Core.JavaBinding-<version>.so`. If you use the CNTK binary release package, please make sure that the prerequisites have been installed as described in the [Linux binary manual installation](./Setup-Linux-Binary-Manual.md) page, and set the LD_LIBRARY_PATH as follows (assuming the CNTK binaries are installed to /home/username/cntkbin): 38 | ``` 39 | export LD_LIBRARY_PATH=/home/username/cntkbin/cntk/lib:/home/username/cntkbin/cntk/dependencies/lib:$LD_LIBRARY_PATH 40 | ``` 41 | If you get `UnsatisfiedLinkErrors` in Java, it is typically because the directory is not in LD_LIBRARY_PATH (or appears in the wrong order).
42 | 43 | The Java library is currently built and tested with 64-bit OpenJDK 7. 44 | -------------------------------------------------------------------------------- /articles/Pooling.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Pooling 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 09/15/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # Pooling 13 | 14 | Pooling (input, 15 | poolKind, # "max" or "average" 16 | {kernel dimensions}, 17 | stride = {stride dimensions}, 18 | autoPadding = {padding flags (boolean)}, 19 | lowerPad = {lower padding (int)}, 20 | upperPad = {upper padding (int)}) 21 | 22 | The pooling operations compute a new matrix by selecting the maximum (max pooling) or average value in the pooling input. In the case of average pooling, the count used for the average does not include padded values. 23 | 24 | N-dimensional pooling allows you to create max or average pooling of any dimensions, stride, or padding, using the syntax shown above, 25 | 26 | where: 27 | * `input` - pooling input 28 | * `poolKind` - "max" or "average" 29 | * `{kernel dimensions}` - dimensions of the pooling window, as a BrainScript vector, e.g. `(4:4)`. 30 | * `stride` - [named, optional, default is 1] strides. 31 | * `autoPadding` - [named, optional, default is true] automatic padding flags for each input dimension. 32 | * `lowerPad` - [named, optional, default is 0] precise lower padding for each input dimension 33 | * `upperPad` - [named, optional, default is 0] precise upper padding for each input dimension 34 | 35 | All dimensions arrays are colon-separated. Note: If you use the deprecated `NDLNetworkBuilder`, these must be comma-separated and enclosed in `{ }` instead. 36 | 37 | Since the pooling window can have arbitrary dimensions, this allows you to build various pooling configurations, for example, a "Maxout" layer (see [Goodfellow et al](http://arxiv.org/abs/1302.4389) for details): 38 | 39 | MaxOutPool (inp, kW, kH, kC, hStride, vStride) = 40 | Pooling (inp, "max", (kW:kH:kC), stride=(hStride:vStride:kC), autoPadding=(true:true:false)) 41 | 42 | ## Simplified syntax for 2D pooling 43 | There is a simplified syntax for 2D pooling: 44 | 45 | MaxPooling(m, windowWidth, windowHeight, stepW, stepH, imageLayout="cudnn" /* or "HWC"*/ ) 46 | AveragePooling(m, windowWidth, windowHeight, stepW, stepH, imageLayout="cudnn" /* or "HWC"*/ ) 47 | 48 | with the following parameters: 49 | * `m` - the input matrix. 50 | * `windowWidth` - width of the pooling window 51 | * `windowHeight` - height of the pooling window 52 | * `stepW` - step (or stride) used in the width direction 53 | * `stepH` - step (or stride) used in the height direction 54 | * `imageLayout` - [named optional] the storage format of each image. This is a legacy option that you likely won't need. By default it is `HWC`, which means each image is stored as `[channel, width, height]` in column major notation. For better performance, it is recommended to use cuDNN, in which case you should set it to `cudnn`, which means each image is stored as [width, height, channel] in column major notation. Note that `cudnn` format works both on GPU and CPU.
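To make the relationship between the two syntaxes concrete, here is a sketch (not from the original page) of an ordinary 2x2 max pooling with stride 2 over a `[W x H x C]` input, written both ways; it assumes the channel axis should neither be pooled nor padded:

    # simplified 2D syntax
    p1 = MaxPooling (conv, 2, 2, 2, 2, imageLayout="cudnn")
    # roughly equivalent N-dimensional form: kernel and stride of 1 on the channel axis
    p2 = Pooling (conv, "max", (2:2:1), stride=(2:2:1), autoPadding=(false:false:false))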
55 | 56 | **Example (ConvReLULayer NDL macro):** 57 | 58 | # pool2 59 | pool2W = 2 60 | pool2H = 2 61 | pool2hStride = 2 62 | pool2vStride = 2 63 | pool2 = MaxPooling (conv2, pool2W, pool2H, pool2hStride, pool2vStride, imageLayout="$imageLayout$") 64 | 65 | Note: If you are using the deprecated `NDLNetworkBuilder`, the optional `imageLayout` parameter defaults to `"HWC"` instead. 66 | -------------------------------------------------------------------------------- /.spelling: -------------------------------------------------------------------------------- 1 | # markdown-spellcheck spelling configuration file 2 | # Format - lines beginning # are comments 3 | # global dictionary is at the start, file overrides afterwards 4 | # one word per line, to define a file override use ' - filename' 5 | # where filename is relative to this configuration file 6 | 7 | # Note: some of these entries should be revisited. 8 | # Trying to get to fewer number of errors at first. 9 | 10 | 1bit-SGD 11 | 1D 12 | 3D 13 | 64-bit 14 | ACM 15 | Agarwal 16 | Akchurin 17 | Allman 18 | Amit 19 | APIs 20 | ASGD 21 | ASR 22 | ATIS 23 | backend 24 | backpropagation 25 | Barsoum 26 | Basoglu 27 | Bing 28 | BMUF 29 | BPTT 30 | BrainScript 31 | BrainScriptNetworkBuilder 32 | breakpoint 33 | camelCase 34 | checkpointed 35 | checkpointing 36 | CHW 37 | CIFAR-10 38 | CLA 39 | CMake 40 | cmake 41 | CNTK 42 | CNTK.exe 43 | cntk.exe 44 | cntk.jar 45 | CNTKLibraryEvalExamples 46 | codebase 47 | composability 48 | composable 49 | ConvNet 50 | convolutional 51 | Cortana 52 | covariate 53 | CPUs 54 | CUDA 55 | cuda 56 | cuDNN 57 | customizable 58 | dataset 59 | datasets 60 | deconvolution 61 | deserialize 62 | deserialized 63 | deserializers 64 | devInstall.ps1 65 | DLL 66 | DLLs 67 | DNN 68 | DNNs 69 | DNS 70 | Dockerfile 71 | dockerized 72 | Droppo 73 | e.g. 74 | Eldar 75 | Emad 76 | Eversole 77 | executables 78 | exponentials 79 | exponentiate 80 | filename 81 | filenames 82 | Frobenius 83 | gcc 84 | GEMM 85 | Github 86 | globals 87 | GoogLeNet 88 | GPUs 89 | GRU 90 | GRUs 91 | HDFS 92 | HKBU 93 | Hogwild 94 | HTK 95 | HTKMLF 96 | HTKMLFReader 97 | HWC 98 | hyperparameters 99 | i.e. 
100 | IIS 101 | ImageNet 102 | initializer 103 | inlined 104 | IntelliSense 105 | Ioffe 106 | IPs 107 | Jasha 108 | JDK 109 | JPG 110 | JSON 111 | Jupyter 112 | Keras 113 | L1 114 | L2 115 | latin 116 | learnable 117 | Lifecycle 118 | lowerCamelCase 119 | LSTMs 120 | LTS 121 | Makefile 122 | metadata 123 | MFCC 124 | minibatch 125 | minibatches 126 | minibatching 127 | MKL 128 | MLF 129 | MLP 130 | MNIST 131 | MPI 132 | MSBuild 133 | MSDN 134 | MSMPI 135 | multinode 136 | Multiverso 137 | MxNet 138 | namespaces 139 | namespacing 140 | natively 141 | NCCL 142 | NDCG 143 | NDL 144 | NLP 145 | NuGet 146 | nvidia 147 | OpenCV 148 | OpenJDK 149 | OpenMP 150 | OpenMPI 151 | parallelization 152 | parameterized 153 | PascalCase 154 | Pathak 155 | pathnames 156 | PCIe 157 | PLP 158 | PNG 159 | PowerShell 160 | precompiled 161 | preconfigured 162 | Prefetch 163 | prefetches 164 | prefetching 165 | preinstalled 166 | prepend 167 | prepending 168 | prepends 169 | preprocessing 170 | preprocessor 171 | pretrained 172 | profiler 173 | programmatically 174 | PTVS 175 | RDMA 176 | re-entrancy 177 | redistributable 178 | regularizer 179 | resize 180 | RGB 181 | RMSE 182 | RNN 183 | RNNs 184 | ROI 185 | ROIs 186 | runtime 187 | Sayan 188 | SciPy 189 | SCP 190 | SDK 191 | Seide 192 | senone 193 | senones 194 | Sergey 195 | SimpleNetworkBuilder 196 | stackoverflow.com 197 | stateful 198 | subfolder 199 | subfolders 200 | submodule 201 | SVM 202 | Szegedy 203 | TensorBoard 204 | TensorBoardProgressWriter 205 | TensorFlow 206 | Theano 207 | timestamp 208 | TIMIT 209 | toolkits 210 | UAC 211 | UCI 212 | un-decided 213 | un-tar 214 | unary 215 | uncategorized 216 | uncheck 217 | uncommented 218 | undisputable 219 | unmanaged 220 | unpooling 221 | UpperCamelCase 222 | VGG 223 | VM 224 | VS13 225 | VS2015 226 | VS2017 227 | walkthrough 228 | webservice 229 | whl 230 | Win32 231 | x64 232 | Yu 233 | -------------------------------------------------------------------------------- /articles/Using-CNTK-with-Keras.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Using CNTK with Keras 3 | author: n17s 4 | ms.author: nikosk 5 | ms.date: 07/10/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | 13 | # Using CNTK with Keras (Beta) 14 | 15 | We are happy to bring CNTK as a back end for Keras as a beta release to our fans who have been asking for this feature. While there is still feature and performance work remaining to be done, we appreciate early feedback that would help us bake Keras support. Here are the instructions for you to follow. 16 | 17 | ## Enable CNTK as Keras back end 18 | 19 | **Step 1.** Install CNTK **>=2.0** if you haven't done so already. Earlier CNTK versions **will not work**. 20 | 21 | We assume you have a Python installation based on Anaconda. 22 | 23 | Choose the appropriate wheel file from the following pages to match your Python and machine environment. 24 | 25 | - For Windows: Choose a wheel URL from [here](./Setup-Windows-Python.md) 26 | 27 | - For Linux: Install the [prerequisites](./Setup-Linux-Python.md#prerequisites) and then choose a wheel URL from [here](./Setup-Linux-Python.md). 28 | 29 | Install the wheel file (substitute the URL you chose): 30 | 31 | ``` 32 | pip install <url> 33 | ``` 34 | 35 | **Step 2.** Install/Update Keras in the **same environment** as CNTK.
36 | 37 | First, if you have CNTK in an environment called *cntkpy*, do the following: 38 | 39 | For Windows: 40 | 41 | ``` 42 | activate cntkpy 43 | ``` 44 | 45 | For Linux: 46 | 47 | ``` 48 | source activate cntkpy 49 | ``` 50 | 51 | If you have a Keras installation (in the same environment as your CNTK installation), you will need to upgrade it to the latest version. 52 | 53 | ```pip install -U keras``` 54 | 55 | If you don't have Keras installed, the following command will install the latest version: 56 | 57 | ```pip install keras``` 58 | 59 | **Step 3.** Update Keras to use CNTK as back end 60 | 61 | You have two ways to set up CNTK as a Keras back end: 62 | 63 | > 3.1. By keras.json file. 64 | 65 | Please modify the "keras.json" file under %USERPROFILE%/.keras on Windows, or $HOME/.keras on Linux. **Only set the "backend" field to "cntk"**. If you do not have a ```keras.json```, that means you have not run Keras on this machine. Use Step 3.2 or create a .keras directory and a ```keras.json``` file with the following content. 66 | 67 | ``` 68 | { 69 | "epsilon": 1e-07, 70 | "image_data_format": "channels_last", 71 | "backend": "cntk", 72 | "floatx": "float32" 73 | } 74 | ``` 75 | 76 | > 3.2. By environment variable 77 | 78 | For Windows: 79 | 80 | ``` 81 | set KERAS_BACKEND=cntk 82 | ``` 83 | 84 | For Linux: 85 | 86 | ``` 87 | export KERAS_BACKEND=cntk 88 | ``` 89 | 90 | **Step 4.** Try out the Keras examples 91 | 92 | You can try some example scripts in the Keras repository: https://github.com/fchollet/keras/tree/master/examples 93 | 94 | For example, download "mnist_mlp.py" from the link above, and run: 95 | 96 | ```python mnist_mlp.py``` 97 | 98 | ## Known issues 99 | 100 | * Performance optimization on the CPU device in combination with Keras is an ongoing work item. 101 | 102 | * Currently not supported: Gradient as symbolic ops, stateful recurrent layer, masking on recurrent layer, padding with non-specified shape (to use the CNTK backend in Keras with padding, please specify a well-defined input shape), convolution with dilation, randomness op across batch axis, a few backend APIs such as `reverse`, `top_k`, `ctc`, `map`, `foldl`, `foldr`, etc. 103 | -------------------------------------------------------------------------------- /articles/Articles3/DeepCrossing.cntk: -------------------------------------------------------------------------------- 1 | WorkingDir = "." 2 | DataDir = "./data/" 3 | ModelDir = "$WorkingDir$/DeepCrossing" 4 | ConfigDir = "."
5 | 6 | MBSize=100 7 | LRate=0.0001 8 | 9 | CROSSDim1 = 512 10 | CROSSDim2 = 512 11 | CROSSDim3 = 256 12 | CROSSDim4 = 128 13 | CROSSDim5 = 64 14 | MAXEPOCHS = 50 15 | 16 | trainFile = "train.bin" 17 | validFile = "valid.bin" 18 | 19 | command = train 20 | precision = float 21 | train = [ 22 | action = train 23 | numMBsToShowResult=500 24 | deviceId = Auto 25 | minibatchSize =$MBSize$ 26 | modelPath = $ModelDir$/DeepCrossing.net 27 | traceLevel = 3 28 | 29 | SGD = [ 30 | epochSize=0 31 | learningRatesPerSample = $LRate$ #run 1 to 2 epochs, check the result and adjust this value 32 | momentumPerMB = 0 33 | maxEpochs=$MAXEPOCHS$ 34 | 35 | gradUpdateType=none 36 | gradientClippingWithTruncation=true 37 | clippingThresholdPerSample=1#INF 38 | 39 | AutoAdjust=[ 40 | reduceLearnRateIfImproveLessThan=0 41 | loadBestModel=true 42 | increaseLearnRateIfImproveMoreThan=1000000000 43 | learnRateDecreaseFactor=0.5 44 | autoAdjustLR=AdjustAfterEpoch 45 | learnRateAdjustInterval=2 46 | ] 47 | 48 | ] 49 | 50 | BrainScriptNetworkBuilder=[ 51 | 52 | // Helper Macro 53 | ResUnit(InnerDim, OutDim, INNODE) 54 | { 55 | L1 = DenseLayer {InnerDim, activation = ReLU, init = "gaussian"}( INNODE ) 56 | L2 = DenseLayer {OutDim, activation = Pass, init = "gaussian"}( L1 ) 57 | ResUnit = ReLU(Plus(INNODE, L2)) 58 | }.ResUnit 59 | 60 | // Constants 61 | T1Dim = XXX 62 | T2Dim = XXX 63 | T3Dim = XXX 64 | D1Dim = XXX 65 | D2Dim = XXX 66 | D3Dim = XXX 67 | S1Dim = XXX 68 | S2Dim = XXX 69 | 70 | // Inputs 71 | T1 = SparseInput(T1Dim) 72 | T2 = SparseInput(T2Dim) 73 | T3 = SparseInput(T3Dim) 74 | D1 = Input(D1Dim) 75 | D2 = Input(D2Dim) 76 | D3 = Input(D3Dim) 77 | S1 = SparseInput(S1Dim) 78 | S2 = SparseInput(S2Dim) 79 | 80 | 81 | // Embedding Constants 82 | EDim = XXX # Previously we stated 128 as a potential default 83 | 84 | // Embedding Layer 85 | T1E = DenseLayer {EDim, activation = ReLU, init = "gaussian"}( T1 ) 86 | T2E = DenseLayer {EDim, activation = ReLU, init = "gaussian"}( T2 ) 87 | T3E = DenseLayer {EDim, activation = ReLU, init = "gaussian"}( T3 ) 88 | S2E = DenseLayer {EDim, activation = ReLU, init = "gaussian"}( S2 ) 89 | 90 | // Stacking 91 | c = Splice(( T1E : T2E : T3E : D1 : D2 : D3 : S1 : S2E )) 92 | 93 | // ResNet Constants 94 | AllDim = EDim + EDim + EDim + EDim + D1Dim + D2Dim + D3Dim + S1Dim 95 | Inner1 = XXX 96 | Inner2 = XXX 97 | Inner3 = XXX 98 | Inner4 = XXX 99 | Inner5 = XXX 100 | 101 | // ResNet Layers 102 | r1 = ResUnit( Inner1, AllDim, c ) 103 | r2 = ResUnit( Inner2, AllDim, r1 ) 104 | r3 = ResUnit( Inner3, AllDim, r2 ) 105 | r4 = ResUnit( Inner4, AllDim, r3 ) 106 | r5 = ResUnit( Inner5, AllDim, r4 ) 107 | 108 | // Scoring Layer 109 | s = DenseLayer {1, activation = Sigmoid, init = "gaussian"}( r5 ) 110 | CE = logistic(Label, s) 111 | 112 | // Criteria, output, and evaluation nodes 113 | criterionNodes = (CE) 114 | evaluationNodes = (err) 115 | outputNodes = (s) 116 | ] 117 | 118 | reader = [ 119 | readerType = "CNTKTextFormatReader" 120 | file = "$DataDir$/$trainFile$" 121 | ] 122 | cvReader = [ 123 | readerType = "CNTKTextFormatReader" 124 | file = "$DataDir$/$validFile$" 125 | ] 126 | ] 127 | 128 | -------------------------------------------------------------------------------- /articles/ReleaseNotes/CNTK_1_7_1_Release_Notes.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: CNTK_1_7_1_Release_Notes 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 09/30/2016 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 |
ms.service: Cognitive-services 9 | ms.devlang: NA 10 | --- 11 | 12 | # CNTK_1_7_1_Release_Notes 13 | 14 | This is a summary of what's new in the CNTK 1.7.1 Binary Release. 15 | 16 | ## Breaking changes 17 | 18 | There are **two breaking changes** in this release. Please read this section carefully: 19 | 20 | * Layers library default initialization was changed from ```heNormal``` to ```glorotNormal```. Pass ```init="heNormal"``` to get 1.7 behaviour. 21 | * ```fsAdagrad``` had a bug. Learning rates must be retuned. To somewhat approximate the old behaviour, scale by ```sqrt(number of parameter tensors)/400```. 22 | 23 | ## BrainScript 24 | 25 | We have the following improvements in BrainScript. 26 | 27 | * BrainScript now allows relational operators inline, and scalar constants get automatically casted to ```Constant()```. Example: 28 | ``` 29 | HammingLoss (y, p) = ReduceSum (y != (p > 0.5)) 30 | ``` 31 | Compare this to the previous syntax: 32 | ``` 33 | HammingLoss (y, p) = ReduceSum (NotEqual (y, (Greater (p, Constant(0.5))))) 34 | ``` 35 | * The ```edit``` action can now use BrainScript. 36 | 37 | ## CNTK Model Evaluation library 38 | 39 | The following changes and improvements are introduced in V.1.7.1: 40 | 41 | * Model evaluation support for Azure Applications. The [Evaluate a model in an Azure WebApi page](../Evaluate-a-model-in-an-Azure-WebApi.md) provides detailed steps. 42 | * The Extended Eval interface adds support for evaluation of RNN models. 43 | 44 | ## Deterministic Algorithms 45 | 46 | * Adding ```forceDeterministicAlgorithms=true``` to the configuration will force use of deterministic algorithms if possible. This flag will force use of only a single thread for MKL and OMP operations. 47 | * **IMPORTANT!** The determinism changes require a new version of the CNTK Custom MKL (v2). For binary downloads, this is included in the package. If you build CNTK from sources, please follow the installation instructions for [Windows](../Setup-CNTK-on-Windows.md#mkl) or [Linux](../Setup-CNTK-on-Linux.md#mkl). 48 | 49 | ## Performance optimization 50 | 51 | We have made the following performance-related improvements: 52 | 53 | * GPU prefetch with pinned memory is implemented for the new readers (HTKDeserializers, CNTKTextFormat, and Image readers) 54 | * Type optimizations in the image reader that decrease memory pressure 55 | 56 | ## Other changes 57 | 58 | * Optimized logging: most bulk logging output now requires ```traceLevel=1``` 59 | * ```ParallelTrain.numGradientBits``` can now change over epochs 60 | 61 | ## Bug Fixes 62 | 63 | You will find the following fixes in this release: 64 | 65 | * Fix for packed sequences 66 | * Correctness for Sequence2Sequence 67 | * Automated minibatch scaling 68 | * Optimized ```lstm``` from cuDNN-5.1 69 | * Automatic minibatch-sizing no longer affects accuracy 70 | * Improved performance for certain kinds of recurrent networks (```PastValue()```) 71 | * ```fanout``` in ```glorot``` initialization is now correct 72 | * Fix for dimension error in cuDNN RNN wrapper 73 | * ```fsAdagrad``` denominator is now aggregated correctly 74 | * ```LSTMBlock{}``` and ```StabilizerLayer{}``` no longer create parameters from inside ```apply()``` 75 | * Improved default for the ```BatchNormalizationLayer{}``` time constant 76 | 77 | ## Python support 78 | 79 | Starting from v.1.5, you can use the preview of the Python API. See [CNTK v.1.5 Release Notes](./CNTK_1_5_Release_Notes.md) for further instructions.
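For orientation, a hypothetical top-level config fragment (a sketch, not shipped with this release) combining the determinism flag with the logging change above might look like:
```
# sketch of a top-level CNTK config fragment (hypothetical)
forceDeterministicAlgorithms = true   # forces single-threaded MKL and OMP operations
traceLevel = 1                        # bulk logging output now requires this
```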
80 | -------------------------------------------------------------------------------- /articles/BrainScript-and-Python-Performance-Profiler.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: BrainScript Performance Profiler 3 | author: chrisbasoglu 4 | ms.author: cbasoglu 5 | ms.date: 06/01/2017 6 | ms.custom: cognitive-toolkit 7 | ms.topic: conceptual 8 | ms.service: Cognitive-services 9 | ms.devlang: brainscript 10 | --- 11 | 12 | # BrainScript Performance Profiler 13 | 14 | 15 | CNTK has a performance profiler that can help debug performance issues. It generates a summary report and a detailed profile log. 16 | 17 | For details on using the performance profiler in Python, see [here](./Performance-Profiler.md). 18 | 19 | ## BrainScript config file parameters 20 | 21 | The behavior of the profiler is controlled through the following top-level parameters: 22 | * `profilerEnabled`: Enables/disables the profiler (`true` or `false`). Default is `false`. 23 | * `profilerBufferSize`: Size of the memory buffer used to store the event info, in bytes. Default is `33554432` (32 MB). 24 | * `profilerSyncGpu`: Controls whether CNTK waits for GPU processing to finish at the end of an event (`true` or `false`). This enables getting accurate timing for events which involve GPU processing, but might slow down the training since the CNTK process is blocked until all pending GPU processing is complete. Default is `true`. 25 | 26 | In most cases, it is sufficient to add a single line to the config file: 27 | 28 | profilerEnabled="true" 29 | 30 | ## Output location 31 | 32 | If the `WorkDir` macro is defined in the config file, profiler output goes to `$WorkDir$/profiler`. Otherwise, it goes to `./profiler`. 33 | 34 | ## Summary report 35 | 36 | There is a separate summary report for each worker node. It is named `_summary_.txt`. This report breaks down the time spent in each of the major phases of mini-batch processing. For each event it provides the average time, standard deviation, minimum/maximum time, total number of events, and total time spent in this event while the profiler was turned on. 37 | 38 | While mini-batch processing is taking place on the main thread, the data reader prefetches the next mini-batch on a background thread, so this event is shown separately. 39 | 40 | ## Detailed log 41 | Another set of files contains a detailed log of all events recorded during the profiling session. They are named `_detail_.csv`. For each event there is a record containing the event name, thread ID, begin timestamp, and end timestamp (in milliseconds). The .csv file can be loaded into a spreadsheet program for further analysis. 42 | 43 | ## Notes 44 | * The very first epoch is usually slower compared to subsequent epochs, so the profiler is enabled only at the beginning of the second epoch. 45 | * The overhead of profiling a single event is around 100 ns. 46 | * The `profilerSyncGpu` option slightly slows down the training, but enables getting meaningful times. With `profilerSyncGpu="false"` there is no slow-down. Mini-batch processing time is correct; however, the distribution of time between sub-events is not accurate (e.g. the gradient aggregation event starts before forward/backward processing is complete). 47 | * For ad-hoc profiling of a specific portion of code, a custom event can be added using `PROFILE_SCOPE("My custom event")`. Such events are included in the detailed log, but not in the summary report.

--------------------------------------------------------------------------------
/articles/Contributing-to-CNTK.md:
--------------------------------------------------------------------------------
---
title: How to contribute to CNTK
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 02/17/2016
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: NA
---

# How to contribute to CNTK

You want to contribute to CNTK? We're really excited to work together!

*Please note that the information on this page is likely to change as we add more services to our GitHub repository, so we recommend checking this page each time you want to make a contribution.*

Here are the steps you need to follow to see your code become part of CNTK:

## Preliminary Information

* Note that in most cases you will be required to accept the Microsoft Contribution License Agreement (CLA) *before your contribution is reviewed*. You may study the text of the agreement [here](https://cla.microsoft.com/cladoc/microsoft-contribution-license-agreement.pdf). You will be automatically notified whether you need to accept the CLA after you make a Pull Request (see below). The procedure is automated and should not take more than 5-7 minutes. Also, you will have to accept the CLA only **once**; we will not bother you with it during subsequent contributions
* Please make each contribution reasonably small; this allows us to review and accept it more quickly. If you would like to improve several points, divide them into separate Pull Requests
* If you would like to make **a really big** contribution, such as developing a brand new feature of CNTK, please **consult us first** by [raising an issue](https://github.com/Microsoft/CNTK/issues). We value your cooperation and respect your time, and thus want to ensure that we're ready for your work
* Consult the section describing how to [set up your development environment](./Setup-CNTK-from-source.md). Familiarize yourself with the [Developing and Testing](./Developing-and-Testing.md) and especially the [Coding Guidelines](./Coding-Guidelines.md) sections of the CNTK documentation.

## Making a contribution

* [Fork the CNTK repository](https://help.github.com/articles/fork-a-repo/)
* Code your contribution in the fork just created
* To make a contribution, create a [GitHub Pull Request](https://help.github.com/articles/creating-a-pull-request/) using the [Comparing across forks view](https://help.github.com/articles/comparing-commits-across-time/#comparing-across-forks). Use ```Microsoft/CNTK``` for ```base fork``` and the ```master``` branch for ```base```
* Please provide a short description of your contribution while creating the Pull Request
* If asked, accept the CLA (see the previous section). **Please note that we can NOT start reviewing your contribution until the CLA is in place or is in the state "*cla-not-required*".**
* We will *start* reviewing the Pull Request within *two business days*. Note that the actual length of the review depends on the nature of the proposed change and may take longer. You will see comments in the Pull Request as it goes along
* We ask you to ensure that your branch has no merge conflicts with ```master``` (the GitHub Pull Request web interface informs you about this). We ask you to ensure this conflict-free state both *before* and *after* the contribution review. (That is, if ongoing ```master``` updates during the review result in a merge conflict, we will ask you to resolve it and make a new commit before we proceed with the integration)
* If the contribution is accepted and free of merge conflicts, it will be merged into the ```master``` branch

That's it! We're looking forward to getting your contribution!

--------------------------------------------------------------------------------
/articles/OptimizedRNNStack.md:
--------------------------------------------------------------------------------
---
title: Optimized RNN Stack
author: chrisbasoglu
ms.author: cbasoglu
ms.date: 08/28/2016
ms.custom: cognitive-toolkit
ms.topic: conceptual
ms.service: Cognitive-services
ms.devlang: NA
---

# Optimized RNN Stack

Implements the optimized CuDNN5 RNN stack of one or more recurrent network layers.

    OptimizedRNNStack (weights, input,
                       hiddenDims, numLayers = 1,
                       bidirectional = false,
                       recurrentOp='lstm')

## Parameters

* `weights`: one weight matrix containing all model parameters as a single matrix. Use [dimension inference](./Parameters-And-Constants.md#automatic-dimension-inference), cf. the description below.
* `input`: data to apply the stack of one or more recurrent networks to. Must be a sequence, and must not be sparse.
* `hiddenDims`: dimension of the hidden state in each layer and, if bidirectional, of each of the two directions
* `numLayers` (default: 1): number of layers
* `bidirectional` (default: false): if true, the model is bidirectional
* `recurrentOp` (default: `lstm`): select the RNN type. Allowed values: `lstm`, `gru`, `rnnTanh`, `rnnReLU`

## Description

This function gives access to the CuDNN5 RNN, a highly efficient implementation of a stack of one or more layers of recurrent networks. We have observed speed-ups on the order of 5x compared to an explicit implementation as a computation network in BrainScript. Although it is not as flexible as a BrainScript implementation, the speed-up in training time may be worth the compromise (note, however, that such models can only be deployed on machines with a GPU).

The networks can be uni- or bidirectional, and be of the following kind (`recurrentOp` parameter):

* `lstm`: Long Short-Term Memory (Hochreiter and Schmidhuber)
* `gru`: Gated Recurrent Unit
* `rnnTanh`: plain RNN with a `tanh` non-linearity
* `rnnReLU`: plain RNN with a rectified linear non-linearity

All weights are contained in a single matrix that should have `hiddenDims` rows and as many columns as needed to hold all parameters. Since this can be cumbersome to determine, you can have the dimension [inferred](./Parameters-And-Constants.md#automatic-dimension-inference) automatically.
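For orientation, a rough parameter count (a sketch, assuming the standard CuDNN LSTM parameterization of 8 weight matrices and 8 bias vectors): a single unidirectional `lstm` layer with input dimension `inputDim` holds approximately `4 * hiddenDims * (inputDim + hiddenDims + 2)` values, i.e. the weight matrix would need about `4 * (inputDim + hiddenDims + 2)` columns. The exact CuDNN packing may differ, which is why inference is the recommended route.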
To make sure that random initialization uses the correct fan-in, specify `initOutputRank=-1`:

    W = ParameterTensor {(Inferred:Inferred), initOutputRank=-1}

If you use the `lstm` operation, we recommend using this primitive through [`RecurrentLSTMLayerStack{}`](./BrainScript-Layers-Reference.md#recurrentlstmlayer-recurrentlstmlayerstack), which will take care of creating the weights.

### Training on GPU, deploy on CPU

Currently, it is not possible to deploy an RNN trained as an `OptimizedRNNStack()` on systems without GPUs. We believe it is possible to perform a post-training model-editing action that replaces the `OptimizedRNNStack()` nodes with CPU-compatible native BrainScript expressions that precisely emulate the CuDNN5 RNN implementation.

## Example

A speech recognition model that consists of a 3-hidden-layer bidirectional LSTM with a hidden-state dimension per layer and direction of 512:

    features = Input {40}
    W = ParameterTensor {(Inferred:Inferred), initOutputRank=-1, initValueScale=1/10}
    h = OptimizedRNNStack (W, features, 512, numLayers=3, bidirectional=true)
    p = DenseLayer {9000, activation=Softmax, init='heUniform', initValueScale=1/3}
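
For comparison, a similar network sketched with the recommended `RecurrentLSTMLayerStack{}` wrapper, which creates the weight parameters itself. This is a unidirectional sketch only, and the exact layer signature should be checked against the [Layers Reference](./BrainScript-Layers-Reference.md#recurrentlstmlayer-recurrentlstmlayerstack):

    # sketch: a unidirectional 3-layer LSTM stack; the dimensions reuse the example above
    features = Input {40}
    h = RecurrentLSTMLayerStack {(512:512:512)} (features)
    p = DenseLayer {9000, activation=Softmax, init='heUniform', initValueScale=1/3} (h)

--------------------------------------------------------------------------------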