├── inst └── models │ ├── .gitignore │ ├── tensorflow-mnist │ └── variables │ │ ├── variables.index │ │ └── variables.data-00000-of-00001 │ ├── keras-multiples.R │ ├── tensorflow-multiples.R │ ├── tfestimators-mtcars.R │ ├── keras-mnist.R │ └── tensorflow-mnist.R ├── images ├── MNIST.png ├── swagger.png └── mnist-digits.gif ├── tests ├── testthat.R └── testthat │ ├── models │ ├── keras-1.12.0 │ │ ├── saved_model.pb │ │ └── variables │ │ │ ├── variables.index │ │ │ └── variables.data-00000-of-00001 │ ├── keras-1.13.1 │ │ ├── saved_model.pb │ │ └── variables │ │ │ ├── variables.index │ │ │ └── variables.data-00000-of-00001 │ ├── keras-2.0.0-alpha0 │ │ ├── saved_model.pb │ │ └── variables │ │ │ ├── variables.index │ │ │ └── variables.data-00000-of-00001 │ ├── tensorflow-1.12.0 │ │ ├── saved_model.pb │ │ └── variables │ │ │ ├── variables.index │ │ │ └── variables.data-00000-of-00001 │ ├── tensorflow-1.13.1 │ │ ├── saved_model.pb │ │ └── variables │ │ │ ├── variables.index │ │ │ └── variables.data-00000-of-00001 │ ├── keras-multiple-2.0.0-alpha0 │ │ ├── saved_model.pb │ │ └── variables │ │ │ ├── variables.index │ │ │ └── variables.data-00000-of-00001 │ └── keras-multiple-1.13.1 │ │ └── saved_model.pbtxt │ ├── test-load.R │ ├── helper-utils.R │ ├── data │ └── get-mnist.R │ ├── model-building │ ├── keras-multiple-1.13.1.R │ ├── keras-multiple-2.0.0-alpha0.R │ ├── keras-1.12.0.R │ ├── keras-1.13.1.R │ ├── keras-2.0.0-alpha0.R │ ├── tensorflow-1.12.0.R │ └── tensorflow-1.13.1.R │ ├── helper-model-building.R │ ├── test-serve.R │ └── helper-serve.R ├── NEWS.md ├── docs ├── images │ ├── MNIST.png │ ├── swagger.png │ └── mnist-digits.gif ├── tools │ └── readme │ │ └── swagger.png ├── link.svg ├── pkgdown.js ├── jquery.sticky-kit.min.js ├── pkgdown.css ├── articles │ ├── index.html │ ├── guides-exporting.html │ ├── guides-tensorflow-serving.html │ ├── test │ │ └── saved-models-from-R.html │ └── saved-models-from-R.html ├── authors.html └── reference │ ├── reexports.html │ ├── serve.html │ ├── index.html │ ├── mnist_train_save.html │ ├── convert_savedmodel.html │ ├── serve_savedmodel.html │ ├── convert_savedmodel.tflite_conversion.html │ ├── predict_savedmodel.webapi_prediction.html │ ├── predict_savedmodel.cloudml_prediction.html │ ├── predict_savedmodel.export_prediction.html │ ├── load_savedmodel.html │ ├── predict_savedmodel.graph_prediction.html │ └── predict_savedmodel.html ├── tools └── readme │ └── swagger.png ├── vignettes ├── images │ ├── MNIST.png │ ├── swagger.png │ └── mnist-digits.gif └── saved_models.Rmd ├── .travis.R ├── pkgdown └── _pkgdown.yml ├── R ├── reexports.R ├── predict_export.R ├── utils.R ├── predict_webapi.R ├── predict_interface.R ├── load.R ├── tensor.R ├── predict_graph.R ├── serve.R └── swagger.R ├── .Rbuildignore ├── tfdeploy.Rproj ├── .travis.yml ├── .gitignore ├── man ├── reexports.Rd ├── serve_savedmodel.Rd ├── load_savedmodel.Rd ├── predict_savedmodel.webapi_prediction.Rd ├── predict_savedmodel.export_prediction.Rd ├── predict_savedmodel.graph_prediction.Rd └── predict_savedmodel.Rd ├── NAMESPACE ├── scripts └── travis_install.sh ├── DESCRIPTION └── README.md /inst/models/.gitignore: -------------------------------------------------------------------------------- 1 | keras-mnist 2 | -------------------------------------------------------------------------------- /images/MNIST.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/images/MNIST.png 
-------------------------------------------------------------------------------- /images/swagger.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/images/swagger.png -------------------------------------------------------------------------------- /tests/testthat.R: -------------------------------------------------------------------------------- 1 | library(testthat) 2 | library(tfdeploy) 3 | 4 | test_check("tfdeploy") 5 | -------------------------------------------------------------------------------- /NEWS.md: -------------------------------------------------------------------------------- 1 | # tfdeploy 0.6.1 2 | 3 | * Added a `NEWS.md` file to track changes to the package. 4 | -------------------------------------------------------------------------------- /docs/images/MNIST.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/docs/images/MNIST.png -------------------------------------------------------------------------------- /docs/images/swagger.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/docs/images/swagger.png -------------------------------------------------------------------------------- /images/mnist-digits.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/images/mnist-digits.gif -------------------------------------------------------------------------------- /tools/readme/swagger.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tools/readme/swagger.png -------------------------------------------------------------------------------- /docs/images/mnist-digits.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/docs/images/mnist-digits.gif -------------------------------------------------------------------------------- /vignettes/images/MNIST.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/vignettes/images/MNIST.png -------------------------------------------------------------------------------- /vignettes/images/swagger.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/vignettes/images/swagger.png -------------------------------------------------------------------------------- /docs/tools/readme/swagger.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/docs/tools/readme/swagger.png -------------------------------------------------------------------------------- /vignettes/images/mnist-digits.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/vignettes/images/mnist-digits.gif -------------------------------------------------------------------------------- /tests/testthat/models/keras-1.12.0/saved_model.pb: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-1.12.0/saved_model.pb -------------------------------------------------------------------------------- /tests/testthat/models/keras-1.13.1/saved_model.pb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-1.13.1/saved_model.pb -------------------------------------------------------------------------------- /inst/models/tensorflow-mnist/variables/variables.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/inst/models/tensorflow-mnist/variables/variables.index -------------------------------------------------------------------------------- /tests/testthat/models/keras-2.0.0-alpha0/saved_model.pb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-2.0.0-alpha0/saved_model.pb -------------------------------------------------------------------------------- /tests/testthat/models/tensorflow-1.12.0/saved_model.pb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/tensorflow-1.12.0/saved_model.pb -------------------------------------------------------------------------------- /tests/testthat/models/tensorflow-1.13.1/saved_model.pb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/tensorflow-1.13.1/saved_model.pb -------------------------------------------------------------------------------- /tests/testthat/models/keras-1.12.0/variables/variables.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-1.12.0/variables/variables.index -------------------------------------------------------------------------------- /tests/testthat/models/keras-1.13.1/variables/variables.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-1.13.1/variables/variables.index -------------------------------------------------------------------------------- /tests/testthat/models/keras-multiple-2.0.0-alpha0/saved_model.pb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-multiple-2.0.0-alpha0/saved_model.pb -------------------------------------------------------------------------------- /tests/testthat/models/tensorflow-1.12.0/variables/variables.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/tensorflow-1.12.0/variables/variables.index -------------------------------------------------------------------------------- /tests/testthat/models/tensorflow-1.13.1/variables/variables.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/tensorflow-1.13.1/variables/variables.index 
-------------------------------------------------------------------------------- /tests/testthat/test-load.R: -------------------------------------------------------------------------------- 1 | context("Load Model") 2 | 3 | test_that("loading invalid savedmodel should fail", { 4 | expect_error( 5 | load_savedmodel(NULL) 6 | ) 7 | }) 8 | -------------------------------------------------------------------------------- /inst/models/tensorflow-mnist/variables/variables.data-00000-of-00001: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/inst/models/tensorflow-mnist/variables/variables.data-00000-of-00001 -------------------------------------------------------------------------------- /tests/testthat/models/keras-2.0.0-alpha0/variables/variables.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-2.0.0-alpha0/variables/variables.index -------------------------------------------------------------------------------- /tests/testthat/models/keras-1.12.0/variables/variables.data-00000-of-00001: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-1.12.0/variables/variables.data-00000-of-00001 -------------------------------------------------------------------------------- /tests/testthat/models/keras-1.13.1/variables/variables.data-00000-of-00001: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-1.13.1/variables/variables.data-00000-of-00001 -------------------------------------------------------------------------------- /tests/testthat/models/keras-multiple-2.0.0-alpha0/variables/variables.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-multiple-2.0.0-alpha0/variables/variables.index -------------------------------------------------------------------------------- /.travis.R: -------------------------------------------------------------------------------- 1 | parent_dir <- dir("../", full.names = TRUE) 2 | package <- parent_dir[grepl("tfdeploy_", parent_dir)] 3 | install.packages(package, repos = NULL, type = "source") 4 | 5 | source("testthat.R") 6 | -------------------------------------------------------------------------------- /tests/testthat/models/keras-2.0.0-alpha0/variables/variables.data-00000-of-00001: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-2.0.0-alpha0/variables/variables.data-00000-of-00001 -------------------------------------------------------------------------------- /tests/testthat/models/tensorflow-1.12.0/variables/variables.data-00000-of-00001: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/tensorflow-1.12.0/variables/variables.data-00000-of-00001 -------------------------------------------------------------------------------- /tests/testthat/models/tensorflow-1.13.1/variables/variables.data-00000-of-00001: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/tensorflow-1.13.1/variables/variables.data-00000-of-00001 -------------------------------------------------------------------------------- /tests/testthat/helper-utils.R: -------------------------------------------------------------------------------- 1 | skip_if_no_tensorflow <- function() { 2 | skip_on_cran() 3 | 4 | if (!reticulate::py_module_available("tensorflow")) 5 | skip("TensorFlow not available for testing") 6 | } 7 | -------------------------------------------------------------------------------- /tests/testthat/models/keras-multiple-2.0.0-alpha0/variables/variables.data-00000-of-00001: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rstudio/tfdeploy/HEAD/tests/testthat/models/keras-multiple-2.0.0-alpha0/variables/variables.data-00000-of-00001 -------------------------------------------------------------------------------- /pkgdown/_pkgdown.yml: -------------------------------------------------------------------------------- 1 | template: 2 | params: 3 | bootswatch: cosmo 4 | 5 | reference: 6 | - title: "Operations" 7 | contents: 8 | - load_savedmodel 9 | - predict_savedmodel 10 | - serve_savedmodel 11 | -------------------------------------------------------------------------------- /R/reexports.R: -------------------------------------------------------------------------------- 1 | #' @importFrom magrittr %>% 2 | magrittr::`%>%` 3 | 4 | #' @importFrom tensorflow export_savedmodel 5 | #' @export 6 | tensorflow::export_savedmodel 7 | 8 | #' @importFrom tensorflow view_savedmodel 9 | #' @export 10 | tensorflow::view_savedmodel 11 | 12 | -------------------------------------------------------------------------------- /.Rbuildignore: -------------------------------------------------------------------------------- 1 | ^.*\.Rproj$ 2 | ^\.Rproj\.user$ 3 | ^models$ 4 | ^docs$ 5 | ^_pkgdown\.yml$ 6 | ^.travis.yml$ 7 | ^README.Rmd$ 8 | ^.*\MNIST-data$ 9 | ^.travis.R$ 10 | ^images$ 11 | ^pkgdown$ 12 | runs$ 13 | ^scripts$ 14 | ^scratch$ 15 | ^.*saved_model\.pb$ 16 | ^.inst/models/.*-mnist$ 17 | ^savedmodel$ 18 | ^tests/testthat/data/mnist.rds$ 19 | -------------------------------------------------------------------------------- /tests/testthat/data/get-mnist.R: -------------------------------------------------------------------------------- 1 | x <- keras::dataset_mnist() 2 | x$train$x <- t(apply(x$train$x, 1, function(x) x/255)) 3 | x$test$x <- t(apply(x$test$x, 1, function(x) x/255)) 4 | 5 | x$train$y <- keras::to_categorical(x$train$y, num_classes = 10) 6 | x$test$y <- keras::to_categorical(x$test$y, num_classes = 10) 7 | 8 | saveRDS(x, "data/mnist.rds") -------------------------------------------------------------------------------- /tfdeploy.Rproj: -------------------------------------------------------------------------------- 1 | Version: 1.0 2 | 3 | RestoreWorkspace: Default 4 | SaveWorkspace: Default 5 | AlwaysSaveHistory: Default 6 | 7 | EnableCodeIndexing: Yes 8 | UseSpacesForTab: Yes 9 | NumSpacesForTab: 2 10 | Encoding: UTF-8 11 | 12 | RnwWeave: Sweave 13 | LaTeX: pdfLaTeX 14 | 15 | AutoAppendNewline: Yes 16 | StripTrailingWhitespace: Yes 17 | 18 | BuildType: Package 19 | PackageUseDevtools: Yes 20 | PackageInstallArgs: --no-multiarch --with-keep.source 21 | PackageRoxygenize: rd,collate,namespace 22 | -------------------------------------------------------------------------------- /inst/models/keras-multiples.R: 
-------------------------------------------------------------------------------- 1 | library(keras) 2 | 3 | input1 <- layer_input(name = "input1", dtype = "float32", shape = c(1)) 4 | input2 <- layer_input(name = "input2", dtype = "float32", shape = c(1)) 5 | 6 | output1 <- layer_add(name = "output1", inputs = c(input1, input2)) 7 | output2 <- layer_add(name = "output2", inputs = c(input2, input1)) 8 | 9 | model <- keras_model( 10 | inputs = c(input1, input2), 11 | outputs = c(output1, output2) 12 | ) 13 | 14 | export_savedmodel(model, "keras-multiple", as_text = TRUE) 15 | -------------------------------------------------------------------------------- /inst/models/tensorflow-multiples.R: -------------------------------------------------------------------------------- 1 | library(tensorflow) 2 | 3 | # define output folder 4 | model_dir <- tempfile() 5 | 6 | # define simple string-based tensor operations 7 | sess <- tf$Session() 8 | input1 <- tf$placeholder(tf$string) 9 | input2 <- tf$placeholder(tf$string) 10 | output1 <- tf$string_join(inputs = c("Input1: ", input1, "!")) 11 | output2 <- tf$string_join(inputs = c("Input2: ", input2, "!")) 12 | 13 | export_savedmodel( 14 | sess, 15 | "tensorflow-multiple", 16 | inputs = list(i1 = input1, i2 = input2), 17 | outputs = list(o1 = output1, o2 = output2), 18 | as_text = TRUE) 19 | -------------------------------------------------------------------------------- /tests/testthat/model-building/keras-multiple-1.13.1.R: -------------------------------------------------------------------------------- 1 | reticulate::use_virtualenv("tf-1.13.1") 2 | 3 | library(keras) 4 | 5 | input1 <- layer_input(name = "input1", dtype = "float32", shape = c(1)) 6 | input2 <- layer_input(name = "input2", dtype = "float32", shape = c(1)) 7 | 8 | output1 <- layer_add(name = "output1", inputs = c(input1, input2)) 9 | output2 <- layer_add(name = "output2", inputs = c(input2, input1)) 10 | 11 | model <- keras_model( 12 | inputs = c(input1, input2), 13 | outputs = c(output1, output2) 14 | ) 15 | 16 | export_savedmodel(model, "models/keras-multiple-1.13.1", as_text = TRUE) 17 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: r 2 | 3 | dist: trusty 4 | sudo: false 5 | warnings_are_errors: false 6 | 7 | env: 8 | - TF_VERSION="1.12.0" 9 | - TF_VERSION="1.13.1" 10 | - TF_VERSION="2.0.0-alpha0" 11 | - TF_VERSION="nightly" 12 | 13 | cache: 14 | packages: true 15 | directories: 16 | - $HOME/.cache/pip 17 | 18 | r_packages: 19 | - covr 20 | 21 | before_script: 22 | - source scripts/travis_install.sh 23 | 24 | script: 25 | - | 26 | R CMD build . 27 | R CMD check --no-build-vignettes --no-manual --no-tests tfdeploy*tar.gz 28 | cd tests 29 | travis_wait 30 Rscript ../.travis.R 30 | 31 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # History files 2 | .Rhistory 3 | .Rapp.history 4 | # Session Data files 5 | .RData 6 | # User Data files 7 | .Ruserdata 8 | # Example code in package build process 9 | *-Ex.R 10 | .DS_Store 11 | # RStudio files 12 | .Rproj.user 13 | # produced vignettes 14 | vignettes/*.html 15 | vignettes/*.pdf 16 | # internal files (e.g. 
scratch files) 17 | internal 18 | # OAuth2 token, see https://github.com/hadley/httr/releases/tag/v0.3 19 | .httr-oauth 20 | MNIST-data 21 | docs/trained 22 | docs/articles/trained 23 | runs/ 24 | docs/articles/tensorflow-mnist/ 25 | scratch 26 | savedmodel 27 | tests/testthat/data/mnist.rds 28 | -------------------------------------------------------------------------------- /tests/testthat/model-building/keras-multiple-2.0.0-alpha0.R: -------------------------------------------------------------------------------- 1 | reticulate::use_virtualenv("tf-2.0.0-alpha0") 2 | 3 | library(keras) 4 | 5 | input1 <- layer_input(name = "input1", dtype = "float32", shape = c(1)) 6 | input2 <- layer_input(name = "input2", dtype = "float32", shape = c(1)) 7 | 8 | output1 <- layer_add(name = "output1", inputs = c(input1, input2)) 9 | output2 <- layer_add(name = "output2", inputs = c(input2, input1)) 10 | 11 | model <- keras_model( 12 | inputs = c(input1, input2), 13 | outputs = c(output1, output2) 14 | ) 15 | 16 | export_savedmodel(model, "models/keras-multiple-2.0.0-alpha0", as_text = TRUE) 17 | -------------------------------------------------------------------------------- /man/reexports.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/reexports.R 3 | \docType{import} 4 | \name{reexports} 5 | \alias{reexports} 6 | \alias{\%>\%} 7 | \alias{export_savedmodel} 8 | \alias{view_savedmodel} 9 | \title{Objects exported from other packages} 10 | \keyword{internal} 11 | \description{ 12 | These objects are imported from other packages. Follow the links 13 | below to see their documentation. 14 | 15 | \describe{ 16 | \item{magrittr}{\code{\link[magrittr]{\%>\%}}} 17 | 18 | \item{tensorflow}{\code{\link[tensorflow]{export_savedmodel}}, \code{\link[tensorflow]{view_savedmodel}}} 19 | }} 20 | 21 | -------------------------------------------------------------------------------- /inst/models/tfestimators-mtcars.R: -------------------------------------------------------------------------------- 1 | library(tfestimators) 2 | 3 | mtcars_input_fn <- function(data, num_epochs = 1) { 4 | input_fn(data, 5 | features = c("disp", "cyl"), 6 | response = "mpg", 7 | batch_size = 32, 8 | num_epochs = num_epochs) 9 | } 10 | 11 | cols <- feature_columns(column_numeric("disp"), column_numeric("cyl")) 12 | 13 | model <- linear_regressor(feature_columns = cols) 14 | 15 | indices <- sample(1:nrow(mtcars), size = 0.80 * nrow(mtcars)) 16 | train <- mtcars[indices, ] 17 | test <- mtcars[-indices, ] 18 | 19 | model %>% train(mtcars_input_fn(train, num_epochs = 10)) 20 | 21 | export_savedmodel(model, "tfestimators-mtcars", as_text = TRUE) 22 | -------------------------------------------------------------------------------- /NAMESPACE: -------------------------------------------------------------------------------- 1 | # Generated by roxygen2: do not edit by hand 2 | 3 | S3method(predict_savedmodel,export_prediction) 4 | S3method(predict_savedmodel,graph_prediction) 5 | S3method(predict_savedmodel,webapi_prediction) 6 | S3method(print,savedmodel_predictions) 7 | export(export_savedmodel) 8 | export(load_savedmodel) 9 | export(predict_savedmodel) 10 | export(serve_savedmodel) 11 | export(view_savedmodel) 12 | import(jsonlite) 13 | import(swagger) 14 | import(tensorflow) 15 | importFrom(httpuv,runServer) 16 | importFrom(magrittr,"%>%") 17 | importFrom(reticulate,import_builtins) 18 | importFrom(tensorflow,export_savedmodel) 
19 | importFrom(tensorflow,view_savedmodel) 20 | importFrom(tools,file_ext) 21 | importFrom(utils,untar) 22 | -------------------------------------------------------------------------------- /R/predict_export.R: -------------------------------------------------------------------------------- 1 | #' Predict using an Exported SavedModel 2 | #' 3 | #' Performs a prediction using a locally exported SavedModel. 4 | #' 5 | #' @inheritParams predict_savedmodel 6 | #' 7 | #' @param signature_name The named entry point to use in the model for prediction. 8 | #' 9 | #' @export 10 | predict_savedmodel.export_prediction <- function( 11 | instances, 12 | model, 13 | signature_name = "serving_default", 14 | ...) { 15 | with_new_session(function(sess) { 16 | 17 | graph <- load_savedmodel(sess, model) 18 | predict_savedmodel( 19 | instances = instances, 20 | model = graph, 21 | type = "graph", 22 | signature_name = signature_name, 23 | sess = sess 24 | ) 25 | 26 | }) 27 | } 28 | -------------------------------------------------------------------------------- /R/utils.R: -------------------------------------------------------------------------------- 1 | #' @import tensorflow 2 | with_new_session <- function(f) { 3 | 4 | if (tensorflow::tf_version() >= "2.0") 5 | sess <- tf$compat$v1$Session() 6 | else 7 | sess <- tf$Session() 8 | 9 | on.exit(sess$close(), add = TRUE) 10 | 11 | f(sess) 12 | } 13 | 14 | append_predictions_class <- function(x) { 15 | class(x) <- c(class(x), "savedmodel_predictions") 16 | x 17 | } 18 | 19 | #' @importFrom reticulate import_builtins 20 | py_dict_get_keys <- function(x) { 21 | py_builtins <- import_builtins() 22 | keys <- x$keys() 23 | 24 | # python 3 returns keys as KeysView not list 25 | if (!is.list(keys) && !is.character(keys)) { 26 | keys <- as.list(py_builtins$list(x$keys())) 27 | } 28 | 29 | keys 30 | } 31 | -------------------------------------------------------------------------------- /R/predict_webapi.R: -------------------------------------------------------------------------------- 1 | #' Predict using a Web API 2 | #' 3 | #' Performs a prediction using a Web API providing a SavedModel. 4 | #' 5 | #' @inheritParams predict_savedmodel 6 | #' 7 | #' @export 8 | predict_savedmodel.webapi_prediction <- function( 9 | instances, 10 | model, 11 | ...) 
{ 12 | 13 | text_response <- httr::POST( 14 | url = model, 15 | body = list( 16 | instances = instances 17 | ), 18 | encode = "json" 19 | ) %>% httr::content(as = "text") 20 | 21 | tryCatch({ 22 | response <- text_response %>% 23 | jsonlite::fromJSON(simplifyDataFrame = FALSE) 24 | 25 | if (!identical(response$error, NULL)) 26 | stop(response$error) 27 | 28 | append_predictions_class(response) 29 | }, error = function(e) { 30 | stop(text_response) 31 | }) 32 | } 33 | -------------------------------------------------------------------------------- /docs/link.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 5 | 8 | 12 | 13 | -------------------------------------------------------------------------------- /tests/testthat/helper-model-building.R: -------------------------------------------------------------------------------- 1 | 2 | fnames <- list.files("model-building/") 3 | 4 | models <- gsub(".R$", "", fnames) 5 | 6 | get_version <- function(model) { 7 | stringr::str_extract(model, "[0-9].*$") 8 | } 9 | 10 | 11 | for (model in models) { 12 | 13 | if (dir.exists(paste0("models/", model))) 14 | next 15 | 16 | tf_version <- get_version(model) 17 | envname <- paste0("tf-", tf_version) 18 | 19 | if (!envname %in% reticulate::virtualenv_list()) 20 | install_tensorflow(version = tf_version, envname = envname, restart_session = FALSE) 21 | 22 | message("Running ", model) 23 | 24 | p <- processx::process$new( 25 | command = "Rscript", 26 | args = paste0("model-building/", model, ".R"), stderr = "|", stdout = "|", 27 | wd = getwd() 28 | ) 29 | 30 | while(p$is_alive()) Sys.sleep(1) 31 | 32 | } 33 | 34 | 35 | 36 | 37 | 38 | 39 | -------------------------------------------------------------------------------- /tests/testthat/model-building/keras-1.12.0.R: -------------------------------------------------------------------------------- 1 | 2 | reticulate::use_virtualenv("tf-1.12.0") 3 | 4 | # Packages ---------------------------------------------------------------- 5 | 6 | library(tensorflow) 7 | library(keras) 8 | 9 | 10 | mnist <- readRDS("data/mnist.rds") 11 | next_batch <- function() { 12 | ids <- sample.int(nrow(mnist$train$x), size = 32) 13 | list( 14 | x = mnist$train$x[ids,], 15 | y = mnist$train$y[ids,] 16 | ) 17 | } 18 | 19 | model <- keras_model_sequential() %>% 20 | layer_dense(units = 10, input_shape = 784, activation = "softmax") 21 | 22 | model %>% compile(optimizer = "sgd", loss = "categorical_crossentropy", 23 | metrics = "accuracy") 24 | 25 | model %>% fit(mnist$train$x, mnist$train$y, epochs = 1) 26 | 27 | evaluate(model, mnist$test$x, mnist$test$y) 28 | 29 | export_savedmodel( 30 | model, 31 | "models/keras-1.12.0/" 32 | ) 33 | 34 | 35 | 36 | 37 | -------------------------------------------------------------------------------- /tests/testthat/model-building/keras-1.13.1.R: -------------------------------------------------------------------------------- 1 | 2 | reticulate::use_virtualenv("tf-1.13.1") 3 | 4 | # Packages ---------------------------------------------------------------- 5 | 6 | library(tensorflow) 7 | library(keras) 8 | 9 | 10 | mnist <- readRDS("data/mnist.rds") 11 | next_batch <- function() { 12 | ids <- sample.int(nrow(mnist$train$x), size = 32) 13 | list( 14 | x = mnist$train$x[ids,], 15 | y = mnist$train$y[ids,] 16 | ) 17 | } 18 | 19 | model <- keras_model_sequential() %>% 20 | layer_dense(units = 10, input_shape = 784, activation = "softmax") 21 | 22 | model %>% compile(optimizer = "sgd", loss = 
"categorical_crossentropy", 23 | metrics = "accuracy") 24 | 25 | model %>% fit(mnist$train$x, mnist$train$y, epochs = 1) 26 | 27 | evaluate(model, mnist$test$x, mnist$test$y) 28 | 29 | export_savedmodel( 30 | model, 31 | "models/keras-1.13.1/" 32 | ) 33 | 34 | 35 | 36 | 37 | -------------------------------------------------------------------------------- /tests/testthat/model-building/keras-2.0.0-alpha0.R: -------------------------------------------------------------------------------- 1 | 2 | reticulate::use_virtualenv("tf-2.0.0") 3 | 4 | # Packages ---------------------------------------------------------------- 5 | 6 | library(tensorflow) 7 | library(keras) 8 | 9 | 10 | mnist <- readRDS("data/mnist.rds") 11 | next_batch <- function() { 12 | ids <- sample.int(nrow(mnist$train$x), size = 32) 13 | list( 14 | x = mnist$train$x[ids,], 15 | y = mnist$train$y[ids,] 16 | ) 17 | } 18 | 19 | model <- keras_model_sequential() %>% 20 | layer_dense(units = 10, input_shape = 784, activation = "softmax") 21 | 22 | model %>% compile(optimizer = "sgd", loss = "categorical_crossentropy", 23 | metrics = "accuracy") 24 | 25 | model %>% fit(mnist$train$x, mnist$train$y, epochs = 1) 26 | 27 | evaluate(model, mnist$test$x, mnist$test$y) 28 | 29 | export_savedmodel( 30 | model, 31 | "models/keras-2.0.0-alpha0/" 32 | ) 33 | 34 | 35 | 36 | 37 | -------------------------------------------------------------------------------- /scripts/travis_install.sh: -------------------------------------------------------------------------------- 1 | pip2.7 install --upgrade --ignore-installed --user travis pip setuptools wheel virtualenv 2 | 3 | if [[ "$TF_VERSION" == "1.3" ]]; then 4 | echo "Installing TensorFlow v1.3 ..."; 5 | Rscript -e 'tensorflow::install_tensorflow(version = "1.3")'; 6 | elif [[ "$TF_VERSION" == "1.4" ]]; then 7 | echo "Installing TensorFlow v1.4 ..."; 8 | Rscript -e 'tensorflow::install_tensorflow(version = "1.4")'; 9 | elif [[ "$TF_VERSION" == "1.7" ]]; then 10 | echo "Installing TensorFlow v1.7 ..."; 11 | Rscript -e 'tensorflow::install_tensorflow(version = "1.7")'; 12 | elif [[ "$TF_VERSION" == "nightly" ]]; then 13 | echo "Installing TensorFlow nightly ..."; 14 | Rscript -e 'tensorflow::install_tensorflow(version = "nightly")'; 15 | else 16 | echo "Installing Tensorflow $TF_VERSION ..." 17 | Rscript -e 'tensorflow::install_tensorflow(version = "${TF_VERSION}")'; 18 | fi 19 | -------------------------------------------------------------------------------- /DESCRIPTION: -------------------------------------------------------------------------------- 1 | Package: tfdeploy 2 | Type: Package 3 | Title: Deploy 'TensorFlow' Models 4 | Version: 0.6.1 5 | Authors@R: c( 6 | person("Javier", "Luraschi", email = "javier@rstudio.com", role = c("aut", "ctb")), 7 | person("Daniel", "Falbel", email = "daniel@rstudio.com", role = c("cre", "ctb")), 8 | person(family = "RStudio", role = c("cph")) 9 | ) 10 | Maintainer: Daniel Falbel 11 | Description: Tools to deploy 'TensorFlow' models across 12 | multiple services. Currently, it provides a local server for testing 'cloudml' 13 | compatible services. 
14 | License: Apache License 2.0 15 | Encoding: UTF-8 16 | LazyData: true 17 | Imports: 18 | httpuv, 19 | httr, 20 | jsonlite, 21 | magrittr, 22 | reticulate, 23 | swagger, 24 | tensorflow 25 | Suggests: 26 | cloudml, 27 | knitr, 28 | pixels, 29 | processx, 30 | testthat, 31 | yaml, 32 | stringr 33 | Roxygen: list(markdown = TRUE) 34 | RoxygenNote: 6.1.1 35 | VignetteBuilder: knitr 36 | -------------------------------------------------------------------------------- /inst/models/keras-mnist.R: -------------------------------------------------------------------------------- 1 | library(keras) 2 | 3 | # load data 4 | c(c(x_train, y_train), c(x_test, y_test)) %<-% dataset_mnist() 5 | 6 | # reshape and rescale 7 | x_train <- array_reshape(x_train, dim = c(nrow(x_train), 784)) / 255 8 | x_test <- array_reshape(x_test, dim = c(nrow(x_test), 784)) / 255 9 | 10 | # one-hot encode response 11 | y_train <- to_categorical(y_train, 10) 12 | y_test <- to_categorical(y_test, 10) 13 | 14 | # define and compile model 15 | model <- keras_model_sequential() 16 | model %>% 17 | layer_dense(units = 32, activation = 'relu', input_shape = c(784), 18 | name = "image") %>% 19 | layer_dense(units = 16, activation = 'relu') %>% 20 | layer_dense(units = 10, activation = 'softmax', 21 | name = "prediction") %>% 22 | compile( 23 | loss = 'categorical_crossentropy', 24 | optimizer = optimizer_rmsprop(), 25 | metrics = c('accuracy') 26 | ) 27 | 28 | # train model 29 | history <- model %>% fit( 30 | x_train, y_train, 31 | epochs = 30, batch_size = 128, 32 | validation_split = 0.2 33 | ) 34 | 35 | # save model 36 | export_savedmodel(model, "keras-mnist", as_text = TRUE) 37 | -------------------------------------------------------------------------------- /man/serve_savedmodel.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/serve.R 3 | \name{serve_savedmodel} 4 | \alias{serve_savedmodel} 5 | \title{Serve a SavedModel} 6 | \usage{ 7 | serve_savedmodel(model_dir, host = "127.0.0.1", port = 8089, 8 | daemonized = FALSE, browse = !daemonized) 9 | } 10 | \arguments{ 11 | \item{model_dir}{The path to the exported model, as a string.} 12 | 13 | \item{host}{Address to use to serve model, as a string.} 14 | 15 | \item{port}{Port to use to serve model, as numeric.} 16 | 17 | \item{daemonized}{Makes 'httpuv' server daemonized so R interactive sessions 18 | are not blocked to handle requests. To terminate a daemonized server, call 19 | 'httpuv::stopDaemonizedServer()' with the handle returned from this call.} 20 | 21 | \item{browse}{Launch browser with serving landing page?} 22 | } 23 | \description{ 24 | Serve a TensorFlow SavedModel as a local web api. 
25 | } 26 | \examples{ 27 | \dontrun{ 28 | # serve an existing model over a web interface 29 | tfdeploy::serve_savedmodel( 30 | system.file("models/tensorflow-mnist", package = "tfdeploy") 31 | ) 32 | } 33 | } 34 | \seealso{ 35 | \code{\link[=export_savedmodel]{export_savedmodel()}} 36 | } 37 | -------------------------------------------------------------------------------- /docs/pkgdown.js: -------------------------------------------------------------------------------- 1 | $(function() { 2 | $("#sidebar").stick_in_parent({offset_top: 40}); 3 | $('body').scrollspy({ 4 | target: '#sidebar', 5 | offset: 60 6 | }); 7 | 8 | var cur_path = paths(location.pathname); 9 | $("#navbar ul li a").each(function(index, value) { 10 | if (value.text == "Home") 11 | return; 12 | if (value.getAttribute("href") === "#") 13 | return; 14 | 15 | var path = paths(value.pathname); 16 | if (is_prefix(cur_path, path)) { 17 | // Add class to parent
<li>, and enclosing
  • if in dropdown 18 | var menu_anchor = $(value); 19 | menu_anchor.parent().addClass("active"); 20 | menu_anchor.closest("li.dropdown").addClass("active"); 21 | } 22 | }); 23 | }); 24 | 25 | function paths(pathname) { 26 | var pieces = pathname.split("/"); 27 | pieces.shift(); // always starts with / 28 | 29 | var end = pieces[pieces.length - 1]; 30 | if (end === "index.html" || end === "") 31 | pieces.pop(); 32 | return(pieces); 33 | } 34 | 35 | function is_prefix(needle, haystack) { 36 | if (needle.length > haystack.lengh) 37 | return(false); 38 | 39 | for (var i = 0; i < haystack.length; i++) { 40 | if (needle[i] != haystack[i]) 41 | return(false); 42 | } 43 | 44 | return(true); 45 | } 46 | -------------------------------------------------------------------------------- /tests/testthat/test-serve.R: -------------------------------------------------------------------------------- 1 | context("Serve") 2 | 3 | 4 | test_can_serve_model <- function(model) { 5 | 6 | test_that(paste0("can serve model:", model), { 7 | 8 | skip_if_no_tensorflow() 9 | serve_savedmodel_async(paste0(model, "/"), function() { 10 | 11 | if (grepl("multiple", model)) { 12 | instances <- list( 13 | instances = list( 14 | list( 15 | input1 = list(1), 16 | input2 = list(1) 17 | ) 18 | ) 19 | ) 20 | } else { 21 | instances <- list(instances = list(list(images = rep(0, 784), 22 | dense_input = rep(0, 784)))) 23 | } 24 | 25 | cont <- httr::POST( 26 | url = "http://127.0.0.1:9000/serving_default/predict/", 27 | body = instances, 28 | httr::content_type_json(), 29 | encode = "json" 30 | ) 31 | 32 | pred <- unlist(httr::content(cont)) 33 | 34 | expect_true(is.numeric(pred)) 35 | 36 | swg <- httr::GET("http://127.0.0.1:9000/swagger.json") 37 | 38 | expect_equal(swg$status_code, 200) 39 | 40 | }) 41 | 42 | }) 43 | 44 | } 45 | 46 | models <- list.files("models", full.names = TRUE) 47 | for (i in seq_along(models)) 48 | test_can_serve_model(models[i]) 49 | 50 | -------------------------------------------------------------------------------- /man/load_savedmodel.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/load.R 3 | \name{load_savedmodel} 4 | \alias{load_savedmodel} 5 | \title{Load a SavedModel} 6 | \usage{ 7 | load_savedmodel(sess = NULL, model_dir = NULL) 8 | } 9 | \arguments{ 10 | \item{sess}{The TensorFlow session. \code{NULL} if using Eager execution.} 11 | 12 | \item{model_dir}{The path to the exported model, as a string. Defaults to 13 | a "savedmodel" path or the latest training run.} 14 | } 15 | \description{ 16 | Loads a SavedModel using the given TensorFlow session and 17 | returns the model's graph. 18 | } 19 | \details{ 20 | Loading a model improves performance over multiple \code{predict_savedmodel()} 21 | calls. 
22 | } 23 | \examples{ 24 | \dontrun{ 25 | # start session 26 | sess <- tensorflow::tf$Session() 27 | 28 | # preload an existing model into a TensorFlow session 29 | graph <- tfdeploy::load_savedmodel( 30 | sess, 31 | system.file("models/tensorflow-mnist", package = "tfdeploy") 32 | ) 33 | 34 | # perform prediction based on a pre-loaded model 35 | tfdeploy::predict_savedmodel( 36 | list(rep(9, 784)), 37 | graph 38 | ) 39 | 40 | # close session 41 | sess$close() 42 | } 43 | 44 | } 45 | \seealso{ 46 | \code{\link[=export_savedmodel]{export_savedmodel()}}, \code{\link[=predict_savedmodel]{predict_savedmodel()}} 47 | } 48 | -------------------------------------------------------------------------------- /inst/models/tensorflow-mnist.R: -------------------------------------------------------------------------------- 1 | library(tensorflow) 2 | 3 | sess <- tf$Session() 4 | 5 | datasets <- tf$contrib$learn$datasets 6 | mnist <- datasets$mnist$read_data_sets("MNIST-data", one_hot = TRUE) 7 | 8 | x <- tf$placeholder(tf$float32, shape(NULL, 784L)) 9 | 10 | W <- tf$Variable(tf$zeros(shape(784L, 10L))) 11 | b <- tf$Variable(tf$zeros(shape(10L))) 12 | 13 | y <- tf$nn$softmax(tf$matmul(x, W) + b) 14 | 15 | y_ <- tf$placeholder(tf$float32, shape(NULL, 10L)) 16 | cross_entropy <- tf$reduce_mean(-tf$reduce_sum(y_ * tf$log(y), reduction_indices=1L)) 17 | 18 | optimizer <- tf$train$GradientDescentOptimizer(0.5) 19 | train_step <- optimizer$minimize(cross_entropy) 20 | 21 | init <- tf$global_variables_initializer() 22 | 23 | sess$run(init) 24 | 25 | for (i in 1:1000) { 26 | batches <- mnist$train$next_batch(100L) 27 | batch_xs <- batches[[1]] 28 | batch_ys <- batches[[2]] 29 | sess$run(train_step, 30 | feed_dict = dict(x = batch_xs, y_ = batch_ys)) 31 | } 32 | 33 | correct_prediction <- tf$equal(tf$argmax(y, 1L), tf$argmax(y_, 1L)) 34 | accuracy <- tf$reduce_mean(tf$cast(correct_prediction, tf$float32)) 35 | 36 | sess$run(accuracy, feed_dict=dict(x = mnist$test$images, y_ = mnist$test$labels)) 37 | 38 | export_savedmodel( 39 | sess, 40 | "tensorflow-mnist", 41 | inputs = list(images = x), 42 | outputs = list(scores = y)) 43 | -------------------------------------------------------------------------------- /tests/testthat/model-building/tensorflow-1.12.0.R: -------------------------------------------------------------------------------- 1 | 2 | reticulate::use_virtualenv("tf-1.12.0") 3 | 4 | # Packages ---------------------------------------------------------------- 5 | 6 | library(tensorflow) 7 | 8 | sess <- tf$Session() 9 | 10 | mnist <- readRDS("data/mnist.rds") 11 | next_batch <- function() { 12 | ids <- sample.int(nrow(mnist$train$x), size = 32) 13 | list( 14 | x = mnist$train$x[ids,], 15 | y = mnist$train$y[ids,] 16 | ) 17 | } 18 | 19 | 20 | x <- tf$placeholder(tf$float32, shape(NULL, 784L)) 21 | 22 | W <- tf$Variable(tf$zeros(shape(784L, 10L))) 23 | b <- tf$Variable(tf$zeros(shape(10L))) 24 | 25 | y <- tf$nn$softmax(tf$matmul(x, W) + b) 26 | 27 | y_ <- tf$placeholder(tf$float32, shape(NULL, 10L)) 28 | cross_entropy <- tf$reduce_mean(-tf$reduce_sum(y_ * tf$log(y), reduction_indices=1L)) 29 | 30 | optimizer <- tf$train$GradientDescentOptimizer(0.05) 31 | train_step <- optimizer$minimize(cross_entropy) 32 | 33 | init <- tf$global_variables_initializer() 34 | 35 | sess$run(init) 36 | 37 | for (i in 1:1000) { 38 | batches <- next_batch() 39 | batch_xs <- batches[[1]] 40 | batch_ys <- batches[[2]] 41 | sess$run(train_step, 42 | feed_dict = dict(x = batch_xs, y_ = batch_ys)) 43 | } 44 | 45 | correct_prediction 
<- tf$equal(tf$argmax(y, 1L), tf$argmax(y_, 1L)) 46 | accuracy <- tf$reduce_mean(tf$cast(correct_prediction, tf$float32)) 47 | 48 | sess$run(accuracy, feed_dict=dict(x = mnist$train$x, y_ = mnist$train$y)) 49 | 50 | export_savedmodel( 51 | sess, 52 | "models/tensorflow-1.12.0/", 53 | inputs = list(images = x), 54 | outputs = list(scores = y)) 55 | 56 | 57 | 58 | 59 | -------------------------------------------------------------------------------- /tests/testthat/model-building/tensorflow-1.13.1.R: -------------------------------------------------------------------------------- 1 | 2 | reticulate::use_virtualenv("tf-1.13.1") 3 | 4 | # Packages ---------------------------------------------------------------- 5 | 6 | library(tensorflow) 7 | 8 | sess <- tf$Session() 9 | 10 | mnist <- readRDS("data/mnist.rds") 11 | next_batch <- function() { 12 | ids <- sample.int(nrow(mnist$train$x), size = 32) 13 | list( 14 | x = mnist$train$x[ids,], 15 | y = mnist$train$y[ids,] 16 | ) 17 | } 18 | 19 | 20 | x <- tf$placeholder(tf$float32, shape(NULL, 784L)) 21 | 22 | W <- tf$Variable(tf$zeros(shape(784L, 10L))) 23 | b <- tf$Variable(tf$zeros(shape(10L))) 24 | 25 | y <- tf$nn$softmax(tf$matmul(x, W) + b) 26 | 27 | y_ <- tf$placeholder(tf$float32, shape(NULL, 10L)) 28 | cross_entropy <- tf$reduce_mean(-tf$reduce_sum(y_ * tf$log(y), reduction_indices=1L)) 29 | 30 | optimizer <- tf$train$GradientDescentOptimizer(0.05) 31 | train_step <- optimizer$minimize(cross_entropy) 32 | 33 | init <- tf$global_variables_initializer() 34 | 35 | sess$run(init) 36 | 37 | for (i in 1:1000) { 38 | batches <- next_batch() 39 | batch_xs <- batches[[1]] 40 | batch_ys <- batches[[2]] 41 | sess$run(train_step, 42 | feed_dict = dict(x = batch_xs, y_ = batch_ys)) 43 | } 44 | 45 | correct_prediction <- tf$equal(tf$argmax(y, 1L), tf$argmax(y_, 1L)) 46 | accuracy <- tf$reduce_mean(tf$cast(correct_prediction, tf$float32)) 47 | 48 | sess$run(accuracy, feed_dict=dict(x = mnist$train$x, y_ = mnist$train$y)) 49 | 50 | export_savedmodel( 51 | sess, 52 | "models/tensorflow-1.13.1/", 53 | inputs = list(images = x), 54 | outputs = list(scores = y) 55 | ) 56 | 57 | 58 | 59 | 60 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | ## Deploying TensorFlow Models from R 3 | 4 | [![Build 5 | Status](https://travis-ci.org/rstudio/tfdeploy.svg?branch=master)](https://travis-ci.org/rstudio/tfdeploy) 6 | [![CRAN\_Status\_Badge](https://www.r-pkg.org/badges/version/tfdeploy)](https://cran.r-project.org/package=tfdeploy) 7 | [![codecov](https://codecov.io/gh/rstudio/tfdeploy/branch/master/graph/badge.svg)](https://codecov.io/gh/rstudio/tfdeploy) 8 | 9 | While TensorFlow models are typically defined and trained using R or Python code, it is possible to deploy TensorFlow models in a wide variety of environments without any runtime dependency on R or Python: 10 | 11 | - [TensorFlow Serving](https://www.tensorflow.org/serving/) is an open-source software library for serving TensorFlow models using a [gRPC](https://grpc.io/) interface. 12 | 13 | - [CloudML](https://tensorflow.rstudio.com/tools/cloudml/) is a managed cloud service that serves TensorFlow models using a [REST](https://cloud.google.com/ml-engine/reference/rest/v1/projects/predict) interface. 
14 | 15 | - [RStudio Connect](https://www.rstudio.com/products/connect/) provides support for serving models using the same REST API as CloudML, but on a server within your own organization. 16 | 17 | TensorFlow models can also be deployed to [mobile](https://www.tensorflow.org/mobile/tflite/) and [embedded](https://aws.amazon.com/blogs/machine-learning/how-to-deploy-deep-learning-models-with-aws-lambda-and-tensorflow/) devices including iOS and Android mobile phones and Raspberry Pi computers. 18 | The tfdeploy package includes a variety of tools designed to make exporting and serving TensorFlow models straightforward. For documentation on using tfdeploy, see the package website at . 19 | 20 | 21 | -------------------------------------------------------------------------------- /man/predict_savedmodel.webapi_prediction.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/predict_webapi.R 3 | \name{predict_savedmodel.webapi_prediction} 4 | \alias{predict_savedmodel.webapi_prediction} 5 | \title{Predict using a Web API} 6 | \usage{ 7 | \method{predict_savedmodel}{webapi_prediction}(instances, model, ...) 8 | } 9 | \arguments{ 10 | \item{instances}{A list of prediction instances to be passed as input tensors 11 | to the service. Even for single predictions, a list with one entry is expected.} 12 | 13 | \item{model}{The model as a local path, a REST url or graph object. 14 | 15 | A local path can be exported using \code{export_savedmodel()}, a REST URL 16 | can be created using \code{serve_savedmodel()} and a graph object loaded using 17 | \code{load_savedmodel()}. 18 | 19 | A \code{type} parameter can be specified to explicitly choose the type model 20 | performing the prediction. Valid values are \code{export}, \code{webapi} and 21 | \code{graph}.} 22 | 23 | \item{...}{See \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}}, 24 | \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}}, 25 | \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}} for additional options. 26 | 27 | #' @section Implementations: 28 | \itemize{ 29 | \item \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}} 30 | \item \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}} 31 | \item \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}}] 32 | }} 33 | } 34 | \description{ 35 | Performs a prediction using a Web API providing a SavedModel. 36 | } 37 | -------------------------------------------------------------------------------- /man/predict_savedmodel.export_prediction.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/predict_export.R 3 | \name{predict_savedmodel.export_prediction} 4 | \alias{predict_savedmodel.export_prediction} 5 | \title{Predict using an Exported SavedModel} 6 | \usage{ 7 | \method{predict_savedmodel}{export_prediction}(instances, model, 8 | signature_name = "serving_default", ...) 9 | } 10 | \arguments{ 11 | \item{instances}{A list of prediction instances to be passed as input tensors 12 | to the service. 
Even for single predictions, a list with one entry is expected.} 13 | 14 | \item{model}{The model as a local path, a REST url or graph object. 15 | 16 | A local path can be exported using \code{export_savedmodel()}, a REST URL 17 | can be created using \code{serve_savedmodel()} and a graph object loaded using 18 | \code{load_savedmodel()}. 19 | 20 | A \code{type} parameter can be specified to explicitly choose the type model 21 | performing the prediction. Valid values are \code{export}, \code{webapi} and 22 | \code{graph}.} 23 | 24 | \item{signature_name}{The named entry point to use in the model for prediction.} 25 | 26 | \item{...}{See \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}}, 27 | \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}}, 28 | \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}} for additional options. 29 | 30 | #' @section Implementations: 31 | \itemize{ 32 | \item \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}} 33 | \item \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}} 34 | \item \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}}] 35 | }} 36 | } 37 | \description{ 38 | Performs a prediction using a locally exported SavedModel. 39 | } 40 | -------------------------------------------------------------------------------- /man/predict_savedmodel.graph_prediction.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/predict_graph.R 3 | \name{predict_savedmodel.graph_prediction} 4 | \alias{predict_savedmodel.graph_prediction} 5 | \title{Predict using a Loaded SavedModel} 6 | \usage{ 7 | \method{predict_savedmodel}{graph_prediction}(instances, model, sess, 8 | signature_name = "serving_default", ...) 9 | } 10 | \arguments{ 11 | \item{instances}{A list of prediction instances to be passed as input tensors 12 | to the service. Even for single predictions, a list with one entry is expected.} 13 | 14 | \item{model}{The model as a local path, a REST url or graph object. 15 | 16 | A local path can be exported using \code{export_savedmodel()}, a REST URL 17 | can be created using \code{serve_savedmodel()} and a graph object loaded using 18 | \code{load_savedmodel()}. 19 | 20 | A \code{type} parameter can be specified to explicitly choose the type model 21 | performing the prediction. Valid values are \code{export}, \code{webapi} and 22 | \code{graph}.} 23 | 24 | \item{sess}{The active TensorFlow session.} 25 | 26 | \item{signature_name}{The named entry point to use in the model for prediction.} 27 | 28 | \item{...}{See \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}}, 29 | \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}}, 30 | \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}} for additional options. 
31 | 32 | #' @section Implementations: 33 | \itemize{ 34 | \item \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}} 35 | \item \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}} 36 | \item \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}}] 37 | }} 38 | } 39 | \description{ 40 | Performs a prediction using a SavedModel model already loaded using 41 | \code{load_savedmodel()}. 42 | } 43 | -------------------------------------------------------------------------------- /tests/testthat/helper-serve.R: -------------------------------------------------------------------------------- 1 | server_success <- function(url) { 2 | tryCatch({ 3 | httr::GET(url) 4 | TRUE 5 | }, error = function(e) { 6 | FALSE 7 | }) 8 | } 9 | 10 | retry <- function(do, times = 1, message = NULL, sleep = 1) { 11 | if (!is.function(do)) 12 | stop("The 'do' parameter must be a function.") 13 | 14 | while (!identical(do(), TRUE) && times > 0) { 15 | times <- times - 1 16 | Sys.sleep(sleep) 17 | } 18 | 19 | times > 0 20 | } 21 | 22 | wait_for_server <- function(url, output_log) { 23 | start <- Sys.time() 24 | if (!retry(function() server_success(url), 10)) 25 | stop( 26 | "Failed to connect to server: ", 27 | url, 28 | " after ", 29 | round(as.numeric(Sys.time() - start), 2), 30 | " secs. Logs:\n", 31 | if (!is.null(output_log)) paste(readLines(output_log), collapse = "\n") else "" 32 | ) 33 | } 34 | 35 | serve_savedmodel_async <- function( 36 | model, 37 | operation, 38 | signature_name = "serving_default") { 39 | 40 | full_path <- normalizePath(model) 41 | 42 | output_log <- tempfile() 43 | 44 | port_numer <- 9000 45 | 46 | rscript <- system2("which", "Rscript", stdout = TRUE) 47 | if (length(rscript) == 0) 48 | stop("Failed to find Rscript") 49 | 50 | process <- processx::process$new( 51 | command = rscript, 52 | args = c( 53 | "-e", 54 | paste0( 55 | "library(tfdeploy); ", 56 | "serve_savedmodel('", 57 | full_path, 58 | "', port = ", 59 | port_numer, 60 | ")" 61 | ), 62 | "--vanilla" 63 | ), 64 | stdout = output_log, 65 | stderr = output_log 66 | ) 67 | 68 | Sys.sleep(5) 69 | if (!process$is_alive()) { 70 | stop(paste(readLines(output_log), collapse = "\n")) 71 | } 72 | 73 | on.exit(expr = { 74 | process$signal(signal = 2) 75 | Sys.sleep(2) 76 | }, add = TRUE) 77 | 78 | url <- paste0( 79 | paste("http://127.0.0.1:", port_numer, "/", sep = ""), 80 | signature_name, 81 | "/predict/" 82 | ) 83 | 84 | wait_for_server(url, output_log) 85 | 86 | operation() 87 | } 88 | -------------------------------------------------------------------------------- /man/predict_savedmodel.Rd: -------------------------------------------------------------------------------- 1 | % Generated by roxygen2: do not edit by hand 2 | % Please edit documentation in R/predict_interface.R 3 | \name{predict_savedmodel} 4 | \alias{predict_savedmodel} 5 | \title{Predict using a SavedModel} 6 | \usage{ 7 | predict_savedmodel(instances, model, ...) 8 | } 9 | \arguments{ 10 | \item{instances}{A list of prediction instances to be passed as input tensors 11 | to the service. Even for single predictions, a list with one entry is expected.} 12 | 13 | \item{model}{The model as a local path, a REST url or graph object. 14 | 15 | A local path can be exported using \code{export_savedmodel()}, a REST URL 16 | can be created using \code{serve_savedmodel()} and a graph object loaded using 17 | \code{load_savedmodel()}. 
18 | 19 | A \code{type} parameter can be specified to explicitly choose the type model 20 | performing the prediction. Valid values are \code{export}, \code{webapi} and 21 | \code{graph}.} 22 | 23 | \item{...}{See \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}}, 24 | \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}}, 25 | \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}} for additional options. 26 | 27 | #' @section Implementations: 28 | \itemize{ 29 | \item \code{\link[=predict_savedmodel.export_prediction]{predict_savedmodel.export_prediction()}} 30 | \item \code{\link[=predict_savedmodel.graph_prediction]{predict_savedmodel.graph_prediction()}} 31 | \item \code{\link[=predict_savedmodel.webapi_prediction]{predict_savedmodel.webapi_prediction()}}] 32 | }} 33 | } 34 | \description{ 35 | Runs a prediction over a saved model file, web API or graph object. 36 | } 37 | \examples{ 38 | \dontrun{ 39 | # perform prediction based on an existing model 40 | tfdeploy::predict_savedmodel( 41 | list(rep(9, 784)), 42 | system.file("models/tensorflow-mnist", package = "tfdeploy") 43 | ) 44 | } 45 | 46 | } 47 | \seealso{ 48 | \code{\link[=export_savedmodel]{export_savedmodel()}}, \code{\link[=serve_savedmodel]{serve_savedmodel()}}, \code{\link[=load_savedmodel]{load_savedmodel()}} 49 | } 50 | -------------------------------------------------------------------------------- /R/predict_interface.R: -------------------------------------------------------------------------------- 1 | #' Predict using a SavedModel 2 | #' 3 | #' Runs a prediction over a saved model file, web API or graph object. 4 | #' 5 | #' @inheritParams predict_savedmodel 6 | #' 7 | #' @param instances A list of prediction instances to be passed as input tensors 8 | #' to the service. Even for single predictions, a list with one entry is expected. 9 | #' 10 | #' @param model The model as a local path, a REST url or graph object. 11 | #' 12 | #' A local path can be exported using \code{export_savedmodel()}, a REST URL 13 | #' can be created using \code{serve_savedmodel()} and a graph object loaded using 14 | #' \code{load_savedmodel()}. 15 | #' 16 | #' A \code{type} parameter can be specified to explicitly choose the type model 17 | #' performing the prediction. Valid values are \code{export}, \code{webapi} and 18 | #' \code{graph}. 19 | #' 20 | #' @param ... See [predict_savedmodel.export_prediction()], 21 | #' [predict_savedmodel.graph_prediction()], 22 | #' [predict_savedmodel.webapi_prediction()] for additional options. 23 | #' 24 | #' #' @section Implementations: 25 | #' 26 | #' - [predict_savedmodel.export_prediction()] 27 | #' - [predict_savedmodel.graph_prediction()] 28 | #' - [predict_savedmodel.webapi_prediction()]] 29 | #' 30 | #' @seealso [export_savedmodel()], [serve_savedmodel()], [load_savedmodel()] 31 | #' 32 | #' @examples 33 | #' \dontrun{ 34 | #' # perform prediction based on an existing model 35 | #' tfdeploy::predict_savedmodel( 36 | #' list(rep(9, 784)), 37 | #' system.file("models/tensorflow-mnist", package = "tfdeploy") 38 | #' ) 39 | #' } 40 | #' 41 | #' @export 42 | predict_savedmodel <- function( 43 | instances, 44 | model, 45 | ...) { 46 | 47 | params <- list(...) 
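  # Resolve which predict_savedmodel() method to dispatch to: an explicit
  # `type` passed through `...` always wins; otherwise a MetaGraphDef object
  # selects the "graph" method, an http(s) URL selects the "webapi" method,
  # and any other value is treated as the path to a locally exported model
  # ("export"). For example, predict_savedmodel(instances, "savedmodel",
  # type = "export") forces the local-export path explicitly.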
48 | 49 | if (!is.null(params$type)) 50 | type <- params$type 51 | else if (any(grepl("MetaGraphDef", class(model)))) 52 | type <- "graph" 53 | else if (grepl("https?://", model)) 54 | type <- "webapi" 55 | else 56 | type <- "export" 57 | 58 | class(instances) <- paste0(type, "_prediction") 59 | UseMethod("predict_savedmodel", instances) 60 | } 61 | 62 | #' @export 63 | print.savedmodel_predictions <- function(x, ...) { 64 | predictions <- x$predictions 65 | for (index in seq_along(predictions)) { 66 | prediction <- predictions[[index]] 67 | if (length(predictions) > 1) 68 | message("Prediction ", index, ":") 69 | 70 | print(prediction) 71 | } 72 | } 73 | -------------------------------------------------------------------------------- /R/load.R: -------------------------------------------------------------------------------- 1 | find_savedmodel <- function(path) { 2 | if (is.null(path)) path <- getwd() 3 | 4 | if (file.exists(file.path(path, "saved_model.pb"))) 5 | path 6 | else if (file.exists(file.path(path, "runs"))) { 7 | runs <- dir(file.path(path, "runs")) 8 | ordered <- runs[order(runs, decreasing = T)] 9 | output <- file.path(path, "runs", ordered[[1]]) 10 | model <- dir(output, recursive = T, full.names = T, pattern = "saved_model.pb") 11 | if (length(model) == 1) 12 | dirname(model) 13 | else 14 | path 15 | } 16 | else 17 | path 18 | } 19 | 20 | #' Load a SavedModel 21 | #' 22 | #' Loads a SavedModel using the given TensorFlow session and 23 | #' returns the model's graph. 24 | #' 25 | #' Loading a model improves performance over multiple \code{predict_savedmodel()} 26 | #' calls. 27 | #' 28 | #' @param sess The TensorFlow session. `NULL` if using Eager execution. 29 | #' 30 | #' @param model_dir The path to the exported model, as a string. Defaults to 31 | #' a "savedmodel" path or the latest training run. 
32 | #' 33 | #' @seealso [export_savedmodel()], [predict_savedmodel()] 34 | #' 35 | #' @examples 36 | #' \dontrun{ 37 | #' # start session 38 | #' sess <- tensorflow::tf$Session() 39 | #' 40 | #' # preload an existing model into a TensorFlow session 41 | #' graph <- tfdeploy::load_savedmodel( 42 | #' sess, 43 | #' system.file("models/tensorflow-mnist", package = "tfdeploy") 44 | #' ) 45 | #' 46 | #' # perform prediction based on a pre-loaded model 47 | #' tfdeploy::predict_savedmodel( 48 | #' list(rep(9, 784)), 49 | #' graph 50 | #' ) 51 | #' 52 | #' # close session 53 | #' sess$close() 54 | #' } 55 | #' 56 | #' @importFrom tools file_ext 57 | #' @importFrom utils untar 58 | #' @export 59 | load_savedmodel <- function( 60 | sess = NULL, 61 | model_dir = NULL 62 | ) { 63 | 64 | model_dir <- find_savedmodel(model_dir) 65 | 66 | if (identical(file_ext(model_dir), "tar")) { 67 | extracted_dir <- tempfile() 68 | untar(model_dir, exdir = extracted_dir) 69 | model_dir <- extracted_dir 70 | } 71 | 72 | if (tensorflow::tf_version() >= "2.0" && tf$executing_eagerly()) { 73 | saved_model <- tf$compat$v1$saved_model 74 | 75 | if (is.null(sess)) 76 | sess <- tf$compat$v1$Session() 77 | 78 | } else { 79 | saved_model <- tf$saved_model 80 | tf$reset_default_graph() 81 | } 82 | 83 | graph <- saved_model$loader$load( 84 | sess, 85 | list(tf$python$saved_model$tag_constants$SERVING), 86 | model_dir 87 | ) 88 | 89 | graph 90 | } 91 | -------------------------------------------------------------------------------- /R/tensor.R: -------------------------------------------------------------------------------- 1 | # Models created using the data libraries will create an input 2 | # tensor that supports multiple entries. MINST would be (-1, 784) 3 | # instead of just (784). This is to optimize prediction performance, but 4 | # since we feed one at a time, this function is used to ignore instances. 5 | tensor_is_multi_instance <- function(tensor) { 6 | tensor$tensor_shape$dim$`__len__`() > 0 && 7 | tensor$tensor_shape$dim[[0]]$size == -1 8 | } 9 | 10 | # Retrieves the input and output tensors from a graph as a named list. 11 | tensor_get_boundaries <- function(graph, signature_def, signature_name) { 12 | signature_names <- py_dict_get_keys(signature_def) 13 | if (!signature_name %in% signature_names) { 14 | stop( 15 | "Signature '", signature_name, "' not available in model signatures. 
", 16 | "Available signatures: ", paste(signature_names, collapse = ","), ".") 17 | } 18 | 19 | signature_obj <- signature_def$get(signature_name) 20 | 21 | signature_input_names <- py_dict_get_keys(signature_obj$inputs) 22 | signature_output_names <- py_dict_get_keys(signature_obj$outputs) 23 | 24 | if (length(signature_input_names) == 0) { 25 | stop("Signature '", signature_name, "' contains no inputs.") 26 | } 27 | 28 | if (length(signature_output_names) == 0) { 29 | stop("Signature '", signature_name, "' contains no outputs.") 30 | } 31 | 32 | signature_inputs <- lapply(seq_along(signature_input_names), function(fetch_idx) { 33 | signature_obj$inputs$get(signature_input_names[[fetch_idx]]) 34 | }) 35 | names(signature_inputs) <- signature_input_names 36 | 37 | tensor_inputs <- lapply(signature_inputs, function(signature_input) { 38 | graph$get_tensor_by_name(signature_input$name) 39 | }) 40 | tensor_input_names <- lapply(tensor_inputs, function(tensor_input) { 41 | tensor_inputs$name 42 | }) 43 | names(tensor_inputs) <- tensor_input_names 44 | 45 | signature_outputs <- lapply(seq_along(signature_output_names), function(fetch_idx) { 46 | signature_obj$outputs$get(signature_output_names[[fetch_idx]]) 47 | }) 48 | names(signature_outputs) <- signature_output_names 49 | 50 | tensor_outputs <- lapply(signature_outputs, function(signature_output) { 51 | graph$get_tensor_by_name(signature_output$name) 52 | }) 53 | tensor_output_names <- lapply(tensor_outputs, function(tensor_output) { 54 | tensor_outputs$name 55 | }) 56 | names(tensor_outputs) <- tensor_output_names 57 | 58 | list( 59 | signatures = list( 60 | inputs = signature_inputs, 61 | outputs = signature_outputs 62 | ), 63 | tensors = list( 64 | inputs = tensor_inputs, 65 | outputs = tensor_outputs 66 | ) 67 | ) 68 | } 69 | -------------------------------------------------------------------------------- /docs/jquery.sticky-kit.min.js: -------------------------------------------------------------------------------- 1 | /* 2 | Sticky-kit v1.1.2 | WTFPL | Leaf Corcoran 2015 | http://leafo.net 3 | */ 4 | (function(){var b,f;b=this.jQuery||window.jQuery;f=b(window);b.fn.stick_in_parent=function(d){var A,w,J,n,B,K,p,q,k,E,t;null==d&&(d={});t=d.sticky_class;B=d.inner_scrolling;E=d.recalc_every;k=d.parent;q=d.offset_top;p=d.spacer;w=d.bottoming;null==q&&(q=0);null==k&&(k=void 0);null==B&&(B=!0);null==t&&(t="is_stuck");A=b(document);null==w&&(w=!0);J=function(a,d,n,C,F,u,r,G){var v,H,m,D,I,c,g,x,y,z,h,l;if(!a.data("sticky_kit")){a.data("sticky_kit",!0);I=A.height();g=a.parent();null!=k&&(g=g.closest(k)); 5 | if(!g.length)throw"failed to find stick parent";v=m=!1;(h=null!=p?p&&a.closest(p):b("
    "))&&h.css("position",a.css("position"));x=function(){var c,f,e;if(!G&&(I=A.height(),c=parseInt(g.css("border-top-width"),10),f=parseInt(g.css("padding-top"),10),d=parseInt(g.css("padding-bottom"),10),n=g.offset().top+c+f,C=g.height(),m&&(v=m=!1,null==p&&(a.insertAfter(h),h.detach()),a.css({position:"",top:"",width:"",bottom:""}).removeClass(t),e=!0),F=a.offset().top-(parseInt(a.css("margin-top"),10)||0)-q, 6 | u=a.outerHeight(!0),r=a.css("float"),h&&h.css({width:a.outerWidth(!0),height:u,display:a.css("display"),"vertical-align":a.css("vertical-align"),"float":r}),e))return l()};x();if(u!==C)return D=void 0,c=q,z=E,l=function(){var b,l,e,k;if(!G&&(e=!1,null!=z&&(--z,0>=z&&(z=E,x(),e=!0)),e||A.height()===I||x(),e=f.scrollTop(),null!=D&&(l=e-D),D=e,m?(w&&(k=e+u+c>C+n,v&&!k&&(v=!1,a.css({position:"fixed",bottom:"",top:c}).trigger("sticky_kit:unbottom"))),eb&&!v&&(c-=l,c=Math.max(b-u,c),c=Math.min(q,c),m&&a.css({top:c+"px"})))):e>F&&(m=!0,b={position:"fixed",top:c},b.width="border-box"===a.css("box-sizing")?a.outerWidth()+"px":a.width()+"px",a.css(b).addClass(t),null==p&&(a.after(h),"left"!==r&&"right"!==r||h.append(a)),a.trigger("sticky_kit:stick")),m&&w&&(null==k&&(k=e+u+c>C+n),!v&&k)))return v=!0,"static"===g.css("position")&&g.css({position:"relative"}), 8 | a.css({position:"absolute",bottom:d,top:"auto"}).trigger("sticky_kit:bottom")},y=function(){x();return l()},H=function(){G=!0;f.off("touchmove",l);f.off("scroll",l);f.off("resize",y);b(document.body).off("sticky_kit:recalc",y);a.off("sticky_kit:detach",H);a.removeData("sticky_kit");a.css({position:"",bottom:"",top:"",width:""});g.position("position","");if(m)return null==p&&("left"!==r&&"right"!==r||a.insertAfter(h),h.remove()),a.removeClass(t)},f.on("touchmove",l),f.on("scroll",l),f.on("resize", 9 | y),b(document.body).on("sticky_kit:recalc",y),a.on("sticky_kit:detach",H),setTimeout(l,0)}};n=0;for(K=this.length;n .container { 3 | display: flex; 4 | padding-top: 60px; 5 | min-height: calc(100vh); 6 | flex-direction: column; 7 | } 8 | 9 | body > .container .row { 10 | flex: 1; 11 | } 12 | 13 | footer { 14 | margin-top: 45px; 15 | padding: 35px 0 36px; 16 | border-top: 1px solid #e5e5e5; 17 | color: #666; 18 | display: flex; 19 | } 20 | footer p { 21 | margin-bottom: 0; 22 | } 23 | footer div { 24 | flex: 1; 25 | } 26 | footer .pkgdown { 27 | text-align: right; 28 | } 29 | footer p { 30 | margin-bottom: 0; 31 | } 32 | 33 | img.icon { 34 | float: right; 35 | } 36 | 37 | img { 38 | max-width: 100%; 39 | } 40 | 41 | /* Section anchors ---------------------------------*/ 42 | 43 | a.anchor { 44 | margin-left: -30px; 45 | display:inline-block; 46 | width: 30px; 47 | height: 30px; 48 | visibility: hidden; 49 | 50 | background-image: url(./link.svg); 51 | background-repeat: no-repeat; 52 | background-size: 20px 20px; 53 | background-position: center center; 54 | } 55 | 56 | .hasAnchor:hover a.anchor { 57 | visibility: visible; 58 | } 59 | 60 | @media (max-width: 767px) { 61 | .hasAnchor:hover a.anchor { 62 | visibility: hidden; 63 | } 64 | } 65 | 66 | 67 | /* Fixes for fixed navbar --------------------------*/ 68 | 69 | .contents h1, .contents h2, .contents h3, .contents h4 { 70 | padding-top: 60px; 71 | margin-top: -60px; 72 | } 73 | 74 | /* Static header placement on mobile devices */ 75 | @media (max-width: 767px) { 76 | .navbar-fixed-top { 77 | position: absolute; 78 | } 79 | .navbar { 80 | padding: 0; 81 | } 82 | } 83 | 84 | 85 | /* Sidebar --------------------------*/ 86 | 87 | #sidebar { 88 | margin-top: 30px; 89 | } 90 | #sidebar 
h2 { 91 | font-size: 1.5em; 92 | margin-top: 1em; 93 | } 94 | 95 | #sidebar h2:first-child { 96 | margin-top: 0; 97 | } 98 | 99 | #sidebar .list-unstyled li { 100 | margin-bottom: 0.5em; 101 | } 102 | 103 | /* Reference index & topics ----------------------------------------------- */ 104 | 105 | .ref-index th {font-weight: normal;} 106 | .ref-index h2 {font-size: 20px;} 107 | 108 | .ref-index td {vertical-align: top;} 109 | .ref-index .alias {width: 40%;} 110 | .ref-index .title {width: 60%;} 111 | 112 | .ref-index .alias {width: 40%;} 113 | .ref-index .title {width: 60%;} 114 | 115 | .ref-arguments th {text-align: right; padding-right: 10px;} 116 | .ref-arguments th, .ref-arguments td {vertical-align: top;} 117 | .ref-arguments .name {width: 20%;} 118 | .ref-arguments .desc {width: 80%;} 119 | 120 | /* Nice scrolling for wide elements --------------------------------------- */ 121 | 122 | table { 123 | display: block; 124 | overflow: auto; 125 | } 126 | 127 | /* Syntax highlighting ---------------------------------------------------- */ 128 | 129 | pre { 130 | word-wrap: normal; 131 | word-break: normal; 132 | border: 1px solid #eee; 133 | } 134 | 135 | pre, code { 136 | background-color: #f8f8f8; 137 | color: #333; 138 | } 139 | 140 | pre .img { 141 | margin: 5px 0; 142 | } 143 | 144 | pre .img img { 145 | background-color: #fff; 146 | display: block; 147 | height: auto; 148 | } 149 | 150 | code a, pre a { 151 | color: #375f84; 152 | } 153 | 154 | .fl {color: #1514b5;} 155 | .fu {color: #000000;} /* function */ 156 | .ch,.st {color: #036a07;} /* string */ 157 | .kw {color: #264D66;} /* keyword */ 158 | .co {color: #888888;} /* comment */ 159 | 160 | .message { color: black; font-weight: bolder;} 161 | .error { color: orange; font-weight: bolder;} 162 | .warning { color: #6A0366; font-weight: bolder;} 163 | 164 | -------------------------------------------------------------------------------- /docs/articles/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Articles • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
All vignettes
    105 | 106 |
    107 |
    108 | 109 | 110 | 111 | -------------------------------------------------------------------------------- /docs/authors.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Authors • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
Javier Luraschi. Author, maintainer.
RStudio. Copyright holder.

    109 |
    110 | 111 |
    112 |
    113 | 114 | 115 | 116 | -------------------------------------------------------------------------------- /R/predict_graph.R: -------------------------------------------------------------------------------- 1 | predict_single_savedmodel_export <- function(instance, sess, graph, signature_def, signature_name) { 2 | if (!is.list(instance)) instance <- list(instance) 3 | 4 | tensor_boundaries <- tensor_get_boundaries(sess$graph, signature_def, signature_name) 5 | 6 | signature_output_names <- names(tensor_boundaries$signatures$outputs) 7 | signature_inputs_names <- names(tensor_boundaries$signatures$inputs) 8 | signature_inputs <- tensor_boundaries$signatures$inputs 9 | tensor_outputs <- tensor_boundaries$tensors$outputs 10 | 11 | fetches_list <- tensor_outputs 12 | names(fetches_list) <- signature_output_names 13 | 14 | feed_dict <- list() 15 | for (signature_input_name in signature_inputs_names) { 16 | signature_input <- signature_inputs[[signature_input_name]] 17 | placeholder_name <- signature_input$name 18 | 19 | if (is.null(names(instance)) && length(signature_inputs_names) == 1) { 20 | input_instance <- instance[[1]] 21 | } 22 | else if (!signature_input_name %in% names(instance)) { 23 | stop("Input '", signature_input_name, "' found in model but missing in prediciton instance.") 24 | } else { 25 | input_instance <- instance[[signature_input_name]] 26 | } 27 | 28 | if (is.list(input_instance) && "b64" %in% names(input_instance)) { 29 | feed_dict[[placeholder_name]] <- tf$decode_base64(instance$b64) 30 | } 31 | else { 32 | feed_dict[[placeholder_name]] <- input_instance 33 | } 34 | 35 | is_multi_instance_tensor <- tensor_is_multi_instance(signature_input) 36 | 37 | if (is_multi_instance_tensor) { 38 | if (is.null(dim(feed_dict[[placeholder_name]]))) 39 | input_dim <- length(feed_dict[[placeholder_name]]) 40 | else 41 | input_dim <- dim(feed_dict[[placeholder_name]]) 42 | 43 | tensor_dims <- signature_input$tensor_shape$dim 44 | tensor_dims_r <- c() 45 | for (i in 0:(signature_input$tensor_shape$dim$`__len__`()-1)) { 46 | tensor_dims_r <- c( 47 | tensor_dims_r, 48 | ifelse(tensor_dims[[i]]$size == -1, 1, tensor_dims[[i]]$size) 49 | ) 50 | } 51 | 52 | feed_dict[[placeholder_name]] <- array( 53 | unlist(feed_dict[[placeholder_name]]), 54 | tensor_dims_r 55 | ) 56 | } 57 | } 58 | 59 | result <- sess$run( 60 | fetches = fetches_list, 61 | feed_dict = feed_dict 62 | ) 63 | 64 | if (is_multi_instance_tensor) { 65 | for (result_name in names(result)) { 66 | dim(result[[result_name]]) <- dim(result[[result_name]])[-1] 67 | } 68 | } 69 | 70 | result 71 | } 72 | 73 | predict_savedmodel_export <- function(instances, sess, graph, signature_def, signature_name) { 74 | 75 | lapply(instances, function(instance) { 76 | predict_single_savedmodel_export( 77 | instance = instance, 78 | sess = sess, 79 | graph = graph, 80 | signature_def = signature_def, 81 | signature_name = signature_name 82 | ) 83 | }) 84 | 85 | } 86 | 87 | #' Predict using a Loaded SavedModel 88 | #' 89 | #' Performs a prediction using a SavedModel model already loaded using 90 | #' \code{load_savedmodel()}. 91 | #' 92 | #' @inheritParams predict_savedmodel 93 | #' 94 | #' @param sess The active TensorFlow session. 95 | #' 96 | #' @param signature_name The named entry point to use in the model for prediction. 97 | #' 98 | #' @export 99 | predict_savedmodel.graph_prediction <- function( 100 | instances, 101 | model, 102 | sess, 103 | signature_name = "serving_default", 104 | ...) 
{ 105 | 106 | if (grep("MetaGraphDef", class(model)) == 0) 107 | stop("MetaGraphDef type expected but found '", class(model)[[1]], "' instead.") 108 | 109 | signature_def <- model$signature_def 110 | 111 | if (!is.list(instances)) instances <- list(instances) 112 | 113 | predictions <- predict_savedmodel_export( 114 | instances = instances, 115 | sess = sess, 116 | graph = model, 117 | signature_def = signature_def, 118 | signature_name = signature_name 119 | ) 120 | 121 | list(predictions = predictions) %>% 122 | append_predictions_class() 123 | } 124 | -------------------------------------------------------------------------------- /docs/reference/reexports.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Objects exported from other packages — reexports • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

    These objects are imported from other packages. Follow the links 88 | below to see their documentation.

    89 |
    magrittr

    %>%

    90 | 91 |
    tensorflow

    export_savedmodel, view_savedmodel

    92 |
    93 | 94 | 95 | 96 | 97 |
    98 | 104 |
    105 | 106 |
    107 | 110 | 111 |
    112 |

    Site built with pkgdown.

    113 |
    114 | 115 |
    116 |
    117 | 118 | 119 | 120 | -------------------------------------------------------------------------------- /docs/reference/serve.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Serve a TensorFlow Model — serve • tfserve 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 94 | 95 | 96 |
    97 | 98 |
    99 |
    100 | 103 | 104 | 105 |

    Serve a TensorFlow Model into a local REST API.

    106 | 107 | 108 |
    serve(model_path, host = "127.0.0.1", port = 8089, browse = interactive())
    109 | 110 | 111 |
    112 | 118 |
    119 | 120 |
    121 | 124 | 125 |
    126 |

    Site built with pkgdown.

    127 |
    128 | 129 |
    130 |
    131 | 132 | 133 | 134 | -------------------------------------------------------------------------------- /docs/reference/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Function reference • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 88 | 89 |
    90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 | 103 | 104 | 105 | 106 | 109 | 110 | 111 | 112 | 115 | 116 | 117 | 118 | 121 | 122 | 123 | 124 |
    100 |

    Operations

    101 |

    102 |
    107 |

    load_savedmodel

    108 |

    Load a SavedModel

    113 |

    predict_savedmodel

    114 |

    Predict using a SavedModel

    119 |

    serve_savedmodel

    120 |

    Serve a SavedModel

    125 |
    126 |
    127 | 128 | 134 |
    135 | 136 |
    137 | 140 | 141 |
    142 |

    Site built with pkgdown.

    143 |
    144 | 145 |
    146 |
    147 | 148 | 149 | 150 | -------------------------------------------------------------------------------- /docs/reference/mnist_train_save.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Trains and Save MNIST — mnist_train_save • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 97 | 98 | 99 |
    100 | 101 |
    102 |
    103 | 106 | 107 | 108 |

    Trains and saves MNIST using TensorFlow

    109 | 110 | 111 |
    mnist_train_save(model_path, overwrite = FALSE)
    112 | 113 |

    Arguments

    114 | 115 | 116 | 117 | 118 | 119 | 120 |
    model_path

    Destination path where the model will be saved.

    121 | 122 | 123 |
    124 | 131 |
    132 | 133 |
    134 | 137 | 138 |
    139 |

    Site built with pkgdown.

    140 |
    141 | 142 |
    143 |
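`mnist_train_save()` is documented only in this archived pkgdown reference and may not be exported by current releases, so treat the call below as a sketch of the documented signature rather than a guaranteed API:

```r
library(tfdeploy)

# Train the example MNIST model and export it as a SavedModel directory.
# "savedmodel" is a placeholder destination path.
mnist_train_save("savedmodel", overwrite = TRUE)

# The exported directory can then be served locally for testing.
serve_savedmodel("savedmodel")
```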
144 | 145 | 146 | 147 | -------------------------------------------------------------------------------- /vignettes/saved_models.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'Using Saved Models' 3 | output: rmarkdown::html_vignette 4 | vignette: > 5 | %\VignetteIndexEntry{Using Saved Models from R} 6 | %\VignetteEngine{knitr::rmarkdown} 7 | %\VignetteEncoding{UTF-8} 8 | type: docs 9 | repo: https://github.com/rstudio/tfdeploy 10 | menu: 11 | main: 12 | name: "Using Saved Models" 13 | identifier: "tools-tfdeploy-using-saved-models" 14 | parent: "tfdeploy-top" 15 | weight: 20 16 | --- 17 | 18 | ```{r setup, include=FALSE} 19 | knitr::opts_chunk$set(eval = FALSE) 20 | ``` 21 | 22 | ## Overview 23 | 24 | The main goal of the tfdeploy package is to create models in R and then export, test, and deploy those models to environments without R. However, there may be cases when it makes sense to use a saved model directly from R: 25 | 26 | - If another R user has saved and/or deployed a model that you would like to use for predictions from R. 27 | - If you want to use a saved or deployed model in a Shiny application. 28 | - If you want to compare predictions between a saved or deployed model and a new model that is under development. 29 | 30 | One way to use a deployed model from R would be to execute HTTP requests using a package like `httr`. For non-deployed models, it is possible to use `serve_savedmodel()` - as we did for local testing - along with a tool like `httr`. However, there is an easier way to make predictions from a saved model using the `predict_savedmodel()` function. 31 | 32 | ## Example 33 | 34 | Using the same MNIST model described previously, we can easily make predictions for new pre-processed images. For example, we can load the MNIST test data set and create predictions for the first 10 images: 35 | 36 | ```{r} 37 | library(keras) 38 | library(tfdeploy) 39 | 40 | test_images <- dataset_mnist()$test$x 41 | test_images <- array_reshape(test_images, dim = c(nrow(test_images), 784)) / 255 42 | test_images <- lapply(1:10, function(i) {test_images[i,]}) 43 | 44 | predict_savedmodel(test_images, 'savedmodel') 45 | ``` 46 | ``` 47 | Prediction 1: 48 | $prediction 49 | [1] 3.002971e-37 8.401216e-29 2.932129e-24 4.048731e-22 0.000000e+00 9.172148e-37 50 | [7] 0.000000e+00 1.000000e+00 4.337524e-31 1.772979e-17 51 | 52 | Prediction 2: 53 | $prediction 54 | [1] 0.000000e+00 4.548326e-22 1.000000e+00 2.261879e-31 0.000000e+00 0.000000e+00 55 | [7] 0.000000e+00 0.000000e+00 2.390626e-38 0.000000e+00 56 | 57 | ... 58 | ``` 59 | 60 | A few things to keep in mind: 61 | 62 | 1. Just like the HTTP POST requests, `predict_savedmodel()` expects the new instance data to be pre-processed. 63 | 64 | 2. `predict_savedmodel()` requires the new data to be in a list, and it always returns a list. This requirement facilitates models with more complex inputs or outputs. 65 | 66 | In the previous example we used `predict_savedmodel()` with the directory, 'savedmodel', which was created with the `export_savedmodel()` function. In addition to providing a path to a saved model directory, `predict_savedmodel()` can also be used with a deployed model by supplying a REST URL, a CloudML model by supplying a CloudML name and version, or by supplying a graph object loaded with `load_savedmodel()`. 67 | 68 | The last option above references the `load_savedmodel()` function.
`load_savedmodel()` should be used alongside `predict_savedmodel()` if you'll be calling the prediction function multiple times. `load_savedmodel()` effectively caches the model graph in memory and can speed up repeated calls to `predict_savedmodel()`. This caching is useful, for example, in a Shiny application where user input would drive calls to `predict_savedmodel()`. 69 | 70 | 71 | ```{r} 72 | # if there will only be one batch of predictions 73 | predict_savedmodel(instances, 'savedmodel') 74 | 75 | # if there will be multiple batches of predictions 76 | sess <- tensorflow::tf$Session() 77 | graph <- load_savedmodel(sess, 'savedmodel') 78 | predict_savedmodel(instances, graph) 79 | # ... more work ... 80 | predict_savedmodel(instances, graph) 81 | ``` 82 | 83 | 84 | 85 | ## Model Representations 86 | 87 | There are a few distinct ways that a model can be represented in R. The most straightforward representation is the in-memory, R model object. This object is what is created and used while developing and training a model. 88 | 89 | A second representation is the on-disk saved model. This representation of the model can be used by the `*_savedmodel` functions. As a special case, `load_savedmodel()` creates a new R object pointing to the model graph. It is important to keep in mind that these saved models are not the full R model object. For example, you cannot update or re-train a graph from a saved model. 90 | 91 | Finally, for Keras models there are two other representations: HDF5 files and serialized R objects. Each of these representations captures the entire in-memory R object. For example, using `save_model_hdf5()` and then `load_model_hdf5()` will result in a model that can be updated or retrained. Use the `serialize_model()` and `unserialize_model()` functions to save models as R objects. 92 | 93 | ### What representation should I use? 94 | 95 | If you are developing a model and have access to the in-memory R model object, you should use the model object for predictions using R's `predict` function. 96 | 97 | If you are developing a Keras model and would like to save the model for use in a different session, you should use the HDF5 file or serialize the model and then save it to an R data format like RDS. 98 | 99 | If you are going to deploy a model and want to test its HTTP interface, you should export the model using `export_savedmodel()` and then test with either `serve_savedmodel()` and your HTTP client or `predict_savedmodel()`. 100 | 101 | If you are using R and want to create predictions from a deployed or saved model, and you don't have access to the in-memory R model object, you should use `predict_savedmodel()`. 102 | 103 | 104 | -------------------------------------------------------------------------------- /docs/reference/convert_savedmodel.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Converts a SavedModel — convert_savedmodel • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
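To make the Keras-specific representations discussed at the end of the vignette above concrete, here is a minimal sketch; the `model` object and the file paths are placeholders for an in-memory Keras model of your own:

```r
library(keras)

# Full-fidelity copies of an in-memory Keras model (can be retrained later):
save_model_hdf5(model, "model.h5")
restored <- load_model_hdf5("model.h5")

model_bytes <- serialize_model(model)    # raw vector, safe to store with saveRDS()
saveRDS(model_bytes, "model.rds")
restored2 <- unserialize_model(readRDS("model.rds"))

# Deployment-oriented copy: a SavedModel export that predict/serve can use,
# but that cannot be updated or retrained from R.
export_savedmodel(model, "savedmodel")
```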
    41 |
    42 | 88 | 89 | 90 |
    91 | 92 |
    93 |
    94 | 97 | 98 | 99 |

Converts a TensorFlow SavedModel into other model formats.

    100 | 101 | 102 |
    convert_savedmodel(model_dir = NULL, format = c("tflite"),
    103 |   target = paste("savedmodel", format, sep = "."),
    104 |   signature_name = "serving_default", ...)
    105 | 106 |

    Arguments

    107 | 108 | 109 | 110 | 111 | 112 | 113 | 114 | 115 | 117 | 118 | 119 | 120 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 131 | 132 |
    model_dir

    The path to the exported model, as a string.

    format

    The target format for the converted model. Valid values 116 | are tflite.

    target

    The conversion target, currently only '.tflite' 121 | extensions supported to perform TensorFlow lite conversion.

    signature_name

    The named entry point to use in the model for prediction.

    ...

    Additional arguments. See ?convert_savedmodel.tflite_conversion 130 | for additional options.

    133 | 134 | 135 |
    136 | 143 |
    144 | 145 |
    146 | 149 | 150 |
    151 |

    Site built with pkgdown.

    152 |
    153 | 154 |
    155 |
    156 | 157 | 158 | 159 | -------------------------------------------------------------------------------- /docs/reference/serve_savedmodel.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Serve a SavedModel — serve_savedmodel • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

    Serve a TensorFlow SavedModel as a local web api under 88 | http://localhost:8089.

    89 | 90 | 91 |
    serve_savedmodel(model_dir, host = "127.0.0.1", port = 8089,
     92 |   daemonized = FALSE, browse = !daemonized)
    93 | 94 |

    Arguments

    95 | 96 | 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 | 114 | 115 | 116 | 117 | 118 | 119 |
    model_dir

    The path to the exported model, as a string.

    host

    Address to use to serve model, as a string.

    port

    Port to use to serve model, as numeric.

    daemonized

    Makes 'httpuv' server daemonized so R interactive sessions 112 | are not blocked to handle requests. To terminate a daemonized server, call 113 | 'httpuv::stopDaemonizedServer()' with the handle returned from this call.

    browse

    Launch browser with serving landing page?

    120 | 121 |

    See also

    122 | 123 |

    export_savedmodel()

    124 | 125 | 126 |

    Examples

    127 |
    # NOT RUN {
    128 | # serve an existing model over a web interface
    129 | tfdeploy::serve_savedmodel(
    130 |   system.file("models/tensorflow-mnist", package = "tfdeploy")
    131 | )
    132 | # }
    133 |
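When serving from an interactive session, the `daemonized` argument described above keeps the R console free; a minimal sketch (model path and port are placeholders):

```r
library(tfdeploy)

# Start the server without blocking the interactive R session.
handle <- serve_savedmodel("savedmodel", port = 8089, daemonized = TRUE)

# ... send requests to http://127.0.0.1:8089/serving_default/predict/ ...

# Shut the daemonized server down using the handle returned above.
httpuv::stopDaemonizedServer(handle)
```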
    134 | 145 |
    146 | 147 |
    148 | 151 | 152 |
    153 |

    Site built with pkgdown.

    154 |
    155 | 156 |
    157 |
    158 | 159 | 160 | 161 | -------------------------------------------------------------------------------- /docs/reference/convert_savedmodel.tflite_conversion.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Convert to TensorFlow Lite — convert_savedmodel.tflite_conversion • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 88 | 89 | 90 |
    91 | 92 |
    93 |
    94 | 97 | 98 | 99 |

    Converts a model to TensorFlow Lite.

    100 | 101 | 102 |
    # S3 method for tflite_conversion
    103 | convert_savedmodel(model_dir = NULL,
    104 |   target = "savedmodel.tflite", signature_name = "serving_default",
    105 |   inference_type = "FLOAT", quantized_input_stats = NULL,
    106 |   drop_control_dependency = TRUE, ...)
    107 | 108 |

    Arguments

    109 | 110 | 111 | 112 | 113 | 114 | 115 | 116 | 117 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 129 | 130 | 131 | 132 | 134 | 135 | 136 | 137 | 139 | 140 |
    model_dir

    The path to the exported model, as a string.

    target

    The conversion target, currently only '.tflite' 118 | extensions supported to perform TensorFlow lite conversion.

    signature_name

    The named entry point to use in the model for prediction.

    quantized_input_stats

    For each member of input_tensors the mean and 127 | std deviation of training data. Only needed inference_type is 128 | "QUANTIZED_UINT8".

    ...

    Additional arguments. See ?convert_savedmodel.tflite_conversion 133 | for additional options.

    drop_control_dependency:

    Drops control dependencies silently. This is 138 | due to tf lite not supporting control dependencies.

    141 | 142 | 143 |
    144 | 151 |
    152 | 153 |
    154 | 157 | 158 |
    159 |

    Site built with pkgdown.

    160 |
    161 | 162 |
    163 |
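Combining the defaults shown in the two conversion pages above, a TensorFlow Lite conversion might look like the sketch below; `convert_savedmodel()` appears only in this archived reference, so confirm it is still exported by your installed version before relying on it:

```r
library(tfdeploy)

# Convert a previously exported SavedModel directory ("savedmodel" is a
# placeholder path) into a TensorFlow Lite file using the documented defaults.
convert_savedmodel(
  model_dir      = "savedmodel",
  format         = "tflite",
  target         = "savedmodel.tflite",
  signature_name = "serving_default"
)
```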
    164 | 165 | 166 | 167 | -------------------------------------------------------------------------------- /R/serve.R: -------------------------------------------------------------------------------- 1 | #' Serve a SavedModel 2 | #' 3 | #' Serve a TensorFlow SavedModel as a local web api. 4 | #' 5 | #' @param model_dir The path to the exported model, as a string. 6 | #' @param host Address to use to serve model, as a string. 7 | #' @param port Port to use to serve model, as numeric. 8 | #' @param daemonized Makes 'httpuv' server daemonized so R interactive sessions 9 | #' are not blocked to handle requests. To terminate a daemonized server, call 10 | #' 'httpuv::stopDaemonizedServer()' with the handle returned from this call. 11 | #' @param browse Launch browser with serving landing page? 12 | #' 13 | #' @seealso [export_savedmodel()] 14 | #' 15 | #' @examples 16 | #' \dontrun{ 17 | #' # serve an existing model over a web interface 18 | #' tfdeploy::serve_savedmodel( 19 | #' system.file("models/tensorflow-mnist", package = "tfdeploy") 20 | #' ) 21 | #' } 22 | #' @importFrom httpuv runServer 23 | #' @import swagger 24 | #' @export 25 | serve_savedmodel <- function( 26 | model_dir, 27 | host = "127.0.0.1", 28 | port = 8089, 29 | daemonized = FALSE, 30 | browse = !daemonized 31 | ) { 32 | httpuv_start <- if (daemonized) httpuv::startDaemonizedServer else httpuv::runServer 33 | serve_run(model_dir, host, port, httpuv_start, browse && interactive()) 34 | } 35 | 36 | serve_content_type <- function(file_path) { 37 | file_split <- strsplit(file_path, split = "\\.")[[1]] 38 | switch(file_split[[length(file_split)]], 39 | "css" = "text/css", 40 | "html" = "text/html", 41 | "js" = "application/javascript", 42 | "json" = "application/json", 43 | "map" = "text/plain", 44 | "png" = "image/png" 45 | ) 46 | } 47 | 48 | serve_static_file_response <- function(package, file_path, replace = NULL) { 49 | file_path <- system.file(file_path, package = package) 50 | file_contents <- if (file.exists(file_path)) readBin(file_path, "raw", n = file.info(file_path)$size) else NULL 51 | 52 | if (!is.null(remove)) { 53 | contents <- rawToChar(file_contents) 54 | for (r in names(replace)) { 55 | contents <- sub(r, replace[[r]], contents) 56 | } 57 | file_contents <- charToRaw(enc2utf8(contents)) 58 | } 59 | 60 | list( 61 | status = 200L, 62 | headers = list( 63 | "Content-Type" = paste0(serve_content_type(file_path)) 64 | ), 65 | body = file_contents 66 | ) 67 | } 68 | 69 | serve_invalid_request <- function(message = NULL) { 70 | list( 71 | status = 404L, 72 | headers = list( 73 | "Content-Type" = "text/plain; charset=UTF-8" 74 | ), 75 | body = charToRaw(enc2utf8( 76 | paste( 77 | "Invalid Request. 
", 78 | message 79 | ) 80 | )) 81 | ) 82 | } 83 | 84 | serve_empty_page <- function(req, sess, graph) { 85 | list( 86 | status = 200L, 87 | headers = list( 88 | "Content-Type" = "text/html" 89 | ), 90 | body = "" 91 | ) 92 | } 93 | 94 | serve_handlers <- function(host, port) { 95 | handlers <- list( 96 | "^/swagger.json" = function(req, sess, graph) { 97 | list( 98 | status = 200L, 99 | headers = list( 100 | "Content-Type" = paste0(serve_content_type("json"), "; charset=UTF-8") 101 | ), 102 | body = charToRaw(enc2utf8( 103 | swagger_from_signature_def(graph$signature_def) 104 | )) 105 | ) 106 | }, 107 | "^/$" = function(req, sess, graph) { 108 | serve_static_file_response( 109 | "swagger", 110 | "dist/index.html", 111 | list( 112 | "http://petstore\\.swagger\\.io/v2" = "", 113 | "layout: \"StandaloneLayout\"" = "layout: \"StandaloneLayout\",\nvalidatorUrl : false" 114 | ) 115 | ) 116 | }, 117 | "^/[^/]*$" = function(req, sess, graph) { 118 | serve_static_file_response("swagger", file.path("dist", req$PATH_INFO)) 119 | }, 120 | "^/[^/]+/predict" = function(req, sess, graph) { 121 | signature_name <- strsplit(req$PATH_INFO, "/")[[1]][[2]] 122 | 123 | json_raw <- req$rook.input$read() 124 | 125 | instances <- list() 126 | if (length(json_raw) > 0) { 127 | body <- jsonlite::fromJSON( 128 | rawToChar(json_raw), 129 | simplifyDataFrame = FALSE, 130 | simplifyMatrix = FALSE 131 | ) 132 | 133 | instances <- body$instances 134 | 135 | if (!is.null(body$signature_name)) { 136 | signature_name <- body$signature_name 137 | } 138 | } 139 | 140 | result <- predict_savedmodel( 141 | instances, 142 | graph, 143 | type = "graph", 144 | sess = sess, 145 | signature_name = signature_name 146 | ) 147 | 148 | list( 149 | status = 200L, 150 | headers = list( 151 | "Content-Type" = paste0(serve_content_type("json"), "; charset=UTF-8") 152 | ), 153 | body = charToRaw(enc2utf8( 154 | jsonlite::toJSON(result, auto_unbox = TRUE) 155 | )) 156 | ) 157 | }, 158 | ".*" = function(req, sess, graph) { 159 | stop("Invalid path.") 160 | } 161 | ) 162 | 163 | if (!getOption("tfdeploy.swagger", default = TRUE)) { 164 | handlers[["^/swagger.json"]] <- serve_empty_page 165 | handlers[["^/$"]] <- serve_empty_page 166 | } 167 | 168 | handlers 169 | } 170 | 171 | message_serve_start <- function(host, port, graph) { 172 | hostname <- paste("http://", host, ":", port, sep = "") 173 | 174 | message() 175 | message("Starting server under ", hostname, " with the following API entry points:") 176 | 177 | for (signature_name in py_dict_get_keys(graph$signature_def)) { 178 | message(" ", hostname, "/", signature_name, "/predict/") 179 | } 180 | } 181 | 182 | serve_run <- function(model_dir, host, port, start, browse) { 183 | with_new_session(function(sess) { 184 | 185 | graph <- load_savedmodel(sess, model_dir) 186 | 187 | message_serve_start(host, port, graph) 188 | 189 | if (browse) utils::browseURL(paste0("http://", host, ":", port)) 190 | 191 | handlers <- serve_handlers(host, port) 192 | 193 | start(host, port, list( 194 | onHeaders = function(req) { 195 | NULL 196 | }, 197 | call = function(req) { 198 | tryCatch({ 199 | matches <- sapply(names(handlers), function(e) grepl(e, req$PATH_INFO)) 200 | handlers[matches][[1]](req, sess, graph) 201 | }, error = function(e) { 202 | serve_invalid_request(e$message) 203 | }) 204 | }, 205 | onWSOpen = function(ws) { 206 | NULL 207 | } 208 | )) 209 | 210 | }) 211 | } 212 | -------------------------------------------------------------------------------- 
/docs/reference/predict_savedmodel.webapi_prediction.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Predict using a Web API — predict_savedmodel.webapi_prediction • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

    Performs a prediction using a Web API providing a SavedModel.

    88 | 89 | 90 |
    # S3 method for webapi_prediction
     91 | predict_savedmodel(instances, model, ...)
    92 | 93 |

    Arguments

    94 | 95 | 96 | 97 | 98 | 100 | 101 | 102 | 103 | 113 | 114 | 115 | 116 | 126 | 127 |
    instances

    A list of prediction instances to be passed as input tensors 99 | to the service. Even for single predictions, a list with one entry is expected.

    model

    The model as a local path, a REST url, CloudML name or graph object.

    104 |

A local path can be exported using export_savedmodel(), a REST URL 105 | can be created using serve_savedmodel(), a CloudML model can be deployed 106 | using cloudml::cloudml_deploy() and a graph object loaded using 107 | load_savedmodel().

    108 |

    Notice that predicting over a CloudML model requires a version 109 | parameter to identify the model.

    110 |

    A type parameter can be specified to explicitly choose the type model 111 | performing the prediction. Valid values are cloudml, export, 112 | webapi and graph.

    ...

    See predict_savedmodel.export_prediction(), 117 | predict_savedmodel.graph_prediction(), 118 | predict_savedmodel.webapi_prediction() 119 | and predict_savedmodel.cloudml_prediction() for additional options.

    120 |

Implementations:

    128 | 129 | 130 |
    131 | 138 |
    139 | 140 |
    141 | 144 | 145 |
    146 |

    Site built with pkgdown.

    147 |
    148 | 149 |
    150 |
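As a concrete sketch of the web API path: serve the export locally, then point `predict_savedmodel()` at the endpoint that `serve_savedmodel()` prints. The exact URL shape accepted by the webapi method is an assumption here; adjust host, port, and signature name to your setup:

```r
library(tfdeploy)

# Expose the exported model over HTTP without blocking the session.
serve_savedmodel("savedmodel", port = 8089, daemonized = TRUE)

# An http(s) URL dispatches to the webapi method; type = "webapi" makes it explicit.
predict_savedmodel(
  list(rep(9, 784)),
  "http://127.0.0.1:8089/serving_default/predict/",
  type = "webapi"
)
```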
    151 | 152 | 153 | 154 | -------------------------------------------------------------------------------- /docs/reference/predict_savedmodel.cloudml_prediction.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Predict using a CloudML SavedModel — predict_savedmodel.cloudml_prediction • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

    Performs a prediction using a CloudML model.

    88 | 89 | 90 |
    # S3 method for cloudml_prediction
     91 | predict_savedmodel(instances, model,
     92 |   version = NULL, ...)
    93 | 94 |

    Arguments

    95 | 96 | 97 | 98 | 99 | 101 | 102 | 103 | 104 | 114 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 131 | 132 |
    instances

    A list of prediction instances to be passed as input tensors 100 | to the service. Even for single predictions, a list with one entry is expected.

    model

    The model as a local path, a REST url, CloudML name or graph object.

    105 |

A local path can be exported using export_savedmodel(), a REST URL 106 | can be created using serve_savedmodel(), a CloudML model can be deployed 107 | using cloudml::cloudml_deploy() and a graph object loaded using 108 | load_savedmodel().

    109 |

    Notice that predicting over a CloudML model requires a version 110 | parameter to identify the model.

    111 |

    A type parameter can be specified to explicitly choose the type model 112 | performing the prediction. Valid values are cloudml, export, 113 | webapi and graph.

    version

    The version of the CloudML model.

    ...

    See predict_savedmodel.export_prediction(), 122 | predict_savedmodel.graph_prediction(), 123 | predict_savedmodel.webapi_prediction() 124 | and predict_savedmodel.cloudml_prediction() for additional options.

    125 |

Implementations:

    133 | 134 | 135 |
    136 | 143 |
    144 | 145 |
    146 | 149 | 150 |
    151 |

    Site built with pkgdown.

    152 |
    153 | 154 |
    155 |
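A hypothetical call following this archived reference page; the model name and version are placeholders, the cloudml package and a model already deployed with `cloudml::cloudml_deploy()` are assumed, and current tfdeploy versions may no longer ship this method:

```r
library(tfdeploy)

# Predict against a deployed CloudML model; `version` identifies the deployment.
predict_savedmodel(
  list(rep(9, 784)),
  model   = "mnist",
  type    = "cloudml",
  version = "v1"
)
```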
    156 | 157 | 158 | 159 | -------------------------------------------------------------------------------- /R/swagger.R: -------------------------------------------------------------------------------- 1 | #' @import jsonlite 2 | 3 | swagger_from_signature_def <- function( 4 | signature_def) { 5 | def <- c( 6 | swagger_header(), 7 | swagger_paths(signature_def), 8 | swagger_defs(signature_def) 9 | ) 10 | 11 | jsonlite::toJSON(def) 12 | } 13 | 14 | swagger_header <- function() { 15 | list( 16 | swagger = unbox("2.0"), 17 | info = list( 18 | description = unbox("API to TensorFlow Model."), 19 | version = unbox("1.0.0"), 20 | title = unbox("TensorFlow Model") 21 | ), 22 | basePath = unbox("/"), 23 | schemes = list( 24 | unbox("http") 25 | ) 26 | ) 27 | } 28 | 29 | swagger_path <- function(signature_name, signature_id) { 30 | list( 31 | post = list( 32 | summary = unbox(paste0("Perform prediction over '", signature_name, "'")), 33 | description = unbox(""), 34 | consumes = list( 35 | unbox("application/json") 36 | ), 37 | produces = list( 38 | unbox("application/json") 39 | ), 40 | parameters = list( 41 | list( 42 | "in" = unbox("body"), 43 | name = unbox("body"), 44 | description = unbox(paste0("Prediction instances for '", signature_name, "'")), 45 | required = unbox(TRUE), 46 | schema = list( 47 | "$ref" = unbox(paste0("#/definitions/Type", signature_id)) 48 | ) 49 | ) 50 | ), 51 | responses = list( 52 | "200" = list( 53 | description = unbox("Success") 54 | ) 55 | ) 56 | ) 57 | ) 58 | } 59 | 60 | swagger_paths <- function(signature_def) { 61 | path_names <- py_dict_get_keys(signature_def) 62 | path_values <- lapply(seq_along(path_names), function(path_index) { 63 | swagger_path(path_names[[path_index]], path_index) 64 | }) 65 | names(path_values) <- path_names 66 | 67 | if (tensorflow::tf_version() >= "2.0") { 68 | serving_default <- tf$saved_model$DEFAULT_SERVING_SIGNATURE_DEF_KEY 69 | } else { 70 | serving_default <- tf$saved_model$signature_constants$DEFAULT_SERVING_SIGNATURE_DEF_KEY 71 | } 72 | 73 | if (!serving_default %in% path_names) { 74 | warning( 75 | "Signature '", 76 | serving_default, 77 | "' is missing but is required for some services like CloudML." 
78 | ) 79 | } 80 | else { 81 | # make serving default first entry in swagger-ui 82 | path_names <- path_names[path_names != serving_default] 83 | serving_default_value <- path_values[[serving_default]] 84 | path_values[[serving_default]] <- NULL 85 | 86 | path_names <- c(serving_default, path_names) 87 | path_values <- c(list(serving_default_value), path_values) 88 | } 89 | 90 | full_urls <- paste0("/", path_names, "/predict/") 91 | 92 | names(path_values) <- full_urls 93 | 94 | path_values[order(unlist(path_values), decreasing=TRUE)] 95 | 96 | list( 97 | paths = path_values 98 | ) 99 | } 100 | 101 | swagger_dtype_to_swagger <- function(dtype) { 102 | # DTypes: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/dtypes.py 103 | # Swagger: https://swagger.io/docs/specification/data-models/data-types/ 104 | 105 | regex_mapping <- list( 106 | "int32" = list(type = "integer", format = "int32"), 107 | "int64" = list(type = "integer", format = "int64"), 108 | "int" = list(type = "integer", format = ""), 109 | "float" = list(type = "number", format = "float"), 110 | "complex" = list(type = "number", format = ""), 111 | "string" = list(type = "string", format = ""), 112 | "bool" = list(type = "boolean", format = "") 113 | ) 114 | 115 | regex_name <- Filter(function(r) grepl(r, dtype$name), names(regex_mapping)) 116 | if (length(regex_name) == 0) { 117 | stop("Failed to map dtype ", dtype$name, " to swagger type.") 118 | } 119 | 120 | result <- regex_mapping[[regex_name[[1]]]] 121 | 122 | lapply(result, jsonlite::unbox) 123 | } 124 | 125 | swagger_type_to_example <- function(type) { 126 | switch(type, 127 | integer = 0.0, 128 | number = 0.0, 129 | string = "ABC", 130 | boolean = TRUE 131 | ) 132 | } 133 | 134 | swagger_input_tensor_def <- function(signature_entry, tensor_input_name) { 135 | tensor_input <- signature_entry$inputs$get(tensor_input_name) 136 | 137 | tensor_input_dim <- tensor_input$tensor_shape$dim 138 | tensor_input_dim_len <- tensor_input_dim$`__len__`() 139 | 140 | is_multi_instance_tensor <- tensor_is_multi_instance(tensor_input) 141 | 142 | properties_def <- list( 143 | b64 = list( 144 | type = unbox("string"), 145 | example = unbox("") 146 | ) 147 | ) 148 | 149 | tensor_input_example_length <- 1 150 | if (tensor_input_dim_len > 0) 151 | tensor_input_example_length <- tensor_input$tensor_shape$dim[[tensor_input_dim_len - 1]]$size 152 | 153 | swagger_items <- swagger_dtype_to_swagger(tf$DType(tensor_input$dtype)) 154 | swagger_example <- swagger_type_to_example(swagger_items$type) 155 | 156 | swagger_type_def <- list( 157 | type = unbox("object"), 158 | items = swagger_items, 159 | example = rep(swagger_example, max(1, tensor_input_example_length)) 160 | ) 161 | 162 | if (tensor_input_dim_len > 0) { 163 | dim_seq <- seq_len(tensor_input_dim_len - 1) 164 | if (is_multi_instance_tensor) 165 | dim_seq <- dim_seq[-1] 166 | 167 | for (idx in dim_seq) { 168 | swagger_type_def <- list( 169 | type = unbox("array"), 170 | items = swagger_type_def 171 | ) 172 | } 173 | } 174 | 175 | swagger_type_def$properties = properties_def 176 | 177 | swagger_type_def 178 | } 179 | 180 | swagger_def <- function(signature_entry, signature_id) { 181 | tensor_input_names <- py_dict_get_keys(signature_entry$inputs) 182 | 183 | swagger_input_defs <- lapply(tensor_input_names, function(tensor_input_name) { 184 | swagger_input_tensor_def( 185 | signature_entry, 186 | tensor_input_name 187 | ) 188 | }) 189 | names(swagger_input_defs) <- tensor_input_names 190 | 191 | if 
(length(tensor_input_names) == 1) { 192 | properties_def <- tensor_input_names[[1]] 193 | } else { 194 | properties_def <- tensor_input_names 195 | } 196 | 197 | list( 198 | type = unbox("object"), 199 | properties = list( 200 | instances = list( 201 | type = unbox("array"), 202 | items = list( 203 | type = unbox("object"), 204 | properties = swagger_input_defs 205 | ) 206 | ) 207 | ) 208 | ) 209 | } 210 | 211 | swagger_defs <- function(signature_def) { 212 | defs_names <- py_dict_get_keys(signature_def) 213 | defs_values <- lapply(seq_along(defs_names), function(defs_index) { 214 | swagger_def(signature_def$get(defs_names[[defs_index]]), defs_index) 215 | }) 216 | names(defs_values) <- lapply(seq_along(defs_names), function(def_idx) { 217 | paste0("Type", def_idx) 218 | }) 219 | 220 | list( 221 | definitions = defs_values 222 | ) 223 | } 224 | -------------------------------------------------------------------------------- /docs/reference/predict_savedmodel.export_prediction.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Predict using an Exported SavedModel — predict_savedmodel.export_prediction • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
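The specification assembled by `swagger_from_signature_def()` above is what the server returns from its `/swagger.json` endpoint; a quick way to inspect it while `serve_savedmodel()` is running (a sketch assuming the default host and port):

```r
# With serve_savedmodel() active on the default 127.0.0.1:8089:
spec <- httr::content(
  httr::GET("http://127.0.0.1:8089/swagger.json"),
  as = "parsed"
)

names(spec$paths)                      # one "/<signature_name>/predict/" entry per signature
str(spec$definitions, max.level = 2)   # input schemas derived from tensor dtypes/shapes
```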
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

    Performs a prediction using a locally exported SavedModel.

    88 | 89 | 90 |
    # S3 method for export_prediction
     91 | predict_savedmodel(instances, model,
     92 |   signature_name = "serving_default", ...)
    93 | 94 |

    Arguments

    95 | 96 | 97 | 98 | 99 | 101 | 102 | 103 | 104 | 114 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 131 | 132 |
    instances

    A list of prediction instances to be passed as input tensors 100 | to the service. Even for single predictions, a list with one entry is expected.

    model

    The model as a local path, a REST url, CloudML name or graph object.

    105 |

A local path can be exported using export_savedmodel(), a REST URL 106 | can be created using serve_savedmodel(), a CloudML model can be deployed 107 | using cloudml::cloudml_deploy() and a graph object loaded using 108 | load_savedmodel().

    109 |

    Notice that predicting over a CloudML model requires a version 110 | parameter to identify the model.

    111 |

    A type parameter can be specified to explicitly choose the type model 112 | performing the prediction. Valid values are cloudml, export, 113 | webapi and graph.

    signature_name

    The named entry point to use in the model for prediction.

    ...

    See predict_savedmodel.export_prediction(), 122 | predict_savedmodel.graph_prediction(), 123 | predict_savedmodel.webapi_prediction() 124 | and predict_savedmodel.cloudml_prediction() for additional options.

    125 |

Implementations:

    133 | 134 | 135 |
    136 | 143 |
    144 | 145 |
    146 | 149 | 150 |
    151 |

    Site built with pkgdown.

    152 |
    153 | 154 |
    155 |
    156 | 157 | 158 | 159 | -------------------------------------------------------------------------------- /docs/reference/load_savedmodel.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Load a SavedModel — load_savedmodel • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

    Loads a SavedModel using the given TensorFlow session and 88 | returns the model's graph.

    89 | 90 | 91 |
    load_savedmodel(sess, model_dir = NULL)
    92 | 93 |

    Arguments

    94 | 95 | 96 | 97 | 98 | 99 | 100 | 101 | 102 | 104 | 105 |
    sess

    The TensorFlow session.

    model_dir

    The path to the exported model, as a string. Defaults to 103 | a "savedmodel" path or the latest training run.

    106 | 107 |

    Details

    108 | 109 |

    Loading a model improves performance over multiple predict_savedmodel() 110 | calls.

    111 | 112 |

    See also

    113 | 114 |

    export_savedmodel(), predict_savedmodel()

    115 | 116 | 117 |

    Examples

    118 |
    # NOT RUN {
    119 | # start session
    120 | sess <- tensorflow::tf$Session()
    121 | 
    122 | # preload an existing model into a TensorFlow session
    123 | graph <- tfdeploy::load_savedmodel(
    124 |   sess,
    125 |   system.file("models/tensorflow-mnist", package = "tfdeploy")
    126 | )
    127 | 
    128 | # perform prediction based on a pre-loaded model
    129 | tfdeploy::predict_savedmodel(
    130 |   list(rep(9, 784)),
    131 |   graph
    132 | )
    133 | 
    134 | # close session
    135 | sess$close()
    136 | # }
    137 |
    138 |
    139 | 152 |
    153 | 154 |
    155 | 158 | 159 |
    160 |

    Site built with pkgdown.

    161 |
    162 | 163 |
    164 |
    165 | 166 | 167 | 168 | -------------------------------------------------------------------------------- /docs/reference/predict_savedmodel.graph_prediction.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Predict using a Loaded SavedModel — predict_savedmodel.graph_prediction • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

Performs a prediction using a SavedModel already loaded via 88 | load_savedmodel().

    89 | 90 | 91 |
    # S3 method for graph_prediction
     92 | predict_savedmodel(instances, model, sess,
     93 |   signature_name = "serving_default", ...)
    94 | 95 |

    Arguments

    96 | 97 | 98 | 99 | 100 | 102 | 103 | 104 | 105 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 136 | 137 |
    instances

    A list of prediction instances to be passed as input tensors 101 | to the service. Even for single predictions, a list with one entry is expected.

    model

    The model as a local path, a REST url, CloudML name or graph object.

    106 |

A local path can be exported using export_savedmodel(), a REST URL 107 | can be created using serve_savedmodel(), a CloudML model can be deployed 108 | using cloudml::cloudml_deploy() and a graph object loaded using 109 | load_savedmodel().

    110 |

    Notice that predicting over a CloudML model requires a version 111 | parameter to identify the model.

    112 |

A type parameter can be specified to explicitly choose the type of model 113 | performing the prediction. Valid values are cloudml, export, 114 | webapi and graph.

    sess

    The active TensorFlow session.

    signature_name

    The named entry point to use in the model for prediction.

    ...

    See predict_savedmodel.export_prediction(), 127 | predict_savedmodel.graph_prediction(), 128 | predict_savedmodel.webapi_prediction() 129 | and predict_savedmodel.cloudml_prediction() for additional options.

    130 |

Implementations

    138 | 139 | 140 |
    141 | 148 |
    149 | 150 |
    151 | 154 | 155 |
    156 |

    Site built with pkgdown.

    157 |
    158 | 159 |
    160 |
    161 | 162 | 163 | 164 | -------------------------------------------------------------------------------- /docs/reference/predict_savedmodel.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Predict using a SavedModel — predict_savedmodel • tfdeploy 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 35 | 36 | 37 | 38 | 39 | 40 |
    41 |
    42 | 76 | 77 | 78 |
    79 | 80 |
    81 |
    82 | 85 | 86 | 87 |

Runs a prediction over a saved model file, a local service, or a CloudML model.

    88 | 89 | 90 |
    predict_savedmodel(instances, model, ...)
    91 | 92 |

    Arguments

    93 | 94 | 95 | 96 | 97 | 99 | 100 | 101 | 102 | 112 | 113 | 114 | 115 | 125 | 126 |
    instances

    A list of prediction instances to be passed as input tensors 98 | to the service. Even for single predictions, a list with one entry is expected.

    model

    The model as a local path, a REST url, CloudML name or graph object.

    103 |

A local path can be exported using export_savedmodel(), a REST URL 104 | can be created using serve_savedmodel(), a CloudML model can be deployed 105 | using cloudml::cloudml_deploy() and a graph object loaded using 106 | load_savedmodel().

    107 |

    Notice that predicting over a CloudML model requires a version 108 | parameter to identify the model.

    109 |

A type parameter can be specified to explicitly choose the type of model 110 | performing the prediction. Valid values are cloudml, export, 111 | webapi and graph.

    ...

    See predict_savedmodel.export_prediction(), 116 | predict_savedmodel.graph_prediction(), 117 | predict_savedmodel.webapi_prediction() 118 | and predict_savedmodel.cloudml_prediction() for additional options.

    119 |

Implementations

    127 | 128 |

    See also

    129 | 130 |

    export_savedmodel(), serve_savedmodel(), load_savedmodel()

    131 | 132 | 133 |

    Examples

    134 |
    # NOT RUN {
    135 | # perform prediction based on an existing model
    136 | tfdeploy::predict_savedmodel(
    137 |   list(rep(9, 784)),
    138 |   system.file("models/tensorflow-mnist", package = "tfdeploy")
    139 | )
    140 | # }
    141 |
    142 |
    143 | 154 |
    155 | 156 |
    157 | 160 | 161 |
    162 |

    Site built with pkgdown.

    163 |
    164 | 165 |
    166 |
    167 | 168 | 169 | 170 | -------------------------------------------------------------------------------- /docs/articles/guides-exporting.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Exporting Models • tfdeploy 9 | 10 | 11 | 12 | 16 | 17 | 18 |
    19 |
    66 | 67 | 68 | 69 |
    70 |
    71 | 75 | 76 | 77 | 78 |
    79 |

The first step towards deploying a trained model is to export that model from TensorFlow. This guide demonstrates how to export SavedModels from TensorFlow, tfestimators, and keras using export_savedmodel().

    80 |
    81 |

    82 | TensorFlow

    83 |

    Exporting a SavedModel using the core TensorFlow API requires input and output tensors to be passed to export_savedmodel().

    84 |

For instance, one can train MNIST as described in MNIST For ML Beginners, keep track of the input and output tensors (named x and y in that tutorial), and then call export_savedmodel() as follows:

    85 |
    export_savedmodel(
     86 |   sess,
     87 |   "tensorflow-mnist",
     88 |   inputs = list(images = x),
     89 |   outputs = list(scores = y)
     90 | )
    91 |

For convenience, a sample training script is included in tfdeploy to train and export this MNIST model as follows:

    92 |
    tfruns::training_run(
     93 |   system.file("models/tensorflow-mnist.R", package = "tfdeploy")
     94 | )
    95 |
    96 |
    97 |

    98 | Estimators

    99 |

Next, a sample model is trained on the mtcars data frame using tfestimators; this is the model that will be saved to disk. To train it, we can follow the TensorFlow estimators Quick Start.
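As a rough sketch of that training step (an illustration assuming the tfestimators package is installed; the single disp feature and mpg response are arbitrary choices and not the script bundled with tfdeploy):

library(tfestimators)

# define an input function over mtcars: predict mpg from displacement
mtcars_input <- input_fn(mtcars, features = c("disp"), response = "mpg")

# build a simple linear regressor and train it against that input function
model <- linear_regressor(feature_columns = feature_columns(column_numeric("disp")))
train(model, input_fn = mtcars_input)

The trained model can then be exported by running: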

    100 |
    export_savedmodel(
    101 |   model,
    102 |   "tfestimators-mtcars"
    103 | )
    104 |

For convenience, a sample training script is included in tfdeploy to train and export this mtcars model as follows:

    105 |
    tfruns::training_run(
    106 |   system.file("models/tfestimators-mtcars.R", package = "tfdeploy")
    107 | )
    108 |
    109 |
    110 |

    111 | Keras

    112 |

To export from Keras, first train a keras model as described in R interface to Keras, making sure to call backend()$set_learning_phase(TRUE) before training.
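As a rough sketch of that training step (an illustrative two-layer network assuming the keras package; it is not the script bundled with tfdeploy):

library(keras)

# the learning phase must be set before the model is defined and trained
backend()$set_learning_phase(TRUE)

# prepare MNIST as flattened, normalized vectors with one-hot labels
mnist <- dataset_mnist()
x_train <- array_reshape(mnist$train$x, dim = c(nrow(mnist$train$x), 784)) / 255
y_train <- to_categorical(mnist$train$y, 10)

# a small, illustrative architecture
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = "adam",
  metrics = "accuracy"
)

model %>% fit(x_train, y_train, epochs = 5)

The trained model can then be exported as follows: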

    113 |
    export_savedmodel(
    114 |   model,
    115 |   "keras-mnist"
    116 | )
    117 |

For convenience, a sample training script is included in tfdeploy to train and export this MNIST model as follows:

    118 |
    tfruns::training_run(
    119 |   system.file("models/keras-mnist.R", package = "tfdeploy")
    120 | )
    121 |
    122 |
    123 |
    124 | 125 | 136 | 137 |
    138 | 139 | 140 |
    143 | 144 |
    145 |

    Site built with pkgdown.

    146 |
    147 | 148 |
    149 |
    150 | 151 | 152 | 153 | -------------------------------------------------------------------------------- /tests/testthat/models/keras-multiple-1.13.1/saved_model.pbtxt: -------------------------------------------------------------------------------- 1 | saved_model_schema_version: 1 2 | meta_graphs { 3 | meta_info_def { 4 | stripped_op_list { 5 | op { 6 | name: "Add" 7 | input_arg { 8 | name: "x" 9 | type_attr: "T" 10 | } 11 | input_arg { 12 | name: "y" 13 | type_attr: "T" 14 | } 15 | output_arg { 16 | name: "z" 17 | type_attr: "T" 18 | } 19 | attr { 20 | name: "T" 21 | type: "type" 22 | allowed_values { 23 | list { 24 | type: DT_BFLOAT16 25 | type: DT_HALF 26 | type: DT_FLOAT 27 | type: DT_DOUBLE 28 | type: DT_UINT8 29 | type: DT_INT8 30 | type: DT_INT16 31 | type: DT_INT32 32 | type: DT_INT64 33 | type: DT_COMPLEX64 34 | type: DT_COMPLEX128 35 | type: DT_STRING 36 | } 37 | } 38 | } 39 | } 40 | op { 41 | name: "Placeholder" 42 | output_arg { 43 | name: "output" 44 | type_attr: "dtype" 45 | } 46 | attr { 47 | name: "dtype" 48 | type: "type" 49 | } 50 | attr { 51 | name: "shape" 52 | type: "shape" 53 | default_value { 54 | shape { 55 | unknown_rank: true 56 | } 57 | } 58 | } 59 | } 60 | } 61 | tags: "serve" 62 | tensorflow_version: "1.13.1" 63 | tensorflow_git_version: "v1.13.0-rc2-5-g6612da8951" 64 | } 65 | graph_def { 66 | node { 67 | name: "input1" 68 | op: "Placeholder" 69 | attr { 70 | key: "_output_shapes" 71 | value { 72 | list { 73 | shape { 74 | dim { 75 | size: -1 76 | } 77 | dim { 78 | size: 1 79 | } 80 | } 81 | } 82 | } 83 | } 84 | attr { 85 | key: "dtype" 86 | value { 87 | type: DT_FLOAT 88 | } 89 | } 90 | attr { 91 | key: "shape" 92 | value { 93 | shape { 94 | dim { 95 | size: -1 96 | } 97 | dim { 98 | size: 1 99 | } 100 | } 101 | } 102 | } 103 | } 104 | node { 105 | name: "input2" 106 | op: "Placeholder" 107 | attr { 108 | key: "_output_shapes" 109 | value { 110 | list { 111 | shape { 112 | dim { 113 | size: -1 114 | } 115 | dim { 116 | size: 1 117 | } 118 | } 119 | } 120 | } 121 | } 122 | attr { 123 | key: "dtype" 124 | value { 125 | type: DT_FLOAT 126 | } 127 | } 128 | attr { 129 | key: "shape" 130 | value { 131 | shape { 132 | dim { 133 | size: -1 134 | } 135 | dim { 136 | size: 1 137 | } 138 | } 139 | } 140 | } 141 | } 142 | node { 143 | name: "output1/add" 144 | op: "Add" 145 | input: "input1" 146 | input: "input2" 147 | attr { 148 | key: "T" 149 | value { 150 | type: DT_FLOAT 151 | } 152 | } 153 | attr { 154 | key: "_output_shapes" 155 | value { 156 | list { 157 | shape { 158 | dim { 159 | size: -1 160 | } 161 | dim { 162 | size: 1 163 | } 164 | } 165 | } 166 | } 167 | } 168 | } 169 | node { 170 | name: "output2/add" 171 | op: "Add" 172 | input: "input2" 173 | input: "input1" 174 | attr { 175 | key: "T" 176 | value { 177 | type: DT_FLOAT 178 | } 179 | } 180 | attr { 181 | key: "_output_shapes" 182 | value { 183 | list { 184 | shape { 185 | dim { 186 | size: -1 187 | } 188 | dim { 189 | size: 1 190 | } 191 | } 192 | } 193 | } 194 | } 195 | } 196 | node { 197 | name: "input1_1" 198 | op: "Placeholder" 199 | attr { 200 | key: "_output_shapes" 201 | value { 202 | list { 203 | shape { 204 | dim { 205 | size: -1 206 | } 207 | dim { 208 | size: 1 209 | } 210 | } 211 | } 212 | } 213 | } 214 | attr { 215 | key: "dtype" 216 | value { 217 | type: DT_FLOAT 218 | } 219 | } 220 | attr { 221 | key: "shape" 222 | value { 223 | shape { 224 | dim { 225 | size: -1 226 | } 227 | dim { 228 | size: 1 229 | } 230 | } 231 | } 232 | } 233 | } 234 | node { 235 | name: "input2_1" 236 | op: 
"Placeholder" 237 | attr { 238 | key: "_output_shapes" 239 | value { 240 | list { 241 | shape { 242 | dim { 243 | size: -1 244 | } 245 | dim { 246 | size: 1 247 | } 248 | } 249 | } 250 | } 251 | } 252 | attr { 253 | key: "dtype" 254 | value { 255 | type: DT_FLOAT 256 | } 257 | } 258 | attr { 259 | key: "shape" 260 | value { 261 | shape { 262 | dim { 263 | size: -1 264 | } 265 | dim { 266 | size: 1 267 | } 268 | } 269 | } 270 | } 271 | } 272 | node { 273 | name: "output1_1/add" 274 | op: "Add" 275 | input: "input1_1" 276 | input: "input2_1" 277 | attr { 278 | key: "T" 279 | value { 280 | type: DT_FLOAT 281 | } 282 | } 283 | attr { 284 | key: "_output_shapes" 285 | value { 286 | list { 287 | shape { 288 | dim { 289 | size: -1 290 | } 291 | dim { 292 | size: 1 293 | } 294 | } 295 | } 296 | } 297 | } 298 | } 299 | node { 300 | name: "output2_1/add" 301 | op: "Add" 302 | input: "input2_1" 303 | input: "input1_1" 304 | attr { 305 | key: "T" 306 | value { 307 | type: DT_FLOAT 308 | } 309 | } 310 | attr { 311 | key: "_output_shapes" 312 | value { 313 | list { 314 | shape { 315 | dim { 316 | size: -1 317 | } 318 | dim { 319 | size: 1 320 | } 321 | } 322 | } 323 | } 324 | } 325 | } 326 | versions { 327 | producer: 27 328 | } 329 | } 330 | signature_def { 331 | key: "serving_default" 332 | value { 333 | inputs { 334 | key: "input1" 335 | value { 336 | name: "input1_1:0" 337 | dtype: DT_FLOAT 338 | tensor_shape { 339 | dim { 340 | size: -1 341 | } 342 | dim { 343 | size: 1 344 | } 345 | } 346 | } 347 | } 348 | inputs { 349 | key: "input2" 350 | value { 351 | name: "input2_1:0" 352 | dtype: DT_FLOAT 353 | tensor_shape { 354 | dim { 355 | size: -1 356 | } 357 | dim { 358 | size: 1 359 | } 360 | } 361 | } 362 | } 363 | outputs { 364 | key: "output1" 365 | value { 366 | name: "output1_1/add:0" 367 | dtype: DT_FLOAT 368 | tensor_shape { 369 | dim { 370 | size: -1 371 | } 372 | dim { 373 | size: 1 374 | } 375 | } 376 | } 377 | } 378 | outputs { 379 | key: "output2" 380 | value { 381 | name: "output2_1/add:0" 382 | dtype: DT_FLOAT 383 | tensor_shape { 384 | dim { 385 | size: -1 386 | } 387 | dim { 388 | size: 1 389 | } 390 | } 391 | } 392 | } 393 | method_name: "tensorflow/serving/predict" 394 | } 395 | } 396 | } 397 | -------------------------------------------------------------------------------- /docs/articles/guides-tensorflow-serving.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Using TensorFlow Serving • tfdeploy 9 | 10 | 11 | 12 | 16 | 17 | 18 |
    19 |
    66 | 67 | 68 | 69 |
    70 |
    71 | 75 | 76 | 77 | 78 |
    79 |
    80 |

    81 | Overview

    82 |

TensorFlow Serving provides support for serving TensorFlow models efficiently and at scale. This guide provides an overview of installing TensorFlow Serving and serving the models exported in the ‘Exporting Models’ section.

    83 |
    84 |
    85 |

    86 | Installation

    87 |

See TensorFlow Serving Setup for full details; in general, on Linux, first install the prerequisites:

    88 |
    sudo apt-get update && sudo apt-get install -y build-essential curl libcurl3-dev git libfreetype6-dev libpng12-dev libzmq3-dev pkg-config python-dev python-numpy python-pip software-properties-common swig zip zlib1g-dev
    89 |

Then install TensorFlow Serving:

    90 |
    echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list
     91 | 
     92 | curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
     93 | 
     94 | sudo apt-get update && sudo apt-get install tensorflow-model-server
    95 |

Optionally, install the API client:

    96 |
    sudo pip install tensorflow-serving-api --no-cache-dir
    97 |

    Then serve the TensorFlow model using:

    98 |
    tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/mnt/hgfs/tfserve/trained/tensorflow-mnist
    99 |
    2017-10-04 14:53:15.291409: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 113175 microseconds.
    100 | 2017-10-04 14:53:15.293200: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: mnist version: 1}
    101 | 2017-10-04 14:53:15.299338: I tensorflow_serving/model_servers/main.cc:288] Running ModelServer at 0.0.0.0:9000 ...
    102 |

    Or the tfestimators model using:

    103 |
    tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/mnt/hgfs/tfserve/trained/tfestimators-mtcars
    104 |
    2017-10-04 15:33:21.279211: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 43820 microseconds.
    105 | 2017-10-04 15:33:21.281129: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: mnist version: 1507156394}
    106 | 2017-10-04 15:33:21.284161: I tensorflow_serving/model_servers/main.cc:288] Running ModelServer at 0.0.0.0:9000 ...
    107 | 
    108 |

    One can use saved_model_cli to inspect model contents, as in:

    109 |
    saved_model_cli show --dir /mnt/hgfs/tfserve/trained/tensorflow-mnist/1
    110 |
    111 |
    112 |

    113 | Serving Models

    114 |

    Manually download mnist_client.py and mnist_input_data.py. Then run:

    115 |
    python mnist_client.py --num_tests=1000 --server=localhost:9000
    116 | 
    117 | Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
    118 | Extracting /tmp/train-images-idx3-ubyte.gz
    119 | Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
    120 | Extracting /tmp/train-labels-idx1-ubyte.gz
    121 | Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
    122 | Extracting /tmp/t10k-images-idx3-ubyte.gz
    123 | Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
    124 | Extracting /tmp/t10k-labels-idx1-ubyte.gz
    125 | ........................................
    126 | Inference error rate: 9.5%
    127 |

Or, while running the Keras model under TensorFlow Serving:

    128 |
    python mnist_client.py --num_tests=1000 --server=localhost:9000
    129 | 
    130 | Extracting /tmp/train-images-idx3-ubyte.gz
    131 | Extracting /tmp/train-labels-idx1-ubyte.gz
    132 | Extracting /tmp/t10k-images-idx3-ubyte.gz
    133 | Extracting /tmp/t10k-labels-idx1-ubyte.gz
    134 | ............
    135 | Inference error rate: 84.5%
    136 |

    TODO: Investigate inference error rate under keras, see: /keras/issues/7848.

    137 |
    138 |
    139 |
    140 | 141 | 152 | 153 |
    154 | 155 | 156 |
    159 | 160 |
    161 |

    Site built with pkgdown.

    162 |
    163 | 164 |
    165 |
    166 | 167 | 168 | 169 | -------------------------------------------------------------------------------- /docs/articles/test/saved-models-from-R.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Using Saved Models from R • tfdeploy 9 | 10 | 11 | 12 | 16 | 17 | 18 |
    19 |
    59 | 60 | 61 | 62 |
    63 |
    64 | 68 | 69 | 70 | 71 |
    72 |

    The main goal of tfdeploy is to create models in R and then export, test, and deploy those models to environments without R. However, there may be cases when it makes sense to use a saved model from R:

    73 |
      74 |
    • If another R user has saved and/or deployed a model that you would like to use for predictions from R.
    • 75 |
• If you want to use a saved or deployed model in a Shiny application.
    • 76 |
    • If you want to compare predictions between a saved or deployed model and a new model that is under development.
    • 77 |
    78 |

    One way to use a deployed model from R would be to execute HTTP requests using a package like httr. For non-deployed models, it is possible to use serve_savedmodel - as we did for local testing - along with a tool like httr. However, there is an easier way to make predictions from a saved model using the predict_savedmodel function.

    79 |
    80 |

    81 | Examples

    82 |

    Using the same MNIST model described previously, we can easily make predictions for new pre-processed images. For example, we can load the MNIST test data set and predict a digit for the first 10 images:

    83 |
    library(keras)
     84 | library(tfdeploy)
     85 | 
     86 | test <- dataset_mnist()$test$x 
     87 | processed_test <- array_reshape(test, dim = c(nrow(test), 784)) / 255
     88 | processed_test_list <- lapply(1:10, function(i) {processed_test[i,]})
     89 | 
     90 | predict_savedmodel(processed_test_list, '../savedmodel/')
    91 |
    Prediction 1:
     92 | $prediction
     93 |  [1] 3.002971e-37 8.401216e-29 2.932129e-24 4.048731e-22 0.000000e+00 9.172148e-37
     94 |  [7] 0.000000e+00 1.000000e+00 4.337524e-31 1.772979e-17
     95 | 
     96 | Prediction 2:
     97 | $prediction
     98 |  [1] 0.000000e+00 4.548326e-22 1.000000e+00 2.261879e-31 0.000000e+00 0.000000e+00
     99 |  [7] 0.000000e+00 0.000000e+00 2.390626e-38 0.000000e+00
    100 |  
    101 |  ...
    102 |

    A few things to keep in mind:

    103 |
      104 |
    1. 105 | predict_savedmodel expects the new instances to already be pre-processed.
    2. 106 |
3. 107 | predict_savedmodel requires the new data to be in a list, and it always returns a list. This requirement facilitates models with more complex inputs or outputs.
    4. 108 |
    109 |

In the previous example we used predict_savedmodel with the directory ‘savedmodel’, which was created with the export_savedmodel command. In addition to providing a path to a saved model directory, predict_savedmodel can also be used with a deployed model by supplying a REST URL, with a CloudML model by supplying a CloudML name and version, or with a graph object loaded with load_savedmodel.

    110 |

load_savedmodel can be used alongside predict_savedmodel if the prediction function will be called multiple times. For example:

    111 |
    predict_savedmodel(instances, 'savedmodel')
    112 | 
    113 | # OR
    114 | sess <- tensorflow::tf$Session()
    115 | graph <- load_savedmodel(sess, '../savedmodel')
    116 | predict_savedmodel(instances, graph)
    117 |
    118 |
    119 |

    120 | Comparing Types of Model Representations

    121 |

There are many different ways to interact with models from R. Models can be developed directly in memory. Keras models can be saved and reloaded from HDF5 files, serialized and saved as R data files, or have their configurations saved to JSON or YAML. TensorFlow, Keras, and tfestimators models can be saved and exported with tfdeploy.

    122 |

    The correct option depends on the task and your goals.

    123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | 131 | 132 | 133 | 134 | 135 | 136 | 137 | 138 | 140 | 141 | 142 | 143 | 144 | 146 | 147 | 148 | 149 | 150 | 151 | 152 | 153 | 155 | 156 | 158 | 159 | 160 | 161 | 162 | 164 | 165 | 166 |
    ModelCan Be Re-trainedPredictions
    In memory keras, tfestimators, or tensorflow modelYesLocally, with predict 139 |
    keras model saved in hdf5YesLocally, with predict after load_model_hdf5 145 |
keras model serialized to R objectYesLocally, with predict after unserializing
    tfdeploy saved model from disk 154 | NoLocally with predict_savedmodel(..., <file path>) 157 |
tfdeploy saved model deployed on CloudML or RStudio ConnectNoScoring occurs remotely with HTTP requests or predict_savedmodel(..., <url>) 163 |
    167 |
    168 |
    169 |
    170 | 171 | 181 | 182 |
    183 | 184 | 185 |
    188 | 189 |
    190 |

    Site built with pkgdown.

    191 |
    192 | 193 |
    194 |
    195 | 196 | 197 | 198 | -------------------------------------------------------------------------------- /docs/articles/saved-models-from-R.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Using Saved Models from R • tfdeploy 9 | 10 | 11 | 12 | 16 | 17 | 18 |
    19 |
    54 | 55 | 56 | 57 |
    58 |
    59 | 63 | 64 | 65 | 66 |
    67 |

    The main goal of tfdeploy is to create models in R and then export, test, and deploy those models to environments without R. However, there may be cases when it makes sense to use a saved model from R:

    68 |
      69 |
    • If another R user has saved and/or deployed a model that you would like to use for predictions from R.
    • 70 |
    • If you want to use a saved or deployed model in a Shiny application.
    • 71 |
    • If you want to compare predictions between a saved or deployed model and a new model that is under development.
    • 72 |
    73 |

    One way to use a deployed model from R would be to execute HTTP requests using a package like httr. For non-deployed models, it is possible to use serve_savedmodel - as we did for local testing - along with a tool like httr. However, there is an easier way to make predictions from a saved model using the predict_savedmodel function.
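To make the httr route concrete, here is a minimal sketch against a locally served model (the port, the /serving_default/predict/ path, and the images input name are assumptions; check the Swagger page opened by serve_savedmodel for the exact endpoint and the input names of your model):

library(httr)
library(jsonlite)

# assumes serve_savedmodel('savedmodel') is running in another R session
body <- list(instances = list(list(images = rep(9, 784))))

response <- POST(
  "http://127.0.0.1:8089/serving_default/predict/",  # assumed host, port, and path
  body = toJSON(body, auto_unbox = TRUE),
  content_type_json()
)

content(response)

predict_savedmodel wraps this kind of request, along with the other model types, behind a single function call.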

    74 |
    75 |

    76 | Examples

    77 |

    Using the same MNIST model described previously, we can easily make predictions for new pre-processed images. For example, we can load the MNIST test data set and create predictions for the first 10 images:

    78 |
    library(keras)
     79 | library(tfdeploy)
     80 | 
     81 | test <- dataset_mnist()$test$x 
     82 | processed_test <- array_reshape(test, dim = c(nrow(test), 784)) / 255
     83 | processed_test_list <- lapply(1:10, function(i) {processed_test[i,]})
     84 | 
     85 | predict_savedmodel(processed_test_list, '../savedmodel/')
    86 |
    Prediction 1:
     87 | $prediction
     88 |  [1] 3.002971e-37 8.401216e-29 2.932129e-24 4.048731e-22 0.000000e+00 9.172148e-37
     89 |  [7] 0.000000e+00 1.000000e+00 4.337524e-31 1.772979e-17
     90 | 
     91 | Prediction 2:
     92 | $prediction
     93 |  [1] 0.000000e+00 4.548326e-22 1.000000e+00 2.261879e-31 0.000000e+00 0.000000e+00
     94 |  [7] 0.000000e+00 0.000000e+00 2.390626e-38 0.000000e+00
     95 |  
     96 |  ...
    97 |

    A few things to keep in mind:

    98 |
      99 |
    1. Just like the HTTP POST requests, predict_savedmodel expects the new instance data to be pre-processed.
    2. 100 |
3. 101 | predict_savedmodel requires the new data to be in a list, and it always returns a list. This requirement facilitates models with more complex inputs or outputs.
    4. 102 |
    103 |

    In the previous example we used predict_savedmodel with the directory, ‘savedmodel’, which was created with the export_savedmodel command. In addition to providing a path to a saved model directory, predict_savedmodel can also be used with a deployed model by supplying a REST URL, a CloudML model by supplying a CloudML name and version, or by supplying a graph object loaded with load_savedmodel.

    104 |

The last option above references the load_savedmodel function. load_savedmodel should be used alongside predict_savedmodel if you’ll be calling the prediction function multiple times. load_savedmodel effectively caches the model graph in memory and can speed up repeated calls to predict_savedmodel. This caching is useful, for example, in a Shiny application where user input would drive calls to predict_savedmodel.

    105 |
    # if there will only be one batch of predictions 
    106 | predict_savedmodel(instances, 'savedmodel')
    107 | 
    108 | # if there will be multiple batches of predictions
    109 | sess <- tensorflow::tf$Session()
    110 | graph <- load_savedmodel(sess, '../savedmodel')
    111 | predict_savedmodel(instances, graph)
    112 | # ... more work ... 
    113 | predict_savedmodel(instances, graph)
    114 |
    115 |
    116 |

    117 | Comparing Model Representations

    118 |

    There are a few distinct ways that a model can be represented in R. The most straightforward representation is the in-memory, R model object. This object is what is created and used while developing and training a model.

    119 |

A second representation is the on-disk saved model. This representation of the model can be used by the *_savedmodel functions. As a special case, load_savedmodel creates a new R object pointing to the model graph. It is important to keep in mind that these saved models are not the full R model object. For example, you cannot update or re-train a graph from a saved model.

    120 |

Finally, for Keras models there are two other representations: HDF5 files and serialized objects. Each of these representations captures the entire in-memory R object. For example, using save_model_hdf5 and then load_model_hdf5 will result in a model that can be updated or retrained.
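A minimal sketch of both round trips (model is assumed to be an in-memory keras model):

library(keras)

# HDF5 round trip: the restored model can be updated or re-trained
save_model_hdf5(model, "model.h5")
restored_hdf5 <- load_model_hdf5("model.h5")

# serialization round trip: store the serialized model in an R data file
saveRDS(serialize_model(model), "model.rds")
restored_serialized <- unserialize_model(readRDS("model.rds"))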

    121 |
    122 |

123 | What representation should I use?

    124 |

    If you are developing a model and have access to the in-memory R model object, you should use the model object for predictions using R’s predict function.

    125 |

    If you are developing a Keras model and would like to save the model for use in a different session, you should use the HDF5 file or serialize the model and then save it to an R data format like RDS.

    126 |

    If you are going to deploy a model and want to test, you should export the model using export_savedmodel and then test with either serve_savedmodel and your HTTP client or predict_savedmodel.

    127 |

    If you are using R and want to create predictions from a deployed or saved model, and you don’t have access to the in-memory R model object, you should use predict_savedmodel.

    128 |
    129 |
    130 |
    131 |
    132 | 133 | 143 | 144 |
    145 | 146 | 147 | 156 |
    157 | 158 | 159 | 160 | --------------------------------------------------------------------------------