├── CODE_OF_CONDUCT.md ├── LICENSE ├── README.md ├── CONTRIBUTING.md ├── ncf.py ├── model-training-notebook.ipynb └── data-preparation-notebook.ipynb /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 15 | 16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Build a Customized Recommender System on Amazon SageMaker 2 | 3 | ## Summary 4 | 5 | Recommender systems have been used to tailor the customer experience on online platforms. [Amazon Personalize](https://aws.amazon.com/personalize/) is a fully managed service that makes it easy to develop recommender system solutions; it automatically examines the data, performs feature and algorithm selection, optimizes the model based on your data, and deploys and hosts the model for real-time recommendation inference. However, due to unique constraints in some domains, recommender systems sometimes need to be custom-built. 6 | 7 | In this project, I will walk you through how to build and deploy a customized recommender system using a Neural Collaborative Filtering (NCF) model in TensorFlow 2.0 on [Amazon SageMaker](https://aws.amazon.com/sagemaker/), which you can then customize further for your own use case. 8 | 9 | ## Getting Started 10 | 11 | [Create an Amazon SageMaker notebook instance](https://docs.aws.amazon.com/sagemaker/latest/dg/howitworks-create-ws.html) (an `ml.t2.medium` instance will suffice to run the notebooks for this project) 12 | 13 | ## Running Notebooks 14 | 15 | There are two notebooks associated with this project: 16 | 1. [data-preparation-notebook.ipynb](data-preparation-notebook.ipynb) 17 | This notebook contains the data preprocessing code. It downloads the MovieLens dataset, performs a train/test split and negative sampling, and uploads the processed data to Amazon S3. 18 | 2. [model-training-notebook.ipynb](model-training-notebook.ipynb) 19 | This notebook requires the [ncf.py](ncf.py) script to run. 
It initiates a [TensorFlow estimator](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html) to train the model, then deploys the model as an [endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html) on Amazon SageMaker Hosting Services. Lastly, it shows how to make batch recommendation inference using the model endpoint. 20 | 21 | ## Security 22 | 23 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information. 24 | 25 | ## License 26 | 27 | This library is licensed under the MIT-0 License. See the LICENSE file. 28 | 29 | ## Acknowledgement 30 | 31 | MovieLens dataset provided by GroupLens. 32 | 33 | F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4: 19:1–19:19. https://doi.org/10.1145/2827872 34 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. 
As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /ncf.py: -------------------------------------------------------------------------------- 1 | """ 2 | 3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | SPDX-License-Identifier: MIT-0 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | software and associated documentation files (the "Software"), to deal in the Software 8 | without restriction, including without limitation the rights to use, copy, modify, 9 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | permit persons to whom the Software is furnished to do so. 11 | 12 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | 19 | """ 20 | 21 | 22 | import tensorflow as tf 23 | import argparse 24 | import os 25 | import numpy as np 26 | import json 27 | 28 | 29 | # for data processing 30 | def _load_training_data(base_dir): 31 | """ load training data """ 32 | df_train = np.load(os.path.join(base_dir, 'train.npy')) 33 | user_train, item_train, y_train = np.split(np.transpose(df_train).flatten(), 3) 34 | return user_train, item_train, y_train 35 | 36 | 37 | def batch_generator(x, y, batch_size, n_batch, shuffle, user_dim, item_dim): 38 | """ batch generator to supply data for training and testing """ 39 | 40 | user_df, item_df = x 41 | 42 | counter = 0 43 | training_index = np.arange(user_df.shape[0]) 44 | 45 | if shuffle: 46 | np.random.shuffle(training_index) 47 | 48 | while True: 49 | batch_index = training_index[batch_size*counter:batch_size*(counter+1)] 50 | user_batch = tf.one_hot(user_df[batch_index], depth=user_dim) 51 | item_batch = tf.one_hot(item_df[batch_index], depth=item_dim) 52 | y_batch = y[batch_index] 53 | counter += 1 54 | yield [user_batch, item_batch], y_batch 55 | 56 | if counter == n_batch: 57 | if shuffle: 58 | np.random.shuffle(training_index) 59 | counter = 0 60 | 61 | 62 | # network 63 | def _get_user_embedding_layers(inputs, emb_dim): 64 | """ create user embeddings """ 65 | user_gmf_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs) 66 | 67 | user_mlp_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs) 68 | 69 | return user_gmf_emb, user_mlp_emb 70 | 71 | 72 | def _get_item_embedding_layers(inputs, emb_dim): 73 | """ create item embeddings """ 74 | item_gmf_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs) 75 | 76 | item_mlp_emb = tf.keras.layers.Dense(emb_dim, activation='relu')(inputs) 77 | 78 | return item_gmf_emb, item_mlp_emb 79 | 80 | 81 | def _gmf(user_emb, item_emb): 82 | """ general matrix factorization branch """ 83 | gmf_mat = tf.keras.layers.Multiply()([user_emb, item_emb]) 84 | 85 | return gmf_mat 86 | 87 | 88 | def _mlp(user_emb, item_emb, dropout_rate): 89 | """ multi-layer perceptron branch """ 90 | def add_layer(dim, input_layer, dropout_rate): 91 | hidden_layer = tf.keras.layers.Dense(dim, activation='relu')(input_layer) 92 | 93 | if dropout_rate: 94 | dropout_layer = tf.keras.layers.Dropout(dropout_rate)(hidden_layer) 95 | return dropout_layer 96 | 97 | return hidden_layer 98 | 99 | concat_layer = tf.keras.layers.Concatenate()([user_emb, item_emb]) 100 | 101 | dropout_l1 = tf.keras.layers.Dropout(dropout_rate)(concat_layer) 102 | 103 | dense_layer_1 = add_layer(64, dropout_l1, dropout_rate) 104 | 105 | dense_layer_2 = add_layer(32, dense_layer_1, dropout_rate) 106 | 107 | dense_layer_3 = add_layer(16, dense_layer_2, None) 108 | 109 | dense_layer_4 = add_layer(8, dense_layer_3, None) 110 | 111 | return dense_layer_4 112 | 113 | 114 | def _neuCF(gmf, mlp, dropout_rate): 115 | concat_layer = tf.keras.layers.Concatenate()([gmf, mlp]) 116 | 117 | output_layer = tf.keras.layers.Dense(1, activation='sigmoid')(concat_layer) 118 | 119 | return output_layer 120 | 121 | 122 | def build_graph(user_dim, item_dim, dropout_rate=0.25): 123 | """ neural collaborative filtering model """ 124 | 125 | user_input = tf.keras.Input(shape=(user_dim)) 126 | item_input = tf.keras.Input(shape=(item_dim)) 127 | 128 | # create embedding layers 129 | user_gmf_emb, user_mlp_emb = _get_user_embedding_layers(user_input, 32) 130 | item_gmf_emb, item_mlp_emb = _get_item_embedding_layers(item_input, 32) 131 | 132 | # general matrix factorization 133 | gmf = 
_gmf(user_gmf_emb, item_gmf_emb) 134 | 135 | # multi layer perceptron 136 | mlp = _mlp(user_mlp_emb, item_mlp_emb, dropout_rate) 137 | 138 | # output 139 | output = _neuCF(gmf, mlp, dropout_rate) 140 | 141 | # create the model 142 | model = tf.keras.Model(inputs=[user_input, item_input], outputs=output) 143 | 144 | return model 145 | 146 | 147 | def model(x_train, y_train, n_user, n_item, num_epoch, batch_size): 148 | 149 | num_batch = np.ceil(x_train[0].shape[0]/batch_size) 150 | 151 | # build graph 152 | model = build_graph(n_user, n_item) 153 | 154 | # compile and train 155 | optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) 156 | 157 | model.compile(optimizer=optimizer, 158 | loss=tf.keras.losses.BinaryCrossentropy(), 159 | metrics=['accuracy']) 160 | 161 | model.fit_generator( 162 | generator=batch_generator( 163 | x=x_train, y=y_train, 164 | batch_size=batch_size, n_batch=num_batch, 165 | shuffle=True, user_dim=n_user, item_dim=n_item), 166 | epochs=num_epoch, 167 | steps_per_epoch=num_batch, 168 | verbose=2 169 | ) 170 | 171 | return model 172 | 173 | 174 | def _parse_args(): 175 | parser = argparse.ArgumentParser() 176 | 177 | parser.add_argument('--model_dir', type=str) 178 | parser.add_argument('--sm-model-dir', type=str, default=os.environ.get('SM_MODEL_DIR')) 179 | parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAINING')) 180 | parser.add_argument('--hosts', type=list, default=json.loads(os.environ.get('SM_HOSTS'))) 181 | parser.add_argument('--current-host', type=str, default=os.environ.get('SM_CURRENT_HOST')) 182 | parser.add_argument('--epochs', type=int, default=3) 183 | parser.add_argument('--batch_size', type=int, default=256) 184 | parser.add_argument('--n_user', type=int) 185 | parser.add_argument('--n_item', type=int) 186 | 187 | return parser.parse_known_args() 188 | 189 | 190 | if __name__ == "__main__": 191 | args, unknown = _parse_args() 192 | 193 | # load data 194 | user_train, item_train, train_labels = _load_training_data(args.train) 195 | 196 | # build model 197 | ncf_model = model( 198 | x_train=[user_train, item_train], 199 | y_train=train_labels, 200 | n_user=args.n_user, 201 | n_item=args.n_item, 202 | num_epoch=args.epochs, 203 | batch_size=args.batch_size 204 | ) 205 | 206 | if args.current_host == args.hosts[0]: 207 | # save model to an S3 directory with version number '00000001' 208 | ncf_model.save(os.path.join(args.sm_model_dir, '000000001'), 'neural_collaborative_filtering.h5') 209 | -------------------------------------------------------------------------------- /model-training-notebook.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Train and Deploy a Neural Collaborative Filtering Model\n", 8 | "\n", 9 | "In this notebook, you will execute code blocks to\n", 10 | "\n", 11 | "1. inspect the training script [ncf.py](./ncf.py) \n", 12 | "2. train a model using [Tensorflow Estimator](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html) \n", 13 | "3. deploy and host the trained model as an endpoint using Amazon SageMaker Hosting Services \n", 14 | "4. 
perform batch inference by calling the model endpoint\n", 15 | "\n", 16 | "\\\n", 17 | "We are recommending the use of the following:\n", 18 | " * Kernel: Python 3.8, Tensorflow 2.6 CPU\n", 19 | " * Instance Type: ml.m5.large" 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": {}, 26 | "outputs": [], 27 | "source": [ 28 | "# In the last notebook (data-preparation-notebook.ipynb), we stored two variables.\n", 29 | "# Let's restore those variables here. These variables are inputs for the model training process.\n", 30 | "\n", 31 | "%store -r n_user\n", 32 | "%store -r n_item\n", 33 | "\n", 34 | "print(n_user)\n", 35 | "print(n_item)" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": null, 41 | "metadata": {}, 42 | "outputs": [], 43 | "source": [ 44 | "# import requirements\n", 45 | "import os\n", 46 | "import json\n", 47 | "import sagemaker\n", 48 | "import numpy as np\n", 49 | "import pandas as pd\n", 50 | "import tensorflow as tf\n", 51 | "from sagemaker import get_execution_role\n", 52 | "from sagemaker.tensorflow import TensorFlow\n", 53 | "\n", 54 | "# get current SageMaker session's execution role and default bucket name\n", 55 | "sagemaker_session = sagemaker.Session()\n", 56 | "\n", 57 | "role = get_execution_role()\n", 58 | "print(\"execution role ARN:\", role)\n", 59 | "\n", 60 | "bucket_name = sagemaker_session.default_bucket()\n", 61 | "print(\"default bucket name:\", bucket_name)" 62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": null, 67 | "metadata": {}, 68 | "outputs": [], 69 | "source": [ 70 | "# specify the location of the training data\n", 71 | "training_data_uri = os.path.join(f's3://{bucket_name}', 'data')" 72 | ] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "execution_count": null, 77 | "metadata": { 78 | "scrolled": true 79 | }, 80 | "outputs": [], 81 | "source": [ 82 | "# inspect the training script using `pygmentize` magic\n", 83 | "!pygmentize 'ncf.py'" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": null, 89 | "metadata": { 90 | "scrolled": false 91 | }, 92 | "outputs": [], 93 | "source": [ 94 | "# specify training instance type and model hyperparameters\n", 95 | "# note that for the demo purpose, the number of epoch is set to 1\n", 96 | "\n", 97 | "num_of_instance = 1 # number of instance to use for training\n", 98 | "instance_type = 'ml.c5.2xlarge' # type of instance to use for training\n", 99 | "\n", 100 | "training_script = 'ncf.py'\n", 101 | "\n", 102 | "training_parameters = {\n", 103 | " 'epochs': 1,\n", 104 | " 'batch_size': 256, \n", 105 | " 'n_user': n_user, \n", 106 | " 'n_item': n_item\n", 107 | "}\n", 108 | "\n", 109 | "# training framework specs\n", 110 | "tensorflow_version = '2.1.0'\n", 111 | "python_version = 'py3'\n", 112 | "distributed_training_spec = {'parameter_server': {'enabled': True}}" 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": null, 118 | "metadata": {}, 119 | "outputs": [], 120 | "source": [ 121 | "# initiate the training job using Tensorflow estimator\n", 122 | "ncf_estimator = TensorFlow(\n", 123 | " entry_point=training_script,\n", 124 | " role=role,\n", 125 | " train_instance_count=num_of_instance,\n", 126 | " train_instance_type=instance_type,\n", 127 | " framework_version=tensorflow_version,\n", 128 | " py_version=python_version,\n", 129 | " distributions=distributed_training_spec,\n", 130 | " hyperparameters=training_parameters\n", 131 | ")" 132 | ] 133 | }, 134 | { 135 | "cell_type": 
"code", 136 | "execution_count": null, 137 | "metadata": { 138 | "scrolled": true 139 | }, 140 | "outputs": [], 141 | "source": [ 142 | "# kick off the training job\n", 143 | "ncf_estimator.fit(training_data_uri)" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": {}, 149 | "source": [ 150 | "## Deploy the Endpoint" 151 | ] 152 | }, 153 | { 154 | "cell_type": "code", 155 | "execution_count": null, 156 | "metadata": {}, 157 | "outputs": [], 158 | "source": [ 159 | "# once the model is trained, we can deploy the model using Amazon SageMaker Hosting Services\n", 160 | "# Here we deploy the model using one ml.c5.xlarge instance as a tensorflow-serving endpoint\n", 161 | "# This enables us to invoke the endpoint like how we use Tensorflow serving\n", 162 | "# Read more about Tensorflow serving using the link below\n", 163 | "# https://www.tensorflow.org/tfx/tutorials/serving/rest_simple\n", 164 | "\n", 165 | "endpoint_name = 'neural-collaborative-filtering-model-demo'\n", 166 | "model_name = 'neural-collab-filtering-model'\n", 167 | "\n", 168 | "predictor = ncf_estimator.deploy(\n", 169 | " initial_instance_count=1, \n", 170 | " instance_type=\"ml.t2.medium\", \n", 171 | " endpoint_name=endpoint_name,\n", 172 | " model_name=model_name,\n", 173 | ")" 174 | ] 175 | }, 176 | { 177 | "cell_type": "markdown", 178 | "metadata": {}, 179 | "source": [ 180 | "## Invoke" 181 | ] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": null, 186 | "metadata": {}, 187 | "outputs": [], 188 | "source": [ 189 | "# To use the endpoint in another notebook, we can initiate a predictor object as follows\n", 190 | "from sagemaker.tensorflow import TensorFlowPredictor\n", 191 | "\n", 192 | "predictor = TensorFlowPredictor(endpoint_name)" 193 | ] 194 | }, 195 | { 196 | "cell_type": "code", 197 | "execution_count": null, 198 | "metadata": {}, 199 | "outputs": [], 200 | "source": [ 201 | "# Define a function to read testing data\n", 202 | "def _load_testing_data(base_dir):\n", 203 | " \"\"\" load testing data \"\"\"\n", 204 | " df_test = np.load(os.path.join(base_dir, 'test.npy'))\n", 205 | " user_test, item_test, y_test = np.split(np.transpose(df_test).flatten(), 3)\n", 206 | " return user_test, item_test, y_test" 207 | ] 208 | }, 209 | { 210 | "cell_type": "code", 211 | "execution_count": null, 212 | "metadata": {}, 213 | "outputs": [], 214 | "source": [ 215 | "# read testing data from local\n", 216 | "user_test, item_test, test_labels = _load_testing_data('./ml-latest-small/s3/')\n", 217 | "\n", 218 | "# one-hot encode the testing data for model input\n", 219 | "with tf.compat.v1.Session() as tf_sess:\n", 220 | " test_user_data = tf_sess.run(tf.one_hot(user_test, depth=n_user)).tolist()\n", 221 | " test_item_data = tf_sess.run(tf.one_hot(item_test, depth=n_item)).tolist()\n", 222 | " \n", 223 | "# if you're using Tensorflow 2.0 for one hot encoding\n", 224 | "# you can convert the tensor to list using:\n", 225 | "# tf.one_hot(uuser_test, depth=n_user).numpy().tolist()" 226 | ] 227 | }, 228 | { 229 | "cell_type": "code", 230 | "execution_count": null, 231 | "metadata": {}, 232 | "outputs": [], 233 | "source": [ 234 | "# make batch prediction\n", 235 | "batch_size = 100\n", 236 | "y_pred = []\n", 237 | "for idx in range(0, len(test_user_data), batch_size):\n", 238 | " # reformat test samples into tensorflow serving acceptable format\n", 239 | " input_vals = {\n", 240 | " \"instances\": [\n", 241 | " {'input_1': u, 'input_2': i} \n", 242 | " for (u, i) in 
zip(test_user_data[idx:idx+batch_size], test_item_data[idx:idx+batch_size])\n", 243 | " ]}\n", 244 | " \n", 245 | " # invoke model endpoint to make inference\n", 246 | " pred = predictor.predict(input_vals)\n", 247 | " \n", 248 | " # store predictions\n", 249 | " y_pred.extend([i[0] for i in pred['predictions']])" 250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": null, 255 | "metadata": {}, 256 | "outputs": [], 257 | "source": [ 258 | "# let's see some prediction examples, assuming the threshold \n", 259 | "# --- prediction probability view ---\n", 260 | "print('This is what the prediction output looks like')\n", 261 | "print(y_pred[:5], end='\\n\\n\\n')\n", 262 | "\n", 263 | "# --- user item pair prediction view, with threshold of 0.5 applied ---\n", 264 | "pred_df = pd.DataFrame([\n", 265 | " user_test,\n", 266 | " item_test,\n", 267 | " (np.array(y_pred) >= 0.5).astype(int)],\n", 268 | ").T\n", 269 | "\n", 270 | "pred_df.columns = ['userId', 'movieId', 'prediction']\n", 271 | "\n", 272 | "print('We can convert the output to user-item pair as shown below')\n", 273 | "print(pred_df.head(), end='\\n\\n\\n')\n", 274 | "\n", 275 | "# --- aggregated prediction view, by user ---\n", 276 | "print('Lastly, we can roll up the prediction list by user and view it that way')\n", 277 | "print(pred_df.query('prediction == 1').groupby('userId').movieId.apply(list).head().to_frame(), end='\\n\\n\\n')" 278 | ] 279 | }, 280 | { 281 | "cell_type": "markdown", 282 | "metadata": {}, 283 | "source": [ 284 | "## Delete Endpoint" 285 | ] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": null, 290 | "metadata": {}, 291 | "outputs": [], 292 | "source": [ 293 | "# delete endpoint at the end of the demo\n", 294 | "predictor.delete_endpoint(delete_endpoint_config=True)" 295 | ] 296 | }, 297 | { 298 | "cell_type": "code", 299 | "execution_count": null, 300 | "metadata": {}, 301 | "outputs": [], 302 | "source": [] 303 | } 304 | ], 305 | "metadata": { 306 | "instance_type": "ml.m5.large", 307 | "kernelspec": { 308 | "display_name": "Python 3 (TensorFlow 2.6 Python 3.8 CPU Optimized)", 309 | "language": "python", 310 | "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/tensorflow-2.6-cpu-py38-ubuntu20.04-v1" 311 | }, 312 | "language_info": { 313 | "codemirror_mode": { 314 | "name": "ipython", 315 | "version": 3 316 | }, 317 | "file_extension": ".py", 318 | "mimetype": "text/x-python", 319 | "name": "python", 320 | "nbconvert_exporter": "python", 321 | "pygments_lexer": "ipython3", 322 | "version": "3.8.2" 323 | }, 324 | "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." 
325 | }, 326 | "nbformat": 4, 327 | "nbformat_minor": 4 328 | } 329 | -------------------------------------------------------------------------------- /data-preparation-notebook.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Data Preparation Notebook\n", 8 | "\n", 9 | "In this notebook, you will execute code to \n", 10 | "\n", 11 | "1. download [MovieLens](https://grouplens.org/datasets/movielens/) dataset into `ml-latest-small` directory\n", 12 | "2. split the data into training and testing sets\n", 13 | "3. perform negative sampling\n", 14 | "4. calculate statistics needed to train the NCF model\n", 15 | "5. upload data onto S3 bucket" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "## 1. Download dataset" 23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": null, 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [ 31 | "%%bash\n", 32 | "# delete the data directory if exists\n", 33 | "rm -r ml-latest-small\n", 34 | "\n", 35 | "# download movielens small dataset\n", 36 | "curl -O http://files.grouplens.org/datasets/movielens/ml-latest-small.zip\n", 37 | "\n", 38 | "# unzip into data directory\n", 39 | "unzip ml-latest-small.zip\n", 40 | "rm ml-latest-small.zip" 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "metadata": {}, 46 | "source": [ 47 | "**About the Data**" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": null, 53 | "metadata": { 54 | "scrolled": true 55 | }, 56 | "outputs": [], 57 | "source": [ 58 | "!cat ml-latest-small/README.txt" 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "For this model, we will be using `ratings.csv` mainly, which contains 4 columns,\n", 66 | "- userId\n", 67 | "- movieId\n", 68 | "- rating\n", 69 | "- timestamp" 70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "metadata": {}, 75 | "source": [ 76 | "## 2. Read data and perform train and test split" 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": null, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "# requirements\n", 86 | "import os\n", 87 | "import boto3\n", 88 | "import sagemaker\n", 89 | "import numpy as np\n", 90 | "import pandas as pd" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": null, 96 | "metadata": {}, 97 | "outputs": [], 98 | "source": [ 99 | "# read rating data\n", 100 | "fpath = './ml-latest-small/ratings.csv'\n", 101 | "df = pd.read_csv(fpath)" 102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": null, 107 | "metadata": {}, 108 | "outputs": [], 109 | "source": [ 110 | "# let's see what the data look like\n", 111 | "df.head(2)" 112 | ] 113 | }, 114 | { 115 | "cell_type": "code", 116 | "execution_count": null, 117 | "metadata": {}, 118 | "outputs": [], 119 | "source": [ 120 | "# understand what's the maximum number of hold out portion should be\n", 121 | "df.groupby('userId').movieId.nunique().min()" 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": {}, 127 | "source": [ 128 | "Note: Since the \"least active\" user has 20 ratings, for our testing set, let's hold out 10 items for every user so that the max test set portion is 50%." 
129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": null, 134 | "metadata": {}, 135 | "outputs": [], 136 | "source": [ 137 | "def train_test_split(df, holdout_num):\n", 138 | " \"\"\" perform training/testing split\n", 139 | " \n", 140 | " @param df: dataframe\n", 141 | " @param holdhout_num: number of items to be held out\n", 142 | " \n", 143 | " @return df_train: training data\n", 144 | " @return df_test testing data\n", 145 | " \n", 146 | " \"\"\"\n", 147 | " # first sort the data by time\n", 148 | " df = df.sort_values(['userId', 'timestamp'], ascending=[True, False])\n", 149 | " \n", 150 | " # perform deep copy on the dataframe to avoid modification on the original dataframe\n", 151 | " df_train = df.copy(deep=True)\n", 152 | " df_test = df.copy(deep=True)\n", 153 | " \n", 154 | " # get test set\n", 155 | " df_test = df_test.groupby(['userId']).head(holdout_num).reset_index()\n", 156 | " \n", 157 | " # get train set\n", 158 | " df_train = df_train.merge(\n", 159 | " df_test[['userId', 'movieId']].assign(remove=1),\n", 160 | " how='left'\n", 161 | " ).query('remove != 1').drop('remove', 1).reset_index(drop=True)\n", 162 | " \n", 163 | " # sanity check to make sure we're not duplicating/losing data\n", 164 | " assert len(df) == len(df_train) + len(df_test)\n", 165 | " \n", 166 | " return df_train, df_test" 167 | ] 168 | }, 169 | { 170 | "cell_type": "code", 171 | "execution_count": null, 172 | "metadata": {}, 173 | "outputs": [], 174 | "source": [ 175 | "df_train, df_test = train_test_split(df, 10)" 176 | ] 177 | }, 178 | { 179 | "cell_type": "markdown", 180 | "metadata": {}, 181 | "source": [ 182 | "## 3. Perform negative sampling" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "metadata": {}, 188 | "source": [ 189 | "Assuming if a user rating an item is a positive label, there is no negative sample in the dataset, which is not possible for model training. Therefore, we random sample `n` items from the unseen movie list for every user to provide the negative samples." 
190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": null, 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [ 198 | "def negative_sampling(user_ids, movie_ids, items, n_neg):\n", 199 | " \"\"\"This function creates n_neg negative labels for every positive label\n", 200 | " \n", 201 | " @param user_ids: list of user ids\n", 202 | " @param movie_ids: list of movie ids\n", 203 | " @param items: unique list of movie ids\n", 204 | " @param n_neg: number of negative labels to sample\n", 205 | " \n", 206 | " @return df_neg: negative sample dataframe\n", 207 | " \n", 208 | " \"\"\"\n", 209 | " \n", 210 | " neg = []\n", 211 | " ui_pairs = zip(user_ids, movie_ids)\n", 212 | " records = set(ui_pairs)\n", 213 | " \n", 214 | " # for every positive label case\n", 215 | " for (u, i) in records:\n", 216 | " # generate n_neg negative labels\n", 217 | " for _ in range(n_neg):\n", 218 | " # if the randomly sampled movie exists for that user\n", 219 | " j = np.random.choice(items)\n", 220 | " while(u, j) in records:\n", 221 | " # resample\n", 222 | " j = np.random.choice(items)\n", 223 | " neg.append([u, j, 0])\n", 224 | " # conver to pandas dataframe for concatenation later\n", 225 | " df_neg = pd.DataFrame(neg, columns=['userId', 'movieId', 'rating'])\n", 226 | " \n", 227 | " return df_neg" 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": null, 233 | "metadata": {}, 234 | "outputs": [], 235 | "source": [ 236 | "# create negative samples for training set\n", 237 | "neg_train = negative_sampling(\n", 238 | " user_ids=df_train.userId.values, \n", 239 | " movie_ids=df_train.movieId.values,\n", 240 | " items=df.movieId.unique(),\n", 241 | " n_neg=5\n", 242 | ")" 243 | ] 244 | }, 245 | { 246 | "cell_type": "code", 247 | "execution_count": null, 248 | "metadata": {}, 249 | "outputs": [], 250 | "source": [ 251 | "print(f'created {neg_train.shape[0]:,} negative samples')" 252 | ] 253 | }, 254 | { 255 | "cell_type": "code", 256 | "execution_count": null, 257 | "metadata": {}, 258 | "outputs": [], 259 | "source": [ 260 | "df_train = df_train[['userId', 'movieId']].assign(rating=1)\n", 261 | "df_test = df_test[['userId', 'movieId']].assign(rating=1)\n", 262 | "\n", 263 | "df_train = pd.concat([df_train, neg_train], ignore_index=True)" 264 | ] 265 | }, 266 | { 267 | "cell_type": "markdown", 268 | "metadata": {}, 269 | "source": [ 270 | "## 4. Calulate statistics for our understanding and model training" 271 | ] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": null, 276 | "metadata": {}, 277 | "outputs": [], 278 | "source": [ 279 | "def get_unique_count(df):\n", 280 | " \"\"\"calculate unique user and movie counts\"\"\"\n", 281 | " return df.userId.nunique(), df.movieId.nunique()" 282 | ] 283 | }, 284 | { 285 | "cell_type": "code", 286 | "execution_count": null, 287 | "metadata": {}, 288 | "outputs": [], 289 | "source": [ 290 | "# unique number of user and movie in the whole dataset\n", 291 | "get_unique_count(df)" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": null, 297 | "metadata": {}, 298 | "outputs": [], 299 | "source": [ 300 | "print('training set shape', get_unique_count(df_train))\n", 301 | "print('testing set shape', get_unique_count(df_test))" 302 | ] 303 | }, 304 | { 305 | "cell_type": "markdown", 306 | "metadata": {}, 307 | "source": [ 308 | "Next, we calculate some statistics for training purpose." 
309 | ] 310 | }, 311 | { 312 | "cell_type": "code", 313 | "execution_count": null, 314 | "metadata": {}, 315 | "outputs": [], 316 | "source": [ 317 | "# number of unique user and number of unique item/movie\n", 318 | "n_user, n_item = get_unique_count(df_train)\n", 319 | "\n", 320 | "print(\"number of unique users\", n_user)\n", 321 | "print(\"number of unique items\", n_item)" 322 | ] 323 | }, 324 | { 325 | "cell_type": "code", 326 | "execution_count": null, 327 | "metadata": {}, 328 | "outputs": [], 329 | "source": [ 330 | "# save the variable for the model training notebook\n", 331 | "# -----\n", 332 | "# read about `store` magic here: \n", 333 | "# https://ipython.readthedocs.io/en/stable/config/extensions/storemagic.html\n", 334 | "\n", 335 | "%store n_user\n", 336 | "%store n_item" 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "metadata": {}, 342 | "source": [ 343 | "## 5. Preprocess data and upload them onto S3" 344 | ] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "execution_count": null, 349 | "metadata": {}, 350 | "outputs": [], 351 | "source": [ 352 | "# get current session region\n", 353 | "session = boto3.session.Session()\n", 354 | "region = session.region_name\n", 355 | "print(f'currently in {region}')" 356 | ] 357 | }, 358 | { 359 | "cell_type": "code", 360 | "execution_count": null, 361 | "metadata": {}, 362 | "outputs": [], 363 | "source": [ 364 | "# use the default sagemaker s3 bucket to store processed data\n", 365 | "# here we figure out what that default bucket name is \n", 366 | "sagemaker_session = sagemaker.Session()\n", 367 | "bucket_name = sagemaker_session.default_bucket()\n", 368 | "print(bucket_name) # bucket name format: \"sagemaker-{region}-{aws_account_id}\"" 369 | ] 370 | }, 371 | { 372 | "cell_type": "markdown", 373 | "metadata": {}, 374 | "source": [ 375 | "**upload data to the bucket**" 376 | ] 377 | }, 378 | { 379 | "cell_type": "code", 380 | "execution_count": null, 381 | "metadata": {}, 382 | "outputs": [], 383 | "source": [ 384 | "# save data locally first\n", 385 | "dest = 'ml-latest-small/s3'\n", 386 | "train_path = os.path.join(dest, 'train.npy')\n", 387 | "test_path = os.path.join(dest, 'test.npy')\n", 388 | "\n", 389 | "!mkdir {dest}\n", 390 | "np.save(train_path, df_train.values)\n", 391 | "np.save(test_path, df_test.values)\n", 392 | "\n", 393 | "# upload to S3 bucket (see the bucket name above)\n", 394 | "sagemaker_session.upload_data(train_path, key_prefix='data')\n", 395 | "sagemaker_session.upload_data(test_path, key_prefix='data')" 396 | ] 397 | }, 398 | { 399 | "cell_type": "code", 400 | "execution_count": null, 401 | "metadata": {}, 402 | "outputs": [], 403 | "source": [] 404 | }, 405 | { 406 | "cell_type": "code", 407 | "execution_count": null, 408 | "metadata": {}, 409 | "outputs": [], 410 | "source": [] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": null, 415 | "metadata": {}, 416 | "outputs": [], 417 | "source": [] 418 | } 419 | ], 420 | "metadata": { 421 | "instance_type": "ml.t3.medium", 422 | "kernelspec": { 423 | "display_name": "Python 3 (Data Science)", 424 | "language": "python", 425 | "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" 426 | }, 427 | "language_info": { 428 | "codemirror_mode": { 429 | "name": "ipython", 430 | "version": 3 431 | }, 432 | "file_extension": ".py", 433 | "mimetype": "text/x-python", 434 | "name": "python", 435 | "nbconvert_exporter": "python", 436 | "pygments_lexer": "ipython3", 437 | "version": "3.7.10" 
438 | } 439 | }, 440 | "nbformat": 4, 441 | "nbformat_minor": 4 442 | } 443 | --------------------------------------------------------------------------------
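
A closing usage note for readers adapting this sample: the model-training notebook invokes the deployed endpoint through the SageMaker Python SDK's `TensorFlowPredictor`. Below is a minimal sketch of how a client outside that SDK (for example, an application server) could call the same hosted model with `boto3`. It is a sketch, not part of the sample's files: it assumes the endpoint name created in model-training-notebook.ipynb (`neural-collaborative-filtering-model-demo`), the same one-hot encoded user/item inputs, and the TensorFlow Serving REST request format already used in that notebook; `predict_batch` is a hypothetical helper name.

```python
import json
import boto3

# SageMaker runtime client; region and credentials come from the default AWS configuration
runtime = boto3.client('sagemaker-runtime')


def predict_batch(user_vectors, item_vectors,
                  endpoint_name='neural-collaborative-filtering-model-demo'):
    """Score one batch of one-hot encoded user/item vectors against the hosted NCF model."""
    # TensorFlow Serving REST format, matching the notebook's input_vals payload
    payload = {
        'instances': [
            {'input_1': u, 'input_2': i}
            for u, i in zip(user_vectors, item_vectors)
        ]
    }
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType='application/json',
        Body=json.dumps(payload),
    )
    result = json.loads(response['Body'].read())
    # the endpoint returns {"predictions": [[p_1], [p_2], ...]}; flatten to a list of scores
    return [p[0] for p in result['predictions']]
```

As in the notebook's prediction loop, requests should be sent in modest batches (the notebook uses a batch size of 100) so each payload stays within the endpoint's request size limit.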