├── .gitattributes ├── results │   ├── recons.png │   ├── combined.png │   └── original.png ├── .gitignore ├── tf_py27_cpu_env.yml ├── tf_py35_gpu_env.yml ├── Dockerfile ├── plot_utils.py ├── README.md ├── vae_gumbel_softmax.py └── LICENSE /.gitattributes: -------------------------------------------------------------------------------- 1 | * linguist-vendored 2 | *.py linguist-vendored=false 3 | -------------------------------------------------------------------------------- /results/recons.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vithursant/VAE-Gumbel-Softmax/HEAD/results/recons.png -------------------------------------------------------------------------------- /results/combined.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vithursant/VAE-Gumbel-Softmax/HEAD/results/combined.png -------------------------------------------------------------------------------- /results/original.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vithursant/VAE-Gumbel-Softmax/HEAD/results/original.png -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | notebooks/MNIST_data/ 3 | notebooks/.ipynb_checkpoints/ 4 | data/ 5 | log/ 6 | checkpoint/ 7 | -------------------------------------------------------------------------------- /tf_py27_cpu_env.yml: -------------------------------------------------------------------------------- 1 | name: tf-py27-cpu-env 2 | channels: 3 | - conda-forge 4 | - ioam 5 | dependencies: 6 | - python=2.7 7 | - numpy 8 | - holoviews 9 | - jupyter 10 | - pandas 11 | - matplotlib 12 | - seaborn 13 | - pip: 14 | - tqdm 15 | - packaging 16 | - appdirs 17 | - tensorflow 18 | 
-------------------------------------------------------------------------------- /tf_py35_gpu_env.yml: -------------------------------------------------------------------------------- 1 | name: tf-py35-gpu-env 2 | channels: 3 | - conda-forge 4 | - ioam 5 | dependencies: 6 | - python=3.5 7 | - numpy 8 | - holoviews 9 | - jupyter 10 | - pandas 11 | - matplotlib 12 | - seaborn 13 | - pip: 14 | - tqdm 15 | - packaging 16 | - appdirs 17 | - tensorflow-gpu 18 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | ## Dockerfile to build VAE-Gumbel-Softmax container image 2 | 3 | FROM python:2.7.14 4 | MAINTAINER Vithursan Thangarasa 5 | 6 | # dependencies 7 | RUN \ 8 | apt-get -qq -y update \ 9 | && \ 10 | pip install -U \ 11 | numpy \ 12 | holoviews \ 13 | jupyter \ 14 | pandas \ 15 | matplotlib \ 16 | seaborn \ 17 | tqdm \ 18 | packaging \ 19 | appdirs \ 20 | tensorflow 21 | 22 | COPY ./ /root/vae_gumbel_softmax 23 | 24 | WORKDIR /root/vae_gumbel_softmax 25 | RUN mkdir /root/vae_gumbel_softmax/results 26 | 27 | CMD python vae_gumbel_softmax.py 28 | -------------------------------------------------------------------------------- /plot_utils.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from matplotlib import pyplot as plt 3 | 4 | def make_squares(images, nr_images_per_side): 5 | images_to_plot = np.concatenate( 6 | [np.concatenate([images[j*nr_images_per_side+i].reshape((28,28)) for i in range(0,nr_images_per_side)], 7 | axis=1) 8 | for j in range(0,nr_images_per_side)], 9 | axis=0) 10 | return images_to_plot 11 | 12 | def plot_squares(originals, reconstructs, nr_images_per_side): 13 | originals_square = make_squares(originals, nr_images_per_side) 14 | plt.imsave('./results/original.png', originals_square, cmap='viridis') 15 | reconstructs_square = make_squares(reconstructs, 
nr_images_per_side) 16 | plt.imsave('./results/recons.png', reconstructs_square, cmap='viridis') 17 | combined = np.concatenate([originals_square, reconstructs_square], axis=1) 18 | plt.imsave('./results/combined.png', combined, cmap='viridis') 19 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # VAE with Gumbel-Softmax 2 | 3 | TensorFlow implementation of a Variational Autoencoder with the Gumbel-Softmax distribution. Refer to the following papers: 4 | 5 | * [Categorical Reparameterization with Gumbel-Softmax](https://arxiv.org/pdf/1611.01144.pdf) by Jang, Gu and Poole 6 | * [The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables](https://arxiv.org/pdf/1611.00712.pdf) by Maddison, Mnih and Teh 7 | * [REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models](https://arxiv.org/pdf/1703.07370.pdf) by Tucker, Mnih, Maddison, Lawson and Sohl-Dickstein 8 | 9 | Also included is a Jupyter notebook which shows how the Gumbel-Max trick for sampling discrete variables relates to Concrete distributions. 
10 | 11 | ## Table of Contents 12 | * [Installation](#installation) 13 | * [Anaconda](#anaconda) 14 | * [Docker](#docker) 15 | * [Results](#results) 16 | * [Citing VAE-Gumbel-Softmax](#citing-vae-gumbel-softmax) 17 | 18 | ## Installation 19 | 20 | The program requires the following dependencies (easy to install using pip, Anaconda or Docker): 21 | 22 | * python 2.7/3.5 23 | * tensorflow (tested with r1.1 and r1.5) 24 | * numpy 25 | * holoviews 26 | * jupyter 27 | * pandas 28 | * matplotlib 29 | * seaborn 30 | * tqdm 31 | 32 | ## Anaconda 33 | 34 | ### Anaconda: CPU Installation 35 | 36 | To install VAE-Gumbel-Softmax in a TensorFlow 1.5 CPU - Python 2.7 environment: 37 | 38 | ```shell 39 | conda env create -f tf_py27_cpu_env.yml 40 | ``` 41 | 42 | To activate the Anaconda environment: 43 | 44 | ```shell 45 | source activate tf-py27-cpu-env 46 | ``` 47 | 48 | ### Anaconda: GPU Installation 49 | 50 | To install VAE-Gumbel-Softmax in a TensorFlow 1.5 GPU - Python 3.5 environment: 51 | 52 | ```shell 53 | conda env create -f tf_py35_gpu_env.yml 54 | ``` 55 | 56 | To activate the Anaconda environment: 57 | 58 | ```shell 59 | source activate tf-py35-gpu-env 60 | ``` 61 | 62 | ### Anaconda: Train 63 | 64 | Train the VAE-Gumbel-Softmax model on the local machine using the MNIST dataset: 65 | 66 | ```shell 67 | python vae_gumbel_softmax.py 68 | ``` 69 | 70 | ## Docker 71 | 72 | Train the VAE-Gumbel-Softmax model using Docker on the MNIST dataset: 73 | 74 | ```shell 75 | docker build -t vae-gs . 76 | docker run vae-gs 77 | ``` 78 | 79 | Note: The current Dockerfile targets TensorFlow 1.5 CPU training. 
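The Gumbel-Softmax sampling trick this repo implements (see `sample_gumbel` and `gumbel_softmax` in `vae_gumbel_softmax.py`) can be sketched outside TensorFlow in a few lines. The following is a minimal NumPy sketch, not part of the repo's API — the function names, default values, and example logits are illustrative only:

```python
import numpy as np

def sample_gumbel(shape, eps=1e-20, rng=np.random):
    # Gumbel(0, 1) noise via the inverse-CDF (double-log) trick,
    # mirroring sample_gumbel() in vae_gumbel_softmax.py.
    u = rng.uniform(0.0, 1.0, size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_softmax_sample(logits, temperature):
    # Softmax over (logits + Gumbel noise) / temperature; as the
    # temperature anneals towards 0, samples approach one-hot vectors.
    y = (logits + sample_gumbel(np.shape(logits))) / temperature
    y = y - np.max(y, axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

def anneal_temperature(step, init_temp=1.0, min_temp=0.5, anneal_rate=3e-5):
    # Exponential decay clipped at min_temp, matching the schedule used
    # in the training loop (defaults taken from the hyperparameters below).
    return max(init_temp * np.exp(-anneal_rate * step), min_temp)

# One relaxed sample from a 3-way categorical with class probs (0.7, 0.2, 0.1).
sample = gumbel_softmax_sample(np.log([0.7, 0.2, 0.1]), anneal_temperature(0))
```

Each sample is a point on the probability simplex rather than a hard one-hot vector, which is what keeps the ELBO differentiable with respect to the encoder logits.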
80 | 81 | ## Results 82 | 83 | ### Hyperparameters 84 | ``` 85 | Batch Size: 100 86 | Number of Iterations: 50000 87 | Learning Rate: 0.001 88 | Initial Temperature: 1.0 89 | Minimum Temperature: 0.5 90 | Anneal Rate: 0.00003 91 | Straight-Through Gumbel-Softmax: False 92 | KL-divergence: Relaxed 93 | Learnable Temperature: False 94 | ``` 95 | 96 | ### MNIST 97 | | Ground Truth | Reconstructions | 98 | |:------------: |:---------------: | 99 | |![](results/original.png) | ![](results/recons.png)| 100 | 101 | ## Citing VAE-Gumbel-Softmax 102 | If you use VAE-Gumbel-Softmax in a scientific publication, I would appreciate references to the source code. 103 | 104 | BibTeX entry: 105 | 106 | ```latex 107 | @misc{VAEGumbelSoftmax, 108 | author = {Thangarasa, Vithursan}, 109 | title = {VAE-Gumbel-Softmax}, 110 | year = {2017}, 111 | publisher = {GitHub}, 112 | journal = {GitHub repository}, 113 | howpublished = {\url{https://github.com/vithursant/VAE-Gumbel-Softmax}} 114 | } 115 | ``` 116 | -------------------------------------------------------------------------------- /vae_gumbel_softmax.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | import matplotlib 3 | matplotlib.use('Agg') 4 | 5 | import tensorflow as tf 6 | import tensorflow.contrib.slim as slim 7 | import numpy as np 8 | import seaborn as sns 9 | import os 10 | import time 11 | from tqdm import trange, tqdm 12 | 13 | from plot_utils import * 14 | 15 | from matplotlib import pyplot as plt 16 | from tensorflow.examples.tutorials.mnist import input_data 17 | 18 | sns.set_style('whitegrid') 19 | 20 | # Define the different distributions 21 | distributions = tf.contrib.distributions 22 | 23 | bernoulli = distributions.Bernoulli 24 | 25 | # Define current_time 26 | current_time = time.strftime('%Y-%m-%d-%H-%M-%S') 27 | 28 | # Define Directory Parameters 29 | flags = tf.app.flags 30 | flags.DEFINE_string('data_dir', os.getcwd() + 
'/data/', 'Directory for data') 31 | flags.DEFINE_string('log_dir', os.getcwd() + '/log/', 'Directory for logs') 32 | flags.DEFINE_string('results_dir', os.getcwd() + '/results/', 'Directory for results') 33 | flags.DEFINE_string('checkpoint_dir', os.getcwd() + '/checkpoint/' + current_time, 'Directory for checkpoints') 34 | 35 | # Define Model Parameters 36 | flags.DEFINE_integer('batch_size', 100, 'Minibatch size') 37 | flags.DEFINE_integer('num_iters', 50000, 'Number of iterations') 38 | flags.DEFINE_float('learning_rate', 0.001, 'Learning rate') 39 | flags.DEFINE_integer('num_classes', 10, 'Number of classes') 40 | flags.DEFINE_integer('num_cat_dists', 200, 'Number of categorical distributions')  # num_cat_dists // num_classes 41 | flags.DEFINE_float('init_temp', 1.0, 'Initial temperature') 42 | flags.DEFINE_float('min_temp', 0.5, 'Minimum temperature') 43 | flags.DEFINE_float('anneal_rate', 0.00003, 'Anneal rate') 44 | flags.DEFINE_bool('straight_through', False, 'Straight-through Gumbel-Softmax') 45 | flags.DEFINE_string('kl_type', 'relaxed', 'Kullback-Leibler divergence (relaxed or categorical)') 46 | flags.DEFINE_bool('learn_temp', False, 'Learn temperature parameter') 47 | 48 | FLAGS = flags.FLAGS 49 | 50 | def sample_gumbel(shape, eps=1e-20): 51 | U = tf.random_uniform(shape, minval=0, maxval=1) 52 | return -tf.log(-tf.log(U + eps) + eps) 53 | 54 | def gumbel_softmax(logits, temperature, hard=False): 55 | gumbel_softmax_sample = logits + sample_gumbel(tf.shape(logits)) 56 | y = tf.nn.softmax(gumbel_softmax_sample / temperature) 57 | 58 | if hard: 59 | k = tf.shape(logits)[-1] 60 | y_hard = tf.cast(tf.equal(y, tf.reduce_max(y, 1, keep_dims=True)), 61 | y.dtype) 62 | y = tf.stop_gradient(y_hard - y) + y 63 | 64 | return y 65 | 66 | def encoder(x): 67 | # Variational posterior q(y|x), i.e. 
the encoder (shape=(batch_size, 200)) 68 | net = slim.stack(x, 69 | slim.fully_connected, 70 | [512, 256]) 71 | 72 | # Unnormalized logits for N separate K-categorical 73 | # distributions 74 | logits_y = tf.reshape(slim.fully_connected(net, 75 | FLAGS.num_classes*FLAGS.num_cat_dists, 76 | activation_fn=None), 77 | [-1, FLAGS.num_classes]) 78 | 79 | q_y = tf.nn.softmax(logits_y) 80 | log_q_y = tf.log(q_y + 1e-20) 81 | 82 | return logits_y, q_y, log_q_y 83 | 84 | def decoder(tau, logits_y): 85 | y = tf.reshape(gumbel_softmax(logits_y, tau, hard=False), 86 | [-1, FLAGS.num_cat_dists, FLAGS.num_classes]) 87 | 88 | # Generative model p(x|y), i.e. the decoder (shape=(batch_size, 200)) 89 | net = slim.stack(slim.flatten(y), 90 | slim.fully_connected, 91 | [256, 512]) 92 | 93 | logits_x = slim.fully_connected(net, 94 | 784, 95 | activation_fn=None) 96 | 97 | # (shape=(batch_size, 784)) 98 | p_x = bernoulli(logits=logits_x) 99 | 100 | return p_x 101 | 102 | def create_train_op(x, lr, q_y, log_q_y, p_x): 103 | 104 | kl_tmp = tf.reshape(q_y * (log_q_y - tf.log(1.0 / FLAGS.num_classes)), 105 | [-1, FLAGS.num_cat_dists, FLAGS.num_classes]) 106 | 107 | KL = tf.reduce_sum(kl_tmp, [1,2]) 108 | elbo = tf.reduce_sum(p_x.log_prob(x), 1) - KL 109 | 110 | loss = tf.reduce_mean(-elbo) 111 | train_op = tf.train.AdamOptimizer(learning_rate=lr).minimize(loss) 112 | 113 | return train_op, loss 114 | 115 | def train(): 116 | 117 | # Setup encoder 118 | inputs = tf.placeholder(tf.float32, shape=[None, 784], name='inputs') 119 | tau = tf.placeholder(tf.float32, [], name='temperature') 120 | learning_rate = tf.placeholder(tf.float32, [], name='lr_value') 121 | 122 | # Get data i.e. 
MNIST 123 | data = input_data.read_data_sets(FLAGS.data_dir + '/MNIST', one_hot=True) 124 | logits_y, q_y, log_q_y = encoder(inputs) 125 | 126 | # Setup decoder 127 | p_x = decoder(tau, logits_y) 128 | 129 | train_op, loss = create_train_op(inputs, learning_rate, q_y, log_q_y, p_x) 130 | init_op = [tf.global_variables_initializer(), tf.local_variables_initializer()] 131 | 132 | sess = tf.Session() 133 | saver = tf.train.Saver() 134 | 135 | sess.run(init_op) 136 | dat = [] 137 | current_temp = FLAGS.init_temp 138 | # Start input enqueue threads. 139 | coord = tf.train.Coordinator() 140 | threads = tf.train.start_queue_runners(sess=sess, coord=coord) 141 | 142 | try: 143 | for i in tqdm(range(1, FLAGS.num_iters)): 144 | np_x, np_y = data.train.next_batch(FLAGS.batch_size) 145 | _, np_loss = sess.run([train_op, loss], {inputs: np_x, learning_rate: FLAGS.learning_rate, tau: current_temp}) 146 | 147 | if i % 10000 == 1: 148 | path = saver.save(sess, FLAGS.checkpoint_dir + '/model.ckpt') 149 | print('Model saved at iteration {} in checkpoint {}'.format(i, path)) 150 | dat.append([i, current_temp, np_loss]) 151 | if i % 1000 == 1: 152 | current_temp = np.maximum(FLAGS.init_temp * np.exp(-FLAGS.anneal_rate * i), 153 | FLAGS.min_temp) 154 | FLAGS.learning_rate *= 0.9 155 | print('Temperature updated to {}\n'.format(current_temp) + 156 | 'Learning rate updated to {}'.format(FLAGS.learning_rate)) 157 | if i % 5000 == 1: 158 | print('Iteration {}\nELBO: {}\n'.format(i, -np_loss)) 159 | 160 | #coord.request_stop() 161 | #coord.join(threads) 162 | #sess.close() 163 | plot_vae_gumbel(p_x, inputs, tau, learning_rate, data, sess) 164 | 165 | except KeyboardInterrupt: 166 | print() 167 | 168 | finally: 169 | #save(saver, sess, FLAGS.log_dir, i) 170 | coord.request_stop() 171 | coord.join(threads) 172 | sess.close() 173 | 174 | def plot_vae_gumbel(p_x, inputs, tau, learning_rate, data, sess): 175 | x_mean = p_x.mean() 176 | batch = data.test.next_batch(FLAGS.batch_size) 177 | np_x = sess.run(x_mean, 
{inputs: batch[0], learning_rate: FLAGS.learning_rate, tau: FLAGS.init_temp}) 178 | 179 | tmp = np.reshape(np_x,(-1,280,28)) # (10,280,28) 180 | img = np.hstack([tmp[i] for i in range(10)]) 181 | plot_squares(batch[0], np_x, 8) 182 | 183 | def main(): 184 | if tf.gfile.Exists(FLAGS.log_dir): 185 | tf.gfile.DeleteRecursively(FLAGS.log_dir) 186 | tf.gfile.MakeDirs(FLAGS.log_dir) 187 | tf.gfile.MakeDirs(FLAGS.data_dir) 188 | tf.gfile.MakeDirs(FLAGS.checkpoint_dir) 189 | tf.gfile.MakeDirs(FLAGS.results_dir) 190 | train() 191 | 192 | if __name__=="__main__": 193 | main() 194 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 
25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. 
If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. 
You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. 
(Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | --------------------------------------------------------------------------------