) The size of the filter
145 | :param channel: [in_channels, out_channels]
146 | :param stride: The size of the stride
147 | :return: The output tensor of the residual block
148 | """
149 | input_channel, output_channel = channel
150 |
151 | h = self.conv_bn(input_layer, filter=filter, channel=[input_channel, output_channel], stride=stride)
152 | h = tf.nn.relu(h)
153 | h = self.conv_bn(h, filter=filter, channel=[output_channel, output_channel], stride=1)  # second conv keeps stride 1; only the first conv (and the shortcut projection) downsample
154 |
155 | if input_channel != output_channel or stride != 1:
156 | # The input and output differ in dimension (channels and/or spatial size), so a 1x1 projection on the shortcut matches them.
157 | inp, _filter = self.conv(input_layer, filter=[1, 1], channel=[input_channel, output_channel], stride=stride)
158 | else:
159 | inp = input_layer
160 |
161 | h = tf.add(h, inp)
162 | h = tf.nn.relu(h)
163 | self.layers.append(h)
164 | return h
165 |
166 | def fc(self, input_layer):
167 | global_pool = tf.reduce_mean(input_layer, axis=[1, 2])  # global average pooling over the spatial dimensions
168 | fc_w = self.create_variable(name='fc_w', shape=[global_pool.shape[-1], self.n_label])
169 | fc_b = self.create_variable(name='fc_b', shape=[self.n_label])
170 |
171 | output = tf.matmul(global_pool, fc_w) + fc_b
172 | self.layers.append(output)
173 | return output
174 |
175 | def loss(self):
176 | loss_f = tf.nn.sparse_softmax_cross_entropy_with_logits
177 | cross_entropy = loss_f(logits=self.last_layer, labels=self.y_ts, name='cross_entropy')
178 | cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy_mean')
179 | return cross_entropy_mean
180 |
181 | def compile(self, target=None) -> tf.Session:
182 | gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5, allow_growth=True)
183 | sess = tf.Session(target, config=tf.ConfigProto(gpu_options=gpu_options,
184 | allow_soft_placement=False,
185 | log_device_placement=False))
186 | sess.run(tf.global_variables_initializer())
187 | self.sess = sess
188 | return sess
189 |
190 | def save(self, path='/tmp/resnet_anderson.ckpt'):
191 | if self.saver is None:
192 | self.saver = tf.train.Saver()
193 | self.saver.save(self.sess, path)
194 |
195 | def restore(self, path='/tmp/resnet_anderson.ckpt'):
196 | if self.saver is None:
197 | self.saver = tf.train.Saver()
198 | print(f'Restoring "{path}" model')
199 | self.saver.restore(self.sess, path)
200 |
201 | @property
202 | def last_layer(self) -> tf.Tensor:
203 | return self.layers[-1]
204 |
205 | def _naming(self, name=None):
206 | if not name:
207 | name = 'variable'
208 | name = name.lower()
209 | self._names.setdefault(name, 0)
210 | self._names[name] += 1
211 | count = self._names[name]
212 | return f'{name}_{count:02}'
213 |
--------------------------------------------------------------------------------
/deep-residual-learning-for-image-recognition.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Deep Residual Learning for Image Recognition\n",
8 | "\n",
9 | "## Degradation Problem\n",
10 | "\n",
11 | "Deep convolutional neural networks가 나온 이후로 많은 발전이 있었습니다.
\n",
12 | "기본적으로 deep networks는 features들을 스스로 low/mid/high level features로 나누며$ ^{[01]} $, 근래의 모델들의 경우 layers층을 더 깊이 있게 쌓아서 features들의 levels을 더욱 세분화하고자하는 시도가 있으며$ ^{[02]} $, ImageNet 챌린지에서 16개 또는 30개등의 layers를 사용하는 매우 깊은 모델들을 사용하기도 하였습니다.$ ^{[03]} $\n",
13 | "\n",
14 | "단순히 layers를 더 많이 쌓으면 더 좋은 결과를 낼 것인가? 라는 질문에는.. 사실 문제가 있습니다.
\n",
15 | "이미 잘 알려진 vanishing/exploding gradients$ ^{[04]} $ $ ^{[05]} $의 문제는 convergence자체를 못하게 만듭니다.
\n",
16 | "이러한 문제는 normalized initialization$ ^{[06]} $ $ ^{[04]} $ $ ^{[07]} $, 그리고 intermediate normalization layers $ ^{[08]} $에 의해서 다소 해결이 되어 수십층의 layers들이 SGD를 통해 convergence될 수 있도록 도와줍니다. \n",
17 | "\n",
18 | "Deeper networks를 사용할때 **degradation problem**이 발견되었습니다. degradation problem은 network의 depth가 커질수록 accuracy는 saturated (마치 뭔가 가득차서 현상태에서 더 진전이 없어져 버리는 상태)가 되고 degradation이 진행됩니다. 이때 degradation은 overfitting에 의해서 생겨나는 것이 아니며, 더 많은 layers를 넣을수록 training error가 더 높아집니다.$ ^{[09]} $ (만약 overfitting이었다면 training error는 매우 낮아야 합니다.)\n",
19 | "\n",
20 | "\n",
21 | "\n",
22 | "\n",
23 | "CIFAR-10 데이터에 대한 training error(왼쪽) 그리고 test error(오른쪽) 그래프.
\n",
24 | " 20-layers 그리고 56-layers를 사용했으며, 더 깊은 네트워크일수록 training error가 높으며, 따라서 test error또한 높다.\n",
25 | "
\n",
26 | "\n",
27 | "한가지 실험에서 이를 뒷받침할 근거를 내놓습니다.
\n",
28 | "shallow network에서 학습된 모델위에 다층의 layers를 추가적으로 쌓습니다. 이론적으로는 deeper 모델이 shallower 모델에 추가된 것이기 때문에 더 낮은 training error를 보여야 합니다. 하지만 학습된 shallower 모델에 layers를 더 추가시켜도, 그냥 shallow 모델보다 더 높은 training error를 보여줍니다.\n",
29 | "\n",
30 | "\n",
31 | "\n",
32 | "\n",
33 | "**Constructed Solution**
Shallower model(왼쪽) 그리고 Deeper model(오른쪽).
Shallower model로 부터 학습된 지식을 복사하고, Identity mapping로서 layers를 추가하였다.
Deeper model은 shallower model과 비교하여 더 낮거나 같은 training error를 보여야 하지만
실제는 degradation현상으로 인하여 layers가 깊어질수록 training error는 높아진다\n",
34 | "
\n",
35 | "\n",
36 | "위의 그림에서 보듯이, shallower model로 부터 학습된 지식을 복사하고, 추가적으로 identity mapping으로서 layers를 더 추가하였습니다.
\n",
37 | "여기서 identity mapping이라는 뜻은 $ f(x) = x $ 의 의미로 기존 학습된 layers에서 나온 output을 추가된 layers에서 동일한 output을 생성하는 것입니다. 따라서 identity mapping으로서 추가된 layers들은 최소한 shallower model에서 나온 예측치와 동일하거나 또는 더 깊게 들어갔으니 더 잘 학습이 되어야 합니다.\n",
38 | "\n",
39 | "하지만 현실은.. layers가 깊어지면 깊어질수록 training error가 더 높아지며, 따라서 test error또한 동일하게 높아집니다.
\n",
40 | "이러한 현상을 degradation problem이라고 하며, Deep Residual Network$ ^{[10]} $가 해결하려는 부분입니다."
41 | ]
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "metadata": {},
46 | "source": [
47 | "## Residual \n",
48 | "\n",
49 | "먼저 residual에 대해서 알아야 합니다.
\n",
50 | "간단하게 이야기 하면 residual이란 관측치(observed data)와 예측값(estimated value)사이의 차이입니다.
\n",
51 | "Linear least square (최소회귀직선) residual의 합은 0이 됩니다.
\n",
52 | "예를 들어 다음의 수식에서 true값인 $ x $를 찾고자 합니다.\n",
53 | "\n",
54 | "$$ f(x) = b $$\n",
55 | "\n",
56 | "이때 $ x $의 근사치(approximation)인 $ x_0 $가 주어졌을때, residual값은 다음과 같습니다.\n",
57 | "\n",
58 | "$$ b - f(x_0) $$\n",
59 | "\n",
60 | "반면에 error는 true값에서 근사치(approximation)의 차이이며 다음과 같습니다.
\n",
61 | "(하나의 예로.. 근사값 3.14 - $ \\pi $ 가 바로 오차입니다.)\n",
62 | "\n",
63 | "$$ x - x_0 $$\n",
64 | "\n",
65 | "좀더 잘 설명한 [영상](https://www.youtube.com/watch?v=snG7sa5CcJQ)을 참고 합니다.\n",
66 | "\n",
67 | "\n",
68 | "## ResNet Explained\n",
69 | "\n",
70 | "Degradation에서 언급한 현상을 보면, 직관적으로 보면 deep neural network에 더 많은 layers를 추가시킵으로서 성능을 향상시킬수 있을거 같지만 그와는 정 반대의 결과가 나왔습니다. 이것이 의미하는 바는 multiple nonlinear layers안에서 identity mappings을 시키는데(approximate) 어려움이 있다는 것입니다. 이는 흔히 딥러닝에서 나타나는 vanishing/exploding gradients 이슈 그리고 curse of dimensionality problem등으로 나타나는 현상으로 생각이 됩니다. \n",
71 | "\n",
72 | "ResNets은 이러한 문제를 해결하기 위하여, residual learning을 통해 강제로 Identity mapping (function)을 학습하도록 하였습니다.
\n",
73 | "\n",
74 | "* $ x $: 해당 레이어들의 input\n",
75 | "* $ H(x) $: (전체가 아닌) 소규모의 다층 레이어(a few stacked layers)의 output \n",
76 | "* $ id(x) $: Identity mapping(function)은 단순히 $ id(x) = x $ 으로서, $ x $값을 받으면 동일한 $ x $를 리턴시킵니다\n",
77 | "* $ H(x) $ 그리고 $ x $ 는 동일한 dimension을 갖고 있다고 가정\n",
78 | "\n",
79 | "일반적인 Neural Network는 $ H(x) $ 자체를 학습니다. \n",
80 | "\n",
81 | "\n",
82 | "\n",
83 | "\n",
84 | "ResNet의 경우에는 residual function을 학습하도록 강제합니다.\n",
85 | "\n",
86 | "$$ F(x) = H(x) - id(x) $$\n",
87 | "\n",
88 | "우리는 실제 true값을 알고자 하는 것이기 때문에 위의 공식은 다음과 같이 재정립(reformulation)할수 있습니다.\n",
89 | "\n",
90 | "$$ \\begin{align}\n",
91 | "H(x) &= F(x) + id(x) \\\\\n",
92 | "&= F(x) + x\n",
93 | "\\end{align} $$\n",
94 | "\n",
95 | "즉 아래의 그림처럼 그래프가 그려집니다.\n",
96 | "\n",
97 | "\n",
98 | "\n",
99 | "이론적으로 identity mappings이 최적화(optimal)되었다면, 다중 레이어의 weights연산 $ F(x) $ 의 값을 0으로 만들것입니다. $ F(x) $ 가 0이 된후 $ id(x) $ 를 더하기 때문에 해당 subnetwork는 identity function으로서 기능을 하게 됩니다. \n",
100 | "\n",
101 | "실제로는 identity mappings (layers)가 최적화되어 0으로 수렴하는 것은 일어나기 힘듬니다.
\n",
102 | "다만 reformulation 된 공식안에 identity function이 존재하기 때문에 reference가 될 수 있고, 따라서 neural network가 학습하는데 도움을 줄 수 있습니다."
103 | ]
104 | },
105 | {
106 | "cell_type": "markdown",
107 | "metadata": {},
108 | "source": [
109 | "## Shortcut Connection\n",
110 | "\n",
111 | "위에서 이미 언급한 그래프에서 논문에서는 building block을 다음과 같이 정의하고 있습니다.
\n",
112 | "(공식을 간략하게 하기 위해서 bias에 대한 부분은 의도적으로 누락되어 있습니다. 당연히 실제 구현시에는 필요합니다.)\n",
113 | "\n",
114 | "$$ y = F(x\\ |\\ W_i) + x $$\n",
115 | "\n",
116 | "$ F(x\\ |\\ W_i) $ 는 학습되야할 residual mapping을 나타내며, $ x $ 그리고 $ y $는 각각 input 그리고 output을 나타냅니다.
\n",
117 | "위의 공식은 아주 간략하게 표현하기 위해서 나타낸것이고 2개의 레이어를 사용하는 경우에는 $ F $ 함수에대한 정의가 바뀝니다.\n",
118 | "\n",
119 | "$$ F = W_2 \\sigma \\left(W_1 x \\right) $$\n",
120 | "\n",
121 | "여기서 $ \\sigma $는 ReLU를 가르킵니다. $ F + x $ 는 **shortcut connection**을 나타내며 element-wise addition을 연산합니다.
\n",
122 | "해당 addtion 이후! 두번째 nonlinearity를 적용합니다. (즉 ReLU를 addition이후에 적용하면 됨)\n",
123 | "\n",
124 | "$ F + x $ 를 연산할때 중요한점은 **dimension이 서로 동일**해야 합니다.\n",
125 | "만약 서로 dimension이 다르다면 (예를 들어서 input/output의 channels이 서로 다름) linear projection $ W_s $ 를 shorcut connection에 적용시켜서 dimension을 서로 동일하게 만들어줄수 있습니다.\n",
126 | "\n",
127 | "$$ y = F(x\\ |\\ W_i) + W_s x $$\n",
128 | "\n",
129 | "Residual function $ F $는 사실 상당히 유연합니다.즉 $ F $는 2개, 3개 또는 3개 이상의 다층을 사용하는 것이 가능합니다.
\n",
130 | "하지만 만약 $ F $안에 1개의 레이어만 갖고 있다면 linear layer와 동일해지게 됩니다.\n",
131 | "\n",
132 | "$$ y = W_1x + x $$\n",
133 | "\n",
134 | "따라서 1개만 갖고 있는 $ F $는 사실상 의미가 없습니다.
\n",
135 | "또한 $ F $는 fully-connected layer 또는 convolution같은 다양한 방법으로 모델링을 할 수 있습니다.\n"
136 | ]
137 | },
138 | {
139 | "cell_type": "markdown",
140 | "metadata": {},
141 | "source": [
142 | "## Implementation\n",
143 | "\n",
144 | "논문에서는 ImageNet에 대한 구현을 다음과 같이 하였습니다. \n",
145 | "\n",
146 | "#### Data Augmentation\n",
147 | "\n",
148 | "224 x 224 싸이즈로 random crop 또는 horizontal flip이 되었으며, 픽섹마다 평균값으로 subtract 되었습니다.
\n",
149 | "Standard color augmentation을 실행했습니다.\n",
150 | "\n",
151 | "#### ResNet Layer\n",
152 | "\n",
153 | "Conv -> Batch -> ReLU -> Conv -> Batch -> Addition -> RELU"
154 | ]
155 | },
156 | {
157 | "cell_type": "markdown",
158 | "metadata": {},
159 | "source": [
160 | "# References\n",
161 | "\n",
162 | "인용한 문서들..\n",
163 | "\n",
164 | "* [01] [Visualizing and understanding convolutional neural networks](https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf)\n",
165 | "* [02] [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)\n",
166 | "* [03] [Going Deeper with Convolutions](https://arxiv.org/pdf/1409.4842.pdf)\n",
167 | "* [04] [Understanding the difficulty of training\n",
168 | "deep feedforward neural networks](http://www-prima.imag.fr/jlc/Courses/2016/PRML/XavierInitialisation.pdf)\n",
169 | "* [05] [Learning long-term dependencies\n",
170 | "with gradient descent is difficult](http://www.iro.umontreal.ca/~lisa/pointeurs/ieeetrnn94.pdf)\n",
171 | "* [06] [Efficient backprop. In Neural Networks: Tricks of the Trade](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)\n",
172 | "* [07] [Exact solutions to the nonlinear dynamics of learning in deep linear neural networks](https://arxiv.org/pdf/1312.6120.pdf)\n",
173 | "* [08] [Batch normalization: Accelerating deep\n",
174 | "network training by reducing internal covariate shift](https://arxiv.org/abs/1502.03167)\n",
175 | "* [09] [Convolutional neural networks at constrained time cost](https://arxiv.org/abs/1412.1710)\n",
176 | "* [10] [Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf)\n",
177 | "\n",
178 | "글 쓰면서 참고한 문서들..\n",
179 | "\n",
180 | "* [Deep Residual Learning for Image Recognition - 원래 ResNet Paper](https://arxiv.org/pdf/1512.03385.pdf)\n",
181 | "* [Identity Mappings in Deep Residual Networks - 개선된 Paper](https://arxiv.org/pdf/1603.05027.pdf)\n",
182 | "* [Residual neural networks are an exciting area of deep learning research](https://blog.init.ai/residual-neural-networks-are-an-exciting-area-of-deep-learning-research-acf14f4912e9)\n",
183 | "* [Deep Residual Networks ICML 2016 Tutorial](http://icml.cc/2016/tutorials/icml2016_tutorial_deep_residual_networks_kaiminghe.pdf)"
184 | ]
185 | }
186 | ],
187 | "metadata": {
188 | "kernelspec": {
189 | "display_name": "Python 3",
190 | "language": "python",
191 | "name": "python3"
192 | },
193 | "language_info": {
194 | "codemirror_mode": {
195 | "name": "ipython",
196 | "version": 3.0
197 | },
198 | "file_extension": ".py",
199 | "mimetype": "text/x-python",
200 | "name": "python",
201 | "nbconvert_exporter": "python",
202 | "pygments_lexer": "ipython3",
203 | "version": "3.6.1"
204 | }
205 | },
206 | "nbformat": 4,
207 | "nbformat_minor": 0
208 | }
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "{}"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright 2017 CHANG MIN JO (Anderson Jo)
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------