├── NOTICE
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── README.md
├── TextAdaIN.py
└── LICENSE
/NOTICE:
--------------------------------------------------------------------------------
1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
5 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 |
10 | ## Reporting Bugs/Feature Requests
11 |
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 |
14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 |
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 |
22 |
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 |
26 | 1. You are working against the latest source on the *main* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 |
30 | To send us a pull request, please:
31 |
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 |
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 |
42 |
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
45 |
46 |
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 |
52 |
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
55 |
56 |
57 | ## Licensing
58 |
59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
60 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## TextAdaIN: Paying Attention to Shortcut Learning in Text Recognizers
2 | This is the official PyTorch implementation of [TextAdaIN](https://arxiv.org/abs/2105.03906) (ECCV 2022).
3 |
4 | **[Oren Nuriel](https://scholar.google.com/citations?hl=en&user=x3j-9RwAAAAJ),
5 | [Sharon Fogel](https://scholar.google.com/citations?hl=en&user=fJHpwNkAAAAJ),
6 | [Ron Litman](https://scholar.google.com/citations?hl=en&user=69GY5dEAAAAJ)**
7 |
8 | TextAdaIN creates local distortions in the feature map which prevent the network from overfitting to local statistics. It does so by viewing each feature map as a sequence of elements and deliberately mismatching fine-grained feature statistics between elements in a mini-batch.
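The mismatching of fine-grained statistics can be sketched in a few lines. The following is an illustrative, self-contained sketch (the function name and shapes are hypothetical, not the repository API): normalize each sample's features by its own mean and standard deviation over the width axis, then re-scale with the statistics of another sample in the mini-batch.

```python
import torch

def swap_width_stats(content, style, eps=1e-4):
    # content, style: N x C x H x W feature maps
    # per-(N, C, H) statistics, computed over the width dimension
    c_mean = content.mean(dim=3, keepdim=True)
    c_std = torch.sqrt(content.var(dim=3, keepdim=True) + eps)
    s_mean = style.mean(dim=3, keepdim=True)
    s_std = torch.sqrt(style.var(dim=3, keepdim=True) + eps)
    # normalize content, then re-scale with the other sample's statistics
    return (content - c_mean) / c_std * s_std + s_mean

x = torch.randn(2, 4, 8, 16)
out = swap_width_stats(x, x.flip(0))  # mismatch statistics across the batch
print(out.shape)  # torch.Size([2, 4, 8, 16])
```

In the actual module this swap is applied stochastically, during training only, and within local windows along the width axis (see [TextAdaIN.py](./TextAdaIN.py)).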
9 |
10 |
11 | 
12 |
13 |
14 |
15 |
16 |
17 | ## Overcoming the shortcut
18 | Below we see the attention maps of a text recognizer before and after applying local corruptions to the input image.
19 | Each example shows the input image (bottom), the attention map (top) and the model prediction (left). Each line in the attention map is a time step, representing the attention per character prediction. (a) The baseline model, which uses local statistics as a shortcut, misinterprets the corrupted images. (b) Our proposed method, which overcomes this shortcut, enhances performance under both standard and challenging testing conditions.
20 |
21 | 
22 |
23 | ## Integrating into your favorite text recognizer backbone
24 | Sample code for the class can be found in [TextAdaIN.py](./TextAdaIN.py).
25 |
26 | As there are no learnable weights in this module, a model trained with it can later be loaded either with or without it.
27 |
28 | ```python
29 | # in the __init__ of a PyTorch module (no learnable weights, and the module is
30 | # not applied during inference, so checkpoints load with or without it)
31 | self.text_adain = TextAdaIN()
32 |
33 | # in the forward pass
34 | out = self.conv(out)
35 | out = self.text_adain(out)
36 | out = self.bn(out)
37 | ```
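TextAdaIN applies the permutation within a fixed number of windows along the width axis. The reshape round trip it relies on (mirroring `create_windows_from_tensor` and `revert_windowed_tensor` in [TextAdaIN.py](./TextAdaIN.py)) can be checked in isolation; this is a minimal sketch with illustrative names:

```python
import torch

def to_windows(x, k):
    # N x C x H x W -> N*k x C x H x (W // k), dropping any width remainder
    N, C, H, W = x.size()
    frame = W // k
    x = x[:, :, :, : frame * k].transpose(1, 3)  # N x frame*k x H x C
    return x.reshape(N * k, frame, H, C).transpose(1, 3).contiguous()

def from_windows(xw, k):
    # inverse of to_windows: N*k x C x H x frame -> N x C x H x (k * frame)
    Nk, C, H, frame = xw.size()
    xw = xw.transpose(1, 3).reshape(Nk // k, k * frame, H, C)
    return xw.transpose(1, 3)

x = torch.randn(2, 3, 8, 20)
w = to_windows(x, 5)    # 10 x 3 x 8 x 4
y = from_windows(w, 5)
print(torch.equal(y, x))  # True, since 20 divides evenly into 5 windows
```

When the width is not divisible by the number of windows, the remainder columns are left untouched and concatenated back, as in `TextAdaIN.forward`.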
39 |
40 |
41 | ## Results
42 | Below are the results for a variety of settings, scene text and handwriting, across multiple architectures, with and without TextAdaIN.
43 | Applying TextAdaIN to state-of-the-art recognizers improves performance.
44 |
45 | Regular and Irregular are scene-text benchmarks; IAM and RIMES are handwritten. The first row lists the number of test images per benchmark; all other entries are word accuracy (%).
46 |
47 | | Method | Regular | Irregular | IAM | RIMES |
48 | |---|---|---|---|---|
49 | | # test images | 5,529 | 3,010 | 17,990 | 7,734 |
50 | | Baek et al. (CTC) | 88.7 | 72.9 | 80.6 | 87.8 |
51 | | + TextAdaIN | 89.5 (+0.8) | 73.8 (+0.9) | 81.5 (+0.9) | 90.7 (+2.9) |
52 | | Baek et al. (Attn) | 92.0 | 77.4 | 82.7 | 90.2 |
53 | | + TextAdaIN | 92.2 (+0.2) | 77.7 (+0.3) | 84.1 (+1.4) | 93.0 (+2.8) |
54 | | Litman et al. | 93.6 | 83.0 | 85.7 | 93.3 |
55 | | + TextAdaIN | 94.2 (+0.6) | 83.4 (+0.4) | 87.3 (+1.6) | 94.4 (+1.1) |
56 | | Fang et al. | 93.9 | 82.0 | 85.4 | 92.0 |
57 | | + TextAdaIN | 94.2 (+0.3) | 82.8 (+0.8) | 86.3 (+0.9) | 93.0 (+1.0) |
58 |
129 | ## Experiments - Plug n' play
130 |
131 | ### Standard Text Recognizer
132 |
133 | To run with the [Baek et al.](https://github.com/clovaai/deep-text-recognition-benchmark) framework, insert the TextAdaIN module into the ResNet backbone after every convolutional layer in the [feature extractor](https://github.com/clovaai/deep-text-recognition-benchmark/blob/master/modules/feature_extraction.py), as described above.
134 | Then run the command line as instructed in the [training & evaluation section](https://github.com/clovaai/deep-text-recognition-benchmark#training-and-evaluation).
135 |
136 | For scene text we use the original configurations.
137 |
138 | When training on handwriting datasets, we use the following configuration:
139 | ```shell
140 | python train.py --train_data --valid_data --select_data / --batch_ratio 1 --Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn --exp-name handwriting --sensitive --rgb --num_iter 200000 --batch_size 128 --textadain
141 | ```
142 |
143 | ### ABINet
144 |
145 | To run with [ABINet](https://github.com/FangShancheng/ABINet), insert the TextAdaIN module into the ResNet backbone after every convolutional layer in the [feature extractor](https://github.com/FangShancheng/ABINet/blob/main/modules/resnet.py), as described above.
146 | Then run the command line as instructed in the [training section](https://github.com/FangShancheng/ABINet#training).
147 |
148 | Please refer to the implementation details in the paper for further information.
149 |
150 | ## Citation
151 | If you find this work useful please consider citing it:
152 | ```bibtex
153 | @article{nuriel2021textadain,
154 | title={TextAdaIN: Paying Attention to Shortcut Learning in Text Recognizers},
155 | author={Nuriel, Oren and Fogel, Sharon and Litman, Ron},
156 | journal={arXiv preprint arXiv:2105.03906},
157 | year={2021}
158 | }
159 | ```
160 |
161 | ## Security
162 |
163 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
164 |
165 | ## License
166 |
167 | This project is licensed under the Apache-2.0 License.
--------------------------------------------------------------------------------
/TextAdaIN.py:
--------------------------------------------------------------------------------
1 | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | # SPDX-License-Identifier: Apache-2.0
3 |
4 | import random
5 |
6 | import torch
7 | import torch.nn as nn
8 |
9 |
10 | def create_padain_class(num_windows=None):
11 | padain_class = PermuteAdaptiveInstanceNorm2d
12 | if num_windows is not None and num_windows > 0:
13 | padain_class = TextAdaIN
14 | return padain_class
15 |
16 |
17 | def adaptive_instance_normalization(content_feat, style_feat, mode=None):
18 | assert (content_feat.size()[:2] == style_feat.size()[:2])
19 | size = content_feat.size()
20 | if mode is None:
21 | mode_func = calc_channel_mean_std
22 | else:
23 | mode_func = mode
24 | style_mean, style_std = mode_func(style_feat.detach())
25 | content_mean, content_std = mode_func(content_feat)
26 | content_std = content_std + 1e-4 # to avoid division by 0
27 | normalized_feat = (content_feat - content_mean.expand(
28 | size)) / content_std.expand(size)
29 |
30 | return normalized_feat * style_std.expand(size) + style_mean.expand(size)
31 |
32 |
33 | def calc_channel_mean_std(feat):
34 | """
35 | Calculates the mean and standard deviation for each channel
36 | :param feat: features post convolutional layer
37 | :return: mean and std for each channel
38 | """
39 | size = feat.size()
40 | assert (len(size) == 4)
41 | N, C, H, W = size
42 | # std over a dimension of size 1 results in NaN
43 | assert (W * H != 1), f"Cannot calculate std over W, H {size} (N,C,H,W), dimensions W={W}, H={H} cannot be 1"
44 | feat_std = feat.view(N, C, -1).std(dim=2).view(N, C, 1, 1)
45 | feat_mean = feat.view(N, C, -1).mean(dim=2).view(N, C, 1, 1)
46 | return feat_mean, feat_std
47 |
48 |
49 | def calc_width_mean_std(feat):
50 | """
51 | Calculates the mean and standard deviation for each C and H
52 | :param feat: features post convolutional layer
53 | :return: mean and std for each C and H
54 | """
55 | size = feat.size()
56 | assert (len(size) == 4)
57 | N, C, H, W = size
58 | # std over a dimension of size 1 results in NaN
59 | assert (W != 1), f"Cannot calculate std over W {size} (N,C,H,W), dimensions W={W} cannot be 1"
60 | feat_std = torch.sqrt(feat.var(dim=3).view(N, C, H, 1) + 1e-4)
61 | feat_mean = feat.mean(dim=3).view(N, C, H, 1)
62 | return feat_mean, feat_std
63 |
64 |
65 | def get_adain_dim(dim):
66 | dim2func = {
67 | ("C",): calc_channel_mean_std,
68 | ("C", "H"): calc_width_mean_std,
69 | }
70 | dim = tuple(sorted(dim))
71 | assert dim in dim2func, f"Please insert one of the following : {list(dim2func.keys())}"
72 | return dim2func[dim]
73 |
74 |
75 | class PermuteAdaptiveInstanceNorm2d(nn.Module):
76 |
77 | def __init__(self, p=0.01, dim=('C',), **kwargs):
78 | '''
79 | PermuteAdaptiveInstanceNorm2d
80 | "Permuted AdaIN: Reducing the Bias Towards Global Statistics in Image Classification"
81 | :param p: the probability of applying Permuted AdaIN
82 | :param dim: a tuple of either 'C' for channel as in AdaIN or 'C,H' as in TextAdaIN
83 | '''
84 | super(PermuteAdaptiveInstanceNorm2d, self).__init__()
85 | self.p = p
86 | self.mode_func = get_adain_dim(dim)
87 |
88 | def forward(self, x):
89 | permute = random.random() < self.p
90 | if not self.training or not permute:
91 | return x
92 |
93 | target = x
94 |
95 | N, C, H, W = x.size()
96 |
97 | target = target[torch.randperm(N)]
98 |
99 | x = adaptive_instance_normalization(x, target, mode=self.mode_func)
100 |
101 | return x
102 |
103 | def extra_repr(self) -> str:
104 | return 'p={}, mode={}'.format(self.p, self.mode_func.__name__)
105 |
106 |
107 | class TextAdaIN(PermuteAdaptiveInstanceNorm2d):
108 | def __init__(self, p=0.01, dim=('C', 'H'), num_windows=5, **kwargs):
109 | '''
110 | TextAdaIN: applies Permuted AdaIN independently within num_windows windows along the width axis
111 | :param p: the probability of applying Permuted AdaIN
112 | :param num_windows: the number of windows (k) the width is split into
113 | '''
114 | super(TextAdaIN, self).__init__(p=p, dim=dim, **kwargs)
115 | self.num_windows = num_windows
116 |
117 | def _pad_to_k(self, x):
118 | N, C, H, W = x.size()
119 | k = min(self.num_windows, W)
120 | remainder = W % k
121 | if remainder != 0:
122 | x = torch.nn.functional.pad(x, (0, k - remainder), 'constant', 0)
123 | return x
124 |
125 | def forward(self, x):
126 | if not self.training:
127 | return x
128 |
129 | N, C, H, W = x.size()
130 | k = min(self.num_windows, W)
131 | frame_total = W // k * k
132 |
133 | x_without_remainder = create_windows_from_tensor(x, k)
134 | x_without_remainder = super().forward(x_without_remainder)
135 | x_without_remainder = revert_windowed_tensor(x_without_remainder, k, W)
136 | x = torch.cat((x_without_remainder, x[:, :, :, frame_total:]), dim=3).contiguous()
137 | return x
138 |
139 | def extra_repr(self) -> str:
140 | return 'p={}, num_windows={} mode={}'.format(self.p, self.num_windows, self.mode_func.__name__)
141 |
142 |
143 | def revert_windowed_tensor(x_without_remainder, k, W):
144 | """
145 | Reverts a windowed tensor to its original shape, placing the windows back in their place
146 | :param x_without_remainder: N*k x C x H x frame_size (= original W // k)
147 | :param k: number of windows
148 | :param W: Original width
149 | :return: tensor N x C x H x (k * frame_size)
150 | """
151 | N, C, H, _ = x_without_remainder.size()
152 | N = N // k
153 | frame_size = W // k
154 | x_without_remainder = x_without_remainder.transpose(1, 3) # N*k x frame_size x H x C
155 | x_without_remainder = x_without_remainder.reshape(N, k * frame_size, H,
156 | C) # revert the windows back to their original position
157 | x_without_remainder = x_without_remainder.transpose(1, 3) # N x C x H x k*frame_size
158 | return x_without_remainder
159 |
160 |
161 | def create_windows_from_tensor(x, k):
162 | """
163 | Splits the tensor into k windows ignoring the remainder
164 | :param x: a tensor with dims NxCxHxW
165 | :param k: number of windows
166 | :return: tensor N*k x C x H x frame_size (frame_size = W // k)
167 | """
168 | N, C, H, W = x.size()
169 | frame_size = W // k
170 | frame_total = W // k * k
171 | x_without_remainder = x[:, :, :, : frame_total]
172 | x_without_remainder = x_without_remainder.transpose(1, 3) # NxWxHxC
173 | x_without_remainder = x_without_remainder.reshape(N * k, frame_size, H, C) # N*num_windows x frame_size x H x C
174 | x_without_remainder = x_without_remainder.transpose(1, 3).contiguous() # N*num_windows x C x H x frame_size
175 | return x_without_remainder
176 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Apache License
3 | Version 2.0, January 2004
4 | http://www.apache.org/licenses/
5 |
6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7 |
8 | 1. Definitions.
9 |
10 | "License" shall mean the terms and conditions for use, reproduction,
11 | and distribution as defined by Sections 1 through 9 of this document.
12 |
13 | "Licensor" shall mean the copyright owner or entity authorized by
14 | the copyright owner that is granting the License.
15 |
16 | "Legal Entity" shall mean the union of the acting entity and all
17 | other entities that control, are controlled by, or are under common
18 | control with that entity. For the purposes of this definition,
19 | "control" means (i) the power, direct or indirect, to cause the
20 | direction or management of such entity, whether by contract or
21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
22 | outstanding shares, or (iii) beneficial ownership of such entity.
23 |
24 | "You" (or "Your") shall mean an individual or Legal Entity
25 | exercising permissions granted by this License.
26 |
27 | "Source" form shall mean the preferred form for making modifications,
28 | including but not limited to software source code, documentation
29 | source, and configuration files.
30 |
31 | "Object" form shall mean any form resulting from mechanical
32 | transformation or translation of a Source form, including but
33 | not limited to compiled object code, generated documentation,
34 | and conversions to other media types.
35 |
36 | "Work" shall mean the work of authorship, whether in Source or
37 | Object form, made available under the License, as indicated by a
38 | copyright notice that is included in or attached to the work
39 | (an example is provided in the Appendix below).
40 |
41 | "Derivative Works" shall mean any work, whether in Source or Object
42 | form, that is based on (or derived from) the Work and for which the
43 | editorial revisions, annotations, elaborations, or other modifications
44 | represent, as a whole, an original work of authorship. For the purposes
45 | of this License, Derivative Works shall not include works that remain
46 | separable from, or merely link (or bind by name) to the interfaces of,
47 | the Work and Derivative Works thereof.
48 |
49 | "Contribution" shall mean any work of authorship, including
50 | the original version of the Work and any modifications or additions
51 | to that Work or Derivative Works thereof, that is intentionally
52 | submitted to Licensor for inclusion in the Work by the copyright owner
53 | or by an individual or Legal Entity authorized to submit on behalf of
54 | the copyright owner. For the purposes of this definition, "submitted"
55 | means any form of electronic, verbal, or written communication sent
56 | to the Licensor or its representatives, including but not limited to
57 | communication on electronic mailing lists, source code control systems,
58 | and issue tracking systems that are managed by, or on behalf of, the
59 | Licensor for the purpose of discussing and improving the Work, but
60 | excluding communication that is conspicuously marked or otherwise
61 | designated in writing by the copyright owner as "Not a Contribution."
62 |
63 | "Contributor" shall mean Licensor and any individual or Legal Entity
64 | on behalf of whom a Contribution has been received by Licensor and
65 | subsequently incorporated within the Work.
66 |
67 | 2. Grant of Copyright License. Subject to the terms and conditions of
68 | this License, each Contributor hereby grants to You a perpetual,
69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70 | copyright license to reproduce, prepare Derivative Works of,
71 | publicly display, publicly perform, sublicense, and distribute the
72 | Work and such Derivative Works in Source or Object form.
73 |
74 | 3. Grant of Patent License. Subject to the terms and conditions of
75 | this License, each Contributor hereby grants to You a perpetual,
76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77 | (except as stated in this section) patent license to make, have made,
78 | use, offer to sell, sell, import, and otherwise transfer the Work,
79 | where such license applies only to those patent claims licensable
80 | by such Contributor that are necessarily infringed by their
81 | Contribution(s) alone or by combination of their Contribution(s)
82 | with the Work to which such Contribution(s) was submitted. If You
83 | institute patent litigation against any entity (including a
84 | cross-claim or counterclaim in a lawsuit) alleging that the Work
85 | or a Contribution incorporated within the Work constitutes direct
86 | or contributory patent infringement, then any patent licenses
87 | granted to You under this License for that Work shall terminate
88 | as of the date such litigation is filed.
89 |
90 | 4. Redistribution. You may reproduce and distribute copies of the
91 | Work or Derivative Works thereof in any medium, with or without
92 | modifications, and in Source or Object form, provided that You
93 | meet the following conditions:
94 |
95 | (a) You must give any other recipients of the Work or
96 | Derivative Works a copy of this License; and
97 |
98 | (b) You must cause any modified files to carry prominent notices
99 | stating that You changed the files; and
100 |
101 | (c) You must retain, in the Source form of any Derivative Works
102 | that You distribute, all copyright, patent, trademark, and
103 | attribution notices from the Source form of the Work,
104 | excluding those notices that do not pertain to any part of
105 | the Derivative Works; and
106 |
107 | (d) If the Work includes a "NOTICE" text file as part of its
108 | distribution, then any Derivative Works that You distribute must
109 | include a readable copy of the attribution notices contained
110 | within such NOTICE file, excluding those notices that do not
111 | pertain to any part of the Derivative Works, in at least one
112 | of the following places: within a NOTICE text file distributed
113 | as part of the Derivative Works; within the Source form or
114 | documentation, if provided along with the Derivative Works; or,
115 | within a display generated by the Derivative Works, if and
116 | wherever such third-party notices normally appear. The contents
117 | of the NOTICE file are for informational purposes only and
118 | do not modify the License. You may add Your own attribution
119 | notices within Derivative Works that You distribute, alongside
120 | or as an addendum to the NOTICE text from the Work, provided
121 | that such additional attribution notices cannot be construed
122 | as modifying the License.
123 |
124 | You may add Your own copyright statement to Your modifications and
125 | may provide additional or different license terms and conditions
126 | for use, reproduction, or distribution of Your modifications, or
127 | for any such Derivative Works as a whole, provided Your use,
128 | reproduction, and distribution of the Work otherwise complies with
129 | the conditions stated in this License.
130 |
131 | 5. Submission of Contributions. Unless You explicitly state otherwise,
132 | any Contribution intentionally submitted for inclusion in the Work
133 | by You to the Licensor shall be under the terms and conditions of
134 | this License, without any additional terms or conditions.
135 | Notwithstanding the above, nothing herein shall supersede or modify
136 | the terms of any separate license agreement you may have executed
137 | with Licensor regarding such Contributions.
138 |
139 | 6. Trademarks. This License does not grant permission to use the trade
140 | names, trademarks, service marks, or product names of the Licensor,
141 | except as required for reasonable and customary use in describing the
142 | origin of the Work and reproducing the content of the NOTICE file.
143 |
144 | 7. Disclaimer of Warranty. Unless required by applicable law or
145 | agreed to in writing, Licensor provides the Work (and each
146 | Contributor provides its Contributions) on an "AS IS" BASIS,
147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148 | implied, including, without limitation, any warranties or conditions
149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150 | PARTICULAR PURPOSE. You are solely responsible for determining the
151 | appropriateness of using or redistributing the Work and assume any
152 | risks associated with Your exercise of permissions under this License.
153 |
154 | 8. Limitation of Liability. In no event and under no legal theory,
155 | whether in tort (including negligence), contract, or otherwise,
156 | unless required by applicable law (such as deliberate and grossly
157 | negligent acts) or agreed to in writing, shall any Contributor be
158 | liable to You for damages, including any direct, indirect, special,
159 | incidental, or consequential damages of any character arising as a
160 | result of this License or out of the use or inability to use the
161 | Work (including but not limited to damages for loss of goodwill,
162 | work stoppage, computer failure or malfunction, or any and all
163 | other commercial damages or losses), even if such Contributor
164 | has been advised of the possibility of such damages.
165 |
166 | 9. Accepting Warranty or Additional Liability. While redistributing
167 | the Work or Derivative Works thereof, You may choose to offer,
168 | and charge a fee for, acceptance of support, warranty, indemnity,
169 | or other liability obligations and/or rights consistent with this
170 | License. However, in accepting such obligations, You may act only
171 | on Your own behalf and on Your sole responsibility, not on behalf
172 | of any other Contributor, and only if You agree to indemnify,
173 | defend, and hold each Contributor harmless for any liability
174 | incurred by, or claims asserted against, such Contributor by reason
175 | of your accepting any such warranty or additional liability.
176 |
--------------------------------------------------------------------------------