├── BERT-BiLSTM-CRF-NER ├── BERT_NER.py ├── README.md ├── __pycache__ │ ├── feature_input.cpython-36.pyc │ ├── lstm_crf_layer.cpython-36.pyc │ └── model.cpython-36.pyc ├── bert │ ├── CONTRIBUTING.md │ ├── LICENSE │ ├── README.md │ ├── __init__.py │ ├── __pycache__ │ │ ├── __init__.cpython-36.pyc │ │ ├── modeling.cpython-36.pyc │ │ ├── optimization.cpython-36.pyc │ │ └── tokenization.cpython-36.pyc │ ├── create_pretraining_data.py │ ├── extract_features.py │ ├── modeling.py │ ├── modeling_test.py │ ├── multilingual.md │ ├── optimization.py │ ├── optimization_test.py │ ├── requirements.txt │ ├── run_classifier.py │ ├── run_pretraining.py │ ├── run_squad.py │ ├── sample_text.txt │ ├── tokenization.py │ └── tokenization_test.py ├── chinese_L-12_H-768_A-12 │ ├── bert_config.json │ ├── bert_model.ckpt │ ├── bert_model.ckpt.index │ ├── bert_model.ckpt.meta │ └── vocab.txt ├── conlleval.pl ├── conlleval.py ├── data │ └── data.here ├── data_process_utils.py ├── feature_input.py ├── lstm_crf_layer.py ├── model.py ├── requirement.txt ├── run_test.py ├── stopwords.txt ├── tf_metrics.py └── vocab.txt ├── BERT-SA ├── README.md ├── bert │ ├── .gitignore │ ├── .idea │ │ ├── bert.iml │ │ ├── encodings.xml │ │ ├── misc.xml │ │ ├── modules.xml │ │ └── workspace.xml │ ├── CONTRIBUTING.md │ ├── LICENSE │ ├── README.md │ ├── __init__.py │ ├── _gitignore.txt │ ├── create_pretraining_data.py │ ├── extract_features.py │ ├── modeling.py │ ├── modeling_test.py │ ├── multilingual.md │ ├── optimization.py │ ├── optimization_test.py │ ├── predicting_movie_reviews_with_bert_on_tf_hub.ipynb │ ├── requirements.txt │ ├── run_classifier.py │ ├── run_classifier_with_tfhub.py │ ├── run_pretraining.py │ ├── run_squad.py │ ├── sample_text.txt │ ├── tokenization.py │ └── tokenization_test.py ├── input_feature.py ├── model.py ├── processor.py ├── run_classifier.py └── run_test.py └── README.md /BERT-BiLSTM-CRF-NER/README.md: -------------------------------------------------------------------------------- 1 | ### BERT + BiLSTM + CRF 2 | - Only NER 3 | + Input 4 | + Output 5 | * labels => [O N-B N-I] 6 | * (Only named entities are marked in the labels; sentiment is judged in a later step) 7 | - NER + Sentiment Analysis 8 | + Output 9 | * labels => [O P-B P-I N-B N-I] 10 | * (Positive or Negative polarity is encoded in the labels as well) 11 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/__pycache__/feature_input.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/__pycache__/feature_input.cpython-36.pyc -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/__pycache__/lstm_crf_layer.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/__pycache__/lstm_crf_layer.cpython-36.pyc -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/__pycache__/model.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/__pycache__/model.cpython-36.pyc -------------------------------------------------------------------------------- 
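To make the two tagging schemes described in /BERT-BiLSTM-CRF-NER/README.md concrete, here is a minimal character-level sketch; the sentence and entity below are invented for illustration and do not come from the repository's data:

```python
# Hypothetical example of the two labeling schemes (invented sentence, not repo data).
tokens = ["我", "爱", "北", "京", "天", "安", "门"]

# "Only NER": entity characters are tagged N-B / N-I, everything else O;
# sentiment is judged in a separate step.
ner_labels = ["O", "O", "N-B", "N-I", "N-I", "N-I", "N-I"]

# "NER + Sentiment Analysis": the entity tag also carries polarity,
# P-B / P-I for a positive entity and N-B / N-I for a negative one.
joint_labels = ["O", "O", "P-B", "P-I", "P-I", "P-I", "P-I"]
```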
/BERT-BiLSTM-CRF-NER/bert/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # How to Contribute 2 | 3 | BERT needs to maintain permanent compatibility with the pre-trained model files, 4 | so we do not plan to make any major changes to this library (other than what was 5 | promised in the README). However, we can accept small patches related to 6 | re-factoring and documentation. To submit contributes, there are just a few 7 | small guidelines you need to follow. 8 | 9 | ## Contributor License Agreement 10 | 11 | Contributions to this project must be accompanied by a Contributor License 12 | Agreement. You (or your employer) retain the copyright to your contribution; 13 | this simply gives us permission to use and redistribute your contributions as 14 | part of the project. Head over to to see 15 | your current agreements on file or to sign a new one. 16 | 17 | You generally only need to submit a CLA once, so if you've already submitted one 18 | (even if it was for a different project), you probably don't need to do it 19 | again. 20 | 21 | ## Code reviews 22 | 23 | All submissions, including submissions by project members, require review. We 24 | use GitHub pull requests for this purpose. Consult 25 | [GitHub Help](https://help.github.com/articles/about-pull-requests/) for more 26 | information on using pull requests. 27 | 28 | ## Community Guidelines 29 | 30 | This project follows 31 | [Google's Open Source Community Guidelines](https://opensource.google.com/conduct/). 32 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 
40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright [yyyy] [name of copyright owner] 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 
203 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/bert/__init__.py -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/__pycache__/__init__.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/bert/__pycache__/__init__.cpython-36.pyc -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/__pycache__/modeling.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/bert/__pycache__/modeling.cpython-36.pyc -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/__pycache__/optimization.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/bert/__pycache__/optimization.cpython-36.pyc -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/__pycache__/tokenization.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/bert/__pycache__/tokenization.cpython-36.pyc -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/modeling_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
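# This test builds a small BertModel with a toy configuration
# (batch_size=13, seq_length=7, hidden_size=32), runs it in a session, and
# checks that the embedding, sequence, and pooled outputs have the expected
# shapes; it also verifies that every op in the graph is reachable from the
# model outputs.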
15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import collections 20 | import json 21 | import random 22 | import re 23 | 24 | import modeling 25 | import six 26 | import tensorflow as tf 27 | 28 | 29 | class BertModelTest(tf.test.TestCase): 30 | 31 | class BertModelTester(object): 32 | 33 | def __init__(self, 34 | parent, 35 | batch_size=13, 36 | seq_length=7, 37 | is_training=True, 38 | use_input_mask=True, 39 | use_token_type_ids=True, 40 | vocab_size=99, 41 | hidden_size=32, 42 | num_hidden_layers=5, 43 | num_attention_heads=4, 44 | intermediate_size=37, 45 | hidden_act="gelu", 46 | hidden_dropout_prob=0.1, 47 | attention_probs_dropout_prob=0.1, 48 | max_position_embeddings=512, 49 | type_vocab_size=16, 50 | initializer_range=0.02, 51 | scope=None): 52 | self.parent = parent 53 | self.batch_size = batch_size 54 | self.seq_length = seq_length 55 | self.is_training = is_training 56 | self.use_input_mask = use_input_mask 57 | self.use_token_type_ids = use_token_type_ids 58 | self.vocab_size = vocab_size 59 | self.hidden_size = hidden_size 60 | self.num_hidden_layers = num_hidden_layers 61 | self.num_attention_heads = num_attention_heads 62 | self.intermediate_size = intermediate_size 63 | self.hidden_act = hidden_act 64 | self.hidden_dropout_prob = hidden_dropout_prob 65 | self.attention_probs_dropout_prob = attention_probs_dropout_prob 66 | self.max_position_embeddings = max_position_embeddings 67 | self.type_vocab_size = type_vocab_size 68 | self.initializer_range = initializer_range 69 | self.scope = scope 70 | 71 | def create_model(self): 72 | input_ids = BertModelTest.ids_tensor([self.batch_size, self.seq_length], 73 | self.vocab_size) 74 | 75 | input_mask = None 76 | if self.use_input_mask: 77 | input_mask = BertModelTest.ids_tensor( 78 | [self.batch_size, self.seq_length], vocab_size=2) 79 | 80 | token_type_ids = None 81 | if self.use_token_type_ids: 82 | token_type_ids = BertModelTest.ids_tensor( 83 | [self.batch_size, self.seq_length], self.type_vocab_size) 84 | 85 | config = modeling.BertConfig( 86 | vocab_size=self.vocab_size, 87 | hidden_size=self.hidden_size, 88 | num_hidden_layers=self.num_hidden_layers, 89 | num_attention_heads=self.num_attention_heads, 90 | intermediate_size=self.intermediate_size, 91 | hidden_act=self.hidden_act, 92 | hidden_dropout_prob=self.hidden_dropout_prob, 93 | attention_probs_dropout_prob=self.attention_probs_dropout_prob, 94 | max_position_embeddings=self.max_position_embeddings, 95 | type_vocab_size=self.type_vocab_size, 96 | initializer_range=self.initializer_range) 97 | 98 | model = modeling.BertModel( 99 | config=config, 100 | is_training=self.is_training, 101 | input_ids=input_ids, 102 | input_mask=input_mask, 103 | token_type_ids=token_type_ids, 104 | scope=self.scope) 105 | 106 | outputs = { 107 | "embedding_output": model.get_embedding_output(), 108 | "sequence_output": model.get_sequence_output(), 109 | "pooled_output": model.get_pooled_output(), 110 | "all_encoder_layers": model.get_all_encoder_layers(), 111 | } 112 | return outputs 113 | 114 | def check_output(self, result): 115 | self.parent.assertAllEqual( 116 | result["embedding_output"].shape, 117 | [self.batch_size, self.seq_length, self.hidden_size]) 118 | 119 | self.parent.assertAllEqual( 120 | result["sequence_output"].shape, 121 | [self.batch_size, self.seq_length, self.hidden_size]) 122 | 123 | self.parent.assertAllEqual(result["pooled_output"].shape, 124 | [self.batch_size, 
self.hidden_size]) 125 | 126 | def test_default(self): 127 | self.run_tester(BertModelTest.BertModelTester(self)) 128 | 129 | def test_config_to_json_string(self): 130 | config = modeling.BertConfig(vocab_size=99, hidden_size=37) 131 | obj = json.loads(config.to_json_string()) 132 | self.assertEqual(obj["vocab_size"], 99) 133 | self.assertEqual(obj["hidden_size"], 37) 134 | 135 | def run_tester(self, tester): 136 | with self.test_session() as sess: 137 | ops = tester.create_model() 138 | init_op = tf.group(tf.global_variables_initializer(), 139 | tf.local_variables_initializer()) 140 | sess.run(init_op) 141 | output_result = sess.run(ops) 142 | tester.check_output(output_result) 143 | 144 | self.assert_all_tensors_reachable(sess, [init_op, ops]) 145 | 146 | @classmethod 147 | def ids_tensor(cls, shape, vocab_size, rng=None, name=None): 148 | """Creates a random int32 tensor of the shape within the vocab size.""" 149 | if rng is None: 150 | rng = random.Random() 151 | 152 | total_dims = 1 153 | for dim in shape: 154 | total_dims *= dim 155 | 156 | values = [] 157 | for _ in range(total_dims): 158 | values.append(rng.randint(0, vocab_size - 1)) 159 | 160 | return tf.constant(value=values, dtype=tf.int32, shape=shape, name=name) 161 | 162 | def assert_all_tensors_reachable(self, sess, outputs): 163 | """Checks that all the tensors in the graph are reachable from outputs.""" 164 | graph = sess.graph 165 | 166 | ignore_strings = [ 167 | "^.*/assert_less_equal/.*$", 168 | "^.*/dilation_rate$", 169 | "^.*/Tensordot/concat$", 170 | "^.*/Tensordot/concat/axis$", 171 | "^testing/.*$", 172 | ] 173 | 174 | ignore_regexes = [re.compile(x) for x in ignore_strings] 175 | 176 | unreachable = self.get_unreachable_ops(graph, outputs) 177 | filtered_unreachable = [] 178 | for x in unreachable: 179 | do_ignore = False 180 | for r in ignore_regexes: 181 | m = r.match(x.name) 182 | if m is not None: 183 | do_ignore = True 184 | if do_ignore: 185 | continue 186 | filtered_unreachable.append(x) 187 | unreachable = filtered_unreachable 188 | 189 | self.assertEqual( 190 | len(unreachable), 0, "The following ops are unreachable: %s" % 191 | (" ".join([x.name for x in unreachable]))) 192 | 193 | @classmethod 194 | def get_unreachable_ops(cls, graph, outputs): 195 | """Finds all of the tensors in graph that are unreachable from outputs.""" 196 | outputs = cls.flatten_recursive(outputs) 197 | output_to_op = collections.defaultdict(list) 198 | op_to_all = collections.defaultdict(list) 199 | assign_out_to_in = collections.defaultdict(list) 200 | 201 | for op in graph.get_operations(): 202 | for x in op.inputs: 203 | op_to_all[op.name].append(x.name) 204 | for y in op.outputs: 205 | output_to_op[y.name].append(op.name) 206 | op_to_all[op.name].append(y.name) 207 | if str(op.type) == "Assign": 208 | for y in op.outputs: 209 | for x in op.inputs: 210 | assign_out_to_in[y.name].append(x.name) 211 | 212 | assign_groups = collections.defaultdict(list) 213 | for out_name in assign_out_to_in.keys(): 214 | name_group = assign_out_to_in[out_name] 215 | for n1 in name_group: 216 | assign_groups[n1].append(out_name) 217 | for n2 in name_group: 218 | if n1 != n2: 219 | assign_groups[n1].append(n2) 220 | 221 | seen_tensors = {} 222 | stack = [x.name for x in outputs] 223 | while stack: 224 | name = stack.pop() 225 | if name in seen_tensors: 226 | continue 227 | seen_tensors[name] = True 228 | 229 | if name in output_to_op: 230 | for op_name in output_to_op[name]: 231 | if op_name in op_to_all: 232 | for input_name in 
op_to_all[op_name]: 233 | if input_name not in stack: 234 | stack.append(input_name) 235 | 236 | expanded_names = [] 237 | if name in assign_groups: 238 | for assign_name in assign_groups[name]: 239 | expanded_names.append(assign_name) 240 | 241 | for expanded_name in expanded_names: 242 | if expanded_name not in stack: 243 | stack.append(expanded_name) 244 | 245 | unreachable_ops = [] 246 | for op in graph.get_operations(): 247 | is_unreachable = False 248 | all_names = [x.name for x in op.inputs] + [x.name for x in op.outputs] 249 | for name in all_names: 250 | if name not in seen_tensors: 251 | is_unreachable = True 252 | if is_unreachable: 253 | unreachable_ops.append(op) 254 | return unreachable_ops 255 | 256 | @classmethod 257 | def flatten_recursive(cls, item): 258 | """Flattens (potentially nested) a tuple/dictionary/list to a list.""" 259 | output = [] 260 | if isinstance(item, list): 261 | output.extend(item) 262 | elif isinstance(item, tuple): 263 | output.extend(list(item)) 264 | elif isinstance(item, dict): 265 | for (_, v) in six.iteritems(item): 266 | output.append(v) 267 | else: 268 | return [item] 269 | 270 | flat_output = [] 271 | for x in output: 272 | flat_output.extend(cls.flatten_recursive(x)) 273 | return flat_output 274 | 275 | 276 | if __name__ == "__main__": 277 | tf.test.main() 278 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/multilingual.md: -------------------------------------------------------------------------------- 1 | ## Models 2 | 3 | There are two multilingual models currently available. We do not plan to release 4 | more single-language models, but we may release `BERT-Large` versions of these 5 | two in the future: 6 | 7 | * **[`BERT-Base, Multilingual`](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip)**: 8 | 102 languages, 12-layer, 768-hidden, 12-heads, 110M parameters 9 | * **[`BERT-Base, Chinese`](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)**: 10 | Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M 11 | parameters 12 | 13 | See the [list of languages](#list-of-languages) that the Multilingual model 14 | supports. The Multilingual model does include Chinese (and English), but if your 15 | fine-tuning data is Chinese-only, then the Chinese model will likely produce 16 | better results. 17 | 18 | ## Results 19 | 20 | To evaluate these systems, we use the 21 | [XNLI dataset](https://github.com/facebookresearch/XNLI) dataset, which is a 22 | version of [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) where the 23 | dev and test sets have been translated (by humans) into 15 languages. Note that 24 | the training set was *machine* translated (we used the translations provided by 25 | XNLI, not Google NMT). 
For clarity, we only report on 6 languages below: 26 | 27 | 28 | 29 | | System | English | Chinese | Spanish | German | Arabic | Urdu | 30 | | ------------------------------- | -------- | -------- | -------- | -------- | -------- | -------- | 31 | | XNLI Baseline - Translate Train | 73.7 | 67.0 | 68.8 | 66.5 | 65.8 | 56.6 | 32 | | XNLI Baseline - Translate Test | 73.7 | 68.3 | 70.7 | 68.7 | 66.8 | 59.3 | 33 | | BERT -Translate Train | **81.4** | **74.2** | **77.3** | **75.2** | **70.5** | 61.7 | 34 | | BERT - Translate Test | 81.4 | 70.1 | 74.9 | 74.4 | 70.4 | **62.1** | 35 | | BERT - Zero Shot | 81.4 | 63.8 | 74.3 | 70.5 | 62.1 | 58.3 | 36 | 37 | 38 | 39 | The first two rows are baselines from the XNLI paper and the last three rows are 40 | our results with BERT. 41 | 42 | **Translate Train** means that the MultiNLI training set was machine translated 43 | from English into the foreign language. So training and evaluation were both 44 | done in the foreign language. Unfortunately, training was done on 45 | machine-translated data, so it is impossible to quantify how much of the lower 46 | accuracy (compared to English) is due to the quality of the machine translation 47 | vs. the quality of the pre-trained model. 48 | 49 | **Translate Test** means that the XNLI test set was machine translated from the 50 | foreign language into English. So training and evaluation were both done on 51 | English. However, test evaluation was done on machine-translated English, so the 52 | accuracy depends on the quality of the machine translation system. 53 | 54 | **Zero Shot** means that the Multilingual BERT system was fine-tuned on English 55 | MultiNLI, and then evaluated on the foreign language XNLI test. In this case, 56 | machine translation was not involved at all in either the pre-training or 57 | fine-tuning. 58 | 59 | Note that the English result is worse than the 84.2 MultiNLI baseline because 60 | this training used Multilingual BERT rather than English-only BERT. This implies 61 | that for high-resource languages, the Multilingual model is somewhat worse than 62 | a single-language model. However, it is not feasible for us to train and 63 | maintain dozens of single-language model. Therefore, if your goal is to maximize 64 | performance with a language other than English or Chinese, you might find it 65 | beneficial to run pre-training for additional steps starting from our 66 | Multilingual model on data from your language of interest. 67 | 68 | Here is a comparison of training Chinese models with the Multilingual 69 | `BERT-Base` and Chinese-only `BERT-Base`: 70 | 71 | System | Chinese 72 | ----------------------- | ------- 73 | XNLI Baseline | 67.0 74 | BERT Multilingual Model | 74.2 75 | BERT Chinese-only Model | 77.2 76 | 77 | Similar to English, the single-language model does 3% better than the 78 | Multilingual model. 79 | 80 | ## Fine-tuning Example 81 | 82 | The multilingual model does **not** require any special consideration or API 83 | changes. We did update the implementation of `BasicTokenizer` in 84 | `tokenization.py` to support Chinese character tokenization, so please update if 85 | you forked it. However, we did not change the tokenization API. 86 | 87 | To test the new models, we did modify `run_classifier.py` to add support for the 88 | [XNLI dataset](https://github.com/facebookresearch/XNLI). This is a 15-language 89 | version of MultiNLI where the dev/test sets have been human-translated, and the 90 | training set has been machine-translated. 
91 | 92 | To run the fine-tuning code, please download the 93 | [XNLI dev/test set](https://s3.amazonaws.com/xnli/XNLI-1.0.zip) and the 94 | [XNLI machine-translated training set](https://s3.amazonaws.com/xnli/XNLI-MT-1.0.zip) 95 | and then unpack both .zip files into some directory `$XNLI_DIR`. 96 | 97 | To run fine-tuning on XNLI. The language is hard-coded into `run_classifier.py` 98 | (Chinese by default), so please modify `XnliProcessor` if you want to run on 99 | another language. 100 | 101 | This is a large dataset, so this will training will take a few hours on a GPU 102 | (or about 30 minutes on a Cloud TPU). To run an experiment quickly for 103 | debugging, just set `num_train_epochs` to a small value like `0.1`. 104 | 105 | ```shell 106 | export BERT_BASE_DIR=/path/to/bert/chinese_L-12_H-768_A-12 # or multilingual_L-12_H-768_A-12 107 | export XNLI_DIR=/path/to/xnli 108 | 109 | python run_classifier.py \ 110 | --task_name=XNLI \ 111 | --do_train=true \ 112 | --do_eval=true \ 113 | --data_dir=$XNLI_DIR \ 114 | --vocab_file=$BERT_BASE_DIR/vocab.txt \ 115 | --bert_config_file=$BERT_BASE_DIR/bert_config.json \ 116 | --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \ 117 | --max_seq_length=128 \ 118 | --train_batch_size=32 \ 119 | --learning_rate=5e-5 \ 120 | --num_train_epochs=2.0 \ 121 | --output_dir=/tmp/xnli_output/ 122 | ``` 123 | 124 | With the Chinese-only model, the results should look something like this: 125 | 126 | ``` 127 | ***** Eval results ***** 128 | eval_accuracy = 0.774116 129 | eval_loss = 0.83554 130 | global_step = 24543 131 | loss = 0.74603 132 | ``` 133 | 134 | ## Details 135 | 136 | ### Data Source and Sampling 137 | 138 | The languages chosen were the 139 | [top 100 languages with the largest Wikipedias](https://meta.wikimedia.org/wiki/List_of_Wikipedias). 140 | The entire Wikipedia dump for each language (excluding user and talk pages) was 141 | taken as the training data for each language 142 | 143 | However, the size of the Wikipedia for a given language varies greatly, and 144 | therefore low-resource languages may be "under-represented" in terms of the 145 | neural network model (under the assumption that languages are "competing" for 146 | limited model capacity to some extent). 147 | 148 | However, the size of a Wikipedia also correlates with the number of speakers of 149 | a language, and we also don't want to overfit the model by performing thousands 150 | of epochs over a tiny Wikipedia for a particular language. 151 | 152 | To balance these two factors, we performed exponentially smoothed weighting of 153 | the data during pre-training data creation (and WordPiece vocab creation). In 154 | other words, let's say that the probability of a language is *P(L)*, e.g., 155 | *P(English) = 0.21* means that after concatenating all of the Wikipedias 156 | together, 21% of our data is English. We exponentiate each probability by some 157 | factor *S* and then re-normalize, and sample from that distribution. In our case 158 | we use *S=0.7*. So, high-resource languages like English will be under-sampled, 159 | and low-resource languages like Icelandic will be over-sampled. E.g., in the 160 | original distribution English would be sampled 1000x more than Icelandic, but 161 | after smoothing it's only sampled 100x more. 162 | 163 | ### Tokenization 164 | 165 | For tokenization, we use a 110k shared WordPiece vocabulary. The word counts are 166 | weighted the same way as the data, so low-resource languages are upweighted by 167 | some factor. 
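The exponentially smoothed weighting described under "Data Source and Sampling" (and reused here for the WordPiece word counts) can be written out as a short sketch; the probabilities below are made-up illustrative numbers, not the real Wikipedia statistics:

```python
# Illustrative only: exponentiate each language probability by S, then re-normalize.
S = 0.7
probs = {"english": 0.21, "icelandic": 0.00021}   # hypothetical original P(L)
smoothed = {lang: p ** S for lang, p in probs.items()}
total = sum(smoothed.values())
sampling = {lang: p / total for lang, p in smoothed.items()}
# The English/Icelandic sampling ratio drops from 1000x to roughly 100x
# (1000 ** 0.7 ≈ 126), so high-resource languages are under-sampled and
# low-resource languages are over-sampled, as described above.
print(sampling)
```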
We intentionally do *not* use any marker to denote the input 168 | language (so that zero-shot training can work). 169 | 170 | Because Chinese does not have whitespace characters, we add spaces around every 171 | character in the 172 | [CJK Unicode range](https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_\(Unicode_block\)) 173 | before applying WordPiece. This means that Chinese is effectively 174 | character-tokenized. Note that the CJK Unicode block only includes 175 | Chinese-origin characters and does *not* include Hangul Korean or 176 | Katakana/Hiragana Japanese, which are tokenized with whitespace+WordPiece like 177 | all other languages. 178 | 179 | For all other languages, we apply the 180 | [same recipe as English](https://github.com/google-research/bert#tokenization): 181 | (a) lower casing+accent removal, (b) punctuation splitting, (c) whitespace 182 | tokenization. We understand that accent markers have substantial meaning in some 183 | languages, but felt that the benefits of reducing the effective vocabulary make 184 | up for this. Generally the strong contextual models of BERT should make up for 185 | any ambiguity introduced by stripping accent markers. 186 | 187 | ### List of Languages 188 | 189 | The multilingual model supports the following languages. These languages were 190 | chosen because they are the top 100 languages with the largest Wikipedias: 191 | 192 | * Afrikaans 193 | * Albanian 194 | * Arabic 195 | * Aragonese 196 | * Armenian 197 | * Asturian 198 | * Azerbaijani 199 | * Bashkir 200 | * Basque 201 | * Bavarian 202 | * Belarusian 203 | * Bengali 204 | * Bishnupriya Manipuri 205 | * Bosnian 206 | * Breton 207 | * Bulgarian 208 | * Burmese 209 | * Catalan 210 | * Cebuano 211 | * Chechen 212 | * Chinese (Simplified) 213 | * Chinese (Traditional) 214 | * Chuvash 215 | * Croatian 216 | * Czech 217 | * Danish 218 | * Dutch 219 | * English 220 | * Estonian 221 | * Finnish 222 | * French 223 | * Galician 224 | * Georgian 225 | * German 226 | * Greek 227 | * Gujarati 228 | * Haitian 229 | * Hebrew 230 | * Hindi 231 | * Hungarian 232 | * Icelandic 233 | * Ido 234 | * Indonesian 235 | * Irish 236 | * Italian 237 | * Japanese 238 | * Javanese 239 | * Kannada 240 | * Kazakh 241 | * Kirghiz 242 | * Korean 243 | * Latin 244 | * Latvian 245 | * Lithuanian 246 | * Lombard 247 | * Low Saxon 248 | * Luxembourgish 249 | * Macedonian 250 | * Malagasy 251 | * Malay 252 | * Malayalam 253 | * Marathi 254 | * Minangkabau 255 | * Nepali 256 | * Newar 257 | * Norwegian (Bokmal) 258 | * Norwegian (Nynorsk) 259 | * Occitan 260 | * Persian (Farsi) 261 | * Piedmontese 262 | * Polish 263 | * Portuguese 264 | * Punjabi 265 | * Romanian 266 | * Russian 267 | * Scots 268 | * Serbian 269 | * Serbo-Croatian 270 | * Sicilian 271 | * Slovak 272 | * Slovenian 273 | * South Azerbaijani 274 | * Spanish 275 | * Sundanese 276 | * Swahili 277 | * Swedish 278 | * Tagalog 279 | * Tajik 280 | * Tamil 281 | * Tatar 282 | * Telugu 283 | * Turkish 284 | * Ukrainian 285 | * Urdu 286 | * Uzbek 287 | * Vietnamese 288 | * Volapük 289 | * Waray-Waray 290 | * Welsh 291 | * West 292 | * Western Punjabi 293 | * Yoruba 294 | 295 | The only language which we had to unfortunately exclude was Thai, since it is 296 | the only language (other than Chinese) that does not use whitespace to delimit 297 | words, and it has too many characters-per-word to use character-based 298 | tokenization. 
Our WordPiece algorithm is quadratic with respect to the size of 299 | the input token so very long character strings do not work with it. 300 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/optimization.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | """Functions and classes related to optimization (weight updates).""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import re 22 | import tensorflow as tf 23 | 24 | 25 | def create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps, use_tpu): 26 | """Creates an optimizer training op.""" 27 | global_step = tf.train.get_or_create_global_step() 28 | 29 | learning_rate = tf.constant(value=init_lr, shape=[], dtype=tf.float32) 30 | 31 | # Implements linear decay of the learning rate. 32 | learning_rate = tf.train.polynomial_decay( 33 | learning_rate, 34 | global_step, 35 | num_train_steps, 36 | end_learning_rate=0.0, 37 | power=1.0, 38 | cycle=False) 39 | 40 | # Implements linear warmup. I.e., if global_step < num_warmup_steps, the 41 | # learning rate will be `global_step/num_warmup_steps * init_lr`. 42 | if num_warmup_steps: 43 | global_steps_int = tf.cast(global_step, tf.int32) 44 | warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32) 45 | 46 | global_steps_float = tf.cast(global_steps_int, tf.float32) 47 | warmup_steps_float = tf.cast(warmup_steps_int, tf.float32) 48 | 49 | warmup_percent_done = global_steps_float / warmup_steps_float 50 | warmup_learning_rate = init_lr * warmup_percent_done 51 | 52 | is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32) 53 | learning_rate = ( 54 | (1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate) 55 | 56 | # It is recommended that you use this optimizer for fine tuning, since this 57 | # is how the model was trained (note that the Adam m/v variables are NOT 58 | # loaded from init_checkpoint.) 59 | optimizer = AdamWeightDecayOptimizer( 60 | learning_rate=learning_rate, 61 | weight_decay_rate=0.01, 62 | beta_1=0.9, 63 | beta_2=0.999, 64 | epsilon=1e-6, 65 | exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"]) 66 | 67 | if use_tpu: 68 | optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer) 69 | 70 | tvars = tf.trainable_variables() 71 | grads = tf.gradients(loss, tvars) 72 | 73 | # This is how the model was pre-trained. 
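  # tf.clip_by_global_norm treats all gradients as one long vector: it computes
  # their joint L2 norm and, if that norm exceeds clip_norm (1.0 here), scales
  # every gradient by the same factor so the global norm equals clip_norm;
  # gradients are left unchanged when the norm is already below the threshold.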
74 | (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0) 75 | 76 | train_op = optimizer.apply_gradients( 77 | zip(grads, tvars), global_step=global_step) 78 | 79 | new_global_step = global_step + 1 80 | train_op = tf.group(train_op, [global_step.assign(new_global_step)]) 81 | return train_op 82 | 83 | 84 | class AdamWeightDecayOptimizer(tf.train.Optimizer): 85 | """A basic Adam optimizer that includes "correct" L2 weight decay.""" 86 | 87 | def __init__(self, 88 | learning_rate, 89 | weight_decay_rate=0.0, 90 | beta_1=0.9, 91 | beta_2=0.999, 92 | epsilon=1e-6, 93 | exclude_from_weight_decay=None, 94 | name="AdamWeightDecayOptimizer"): 95 | """Constructs a AdamWeightDecayOptimizer.""" 96 | super(AdamWeightDecayOptimizer, self).__init__(False, name) 97 | 98 | self.learning_rate = learning_rate 99 | self.weight_decay_rate = weight_decay_rate 100 | self.beta_1 = beta_1 101 | self.beta_2 = beta_2 102 | self.epsilon = epsilon 103 | self.exclude_from_weight_decay = exclude_from_weight_decay 104 | 105 | def apply_gradients(self, grads_and_vars, global_step=None, name=None): 106 | """See base class.""" 107 | assignments = [] 108 | for (grad, param) in grads_and_vars: 109 | if grad is None or param is None: 110 | continue 111 | 112 | param_name = self._get_variable_name(param.name) 113 | 114 | m = tf.get_variable( 115 | name=param_name + "/adam_m", 116 | shape=param.shape.as_list(), 117 | dtype=tf.float32, 118 | trainable=False, 119 | initializer=tf.zeros_initializer()) 120 | v = tf.get_variable( 121 | name=param_name + "/adam_v", 122 | shape=param.shape.as_list(), 123 | dtype=tf.float32, 124 | trainable=False, 125 | initializer=tf.zeros_initializer()) 126 | 127 | # Standard Adam update. 128 | next_m = ( 129 | tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad)) 130 | next_v = ( 131 | tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2, 132 | tf.square(grad))) 133 | 134 | update = next_m / (tf.sqrt(next_v) + self.epsilon) 135 | 136 | # Just adding the square of the weights to the loss function is *not* 137 | # the correct way of using L2 regularization/weight decay with Adam, 138 | # since that will interact with the m and v parameters in strange ways. 139 | # 140 | # Instead we want ot decay the weights in a manner that doesn't interact 141 | # with the m/v parameters. This is equivalent to adding the square 142 | # of the weights to the loss with plain (non-momentum) SGD. 
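      # This decoupled formulation (adding weight_decay_rate * param directly to
      # the Adam update rather than to the loss) is what is commonly called
      # AdamW-style weight decay: the decay term is scaled by the learning rate
      # below, but not by the Adam moment estimates m and v.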
143 | if self._do_use_weight_decay(param_name): 144 | update += self.weight_decay_rate * param 145 | 146 | update_with_lr = self.learning_rate * update 147 | 148 | next_param = param - update_with_lr 149 | 150 | assignments.extend( 151 | [param.assign(next_param), 152 | m.assign(next_m), 153 | v.assign(next_v)]) 154 | return tf.group(*assignments, name=name) 155 | 156 | def _do_use_weight_decay(self, param_name): 157 | """Whether to use L2 weight decay for `param_name`.""" 158 | if not self.weight_decay_rate: 159 | return False 160 | if self.exclude_from_weight_decay: 161 | for r in self.exclude_from_weight_decay: 162 | if re.search(r, param_name) is not None: 163 | return False 164 | return True 165 | 166 | def _get_variable_name(self, param_name): 167 | """Get the variable name from the tensor name.""" 168 | m = re.match("^(.*):\\d+$", param_name) 169 | if m is not None: 170 | param_name = m.group(1) 171 | return param_name 172 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/optimization_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import optimization 20 | import tensorflow as tf 21 | 22 | 23 | class OptimizationTest(tf.test.TestCase): 24 | 25 | def test_adam(self): 26 | with self.test_session() as sess: 27 | w = tf.get_variable( 28 | "w", 29 | shape=[3], 30 | initializer=tf.constant_initializer([0.1, -0.2, -0.1])) 31 | x = tf.constant([0.4, 0.2, -0.5]) 32 | loss = tf.reduce_mean(tf.square(x - w)) 33 | tvars = tf.trainable_variables() 34 | grads = tf.gradients(loss, tvars) 35 | global_step = tf.train.get_or_create_global_step() 36 | optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2) 37 | train_op = optimizer.apply_gradients(zip(grads, tvars), global_step) 38 | init_op = tf.group(tf.global_variables_initializer(), 39 | tf.local_variables_initializer()) 40 | sess.run(init_op) 41 | for _ in range(100): 42 | sess.run(train_op) 43 | w_np = sess.run(w) 44 | self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2) 45 | 46 | 47 | if __name__ == "__main__": 48 | tf.test.main() 49 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/requirements.txt: -------------------------------------------------------------------------------- 1 | tensorflow >= 1.11.0 # CPU Version of TensorFlow. 2 | # tensorflow-gpu >= 1.11.0 # GPU version of TensorFlow. 
3 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/sample_text.txt: -------------------------------------------------------------------------------- 1 | This text is included to make sure Unicode is handled properly: 力加勝北区ᴵᴺᵀᵃছজটডণত 2 | Text should be one-sentence-per-line, with empty lines between documents. 3 | This sample text is public domain and was randomly selected from Project Guttenberg. 4 | 5 | The rain had only ceased with the gray streaks of morning at Blazing Star, and the settlement awoke to a moral sense of cleanliness, and the finding of forgotten knives, tin cups, and smaller camp utensils, where the heavy showers had washed away the debris and dust heaps before the cabin doors. 6 | Indeed, it was recorded in Blazing Star that a fortunate early riser had once picked up on the highway a solid chunk of gold quartz which the rain had freed from its incumbering soil, and washed into immediate and glittering popularity. 7 | Possibly this may have been the reason why early risers in that locality, during the rainy season, adopted a thoughtful habit of body, and seldom lifted their eyes to the rifted or india-ink washed skies above them. 8 | "Cass" Beard had risen early that morning, but not with a view to discovery. 9 | A leak in his cabin roof,--quite consistent with his careless, improvident habits,--had roused him at 4 A. M., with a flooded "bunk" and wet blankets. 10 | The chips from his wood pile refused to kindle a fire to dry his bed-clothes, and he had recourse to a more provident neighbor's to supply the deficiency. 11 | This was nearly opposite. 12 | Mr. Cassius crossed the highway, and stopped suddenly. 13 | Something glittered in the nearest red pool before him. 14 | Gold, surely! 15 | But, wonderful to relate, not an irregular, shapeless fragment of crude ore, fresh from Nature's crucible, but a bit of jeweler's handicraft in the form of a plain gold ring. 16 | Looking at it more attentively, he saw that it bore the inscription, "May to Cass." 17 | Like most of his fellow gold-seekers, Cass was superstitious. 18 | 19 | The fountain of classic wisdom, Hypatia herself. 20 | As the ancient sage--the name is unimportant to a monk--pumped water nightly that he might study by day, so I, the guardian of cloaks and parasols, at the sacred doors of her lecture-room, imbibe celestial knowledge. 21 | From my youth I felt in me a soul above the matter-entangled herd. 22 | She revealed to me the glorious fact, that I am a spark of Divinity itself. 23 | A fallen star, I am, sir!' continued he, pensively, stroking his lean stomach--'a fallen star!--fallen, if the dignity of philosophy will allow of the simile, among the hogs of the lower world--indeed, even into the hog-bucket itself. Well, after all, I will show you the way to the Archbishop's. 24 | There is a philosophic pleasure in opening one's treasures to the modest young. 25 | Perhaps you will assist me by carrying this basket of fruit?' And the little man jumped up, put his basket on Philammon's head, and trotted off up a neighbouring street. 
26 | Philammon followed, half contemptuous, half wondering at what this philosophy might be, which could feed the self-conceit of anything so abject as his ragged little apish guide; 27 | but the novel roar and whirl of the street, the perpetual stream of busy faces, the line of curricles, palanquins, laden asses, camels, elephants, which met and passed him, and squeezed him up steps and into doorways, as they threaded their way through the great Moon-gate into the ample street beyond, drove everything from his mind but wondering curiosity, and a vague, helpless dread of that great living wilderness, more terrible than any dead wilderness of sand which he had left behind. 28 | Already he longed for the repose, the silence of the Laura--for faces which knew him and smiled upon him; but it was too late to turn back now. 29 | His guide held on for more than a mile up the great main street, crossed in the centre of the city, at right angles, by one equally magnificent, at each end of which, miles away, appeared, dim and distant over the heads of the living stream of passengers, the yellow sand-hills of the desert; 30 | while at the end of the vista in front of them gleamed the blue harbour, through a network of countless masts. 31 | At last they reached the quay at the opposite end of the street; 32 | and there burst on Philammon's astonished eyes a vast semicircle of blue sea, ringed with palaces and towers. 33 | He stopped involuntarily; and his little guide stopped also, and looked askance at the young monk, to watch the effect which that grand panorama should produce on him. 34 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/tokenization.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | """Tokenization classes.""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import collections 22 | import unicodedata 23 | import six 24 | import tensorflow as tf 25 | 26 | 27 | def convert_to_unicode(text): 28 | """Converts `text` to Unicode (if it's not already), assuming utf-8 input.""" 29 | if six.PY3: 30 | if isinstance(text, str): 31 | return text 32 | elif isinstance(text, bytes): 33 | return text.decode("utf-8", "ignore") 34 | else: 35 | raise ValueError("Unsupported string type: %s" % (type(text))) 36 | elif six.PY2: 37 | if isinstance(text, str): 38 | return text.decode("utf-8", "ignore") 39 | elif isinstance(text, unicode): 40 | return text 41 | else: 42 | raise ValueError("Unsupported string type: %s" % (type(text))) 43 | else: 44 | raise ValueError("Not running on Python2 or Python 3?") 45 | 46 | 47 | def printable_text(text): 48 | """Returns text encoded in a way suitable for print or `tf.logging`.""" 49 | 50 | # These functions want `str` for both Python2 and Python3, but in one case 51 | # it's a Unicode string and in the other it's a byte string. 52 | if six.PY3: 53 | if isinstance(text, str): 54 | return text 55 | elif isinstance(text, bytes): 56 | return text.decode("utf-8", "ignore") 57 | else: 58 | raise ValueError("Unsupported string type: %s" % (type(text))) 59 | elif six.PY2: 60 | if isinstance(text, str): 61 | return text 62 | elif isinstance(text, unicode): 63 | return text.encode("utf-8") 64 | else: 65 | raise ValueError("Unsupported string type: %s" % (type(text))) 66 | else: 67 | raise ValueError("Not running on Python2 or Python 3?") 68 | 69 | 70 | def load_vocab(vocab_file): 71 | """Loads a vocabulary file into a dictionary.""" 72 | vocab = collections.OrderedDict() 73 | index = 0 74 | with tf.gfile.GFile(vocab_file, "r") as reader: 75 | while True: 76 | token = convert_to_unicode(reader.readline()) 77 | if not token: 78 | break 79 | token = token.strip() 80 | vocab[token] = index 81 | index += 1 82 | return vocab 83 | 84 | 85 | def convert_by_vocab(vocab, items): 86 | """Converts a sequence of [tokens|ids] using the vocab.""" 87 | output = [] 88 | for item in items: 89 | #TODO: modify for oov, using [unk] replace, if you using english language do not change this 90 | # output.append(vocab.[item]) 91 | output.append(vocab.get(item, 100)) 92 | return output 93 | 94 | 95 | def convert_tokens_to_ids(vocab, tokens): 96 | return convert_by_vocab(vocab, tokens) 97 | 98 | 99 | def convert_ids_to_tokens(inv_vocab, ids): 100 | return convert_by_vocab(inv_vocab, ids) 101 | 102 | 103 | def whitespace_tokenize(text): 104 | """Runs basic whitespace cleaning and splitting on a peice of text.""" 105 | text = text.strip() 106 | if not text: 107 | return [] 108 | tokens = text.split() 109 | return tokens 110 | 111 | 112 | class FullTokenizer(object): 113 | """Runs end-to-end tokenziation.""" 114 | 115 | def __init__(self, vocab_file, do_lower_case=True): 116 | self.vocab = load_vocab(vocab_file) 117 | self.inv_vocab = {v: k for k, v in self.vocab.items()} 118 | self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case) 119 | self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab) 120 | 121 | def tokenize(self, text): 122 | split_tokens = [] 123 | for token in self.basic_tokenizer.tokenize(text): 124 | for sub_token in self.wordpiece_tokenizer.tokenize(token): 125 | split_tokens.append(sub_token) 126 | 127 | return split_tokens 128 | 129 | def 
convert_tokens_to_ids(self, tokens): 130 | return convert_by_vocab(self.vocab, tokens) 131 | 132 | def convert_ids_to_tokens(self, ids): 133 | return convert_by_vocab(self.inv_vocab, ids) 134 | 135 | 136 | class BasicTokenizer(object): 137 | """Runs basic tokenization (punctuation splitting, lower casing, etc.).""" 138 | 139 | def __init__(self, do_lower_case=True): 140 | """Constructs a BasicTokenizer. 141 | 142 | Args: 143 | do_lower_case: Whether to lower case the input. 144 | """ 145 | self.do_lower_case = do_lower_case 146 | 147 | def tokenize(self, text): 148 | """Tokenizes a piece of text.""" 149 | text = convert_to_unicode(text) 150 | text = self._clean_text(text) 151 | 152 | # This was added on November 1st, 2018 for the multilingual and Chinese 153 | # models. This is also applied to the English models now, but it doesn't 154 | # matter since the English models were not trained on any Chinese data 155 | # and generally don't have any Chinese data in them (there are Chinese 156 | # characters in the vocabulary because Wikipedia does have some Chinese 157 | # words in the English Wikipedia.). 158 | text = self._tokenize_chinese_chars(text) 159 | 160 | orig_tokens = whitespace_tokenize(text) 161 | split_tokens = [] 162 | for token in orig_tokens: 163 | if self.do_lower_case: 164 | token = token.lower() 165 | token = self._run_strip_accents(token) 166 | split_tokens.extend(self._run_split_on_punc(token)) 167 | 168 | output_tokens = whitespace_tokenize(" ".join(split_tokens)) 169 | return output_tokens 170 | 171 | def _run_strip_accents(self, text): 172 | """Strips accents from a piece of text.""" 173 | text = unicodedata.normalize("NFD", text) 174 | output = [] 175 | for char in text: 176 | cat = unicodedata.category(char) 177 | if cat == "Mn": 178 | continue 179 | output.append(char) 180 | return "".join(output) 181 | 182 | def _run_split_on_punc(self, text): 183 | """Splits punctuation on a piece of text.""" 184 | chars = list(text) 185 | i = 0 186 | start_new_word = True 187 | output = [] 188 | while i < len(chars): 189 | char = chars[i] 190 | if _is_punctuation(char): 191 | output.append([char]) 192 | start_new_word = True 193 | else: 194 | if start_new_word: 195 | output.append([]) 196 | start_new_word = False 197 | output[-1].append(char) 198 | i += 1 199 | 200 | return ["".join(x) for x in output] 201 | 202 | def _tokenize_chinese_chars(self, text): 203 | """Adds whitespace around any CJK character.""" 204 | output = [] 205 | for char in text: 206 | cp = ord(char) 207 | if self._is_chinese_char(cp): 208 | output.append(" ") 209 | output.append(char) 210 | output.append(" ") 211 | else: 212 | output.append(char) 213 | return "".join(output) 214 | 215 | def _is_chinese_char(self, cp): 216 | """Checks whether CP is the codepoint of a CJK character.""" 217 | # This defines a "chinese character" as anything in the CJK Unicode block: 218 | # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) 219 | # 220 | # Note that the CJK Unicode block is NOT all Japanese and Korean characters, 221 | # despite its name. The modern Korean Hangul alphabet is a different block, 222 | # as is Japanese Hiragana and Katakana. Those alphabets are used to write 223 | # space-separated words, so they are not treated specially and handled 224 | # like the all of the other languages. 
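# (Editor's note, added for clarity: the codepoint ranges tested below are the CJK
#  Unified Ideographs block, its extensions A through E, and the CJK Compatibility
#  Ideographs block plus its supplement.)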
225 | if ((cp >= 0x4E00 and cp <= 0x9FFF) or # 226 | (cp >= 0x3400 and cp <= 0x4DBF) or # 227 | (cp >= 0x20000 and cp <= 0x2A6DF) or # 228 | (cp >= 0x2A700 and cp <= 0x2B73F) or # 229 | (cp >= 0x2B740 and cp <= 0x2B81F) or # 230 | (cp >= 0x2B820 and cp <= 0x2CEAF) or 231 | (cp >= 0xF900 and cp <= 0xFAFF) or # 232 | (cp >= 0x2F800 and cp <= 0x2FA1F)): # 233 | return True 234 | 235 | return False 236 | 237 | def _clean_text(self, text): 238 | """Performs invalid character removal and whitespace cleanup on text.""" 239 | output = [] 240 | for char in text: 241 | cp = ord(char) 242 | if cp == 0 or cp == 0xfffd or _is_control(char): 243 | continue 244 | if _is_whitespace(char): 245 | output.append(" ") 246 | else: 247 | output.append(char) 248 | return "".join(output) 249 | 250 | 251 | class WordpieceTokenizer(object): 252 | """Runs WordPiece tokenziation.""" 253 | 254 | def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=100): 255 | self.vocab = vocab 256 | self.unk_token = unk_token 257 | self.max_input_chars_per_word = max_input_chars_per_word 258 | 259 | def tokenize(self, text): 260 | """Tokenizes a piece of text into its word pieces. 261 | 262 | This uses a greedy longest-match-first algorithm to perform tokenization 263 | using the given vocabulary. 264 | 265 | For example: 266 | input = "unaffable" 267 | output = ["un", "##aff", "##able"] 268 | 269 | Args: 270 | text: A single token or whitespace separated tokens. This should have 271 | already been passed through `BasicTokenizer. 272 | 273 | Returns: 274 | A list of wordpiece tokens. 275 | """ 276 | 277 | text = convert_to_unicode(text) 278 | 279 | output_tokens = [] 280 | for token in whitespace_tokenize(text): 281 | chars = list(token) 282 | if len(chars) > self.max_input_chars_per_word: 283 | output_tokens.append(self.unk_token) 284 | continue 285 | 286 | is_bad = False 287 | start = 0 288 | sub_tokens = [] 289 | while start < len(chars): 290 | end = len(chars) 291 | cur_substr = None 292 | while start < end: 293 | substr = "".join(chars[start:end]) 294 | if start > 0: 295 | substr = "##" + substr 296 | if substr in self.vocab: 297 | cur_substr = substr 298 | break 299 | end -= 1 300 | if cur_substr is None: 301 | is_bad = True 302 | break 303 | sub_tokens.append(cur_substr) 304 | start = end 305 | 306 | if is_bad: 307 | output_tokens.append(self.unk_token) 308 | else: 309 | output_tokens.extend(sub_tokens) 310 | return output_tokens 311 | 312 | 313 | def _is_whitespace(char): 314 | """Checks whether `chars` is a whitespace character.""" 315 | # \t, \n, and \r are technically contorl characters but we treat them 316 | # as whitespace since they are generally considered as such. 317 | if char == " " or char == "\t" or char == "\n" or char == "\r": 318 | return True 319 | cat = unicodedata.category(char) 320 | if cat == "Zs": 321 | return True 322 | return False 323 | 324 | 325 | def _is_control(char): 326 | """Checks whether `chars` is a control character.""" 327 | # These are technically control characters but we count them as whitespace 328 | # characters. 329 | if char == "\t" or char == "\n" or char == "\r": 330 | return False 331 | cat = unicodedata.category(char) 332 | if cat.startswith("C"): 333 | return True 334 | return False 335 | 336 | 337 | def _is_punctuation(char): 338 | """Checks whether `chars` is a punctuation character.""" 339 | cp = ord(char) 340 | # We treat all non-letter/number ASCII as punctuation. 
341 | # Characters such as "^", "$", and "`" are not in the Unicode 342 | # Punctuation class but we treat them as punctuation anyways, for 343 | # consistency. 344 | if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or 345 | (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)): 346 | return True 347 | cat = unicodedata.category(char) 348 | if cat.startswith("P"): 349 | return True 350 | return False 351 | 352 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/bert/tokenization_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import os 20 | import tempfile 21 | 22 | import tokenization 23 | import tensorflow as tf 24 | 25 | 26 | class TokenizationTest(tf.test.TestCase): 27 | 28 | def test_full_tokenizer(self): 29 | vocab_tokens = [ 30 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", 31 | "##ing", "," 32 | ] 33 | with tempfile.NamedTemporaryFile(delete=False) as vocab_writer: 34 | vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) 35 | 36 | vocab_file = vocab_writer.name 37 | 38 | tokenizer = tokenization.FullTokenizer(vocab_file) 39 | os.unlink(vocab_file) 40 | 41 | tokens = tokenizer.tokenize(u"UNwant\u00E9d,running") 42 | self.assertAllEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"]) 43 | 44 | self.assertAllEqual( 45 | tokenizer.convert_tokens_to_ids(tokens), [7, 4, 5, 10, 8, 9]) 46 | 47 | def test_chinese(self): 48 | tokenizer = tokenization.BasicTokenizer() 49 | 50 | self.assertAllEqual( 51 | tokenizer.tokenize(u"ah\u535A\u63A8zz"), 52 | [u"ah", u"\u535A", u"\u63A8", u"zz"]) 53 | 54 | def test_basic_tokenizer_lower(self): 55 | tokenizer = tokenization.BasicTokenizer(do_lower_case=True) 56 | 57 | self.assertAllEqual( 58 | tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "), 59 | ["hello", "!", "how", "are", "you", "?"]) 60 | self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"]) 61 | 62 | def test_basic_tokenizer_no_lower(self): 63 | tokenizer = tokenization.BasicTokenizer(do_lower_case=False) 64 | 65 | self.assertAllEqual( 66 | tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? 
"), 67 | ["HeLLo", "!", "how", "Are", "yoU", "?"]) 68 | 69 | def test_wordpiece_tokenizer(self): 70 | vocab_tokens = [ 71 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", 72 | "##ing" 73 | ] 74 | 75 | vocab = {} 76 | for (i, token) in enumerate(vocab_tokens): 77 | vocab[token] = i 78 | tokenizer = tokenization.WordpieceTokenizer(vocab=vocab) 79 | 80 | self.assertAllEqual(tokenizer.tokenize(""), []) 81 | 82 | self.assertAllEqual( 83 | tokenizer.tokenize("unwanted running"), 84 | ["un", "##want", "##ed", "runn", "##ing"]) 85 | 86 | self.assertAllEqual( 87 | tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"]) 88 | 89 | def test_convert_tokens_to_ids(self): 90 | vocab_tokens = [ 91 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", 92 | "##ing" 93 | ] 94 | 95 | vocab = {} 96 | for (i, token) in enumerate(vocab_tokens): 97 | vocab[token] = i 98 | 99 | self.assertAllEqual( 100 | tokenization.convert_tokens_to_ids( 101 | vocab, ["un", "##want", "##ed", "runn", "##ing"]), [7, 4, 5, 8, 9]) 102 | 103 | def test_is_whitespace(self): 104 | self.assertTrue(tokenization._is_whitespace(u" ")) 105 | self.assertTrue(tokenization._is_whitespace(u"\t")) 106 | self.assertTrue(tokenization._is_whitespace(u"\r")) 107 | self.assertTrue(tokenization._is_whitespace(u"\n")) 108 | self.assertTrue(tokenization._is_whitespace(u"\u00A0")) 109 | 110 | self.assertFalse(tokenization._is_whitespace(u"A")) 111 | self.assertFalse(tokenization._is_whitespace(u"-")) 112 | 113 | def test_is_control(self): 114 | self.assertTrue(tokenization._is_control(u"\u0005")) 115 | 116 | self.assertFalse(tokenization._is_control(u"A")) 117 | self.assertFalse(tokenization._is_control(u" ")) 118 | self.assertFalse(tokenization._is_control(u"\t")) 119 | self.assertFalse(tokenization._is_control(u"\r")) 120 | 121 | def test_is_punctuation(self): 122 | self.assertTrue(tokenization._is_punctuation(u"-")) 123 | self.assertTrue(tokenization._is_punctuation(u"$")) 124 | self.assertTrue(tokenization._is_punctuation(u"`")) 125 | self.assertTrue(tokenization._is_punctuation(u".")) 126 | 127 | self.assertFalse(tokenization._is_punctuation(u"A")) 128 | self.assertFalse(tokenization._is_punctuation(u" ")) 129 | 130 | 131 | if __name__ == "__main__": 132 | tf.test.main() 133 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/chinese_L-12_H-768_A-12/bert_config.json: -------------------------------------------------------------------------------- 1 | { 2 | "attention_probs_dropout_prob": 0.1, 3 | "directionality": "bidi", 4 | "hidden_act": "gelu", 5 | "hidden_dropout_prob": 0.1, 6 | "hidden_size": 768, 7 | "initializer_range": 0.02, 8 | "intermediate_size": 3072, 9 | "max_position_embeddings": 512, 10 | "num_attention_heads": 12, 11 | "num_hidden_layers": 12, 12 | "pooler_fc_size": 768, 13 | "pooler_num_attention_heads": 12, 14 | "pooler_num_fc_layers": 3, 15 | "pooler_size_per_head": 128, 16 | "pooler_type": "first_token_transform", 17 | "type_vocab_size": 2, 18 | "vocab_size": 21128 19 | } 20 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/chinese_L-12_H-768_A-12/bert_model.ckpt: -------------------------------------------------------------------------------- 1 | bert_model.ckpt here. 
-------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/chinese_L-12_H-768_A-12/bert_model.ckpt.index: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/chinese_L-12_H-768_A-12/bert_model.ckpt.index -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/chinese_L-12_H-768_A-12/bert_model.ckpt.meta: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iseesaw/CoreEntityEmotionClassify/ff239f6b459710eaee75a9b9ecd6f1987649e42c/BERT-BiLSTM-CRF-NER/chinese_L-12_H-768_A-12/bert_model.ckpt.meta -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/conlleval.py: -------------------------------------------------------------------------------- 1 | # Python version of the evaluation script from CoNLL'00- 2 | # Originates from: https://github.com/spyysalo/conlleval.py 3 | 4 | 5 | # Intentional differences: 6 | # - accept any space as delimiter by default 7 | # - optional file argument (default STDIN) 8 | # - option to set boundary (-b argument) 9 | # - LaTeX output (-l argument) not supported 10 | # - raw tags (-r argument) not supported 11 | 12 | # add function :evaluate(predicted_label, ori_label): which will not read from file 13 | 14 | import sys 15 | import re 16 | import codecs 17 | from collections import defaultdict, namedtuple 18 | 19 | ANY_SPACE = '' 20 | 21 | 22 | class FormatError(Exception): 23 | pass 24 | 25 | Metrics = namedtuple('Metrics', 'tp fp fn prec rec fscore') 26 | 27 | 28 | class EvalCounts(object): 29 | def __init__(self): 30 | self.correct_chunk = 0 # number of correctly identified chunks 31 | self.correct_tags = 0 # number of correct chunk tags 32 | self.found_correct = 0 # number of chunks in corpus 33 | self.found_guessed = 0 # number of identified chunks 34 | self.token_counter = 0 # token counter (ignores sentence breaks) 35 | 36 | # counts by type 37 | self.t_correct_chunk = defaultdict(int) 38 | self.t_found_correct = defaultdict(int) 39 | self.t_found_guessed = defaultdict(int) 40 | 41 | 42 | def parse_args(argv): 43 | import argparse 44 | parser = argparse.ArgumentParser( 45 | description='evaluate tagging results using CoNLL criteria', 46 | formatter_class=argparse.ArgumentDefaultsHelpFormatter 47 | ) 48 | arg = parser.add_argument 49 | arg('-b', '--boundary', metavar='STR', default='-X-', 50 | help='sentence boundary') 51 | arg('-d', '--delimiter', metavar='CHAR', default=ANY_SPACE, 52 | help='character delimiting items in input') 53 | arg('-o', '--otag', metavar='CHAR', default='O', 54 | help='alternative outside tag') 55 | arg('file', nargs='?', default=None) 56 | return parser.parse_args(argv) 57 | 58 | 59 | def parse_tag(t): 60 | m = re.match(r'^([^-]*)-(.*)$', t) 61 | return m.groups() if m else (t, '') 62 | 63 | 64 | def evaluate(iterable, options=None): 65 | if options is None: 66 | options = parse_args([]) # use defaults 67 | 68 | counts = EvalCounts() 69 | num_features = None # number of features per line 70 | in_correct = False # currently processed chunks is correct until now 71 | last_correct = 'O' # previous chunk tag in corpus 72 | last_correct_type = '' # type of previously identified chunk tag 73 | last_guessed = 'O' # previously identified chunk tag 74 | last_guessed_type = '' # type of 
previous chunk tag in corpus 75 | 76 | for line in iterable: 77 | line = line.rstrip('\r\n') 78 | 79 | if options.delimiter == ANY_SPACE: 80 | features = line.split() 81 | else: 82 | features = line.split(options.delimiter) 83 | 84 | if num_features is None: 85 | num_features = len(features) 86 | elif num_features != len(features) and len(features) != 0: 87 | raise FormatError('unexpected number of features: %d (%d)' % 88 | (len(features), num_features)) 89 | 90 | if len(features) == 0 or features[0] == options.boundary: 91 | features = [options.boundary, 'O', 'O'] 92 | if len(features) < 3: 93 | raise FormatError('unexpected number of features in line %s' % line) 94 | 95 | guessed, guessed_type = parse_tag(features.pop()) 96 | correct, correct_type = parse_tag(features.pop()) 97 | first_item = features.pop(0) 98 | 99 | if first_item == options.boundary: 100 | guessed = 'O' 101 | 102 | end_correct = end_of_chunk(last_correct, correct, 103 | last_correct_type, correct_type) 104 | end_guessed = end_of_chunk(last_guessed, guessed, 105 | last_guessed_type, guessed_type) 106 | start_correct = start_of_chunk(last_correct, correct, 107 | last_correct_type, correct_type) 108 | start_guessed = start_of_chunk(last_guessed, guessed, 109 | last_guessed_type, guessed_type) 110 | 111 | if in_correct: 112 | if (end_correct and end_guessed and 113 | last_guessed_type == last_correct_type): 114 | in_correct = False 115 | counts.correct_chunk += 1 116 | counts.t_correct_chunk[last_correct_type] += 1 117 | elif (end_correct != end_guessed or guessed_type != correct_type): 118 | in_correct = False 119 | 120 | if start_correct and start_guessed and guessed_type == correct_type: 121 | in_correct = True 122 | 123 | if start_correct: 124 | counts.found_correct += 1 125 | counts.t_found_correct[correct_type] += 1 126 | if start_guessed: 127 | counts.found_guessed += 1 128 | counts.t_found_guessed[guessed_type] += 1 129 | if first_item != options.boundary: 130 | if correct == guessed and guessed_type == correct_type: 131 | counts.correct_tags += 1 132 | counts.token_counter += 1 133 | 134 | last_guessed = guessed 135 | last_correct = correct 136 | last_guessed_type = guessed_type 137 | last_correct_type = correct_type 138 | 139 | if in_correct: 140 | counts.correct_chunk += 1 141 | counts.t_correct_chunk[last_correct_type] += 1 142 | 143 | return counts 144 | 145 | 146 | 147 | def uniq(iterable): 148 | seen = set() 149 | return [i for i in iterable if not (i in seen or seen.add(i))] 150 | 151 | 152 | def calculate_metrics(correct, guessed, total): 153 | tp, fp, fn = correct, guessed-correct, total-correct 154 | p = 0 if tp + fp == 0 else 1.*tp / (tp + fp) 155 | r = 0 if tp + fn == 0 else 1.*tp / (tp + fn) 156 | f = 0 if p + r == 0 else 2 * p * r / (p + r) 157 | return Metrics(tp, fp, fn, p, r, f) 158 | 159 | 160 | def metrics(counts): 161 | c = counts 162 | overall = calculate_metrics( 163 | c.correct_chunk, c.found_guessed, c.found_correct 164 | ) 165 | by_type = {} 166 | for t in uniq(list(c.t_found_correct) + list(c.t_found_guessed)): 167 | by_type[t] = calculate_metrics( 168 | c.t_correct_chunk[t], c.t_found_guessed[t], c.t_found_correct[t] 169 | ) 170 | return overall, by_type 171 | 172 | 173 | def report(counts, out=None): 174 | if out is None: 175 | out = sys.stdout 176 | 177 | overall, by_type = metrics(counts) 178 | 179 | c = counts 180 | out.write('processed %d tokens with %d phrases; ' % 181 | (c.token_counter, c.found_correct)) 182 | out.write('found: %d phrases; correct: %d.\n' % 183 | 
(c.found_guessed, c.correct_chunk)) 184 | 185 | if c.token_counter > 0: 186 | out.write('accuracy: %6.2f%%; ' % 187 | (100.*c.correct_tags/c.token_counter)) 188 | out.write('precision: %6.2f%%; ' % (100.*overall.prec)) 189 | out.write('recall: %6.2f%%; ' % (100.*overall.rec)) 190 | out.write('FB1: %6.2f\n' % (100.*overall.fscore)) 191 | 192 | for i, m in sorted(by_type.items()): 193 | out.write('%17s: ' % i) 194 | out.write('precision: %6.2f%%; ' % (100.*m.prec)) 195 | out.write('recall: %6.2f%%; ' % (100.*m.rec)) 196 | out.write('FB1: %6.2f %d\n' % (100.*m.fscore, c.t_found_guessed[i])) 197 | 198 | 199 | def report_notprint(counts, out=None): 200 | if out is None: 201 | out = sys.stdout 202 | 203 | overall, by_type = metrics(counts) 204 | 205 | c = counts 206 | final_report = [] 207 | line = [] 208 | line.append('processed %d tokens with %d phrases; ' % 209 | (c.token_counter, c.found_correct)) 210 | line.append('found: %d phrases; correct: %d.\n' % 211 | (c.found_guessed, c.correct_chunk)) 212 | final_report.append("".join(line)) 213 | 214 | if c.token_counter > 0: 215 | line = [] 216 | line.append('accuracy: %6.2f%%; ' % 217 | (100.*c.correct_tags/c.token_counter)) 218 | line.append('precision: %6.2f%%; ' % (100.*overall.prec)) 219 | line.append('recall: %6.2f%%; ' % (100.*overall.rec)) 220 | line.append('FB1: %6.2f\n' % (100.*overall.fscore)) 221 | final_report.append("".join(line)) 222 | 223 | for i, m in sorted(by_type.items()): 224 | line = [] 225 | line.append('%17s: ' % i) 226 | line.append('precision: %6.2f%%; ' % (100.*m.prec)) 227 | line.append('recall: %6.2f%%; ' % (100.*m.rec)) 228 | line.append('FB1: %6.2f %d\n' % (100.*m.fscore, c.t_found_guessed[i])) 229 | final_report.append("".join(line)) 230 | return final_report 231 | 232 | 233 | def end_of_chunk(prev_tag, tag, prev_type, type_): 234 | # check if a chunk ended between the previous and current word 235 | # arguments: previous and current chunk tags, previous and current types 236 | chunk_end = False 237 | 238 | if prev_tag == 'E': chunk_end = True 239 | if prev_tag == 'S': chunk_end = True 240 | 241 | if prev_tag == 'B' and tag == 'B': chunk_end = True 242 | if prev_tag == 'B' and tag == 'S': chunk_end = True 243 | if prev_tag == 'B' and tag == 'O': chunk_end = True 244 | if prev_tag == 'I' and tag == 'B': chunk_end = True 245 | if prev_tag == 'I' and tag == 'S': chunk_end = True 246 | if prev_tag == 'I' and tag == 'O': chunk_end = True 247 | 248 | if prev_tag != 'O' and prev_tag != '.' and prev_type != type_: 249 | chunk_end = True 250 | 251 | # these chunks are assumed to have length 1 252 | if prev_tag == ']': chunk_end = True 253 | if prev_tag == '[': chunk_end = True 254 | 255 | return chunk_end 256 | 257 | 258 | def start_of_chunk(prev_tag, tag, prev_type, type_): 259 | # check if a chunk started between the previous and current word 260 | # arguments: previous and current chunk tags, previous and current types 261 | chunk_start = False 262 | 263 | if tag == 'B': chunk_start = True 264 | if tag == 'S': chunk_start = True 265 | 266 | if prev_tag == 'E' and tag == 'E': chunk_start = True 267 | if prev_tag == 'E' and tag == 'I': chunk_start = True 268 | if prev_tag == 'S' and tag == 'E': chunk_start = True 269 | if prev_tag == 'S' and tag == 'I': chunk_start = True 270 | if prev_tag == 'O' and tag == 'E': chunk_start = True 271 | if prev_tag == 'O' and tag == 'I': chunk_start = True 272 | 273 | if tag != 'O' and tag != '.' 
and prev_type != type_: 274 | chunk_start = True 275 | 276 | # these chunks are assumed to have length 1 277 | if tag == '[': chunk_start = True 278 | if tag == ']': chunk_start = True 279 | 280 | return chunk_start 281 | 282 | 283 | def return_report(input_file): 284 | with codecs.open(input_file, "r", "utf8") as f: 285 | counts = evaluate(f) 286 | return report_notprint(counts) 287 | 288 | 289 | def main(argv): 290 | args = parse_args(argv[1:]) 291 | 292 | if args.file is None: 293 | counts = evaluate(sys.stdin, args) 294 | else: 295 | with open(args.file) as f: 296 | counts = evaluate(f, args) 297 | report(counts) 298 | 299 | if __name__ == '__main__': 300 | sys.exit(main(sys.argv)) -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/data/data.here: -------------------------------------------------------------------------------- 1 | data here. -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/data_process_utils.py: -------------------------------------------------------------------------------- 1 | import json 2 | import re 3 | 4 | 5 | def create_example(entry): 6 | data_list = [] 7 | 8 | #entry = json.loads(line,strict=False) 9 | entities = entry['coreEntityEmotions'] 10 | entity_set = set() 11 | for entity in entities: 12 | entity_name = entity['entity'] 13 | entity_set.add(entity_name) 14 | content = entry['title']+entry['content'] 15 | lines = re.split(r"\n", content) 16 | for line in lines: 17 | if not line: 18 | continue 19 | sentences = re.split(r"([。])", line) 20 | sentences.append("") 21 | sentences = ["".join(i) for i in zip(sentences[0::2], sentences[1::2])] 22 | order = 0 23 | 24 | for sentence in sentences: 25 | sentence = sentence.strip() 26 | if sentence == "" or sentence == '\r' or len(sentence) == 1 or len(sentence) > 500: 27 | continue 28 | order += 1 29 | label_result = [] 30 | seg_result = [] 31 | beginset = set() 32 | endset = set() 33 | for name in entity_set: 34 | listb, liste = search(name, sentence) 35 | for b in listb: 36 | beginset.add(b) 37 | for e in liste: 38 | endset.add(e) 39 | state = 0 40 | for index in range(len(sentence)): 41 | seg_result.append(sentence[index]) 42 | if index in beginset: 43 | label_result.append('B') 44 | state = 1 45 | continue 46 | if index in endset: 47 | label_result.append('I') 48 | state = 0 49 | continue 50 | if index not in beginset and index not in endset and state == 1: 51 | label_result.append('I') 52 | continue 53 | 54 | label_result.append('O') 55 | subjson = dict() 56 | subjson['newsId'] = entry['newsId']+"_"+str(order) 57 | subjson['seq'] = seg_result 58 | subjson['label'] = label_result 59 | data_list.append(subjson) 60 | return data_list 61 | 62 | 63 | def search(subtext, string: str): 64 | list = [] 65 | liste = [] 66 | text = string 67 | index = 0 68 | while True: 69 | index = text.find(subtext, index) 70 | if index == -1: 71 | return list,liste 72 | else: 73 | list.append(index) 74 | liste.append(index+len(subtext)-1) 75 | index += len(subtext) 76 | 77 | 78 | if __name__ == "__main__": 79 | # 例子 80 | string = '{"newsId": "7bdc768b", "coreEntityEmotions": [{"entity": "abc", "emotion": "NORM"}, ' \ 81 | '{"entity": "d", "emotion": "NORM"}, {"entity": "e", "emotion": "NORM"}], ' \ 82 | '"title": "abcde", "content": "abcdefadf\nadsea\n"}' 83 | print(create_example(string)) -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/feature_input.py: 
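(Editorial note on data_process_utils.py above.) create_example indexes entry['coreEntityEmotions'] and entry['title'] directly, so it expects an already-parsed dict; the __main__ demo passes the raw JSON string and would raise a TypeError as written. Note also that a single-character entity only ever takes the 'B' branch, so `state` is left at 1 and the characters that follow keep getting tagged 'I' until the next entity end is reached; it is unclear whether that is intended. A minimal sketch of the intended call (variable names illustrative):

import json
from data_process_utils import create_example

entry = json.loads(string)            # parse first; create_example works on the dict
for sent in create_example(entry):
    print(sent['newsId'])
    print(''.join(sent['seq']))       # the sentence, one character per position
    print(' '.join(sent['label']))    # B/I over entity spans, O elsewhere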
-------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | """ 4 | import os 5 | import pickle 6 | import collections 7 | 8 | import tensorflow as tf 9 | from bert import tokenization 10 | 11 | 12 | class InputExample(object): 13 | """A single training/test example for simple sequence classification.""" 14 | 15 | def __init__(self, guid, text, label=None): 16 | """Constructs a InputExample. 17 | 18 | Args: 19 | guid: Unique id for the example. 20 | text_a: string. The untokenized text of the first sequence. For single 21 | sequence tasks, only this sequence must be specified. 22 | label: (Optional) string. The label of the example. This should be 23 | specified for train and dev examples, but not for test examples. 24 | """ 25 | self.guid = guid 26 | self.text = text 27 | self.label = label 28 | 29 | 30 | class InputFeatures(object): 31 | """A single set of features of data.""" 32 | 33 | def __init__(self, input_ids, input_mask, segment_ids, label_ids, ): 34 | self.input_ids = input_ids 35 | self.input_mask = input_mask 36 | self.segment_ids = segment_ids 37 | self.label_ids = label_ids 38 | # self.label_mask = label_mask 39 | 40 | 41 | def convert_single_example(ex_index, example, label_map, max_seq_length, tokenizer, mode, FLAGS): 42 | textlist = example.text.split(' ') 43 | labellist = example.label.split(' ') 44 | tokens = [] 45 | labels = [] 46 | # print(textlist) 47 | for i, word in enumerate(textlist): 48 | token = tokenizer.tokenize(word) 49 | # print(token) 50 | tokens.extend(token) 51 | label_1 = labellist[i] 52 | # print(label_1) 53 | for m in range(len(token)): 54 | if m == 0: 55 | labels.append(label_1) 56 | else: 57 | labels.append("X") 58 | # print(tokens, labels) 59 | # tokens = tokenizer.tokenize(example.text) 60 | if len(tokens) >= max_seq_length - 1: 61 | tokens = tokens[0:(max_seq_length - 2)] 62 | labels = labels[0:(max_seq_length - 2)] 63 | ntokens = [] 64 | segment_ids = [] 65 | label_ids = [] 66 | ntokens.append("[CLS]") 67 | segment_ids.append(0) 68 | # append("O") or append("[CLS]") not sure! 69 | label_ids.append(label_map["[CLS]"]) 70 | for i, token in enumerate(tokens): 71 | ntokens.append(token) 72 | segment_ids.append(0) 73 | label_ids.append(label_map[labels[i]]) 74 | ntokens.append("[SEP]") 75 | segment_ids.append(0) 76 | # append("O") or append("[SEP]") not sure! 77 | label_ids.append(label_map["[SEP]"]) 78 | input_ids = tokenizer.convert_tokens_to_ids(ntokens) 79 | input_mask = [1] * len(input_ids) 80 | # label_mask = [1] * len(input_ids) 81 | while len(input_ids) < max_seq_length: 82 | input_ids.append(0) 83 | input_mask.append(0) 84 | segment_ids.append(0) 85 | # we don't concerned about it! 
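# (Editor's note: the 0 appended as a label id below is only a pad value; real labels
#  are mapped starting from 1 further down in filed_based_convert_examples_to_features,
#  and the CRF loss in model.py is computed over the true sequence length derived from
#  input_ids, so padded positions are never scored.)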
86 | label_ids.append(0) 87 | ntokens.append("**NULL**") 88 | # label_mask.append(0) 89 | # print(len(input_ids)) 90 | assert len(input_ids) == max_seq_length 91 | assert len(input_mask) == max_seq_length 92 | assert len(segment_ids) == max_seq_length 93 | assert len(label_ids) == max_seq_length 94 | # assert len(label_mask) == max_seq_length 95 | 96 | if ex_index < 5: 97 | tf.logging.info("*** Example ***") 98 | tf.logging.info("guid: %s" % (example.guid)) 99 | tf.logging.info("tokens: %s" % " ".join( 100 | [tokenization.printable_text(x) for x in tokens])) 101 | tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids])) 102 | tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask])) 103 | tf.logging.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids])) 104 | tf.logging.info("label_ids: %s" % " ".join([str(x) for x in label_ids])) 105 | # tf.logging.info("label_mask: %s" % " ".join([str(x) for x in label_mask])) 106 | 107 | feature = InputFeatures( 108 | input_ids=input_ids, 109 | input_mask=input_mask, 110 | segment_ids=segment_ids, 111 | label_ids=label_ids, 112 | # label_mask = label_mask 113 | ) 114 | write_tokens(ntokens, mode, FLAGS) 115 | return feature 116 | 117 | 118 | def write_tokens(tokens, mode, FLAGS): 119 | if mode == "test": 120 | path = os.path.join(FLAGS.output_dir, "token_" + mode + ".txt") 121 | wf = open(path, 'a') 122 | for token in tokens: 123 | if token != "**NULL**": 124 | wf.write(token + '\n') 125 | wf.close() 126 | 127 | 128 | def filed_based_convert_examples_to_features( 129 | examples, label_list, max_seq_length, tokenizer, output_file, FLAGS, mode=None): 130 | label_map = {} 131 | for (i, label) in enumerate(label_list, 1): 132 | label_map[label] = i 133 | with open('./output/label2id.pkl', 'wb') as w: 134 | pickle.dump(label_map, w) 135 | 136 | writer = tf.python_io.TFRecordWriter(output_file) 137 | for (ex_index, example) in enumerate(examples): 138 | if ex_index % 5000 == 0: 139 | tf.logging.info("Writing example %d of %d" % (ex_index, len(examples))) 140 | feature = convert_single_example( 141 | ex_index, 142 | example, 143 | label_map, 144 | max_seq_length, 145 | tokenizer, mode, FLAGS) 146 | 147 | def create_int_feature(values): 148 | f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) 149 | return f 150 | 151 | features = collections.OrderedDict() 152 | features["input_ids"] = create_int_feature(feature.input_ids) 153 | features["input_mask"] = create_int_feature(feature.input_mask) 154 | features["segment_ids"] = create_int_feature(feature.segment_ids) 155 | features["label_ids"] = create_int_feature(feature.label_ids) 156 | # features["label_mask"] = create_int_feature(feature.label_mask) 157 | tf_example = tf.train.Example(features=tf.train.Features(feature=features)) 158 | writer.write(tf_example.SerializeToString()) 159 | 160 | 161 | def file_based_input_fn_builder(input_file, seq_length, is_training, drop_remainder): 162 | name_to_features = { 163 | "input_ids": tf.FixedLenFeature([seq_length], tf.int64), 164 | "input_mask": tf.FixedLenFeature([seq_length], tf.int64), 165 | "segment_ids": tf.FixedLenFeature([seq_length], tf.int64), 166 | "label_ids": tf.FixedLenFeature([seq_length], tf.int64), 167 | # "label_ids":tf.VarLenFeature(tf.int64), 168 | # "label_mask": tf.FixedLenFeature([seq_length], tf.int64), 169 | } 170 | 171 | def _decode_record(record, name_to_features): 172 | example = tf.parse_single_example(record, name_to_features) 173 | for name in list(example.keys()): 174 | t = 
example[name] 175 | if t.dtype == tf.int64: 176 | t = tf.to_int32(t) 177 | example[name] = t 178 | return example 179 | 180 | def input_fn(params): 181 | batch_size = params["batch_size"] 182 | d = tf.data.TFRecordDataset(input_file) 183 | if is_training: 184 | d = d.repeat() 185 | d = d.shuffle(buffer_size=100) 186 | d = d.apply(tf.contrib.data.map_and_batch( 187 | lambda record: _decode_record(record, name_to_features), 188 | batch_size=batch_size, 189 | drop_remainder=drop_remainder 190 | )) 191 | return d 192 | 193 | return input_fn 194 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/lstm_crf_layer.py: -------------------------------------------------------------------------------- 1 | # encoding=utf-8 2 | 3 | """ 4 | bert-blstm-crf layer 5 | @Author:Macan 6 | """ 7 | 8 | import tensorflow as tf 9 | from tensorflow.contrib import rnn 10 | from tensorflow.contrib import crf 11 | 12 | 13 | class BLSTM_CRF(object): 14 | def __init__(self, embedded_chars, hidden_unit, cell_type, num_layers, dropout_rate, 15 | initializers, num_labels, seq_length, labels, lengths, is_training): 16 | """ 17 | BLSTM-CRF 网络 18 | :param embedded_chars: Fine-tuning embedding input 19 | :param hidden_unit: LSTM的隐含单元个数 20 | :param cell_type: RNN类型(LSTM OR GRU DICNN will be add in feature) 21 | :param num_layers: RNN的层数 22 | :param droupout_rate: droupout rate 23 | :param initializers: variable init class 24 | :param num_labels: 标签数量 25 | :param seq_length: 序列最大长度 26 | :param labels: 真实标签 27 | :param lengths: [batch_size] 每个batch下序列的真实长度 28 | :param is_training: 是否是训练过程 29 | """ 30 | self.hidden_unit = hidden_unit 31 | self.dropout_rate = dropout_rate 32 | self.cell_type = cell_type 33 | self.num_layers = num_layers 34 | self.embedded_chars = embedded_chars 35 | self.initializers = initializers 36 | self.seq_length = seq_length 37 | self.num_labels = num_labels 38 | self.labels = labels 39 | self.lengths = lengths 40 | self.embedding_dims = embedded_chars.shape[-1].value 41 | self.is_training = is_training 42 | 43 | def add_blstm_crf_layer(self, crf_only): 44 | """ 45 | blstm-crf网络 46 | :return: 47 | """ 48 | if self.is_training: 49 | # lstm input dropout rate i set 0.9 will get best score 50 | self.embedded_chars = tf.nn.dropout(self.embedded_chars, self.dropout_rate) 51 | 52 | if crf_only: 53 | # [batch_size, num_steps, num_tags] 54 | logits = self.project_crf_layer(self.embedded_chars) 55 | else: 56 | # blstm 57 | lstm_output = self.blstm_layer(self.embedded_chars) 58 | # project 59 | # [batch_size, num_steps, num_tags] 60 | logits = self.project_bilstm_layer(lstm_output) 61 | # crf 62 | # [batch_size, num_steps, num_tags] 63 | loss, trans = self.crf_layer(logits) 64 | # CRF decode, pred_ids 是一条最大概率的标注路径 65 | pred_ids, _ = crf.crf_decode(potentials=logits, transition_params=trans, sequence_length=self.lengths) 66 | return (loss, logits, trans, pred_ids) 67 | 68 | def _witch_cell(self): 69 | """ 70 | RNN 类型 71 | :return: 72 | """ 73 | cell_tmp = None 74 | if self.cell_type == 'lstm': 75 | cell_tmp = rnn.LSTMCell(self.hidden_unit) 76 | elif self.cell_type == 'gru': 77 | cell_tmp = rnn.GRUCell(self.hidden_unit) 78 | return cell_tmp 79 | 80 | def _bi_dir_rnn(self): 81 | """ 82 | 双向RNN 83 | :return: 84 | """ 85 | cell_fw = self._witch_cell() 86 | cell_bw = self._witch_cell() 87 | if self.dropout_rate is not None: 88 | cell_bw = rnn.DropoutWrapper(cell_bw, output_keep_prob=self.dropout_rate) 89 | cell_fw = rnn.DropoutWrapper(cell_fw, 
output_keep_prob=self.dropout_rate) 90 | return cell_fw, cell_bw 91 | 92 | def blstm_layer(self, embedding_chars): 93 | """ 94 | 95 | :return: 96 | """ 97 | with tf.variable_scope('rnn_layer'): 98 | cell_fw, cell_bw = self._bi_dir_rnn() 99 | if self.num_layers > 1: 100 | cell_fw = rnn.MultiRNNCell([cell_fw] * self.num_layers, state_is_tuple=True) 101 | cell_bw = rnn.MultiRNNCell([cell_bw] * self.num_layers, state_is_tuple=True) 102 | 103 | outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, embedding_chars, 104 | dtype=tf.float32) 105 | outputs = tf.concat(outputs, axis=2) 106 | return outputs 107 | 108 | def project_bilstm_layer(self, lstm_outputs, name=None): 109 | """ 110 | hidden layer between lstm layer and logits 111 | :param lstm_outputs: [batch_size, num_steps, emb_size] 112 | :return: [batch_size, num_steps, num_tags] 113 | """ 114 | with tf.variable_scope("project" if not name else name): 115 | with tf.variable_scope("hidden"): 116 | W = tf.get_variable("W", shape=[self.hidden_unit * 2, self.hidden_unit], 117 | dtype=tf.float32, initializer=self.initializers.xavier_initializer()) 118 | 119 | b = tf.get_variable("b", shape=[self.hidden_unit], dtype=tf.float32, 120 | initializer=tf.zeros_initializer()) 121 | output = tf.reshape(lstm_outputs, shape=[-1, self.hidden_unit * 2]) 122 | hidden = tf.tanh(tf.nn.xw_plus_b(output, W, b)) 123 | 124 | # project to score of tags 125 | with tf.variable_scope("logits"): 126 | W = tf.get_variable("W", shape=[self.hidden_unit, self.num_labels], 127 | dtype=tf.float32, initializer=self.initializers.xavier_initializer()) 128 | 129 | b = tf.get_variable("b", shape=[self.num_labels], dtype=tf.float32, 130 | initializer=tf.zeros_initializer()) 131 | 132 | pred = tf.nn.xw_plus_b(hidden, W, b) 133 | return tf.reshape(pred, [-1, self.seq_length, self.num_labels]) 134 | 135 | def project_crf_layer(self, embedding_chars, name=None): 136 | """ 137 | hidden layer between input layer and logits 138 | :param lstm_outputs: [batch_size, num_steps, emb_size] 139 | :return: [batch_size, num_steps, num_tags] 140 | """ 141 | with tf.variable_scope("project" if not name else name): 142 | with tf.variable_scope("logits"): 143 | W = tf.get_variable("W", shape=[self.embedding_dims, self.num_labels], 144 | dtype=tf.float32, initializer=self.initializers.xavier_initializer()) 145 | 146 | b = tf.get_variable("b", shape=[self.num_labels], dtype=tf.float32, 147 | initializer=tf.zeros_initializer()) 148 | output = tf.reshape(self.embedded_chars, 149 | shape=[-1, self.embedding_dims]) # [batch_size, embedding_dims] 150 | pred = tf.tanh(tf.nn.xw_plus_b(output, W, b)) 151 | return tf.reshape(pred, [-1, self.seq_length, self.num_labels]) 152 | 153 | def crf_layer(self, logits): 154 | """ 155 | calculate crf loss 156 | :param project_logits: [1, num_steps, num_tags] 157 | :return: scalar loss 158 | """ 159 | with tf.variable_scope("crf_loss"): 160 | trans = tf.get_variable( 161 | "transitions", 162 | shape=[self.num_labels, self.num_labels], 163 | initializer=self.initializers.xavier_initializer()) 164 | if self.labels is None: 165 | return None, trans 166 | else: 167 | log_likelihood, trans = tf.contrib.crf.crf_log_likelihood( 168 | inputs=logits, 169 | tag_indices=self.labels, 170 | transition_params=trans, 171 | sequence_lengths=self.lengths) 172 | return tf.reduce_mean(-log_likelihood), trans 173 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/model.py: 
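(Editorial wiring sketch for the BLSTM_CRF class above; TF 1.x, shapes and hyper-parameters are illustrative rather than taken from the repo.) Two things worth noting: dropout_rate is passed straight to tf.nn.dropout / DropoutWrapper, so it is effectively a keep probability (0.9 keeps 90% of activations), and model.py below calls add_blstm_crf_layer(crf_only=True), which skips the BiLSTM and stacks only a linear projection plus CRF on top of the BERT embeddings.

import tensorflow as tf
from tensorflow.contrib.layers.python.layers import initializers
from lstm_crf_layer import BLSTM_CRF

# Illustrative inputs: contextual embeddings [batch, seq_len, emb], gold tag ids,
# and the true (un-padded) length of every sequence in the batch.
embedded_chars = tf.placeholder(tf.float32, [None, 128, 768])
label_ids = tf.placeholder(tf.int32, [None, 128])
lengths = tf.placeholder(tf.int32, [None])

layer = BLSTM_CRF(embedded_chars=embedded_chars, hidden_unit=128, cell_type='lstm',
                  num_layers=1, dropout_rate=0.9,  # actually a keep probability here
                  initializers=initializers, num_labels=11, seq_length=128,
                  labels=label_ids, lengths=lengths, is_training=True)

loss, logits, trans, pred_ids = layer.add_blstm_crf_layer(crf_only=False)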
-------------------------------------------------------------------------------- 1 | #! usr/bin/env python3 2 | # -*- coding:utf-8 -*- 3 | """ 4 | Copyright 2018 The Google AI Language Team Authors. 5 | BASED ON Google_BERT. 6 | reference from :zhoukaiyin/ 7 | 8 | @Author:Macan 9 | """ 10 | 11 | from __future__ import absolute_import 12 | from __future__ import division 13 | from __future__ import print_function 14 | 15 | import tf_metrics 16 | import tensorflow as tf 17 | from bert import modeling, optimization 18 | from lstm_crf_layer import BLSTM_CRF 19 | from tensorflow.contrib.layers.python.layers import initializers 20 | 21 | 22 | def create_model(bert_config, is_training, input_ids, input_mask, 23 | segment_ids, labels, num_labels, use_one_hot_embeddings, 24 | dropout_rate=1.0, lstm_size=1, cell='lstm', num_layers=1): 25 | """创建Bert + LSTM + CRF模型 26 | """ 27 | # 使用数据加载BertModel,获取对应的字embedding 28 | 29 | model = modeling.BertModel( 30 | config=bert_config, 31 | is_training=is_training, 32 | input_ids=input_ids, 33 | input_mask=input_mask, 34 | token_type_ids=segment_ids, 35 | use_one_hot_embeddings=use_one_hot_embeddings 36 | ) 37 | # 获取对应的embedding 输入数据[batch_size, seq_length, embedding_size] 38 | embedding = model.get_sequence_output() 39 | max_seq_length = embedding.shape[1].value 40 | # 算序列真实长度 41 | used = tf.sign(tf.abs(input_ids)) 42 | lengths = tf.reduce_sum(used, reduction_indices=1) # [batch_size] 大小的向量,包含了当前batch中的序列长度 43 | # 添加CRF output layer 44 | blstm_crf = BLSTM_CRF(embedded_chars=embedding, hidden_unit=lstm_size, cell_type=cell, num_layers=num_layers, 45 | dropout_rate=dropout_rate, initializers=initializers, num_labels=num_labels, 46 | seq_length=max_seq_length, labels=labels, lengths=lengths, is_training=is_training) 47 | rst = blstm_crf.add_blstm_crf_layer(crf_only=True) 48 | return rst 49 | 50 | 51 | def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate, 52 | num_train_steps, num_warmup_steps): 53 | """用于Estimator的构建模型""" 54 | 55 | def model_fn(features, labels, mode, params): 56 | tf.logging.info("*** Features ***") 57 | for name in sorted(features.keys()): 58 | tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape)) 59 | input_ids = features["input_ids"] 60 | input_mask = features["input_mask"] 61 | segment_ids = features["segment_ids"] 62 | label_ids = features["label_ids"] 63 | 64 | print('shape of input_ids', input_ids.shape) 65 | is_training = (mode == tf.estimator.ModeKeys.TRAIN) 66 | 67 | # 使用参数构建模型,input_idx 就是输入的样本idx表示,label_ids 就是标签的idx表示 68 | total_loss, logits, trans, pred_ids = create_model( 69 | bert_config, is_training, input_ids, input_mask, segment_ids, label_ids, 70 | num_labels, False) 71 | 72 | tvars = tf.trainable_variables() 73 | # 加载BERT模型 74 | if init_checkpoint: 75 | (assignment_map, initialized_variable_names) = \ 76 | modeling.get_assignment_map_from_checkpoint(tvars, 77 | init_checkpoint) 78 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map) 79 | 80 | output_spec = None 81 | if mode == tf.estimator.ModeKeys.TRAIN: 82 | train_op = optimization.create_optimizer( 83 | total_loss, learning_rate, num_train_steps, num_warmup_steps, False) 84 | hook_dict = {} 85 | hook_dict['loss'] = total_loss 86 | hook_dict['global_steps'] = tf.train.get_or_create_global_step() 87 | logging_hook = tf.train.LoggingTensorHook( 88 | hook_dict, every_n_iter=100) 89 | 90 | output_spec = tf.estimator.EstimatorSpec( 91 | mode=mode, 92 | loss=total_loss, 93 | train_op=train_op, 94 | 
training_hooks=[logging_hook]) 95 | 96 | elif mode == tf.estimator.ModeKeys.EVAL: 97 | # 针对NER ,进行了修改 98 | # def metric_fn(label_ids, pred_ids): 99 | # return { 100 | # "eval_loss": tf.metrics.mean_squared_error(labels=label_ids, predictions=pred_ids), 101 | # } 102 | 103 | # eval_metrics = metric_fn(label_ids, pred_ids) 104 | # output_spec = tf.estimator.EstimatorSpec( 105 | # mode=mode, 106 | # loss=total_loss, 107 | # eval_metric_ops=eval_metrics 108 | # ) 109 | 110 | def metric_fn(label_ids, pred_ids): 111 | precision = tf_metrics.precision(label_ids, pred_ids, 11, [2, 3, 4, 5, 6, 7], average="macro") 112 | recall = tf_metrics.recall(label_ids, pred_ids, 11, [2, 3, 4, 5, 6, 7], average="macro") 113 | f = tf_metrics.f1(label_ids, pred_ids, 11, [2, 3, 4, 5, 6, 7], average="macro") 114 | # 115 | return { 116 | "eval_precision": precision, 117 | "eval_recall": recall, 118 | "eval_f": f, 119 | # "eval_loss": loss, 120 | } 121 | 122 | eval_metrics = metric_fn(label_ids, pred_ids) 123 | output_spec = tf.estimator.EstimatorSpec( 124 | mode=mode, 125 | loss=total_loss, 126 | eval_metric_ops=eval_metrics 127 | ) 128 | 129 | else: 130 | output_spec = tf.estimator.EstimatorSpec( 131 | mode=mode, 132 | predictions=pred_ids 133 | ) 134 | return output_spec 135 | 136 | return model_fn 137 | -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/requirement.txt: -------------------------------------------------------------------------------- 1 | # client-side requirements, pretty light-weight right? 2 | # tensorflow >= 1.12.0 3 | # tensorflow-gpu >= 1.12.0 # GPU version of TensorFlow. 4 | GPUtil >= 1.3.0 # no need if you dont have GPU 5 | pyzmq >= 17.1.0 # python zmq 6 | flask # no need if you do not need http 7 | flask_compress # no need if you do not need http 8 | flask_json # no need if you do not need http -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/run_test.py: -------------------------------------------------------------------------------- 1 | import re 2 | import os 3 | from pyltp import Segmentor 4 | import json 5 | 6 | def loadStopwords(): 7 | stopwords = set([]) 8 | with open("stopwords.txt", "r", encoding="utf-8") as f: 9 | for line in f.readlines(): 10 | line = line.strip() 11 | stopwords.add(line) 12 | return stopwords 13 | 14 | def read(): 15 | # LTP_DATA_DIR = "3.4.0/ltp_data_v3.4.0" 16 | # cws_model_path = os.path.join(LTP_DATA_DIR, "cws.model") 17 | # segmentor = Segmentor() 18 | # segmentor.load(cws_model_path) 19 | 20 | # stopwords = loadStopwords() 21 | m = 1000 22 | with open("data/coreEntityEmotion_train.txt", "r", encoding="utf-8") as f: 23 | length = 0 24 | for line in f.readlines()[:1]: 25 | line = line.strip() 26 | data = json.loads(line) 27 | 28 | content = data["content"] 29 | if len(content) < m: 30 | m = len(content) 31 | print(m) 32 | 33 | def saveJson(data): 34 | with open("train.txt", "a", encoding="utf-8") as f: 35 | f.write(json.dumps(data, ensure_ascii=False)) 36 | 37 | 38 | if __name__ == "__main__": 39 | read() -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/stopwords.txt: -------------------------------------------------------------------------------- 1 | ? 2 | 、 3 | 。 4 | “ 5 | ” 6 | 《 7 | 》 8 | ! 9 | , 10 | : 11 | ; 12 | ? 
13 | 啊 14 | 阿 15 | 哎 16 | 哎呀 17 | 哎哟 18 | 唉 19 | 俺 20 | 俺们 21 | 按 22 | 按照 23 | 吧 24 | 吧哒 25 | 把 26 | 罢了 27 | 被 28 | 本 29 | 本着 30 | 比 31 | 比方 32 | 比如 33 | 鄙人 34 | 彼 35 | 彼此 36 | 边 37 | 别 38 | 别的 39 | 别说 40 | 并 41 | 并且 42 | 不比 43 | 不成 44 | 不单 45 | 不但 46 | 不独 47 | 不管 48 | 不光 49 | 不过 50 | 不仅 51 | 不拘 52 | 不论 53 | 不怕 54 | 不然 55 | 不如 56 | 不特 57 | 不惟 58 | 不问 59 | 不只 60 | 朝 61 | 朝着 62 | 趁 63 | 趁着 64 | 乘 65 | 冲 66 | 除 67 | 除此之外 68 | 除非 69 | 除了 70 | 此 71 | 此间 72 | 此外 73 | 从 74 | 从而 75 | 打 76 | 待 77 | 但 78 | 但是 79 | 当 80 | 当着 81 | 到 82 | 得 83 | 的 84 | 的话 85 | 等 86 | 等等 87 | 地 88 | 第 89 | 叮咚 90 | 对 91 | 对于 92 | 多 93 | 多少 94 | 而 95 | 而况 96 | 而且 97 | 而是 98 | 而外 99 | 而言 100 | 而已 101 | 尔后 102 | 反过来 103 | 反过来说 104 | 反之 105 | 非但 106 | 非徒 107 | 否则 108 | 嘎 109 | 嘎登 110 | 该 111 | 赶 112 | 个 113 | 各 114 | 各个 115 | 各位 116 | 各种 117 | 各自 118 | 给 119 | 根据 120 | 跟 121 | 故 122 | 故此 123 | 固然 124 | 关于 125 | 管 126 | 归 127 | 果然 128 | 果真 129 | 过 130 | 哈 131 | 哈哈 132 | 呵 133 | 和 134 | 何 135 | 何处 136 | 何况 137 | 何时 138 | 嘿 139 | 哼 140 | 哼唷 141 | 呼哧 142 | 乎 143 | 哗 144 | 还是 145 | 还有 146 | 换句话说 147 | 换言之 148 | 或 149 | 或是 150 | 或者 151 | 极了 152 | 及 153 | 及其 154 | 及至 155 | 即 156 | 即便 157 | 即或 158 | 即令 159 | 即若 160 | 即使 161 | 几 162 | 几时 163 | 己 164 | 既 165 | 既然 166 | 既是 167 | 继而 168 | 加之 169 | 假如 170 | 假若 171 | 假使 172 | 鉴于 173 | 将 174 | 较 175 | 较之 176 | 叫 177 | 接着 178 | 结果 179 | 借 180 | 紧接着 181 | 进而 182 | 尽 183 | 尽管 184 | 经 185 | 经过 186 | 就 187 | 就是 188 | 就是说 189 | 据 190 | 具体地说 191 | 具体说来 192 | 开始 193 | 开外 194 | 靠 195 | 咳 196 | 可 197 | 可见 198 | 可是 199 | 可以 200 | 况且 201 | 啦 202 | 来 203 | 来着 204 | 离 205 | 例如 206 | 哩 207 | 连 208 | 连同 209 | 两者 210 | 了 211 | 临 212 | 另 213 | 另外 214 | 另一方面 215 | 论 216 | 嘛 217 | 吗 218 | 慢说 219 | 漫说 220 | 冒 221 | 么 222 | 每 223 | 每当 224 | 们 225 | 莫若 226 | 某 227 | 某个 228 | 某些 229 | 拿 230 | 哪 231 | 哪边 232 | 哪儿 233 | 哪个 234 | 哪里 235 | 哪年 236 | 哪怕 237 | 哪天 238 | 哪些 239 | 哪样 240 | 那 241 | 那边 242 | 那儿 243 | 那个 244 | 那会儿 245 | 那里 246 | 那么 247 | 那么些 248 | 那么样 249 | 那时 250 | 那些 251 | 那样 252 | 乃 253 | 乃至 254 | 呢 255 | 能 256 | 你 257 | 你们 258 | 您 259 | 宁 260 | 宁可 261 | 宁肯 262 | 宁愿 263 | 哦 264 | 呕 265 | 啪达 266 | 旁人 267 | 呸 268 | 凭 269 | 凭借 270 | 其 271 | 其次 272 | 其二 273 | 其他 274 | 其它 275 | 其一 276 | 其余 277 | 其中 278 | 起 279 | 起见 280 | 起见 281 | 岂但 282 | 恰恰相反 283 | 前后 284 | 前者 285 | 且 286 | 然而 287 | 然后 288 | 然则 289 | 让 290 | 人家 291 | 任 292 | 任何 293 | 任凭 294 | 如 295 | 如此 296 | 如果 297 | 如何 298 | 如其 299 | 如若 300 | 如上所述 301 | 若 302 | 若非 303 | 若是 304 | 啥 305 | 上下 306 | 尚且 307 | 设若 308 | 设使 309 | 甚而 310 | 甚么 311 | 甚至 312 | 省得 313 | 时候 314 | 什么 315 | 什么样 316 | 使得 317 | 是 318 | 是的 319 | 首先 320 | 谁 321 | 谁知 322 | 顺 323 | 顺着 324 | 似的 325 | 虽 326 | 虽然 327 | 虽说 328 | 虽则 329 | 随 330 | 随着 331 | 所 332 | 所以 333 | 他 334 | 他们 335 | 他人 336 | 它 337 | 它们 338 | 她 339 | 她们 340 | 倘 341 | 倘或 342 | 倘然 343 | 倘若 344 | 倘使 345 | 腾 346 | 替 347 | 通过 348 | 同 349 | 同时 350 | 哇 351 | 万一 352 | 往 353 | 望 354 | 为 355 | 为何 356 | 为了 357 | 为什么 358 | 为着 359 | 喂 360 | 嗡嗡 361 | 我 362 | 我们 363 | 呜 364 | 呜呼 365 | 乌乎 366 | 无论 367 | 无宁 368 | 毋宁 369 | 嘻 370 | 吓 371 | 相对而言 372 | 像 373 | 向 374 | 向着 375 | 嘘 376 | 呀 377 | 焉 378 | 沿 379 | 沿着 380 | 要 381 | 要不 382 | 要不然 383 | 要不是 384 | 要么 385 | 要是 386 | 也 387 | 也罢 388 | 也好 389 | 一 390 | 一般 391 | 一旦 392 | 一方面 393 | 一来 394 | 一切 395 | 一样 396 | 一则 397 | 依 398 | 依照 399 | 矣 400 | 以 401 | 以便 402 | 以及 403 | 以免 404 | 以至 405 | 以至于 406 | 以致 407 | 抑或 408 | 因 409 | 因此 410 | 因而 411 | 因为 412 | 哟 413 | 用 414 | 由 415 | 由此可见 416 | 由于 417 | 有 418 | 有的 419 | 有关 420 | 有些 421 | 又 422 | 于 423 | 于是 424 | 于是乎 425 | 与 426 | 与此同时 427 | 与否 428 | 与其 429 | 越是 
430 | 云云 431 | 哉 432 | 再说 433 | 再者 434 | 在 435 | 都 436 | 在下 437 | 咱 438 | 咱们 439 | 则 440 | 怎 441 | 怎么 442 | 怎么办 443 | 怎么样 444 | 怎样 445 | 咋 446 | 照 447 | 照着 448 | 者 449 | 这 450 | 这边 451 | 这儿 452 | 这个 453 | 这会儿 454 | 这就是说 455 | 这里 456 | 这么 457 | 这么点儿 458 | 这么些 459 | 这么样 460 | 这时 461 | 这些 462 | 这样 463 | 正如 464 | 吱 465 | 之 466 | 之类 467 | 之所以 468 | 之一 469 | 只是 470 | 只限 471 | 只要 472 | 只有 473 | 至 474 | 至于 475 | 诸位 476 | 着 477 | 着呢 478 | 自 479 | 自从 480 | 自个儿 481 | 自各儿 482 | 自己 483 | 自家 484 | 自身 485 | 综上所述 486 | 总的来看 487 | 总的来说 488 | 总的说来 489 | 总而言之 490 | 总之 491 | 纵 492 | 纵令 493 | 纵然 494 | 纵使 495 | 遵照 496 | 作为 497 | 兮 498 | 呃 499 | 呗 500 | 咚 501 | 咦 502 | 喏 503 | 啐 504 | 喔唷 505 | 嗬 506 | 嗯 507 | 嗳 -------------------------------------------------------------------------------- /BERT-BiLSTM-CRF-NER/tf_metrics.py: -------------------------------------------------------------------------------- 1 | """ 2 | Multiclass 3 | from: 4 | https://github.com/guillaumegenthial/tf_metrics/blob/master/tf_metrics/__init__.py 5 | 6 | """ 7 | 8 | __author__ = "Guillaume Genthial" 9 | 10 | import numpy as np 11 | import tensorflow as tf 12 | from tensorflow.python.ops.metrics_impl import _streaming_confusion_matrix 13 | 14 | 15 | def precision(labels, predictions, num_classes, pos_indices=None, 16 | weights=None, average='micro'): 17 | """Multi-class precision metric for Tensorflow 18 | Parameters 19 | ---------- 20 | labels : Tensor of tf.int32 or tf.int64 21 | The true labels 22 | predictions : Tensor of tf.int32 or tf.int64 23 | The predictions, same shape as labels 24 | num_classes : int 25 | The number of classes 26 | pos_indices : list of int, optional 27 | The indices of the positive classes, default is all 28 | weights : Tensor of tf.int32, optional 29 | Mask, must be of compatible shape with labels 30 | average : str, optional 31 | 'micro': counts the total number of true positives, false 32 | positives, and false negatives for the classes in 33 | `pos_indices` and infer the metric from it. 34 | 'macro': will compute the metric separately for each class in 35 | `pos_indices` and average. Will not account for class 36 | imbalance. 37 | 'weighted': will compute the metric separately for each class in 38 | `pos_indices` and perform a weighted average by the total 39 | number of true labels for each class. 40 | Returns 41 | ------- 42 | tuple of (scalar float Tensor, update_op) 43 | """ 44 | cm, op = _streaming_confusion_matrix( 45 | labels, predictions, num_classes, weights) 46 | pr, _, _ = metrics_from_confusion_matrix( 47 | cm, pos_indices, average=average) 48 | op, _, _ = metrics_from_confusion_matrix( 49 | op, pos_indices, average=average) 50 | return (pr, op) 51 | 52 | 53 | def recall(labels, predictions, num_classes, pos_indices=None, weights=None, 54 | average='micro'): 55 | """Multi-class recall metric for Tensorflow 56 | Parameters 57 | ---------- 58 | labels : Tensor of tf.int32 or tf.int64 59 | The true labels 60 | predictions : Tensor of tf.int32 or tf.int64 61 | The predictions, same shape as labels 62 | num_classes : int 63 | The number of classes 64 | pos_indices : list of int, optional 65 | The indices of the positive classes, default is all 66 | weights : Tensor of tf.int32, optional 67 | Mask, must be of compatible shape with labels 68 | average : str, optional 69 | 'micro': counts the total number of true positives, false 70 | positives, and false negatives for the classes in 71 | `pos_indices` and infer the metric from it. 
72 | 'macro': will compute the metric separately for each class in 73 | `pos_indices` and average. Will not account for class 74 | imbalance. 75 | 'weighted': will compute the metric separately for each class in 76 | `pos_indices` and perform a weighted average by the total 77 | number of true labels for each class. 78 | Returns 79 | ------- 80 | tuple of (scalar float Tensor, update_op) 81 | """ 82 | cm, op = _streaming_confusion_matrix( 83 | labels, predictions, num_classes, weights) 84 | _, re, _ = metrics_from_confusion_matrix( 85 | cm, pos_indices, average=average) 86 | _, op, _ = metrics_from_confusion_matrix( 87 | op, pos_indices, average=average) 88 | return (re, op) 89 | 90 | 91 | def f1(labels, predictions, num_classes, pos_indices=None, weights=None, 92 | average='micro'): 93 | return fbeta(labels, predictions, num_classes, pos_indices, weights, 94 | average) 95 | 96 | 97 | def fbeta(labels, predictions, num_classes, pos_indices=None, weights=None, 98 | average='micro', beta=1): 99 | """Multi-class fbeta metric for Tensorflow 100 | Parameters 101 | ---------- 102 | labels : Tensor of tf.int32 or tf.int64 103 | The true labels 104 | predictions : Tensor of tf.int32 or tf.int64 105 | The predictions, same shape as labels 106 | num_classes : int 107 | The number of classes 108 | pos_indices : list of int, optional 109 | The indices of the positive classes, default is all 110 | weights : Tensor of tf.int32, optional 111 | Mask, must be of compatible shape with labels 112 | average : str, optional 113 | 'micro': counts the total number of true positives, false 114 | positives, and false negatives for the classes in 115 | `pos_indices` and infer the metric from it. 116 | 'macro': will compute the metric separately for each class in 117 | `pos_indices` and average. Will not account for class 118 | imbalance. 119 | 'weighted': will compute the metric separately for each class in 120 | `pos_indices` and perform a weighted average by the total 121 | number of true labels for each class. 
122 | beta : int, optional 123 | Weight of precision in harmonic mean 124 | Returns 125 | ------- 126 | tuple of (scalar float Tensor, update_op) 127 | """ 128 | cm, op = _streaming_confusion_matrix( 129 | labels, predictions, num_classes, weights) 130 | _, _, fbeta = metrics_from_confusion_matrix( 131 | cm, pos_indices, average=average, beta=beta) 132 | _, _, op = metrics_from_confusion_matrix( 133 | op, pos_indices, average=average, beta=beta) 134 | return (fbeta, op) 135 | 136 | 137 | def safe_div(numerator, denominator): 138 | """Safe division, return 0 if denominator is 0""" 139 | numerator, denominator = tf.to_float(numerator), tf.to_float(denominator) 140 | zeros = tf.zeros_like(numerator, dtype=numerator.dtype) 141 | denominator_is_zero = tf.equal(denominator, zeros) 142 | return tf.where(denominator_is_zero, zeros, numerator / denominator) 143 | 144 | 145 | def pr_re_fbeta(cm, pos_indices, beta=1): 146 | """Uses a confusion matrix to compute precision, recall and fbeta""" 147 | num_classes = cm.shape[0] 148 | neg_indices = [i for i in range(num_classes) if i not in pos_indices] 149 | cm_mask = np.ones([num_classes, num_classes]) 150 | cm_mask[neg_indices, neg_indices] = 0 151 | diag_sum = tf.reduce_sum(tf.diag_part(cm * cm_mask)) 152 | 153 | cm_mask = np.ones([num_classes, num_classes]) 154 | cm_mask[:, neg_indices] = 0 155 | tot_pred = tf.reduce_sum(cm * cm_mask) 156 | 157 | cm_mask = np.ones([num_classes, num_classes]) 158 | cm_mask[neg_indices, :] = 0 159 | tot_gold = tf.reduce_sum(cm * cm_mask) 160 | 161 | pr = safe_div(diag_sum, tot_pred) 162 | re = safe_div(diag_sum, tot_gold) 163 | fbeta = safe_div((1. + beta**2) * pr * re, beta**2 * pr + re) 164 | 165 | return pr, re, fbeta 166 | 167 | 168 | def metrics_from_confusion_matrix(cm, pos_indices=None, average='micro', 169 | beta=1): 170 | """Precision, Recall and F1 from the confusion matrix 171 | Parameters 172 | ---------- 173 | cm : tf.Tensor of type tf.int32, of shape (num_classes, num_classes) 174 | The streaming confusion matrix. 
175 | pos_indices : list of int, optional 176 | The indices of the positive classes 177 | beta : int, optional 178 | Weight of precision in harmonic mean 179 | average : str, optional 180 | 'micro', 'macro' or 'weighted' 181 | """ 182 | num_classes = cm.shape[0] 183 | if pos_indices is None: 184 | pos_indices = [i for i in range(num_classes)] 185 | 186 | if average == 'micro': 187 | return pr_re_fbeta(cm, pos_indices, beta) 188 | elif average in {'macro', 'weighted'}: 189 | precisions, recalls, fbetas, n_golds = [], [], [], [] 190 | for idx in pos_indices: 191 | pr, re, fbeta = pr_re_fbeta(cm, [idx], beta) 192 | precisions.append(pr) 193 | recalls.append(re) 194 | fbetas.append(fbeta) 195 | cm_mask = np.zeros([num_classes, num_classes]) 196 | cm_mask[idx, :] = 1 197 | n_golds.append(tf.to_float(tf.reduce_sum(cm * cm_mask))) 198 | 199 | if average == 'macro': 200 | pr = tf.reduce_mean(precisions) 201 | re = tf.reduce_mean(recalls) 202 | fbeta = tf.reduce_mean(fbetas) 203 | return pr, re, fbeta 204 | if average == 'weighted': 205 | n_gold = tf.reduce_sum(n_golds) 206 | pr_sum = sum(p * n for p, n in zip(precisions, n_golds)) 207 | pr = safe_div(pr_sum, n_gold) 208 | re_sum = sum(r * n for r, n in zip(recalls, n_golds)) 209 | re = safe_div(re_sum, n_gold) 210 | fbeta_sum = sum(f * n for f, n in zip(fbetas, n_golds)) 211 | fbeta = safe_div(fbeta_sum, n_gold) 212 | return pr, re, fbeta 213 | 214 | else: 215 | raise NotImplementedError() -------------------------------------------------------------------------------- /BERT-SA/README.md: -------------------------------------------------------------------------------- 1 | ### Sentiment Analysis 2 | BERT + softmax 3 | 4 | - Input 5 | + [CLS]Named Entity[SEP]The Article[SEP] 6 | - Output 7 | + positive or negative -------------------------------------------------------------------------------- /BERT-SA/bert/.gitignore: -------------------------------------------------------------------------------- 1 | # Initially taken from Github's Python gitignore file 2 | 3 | # Byte-compiled / optimized / DLL files 4 | __pycache__/ 5 | *.py[cod] 6 | *$py.class 7 | 8 | # C extensions 9 | *.so 10 | 11 | # Distribution / packaging 12 | .Python 13 | build/ 14 | develop-eggs/ 15 | dist/ 16 | downloads/ 17 | eggs/ 18 | .eggs/ 19 | lib/ 20 | lib64/ 21 | parts/ 22 | sdist/ 23 | var/ 24 | wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | 53 | # Translations 54 | *.mo 55 | *.pot 56 | 57 | # Django stuff: 58 | *.log 59 | local_settings.py 60 | db.sqlite3 61 | 62 | # Flask stuff: 63 | instance/ 64 | .webassets-cache 65 | 66 | # Scrapy stuff: 67 | .scrapy 68 | 69 | # Sphinx documentation 70 | docs/_build/ 71 | 72 | # PyBuilder 73 | target/ 74 | 75 | # Jupyter Notebook 76 | .ipynb_checkpoints 77 | 78 | # IPython 79 | profile_default/ 80 | ipython_config.py 81 | 82 | # pyenv 83 | .python-version 84 | 85 | # celery beat schedule file 86 | celerybeat-schedule 87 | 88 | # SageMath parsed files 89 | *.sage.py 90 | 91 | # Environments 92 | .env 93 | .venv 94 | env/ 95 | venv/ 96 | ENV/ 97 | env.bak/ 98 | venv.bak/ 99 | 100 | # Spyder project settings 101 | .spyderproject 102 | .spyproject 103 | 104 | # Rope project settings 105 | .ropeproject 106 | 107 | # mkdocs documentation 108 | /site 109 | 110 | # mypy 111 | .mypy_cache/ 112 | .dmypy.json 113 | dmypy.json 114 | 115 | # Pyre type checker 116 | .pyre/ 117 | -------------------------------------------------------------------------------- /BERT-SA/bert/.idea/bert.iml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 12 | -------------------------------------------------------------------------------- /BERT-SA/bert/.idea/encodings.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | -------------------------------------------------------------------------------- /BERT-SA/bert/.idea/misc.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 6 | 7 | -------------------------------------------------------------------------------- /BERT-SA/bert/.idea/modules.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | -------------------------------------------------------------------------------- /BERT-SA/bert/.idea/workspace.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 11 | 12 | 22 | 23 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | 1553918153113 108 | 115 | 116 | 117 | 118 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | 131 | 132 | 133 | 134 | 135 | 136 | 137 | 138 | 139 | 140 | 141 | 142 | 143 | 144 | 145 | 146 | 147 | 148 | 149 | 151 | 152 | 153 | 154 | 155 | Python 156 | #D:/ACourse/2019Spring/CIP/MRC/bert/run_squad.py 157 | 158 | run_squad.FeatureWriter 159 | run_squad.SquadExample 160 | object 161 | run_squad.InputFeatures 162 | 163 | 164 | 165 | 166 | 167 | 168 | 169 | 170 | 171 | 172 | 173 | 174 | 175 | 176 | 177 | 178 | 179 | 180 | 181 | 182 | 183 | 184 | 185 | Inner Classes 186 | Fields 187 | 188 | All 189 | 190 | 191 | 192 | 193 | 194 | 195 | 196 | 197 | 198 | 199 | 200 | 201 | 202 | 203 | 204 | 205 | 206 | -------------------------------------------------------------------------------- /BERT-SA/bert/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # How to Contribute 2 | 3 | BERT needs to maintain permanent compatibility with the pre-trained model 
files, 4 | so we do not plan to make any major changes to this library (other than what was 5 | promised in the README). However, we can accept small patches related to 6 | re-factoring and documentation. To submit contributes, there are just a few 7 | small guidelines you need to follow. 8 | 9 | ## Contributor License Agreement 10 | 11 | Contributions to this project must be accompanied by a Contributor License 12 | Agreement. You (or your employer) retain the copyright to your contribution; 13 | this simply gives us permission to use and redistribute your contributions as 14 | part of the project. Head over to to see 15 | your current agreements on file or to sign a new one. 16 | 17 | You generally only need to submit a CLA once, so if you've already submitted one 18 | (even if it was for a different project), you probably don't need to do it 19 | again. 20 | 21 | ## Code reviews 22 | 23 | All submissions, including submissions by project members, require review. We 24 | use GitHub pull requests for this purpose. Consult 25 | [GitHub Help](https://help.github.com/articles/about-pull-requests/) for more 26 | information on using pull requests. 27 | 28 | ## Community Guidelines 29 | 30 | This project follows 31 | [Google's Open Source Community Guidelines](https://opensource.google.com/conduct/). 32 | -------------------------------------------------------------------------------- /BERT-SA/bert/LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 
40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright [yyyy] [name of copyright owner] 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /BERT-SA/bert/__init__.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | 16 | -------------------------------------------------------------------------------- /BERT-SA/bert/_gitignore.txt: -------------------------------------------------------------------------------- 1 | # Initially taken from Github's Python gitignore file 2 | 3 | # Byte-compiled / optimized / DLL files 4 | __pycache__/ 5 | *.py[cod] 6 | *$py.class 7 | 8 | # C extensions 9 | *.so 10 | 11 | # Distribution / packaging 12 | .Python 13 | build/ 14 | develop-eggs/ 15 | dist/ 16 | downloads/ 17 | eggs/ 18 | .eggs/ 19 | lib/ 20 | lib64/ 21 | parts/ 22 | sdist/ 23 | var/ 24 | wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | 53 | # Translations 54 | *.mo 55 | *.pot 56 | 57 | # Django stuff: 58 | *.log 59 | local_settings.py 60 | db.sqlite3 61 | 62 | # Flask stuff: 63 | instance/ 64 | .webassets-cache 65 | 66 | # Scrapy stuff: 67 | .scrapy 68 | 69 | # Sphinx documentation 70 | docs/_build/ 71 | 72 | # PyBuilder 73 | target/ 74 | 75 | # Jupyter Notebook 76 | .ipynb_checkpoints 77 | 78 | # IPython 79 | profile_default/ 80 | ipython_config.py 81 | 82 | # pyenv 83 | .python-version 84 | 85 | # celery beat schedule file 86 | celerybeat-schedule 87 | 88 | # SageMath parsed files 89 | *.sage.py 90 | 91 | # Environments 92 | .env 93 | .venv 94 | env/ 95 | venv/ 96 | ENV/ 97 | env.bak/ 98 | venv.bak/ 99 | 100 | # Spyder project settings 101 | .spyderproject 102 | .spyproject 103 | 104 | # Rope project settings 105 | .ropeproject 106 | 107 | # mkdocs documentation 108 | /site 109 | 110 | # mypy 111 | .mypy_cache/ 112 | .dmypy.json 113 | dmypy.json 114 | 115 | # Pyre type checker 116 | .pyre/ 117 | -------------------------------------------------------------------------------- /BERT-SA/bert/modeling_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import collections 20 | import json 21 | import random 22 | import re 23 | 24 | import modeling 25 | import six 26 | import tensorflow as tf 27 | 28 | 29 | class BertModelTest(tf.test.TestCase): 30 | 31 | class BertModelTester(object): 32 | 33 | def __init__(self, 34 | parent, 35 | batch_size=13, 36 | seq_length=7, 37 | is_training=True, 38 | use_input_mask=True, 39 | use_token_type_ids=True, 40 | vocab_size=99, 41 | hidden_size=32, 42 | num_hidden_layers=5, 43 | num_attention_heads=4, 44 | intermediate_size=37, 45 | hidden_act="gelu", 46 | hidden_dropout_prob=0.1, 47 | attention_probs_dropout_prob=0.1, 48 | max_position_embeddings=512, 49 | type_vocab_size=16, 50 | initializer_range=0.02, 51 | scope=None): 52 | self.parent = parent 53 | self.batch_size = batch_size 54 | self.seq_length = seq_length 55 | self.is_training = is_training 56 | self.use_input_mask = use_input_mask 57 | self.use_token_type_ids = use_token_type_ids 58 | self.vocab_size = vocab_size 59 | self.hidden_size = hidden_size 60 | self.num_hidden_layers = num_hidden_layers 61 | self.num_attention_heads = num_attention_heads 62 | self.intermediate_size = intermediate_size 63 | self.hidden_act = hidden_act 64 | self.hidden_dropout_prob = hidden_dropout_prob 65 | self.attention_probs_dropout_prob = attention_probs_dropout_prob 66 | self.max_position_embeddings = max_position_embeddings 67 | self.type_vocab_size = type_vocab_size 68 | self.initializer_range = initializer_range 69 | self.scope = scope 70 | 71 | def create_model(self): 72 | input_ids = BertModelTest.ids_tensor([self.batch_size, self.seq_length], 73 | self.vocab_size) 74 | 75 | input_mask = None 76 | if self.use_input_mask: 77 | input_mask = BertModelTest.ids_tensor( 78 | [self.batch_size, self.seq_length], vocab_size=2) 79 | 80 | token_type_ids = None 81 | if self.use_token_type_ids: 82 | token_type_ids = BertModelTest.ids_tensor( 83 | [self.batch_size, self.seq_length], self.type_vocab_size) 84 | 85 | config = modeling.BertConfig( 86 | vocab_size=self.vocab_size, 87 | hidden_size=self.hidden_size, 88 | num_hidden_layers=self.num_hidden_layers, 89 | num_attention_heads=self.num_attention_heads, 90 | intermediate_size=self.intermediate_size, 91 | hidden_act=self.hidden_act, 92 | hidden_dropout_prob=self.hidden_dropout_prob, 93 | attention_probs_dropout_prob=self.attention_probs_dropout_prob, 94 | max_position_embeddings=self.max_position_embeddings, 95 | type_vocab_size=self.type_vocab_size, 96 | initializer_range=self.initializer_range) 97 | 98 | model = modeling.BertModel( 99 | config=config, 100 | is_training=self.is_training, 101 | input_ids=input_ids, 102 | input_mask=input_mask, 103 | token_type_ids=token_type_ids, 104 | scope=self.scope) 105 | 106 | outputs = { 107 | "embedding_output": model.get_embedding_output(), 108 | "sequence_output": model.get_sequence_output(), 109 | "pooled_output": model.get_pooled_output(), 110 | "all_encoder_layers": model.get_all_encoder_layers(), 111 | } 112 | return outputs 113 | 114 | def check_output(self, result): 115 | self.parent.assertAllEqual( 116 | result["embedding_output"].shape, 117 | [self.batch_size, self.seq_length, self.hidden_size]) 118 | 119 | self.parent.assertAllEqual( 120 | result["sequence_output"].shape, 121 | [self.batch_size, self.seq_length, self.hidden_size]) 122 | 123 | self.parent.assertAllEqual(result["pooled_output"].shape, 124 | [self.batch_size, 
self.hidden_size]) 125 | 126 | def test_default(self): 127 | self.run_tester(BertModelTest.BertModelTester(self)) 128 | 129 | def test_config_to_json_string(self): 130 | config = modeling.BertConfig(vocab_size=99, hidden_size=37) 131 | obj = json.loads(config.to_json_string()) 132 | self.assertEqual(obj["vocab_size"], 99) 133 | self.assertEqual(obj["hidden_size"], 37) 134 | 135 | def run_tester(self, tester): 136 | with self.test_session() as sess: 137 | ops = tester.create_model() 138 | init_op = tf.group(tf.global_variables_initializer(), 139 | tf.local_variables_initializer()) 140 | sess.run(init_op) 141 | output_result = sess.run(ops) 142 | tester.check_output(output_result) 143 | 144 | self.assert_all_tensors_reachable(sess, [init_op, ops]) 145 | 146 | @classmethod 147 | def ids_tensor(cls, shape, vocab_size, rng=None, name=None): 148 | """Creates a random int32 tensor of the shape within the vocab size.""" 149 | if rng is None: 150 | rng = random.Random() 151 | 152 | total_dims = 1 153 | for dim in shape: 154 | total_dims *= dim 155 | 156 | values = [] 157 | for _ in range(total_dims): 158 | values.append(rng.randint(0, vocab_size - 1)) 159 | 160 | return tf.constant(value=values, dtype=tf.int32, shape=shape, name=name) 161 | 162 | def assert_all_tensors_reachable(self, sess, outputs): 163 | """Checks that all the tensors in the graph are reachable from outputs.""" 164 | graph = sess.graph 165 | 166 | ignore_strings = [ 167 | "^.*/assert_less_equal/.*$", 168 | "^.*/dilation_rate$", 169 | "^.*/Tensordot/concat$", 170 | "^.*/Tensordot/concat/axis$", 171 | "^testing/.*$", 172 | ] 173 | 174 | ignore_regexes = [re.compile(x) for x in ignore_strings] 175 | 176 | unreachable = self.get_unreachable_ops(graph, outputs) 177 | filtered_unreachable = [] 178 | for x in unreachable: 179 | do_ignore = False 180 | for r in ignore_regexes: 181 | m = r.match(x.name) 182 | if m is not None: 183 | do_ignore = True 184 | if do_ignore: 185 | continue 186 | filtered_unreachable.append(x) 187 | unreachable = filtered_unreachable 188 | 189 | self.assertEqual( 190 | len(unreachable), 0, "The following ops are unreachable: %s" % 191 | (" ".join([x.name for x in unreachable]))) 192 | 193 | @classmethod 194 | def get_unreachable_ops(cls, graph, outputs): 195 | """Finds all of the tensors in graph that are unreachable from outputs.""" 196 | outputs = cls.flatten_recursive(outputs) 197 | output_to_op = collections.defaultdict(list) 198 | op_to_all = collections.defaultdict(list) 199 | assign_out_to_in = collections.defaultdict(list) 200 | 201 | for op in graph.get_operations(): 202 | for x in op.inputs: 203 | op_to_all[op.name].append(x.name) 204 | for y in op.outputs: 205 | output_to_op[y.name].append(op.name) 206 | op_to_all[op.name].append(y.name) 207 | if str(op.type) == "Assign": 208 | for y in op.outputs: 209 | for x in op.inputs: 210 | assign_out_to_in[y.name].append(x.name) 211 | 212 | assign_groups = collections.defaultdict(list) 213 | for out_name in assign_out_to_in.keys(): 214 | name_group = assign_out_to_in[out_name] 215 | for n1 in name_group: 216 | assign_groups[n1].append(out_name) 217 | for n2 in name_group: 218 | if n1 != n2: 219 | assign_groups[n1].append(n2) 220 | 221 | seen_tensors = {} 222 | stack = [x.name for x in outputs] 223 | while stack: 224 | name = stack.pop() 225 | if name in seen_tensors: 226 | continue 227 | seen_tensors[name] = True 228 | 229 | if name in output_to_op: 230 | for op_name in output_to_op[name]: 231 | if op_name in op_to_all: 232 | for input_name in 
op_to_all[op_name]: 233 | if input_name not in stack: 234 | stack.append(input_name) 235 | 236 | expanded_names = [] 237 | if name in assign_groups: 238 | for assign_name in assign_groups[name]: 239 | expanded_names.append(assign_name) 240 | 241 | for expanded_name in expanded_names: 242 | if expanded_name not in stack: 243 | stack.append(expanded_name) 244 | 245 | unreachable_ops = [] 246 | for op in graph.get_operations(): 247 | is_unreachable = False 248 | all_names = [x.name for x in op.inputs] + [x.name for x in op.outputs] 249 | for name in all_names: 250 | if name not in seen_tensors: 251 | is_unreachable = True 252 | if is_unreachable: 253 | unreachable_ops.append(op) 254 | return unreachable_ops 255 | 256 | @classmethod 257 | def flatten_recursive(cls, item): 258 | """Flattens (potentially nested) a tuple/dictionary/list to a list.""" 259 | output = [] 260 | if isinstance(item, list): 261 | output.extend(item) 262 | elif isinstance(item, tuple): 263 | output.extend(list(item)) 264 | elif isinstance(item, dict): 265 | for (_, v) in six.iteritems(item): 266 | output.append(v) 267 | else: 268 | return [item] 269 | 270 | flat_output = [] 271 | for x in output: 272 | flat_output.extend(cls.flatten_recursive(x)) 273 | return flat_output 274 | 275 | 276 | if __name__ == "__main__": 277 | tf.test.main() 278 | -------------------------------------------------------------------------------- /BERT-SA/bert/multilingual.md: -------------------------------------------------------------------------------- 1 | ## Models 2 | 3 | There are two multilingual models currently available. We do not plan to release 4 | more single-language models, but we may release `BERT-Large` versions of these 5 | two in the future: 6 | 7 | * **[`BERT-Base, Multilingual Cased (New, recommended)`](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)**: 8 | 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters 9 | * **[`BERT-Base, Multilingual Uncased (Orig, not recommended)`](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip)**: 10 | 102 languages, 12-layer, 768-hidden, 12-heads, 110M parameters 11 | * **[`BERT-Base, Chinese`](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)**: 12 | Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M 13 | parameters 14 | 15 | **The `Multilingual Cased (New)` model also fixes normalization issues in many 16 | languages, so it is recommended in languages with non-Latin alphabets (and is 17 | often better for most languages with Latin alphabets). When using this model, 18 | make sure to pass `--do_lower_case=false` to `run_pretraining.py` and other 19 | scripts.** 20 | 21 | See the [list of languages](#list-of-languages) that the Multilingual model 22 | supports. The Multilingual model does include Chinese (and English), but if your 23 | fine-tuning data is Chinese-only, then the Chinese model will likely produce 24 | better results. 25 | 26 | ## Results 27 | 28 | To evaluate these systems, we use the 29 | [XNLI dataset](https://github.com/facebookresearch/XNLI) dataset, which is a 30 | version of [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) where the 31 | dev and test sets have been translated (by humans) into 15 languages. Note that 32 | the training set was *machine* translated (we used the translations provided by 33 | XNLI, not Google NMT). 
For clarity, we only report on 6 languages below: 34 | 35 | 36 | 37 | | System | English | Chinese | Spanish | German | Arabic | Urdu | 38 | | --------------------------------- | -------- | -------- | -------- | -------- | -------- | -------- | 39 | | XNLI Baseline - Translate Train | 73.7 | 67.0 | 68.8 | 66.5 | 65.8 | 56.6 | 40 | | XNLI Baseline - Translate Test | 73.7 | 68.3 | 70.7 | 68.7 | 66.8 | 59.3 | 41 | | BERT - Translate Train Cased | **81.9** | **76.6** | **77.8** | **75.9** | **70.7** | 61.6 | 42 | | BERT - Translate Train Uncased | 81.4 | 74.2 | 77.3 | 75.2 | 70.5 | 61.7 | 43 | | BERT - Translate Test Uncased | 81.4 | 70.1 | 74.9 | 74.4 | 70.4 | **62.1** | 44 | | BERT - Zero Shot Uncased | 81.4 | 63.8 | 74.3 | 70.5 | 62.1 | 58.3 | 45 | 46 | 47 | 48 | The first two rows are baselines from the XNLI paper and the last three rows are 49 | our results with BERT. 50 | 51 | **Translate Train** means that the MultiNLI training set was machine translated 52 | from English into the foreign language. So training and evaluation were both 53 | done in the foreign language. Unfortunately, training was done on 54 | machine-translated data, so it is impossible to quantify how much of the lower 55 | accuracy (compared to English) is due to the quality of the machine translation 56 | vs. the quality of the pre-trained model. 57 | 58 | **Translate Test** means that the XNLI test set was machine translated from the 59 | foreign language into English. So training and evaluation were both done on 60 | English. However, test evaluation was done on machine-translated English, so the 61 | accuracy depends on the quality of the machine translation system. 62 | 63 | **Zero Shot** means that the Multilingual BERT system was fine-tuned on English 64 | MultiNLI, and then evaluated on the foreign language XNLI test. In this case, 65 | machine translation was not involved at all in either the pre-training or 66 | fine-tuning. 67 | 68 | Note that the English result is worse than the 84.2 MultiNLI baseline because 69 | this training used Multilingual BERT rather than English-only BERT. This implies 70 | that for high-resource languages, the Multilingual model is somewhat worse than 71 | a single-language model. However, it is not feasible for us to train and 72 | maintain dozens of single-language model. Therefore, if your goal is to maximize 73 | performance with a language other than English or Chinese, you might find it 74 | beneficial to run pre-training for additional steps starting from our 75 | Multilingual model on data from your language of interest. 76 | 77 | Here is a comparison of training Chinese models with the Multilingual 78 | `BERT-Base` and Chinese-only `BERT-Base`: 79 | 80 | System | Chinese 81 | ----------------------- | ------- 82 | XNLI Baseline | 67.0 83 | BERT Multilingual Model | 74.2 84 | BERT Chinese-only Model | 77.2 85 | 86 | Similar to English, the single-language model does 3% better than the 87 | Multilingual model. 88 | 89 | ## Fine-tuning Example 90 | 91 | The multilingual model does **not** require any special consideration or API 92 | changes. We did update the implementation of `BasicTokenizer` in 93 | `tokenization.py` to support Chinese character tokenization, so please update if 94 | you forked it. However, we did not change the tokenization API. 95 | 96 | To test the new models, we did modify `run_classifier.py` to add support for the 97 | [XNLI dataset](https://github.com/facebookresearch/XNLI). 
This is a 15-language 98 | version of MultiNLI where the dev/test sets have been human-translated, and the 99 | training set has been machine-translated. 100 | 101 | To run the fine-tuning code, please download the 102 | [XNLI dev/test set](https://s3.amazonaws.com/xnli/XNLI-1.0.zip) and the 103 | [XNLI machine-translated training set](https://s3.amazonaws.com/xnli/XNLI-MT-1.0.zip) 104 | and then unpack both .zip files into some directory `$XNLI_DIR`. 105 | 106 | To run fine-tuning on XNLI. The language is hard-coded into `run_classifier.py` 107 | (Chinese by default), so please modify `XnliProcessor` if you want to run on 108 | another language. 109 | 110 | This is a large dataset, so this will training will take a few hours on a GPU 111 | (or about 30 minutes on a Cloud TPU). To run an experiment quickly for 112 | debugging, just set `num_train_epochs` to a small value like `0.1`. 113 | 114 | ```shell 115 | export BERT_BASE_DIR=/path/to/bert/chinese_L-12_H-768_A-12 # or multilingual_L-12_H-768_A-12 116 | export XNLI_DIR=/path/to/xnli 117 | 118 | python run_classifier.py \ 119 | --task_name=XNLI \ 120 | --do_train=true \ 121 | --do_eval=true \ 122 | --data_dir=$XNLI_DIR \ 123 | --vocab_file=$BERT_BASE_DIR/vocab.txt \ 124 | --bert_config_file=$BERT_BASE_DIR/bert_config.json \ 125 | --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \ 126 | --max_seq_length=128 \ 127 | --train_batch_size=32 \ 128 | --learning_rate=5e-5 \ 129 | --num_train_epochs=2.0 \ 130 | --output_dir=/tmp/xnli_output/ 131 | ``` 132 | 133 | With the Chinese-only model, the results should look something like this: 134 | 135 | ``` 136 | ***** Eval results ***** 137 | eval_accuracy = 0.774116 138 | eval_loss = 0.83554 139 | global_step = 24543 140 | loss = 0.74603 141 | ``` 142 | 143 | ## Details 144 | 145 | ### Data Source and Sampling 146 | 147 | The languages chosen were the 148 | [top 100 languages with the largest Wikipedias](https://meta.wikimedia.org/wiki/List_of_Wikipedias). 149 | The entire Wikipedia dump for each language (excluding user and talk pages) was 150 | taken as the training data for each language 151 | 152 | However, the size of the Wikipedia for a given language varies greatly, and 153 | therefore low-resource languages may be "under-represented" in terms of the 154 | neural network model (under the assumption that languages are "competing" for 155 | limited model capacity to some extent). 156 | 157 | However, the size of a Wikipedia also correlates with the number of speakers of 158 | a language, and we also don't want to overfit the model by performing thousands 159 | of epochs over a tiny Wikipedia for a particular language. 160 | 161 | To balance these two factors, we performed exponentially smoothed weighting of 162 | the data during pre-training data creation (and WordPiece vocab creation). In 163 | other words, let's say that the probability of a language is *P(L)*, e.g., 164 | *P(English) = 0.21* means that after concatenating all of the Wikipedias 165 | together, 21% of our data is English. We exponentiate each probability by some 166 | factor *S* and then re-normalize, and sample from that distribution. In our case 167 | we use *S=0.7*. So, high-resource languages like English will be under-sampled, 168 | and low-resource languages like Icelandic will be over-sampled. E.g., in the 169 | original distribution English would be sampled 1000x more than Icelandic, but 170 | after smoothing it's only sampled 100x more. 
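As a rough sketch of this smoothing (not code from this repository: the corpus sizes below are invented, and only the exponent *S=0.7* and the 1000x-to-roughly-100x ratio come from the description above), exponentiating each language's share of the data and re-normalizing looks like this:

```python
# Hypothetical corpus sizes; only the exponent S = 0.7 is taken from the text above.
corpus_sizes = {"en": 1000.0, "is": 1.0}  # English ~1000x larger than Icelandic

def smoothed_probs(sizes, s=0.7):
    """Turn raw sizes into P(L), raise each P(L) to the power S, then re-normalize."""
    total = sum(sizes.values())
    probs = {lang: n / total for lang, n in sizes.items()}
    powered = {lang: p ** s for lang, p in probs.items()}
    norm = sum(powered.values())
    return {lang: p / norm for lang, p in powered.items()}

smoothed = smoothed_probs(corpus_sizes)
print(smoothed["en"] / smoothed["is"])  # ~126: roughly 100x instead of 1000x
```

With *S=1* this reduces to sampling in proportion to corpus size; smaller values of *S* flatten the distribution further.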
171 | 172 | ### Tokenization 173 | 174 | For tokenization, we use a 110k shared WordPiece vocabulary. The word counts are 175 | weighted the same way as the data, so low-resource languages are upweighted by 176 | some factor. We intentionally do *not* use any marker to denote the input 177 | language (so that zero-shot training can work). 178 | 179 | Because Chinese (and Japanese Kanji and Korean Hanja) does not have whitespace 180 | characters, we add spaces around every character in the 181 | [CJK Unicode range](https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_\(Unicode_block\)) 182 | before applying WordPiece. This means that Chinese is effectively 183 | character-tokenized. Note that the CJK Unicode block only includes 184 | Chinese-origin characters and does *not* include Hangul Korean or 185 | Katakana/Hiragana Japanese, which are tokenized with whitespace+WordPiece like 186 | all other languages. 187 | 188 | For all other languages, we apply the 189 | [same recipe as English](https://github.com/google-research/bert#tokenization): 190 | (a) lower casing+accent removal, (b) punctuation splitting, (c) whitespace 191 | tokenization. We understand that accent markers have substantial meaning in some 192 | languages, but felt that the benefits of reducing the effective vocabulary make 193 | up for this. Generally the strong contextual models of BERT should make up for 194 | any ambiguity introduced by stripping accent markers. 195 | 196 | ### List of Languages 197 | 198 | The multilingual model supports the following languages. These languages were 199 | chosen because they are the top 100 languages with the largest Wikipedias: 200 | 201 | * Afrikaans 202 | * Albanian 203 | * Arabic 204 | * Aragonese 205 | * Armenian 206 | * Asturian 207 | * Azerbaijani 208 | * Bashkir 209 | * Basque 210 | * Bavarian 211 | * Belarusian 212 | * Bengali 213 | * Bishnupriya Manipuri 214 | * Bosnian 215 | * Breton 216 | * Bulgarian 217 | * Burmese 218 | * Catalan 219 | * Cebuano 220 | * Chechen 221 | * Chinese (Simplified) 222 | * Chinese (Traditional) 223 | * Chuvash 224 | * Croatian 225 | * Czech 226 | * Danish 227 | * Dutch 228 | * English 229 | * Estonian 230 | * Finnish 231 | * French 232 | * Galician 233 | * Georgian 234 | * German 235 | * Greek 236 | * Gujarati 237 | * Haitian 238 | * Hebrew 239 | * Hindi 240 | * Hungarian 241 | * Icelandic 242 | * Ido 243 | * Indonesian 244 | * Irish 245 | * Italian 246 | * Japanese 247 | * Javanese 248 | * Kannada 249 | * Kazakh 250 | * Kirghiz 251 | * Korean 252 | * Latin 253 | * Latvian 254 | * Lithuanian 255 | * Lombard 256 | * Low Saxon 257 | * Luxembourgish 258 | * Macedonian 259 | * Malagasy 260 | * Malay 261 | * Malayalam 262 | * Marathi 263 | * Minangkabau 264 | * Nepali 265 | * Newar 266 | * Norwegian (Bokmal) 267 | * Norwegian (Nynorsk) 268 | * Occitan 269 | * Persian (Farsi) 270 | * Piedmontese 271 | * Polish 272 | * Portuguese 273 | * Punjabi 274 | * Romanian 275 | * Russian 276 | * Scots 277 | * Serbian 278 | * Serbo-Croatian 279 | * Sicilian 280 | * Slovak 281 | * Slovenian 282 | * South Azerbaijani 283 | * Spanish 284 | * Sundanese 285 | * Swahili 286 | * Swedish 287 | * Tagalog 288 | * Tajik 289 | * Tamil 290 | * Tatar 291 | * Telugu 292 | * Turkish 293 | * Ukrainian 294 | * Urdu 295 | * Uzbek 296 | * Vietnamese 297 | * Volapük 298 | * Waray-Waray 299 | * Welsh 300 | * West Frisian 301 | * Western Punjabi 302 | * Yoruba 303 | 304 | The **Multilingual Cased (New)** release contains additionally **Thai** and 305 | **Mongolian**, which were not 
included in the original release. 306 | -------------------------------------------------------------------------------- /BERT-SA/bert/optimization.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | """Functions and classes related to optimization (weight updates).""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import re 22 | import tensorflow as tf 23 | 24 | 25 | def create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps, use_tpu): 26 | """Creates an optimizer training op.""" 27 | global_step = tf.train.get_or_create_global_step() 28 | 29 | learning_rate = tf.constant(value=init_lr, shape=[], dtype=tf.float32) 30 | 31 | # Implements linear decay of the learning rate. 32 | learning_rate = tf.train.polynomial_decay( 33 | learning_rate, 34 | global_step, 35 | num_train_steps, 36 | end_learning_rate=0.0, 37 | power=1.0, 38 | cycle=False) 39 | 40 | # Implements linear warmup. I.e., if global_step < num_warmup_steps, the 41 | # learning rate will be `global_step/num_warmup_steps * init_lr`. 42 | if num_warmup_steps: 43 | global_steps_int = tf.cast(global_step, tf.int32) 44 | warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32) 45 | 46 | global_steps_float = tf.cast(global_steps_int, tf.float32) 47 | warmup_steps_float = tf.cast(warmup_steps_int, tf.float32) 48 | 49 | warmup_percent_done = global_steps_float / warmup_steps_float 50 | warmup_learning_rate = init_lr * warmup_percent_done 51 | 52 | is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32) 53 | learning_rate = ( 54 | (1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate) 55 | 56 | # It is recommended that you use this optimizer for fine tuning, since this 57 | # is how the model was trained (note that the Adam m/v variables are NOT 58 | # loaded from init_checkpoint.) 59 | optimizer = AdamWeightDecayOptimizer( 60 | learning_rate=learning_rate, 61 | weight_decay_rate=0.01, 62 | beta_1=0.9, 63 | beta_2=0.999, 64 | epsilon=1e-6, 65 | exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"]) 66 | 67 | if use_tpu: 68 | optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer) 69 | 70 | tvars = tf.trainable_variables() 71 | grads = tf.gradients(loss, tvars) 72 | 73 | # This is how the model was pre-trained. 74 | (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0) 75 | 76 | train_op = optimizer.apply_gradients( 77 | zip(grads, tvars), global_step=global_step) 78 | 79 | # Normally the global step update is done inside of `apply_gradients`. 80 | # However, `AdamWeightDecayOptimizer` doesn't do this. But if you use 81 | # a different optimizer, you should probably take this line out. 
82 | new_global_step = global_step + 1 83 | train_op = tf.group(train_op, [global_step.assign(new_global_step)]) 84 | return train_op 85 | 86 | 87 | class AdamWeightDecayOptimizer(tf.train.Optimizer): 88 | """A basic Adam optimizer that includes "correct" L2 weight decay.""" 89 | 90 | def __init__(self, 91 | learning_rate, 92 | weight_decay_rate=0.0, 93 | beta_1=0.9, 94 | beta_2=0.999, 95 | epsilon=1e-6, 96 | exclude_from_weight_decay=None, 97 | name="AdamWeightDecayOptimizer"): 98 | """Constructs a AdamWeightDecayOptimizer.""" 99 | super(AdamWeightDecayOptimizer, self).__init__(False, name) 100 | 101 | self.learning_rate = learning_rate 102 | self.weight_decay_rate = weight_decay_rate 103 | self.beta_1 = beta_1 104 | self.beta_2 = beta_2 105 | self.epsilon = epsilon 106 | self.exclude_from_weight_decay = exclude_from_weight_decay 107 | 108 | def apply_gradients(self, grads_and_vars, global_step=None, name=None): 109 | """See base class.""" 110 | assignments = [] 111 | for (grad, param) in grads_and_vars: 112 | if grad is None or param is None: 113 | continue 114 | 115 | param_name = self._get_variable_name(param.name) 116 | 117 | m = tf.get_variable( 118 | name=param_name + "/adam_m", 119 | shape=param.shape.as_list(), 120 | dtype=tf.float32, 121 | trainable=False, 122 | initializer=tf.zeros_initializer()) 123 | v = tf.get_variable( 124 | name=param_name + "/adam_v", 125 | shape=param.shape.as_list(), 126 | dtype=tf.float32, 127 | trainable=False, 128 | initializer=tf.zeros_initializer()) 129 | 130 | # Standard Adam update. 131 | next_m = ( 132 | tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad)) 133 | next_v = ( 134 | tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2, 135 | tf.square(grad))) 136 | 137 | update = next_m / (tf.sqrt(next_v) + self.epsilon) 138 | 139 | # Just adding the square of the weights to the loss function is *not* 140 | # the correct way of using L2 regularization/weight decay with Adam, 141 | # since that will interact with the m and v parameters in strange ways. 142 | # 143 | # Instead we want ot decay the weights in a manner that doesn't interact 144 | # with the m/v parameters. This is equivalent to adding the square 145 | # of the weights to the loss with plain (non-momentum) SGD. 146 | if self._do_use_weight_decay(param_name): 147 | update += self.weight_decay_rate * param 148 | 149 | update_with_lr = self.learning_rate * update 150 | 151 | next_param = param - update_with_lr 152 | 153 | assignments.extend( 154 | [param.assign(next_param), 155 | m.assign(next_m), 156 | v.assign(next_v)]) 157 | return tf.group(*assignments, name=name) 158 | 159 | def _do_use_weight_decay(self, param_name): 160 | """Whether to use L2 weight decay for `param_name`.""" 161 | if not self.weight_decay_rate: 162 | return False 163 | if self.exclude_from_weight_decay: 164 | for r in self.exclude_from_weight_decay: 165 | if re.search(r, param_name) is not None: 166 | return False 167 | return True 168 | 169 | def _get_variable_name(self, param_name): 170 | """Get the variable name from the tensor name.""" 171 | m = re.match("^(.*):\\d+$", param_name) 172 | if m is not None: 173 | param_name = m.group(1) 174 | return param_name 175 | -------------------------------------------------------------------------------- /BERT-SA/bert/optimization_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 
3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import optimization 20 | import tensorflow as tf 21 | 22 | 23 | class OptimizationTest(tf.test.TestCase): 24 | 25 | def test_adam(self): 26 | with self.test_session() as sess: 27 | w = tf.get_variable( 28 | "w", 29 | shape=[3], 30 | initializer=tf.constant_initializer([0.1, -0.2, -0.1])) 31 | x = tf.constant([0.4, 0.2, -0.5]) 32 | loss = tf.reduce_mean(tf.square(x - w)) 33 | tvars = tf.trainable_variables() 34 | grads = tf.gradients(loss, tvars) 35 | global_step = tf.train.get_or_create_global_step() 36 | optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2) 37 | train_op = optimizer.apply_gradients(zip(grads, tvars), global_step) 38 | init_op = tf.group(tf.global_variables_initializer(), 39 | tf.local_variables_initializer()) 40 | sess.run(init_op) 41 | for _ in range(100): 42 | sess.run(train_op) 43 | w_np = sess.run(w) 44 | self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2) 45 | 46 | 47 | if __name__ == "__main__": 48 | tf.test.main() 49 | -------------------------------------------------------------------------------- /BERT-SA/bert/requirements.txt: -------------------------------------------------------------------------------- 1 | tensorflow >= 1.11.0 # CPU Version of TensorFlow. 2 | # tensorflow-gpu >= 1.11.0 # GPU version of TensorFlow. 3 | -------------------------------------------------------------------------------- /BERT-SA/bert/run_classifier_with_tfhub.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | """BERT finetuning runner with TF-Hub.""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import os 22 | import optimization 23 | import run_classifier 24 | import tokenization 25 | import tensorflow as tf 26 | import tensorflow_hub as hub 27 | 28 | flags = tf.flags 29 | 30 | FLAGS = flags.FLAGS 31 | 32 | flags.DEFINE_string( 33 | "bert_hub_module_handle", None, 34 | "Handle for the BERT TF-Hub module.") 35 | 36 | 37 | def create_model(is_training, input_ids, input_mask, segment_ids, labels, 38 | num_labels, bert_hub_module_handle): 39 | """Creates a classification model.""" 40 | tags = set() 41 | if is_training: 42 | tags.add("train") 43 | bert_module = hub.Module(bert_hub_module_handle, tags=tags, trainable=True) 44 | bert_inputs = dict( 45 | input_ids=input_ids, 46 | input_mask=input_mask, 47 | segment_ids=segment_ids) 48 | bert_outputs = bert_module( 49 | inputs=bert_inputs, 50 | signature="tokens", 51 | as_dict=True) 52 | 53 | # In the demo, we are doing a simple classification task on the entire 54 | # segment. 55 | # 56 | # If you want to use the token-level output, use 57 | # bert_outputs["sequence_output"] instead. 58 | output_layer = bert_outputs["pooled_output"] 59 | 60 | hidden_size = output_layer.shape[-1].value 61 | 62 | output_weights = tf.get_variable( 63 | "output_weights", [num_labels, hidden_size], 64 | initializer=tf.truncated_normal_initializer(stddev=0.02)) 65 | 66 | output_bias = tf.get_variable( 67 | "output_bias", [num_labels], initializer=tf.zeros_initializer()) 68 | 69 | with tf.variable_scope("loss"): 70 | if is_training: 71 | # I.e., 0.1 dropout 72 | output_layer = tf.nn.dropout(output_layer, keep_prob=0.9) 73 | 74 | logits = tf.matmul(output_layer, output_weights, transpose_b=True) 75 | logits = tf.nn.bias_add(logits, output_bias) 76 | probabilities = tf.nn.softmax(logits, axis=-1) 77 | log_probs = tf.nn.log_softmax(logits, axis=-1) 78 | 79 | one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32) 80 | 81 | per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1) 82 | loss = tf.reduce_mean(per_example_loss) 83 | 84 | return (loss, per_example_loss, logits, probabilities) 85 | 86 | 87 | def model_fn_builder(num_labels, learning_rate, num_train_steps, 88 | num_warmup_steps, use_tpu, bert_hub_module_handle): 89 | """Returns `model_fn` closure for TPUEstimator.""" 90 | 91 | def model_fn(features, labels, mode, params): # pylint: disable=unused-argument 92 | """The `model_fn` for TPUEstimator.""" 93 | 94 | tf.logging.info("*** Features ***") 95 | for name in sorted(features.keys()): 96 | tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape)) 97 | 98 | input_ids = features["input_ids"] 99 | input_mask = features["input_mask"] 100 | segment_ids = features["segment_ids"] 101 | label_ids = features["label_ids"] 102 | 103 | is_training = (mode == tf.estimator.ModeKeys.TRAIN) 104 | 105 | (total_loss, per_example_loss, logits, probabilities) = create_model( 106 | is_training, input_ids, input_mask, segment_ids, label_ids, num_labels, 107 | bert_hub_module_handle) 108 | 109 | output_spec = None 110 | if mode == tf.estimator.ModeKeys.TRAIN: 111 | train_op = optimization.create_optimizer( 112 | total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu) 113 | 114 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 115 | mode=mode, 116 | loss=total_loss, 117 | train_op=train_op) 118 | elif mode == tf.estimator.ModeKeys.EVAL: 
119 | 120 | def metric_fn(per_example_loss, label_ids, logits): 121 | predictions = tf.argmax(logits, axis=-1, output_type=tf.int32) 122 | accuracy = tf.metrics.accuracy(label_ids, predictions) 123 | loss = tf.metrics.mean(per_example_loss) 124 | return { 125 | "eval_accuracy": accuracy, 126 | "eval_loss": loss, 127 | } 128 | 129 | eval_metrics = (metric_fn, [per_example_loss, label_ids, logits]) 130 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 131 | mode=mode, 132 | loss=total_loss, 133 | eval_metrics=eval_metrics) 134 | elif mode == tf.estimator.ModeKeys.PREDICT: 135 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 136 | mode=mode, predictions={"probabilities": probabilities}) 137 | else: 138 | raise ValueError( 139 | "Only TRAIN, EVAL and PREDICT modes are supported: %s" % (mode)) 140 | 141 | return output_spec 142 | 143 | return model_fn 144 | 145 | 146 | def create_tokenizer_from_hub_module(bert_hub_module_handle): 147 | """Get the vocab file and casing info from the Hub module.""" 148 | with tf.Graph().as_default(): 149 | bert_module = hub.Module(bert_hub_module_handle) 150 | tokenization_info = bert_module(signature="tokenization_info", as_dict=True) 151 | with tf.Session() as sess: 152 | vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"], 153 | tokenization_info["do_lower_case"]]) 154 | return tokenization.FullTokenizer( 155 | vocab_file=vocab_file, do_lower_case=do_lower_case) 156 | 157 | 158 | def main(_): 159 | tf.logging.set_verbosity(tf.logging.INFO) 160 | 161 | processors = { 162 | "cola": run_classifier.ColaProcessor, 163 | "mnli": run_classifier.MnliProcessor, 164 | "mrpc": run_classifier.MrpcProcessor, 165 | } 166 | 167 | if not FLAGS.do_train and not FLAGS.do_eval: 168 | raise ValueError("At least one of `do_train` or `do_eval` must be True.") 169 | 170 | tf.gfile.MakeDirs(FLAGS.output_dir) 171 | 172 | task_name = FLAGS.task_name.lower() 173 | 174 | if task_name not in processors: 175 | raise ValueError("Task not found: %s" % (task_name)) 176 | 177 | processor = processors[task_name]() 178 | 179 | label_list = processor.get_labels() 180 | 181 | tokenizer = create_tokenizer_from_hub_module(FLAGS.bert_hub_module_handle) 182 | 183 | tpu_cluster_resolver = None 184 | if FLAGS.use_tpu and FLAGS.tpu_name: 185 | tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver( 186 | FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project) 187 | 188 | is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2 189 | run_config = tf.contrib.tpu.RunConfig( 190 | cluster=tpu_cluster_resolver, 191 | master=FLAGS.master, 192 | model_dir=FLAGS.output_dir, 193 | save_checkpoints_steps=FLAGS.save_checkpoints_steps, 194 | tpu_config=tf.contrib.tpu.TPUConfig( 195 | iterations_per_loop=FLAGS.iterations_per_loop, 196 | num_shards=FLAGS.num_tpu_cores, 197 | per_host_input_for_training=is_per_host)) 198 | 199 | train_examples = None 200 | num_train_steps = None 201 | num_warmup_steps = None 202 | if FLAGS.do_train: 203 | train_examples = processor.get_train_examples(FLAGS.data_dir) 204 | num_train_steps = int( 205 | len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs) 206 | num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion) 207 | 208 | model_fn = model_fn_builder( 209 | num_labels=len(label_list), 210 | learning_rate=FLAGS.learning_rate, 211 | num_train_steps=num_train_steps, 212 | num_warmup_steps=num_warmup_steps, 213 | use_tpu=FLAGS.use_tpu, 214 | bert_hub_module_handle=FLAGS.bert_hub_module_handle) 215 | 216 | # If TPU is 
not available, this will fall back to normal Estimator on CPU 217 | # or GPU. 218 | estimator = tf.contrib.tpu.TPUEstimator( 219 | use_tpu=FLAGS.use_tpu, 220 | model_fn=model_fn, 221 | config=run_config, 222 | train_batch_size=FLAGS.train_batch_size, 223 | eval_batch_size=FLAGS.eval_batch_size, 224 | predict_batch_size=FLAGS.predict_batch_size) 225 | 226 | if FLAGS.do_train: 227 | train_features = run_classifier.convert_examples_to_features( 228 | train_examples, label_list, FLAGS.max_seq_length, tokenizer) 229 | tf.logging.info("***** Running training *****") 230 | tf.logging.info(" Num examples = %d", len(train_examples)) 231 | tf.logging.info(" Batch size = %d", FLAGS.train_batch_size) 232 | tf.logging.info(" Num steps = %d", num_train_steps) 233 | train_input_fn = run_classifier.input_fn_builder( 234 | features=train_features, 235 | seq_length=FLAGS.max_seq_length, 236 | is_training=True, 237 | drop_remainder=True) 238 | estimator.train(input_fn=train_input_fn, max_steps=num_train_steps) 239 | 240 | if FLAGS.do_eval: 241 | eval_examples = processor.get_dev_examples(FLAGS.data_dir) 242 | eval_features = run_classifier.convert_examples_to_features( 243 | eval_examples, label_list, FLAGS.max_seq_length, tokenizer) 244 | 245 | tf.logging.info("***** Running evaluation *****") 246 | tf.logging.info(" Num examples = %d", len(eval_examples)) 247 | tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size) 248 | 249 | # This tells the estimator to run through the entire set. 250 | eval_steps = None 251 | # However, if running eval on the TPU, you will need to specify the 252 | # number of steps. 253 | if FLAGS.use_tpu: 254 | # Eval will be slightly WRONG on the TPU because it will truncate 255 | # the last batch. 256 | eval_steps = int(len(eval_examples) / FLAGS.eval_batch_size) 257 | 258 | eval_drop_remainder = True if FLAGS.use_tpu else False 259 | eval_input_fn = run_classifier.input_fn_builder( 260 | features=eval_features, 261 | seq_length=FLAGS.max_seq_length, 262 | is_training=False, 263 | drop_remainder=eval_drop_remainder) 264 | 265 | result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps) 266 | 267 | output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt") 268 | with tf.gfile.GFile(output_eval_file, "w") as writer: 269 | tf.logging.info("***** Eval results *****") 270 | for key in sorted(result.keys()): 271 | tf.logging.info(" %s = %s", key, str(result[key])) 272 | writer.write("%s = %s\n" % (key, str(result[key]))) 273 | 274 | if FLAGS.do_predict: 275 | predict_examples = processor.get_test_examples(FLAGS.data_dir) 276 | if FLAGS.use_tpu: 277 | # Discard batch remainder if running on TPU 278 | n = len(predict_examples) 279 | predict_examples = predict_examples[:(n - n % FLAGS.predict_batch_size)] 280 | 281 | predict_file = os.path.join(FLAGS.output_dir, "predict.tf_record") 282 | run_classifier.file_based_convert_examples_to_features( 283 | predict_examples, label_list, FLAGS.max_seq_length, tokenizer, 284 | predict_file) 285 | 286 | tf.logging.info("***** Running prediction*****") 287 | tf.logging.info(" Num examples = %d", len(predict_examples)) 288 | tf.logging.info(" Batch size = %d", FLAGS.predict_batch_size) 289 | 290 | predict_input_fn = run_classifier.file_based_input_fn_builder( 291 | input_file=predict_file, 292 | seq_length=FLAGS.max_seq_length, 293 | is_training=False, 294 | drop_remainder=FLAGS.use_tpu) 295 | 296 | result = estimator.predict(input_fn=predict_input_fn) 297 | 298 | output_predict_file = os.path.join(FLAGS.output_dir, 
"test_results.tsv") 299 | with tf.gfile.GFile(output_predict_file, "w") as writer: 300 | tf.logging.info("***** Predict results *****") 301 | for prediction in result: 302 | probabilities = prediction["probabilities"] 303 | output_line = "\t".join( 304 | str(class_probability) 305 | for class_probability in probabilities) + "\n" 306 | writer.write(output_line) 307 | 308 | 309 | if __name__ == "__main__": 310 | flags.mark_flag_as_required("data_dir") 311 | flags.mark_flag_as_required("task_name") 312 | flags.mark_flag_as_required("bert_hub_module_handle") 313 | flags.mark_flag_as_required("output_dir") 314 | tf.app.run() 315 | -------------------------------------------------------------------------------- /BERT-SA/bert/sample_text.txt: -------------------------------------------------------------------------------- 1 | This text is included to make sure Unicode is handled properly: 力加勝北区ᴵᴺᵀᵃছজটডণত 2 | Text should be one-sentence-per-line, with empty lines between documents. 3 | This sample text is public domain and was randomly selected from Project Guttenberg. 4 | 5 | The rain had only ceased with the gray streaks of morning at Blazing Star, and the settlement awoke to a moral sense of cleanliness, and the finding of forgotten knives, tin cups, and smaller camp utensils, where the heavy showers had washed away the debris and dust heaps before the cabin doors. 6 | Indeed, it was recorded in Blazing Star that a fortunate early riser had once picked up on the highway a solid chunk of gold quartz which the rain had freed from its incumbering soil, and washed into immediate and glittering popularity. 7 | Possibly this may have been the reason why early risers in that locality, during the rainy season, adopted a thoughtful habit of body, and seldom lifted their eyes to the rifted or india-ink washed skies above them. 8 | "Cass" Beard had risen early that morning, but not with a view to discovery. 9 | A leak in his cabin roof,--quite consistent with his careless, improvident habits,--had roused him at 4 A. M., with a flooded "bunk" and wet blankets. 10 | The chips from his wood pile refused to kindle a fire to dry his bed-clothes, and he had recourse to a more provident neighbor's to supply the deficiency. 11 | This was nearly opposite. 12 | Mr. Cassius crossed the highway, and stopped suddenly. 13 | Something glittered in the nearest red pool before him. 14 | Gold, surely! 15 | But, wonderful to relate, not an irregular, shapeless fragment of crude ore, fresh from Nature's crucible, but a bit of jeweler's handicraft in the form of a plain gold ring. 16 | Looking at it more attentively, he saw that it bore the inscription, "May to Cass." 17 | Like most of his fellow gold-seekers, Cass was superstitious. 18 | 19 | The fountain of classic wisdom, Hypatia herself. 20 | As the ancient sage--the name is unimportant to a monk--pumped water nightly that he might study by day, so I, the guardian of cloaks and parasols, at the sacred doors of her lecture-room, imbibe celestial knowledge. 21 | From my youth I felt in me a soul above the matter-entangled herd. 22 | She revealed to me the glorious fact, that I am a spark of Divinity itself. 23 | A fallen star, I am, sir!' continued he, pensively, stroking his lean stomach--'a fallen star!--fallen, if the dignity of philosophy will allow of the simile, among the hogs of the lower world--indeed, even into the hog-bucket itself. Well, after all, I will show you the way to the Archbishop's. 
24 | There is a philosophic pleasure in opening one's treasures to the modest young. 25 | Perhaps you will assist me by carrying this basket of fruit?' And the little man jumped up, put his basket on Philammon's head, and trotted off up a neighbouring street. 26 | Philammon followed, half contemptuous, half wondering at what this philosophy might be, which could feed the self-conceit of anything so abject as his ragged little apish guide; 27 | but the novel roar and whirl of the street, the perpetual stream of busy faces, the line of curricles, palanquins, laden asses, camels, elephants, which met and passed him, and squeezed him up steps and into doorways, as they threaded their way through the great Moon-gate into the ample street beyond, drove everything from his mind but wondering curiosity, and a vague, helpless dread of that great living wilderness, more terrible than any dead wilderness of sand which he had left behind. 28 | Already he longed for the repose, the silence of the Laura--for faces which knew him and smiled upon him; but it was too late to turn back now. 29 | His guide held on for more than a mile up the great main street, crossed in the centre of the city, at right angles, by one equally magnificent, at each end of which, miles away, appeared, dim and distant over the heads of the living stream of passengers, the yellow sand-hills of the desert; 30 | while at the end of the vista in front of them gleamed the blue harbour, through a network of countless masts. 31 | At last they reached the quay at the opposite end of the street; 32 | and there burst on Philammon's astonished eyes a vast semicircle of blue sea, ringed with palaces and towers. 33 | He stopped involuntarily; and his little guide stopped also, and looked askance at the young monk, to watch the effect which that grand panorama should produce on him. 34 | -------------------------------------------------------------------------------- /BERT-SA/bert/tokenization_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import os 20 | import tempfile 21 | import tokenization 22 | import six 23 | import tensorflow as tf 24 | 25 | 26 | class TokenizationTest(tf.test.TestCase): 27 | 28 | def test_full_tokenizer(self): 29 | vocab_tokens = [ 30 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", 31 | "##ing", "," 32 | ] 33 | with tempfile.NamedTemporaryFile(delete=False) as vocab_writer: 34 | if six.PY2: 35 | vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) 36 | else: 37 | vocab_writer.write("".join( 38 | [x + "\n" for x in vocab_tokens]).encode("utf-8")) 39 | 40 | vocab_file = vocab_writer.name 41 | 42 | tokenizer = tokenization.FullTokenizer(vocab_file) 43 | os.unlink(vocab_file) 44 | 45 | tokens = tokenizer.tokenize(u"UNwant\u00E9d,running") 46 | self.assertAllEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"]) 47 | 48 | self.assertAllEqual( 49 | tokenizer.convert_tokens_to_ids(tokens), [7, 4, 5, 10, 8, 9]) 50 | 51 | def test_chinese(self): 52 | tokenizer = tokenization.BasicTokenizer() 53 | 54 | self.assertAllEqual( 55 | tokenizer.tokenize(u"ah\u535A\u63A8zz"), 56 | [u"ah", u"\u535A", u"\u63A8", u"zz"]) 57 | 58 | def test_basic_tokenizer_lower(self): 59 | tokenizer = tokenization.BasicTokenizer(do_lower_case=True) 60 | 61 | self.assertAllEqual( 62 | tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "), 63 | ["hello", "!", "how", "are", "you", "?"]) 64 | self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"]) 65 | 66 | def test_basic_tokenizer_no_lower(self): 67 | tokenizer = tokenization.BasicTokenizer(do_lower_case=False) 68 | 69 | self.assertAllEqual( 70 | tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? 
"), 71 | ["HeLLo", "!", "how", "Are", "yoU", "?"]) 72 | 73 | def test_wordpiece_tokenizer(self): 74 | vocab_tokens = [ 75 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", 76 | "##ing" 77 | ] 78 | 79 | vocab = {} 80 | for (i, token) in enumerate(vocab_tokens): 81 | vocab[token] = i 82 | tokenizer = tokenization.WordpieceTokenizer(vocab=vocab) 83 | 84 | self.assertAllEqual(tokenizer.tokenize(""), []) 85 | 86 | self.assertAllEqual( 87 | tokenizer.tokenize("unwanted running"), 88 | ["un", "##want", "##ed", "runn", "##ing"]) 89 | 90 | self.assertAllEqual( 91 | tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"]) 92 | 93 | def test_convert_tokens_to_ids(self): 94 | vocab_tokens = [ 95 | "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", 96 | "##ing" 97 | ] 98 | 99 | vocab = {} 100 | for (i, token) in enumerate(vocab_tokens): 101 | vocab[token] = i 102 | 103 | self.assertAllEqual( 104 | tokenization.convert_tokens_to_ids( 105 | vocab, ["un", "##want", "##ed", "runn", "##ing"]), [7, 4, 5, 8, 9]) 106 | 107 | def test_is_whitespace(self): 108 | self.assertTrue(tokenization._is_whitespace(u" ")) 109 | self.assertTrue(tokenization._is_whitespace(u"\t")) 110 | self.assertTrue(tokenization._is_whitespace(u"\r")) 111 | self.assertTrue(tokenization._is_whitespace(u"\n")) 112 | self.assertTrue(tokenization._is_whitespace(u"\u00A0")) 113 | 114 | self.assertFalse(tokenization._is_whitespace(u"A")) 115 | self.assertFalse(tokenization._is_whitespace(u"-")) 116 | 117 | def test_is_control(self): 118 | self.assertTrue(tokenization._is_control(u"\u0005")) 119 | 120 | self.assertFalse(tokenization._is_control(u"A")) 121 | self.assertFalse(tokenization._is_control(u" ")) 122 | self.assertFalse(tokenization._is_control(u"\t")) 123 | self.assertFalse(tokenization._is_control(u"\r")) 124 | self.assertFalse(tokenization._is_control(u"\U0001F4A9")) 125 | 126 | def test_is_punctuation(self): 127 | self.assertTrue(tokenization._is_punctuation(u"-")) 128 | self.assertTrue(tokenization._is_punctuation(u"$")) 129 | self.assertTrue(tokenization._is_punctuation(u"`")) 130 | self.assertTrue(tokenization._is_punctuation(u".")) 131 | 132 | self.assertFalse(tokenization._is_punctuation(u"A")) 133 | self.assertFalse(tokenization._is_punctuation(u" ")) 134 | 135 | 136 | if __name__ == "__main__": 137 | tf.test.main() 138 | -------------------------------------------------------------------------------- /BERT-SA/model.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import tensorflow as tf 3 | from bert import modeling, optimization 4 | 5 | 6 | def create_model(bert_config, is_training, input_ids, input_mask, segment_ids, 7 | labels, num_labels, use_one_hot_embeddings): 8 | """Creates a classification model.""" 9 | model = modeling.BertModel( 10 | config=bert_config, 11 | is_training=is_training, 12 | input_ids=input_ids, 13 | input_mask=input_mask, 14 | token_type_ids=segment_ids, 15 | use_one_hot_embeddings=use_one_hot_embeddings) 16 | 17 | # In the demo, we are doing a simple classification task on the entire 18 | # segment. 19 | # 20 | # If you want to use the token-level output, use model.get_sequence_output() 21 | # instead. 
22 | output_layer = model.get_pooled_output() 23 | 24 | hidden_size = output_layer.shape[-1].value 25 | 26 | output_weights = tf.get_variable( 27 | "output_weights", [num_labels, hidden_size], 28 | initializer=tf.truncated_normal_initializer(stddev=0.02)) 29 | 30 | output_bias = tf.get_variable( 31 | "output_bias", [num_labels], initializer=tf.zeros_initializer()) 32 | 33 | with tf.variable_scope("loss"): 34 | if is_training: 35 | # I.e., 0.1 dropout 36 | output_layer = tf.nn.dropout(output_layer, keep_prob=0.9) 37 | 38 | logits = tf.matmul(output_layer, output_weights, transpose_b=True) 39 | logits = tf.nn.bias_add(logits, output_bias) 40 | probabilities = tf.nn.softmax(logits, axis=-1) 41 | log_probs = tf.nn.log_softmax(logits, axis=-1) 42 | 43 | one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32) 44 | 45 | per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1) 46 | loss = tf.reduce_mean(per_example_loss) 47 | 48 | return (loss, per_example_loss, logits, probabilities) 49 | 50 | 51 | def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate, 52 | num_train_steps, num_warmup_steps, use_tpu, 53 | use_one_hot_embeddings): 54 | """Returns `model_fn` closure for TPUEstimator.""" 55 | 56 | def model_fn(features, labels, mode, params): # pylint: disable=unused-argument 57 | """The `model_fn` for TPUEstimator.""" 58 | 59 | tf.logging.info("*** Features ***") 60 | for name in sorted(features.keys()): 61 | tf.logging.info(" name = %s, shape = %s" % 62 | (name, features[name].shape)) 63 | 64 | input_ids = features["input_ids"] 65 | input_mask = features["input_mask"] 66 | segment_ids = features["segment_ids"] 67 | label_ids = features["label_ids"] 68 | is_real_example = None 69 | if "is_real_example" in features: 70 | is_real_example = tf.cast( 71 | features["is_real_example"], dtype=tf.float32) 72 | else: 73 | is_real_example = tf.ones(tf.shape(label_ids), dtype=tf.float32) 74 | 75 | is_training = (mode == tf.estimator.ModeKeys.TRAIN) 76 | 77 | (total_loss, per_example_loss, logits, probabilities) = create_model( 78 | bert_config, is_training, input_ids, input_mask, segment_ids, label_ids, 79 | num_labels, use_one_hot_embeddings) 80 | 81 | tvars = tf.trainable_variables() 82 | initialized_variable_names = {} 83 | scaffold_fn = None 84 | if init_checkpoint: 85 | (assignment_map, initialized_variable_names 86 | ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint) 87 | if use_tpu: 88 | 89 | def tpu_scaffold(): 90 | tf.train.init_from_checkpoint( 91 | init_checkpoint, assignment_map) 92 | return tf.train.Scaffold() 93 | 94 | scaffold_fn = tpu_scaffold 95 | else: 96 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map) 97 | 98 | tf.logging.info("**** Trainable Variables ****") 99 | for var in tvars: 100 | init_string = "" 101 | if var.name in initialized_variable_names: 102 | init_string = ", *INIT_FROM_CKPT*" 103 | tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape, 104 | init_string) 105 | 106 | output_spec = None 107 | if mode == tf.estimator.ModeKeys.TRAIN: 108 | 109 | train_op = optimization.create_optimizer( 110 | total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu) 111 | 112 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 113 | mode=mode, 114 | loss=total_loss, 115 | train_op=train_op, 116 | scaffold_fn=scaffold_fn) 117 | elif mode == tf.estimator.ModeKeys.EVAL: 118 | 119 | def metric_fn(per_example_loss, label_ids, logits, is_real_example): 120 | predictions = tf.argmax(logits, 
axis=-1, output_type=tf.int32) 121 | accuracy = tf.metrics.accuracy( 122 | labels=label_ids, predictions=predictions, weights=is_real_example) 123 | loss = tf.metrics.mean( 124 | values=per_example_loss, weights=is_real_example) 125 | return { 126 | "eval_accuracy": accuracy, 127 | "eval_loss": loss, 128 | } 129 | 130 | eval_metrics = (metric_fn, 131 | [per_example_loss, label_ids, logits, is_real_example]) 132 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 133 | mode=mode, 134 | loss=total_loss, 135 | eval_metrics=eval_metrics, 136 | scaffold_fn=scaffold_fn) 137 | else: 138 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 139 | mode=mode, 140 | predictions={"probabilities": probabilities}, 141 | scaffold_fn=scaffold_fn) 142 | return output_spec 143 | 144 | return model_fn 145 | -------------------------------------------------------------------------------- /BERT-SA/processor.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | """ 4 | import os 5 | import re 6 | import random 7 | import tensorflow as tf 8 | import pandas as pd 9 | from bert import tokenization 10 | from input_feature import InputExample, InputFeatures 11 | 12 | 13 | class DataProcessor(object): 14 | """Base class for data converters for sequence classification data sets.""" 15 | 16 | def get_train_examples(self, data_dir): 17 | """Gets a collection of `InputExample`s for the train set.""" 18 | raise NotImplementedError() 19 | 20 | def get_dev_examples(self, data_dir): 21 | """Gets a collection of `InputExample`s for the dev set.""" 22 | raise NotImplementedError() 23 | 24 | def get_test_examples(self, data_dir): 25 | """Gets a collection of `InputExample`s for prediction.""" 26 | raise NotImplementedError() 27 | 28 | def get_labels(self): 29 | """Gets the list of labels for this data set.""" 30 | raise NotImplementedError() 31 | 32 | @classmethod 33 | def _read_tsv(cls, input_file, quotechar=None): 34 | """Reads a tab separated value file.""" 35 | with tf.gfile.Open(input_file, "r") as f: 36 | lines = f.readlines() 37 | 38 | examples = [] 39 | example = [] 40 | 41 | for line in lines: 42 | if " pos else 1) 9 | 10 | with open("1160300607.csv", "w") as f: 11 | for i, l in enumerate(y): 12 | f.write("{},{}\n".format(i, l)) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ### Core Entity Emotion Classify 2 | [2019 Sohu Campus Algorithm Contest (2019搜狐校园算法大赛)](https://www.biendata.com/competition/sohu2019/) 3 | 4 | ### Competition Task 5 | Given a collection of articles, the goal is to identify each article's core entities and the article's sentiment toward those entities. For every article, at most three core entities are extracted, and the sentiment toward each of them is classified as positive, neutral, or negative. 6 | 7 | ### Model Description 8 | #### Named Entity Recognition 9 | - BERT + BiLSTM + CRF 10 | + [CLS]Sentence[SEP] 11 | - Preprocessing 12 | + Split the training-set articles into sentence-level samples and use them as model input 13 | 14 | #### Sentiment Classification 15 | - BERT 16 | + [CLS]Entity[SEP]Article[SEP] (a minimal sketch of building this pair input is given after this README) --------------------------------------------------------------------------------
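A minimal sketch of the `[CLS]Entity[SEP]Article[SEP]` input described above, assuming the bundled `bert` package, a `vocab.txt` from the pretrained checkpoint, and an illustrative `max_seq_length` of 128; the repository's own conversion is handled by `input_feature.py` and `run_classifier.convert_examples_to_features`, so this is only an illustration of the pairing scheme, not the project's code:

```python
# Sketch only: build the [CLS]Entity[SEP]Article[SEP] pair input for the
# sentiment classifier. Assumes the repo's `bert` package and a vocab.txt
# from the pretrained checkpoint; max_seq_length=128 is an illustrative choice.
from bert import tokenization


def build_pair_input(entity, article, tokenizer, max_seq_length=128):
    tokens_a = tokenizer.tokenize(entity)   # entity mention (assumed short)
    tokens_b = tokenizer.tokenize(article)  # article text

    # Leave room for [CLS], [SEP], [SEP]; truncate the article, not the entity.
    tokens_b = tokens_b[:max(0, max_seq_length - len(tokens_a) - 3)]

    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)

    input_ids = tokenizer.convert_tokens_to_ids(tokens)
    input_mask = [1] * len(input_ids)

    # Zero-pad all three lists up to max_seq_length.
    padding = [0] * (max_seq_length - len(input_ids))
    return input_ids + padding, input_mask + padding, segment_ids + padding


if __name__ == "__main__":
    # The vocab.txt path is an assumption; point it at the checkpoint's vocab file.
    tokenizer = tokenization.FullTokenizer(vocab_file="vocab.txt", do_lower_case=True)
    ids, mask, segs = build_pair_input("某公司", "某公司今日发布了新产品……", tokenizer)
```

The three returned lists correspond to the `input_ids`, `input_mask`, and `segment_ids` features that `create_model` in `model.py` consumes.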