├── requirements.txt
├── LICENCE
├── scripts
│   ├── booknames.json
│   ├── finetune_data_preprocess.py
│   ├── math_corpora_data_process.py
│   ├── convert_evaluate_test.py
│   └── MathBERT_finetune.ipynb
├── README.md
└── mathbert
    ├── optimization.py
    ├── tokenization.py
    ├── create_pretraining_data.py
    ├── run_pretraining.py
    ├── modeling.py
    └── run_classifier.py
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
torch
transformers
tensorflow
scikit-learn
pandas
numpy
--------------------------------------------------------------------------------
/LICENCE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) Facebook, Inc. and its affiliates.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/scripts/booknames.json:
--------------------------------------------------------------------------------
["Abstract Algebra: The Basic Graduate Year", "Abstract Algebra: Theory and Applications", "Advanced Algebra II", "Advanced Calculus", "Algebra 1", "Algebra and Analysis for Computer Science", "Applied Probability", "Basic Analysis", "Basic Concepts of Mathematics", "Basic Probability Theory", "Basic Probability and Statistics", "Book of Proof", "Calculus for Beginners and Artists", "Calculus Online Textbook", "Calculus", "Calculus", "Calculus 1", "Calculus 2", "Calculus 3", "Calculus with Applications", "CK-12 Geometry (Grades 10-12)", "CK-12 Single Variable Calculus (Grades 11-12)", "CK-12 Trigonometry \u2013 Second Edition", "Collaborative Statistics", "College Algebra", "Complex Analysis", "Complex Variables\u00a0", "Computational Geometry", "A Computational Introduction to Number Theory and Algebra", "A Course In Algebraic Number Theory", "A Course In Commutative Algebra", "Design of Comparative Experiments", "Difference Equations to Differential Equations: An Introduction to Calculus", "Differential Equations", "Discrete Mathematics: An Open Introduction", "Dynamical Systems", "Electronic Statistics Textbook", "Elementary Abstract Algebra", "Elementary Calculus: An Infinitesimal Approach", "Elementary Differential Equations", "Elementary Differential Equations with Boundary Value Problems", "Student Solutions Manual for\u00a0Elementary Differential Equations\u00a0and\u00a0Elementary Differential Equations with Boundary Value Problems", "Elementary Linear Algebra", "Elementary Number Theory", "Elements of Abstract and Linear Algebra", "First Course in Complex Analysis", "First Course in Linear Algebra", "Functions Defined by Improper Integrals", "GeneratingFunctionology", "Introduction to Matrix Algebra", "Introduction to Probability, Statistics, and Random Processes", "Introduction to Probability", "Introduction to Real Analysis", "An Introduction to Statistical Learning", "Introduction to Social Network Methods", "Introduction to Statistical Thought", "Introduction to the Theory of Numbers", "Introductory Statistics: Concepts, Models, and Applications", "Lectures on Probability, Statistics and Econometrics", "Lectures on Statistics", "Linear Algebra", "Linear Algebra", "Math Alive", "Math in Society", "Math, Numerics, and Programming", "Mathematical Methods of Engineering Analysis", "Multivariable Calculus", "Proofs and Concepts: The Fundamentals of Abstract Mathematics", "Real Variables with Basic Metric Space Topology", "Statistics", "A Summary of Calculus", "The Book \u201cA=B\u201d", "The Elements of Statistical Learning", "The Method of Lagrange Multipliers", "The Calculus of Functions of Several Variables", "Yet Another Calculus Text"]
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# MathBERT

```MathBERT``` is a BERT model pre-trained on the following mathematics text:

+ pre-k to high school math curriculum from engageny.org
+ G6-8 math curriculum from utahmiddleschoolmath.org
+ G6-high school math from illustrativemathematics.org
+ high school to college math textbooks from openculture.com
+ G6-8 math curriculum from ck12.org
+ college to graduate level MOOC math course syllabi from classcentral.com
+ math paper abstracts from arxiv.org

MathBERT has its own vocabulary (```mathVocab```) that is built via ```BertTokenizer``` to best match the training corpus. We also trained MathBERT with the original BERT vocabulary (```baseVocab```) for comparison. Both models are uncased.


#### Downloading Trained Models
We release TensorFlow and PyTorch versions of the trained models. The TensorFlow version is compatible with code that works with the model from [Google Research](https://github.com/google-research/bert). The PyTorch version is created using the [Hugging Face library](https://github.com/huggingface/transformers).
+ TensorFlow download
  + note: to download the mathbert-mathvocab version, change the model name to ```mathbert-mathvocab-uncased``` in the code below
```
wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_config.json
wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/vocab.txt
wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_model.ckpt.index
wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_model.ckpt.meta
wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_model.ckpt.data-00000-of-00001
```
+ PyTorch download
  + MathBERT models can now be installed directly within Hugging Face's framework under the namespace ```tbs17```, at https://huggingface.co/tbs17/MathBERT or https://huggingface.co/tbs17/MathBERT-custom.
```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('tbs17/MathBERT')
model = AutoModel.from_pretrained('tbs17/MathBERT')

tokenizer = AutoTokenizer.from_pretrained('tbs17/MathBERT-custom')
model = AutoModel.from_pretrained('tbs17/MathBERT-custom')
```

#### Pretraining and fine-tuning

The pretraining code is located in /mathbert/ and the fine-tuning notebook is at /scripts/MathBERT_finetune.ipynb. Unfortunately, we cannot release the fine-tuning data set per the data owner's request. All the packages we use are listed in the requirements.txt file. A minimal example of loading and exercising the released models is sketched in the Example usage section below.


#### Legal
MathBERT is MIT-licensed; refer to the [LICENCE](https://github.com/tbs17/MathBERT/blob/main/LICENCE) file in the top-level directory.
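
#### Example usage

As a quick sanity check of the released PyTorch weights, the model can be exercised with the ```fill-mask``` pipeline. This is a minimal sketch; the example sentence is illustrative and not taken from the training corpus:

```
from transformers import pipeline

nlp = pipeline('fill-mask', model='tbs17/MathBERT')
print(nlp('students will be able to add and [MASK] fractions with unlike denominators.'))
```

Pre-training from scratch follows the standard workflow of the scripts under /mathbert/: prepare a one-sentence-per-line corpus with blank lines between sequences, run create_pretraining_data.py to produce tfrecords, and then run run_pretraining.py on the output.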

--------------------------------------------------------------------------------
/scripts/finetune_data_preprocess.py:
--------------------------------------------------------------------------------
import pandas as pd
import numpy as np
from IPython.display import display
import os, time

# below is to calculate token counts
def cal_tokens(data_dir,out_dir=None):
    from nltk.tokenize import word_tokenize
    from pathlib import Path
    for i, file in enumerate(os.listdir(data_dir)):
        f=os.path.join(data_dir,file)
        print(f'{i}:{file} has size {round(os.path.getsize(f)/1024/1024,3)} MB')
        data=pd.read_csv(f,encoding='latin-1')
        data_tokens=data.iloc[:,0].apply(word_tokenize)
        data_corp=[]
        for row_tokens in data_tokens:
            for t in row_tokens:
                data_corp.append(t)
        data_corp_uni=np.unique(data_corp)
        print(f'{len(data_corp)} total tokens found\n'
              f'{len(data_corp_uni)} total unique tokens found')

    return data_corp,data_corp_uni

def cal_tokens2(data_dir,out_dir=None):
    from nltk.tokenize import word_tokenize
    from pathlib import Path
    for i, file in enumerate(os.listdir(data_dir)):
        f=os.path.join(data_dir,file)

        data=pd.read_csv(f,encoding='latin-1')
        print(f'{i}:{file} has shape {data.shape}')

        data['row_tokens']=data.iloc[:,0].apply(lambda x:len(word_tokenize(x)))
        print(data.sum(axis=0))

    return data


# The cleaning below is for problem text, which comes in with HTML tags and images.
# The data itself is not shared in this repo per the provider's request.
def remove_html_tags(text):
    """Remove html tags from a string"""
    import re
    clean = re.compile('<.*?>')
    return re.sub(clean, '', text)

def tag_removal(data_path):

    data=pd.read_csv(data_path,encoding='latin-1')
    print('---df before removing html tags---')
    data.iloc[:,1]=data.iloc[:,1].str.strip()
    data.iloc[:,1]=list(map(remove_html_tags,data.iloc[:,1].astype(str)))
    data.iloc[:,1]=data.iloc[:,1].str.replace(' ',' ')
    data.iloc[:,1]=data.iloc[:,1].str.replace('©',' ').str.replace('( ){2,10}',' ')
    data.iloc[:,1]=data.iloc[:,1].str.replace('[\r\n]{1,10}',' ')
    data.iloc[:,1]=data.iloc[:,1].replace('',np.nan)
    data=data.dropna()
    print('---df after removing html tags---')
    print(f'shape after removing empty rows: {data.shape}')
    marker=data_path.split('.')[0]
    data.columns=['Index','question','answer','label']
    data.to_csv('{}_clean.csv'.format(marker),index=False)
    return data

def tag_removal_v2(data_path,text_col):

    data=pd.read_csv(data_path,encoding='latin-1')
    # print(f'{i}:{file} has size {round(os.path.getsize(f)/1024/1024,3)} MB and shape {data.shape}')
    print('---df before removing html tags---')
    display(data.tail())
    data[text_col+'_cleaned']=data[text_col].str.strip()
    data[text_col+'_cleaned']=list(map(remove_html_tags,data[text_col+'_cleaned']))
    data[text_col+'_cleaned']=data[text_col+'_cleaned'].str.replace(' ',' ')
    data[text_col+'_cleaned']=data[text_col+'_cleaned'].str.replace('©',' ').str.replace('( ){2,10}',' ')
    data[text_col+'_cleaned']=data[text_col+'_cleaned'].str.replace('[\r\n]{1,10}',' ')
    data[text_col+'_cleaned']=data[text_col+'_cleaned'].replace('',np.nan)
    data=data.dropna()
    print('---df after removing html tags---')
    display(data.tail())
    print(f'shape after removing empty rows: {data.shape}')
    marker=data_path.split('.')[0]
    data.to_csv('{}_clean.csv'.format(marker),index=False)
    return data
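
# Worked example of the tag cleaning above (hypothetical input, not from the real data):
#   remove_html_tags('<p>Solve <b>2x + 3 = 7</b> for x.</p>')
#   returns 'Solve 2x + 3 = 7 for x.'
# The non-greedy pattern '<.*?>' strips each tag individually, so nested and
# adjacent tags are removed while the text between them is preserved.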

# =====Executing above code to clean auto-grade data===
valid_answer_clean=tag_removal_v2('mathBERT-downstream Tasks/auto_grade/valid_answer_texts.csv','problem_text')

# below is to split the data into train/dev/test in the ratio of 72:8:20 and output as .tsv files
# you will also need to convert the original .csv files (before splitting) to .txt files to create pre-training data.
def split_3data(org_path,out_dir):
    import pandas as pd
    import os
    from sklearn.model_selection import train_test_split
    title_cc_code=pd.read_csv(org_path,encoding='utf-8',names=['text','label'],header=0)
    from sklearn.preprocessing import LabelEncoder
    title_cc_code.head()
    title_cc_code=title_cc_code.dropna()
    le=LabelEncoder()
    title_cc_code['label']=title_cc_code['label'].str.strip()
    # print('{} unique labels'.format(title_cc_code['label'].nunique()))
    title_cc_code['text']=title_cc_code['text'].str.strip()
    title_cc_code['label_en']=le.fit_transform(title_cc_code['label']).astype(str)
    # title_cc_code0=title_cc_code.drop('label',axis=1)
    print(f'total sample is {title_cc_code.shape[0]}')
    df_train, df_test=train_test_split(title_cc_code,test_size=0.2,random_state=111)
    df_bert = pd.DataFrame({'guid': df_train.index,
                            'label': df_train['label_en'],
                            'alpha': ['a']*df_train.shape[0],
                            'text': df_train['text'].str.replace('\n',' ')})
    # split the remaining 80% into train and dev
    df_bert_train, df_bert_dev = train_test_split(df_bert, test_size=0.1,random_state=111)
    # create a new dataframe for the test data
    df_bert_test = pd.DataFrame({'guid': df_test.index,
                                 'text': df_test['text'].str.replace('\n',' ')})
    # output tsv files, no header for train and dev
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    df_bert_train.to_csv('{}/train.tsv'.format(out_dir), sep='\t', index=False, header=False)
    df_bert_dev.to_csv('{}/dev.tsv'.format(out_dir), sep='\t', index=False, header=False)
    df_bert_test.to_csv('{}/test.tsv'.format(out_dir), sep='\t', index=False, header=True)
    df_test[['label']].to_csv('{}/test_labels.csv'.format(out_dir),index=False)
    print(f'training samples are {df_bert_train.shape[0]}\n'
          f'eval samples are {df_bert_dev.shape[0]}\n'
          f'testing samples are {df_bert_test.shape[0]}'
          )
    print('{} unique labels'.format(title_cc_code['label_en'].nunique()))
    # print(f'WITH NAs, there are 55791 samples in total; W/O NAs, there are {title_cc_code.shape[0]} samples and 234 unique labels')
--------------------------------------------------------------------------------
/mathbert/optimization.py:
--------------------------------------------------------------------------------
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Functions and classes related to optimization (weight updates)."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import re
import tensorflow as tf


def create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps, use_tpu):
  """Creates an optimizer training op."""
  global_step = tf.train.get_or_create_global_step()

  learning_rate = tf.constant(value=init_lr, shape=[], dtype=tf.float32)

  # Implements linear decay of the learning rate.
  learning_rate = tf.train.polynomial_decay(
      learning_rate,
      global_step,
      num_train_steps,
      end_learning_rate=0.0,
      power=1.0,
      cycle=False)

  # Implements linear warmup. I.e., if global_step < num_warmup_steps, the
  # learning rate will be `global_step/num_warmup_steps * init_lr`.
  if num_warmup_steps:
    global_steps_int = tf.cast(global_step, tf.int32)
    warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32)

    global_steps_float = tf.cast(global_steps_int, tf.float32)
    warmup_steps_float = tf.cast(warmup_steps_int, tf.float32)

    warmup_percent_done = global_steps_float / warmup_steps_float
    warmup_learning_rate = init_lr * warmup_percent_done

    is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32)
    learning_rate = (
        (1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate)

  # It is recommended that you use this optimizer for fine-tuning, since this
  # is how the model was trained (note that the Adam m/v variables are NOT
  # loaded from init_checkpoint.)
  optimizer = AdamWeightDecayOptimizer(
      learning_rate=learning_rate,
      weight_decay_rate=0.01,
      beta_1=0.9,
      beta_2=0.999,
      epsilon=1e-6,
      exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])

  if use_tpu:
    optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)

  tvars = tf.trainable_variables()
  grads = tf.gradients(loss, tvars)

  # This is how the model was pre-trained.
  (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)

  train_op = optimizer.apply_gradients(
      zip(grads, tvars), global_step=global_step)

  # Normally the global step update is done inside of `apply_gradients`.
  # However, `AdamWeightDecayOptimizer` doesn't do this. But if you use
  # a different optimizer, you should probably take this line out.
  new_global_step = global_step + 1
  train_op = tf.group(train_op, [global_step.assign(new_global_step)])
  return train_op


class AdamWeightDecayOptimizer(tf.train.Optimizer):
  """A basic Adam optimizer that includes "correct" L2 weight decay."""

  def __init__(self,
               learning_rate,
               weight_decay_rate=0.0,
               beta_1=0.9,
               beta_2=0.999,
               epsilon=1e-6,
               exclude_from_weight_decay=None,
               name="AdamWeightDecayOptimizer"):
    """Constructs an AdamWeightDecayOptimizer."""
    super(AdamWeightDecayOptimizer, self).__init__(False, name)

    self.learning_rate = learning_rate
    self.weight_decay_rate = weight_decay_rate
    self.beta_1 = beta_1
    self.beta_2 = beta_2
    self.epsilon = epsilon
    self.exclude_from_weight_decay = exclude_from_weight_decay

  def apply_gradients(self, grads_and_vars, global_step=None, name=None):
    """See base class."""
    assignments = []
    for (grad, param) in grads_and_vars:
      if grad is None or param is None:
        continue

      param_name = self._get_variable_name(param.name)

      m = tf.get_variable(
          name=param_name + "/adam_m",
          shape=param.shape.as_list(),
          dtype=tf.float32,
          trainable=False,
          initializer=tf.zeros_initializer())
      v = tf.get_variable(
          name=param_name + "/adam_v",
          shape=param.shape.as_list(),
          dtype=tf.float32,
          trainable=False,
          initializer=tf.zeros_initializer())

      # Standard Adam update.
      next_m = (
          tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad))
      next_v = (
          tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2,
                                                    tf.square(grad)))

      update = next_m / (tf.sqrt(next_v) + self.epsilon)

      # Just adding the square of the weights to the loss function is *not*
      # the correct way of using L2 regularization/weight decay with Adam,
      # since that will interact with the m and v parameters in strange ways.
      #
      # Instead we want to decay the weights in a manner that doesn't interact
      # with the m/v parameters. This is equivalent to adding the square
      # of the weights to the loss with plain (non-momentum) SGD.
      if self._do_use_weight_decay(param_name):
        update += self.weight_decay_rate * param

      update_with_lr = self.learning_rate * update

      next_param = param - update_with_lr

      assignments.extend(
          [param.assign(next_param),
           m.assign(next_m),
           v.assign(next_v)])
    return tf.group(*assignments, name=name)

  def _do_use_weight_decay(self, param_name):
    """Whether to use L2 weight decay for `param_name`."""
    if not self.weight_decay_rate:
      return False
    if self.exclude_from_weight_decay:
      for r in self.exclude_from_weight_decay:
        if re.search(r, param_name) is not None:
          return False
    return True

  def _get_variable_name(self, param_name):
    """Get the variable name from the tensor name."""
    m = re.match("^(.*):\\d+$", param_name)
    if m is not None:
      param_name = m.group(1)
    return param_name
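
# Minimal usage sketch (illustrative hyper-parameter values, not MathBERT's exact
# settings):
#   train_op = create_optimizer(loss, init_lr=5e-5, num_train_steps=100000,
#                               num_warmup_steps=10000, use_tpu=False)
# During warmup the rate ramps linearly: at global_step=1000 with
# num_warmup_steps=10000, the effective rate is (1000/10000) * 5e-5 = 5e-6.
# After warmup it decays linearly (polynomial decay with power=1.0) to 0.0 at
# num_train_steps.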
--------------------------------------------------------------------------------
/mathbert/tokenization.py:
--------------------------------------------------------------------------------
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tokenization classes."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import re
import unicodedata
import six
import tensorflow as tf


def validate_case_matches_checkpoint(do_lower_case, init_checkpoint):
  """Checks whether the casing config is consistent with the checkpoint name."""

  # The casing has to be passed in by the user and there is no explicit check
  # as to whether it matches the checkpoint. The casing information probably
  # should have been stored in the bert_config.json file, but it's not, so
  # we have to heuristically detect it to validate.

  if not init_checkpoint:
    return

  m = re.match("^.*?([A-Za-z0-9_-]+)/bert_model.ckpt", init_checkpoint)
  if m is None:
    return

  model_name = m.group(1)

  lower_models = [
      "uncased_L-24_H-1024_A-16", "uncased_L-12_H-768_A-12",
      "multilingual_L-12_H-768_A-12", "chinese_L-12_H-768_A-12"
  ]

  cased_models = [
      "cased_L-12_H-768_A-12", "cased_L-24_H-1024_A-16",
      "multi_cased_L-12_H-768_A-12"
  ]

  is_bad_config = False
  if model_name in lower_models and not do_lower_case:
    is_bad_config = True
    actual_flag = "False"
    case_name = "lowercased"
    opposite_flag = "True"

  if model_name in cased_models and do_lower_case:
    is_bad_config = True
    actual_flag = "True"
    case_name = "cased"
    opposite_flag = "False"

  if is_bad_config:
    raise ValueError(
        "You passed in `--do_lower_case=%s` with `--init_checkpoint=%s`. "
        "However, `%s` seems to be a %s model, so you "
        "should pass in `--do_lower_case=%s` so that the fine-tuning matches "
        "how the model was pre-trained. If this error is wrong, please "
        "just comment out this check." % (actual_flag, init_checkpoint,
                                          model_name, case_name, opposite_flag))


def convert_to_unicode(text):
  """Converts `text` to Unicode (if it's not already), assuming utf-8 input."""
  if six.PY3:
    if isinstance(text, str):
      return text
    elif isinstance(text, bytes):
      return text.decode("utf-8", "ignore")
    else:
      raise ValueError("Unsupported string type: %s" % (type(text)))
  elif six.PY2:
    if isinstance(text, str):
      return text.decode("utf-8", "ignore")
    elif isinstance(text, unicode):
      return text
    else:
      raise ValueError("Unsupported string type: %s" % (type(text)))
  else:
    raise ValueError("Not running on Python2 or Python 3?")


def printable_text(text):
  """Returns text encoded in a way suitable for print or `tf.logging`."""

  # These functions want `str` for both Python2 and Python3, but in one case
  # it's a Unicode string and in the other it's a byte string.
  if six.PY3:
    if isinstance(text, str):
      return text
    elif isinstance(text, bytes):
      return text.decode("utf-8", "ignore")
    else:
      raise ValueError("Unsupported string type: %s" % (type(text)))
  elif six.PY2:
    if isinstance(text, str):
      return text
    elif isinstance(text, unicode):
      return text.encode("utf-8")
    else:
      raise ValueError("Unsupported string type: %s" % (type(text)))
  else:
    raise ValueError("Not running on Python2 or Python 3?")


def load_vocab(vocab_file):
  """Loads a vocabulary file into a dictionary."""
  vocab = collections.OrderedDict()
  index = 0
  with tf.gfile.GFile(vocab_file, "r") as reader:
    while True:
      token = convert_to_unicode(reader.readline())
      if not token:
        break
      token = token.strip()
      vocab[token] = index
      index += 1
  return vocab


def convert_by_vocab(vocab, items):
  """Converts a sequence of [tokens|ids] using the vocab."""
  output = []
  for item in items:
    output.append(vocab[item])
  return output


def convert_tokens_to_ids(vocab, tokens):
  return convert_by_vocab(vocab, tokens)


def convert_ids_to_tokens(inv_vocab, ids):
  return convert_by_vocab(inv_vocab, ids)


def whitespace_tokenize(text):
  """Runs basic whitespace cleaning and splitting on a piece of text."""
  text = text.strip()
  if not text:
    return []
  tokens = text.split()
  return tokens


class FullTokenizer(object):
  """Runs end-to-end tokenization."""

  def __init__(self, vocab_file, do_lower_case=True):
    self.vocab = load_vocab(vocab_file)
    self.inv_vocab = {v: k for k, v in self.vocab.items()}
    self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
    self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)

  def tokenize(self, text):
    split_tokens = []
    for token in self.basic_tokenizer.tokenize(text):
      for sub_token in self.wordpiece_tokenizer.tokenize(token):
        split_tokens.append(sub_token)

    return split_tokens

  def convert_tokens_to_ids(self, tokens):
    return convert_by_vocab(self.vocab, tokens)

  def convert_ids_to_tokens(self, ids):
    return convert_by_vocab(self.inv_vocab, ids)


class BasicTokenizer(object):
  """Runs basic tokenization (punctuation splitting, lower casing, etc.)."""

  def __init__(self, do_lower_case=True):
    """Constructs a BasicTokenizer.

    Args:
      do_lower_case: Whether to lower case the input.
    """
    self.do_lower_case = do_lower_case

  def tokenize(self, text):
    """Tokenizes a piece of text."""
    text = convert_to_unicode(text)
    text = self._clean_text(text)

    # This was added on November 1st, 2018 for the multilingual and Chinese
    # models. This is also applied to the English models now, but it doesn't
    # matter since the English models were not trained on any Chinese data
    # and generally don't have any Chinese data in them (there are Chinese
    # characters in the vocabulary because Wikipedia does have some Chinese
    # words in the English Wikipedia.).
    text = self._tokenize_chinese_chars(text)

    orig_tokens = whitespace_tokenize(text)
    split_tokens = []
    for token in orig_tokens:
      if self.do_lower_case:
        token = token.lower()
        token = self._run_strip_accents(token)
      split_tokens.extend(self._run_split_on_punc(token))

    output_tokens = whitespace_tokenize(" ".join(split_tokens))
    return output_tokens

  def _run_strip_accents(self, text):
    """Strips accents from a piece of text."""
    text = unicodedata.normalize("NFD", text)
    output = []
    for char in text:
      cat = unicodedata.category(char)
      if cat == "Mn":
        continue
      output.append(char)
    return "".join(output)

  def _run_split_on_punc(self, text):
    """Splits punctuation on a piece of text."""
    chars = list(text)
    i = 0
    start_new_word = True
    output = []
    while i < len(chars):
      char = chars[i]
      if _is_punctuation(char):
        output.append([char])
        start_new_word = True
      else:
        if start_new_word:
          output.append([])
        start_new_word = False
        output[-1].append(char)
      i += 1

    return ["".join(x) for x in output]

  def _tokenize_chinese_chars(self, text):
    """Adds whitespace around any CJK character."""
    output = []
    for char in text:
      cp = ord(char)
      if self._is_chinese_char(cp):
        output.append(" ")
        output.append(char)
        output.append(" ")
      else:
        output.append(char)
    return "".join(output)

  def _is_chinese_char(self, cp):
    """Checks whether CP is the codepoint of a CJK character."""
    # This defines a "chinese character" as anything in the CJK Unicode block:
    #   https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
    #
    # Note that the CJK Unicode block is NOT all Japanese and Korean characters,
    # despite its name. The modern Korean Hangul alphabet is a different block,
    # as is Japanese Hiragana and Katakana. Those alphabets are used to write
    # space-separated words, so they are not treated specially and handled
    # like all of the other languages.
    if ((cp >= 0x4E00 and cp <= 0x9FFF) or  #
        (cp >= 0x3400 and cp <= 0x4DBF) or  #
        (cp >= 0x20000 and cp <= 0x2A6DF) or  #
        (cp >= 0x2A700 and cp <= 0x2B73F) or  #
        (cp >= 0x2B740 and cp <= 0x2B81F) or  #
        (cp >= 0x2B820 and cp <= 0x2CEAF) or
        (cp >= 0xF900 and cp <= 0xFAFF) or  #
        (cp >= 0x2F800 and cp <= 0x2FA1F)):  #
      return True

    return False

  def _clean_text(self, text):
    """Performs invalid character removal and whitespace cleanup on text."""
    output = []
    for char in text:
      cp = ord(char)
      if cp == 0 or cp == 0xfffd or _is_control(char):
        continue
      if _is_whitespace(char):
        output.append(" ")
      else:
        output.append(char)
    return "".join(output)


class WordpieceTokenizer(object):
  """Runs WordPiece tokenization."""

  def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=200):
    self.vocab = vocab
    self.unk_token = unk_token
    self.max_input_chars_per_word = max_input_chars_per_word

  def tokenize(self, text):
    """Tokenizes a piece of text into its word pieces.

    This uses a greedy longest-match-first algorithm to perform tokenization
    using the given vocabulary.

    For example:
      input = "unaffable"
      output = ["un", "##aff", "##able"]

    Args:
      text: A single token or whitespace separated tokens. This should have
        already been passed through `BasicTokenizer`.

    Returns:
      A list of wordpiece tokens.
    """

    text = convert_to_unicode(text)

    output_tokens = []
    for token in whitespace_tokenize(text):
      chars = list(token)
      if len(chars) > self.max_input_chars_per_word:
        output_tokens.append(self.unk_token)
        continue

      is_bad = False
      start = 0
      sub_tokens = []
      while start < len(chars):
        end = len(chars)
        cur_substr = None
        while start < end:
          substr = "".join(chars[start:end])
          if start > 0:
            substr = "##" + substr
          if substr in self.vocab:
            cur_substr = substr
            break
          end -= 1
        if cur_substr is None:
          is_bad = True
          break
        sub_tokens.append(cur_substr)
        start = end

      if is_bad:
        output_tokens.append(self.unk_token)
      else:
        output_tokens.extend(sub_tokens)
    return output_tokens


def _is_whitespace(char):
  """Checks whether `char` is a whitespace character."""
  # \t, \n, and \r are technically control characters but we treat them
  # as whitespace since they are generally considered as such.
  if char == " " or char == "\t" or char == "\n" or char == "\r":
    return True
  cat = unicodedata.category(char)
  if cat == "Zs":
    return True
  return False


def _is_control(char):
  """Checks whether `char` is a control character."""
  # These are technically control characters but we count them as whitespace
  # characters.
  if char == "\t" or char == "\n" or char == "\r":
    return False
  cat = unicodedata.category(char)
  if cat in ("Cc", "Cf"):
    return True
  return False


def _is_punctuation(char):
  """Checks whether `char` is a punctuation character."""
  cp = ord(char)
  # We treat all non-letter/number ASCII as punctuation.
  # Characters such as "^", "$", and "`" are not in the Unicode
  # Punctuation class but we treat them as punctuation anyway, for
  # consistency.
  if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or
      (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)):
    return True
  cat = unicodedata.category(char)
  if cat.startswith("P"):
    return True
  return False
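
# Minimal usage sketch (assumes a MathBERT vocab.txt has been downloaded
# locally; the sentence is illustrative):
#   tokenizer = FullTokenizer(vocab_file='vocab.txt', do_lower_case=True)
#   tokens = tokenizer.tokenize('Solve the quadratic equation x^2 - 5x + 6 = 0.')
#   ids = tokenizer.convert_tokens_to_ids(tokens)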
--------------------------------------------------------------------------------
/scripts/math_corpora_data_process.py:
--------------------------------------------------------------------------------
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.pdfpage import PDFPage
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
import io
import pandas as pd
import numpy as np
import os, re, json
import warnings
warnings.filterwarnings('ignore')
from nltk.tokenize import word_tokenize

import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

# -------auto-download for utah math curriculum---------

def utah_math(url_list,folder_location):
    for url in url_list:
        if not os.path.exists(folder_location):os.mkdir(folder_location)

        response = requests.get(url)
        soup= BeautifulSoup(response.text, "html.parser")
        marker='middle-school-math-'+url.split('/')[-2]
        print(marker)
        for i, link in enumerate(soup.select("a[href]")):
            if marker in link['href']:
                print(link['href'])

                soup2=BeautifulSoup(requests.get(link['href']).text,'html.parser')
                for l in soup2.select("a[class$='dropdown-item']"):
                    print(l['href'])

                    # name the pdf files using the last portion of each link, which is unique in this case
                    filename = os.path.join(folder_location,l['href'].split('/')[-1])
                    print(filename)
                    with open(filename, 'wb') as f:
                        f.write(requests.get(l['href']).content)


# to execute
url_list=["http://utahmiddleschoolmath.org/6th-grade/student-materials.shtml","http://utahmiddleschoolmath.org/7th-grade/student-materials.shtml","http://utahmiddleschoolmath.org/8th-grade/student-materials.shtml"]
folder_location = r'../utah_math/'
utah_math(url_list,folder_location)

# ----auto-download for engageNY curriculum---
def engage_NY(url, folder_location, keywords):

    if not os.path.exists(folder_location):os.mkdir(folder_location)

    for i, k in enumerate(keywords):
        if i>=0:
            link1=urljoin(url,k)
            # print(link1)
            response = requests.get(link1)
            soup= BeautifulSoup(response.text, "html.parser")
            for i, link in enumerate(soup.select("a[href]")):
                link_list=[]
                if 'module' in link['href']:
                    link_list.append(link['href'])
                for l in link_list:
                    join2=urljoin(url,l)
                    res2=requests.get(join2)
                    soup2= BeautifulSoup(res2.text, "html.parser")
                    for i, link in enumerate(soup2.select("a[href]")):
                        if '.pdf' in link['href']:
                            join3=urljoin(url,link['href'])
                            print(join3)
                            marker=link['href'].split('?')[0].split('/')[-1]
                            filename = os.path.join(folder_location,marker)
                            print(filename)
                            with open(filename, 'wb') as f:
                                f.write(requests.get(join3).content)

# to execute
url = "https://www.engageny.org/"
folder_location = r'../engageNY/'
keywords=['resource/grade-1-mathematics','resource/grade-2-mathematics','resource/grade-3-mathematics',
          'resource/grade-4-mathematics','resource/grade-5-mathematics','resource/grade-6-mathematics',
          'resource/grade-7-mathematics','resource/kindergarten-mathematics','content/prekindergarten-mathematics',
          'resource/high-school-algebra-i','resource/high-school-algebra-ii','resource/high-school-geometry',
          'content/precalculus-and-advanced-topics','resource/grade-8-mathematics']
engage_NY(url, folder_location, keywords)


# ----auto-download math books-----
def crawl_books(url,folder_location, booknames):
    def remove(value):
        deletechars='\/:*?"<>|'
        for c in deletechars:
            value = value.replace(c,'')
        return value

    if not os.path.exists(folder_location):os.mkdir(folder_location)

    response = requests.get(url)
    soup= BeautifulSoup(response.text, "html.parser")

    link_list=[]
    pdf_list=[]
    non_pdf=[]
    m=0
    for i, link in enumerate(soup.select("a[href]")):
        # link here is the book-name-level link
        try:
            if any(word in link.text for word in booknames):
                m+=1
                book_name=marker='_'.join(link.text.split(' '))
                print(f'{m}:Book name is {book_name}!')
                link_list.append(link['href'])

                if any(word in link['href'] for word in ['.PDF','pdf']):
                    print('pdf book:',link['href'])
                    marker='_'.join(link.text.split(' '))+'.pdf'
                    pdf_location = r'../FREE_MATH_BOOK/more/pdf'
                    if not os.path.exists(pdf_location):os.mkdir(pdf_location)
                    filename = os.path.join(pdf_location,marker)
                    pdf_list.append(marker)
                    print(filename)
                    # with open(filename, 'wb') as f:
                    #     f.write(requests.get(link['href']).content)
                else:
                    print('non-direct pdf',link['href'])

                    book_name=marker='_'.join(link.text.split(' '))
                    book_name=remove(book_name)
                    # make a book name folder
                    folder_location = r'../FREE_MATH_BOOK/more/non_pdf'
                    if not os.path.exists(folder_location):os.mkdir(folder_location)

                    non_pdf.append(book_name)
                    non_pdf_direct=[]
                    res2=requests.get(link['href'])
                    soup2=BeautifulSoup(res2.text,'html.parser')
                    chapter_list=[]
                    # link2 is the chapter-level link
                    for i, link2 in enumerate(soup2.select('a[href]')):
                        non_pdf_direct.append(book_name)

                        if '.pdf' in link2['href']:

                            marker=link2['href'].split('/')[-1]
                            book_dir=os.path.join(folder_location,book_name)
                            if not os.path.exists(book_dir):os.mkdir(book_dir)
                            filename = os.path.join(book_dir,marker)
                            print(f'Save file at {filename}!')

                            with open(filename, 'wb') as f:
                                f.write(requests.get(urljoin(link['href'],link2['href'])).content)
                    print('\n')
        except:
            pass

    print(f'We extracted {len(pdf_list)} direct pdf books and {len(non_pdf)} total non-direct pdf books and {len(non_pdf_direct)} books have chapters!')
    print(f'We extracted {len(link_list)} books!')

# to execute
url="https://www.openculture.com/free-math-textbooks"

folder_location = r'../FREE_MATH_BOOK/more/'
fileObject=open('booknames.json','r')
jsonContent=fileObject.read()
booknames=json.loads(jsonContent)
crawl_books(url,folder_location, booknames)


# ----convert pdf to text-----
def pdfparser(in_path,out_path):

    fp = open(in_path, 'rb')
    rsrcmgr = PDFResourceManager()
    retstr = io.StringIO()
    codec ='utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, laparams=laparams)
    # Create a PDF interpreter object.
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    # Process each page contained in the document.

    for page in PDFPage.get_pages(fp):
        interpreter.process_page(page)
    data = retstr.getvalue()
    fp.close()
    with open(out_path,'w',encoding=codec) as f:
        f.write(data)
    return data


def pdfparser_multi(data_dir,out_path):
    tokens=0
    data=''
    for i, f in enumerate(os.listdir(data_dir)):

        if os.path.getsize(os.path.join(data_dir,f))!=0 and f.endswith('.pdf'):
            # print(i,f)
            in_path=os.path.join(data_dir,f)
            fp = open(in_path, 'rb')
            rsrcmgr = PDFResourceManager()
            retstr = io.StringIO()
            codec ='utf-8'
            laparams = LAParams()
            device = TextConverter(rsrcmgr, retstr, laparams=laparams)
            # Create a PDF interpreter object.
            interpreter = PDFPageInterpreter(rsrcmgr, device)
            # Process each page contained in the document.

            for page in PDFPage.get_pages(fp):
                interpreter.process_page(page)
            data = retstr.getvalue()
            fp.close()
            token=word_tokenize(data)
            print(f'file {i}:{f}: {len(token)} tokens!')
            tokens+=len(token)

            with open(out_path,'a+',encoding=codec) as f:
                f.write(data)
        else:
            pass
    print(f'===Total token count is {tokens}!===')
    return data, tokens

# TO EXECUTE

data_dir='../FREE_MATH_BOOK/non_pdf/Basic_Concepts_of_Mathematics'
out_path='../FREE_MATH_BOOK_CVT/Basic_Concepts_of_Mathematics.txt'
data,token=pdfparser_multi(data_dir,out_path)


# -----cut on tokens for more efficient training---

# the output data will have 2 blank lines between sequences
def cut_on_tokens(data_path,out_dir,token_len):
    out_path=os.path.join(out_dir,data_path.split('/')[-1].split('.')[0]+'_2nl_{}_wD.txt'.format(token_len))

    b=open(data_path,'r',encoding='utf-8')
    pb=open(out_path,'w',encoding='utf-8')
    eff_line=''
    token_dict={}

    try:
        for i, line in enumerate(b):

            if line!='.' and len(line)>1:

                line=line.rstrip('\n')+' '
                eff_line+=line
                token=word_tokenize(eff_line)
                token_dict[i]=[len(token),eff_line]
                if token_dict.get(i)[0]>token_len:

                    print(i-1,token_dict[i-1][0])
                    pb.write(token_dict[i-1][1]+'\n\n')
                    eff_line=''
                    line=line.rstrip('\n')+' '
                    eff_line+=line
                    token2=word_tokenize(line)
                    token_dict[i]=[len(token2),eff_line]

        pb.close()
        b.close()
    except:
        # close the files even if a malformed line raises mid-way
        pb.close()
        b.close()

# to execute the above function to get 256 tokens per sequence
data_dir='Corpus_data/mathCorpus/individual_txt/'
out_dir='Corpus_data/mathCorpus/individual_txt/processed1/'
token_len=256
for i, f in enumerate(os.listdir(data_dir)):
    if f.endswith('.txt'):
        print(i,f)
        data_path=os.path.join(data_dir,f)
        cut_on_tokens(data_path,out_dir,token_len)
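
# Illustration of the intermediate formats (made-up sentences, not real corpus text):
#
# cut_on_tokens output -- sequences of up to token_len tokens separated by two
# blank lines:
#
#   Add fractions with unlike denominators by using equivalent fractions. ...
#
#
#   Solve real-world problems involving division of fractions by fractions. ...
#
# delete_extra_line_v2 below is intended to reduce this to the input format the
# TensorFlow create_pretraining_data.py script expects: one sentence per line,
# with a single blank line separating sequences.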

# ----delete extra lines from original files---
# per the tensorflow pre-training format, there should be 1 blank line between
# each sequence; therefore, we delete one line from the above output
def delete_extra_line_v2(data_path,out_dir):
    out_path=data_path.split('.')[0]+'_1NL_short.txt'
    if not os.path.exists(out_dir):os.mkdir(out_dir)
    out_path=os.path.join(out_dir,out_path.split('/')[-1])
    b=open(data_path,'r',encoding='utf-8')
    pb=open(out_path,'w',encoding='utf-8')
    try:
        for line in b.readlines()[1:]:
            line=line.rstrip('\n')
            lines=re.split(r'\.|\?', line.strip())
            GE2_lines=[l.strip() for l in lines if len(l.strip())!=1 and l.strip()!='']
            for l in GE2_lines:
                pb.write(l+'\n')
    except:
        pb.close()
        b.close()
    pb.close()
    b.close()

# to execute
data_dir='K12BERT/Corpus_data/mathCorpus/individual_txt/dedup/dedup_files/'
for i, f in enumerate(os.listdir(data_dir)):
    print(i,f)
    data_path=os.path.join(data_dir,f)
    out_dir='K12BERT/Corpus_data/mathCorpus/individual_txt/dedup/processed1'
    if data_path.endswith('.txt'):

        delete_extra_line_v2(data_path,out_dir)

# ---to convert the pretrained data from csv files to txt format with 1 blank line between---
# we have data saved in csv format; we convert it to txt files for tensorflow
# training and insert 1 blank line in between
def convert_pretrain_csv0(data_path):
    out_path=data_path.split('.')[0]+'_short.txt'
    b=open(data_path,'r',encoding='utf-8')
    pb=open(out_path,'w',encoding='utf-8')
    for line in b.readlines()[1:]:
        lines=line.split('.')
        non_empty_lines=[l for l in lines if l.strip()!=""]
        if non_empty_lines!=['"']:
            valid_lines=''
            for line in non_empty_lines:
                valid_lines+=line+'\n\n'
            pb.write(valid_lines)
    # close the files once done
    pb.close()
    b.close()
--------------------------------------------------------------------------------
/scripts/convert_evaluate_test.py:
--------------------------------------------------------------------------------
# =====For KC task prediction results conversion and evaluation================
def convert_test_result(orig_testPath,pred_testPath,org_dataPath,out_dir):
    from pathlib import Path
    import pandas as pd
    # read the original test data for the text and id
    df_test = pd.read_csv(orig_testPath, sep='\t',engine='python')
    df_test['guid']=df_test['guid'].astype(str)
    print(f'original test file has shape {df_test.shape}')
    # read the results data for the probabilities
    df_result = pd.read_csv(pred_testPath, sep='\t', header=None)
    print(f'predicted test file has shape {df_result.shape}')
    out_dir=Path(out_dir)
    Path.mkdir(out_dir,exist_ok=True)
    import numpy as np
    df_map_result = pd.DataFrame({'guid': df_test['guid'],
                                  'text': df_test['text'],
                                  'top1': df_result.idxmax(axis=1),
                                  'top1_probability':df_result.max(axis=1),
                                  'top2': df_result.columns[df_result.values.argsort(1)[:,-2]],
                                  'top2_probability':df_result.apply(lambda x: sorted(x)[-2],axis=1),
                                  'top3': df_result.columns[df_result.values.argsort(1)[:,-3]],
                                  'top3_probability':df_result.apply(lambda x: sorted(x)[-3],axis=1),
                                  'top4': df_result.columns[df_result.values.argsort(1)[:,-4]],
                                  'top4_probability':df_result.apply(lambda x: sorted(x)[-4],axis=1),
                                  'top5': df_result.columns[df_result.values.argsort(1)[:,-5]],
                                  'top5_probability':df_result.apply(lambda x: sorted(x)[-5],axis=1)
                                  })
    # view sample rows of the newly created dataframe
    df_map_result.head(10)
    df_map_result['top1']=df_map_result['top1'].astype(str)
    df_map_result['top2']=df_map_result['top2'].astype(str)
    df_map_result['top3']=df_map_result['top3'].astype(str)
    df_map_result['top4']=df_map_result['top4'].astype(str)
    df_map_result['top5']=df_map_result['top5'].astype(str)
    print(f'mapped test file has shape {df_map_result.shape}')
    data=pd.read_csv(org_dataPath,encoding='utf-8',names=['text','label'],header=0)
    from sklearn.preprocessing import LabelEncoder
    le=LabelEncoder()
    data['label_en']=le.fit_transform(data['label'])
    data['Label']=le.inverse_transform(data['label_en'])
    key=pd.DataFrame({'code':data['label_en'].unique(),'Label':le.inverse_transform(data['label_en'].unique())})
    key.to_csv('further-pre-training/label-map.csv',index=False)

    key['code']=key.code.astype(str)
    label_map_dict=dict(key.to_dict(orient='split')['data'])
    marker=pred_testPath.split('/')[2].split('.')[0]
    df_map_result=df_map_result.replace({'top1':label_map_dict,'top2':label_map_dict,'top3':label_map_dict,'top4':label_map_dict,'top5':label_map_dict})
    df_map_result.to_csv('{}/{}_converted.csv'.format(out_dir,marker),index=False)
    print(df_map_result.shape)
    return df_map_result


def match_top3_f1_acc(data_dir,label_Path,out_dir):
    from datetime import datetime
    import numpy as np
    import pandas as pd
    import os
    from sklearn.metrics import accuracy_score
    from sklearn.metrics import f1_score
    label_data=pd.read_csv(label_Path,names=['label'],header=0)
    print(f'test data shape: {label_data.shape}')
    for i, file in enumerate(os.listdir(data_dir)):

        pred_dataPath=os.path.join(data_dir,file)
        print('======')
        print(i,pred_dataPath)

        if pred_dataPath.endswith('.csv'):

            pred_data=pd.read_csv(pred_dataPath,encoding='ISO-8859-1')
            print(f'predicted data shape: {pred_data.shape}')
            print('total {} classes are predicted as top class'.format(len(pred_data['top1'].unique())))
            match=pd.concat([pred_data,label_data],axis=1)
            print(f'merged data shape: {match.shape}')
            correct_top1=match[match['top1']==match['label']].top1.unique()
            correct_top2=match[match['top2']==match['label']].top2.unique()
            correct_top3=match[match['top3']==match['label']].top3.unique()
            correct_all=list(set(list(correct_top1)+list(correct_top2)+list(correct_top3)))
            print('Correct top 1 label {}'.format(len(correct_top1)))
            print('Correct top 2 label {}'.format(len(correct_top2)))
            print('Correct top 3 label {}'.format(len(correct_top3)))
            print('Total top 3 Correct labels {}'.format(len(correct_all)))
            match['matched1']=np.where(match['top1']==match['label'],1,0)
            match['matched2']=np.where((match['top1']==match['label']) | (match['top2']==match['label']),1,0)
            match['matched3']=np.where((match['top1']==match['label']) | (match['top2']==match['label']) | (match['top3']==match['label']),1,0)

            marker=file.split('.')[0]
            from pathlib import Path
            out_dir=Path(out_dir)
            out_dir.mkdir(exist_ok=True)
            match.to_csv('{}/{}_matched.csv'.format(out_dir,marker),index=False)


            print('---Below is F1 Score(weighted)--')
            top1_f1=round(f1_score(match['label'], match['top1'], average='weighted')*100,3)
            print('Top1 label F1 score(weighted): {}%'.format(top1_f1))
            top2_f1=round(f1_score(match['label'], match['top2'], average='weighted')*100,3)
            print('Top2 label F1 score(weighted): {}%'.format(top2_f1+top1_f1))
            top3_f1=round(f1_score(match['label'], match['top3'], average='weighted')*100,3)
            print('Top3 label F1 score(weighted): {}%'.format(top3_f1+top2_f1+top1_f1))

            print ('---Below is Sklearn accuracy---')
            top1_accuracy=round(accuracy_score(match['label'], match['top1'])*100,3)
            print('Top1 label accuracy score: {}%'.format(top1_accuracy))
            top2_accuracy=round(accuracy_score(match['label'], match['top2'])*100,3)
            print('Top2 label accuracy score: {}%'.format(top2_accuracy+top1_accuracy))
            top3_accuracy=round(accuracy_score(match['label'], match['top3'])*100,3)
            print('Top3 label accuracy score: {}%'.format(top3_accuracy+top2_accuracy+top1_accuracy))


# example code to execute the functions

orig_testPath='TAPT/skill_code_prob_v2/CoLA/test.tsv'
pred_testPath='EVAL_RESULT/MathBERT/COLA_385_MathBERT_LR5E-5_BS64_MS512_EP25_customVocab_FIT_SEED4_problem-test_results.tsv'
org_dataPath='further-pre-training/CORPUS/ALL_GRADES/problem_text_all_grades_v2.csv'
out_dir='TEST_converted_PROB_V2_MATHBERT'
df_map_result=convert_test_result(orig_testPath,pred_testPath,org_dataPath,out_dir)
df_map_result.head()

data_dir='TEST_converted_PROB_V2_MATHBERT'
org_dataPath='TAPT/skill_code_prob_v2/CoLA/test_labels.csv'
out_dir='SKILL_CODE_PROB_V2_matched_MATHBERT-TAPT'
match_top3_f1_acc(data_dir,org_dataPath,out_dir)
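
# Worked illustration of the top-k extraction used above (made-up numbers):
# for a single row of class probabilities [0.1, 0.7, 0.2],
#   df_result.idxmax(axis=1)                         -> 1     (top1 class index)
#   df_result.values.argsort(1)[:,-2]                -> [2]   (top2 class index)
#   df_result.apply(lambda x: sorted(x)[-2],axis=1)  -> [0.2] (top2 probability)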
df_test['answer'], 146 | 'top1': df_result.idxmax(axis=1), 147 | 'top1_probability':df_result.max(axis=1), 148 | 'top2': df_result.columns[df_result.values.argsort(1)[:,-2]], 149 | 'top2_probability':df_result.apply(lambda x: sorted(x)[-2],axis=1), 150 | 'top3': df_result.columns[df_result.values.argsort(1)[:,-3]], 151 | 'top3_probability':df_result.apply(lambda x: sorted(x)[-3],axis=1), 152 | 'top4': df_result.columns[df_result.values.argsort(1)[:,-4]], 153 | 'top4_probability':df_result.apply(lambda x: sorted(x)[-4],axis=1), 154 | 'top5': df_result.columns[df_result.values.argsort(1)[:,-5]], 155 | 'top5_probability':df_result.apply(lambda x: sorted(x)[-5],axis=1) 156 | }) 157 | #view sample rows of the newly created dataframe 158 | # display(df_map_result.head()) 159 | df_map_result['top1']=df_map_result['top1'].astype(str) 160 | df_map_result['top2']=df_map_result['top2'].astype(str) 161 | df_map_result['top3']=df_map_result['top3'].astype(str) 162 | df_map_result.dtypes 163 | df_map_result['top4']=df_map_result['top4'].astype(str) 164 | df_map_result['top5']=df_map_result['top5'].astype(str) 165 | print(f'mapped test file has shape {df_map_result.shape}') 166 | label_map_dict={'0':1,'1':2,'2':3,'3':4,'4':5} 167 | marker=pred_testPath.split('/')[-1].split('.')[0] 168 | df_map_result=df_map_result.replace({'top1':label_map_dict,'top2':label_map_dict,'top3':label_map_dict,'top4':label_map_dict,'top5':label_map_dict}) 169 | df_map_result.to_csv('{}/{}_converted.csv'.format(out_dir,marker),index=False) 170 | print(df_map_result.shape)#(702, 12) 171 | return df_map_result 172 | 173 | 174 | def convert_auc_autoGrade(orig_testPath,pred_testDir,label_Path,org_dataPath,out_dir): 175 | from pathlib import Path 176 | import pandas as pd 177 | from sklearn.metrics import roc_auc_score 178 | import numpy as np 179 | import os 180 | #read the original test data for the text and id 181 | df_test = pd.read_csv(orig_testPath, sep='\t',engine='python') 182 | df_test['guid']=df_test.iloc[:,0].astype(str) 183 | print(f'original test file has shape {df_test.shape}') 184 | print('----') 185 | out_dir=Path(out_dir) 186 | Path.mkdir(out_dir,exist_ok=True) 187 | #read the results data for the probabilities 188 | for i, f in enumerate(os.listdir(pred_testDir)): 189 | if f.endswith('test_results.tsv'): 190 | print(i, f) 191 | 192 | df_result = pd.read_csv(os.path.join(pred_testDir,f), sep='\t', header=None) 193 | print(f'predicted test file has shape {df_result.shape}') 194 | label_data=pd.read_csv(label_Path,usecols=['label']) 195 | auc=round(roc_auc_score(label_data['label'], df_result,multi_class='ovo',average='weighted')*100,3) 196 | print(f'average auc for 5 classes is {auc} %!') 197 | print('----') 198 | 199 | # example code to execute the functions 200 | orig_testPath='mathBERT-downstreamTasks/auto_grade/test.tsv' 201 | pred_testPath='EVAL_RESULT/MathBERT/AUTO-GRADE/auto_grade_MathBERT_LR2E-5_BS64_MS512_EP5_customVocab_FIT_seed5-v2_600k-test_results.tsv' 202 | org_dataPath='mathBERT-downstreamTasks/auto_grade/valid_answers_clean_bert.csv' 203 | out_dir='TEST_converted_autoGrade_MATHBERT' 204 | df_map_result=convert_autoGrade(orig_testPath,pred_testPath,org_dataPath,out_dir) 205 | df_map_result.head() 206 | 207 | orig_testPath='mathBERT-downstreamTasks/auto_grade/test.tsv' 208 | pred_testDir='EVAL_RESULT/MathBERT/AUTO-GRADE/' 209 | org_dataPath='mathBERT-downstreamTasks/auto_grade/valid_answers_clean_bert.csv' 210 | label_Path='mathBERT-downstreamTasks/auto_grade/test_labels.csv' 211 | 
out_dir='TEST_converted_autoGrade_MATHBERT' 212 | convert_auc_autoGrade(orig_testPath,pred_testDir,label_Path,org_dataPath,out_dir) 213 | 214 | # =====For KT prediction results conversion and evaluation================ 215 | def convert_KT(orig_testPath,pred_testPath,org_dataPath,out_dir): 216 | from pathlib import Path 217 | import pandas as pd 218 | #read the original test data for the text and id 219 | df_test = pd.read_csv(orig_testPath, sep='\t')#,engine='python' 220 | df_test['guid']=df_test.iloc[:,0].astype(str) 221 | print(f'original test file has shape {df_test.shape}') 222 | #read the results data for the probabilities 223 | df_result = pd.read_csv(pred_testPath, sep='\t', header=None) 224 | print(f'predicted test file has shape {df_result.shape}') 225 | out_dir=Path(out_dir) 226 | Path.mkdir(out_dir,exist_ok=True) 227 | import numpy as np 228 | # df_map 229 | df_map_result = pd.DataFrame({'guid': df_test['guid'], 230 | 'question': df_test['question'], 231 | 'answer': df_test['answer'], 232 | 'top1': df_result.idxmax(axis=1), 233 | 'top1_probability':df_result.max(axis=1), 234 | 'top2': df_result.columns[df_result.values.argsort(1)[:,-2]], 235 | 'top2_probability':df_result.apply(lambda x: sorted(x)[-2],axis=1), 236 | 237 | }) 238 | #view sample rows of the newly created dataframe 239 | # display(df_map_result.head()) 240 | df_map_result['top1']=df_map_result['top1'].astype(str) 241 | df_map_result['top2']=df_map_result['top2'].astype(str) 242 | 243 | print(f'mapped test file has shape {df_map_result.shape}') 244 | 245 | label_map_dict={'0':0,'1':1} 246 | marker=pred_testPath.split('/')[-1].split('.')[0] 247 | df_map_result=df_map_result.replace({'top1':label_map_dict,'top2':label_map_dict}) 248 | df_map_result.to_csv('{}/{}_converted.csv'.format(out_dir,marker),index=False) 249 | print(df_map_result.shape)#(702, 12) 250 | return df_map_result 251 | 252 | def match_top3_f1_acc_KT(data_dir,label_Path,out_dir): 253 | from datetime import datetime 254 | import numpy as np 255 | import pandas as pd 256 | import os 257 | from sklearn.metrics import accuracy_score 258 | from sklearn.metrics import f1_score 259 | from pathlib import Path 260 | label_data=pd.read_csv(label_Path,names=['guid','label'],header=0) 261 | print(f'test data shape{label_data.shape}') 262 | for i, file in enumerate(os.listdir(data_dir)): 263 | 264 | pred_dataPath=os.path.join(data_dir,file) 265 | print('======') 266 | print(i,pred_dataPath) 267 | 268 | 269 | if pred_dataPath.endswith('.csv'): 270 | 271 | pred_data=pd.read_csv(pred_dataPath,encoding='ISO-8859-1') 272 | print(f'predicted data shape: {pred_data.shape}') 273 | print('total {} classes are predicted as top class'.format(len(pred_data['top1'].unique()))) 274 | match=pd.concat([pred_data,label_data],axis=1) 275 | # marker=file.split('.')[0] 276 | # out_dir=Path(out_dir) 277 | # out_dir.mkdir(exist_ok=True) 278 | # match.to_csv('{}/{}_matched.csv'.format(out_dir,marker),index=False) 279 | print(f'merged data shape{match.shape}') 280 | # display(match.head()) 281 | correct_top1=match[match['top1']==match['label']].top1.unique() 282 | correct_top2=match[match['top2']==match['label']].top2.unique() 283 | 284 | correct_all=list(set(list(correct_top1)+list(correct_top2))) 285 | print('Correct top 1 label {}'.format(len(correct_top1))) 286 | print('Correct top 2 label {}'.format(len(correct_top2))) 287 | 288 | print('Total top 2 Correct labels {}'.format(len(correct_all))) 289 | match['matched1']=np.where(match['top1']==match['label'],1,0) 290 | 
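# matched1 and matched2 are 0/1 indicator columns: matched2 counts a row as
# correct when either of the two highest-probability predictions equals the
# gold label.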
match['matched2']=np.where((match['top1']==match['label']) | (match['top2']==match['label']),1,0)
291 |
292 |
293 | marker=file.split('.')[0]
294 | from pathlib import Path
295 | out_dir=Path(out_dir)
296 | out_dir.mkdir(exist_ok=True)
297 | match.to_csv('{}/{}_matched.csv'.format(out_dir,marker),index=False)
298 |
299 |
300 | print('---Below is F1 Score(binary)---')
301 | top1_f1=round(f1_score(match['label'], match['top1'])*100,3)
302 | print('Top1 label F1 score(binary): {}%'.format(top1_f1))
303 |
304 |
305 | print ('---Below is Sklearn accuracy---')
306 | top1_accuracy=round(accuracy_score(match['label'], match['top1'])*100,3)
307 | print('Top1 label accuracy score: {}%'.format(top1_accuracy))
308 |
309 | def convert_auc_KT(orig_testPath,pred_testDir,label_Path,org_dataPath,out_dir):
310 | from pathlib import Path
311 | import pandas as pd
312 | from sklearn.metrics import roc_auc_score
313 | import numpy as np
314 | import os
315 | #read the original test data for the text and id
316 | df_test = pd.read_csv(orig_testPath, sep='\t',engine='python')
317 | df_test['guid']=df_test['Index'].astype(str)
318 | print(f'original test file has shape {df_test.shape}')
319 | print('----')
320 | out_dir=Path(out_dir)
321 | Path.mkdir(out_dir,exist_ok=True)
322 | #read the results data for the probabilities
323 | for i, f in enumerate(os.listdir(pred_testDir)):
324 | if f.endswith('test_results.tsv'):
325 | print(i, f)
326 |
327 | df_result = pd.read_csv(os.path.join(pred_testDir,f), sep='\t', header=None)
328 | print(f'predicted test file has shape {df_result.shape}')
329 | label_data=pd.read_csv(label_Path,names=['Index','label'],header=0)
330 | auc=round(roc_auc_score(label_data['label'], df_result.iloc[:,1])*100,3)
331 | print(f'auc for the binary KT task is {auc} %!')
332 | print('----')
333 | # example code to execute the functions
334 | orig_testPath='mathBERT-downstreamTasks/KT/test.tsv'
335 | pred_testPath='EVAL_RESULT/MathBERT/KT/KT_MathBERT_LR5E-5_BS128_MS512_EP5_origVocab_FIT_seed2_600K-test_results.tsv'
336 | org_dataPath='mathBERT-downstreamTasks/KT/valid_answers_clean_bert.csv'
337 | out_dir='TEST_converted_KT_MATHBERT'
338 | df_map_result=convert_KT(orig_testPath,pred_testPath,org_dataPath,out_dir)
339 | df_map_result.head()
340 |
341 | data_dir='TEST_converted_KT_MATHBERT'
342 | label_Path='mathBERT-downstreamTasks/KT/test_labels.csv'
343 | out_dir='KT_matched_MATHBERT'
344 | match_top3_f1_acc_KT(data_dir,label_Path,out_dir)
--------------------------------------------------------------------------------
/mathbert/create_pretraining_data.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The Google AI Language Team Authors.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
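# A typical (hypothetical) invocation of this script, assuming
# one-sentence-per-line corpus files and a WordPiece vocab file; all flags
# shown are defined below, only the paths are placeholders:
#
#   python create_pretraining_data.py \
#     --input_file=./corpus/*.txt \
#     --output_file=./tf_examples.tfrecord \
#     --vocab_file=./vocab.txt \
#     --do_lower_case=True \
#     --max_seq_length=128 \
#     --max_predictions_per_seq=20 \
#     --masked_lm_prob=0.15 \
#     --random_seed=12345 \
#     --dupe_factor=10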
15 | """Create masked LM/next sentence masked_lm TF examples for BERT.""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import collections 22 | import random 23 | import tokenization 24 | import tensorflow as tf 25 | 26 | flags = tf.flags 27 | 28 | FLAGS = flags.FLAGS 29 | 30 | flags.DEFINE_string("input_file", None, 31 | "Input raw text file (or comma-separated list of files).") 32 | 33 | flags.DEFINE_string( 34 | "output_file", None, 35 | "Output TF example file (or comma-separated list of files).") 36 | 37 | flags.DEFINE_string("vocab_file", None, 38 | "The vocabulary file that the BERT model was trained on.") 39 | 40 | flags.DEFINE_bool( 41 | "do_lower_case", True, 42 | "Whether to lower case the input text. Should be True for uncased " 43 | "models and False for cased models.") 44 | 45 | flags.DEFINE_bool( 46 | "do_whole_word_mask", False, 47 | "Whether to use whole word masking rather than per-WordPiece masking.") 48 | 49 | flags.DEFINE_integer("max_seq_length", 128, "Maximum sequence length.") 50 | 51 | flags.DEFINE_integer("max_predictions_per_seq", 20, 52 | "Maximum number of masked LM predictions per sequence.") 53 | 54 | flags.DEFINE_integer("random_seed", 12345, "Random seed for data generation.") 55 | 56 | flags.DEFINE_integer( 57 | "dupe_factor", 10, 58 | "Number of times to duplicate the input data (with different masks).") 59 | 60 | flags.DEFINE_float("masked_lm_prob", 0.15, "Masked LM probability.") 61 | 62 | flags.DEFINE_float( 63 | "short_seq_prob", 0.1, 64 | "Probability of creating sequences which are shorter than the " 65 | "maximum length.") 66 | 67 | 68 | class TrainingInstance(object): 69 | """A single training instance (sentence pair).""" 70 | 71 | def __init__(self, tokens, segment_ids, masked_lm_positions, masked_lm_labels, 72 | is_random_next): 73 | self.tokens = tokens 74 | self.segment_ids = segment_ids 75 | self.is_random_next = is_random_next 76 | self.masked_lm_positions = masked_lm_positions 77 | self.masked_lm_labels = masked_lm_labels 78 | 79 | def __str__(self): 80 | s = "" 81 | s += "tokens: %s\n" % (" ".join( 82 | [tokenization.printable_text(x) for x in self.tokens])) 83 | s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids])) 84 | s += "is_random_next: %s\n" % self.is_random_next 85 | s += "masked_lm_positions: %s\n" % (" ".join( 86 | [str(x) for x in self.masked_lm_positions])) 87 | s += "masked_lm_labels: %s\n" % (" ".join( 88 | [tokenization.printable_text(x) for x in self.masked_lm_labels])) 89 | s += "\n" 90 | return s 91 | 92 | def __repr__(self): 93 | return self.__str__() 94 | 95 | 96 | def write_instance_to_example_files(instances, tokenizer, max_seq_length, 97 | max_predictions_per_seq, output_files): 98 | """Create TF example files from `TrainingInstance`s.""" 99 | writers = [] 100 | for output_file in output_files: 101 | writers.append(tf.python_io.TFRecordWriter(output_file)) 102 | 103 | writer_index = 0 104 | 105 | total_written = 0 106 | for (inst_index, instance) in enumerate(instances): 107 | input_ids = tokenizer.convert_tokens_to_ids(instance.tokens) 108 | input_mask = [1] * len(input_ids) 109 | segment_ids = list(instance.segment_ids) 110 | assert len(input_ids) <= max_seq_length 111 | 112 | while len(input_ids) < max_seq_length: 113 | input_ids.append(0) 114 | input_mask.append(0) 115 | segment_ids.append(0) 116 | 117 | assert len(input_ids) == max_seq_length 118 | assert len(input_mask) == max_seq_length 119 | assert 
len(segment_ids) == max_seq_length 120 | 121 | masked_lm_positions = list(instance.masked_lm_positions) 122 | masked_lm_ids = tokenizer.convert_tokens_to_ids(instance.masked_lm_labels) 123 | masked_lm_weights = [1.0] * len(masked_lm_ids) 124 | 125 | while len(masked_lm_positions) < max_predictions_per_seq: 126 | masked_lm_positions.append(0) 127 | masked_lm_ids.append(0) 128 | masked_lm_weights.append(0.0) 129 | 130 | next_sentence_label = 1 if instance.is_random_next else 0 131 | 132 | features = collections.OrderedDict() 133 | features["input_ids"] = create_int_feature(input_ids) 134 | features["input_mask"] = create_int_feature(input_mask) 135 | features["segment_ids"] = create_int_feature(segment_ids) 136 | features["masked_lm_positions"] = create_int_feature(masked_lm_positions) 137 | features["masked_lm_ids"] = create_int_feature(masked_lm_ids) 138 | features["masked_lm_weights"] = create_float_feature(masked_lm_weights) 139 | features["next_sentence_labels"] = create_int_feature([next_sentence_label]) 140 | 141 | tf_example = tf.train.Example(features=tf.train.Features(feature=features)) 142 | 143 | writers[writer_index].write(tf_example.SerializeToString()) 144 | writer_index = (writer_index + 1) % len(writers) 145 | 146 | total_written += 1 147 | 148 | if inst_index < 20: 149 | tf.logging.info("*** Example ***") 150 | tf.logging.info("tokens: %s" % " ".join( 151 | [tokenization.printable_text(x) for x in instance.tokens])) 152 | 153 | for feature_name in features.keys(): 154 | feature = features[feature_name] 155 | values = [] 156 | if feature.int64_list.value: 157 | values = feature.int64_list.value 158 | elif feature.float_list.value: 159 | values = feature.float_list.value 160 | tf.logging.info( 161 | "%s: %s" % (feature_name, " ".join([str(x) for x in values]))) 162 | 163 | for writer in writers: 164 | writer.close() 165 | 166 | tf.logging.info("Wrote %d total instances", total_written) 167 | 168 | 169 | def create_int_feature(values): 170 | feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) 171 | return feature 172 | 173 | 174 | def create_float_feature(values): 175 | feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values))) 176 | return feature 177 | 178 | 179 | def create_training_instances(input_files, tokenizer, max_seq_length, 180 | dupe_factor, short_seq_prob, masked_lm_prob, 181 | max_predictions_per_seq, rng): 182 | """Create `TrainingInstance`s from raw text.""" 183 | all_documents = [[]] 184 | 185 | # Input file format: 186 | # (1) One sentence per line. These should ideally be actual sentences, not 187 | # entire paragraphs or arbitrary spans of text. (Because we use the 188 | # sentence boundaries for the "next sentence prediction" task). 189 | # (2) Blank lines between documents. Document boundaries are needed so 190 | # that the "next sentence prediction" task doesn't span between documents. 
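# For illustration, a (hypothetical) input file in this format:
#
#   The derivative of x^2 is 2x.
#   An antiderivative of 2x is x^2 + C.
#
#   A group is a set with an associative binary operation.
#   Every group has an identity element.
#
# is parsed below as two documents of two sentences each; the blank line is
# the document boundary that next sentence prediction relies on.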
191 | for input_file in input_files: 192 | with tf.gfile.GFile(input_file, "r") as reader: 193 | while True: 194 | line = tokenization.convert_to_unicode(reader.readline()) 195 | if not line: 196 | break 197 | line = line.strip() 198 | 199 | # Empty lines are used as document delimiters 200 | if not line: 201 | all_documents.append([]) 202 | tokens = tokenizer.tokenize(line) 203 | if tokens: 204 | all_documents[-1].append(tokens) 205 | 206 | # Remove empty documents 207 | all_documents = [x for x in all_documents if x] 208 | rng.shuffle(all_documents) 209 | 210 | vocab_words = list(tokenizer.vocab.keys()) 211 | instances = [] 212 | for _ in range(dupe_factor): 213 | for document_index in range(len(all_documents)): 214 | instances.extend( 215 | create_instances_from_document( 216 | all_documents, document_index, max_seq_length, short_seq_prob, 217 | masked_lm_prob, max_predictions_per_seq, vocab_words, rng)) 218 | 219 | rng.shuffle(instances) 220 | return instances 221 | 222 | 223 | def create_instances_from_document( 224 | all_documents, document_index, max_seq_length, short_seq_prob, 225 | masked_lm_prob, max_predictions_per_seq, vocab_words, rng): 226 | """Creates `TrainingInstance`s for a single document.""" 227 | document = all_documents[document_index] 228 | 229 | # Account for [CLS], [SEP], [SEP] 230 | max_num_tokens = max_seq_length - 3 231 | 232 | # We *usually* want to fill up the entire sequence since we are padding 233 | # to `max_seq_length` anyways, so short sequences are generally wasted 234 | # computation. However, we *sometimes* 235 | # (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter 236 | # sequences to minimize the mismatch between pre-training and fine-tuning. 237 | # The `target_seq_length` is just a rough target however, whereas 238 | # `max_seq_length` is a hard limit. 239 | target_seq_length = max_num_tokens 240 | if rng.random() < short_seq_prob: 241 | target_seq_length = rng.randint(2, max_num_tokens) 242 | 243 | # We DON'T just concatenate all of the tokens from a document into a long 244 | # sequence and choose an arbitrary split point because this would make the 245 | # next sentence prediction task too easy. Instead, we split the input into 246 | # segments "A" and "B" based on the actual "sentences" provided by the user 247 | # input. 248 | instances = [] 249 | current_chunk = [] 250 | current_length = 0 251 | i = 0 252 | while i < len(document): 253 | segment = document[i] 254 | current_chunk.append(segment) 255 | current_length += len(segment) 256 | if i == len(document) - 1 or current_length >= target_seq_length: 257 | if current_chunk: 258 | # `a_end` is how many segments from `current_chunk` go into the `A` 259 | # (first) sentence. 260 | a_end = 1 261 | if len(current_chunk) >= 2: 262 | a_end = rng.randint(1, len(current_chunk) - 1) 263 | 264 | tokens_a = [] 265 | for j in range(a_end): 266 | tokens_a.extend(current_chunk[j]) 267 | 268 | tokens_b = [] 269 | # Random next 270 | is_random_next = False 271 | if len(current_chunk) == 1 or rng.random() < 0.5: 272 | is_random_next = True 273 | target_b_length = target_seq_length - len(tokens_a) 274 | 275 | # This should rarely go for more than one iteration for large 276 | # corpora. However, just to be careful, we try to make sure that 277 | # the random document is not the same as the document 278 | # we're processing. 
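# (If the corpus holds only one document, every draw below returns the
# current document, so segment B is still taken from it even though
# is_random_next is True; supply at least two documents so the NSP labels
# stay meaningful.)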
279 | for _ in range(10): 280 | random_document_index = rng.randint(0, len(all_documents) - 1) 281 | if random_document_index != document_index: 282 | break 283 | 284 | random_document = all_documents[random_document_index] 285 | random_start = rng.randint(0, len(random_document) - 1) 286 | for j in range(random_start, len(random_document)): 287 | tokens_b.extend(random_document[j]) 288 | if len(tokens_b) >= target_b_length: 289 | break 290 | # We didn't actually use these segments so we "put them back" so 291 | # they don't go to waste. 292 | num_unused_segments = len(current_chunk) - a_end 293 | i -= num_unused_segments 294 | # Actual next 295 | else: 296 | is_random_next = False 297 | for j in range(a_end, len(current_chunk)): 298 | tokens_b.extend(current_chunk[j]) 299 | truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng) 300 | 301 | assert len(tokens_a) >= 1 302 | assert len(tokens_b) >= 1 303 | 304 | tokens = [] 305 | segment_ids = [] 306 | tokens.append("[CLS]") 307 | segment_ids.append(0) 308 | for token in tokens_a: 309 | tokens.append(token) 310 | segment_ids.append(0) 311 | 312 | tokens.append("[SEP]") 313 | segment_ids.append(0) 314 | 315 | for token in tokens_b: 316 | tokens.append(token) 317 | segment_ids.append(1) 318 | tokens.append("[SEP]") 319 | segment_ids.append(1) 320 | 321 | (tokens, masked_lm_positions, 322 | masked_lm_labels) = create_masked_lm_predictions( 323 | tokens, masked_lm_prob, max_predictions_per_seq, vocab_words, rng) 324 | instance = TrainingInstance( 325 | tokens=tokens, 326 | segment_ids=segment_ids, 327 | is_random_next=is_random_next, 328 | masked_lm_positions=masked_lm_positions, 329 | masked_lm_labels=masked_lm_labels) 330 | instances.append(instance) 331 | current_chunk = [] 332 | current_length = 0 333 | i += 1 334 | 335 | return instances 336 | 337 | 338 | MaskedLmInstance = collections.namedtuple("MaskedLmInstance", 339 | ["index", "label"]) 340 | 341 | 342 | def create_masked_lm_predictions(tokens, masked_lm_prob, 343 | max_predictions_per_seq, vocab_words, rng): 344 | """Creates the predictions for the masked LM objective.""" 345 | 346 | cand_indexes = [] 347 | for (i, token) in enumerate(tokens): 348 | if token == "[CLS]" or token == "[SEP]": 349 | continue 350 | # Whole Word Masking means that if we mask all of the wordpieces 351 | # corresponding to an original word. When a word has been split into 352 | # WordPieces, the first token does not have any marker and any subsequence 353 | # tokens are prefixed with ##. So whenever we see the ## token, we 354 | # append it to the previous set of word indexes. 355 | # 356 | # Note that Whole Word Masking does *not* change the training code 357 | # at all -- we still predict each WordPiece independently, softmaxed 358 | # over the entire vocabulary. 359 | if (FLAGS.do_whole_word_mask and len(cand_indexes) >= 1 and 360 | token.startswith("##")): 361 | cand_indexes[-1].append(i) 362 | else: 363 | cand_indexes.append([i]) 364 | 365 | rng.shuffle(cand_indexes) 366 | 367 | output_tokens = list(tokens) 368 | 369 | num_to_predict = min(max_predictions_per_seq, 370 | max(1, int(round(len(tokens) * masked_lm_prob)))) 371 | 372 | masked_lms = [] 373 | covered_indexes = set() 374 | for index_set in cand_indexes: 375 | if len(masked_lms) >= num_to_predict: 376 | break 377 | # If adding a whole-word mask would exceed the maximum number of 378 | # predictions, then just skip this candidate. 
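# For example, with num_to_predict = 19 and 18 positions already chosen, a
# two-WordPiece whole-word candidate is skipped entirely rather than
# partially masked.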
379 | if len(masked_lms) + len(index_set) > num_to_predict: 380 | continue 381 | is_any_index_covered = False 382 | for index in index_set: 383 | if index in covered_indexes: 384 | is_any_index_covered = True 385 | break 386 | if is_any_index_covered: 387 | continue 388 | for index in index_set: 389 | covered_indexes.add(index) 390 | 391 | masked_token = None 392 | # 80% of the time, replace with [MASK] 393 | if rng.random() < 0.8: 394 | masked_token = "[MASK]" 395 | else: 396 | # 10% of the time, keep original 397 | if rng.random() < 0.5: 398 | masked_token = tokens[index] 399 | # 10% of the time, replace with random word 400 | else: 401 | masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)] 402 | 403 | output_tokens[index] = masked_token 404 | 405 | masked_lms.append(MaskedLmInstance(index=index, label=tokens[index])) 406 | assert len(masked_lms) <= num_to_predict 407 | masked_lms = sorted(masked_lms, key=lambda x: x.index) 408 | 409 | masked_lm_positions = [] 410 | masked_lm_labels = [] 411 | for p in masked_lms: 412 | masked_lm_positions.append(p.index) 413 | masked_lm_labels.append(p.label) 414 | 415 | return (output_tokens, masked_lm_positions, masked_lm_labels) 416 | 417 | 418 | def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng): 419 | """Truncates a pair of sequences to a maximum sequence length.""" 420 | while True: 421 | total_length = len(tokens_a) + len(tokens_b) 422 | if total_length <= max_num_tokens: 423 | break 424 | 425 | trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b 426 | assert len(trunc_tokens) >= 1 427 | 428 | # We want to sometimes truncate from the front and sometimes from the 429 | # back to add more randomness and avoid biases. 430 | if rng.random() < 0.5: 431 | del trunc_tokens[0] 432 | else: 433 | trunc_tokens.pop() 434 | 435 | 436 | def main(_): 437 | tf.logging.set_verbosity(tf.logging.INFO) 438 | 439 | tokenizer = tokenization.FullTokenizer( 440 | vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case) 441 | 442 | input_files = [] 443 | for input_pattern in FLAGS.input_file.split(","): 444 | input_files.extend(tf.gfile.Glob(input_pattern)) 445 | 446 | tf.logging.info("*** Reading from input files ***") 447 | for input_file in input_files: 448 | tf.logging.info(" %s", input_file) 449 | 450 | rng = random.Random(FLAGS.random_seed) 451 | instances = create_training_instances( 452 | input_files, tokenizer, FLAGS.max_seq_length, FLAGS.dupe_factor, 453 | FLAGS.short_seq_prob, FLAGS.masked_lm_prob, FLAGS.max_predictions_per_seq, 454 | rng) 455 | 456 | output_files = FLAGS.output_file.split(",") 457 | tf.logging.info("*** Writing to output files ***") 458 | for output_file in output_files: 459 | tf.logging.info(" %s", output_file) 460 | 461 | write_instance_to_example_files(instances, tokenizer, FLAGS.max_seq_length, 462 | FLAGS.max_predictions_per_seq, output_files) 463 | 464 | 465 | if __name__ == "__main__": 466 | flags.mark_flag_as_required("input_file") 467 | flags.mark_flag_as_required("output_file") 468 | flags.mark_flag_as_required("vocab_file") 469 | tf.app.run() 470 | -------------------------------------------------------------------------------- /mathbert/run_pretraining.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | """Run masked LM/next sentence masked_lm pre-training for BERT.""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import os 22 | import modeling 23 | import optimization 24 | import tensorflow as tf 25 | 26 | flags = tf.flags 27 | 28 | FLAGS = flags.FLAGS 29 | 30 | ## Required parameters 31 | flags.DEFINE_string( 32 | "bert_config_file", None, 33 | "The config json file corresponding to the pre-trained BERT model. " 34 | "This specifies the model architecture.") 35 | 36 | flags.DEFINE_string( 37 | "input_file", None, 38 | "Input TF example files (can be a glob or comma separated).") 39 | 40 | flags.DEFINE_string( 41 | "output_dir", None, 42 | "The output directory where the model checkpoints will be written.") 43 | 44 | ## Other parameters 45 | flags.DEFINE_string( 46 | "init_checkpoint", None, 47 | "Initial checkpoint (usually from a pre-trained BERT model).") 48 | 49 | flags.DEFINE_integer( 50 | "max_seq_length", 128, 51 | "The maximum total input sequence length after WordPiece tokenization. " 52 | "Sequences longer than this will be truncated, and sequences shorter " 53 | "than this will be padded. Must match data generation.") 54 | 55 | flags.DEFINE_integer( 56 | "max_predictions_per_seq", 20, 57 | "Maximum number of masked LM predictions per sequence. " 58 | "Must match data generation.") 59 | 60 | flags.DEFINE_bool("do_train", False, "Whether to run training.") 61 | 62 | flags.DEFINE_bool("do_eval", False, "Whether to run eval on the dev set.") 63 | 64 | flags.DEFINE_integer("train_batch_size", 32, "Total batch size for training.") 65 | 66 | flags.DEFINE_integer("eval_batch_size", 8, "Total batch size for eval.") 67 | 68 | flags.DEFINE_float("learning_rate", 5e-5, "The initial learning rate for Adam.") 69 | 70 | flags.DEFINE_integer("num_train_steps", 100000, "Number of training steps.") 71 | 72 | flags.DEFINE_integer("num_warmup_steps", 10000, "Number of warmup steps.") 73 | 74 | flags.DEFINE_integer("save_checkpoints_steps", 1000, 75 | "How often to save the model checkpoint.") 76 | 77 | flags.DEFINE_integer("iterations_per_loop", 1000, 78 | "How many steps to make in each estimator call.") 79 | 80 | flags.DEFINE_integer("max_eval_steps", 100, "Maximum number of eval steps.") 81 | 82 | flags.DEFINE_bool("use_tpu", False, "Whether to use TPU or GPU/CPU.") 83 | 84 | tf.flags.DEFINE_string( 85 | "tpu_name", None, 86 | "The Cloud TPU to use for training. This should be either the name " 87 | "used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 " 88 | "url.") 89 | 90 | tf.flags.DEFINE_string( 91 | "tpu_zone", None, 92 | "[Optional] GCE zone where the Cloud TPU is located in. If not " 93 | "specified, we will attempt to automatically detect the GCE project from " 94 | "metadata.") 95 | 96 | tf.flags.DEFINE_string( 97 | "gcp_project", None, 98 | "[Optional] Project name for the Cloud TPU-enabled project. 
If not " 99 | "specified, we will attempt to automatically detect the GCE project from " 100 | "metadata.") 101 | 102 | tf.flags.DEFINE_string("master", None, "[Optional] TensorFlow master URL.") 103 | 104 | flags.DEFINE_integer( 105 | "num_tpu_cores", 8, 106 | "Only used if `use_tpu` is True. Total number of TPU cores to use.") 107 | 108 | 109 | def model_fn_builder(bert_config, init_checkpoint, learning_rate, 110 | num_train_steps, num_warmup_steps, use_tpu, 111 | use_one_hot_embeddings): 112 | """Returns `model_fn` closure for TPUEstimator.""" 113 | 114 | def model_fn(features, labels, mode, params): # pylint: disable=unused-argument 115 | """The `model_fn` for TPUEstimator.""" 116 | 117 | tf.logging.info("*** Features ***") 118 | for name in sorted(features.keys()): 119 | tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape)) 120 | 121 | input_ids = features["input_ids"] 122 | input_mask = features["input_mask"] 123 | segment_ids = features["segment_ids"] 124 | masked_lm_positions = features["masked_lm_positions"] 125 | masked_lm_ids = features["masked_lm_ids"] 126 | masked_lm_weights = features["masked_lm_weights"] 127 | next_sentence_labels = features["next_sentence_labels"] 128 | 129 | is_training = (mode == tf.estimator.ModeKeys.TRAIN) 130 | 131 | model = modeling.BertModel( 132 | config=bert_config, 133 | is_training=is_training, 134 | input_ids=input_ids, 135 | input_mask=input_mask, 136 | token_type_ids=segment_ids, 137 | use_one_hot_embeddings=use_one_hot_embeddings) 138 | 139 | (masked_lm_loss, 140 | masked_lm_example_loss, masked_lm_log_probs) = get_masked_lm_output( 141 | bert_config, model.get_sequence_output(), model.get_embedding_table(), 142 | masked_lm_positions, masked_lm_ids, masked_lm_weights) 143 | 144 | (next_sentence_loss, next_sentence_example_loss, 145 | next_sentence_log_probs) = get_next_sentence_output( 146 | bert_config, model.get_pooled_output(), next_sentence_labels) 147 | 148 | total_loss = masked_lm_loss + next_sentence_loss 149 | 150 | tvars = tf.trainable_variables() 151 | 152 | initialized_variable_names = {} 153 | scaffold_fn = None 154 | if init_checkpoint: 155 | (assignment_map, initialized_variable_names 156 | ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint) 157 | if use_tpu: 158 | 159 | def tpu_scaffold(): 160 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map) 161 | return tf.train.Scaffold() 162 | 163 | scaffold_fn = tpu_scaffold 164 | else: 165 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map) 166 | 167 | tf.logging.info("**** Trainable Variables ****") 168 | for var in tvars: 169 | init_string = "" 170 | if var.name in initialized_variable_names: 171 | init_string = ", *INIT_FROM_CKPT*" 172 | tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape, 173 | init_string) 174 | 175 | output_spec = None 176 | if mode == tf.estimator.ModeKeys.TRAIN: 177 | train_op = optimization.create_optimizer( 178 | total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu) 179 | 180 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 181 | mode=mode, 182 | loss=total_loss, 183 | train_op=train_op, 184 | scaffold_fn=scaffold_fn) 185 | elif mode == tf.estimator.ModeKeys.EVAL: 186 | 187 | def metric_fn(masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids, 188 | masked_lm_weights, next_sentence_example_loss, 189 | next_sentence_log_probs, next_sentence_labels): 190 | """Computes the loss and accuracy of the model.""" 191 | masked_lm_log_probs = tf.reshape(masked_lm_log_probs, 
192 | [-1, masked_lm_log_probs.shape[-1]]) 193 | masked_lm_predictions = tf.argmax( 194 | masked_lm_log_probs, axis=-1, output_type=tf.int32) 195 | masked_lm_example_loss = tf.reshape(masked_lm_example_loss, [-1]) 196 | masked_lm_ids = tf.reshape(masked_lm_ids, [-1]) 197 | masked_lm_weights = tf.reshape(masked_lm_weights, [-1]) 198 | masked_lm_accuracy = tf.metrics.accuracy( 199 | labels=masked_lm_ids, 200 | predictions=masked_lm_predictions, 201 | weights=masked_lm_weights) 202 | masked_lm_mean_loss = tf.metrics.mean( 203 | values=masked_lm_example_loss, weights=masked_lm_weights) 204 | 205 | next_sentence_log_probs = tf.reshape( 206 | next_sentence_log_probs, [-1, next_sentence_log_probs.shape[-1]]) 207 | next_sentence_predictions = tf.argmax( 208 | next_sentence_log_probs, axis=-1, output_type=tf.int32) 209 | next_sentence_labels = tf.reshape(next_sentence_labels, [-1]) 210 | next_sentence_accuracy = tf.metrics.accuracy( 211 | labels=next_sentence_labels, predictions=next_sentence_predictions) 212 | next_sentence_mean_loss = tf.metrics.mean( 213 | values=next_sentence_example_loss) 214 | 215 | return { 216 | "masked_lm_accuracy": masked_lm_accuracy, 217 | "masked_lm_loss": masked_lm_mean_loss, 218 | "next_sentence_accuracy": next_sentence_accuracy, 219 | "next_sentence_loss": next_sentence_mean_loss, 220 | } 221 | 222 | eval_metrics = (metric_fn, [ 223 | masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids, 224 | masked_lm_weights, next_sentence_example_loss, 225 | next_sentence_log_probs, next_sentence_labels 226 | ]) 227 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 228 | mode=mode, 229 | loss=total_loss, 230 | eval_metrics=eval_metrics, 231 | scaffold_fn=scaffold_fn) 232 | else: 233 | raise ValueError("Only TRAIN and EVAL modes are supported: %s" % (mode)) 234 | 235 | return output_spec 236 | 237 | return model_fn 238 | 239 | 240 | def get_masked_lm_output(bert_config, input_tensor, output_weights, positions, 241 | label_ids, label_weights): 242 | """Get loss and log probs for the masked LM.""" 243 | input_tensor = gather_indexes(input_tensor, positions) 244 | 245 | with tf.variable_scope("cls/predictions"): 246 | # We apply one more non-linear transformation before the output layer. 247 | # This matrix is not used after pre-training. 248 | with tf.variable_scope("transform"): 249 | input_tensor = tf.layers.dense( 250 | input_tensor, 251 | units=bert_config.hidden_size, 252 | activation=modeling.get_activation(bert_config.hidden_act), 253 | kernel_initializer=modeling.create_initializer( 254 | bert_config.initializer_range)) 255 | input_tensor = modeling.layer_norm(input_tensor) 256 | 257 | # The output weights are the same as the input embeddings, but there is 258 | # an output-only bias for each token. 259 | output_bias = tf.get_variable( 260 | "output_bias", 261 | shape=[bert_config.vocab_size], 262 | initializer=tf.zeros_initializer()) 263 | logits = tf.matmul(input_tensor, output_weights, transpose_b=True) 264 | logits = tf.nn.bias_add(logits, output_bias) 265 | log_probs = tf.nn.log_softmax(logits, axis=-1) 266 | 267 | label_ids = tf.reshape(label_ids, [-1]) 268 | label_weights = tf.reshape(label_weights, [-1]) 269 | 270 | one_hot_labels = tf.one_hot( 271 | label_ids, depth=bert_config.vocab_size, dtype=tf.float32) 272 | 273 | # The `positions` tensor might be zero-padded (if the sequence is too 274 | # short to have the maximum number of predictions). 
The `label_weights` 275 | # tensor has a value of 1.0 for every real prediction and 0.0 for the 276 | # padding predictions. 277 | per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=[-1]) 278 | numerator = tf.reduce_sum(label_weights * per_example_loss) 279 | denominator = tf.reduce_sum(label_weights) + 1e-5 280 | loss = numerator / denominator 281 | 282 | return (loss, per_example_loss, log_probs) 283 | 284 | 285 | def get_next_sentence_output(bert_config, input_tensor, labels): 286 | """Get loss and log probs for the next sentence prediction.""" 287 | 288 | # Simple binary classification. Note that 0 is "next sentence" and 1 is 289 | # "random sentence". This weight matrix is not used after pre-training. 290 | with tf.variable_scope("cls/seq_relationship"): 291 | output_weights = tf.get_variable( 292 | "output_weights", 293 | shape=[2, bert_config.hidden_size], 294 | initializer=modeling.create_initializer(bert_config.initializer_range)) 295 | output_bias = tf.get_variable( 296 | "output_bias", shape=[2], initializer=tf.zeros_initializer()) 297 | 298 | logits = tf.matmul(input_tensor, output_weights, transpose_b=True) 299 | logits = tf.nn.bias_add(logits, output_bias) 300 | log_probs = tf.nn.log_softmax(logits, axis=-1) 301 | labels = tf.reshape(labels, [-1]) 302 | one_hot_labels = tf.one_hot(labels, depth=2, dtype=tf.float32) 303 | per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1) 304 | loss = tf.reduce_mean(per_example_loss) 305 | return (loss, per_example_loss, log_probs) 306 | 307 | 308 | def gather_indexes(sequence_tensor, positions): 309 | """Gathers the vectors at the specific positions over a minibatch.""" 310 | sequence_shape = modeling.get_shape_list(sequence_tensor, expected_rank=3) 311 | batch_size = sequence_shape[0] 312 | seq_length = sequence_shape[1] 313 | width = sequence_shape[2] 314 | 315 | flat_offsets = tf.reshape( 316 | tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1]) 317 | flat_positions = tf.reshape(positions + flat_offsets, [-1]) 318 | flat_sequence_tensor = tf.reshape(sequence_tensor, 319 | [batch_size * seq_length, width]) 320 | output_tensor = tf.gather(flat_sequence_tensor, flat_positions) 321 | return output_tensor 322 | 323 | 324 | def input_fn_builder(input_files, 325 | max_seq_length, 326 | max_predictions_per_seq, 327 | is_training, 328 | num_cpu_threads=4): 329 | """Creates an `input_fn` closure to be passed to TPUEstimator.""" 330 | 331 | def input_fn(params): 332 | """The actual input function.""" 333 | batch_size = params["batch_size"] 334 | 335 | name_to_features = { 336 | "input_ids": 337 | tf.FixedLenFeature([max_seq_length], tf.int64), 338 | "input_mask": 339 | tf.FixedLenFeature([max_seq_length], tf.int64), 340 | "segment_ids": 341 | tf.FixedLenFeature([max_seq_length], tf.int64), 342 | "masked_lm_positions": 343 | tf.FixedLenFeature([max_predictions_per_seq], tf.int64), 344 | "masked_lm_ids": 345 | tf.FixedLenFeature([max_predictions_per_seq], tf.int64), 346 | "masked_lm_weights": 347 | tf.FixedLenFeature([max_predictions_per_seq], tf.float32), 348 | "next_sentence_labels": 349 | tf.FixedLenFeature([1], tf.int64), 350 | } 351 | 352 | # For training, we want a lot of parallel reading and shuffling. 353 | # For eval, we want no shuffling and parallel reading doesn't matter. 
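# Note the two levels of shuffling in the training branch below: the repeated
# file list is shuffled first, then parallel_interleave mixes records from up
# to `cycle_length` files and a 100-record buffer shuffles the stream again.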
354 | if is_training: 355 | d = tf.data.Dataset.from_tensor_slices(tf.constant(input_files)) 356 | d = d.repeat() 357 | d = d.shuffle(buffer_size=len(input_files)) 358 | 359 | # `cycle_length` is the number of parallel files that get read. 360 | cycle_length = min(num_cpu_threads, len(input_files)) 361 | 362 | # `sloppy` mode means that the interleaving is not exact. This adds 363 | # even more randomness to the training pipeline. 364 | d = d.apply( 365 | tf.contrib.data.parallel_interleave( 366 | tf.data.TFRecordDataset, 367 | sloppy=is_training, 368 | cycle_length=cycle_length)) 369 | d = d.shuffle(buffer_size=100) 370 | else: 371 | d = tf.data.TFRecordDataset(input_files) 372 | # Since we evaluate for a fixed number of steps we don't want to encounter 373 | # out-of-range exceptions. 374 | d = d.repeat() 375 | 376 | # We must `drop_remainder` on training because the TPU requires fixed 377 | # size dimensions. For eval, we assume we are evaluating on the CPU or GPU 378 | # and we *don't* want to drop the remainder, otherwise we wont cover 379 | # every sample. 380 | d = d.apply( 381 | tf.contrib.data.map_and_batch( 382 | lambda record: _decode_record(record, name_to_features), 383 | batch_size=batch_size, 384 | num_parallel_batches=num_cpu_threads, 385 | drop_remainder=True)) 386 | return d 387 | 388 | return input_fn 389 | 390 | 391 | def _decode_record(record, name_to_features): 392 | """Decodes a record to a TensorFlow example.""" 393 | example = tf.parse_single_example(record, name_to_features) 394 | 395 | # tf.Example only supports tf.int64, but the TPU only supports tf.int32. 396 | # So cast all int64 to int32. 397 | for name in list(example.keys()): 398 | t = example[name] 399 | if t.dtype == tf.int64: 400 | t = tf.to_int32(t) 401 | example[name] = t 402 | 403 | return example 404 | 405 | 406 | def main(_): 407 | tf.logging.set_verbosity(tf.logging.INFO) 408 | 409 | if not FLAGS.do_train and not FLAGS.do_eval: 410 | raise ValueError("At least one of `do_train` or `do_eval` must be True.") 411 | 412 | bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file) 413 | 414 | tf.gfile.MakeDirs(FLAGS.output_dir) 415 | 416 | input_files = [] 417 | for input_pattern in FLAGS.input_file.split(","): 418 | input_files.extend(tf.gfile.Glob(input_pattern)) 419 | 420 | tf.logging.info("*** Input Files ***") 421 | for input_file in input_files: 422 | tf.logging.info(" %s" % input_file) 423 | 424 | tpu_cluster_resolver = None 425 | if FLAGS.use_tpu and FLAGS.tpu_name: 426 | tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver( 427 | FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project) 428 | 429 | is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2 430 | run_config = tf.contrib.tpu.RunConfig( 431 | cluster=tpu_cluster_resolver, 432 | master=FLAGS.master, 433 | model_dir=FLAGS.output_dir, 434 | save_checkpoints_steps=FLAGS.save_checkpoints_steps, 435 | tpu_config=tf.contrib.tpu.TPUConfig( 436 | iterations_per_loop=FLAGS.iterations_per_loop, 437 | num_shards=FLAGS.num_tpu_cores, 438 | per_host_input_for_training=is_per_host)) 439 | 440 | model_fn = model_fn_builder( 441 | bert_config=bert_config, 442 | init_checkpoint=FLAGS.init_checkpoint, 443 | learning_rate=FLAGS.learning_rate, 444 | num_train_steps=FLAGS.num_train_steps, 445 | num_warmup_steps=FLAGS.num_warmup_steps, 446 | use_tpu=FLAGS.use_tpu, 447 | use_one_hot_embeddings=FLAGS.use_tpu) 448 | 449 | # If TPU is not available, this will fall back to normal Estimator on CPU 450 | # or GPU. 
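# (use_one_hot_embeddings mirrors use_tpu above because one-hot matmul
# embedding lookups tend to be faster on TPUs, while tf.gather is faster on
# CPUs/GPUs.)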
451 | estimator = tf.contrib.tpu.TPUEstimator( 452 | use_tpu=FLAGS.use_tpu, 453 | model_fn=model_fn, 454 | config=run_config, 455 | train_batch_size=FLAGS.train_batch_size, 456 | eval_batch_size=FLAGS.eval_batch_size) 457 | 458 | if FLAGS.do_train: 459 | tf.logging.info("***** Running training *****") 460 | tf.logging.info(" Batch size = %d", FLAGS.train_batch_size) 461 | train_input_fn = input_fn_builder( 462 | input_files=input_files, 463 | max_seq_length=FLAGS.max_seq_length, 464 | max_predictions_per_seq=FLAGS.max_predictions_per_seq, 465 | is_training=True) 466 | estimator.train(input_fn=train_input_fn, max_steps=FLAGS.num_train_steps) 467 | 468 | if FLAGS.do_eval: 469 | tf.logging.info("***** Running evaluation *****") 470 | tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size) 471 | 472 | eval_input_fn = input_fn_builder( 473 | input_files=input_files, 474 | max_seq_length=FLAGS.max_seq_length, 475 | max_predictions_per_seq=FLAGS.max_predictions_per_seq, 476 | is_training=False) 477 | 478 | result = estimator.evaluate( 479 | input_fn=eval_input_fn, steps=FLAGS.max_eval_steps) 480 | 481 | output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt") 482 | with tf.gfile.GFile(output_eval_file, "w") as writer: 483 | tf.logging.info("***** Eval results *****") 484 | for key in sorted(result.keys()): 485 | tf.logging.info(" %s = %s", key, str(result[key])) 486 | writer.write("%s = %s\n" % (key, str(result[key]))) 487 | 488 | 489 | if __name__ == "__main__": 490 | flags.mark_flag_as_required("input_file") 491 | flags.mark_flag_as_required("bert_config_file") 492 | flags.mark_flag_as_required("output_dir") 493 | tf.app.run() 494 | -------------------------------------------------------------------------------- /scripts/MathBERT_finetune.ipynb: -------------------------------------------------------------------------------- 1 | {"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"MathBERT_finetune.ipynb","provenance":[],"collapsed_sections":[]},"kernelspec":{"name":"python3","display_name":"Python 3"},"language_info":{"name":"python"},"accelerator":"TPU"},"cells":[{"cell_type":"markdown","metadata":{"id":"3Bex7ap4oSHi"},"source":["#### Open a TPU session and mount your drive to Google Colab"]},{"cell_type":"code","metadata":{"id":"vPL7bLSXn3V5"},"source":["%tensorflow_version 1.x\n","import os\n","import pprint\n","import json\n","import tensorflow as tf\n","\n","assert \"COLAB_TPU_ADDR\" in os.environ, \"ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!\"\n","TPU_ADDRESS = \"grpc://\" + os.environ[\"COLAB_TPU_ADDR\"] \n","TPU_TOPOLOGY = \"2x2\"\n","print(\"TPU address is\", TPU_ADDRESS)\n","\n","from google.colab import auth\n","auth.authenticate_user()\n","with tf.Session(TPU_ADDRESS) as session:\n"," print('TPU devices:')\n"," pprint.pprint(session.list_devices())\n","\n"," # Upload credentials to TPU. 
\n"," with open('/content/adc.json', 'r') as f:\n"," auth_info = json.load(f)\n"," tf.contrib.cloud.configure_gcs(session, credentials=auth_info)\n"," # Now credentials are set for all future sessions on this TPU."],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"mWQef8cBoDaD"},"source":["\n","from google.colab import drive\n","drive.mount('/content/drive')\n","import os\n","os.chdir('/content/drive/My Drive/Colab Notebooks/your_working_dir')\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"mV2HPxjZoMb4"},"source":["#### Download tensorflow checkpoint from S3 location provided\n","+ MathBERT-basevocab-uncased model artifacts are stored in s3 bucket and can be downloaded using 'wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_config.json'\n","+ MathBERT-mathvocab-uncased model artifacts are stored in s3 bucket and can be downloaded using 'wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-mathvocab-uncased/bert_config.json'"]},{"cell_type":"code","metadata":{"id":"HSl8tzzVwYwC"},"source":["!mkdir your_model_folder\n","%cd your_model_folder"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"pUlu0e0toD1K","outputId":"cb2f9595-c355-4e89-b5b2-c34479661974"},"source":["!wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_config.json\n","!wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/vocab.txt\n","!wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_model.ckpt.index\n","!wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_model.ckpt.meta\n","!wget http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_model.ckpt.data-00000-of-00001"],"execution_count":null,"outputs":[{"output_type":"stream","text":["--2021-05-20 02:47:06-- http://tracy-nlp-models.s3.amazonaws.com/mathbert-basevocab-uncased/bert_config.json\n","Resolving tracy-nlp-models.s3.amazonaws.com (tracy-nlp-models.s3.amazonaws.com)... 52.216.101.115\n","Connecting to tracy-nlp-models.s3.amazonaws.com (tracy-nlp-models.s3.amazonaws.com)|52.216.101.115|:80... connected.\n","HTTP request sent, awaiting response... 200 OK\n","Length: 570 [application/json]\n","Saving to: ‘bert_config.json’\n","\n","\rbert_config.json 0%[ ] 0 --.-KB/s \rbert_config.json 100%[===================>] 570 --.-KB/s in 0s \n","\n","2021-05-20 02:47:06 (78.6 MB/s) - ‘bert_config.json’ saved [570/570]\n","\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"PDSvYILZvl_j"},"source":["#### Download MathBERT training scripts\n","+ git clone the MathBERT repo at https://github.com/tbs17/MathBERT/\n"]},{"cell_type":"code","metadata":{"id":"1VJThb1-voo8"},"source":["%cd ../ #clone the repo outside of your model folder\n","!git clone https://github.com/tbs17/MathBERT.git"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"etceOFRUvosB"},"source":[""],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"XtwgtVfWvout"},"source":["#### Start Fine-tuning\n","+ prepare your dataset to be compatible with BERT training data format. Please refer to the details at https://github.com/google-research/bert. 
\n","+ Alternatively, you can use our function 'split_3data_label()' to split your data into 3 parts and save your label data\n","\n","+ using the below script to utilize the MathBERT artifact for fine-tuning\n","+ for the 3 downstream tasks we had, we used the below task name:\n"," + skill code prediction: 'COLA_385'\n"," + auto-grading classification: 'auto_grade'\n"," + knowledge tracing classificationsingle: 'KT'\n"," + depending on your number of labels, just change the line accordingly in the run_classifier.py\n"," ```\n"," def get_labels(self):\n"," return [str(x) for x in range(your_num_labels)]\n"," ```"]},{"cell_type":"code","metadata":{"id":"k8U2CyHY1nVb"},"source":["def split_3data_label(org_path,out_dir):\n"," import pandas as pd\n"," import os\n"," from sklearn.model_selection import train_test_split\n"," data=pd.read_csv(org_path,encoding='utf-8',names=['text','label'],header=0)\n"," \n"," print(f'total sample is {data.shape[0]}')\n"," df_train, df_test=train_test_split(data,test_size=0.2,random_state=111)\n","\n"," df_bert_train, df_bert_dev = train_test_split(df_train, test_size=0.1,random_state=111)\n"," #create new dataframe for test data\n","\n"," #output tsv file, no header for train and dev\n"," if not os.path.exists(out_dir):\n"," os.makedirs(out_dir)\n"," df_bert_train.to_csv('{}/train_with_label.csv'.format(out_dir), index=False)\n"," df_bert_dev.to_csv('{}/dev_with_label.csv'.format(out_dir),index=False)\n"," df_test.to_csv('{}/test_with_label.csv'.format(out_dir), index=False)\n"," print(f'training samples are {df_bert_train.shape[0]}\\n'\n"," f'eval samples are {df_bert_dev.shape[0]}\\n'\n"," f'testing samples are {df_test.shape[0]}'\n"," )\n","# print('{} unique labels'.format(data0['label_en'].nunique()))"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"08PPLWv6wrRG"},"source":["# Please find the full list of tasks and their fintuning hyperparameters\n","# here https://github.com/google-research/albert/blob/master/run_glue.sh\n","\n","BUCKET = \"your_google_cloud_bucket\" #@param { type: \"string\" }\n","TASK = 'auto_grade' #@param {type:\"string\"}\n","\n","BASE_DIR = \"gs://\" + BUCKET\n","if not BASE_DIR or BASE_DIR == \"gs://\":\n"," raise ValueError(\"You must enter a BUCKET.\")\n","DATA_DIR = os.path.join(BASE_DIR, \"data\")\n","\n","OUTPUT_DIR = 'gs://{}/BERT/mathBERT/{}_{}'.format(BUCKET, TASK,'MathBERT_LR2E-5_BS64_MS512_EP5_baseVocab_testing')\n","tf.gfile.MakeDirs(OUTPUT_DIR)\n","print('***** Model output directory: {} *****'.format(OUTPUT_DIR))\n","\n"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"vVfrHOGKynBk"},"source":["import time\n","start=time.time()\n","os.environ['TFHUB_CACHE_DIR'] = OUTPUT_DIR\n","!python bert/run_classifier.py \\\n"," --data_dir=mathBERT-downstreamTasks/auto_grade/ \\\n"," --bert_config_file=Upload_Models/MathBERT-orig/bert_config.json \\\n"," --vocab_file=Upload_Models/MathBERT-orig/vocab.txt \\\n"," --task_name=$TASK \\\n"," --output_dir=$OUTPUT_DIR \\\n"," --init_checkpoint='gs://your_bucket/MathBERT-basevocab-uncased/bert_model.ckpt' \\\n"," --do_lower_case=True \\\n"," --do_train=True \\\n"," --do_eval=True \\\n"," --do_predict=True \\\n"," --max_seq_length=512 \\\n"," --warmup_step=1000 \\\n"," --learning_rate=2e-5 \\\n"," --num_train_epochs=5 \\\n"," --save_checkpoints_steps=2000 \\\n"," --train_batch_size=64 \\\n"," --eval_batch_size=32 \\\n"," --predict_batch_size=16 \\\n"," --tpu_name=$TPU_ADDRESS \\\n"," --use_tpu=True\n","end=time.time()\n","print(f'it 
took {(end-start)/60} mins to finish ')\n"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"v2LGHeoJy1KW"},"source":["+ Downloading the prediction result and eval result to your local folder"]},{"cell_type":"code","metadata":{"id":"Dv-SwhSny68e"},"source":["import os\n","folder_name=OUTPUT_DIR.split('/')[5]\n","os.environ['OUTPUT_DIR']=OUTPUT_DIR\n","os.environ['folder_name']=folder_name\n","!gsutil cp $OUTPUT_DIR/eval_results.txt your_output_folder/$folder_name-eval_results.txt\n","!gsutil cp $OUTPUT_DIR/test_results.tsv your_output_folder/$folder_name-test_results.tsv"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"d-thmx0KzIOF"},"source":["#### Convert Testing result \n","+ the prediction output need to be formated so that you can evaluate \n","+ below examples are for predicting the auto-grading and KC tasks with 5 labels and 385 labels in the dataset"]},{"cell_type":"code","metadata":{"id":"Mz9_ZwdkzIad"},"source":["def match_top3_f1_acc(data_dir,label_Path,out_dir):\n"," from datetime import datetime\n"," import numpy as np\n"," import pandas as pd\n"," import os\n"," from sklearn.metrics import accuracy_score\n"," from sklearn.metrics import f1_score\n"," label_data=pd.read_csv(label_Path,names=['label'],header=0)\n"," print(f'test data shape{label_data.shape}')\n"," for i, file in enumerate(os.listdir(data_dir)):\n"," \n"," pred_dataPath=os.path.join(data_dir,file)\n"," print('======')\n"," print(i,pred_dataPath)\n"," \n"," if pred_dataPath.endswith('.csv'):\n","\n"," pred_data=pd.read_csv(pred_dataPath,encoding='ISO-8859-1')\n"," print(f'predicted data shape: {pred_data.shape}')\n"," print('total {} classes are predicted as top class'.format(len(pred_data['top1'].unique())))\n"," match=pd.concat([pred_data,label_data],axis=1)\n"," print(f'merged data shape{match.shape}')\n"," correct_top1=match[match['top1']==match['label']].top1.unique()\n"," correct_top2=match[match['top2']==match['label']].top2.unique()\n"," correct_top3=match[match['top3']==match['label']].top3.unique()\n"," correct_all=list(set(list(correct_top1)+list(correct_top2)+list(correct_top3)))\n"," print('Correct top 1 label {}'.format(len(correct_top1)))\n"," print('Correct top 2 label {}'.format(len(correct_top2)))\n"," print('Correct top 3 label {}'.format(len(correct_top3)))\n"," print('Total top 3 Correct labels {}'.format(len(correct_all)))\n"," match['matched1']=np.where(match['top1']==match['label'],1,0)\n"," match['matched2']=np.where((match['top1']==match['label']) | (match['top2']==match['label']),1,0)\n"," match['matched3']=np.where((match['top1']==match['label']) | (match['top2']==match['label']) | (match['top3']==match['label']),1,0)\n"," \n"," marker=file.split('.')[0]\n"," from pathlib import Path\n"," out_dir=Path(out_dir)\n"," out_dir.mkdir(exist_ok=True)\n"," match.to_csv('{}/{}_matched.csv'.format(out_dir,marker),index=False)\n","\n"," \n"," print('---Below is F1 Score(weighted)--')\n"," top1_f1=round(f1_score(match['label'], match['top1'], average='weighted')*100,3)\n"," print('Top1 label F1 score(weighted): {}%'.format(top1_f1))\n"," top2_f1=round(f1_score(match['label'], match['top2'], average='weighted')*100,3)\n"," print('Top1 label F1 score(weighted): {}%'.format(top2_f1+top1_f1))\n"," top3_f1=round(f1_score(match['label'], match['top3'], average='weighted')*100,3)\n"," print('Top1 label F1 score(weighted): {}%'.format(top3_f1+top2_f1+top1_f1))\n"," \n"," print ('---Below is Sklearn accuracy---')\n"," 
top1_accuracy=round(accuracy_score(match['label'], match['top1'])*100,3)\n"," print('Top1 label accuracy score: {}%'.format(top1_accuracy))\n"," top2_accuracy=round(accuracy_score(match['label'], match['top2'])*100,3)\n"," print('Top1 label accuracy score: {}%'.format(top2_accuracy+top1_accuracy))\n"," top3_accuracy=round(accuracy_score(match['label'], match['top3'])*100,3)\n"," print('Top1 label accuracy score: {}%'.format(top3_accuracy+top2_accuracy+top1_accuracy))\n","\n","\n","def convert_autoGrade(orig_testPath,pred_testPath,org_dataPath,out_dir):\n"," from pathlib import Path\n"," import pandas as pd\n"," #read the original test data for the text and id\n"," df_test = pd.read_csv(orig_testPath, sep='\\t')#,engine='python'\n"," df_test['guid']=df_test.iloc[:,0].astype(str)\n"," print(f'original test file has shape {df_test.shape}')\n"," #read the results data for the probabilities\n"," df_result = pd.read_csv(pred_testPath, sep='\\t', header=None)\n"," print(f'predicted test file has shape {df_result.shape}')\n"," out_dir=Path(out_dir)\n"," Path.mkdir(out_dir,exist_ok=True)\n"," import numpy as np\n"," # df_map\n"," df_map_result = pd.DataFrame({'guid': df_test['guid'],\n"," 'question': df_test['question'],\n"," 'answer': df_test['answer'],\n"," 'top1': df_result.idxmax(axis=1),\n"," 'top1_probability':df_result.max(axis=1),\n"," 'top2': df_result.columns[df_result.values.argsort(1)[:,-2]],\n"," 'top2_probability':df_result.apply(lambda x: sorted(x)[-2],axis=1),\n"," 'top3': df_result.columns[df_result.values.argsort(1)[:,-3]],\n"," 'top3_probability':df_result.apply(lambda x: sorted(x)[-3],axis=1),\n"," 'top4': df_result.columns[df_result.values.argsort(1)[:,-4]],\n"," 'top4_probability':df_result.apply(lambda x: sorted(x)[-4],axis=1),\n"," 'top5': df_result.columns[df_result.values.argsort(1)[:,-5]],\n"," 'top5_probability':df_result.apply(lambda x: sorted(x)[-5],axis=1)\n"," })\n"," #view sample rows of the newly created dataframe\n","# display(df_map_result.head())\n"," df_map_result['top1']=df_map_result['top1'].astype(str)\n"," df_map_result['top2']=df_map_result['top2'].astype(str)\n"," df_map_result['top3']=df_map_result['top3'].astype(str)\n"," df_map_result.dtypes\n"," df_map_result['top4']=df_map_result['top4'].astype(str)\n"," df_map_result['top5']=df_map_result['top5'].astype(str)\n"," print(f'mapped test file has shape {df_map_result.shape}')\n","\n"," label_map_dict={'0':1,'1':2,'2':3,'3':4,'4':5}\n"," marker=pred_testPath.split('/')[-1].split('.')[0]\n"," df_map_result=df_map_result.replace({'top1':label_map_dict,'top2':label_map_dict,'top3':label_map_dict,'top4':label_map_dict,'top5':label_map_dict})\n"," df_map_result.to_csv('{}/{}_converted.csv'.format(out_dir,marker),index=False)\n"," print(df_map_result.shape)#(702, 12)\n"," return df_map_result"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"39lH4BEa0ghz"},"source":["orig_testPath='auto_grade/test.tsv'#original test data\n","pred_testPath='output/auto_grade_MathBERT_LR2E-5_BS64_MS512_EP5_baseVocab_testing-test_results.tsv' #your own predicted test data\n","org_dataPath='auto_grade/auto_grade_original_full_data.csv'#the original full set of auto grade data\n","out_dir='TEST_converted_autoGrade_MATHBERT'# name your own output 
dir\n","df_map_result=convert_autoGrade(orig_testPath,pred_testPath,org_dataPath,out_dir)\n","df_map_result.head()"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"kzQ0wEJjfEJX"},"source":["orig_testPath='TAPT/skill_code_DESC_TITLE_PROBLEM/CoLA/test.tsv'#original test data\n","pred_testPath='EVAL_RESULT/MathBERT/COLA_385_MathBERT-TAPT_LR5E-5_BS64_MS512_EP25_customVocab-V2_FIT_SEED5_V2-test_results.tsv'#your own predicted test data\n","org_dataPath='further-pre-training/CORPUS/ALL_GRADES/DESC_TITLE_PROBLEM_combined.csv'#the original full set \n","out_dir='TEST_converted_DESC_TITLE_PROB_MATHBERT'# name your own output dir\n","df_map_result=convert_test_result(orig_testPath,pred_testPath,org_dataPath,out_dir)\n","df_map_result.head()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"bCKX4RXSzrQb"},"source":["#### Evaluate for F1, Accuracy and AUC"]},{"cell_type":"code","metadata":{"id":"v9fVPmScznAb"},"source":["def match_top3_f1_acc_autoGrade(data_dir,label_Path,out_dir):\n"," from datetime import datetime\n"," import numpy as np\n"," import pandas as pd\n"," import os\n"," from sklearn.metrics import roc_curve, accuracy_score\n"," from sklearn.metrics import f1_score, roc_auc_score\n"," label_data=pd.read_csv(label_Path,names=['Index','label'],header=0)\n"," label_data_nna=label_data.dropna()\n"," \n"," print(f'test data shape{label_data.shape}')\n"," print(f'test data shape after dropping na {label_data_nna.shape}')\n"," for i, file in enumerate(os.listdir(data_dir)):\n"," \n"," pred_dataPath=os.path.join(data_dir,file)\n"," print('======')\n"," print(i, pred_dataPath)\n","\n"," if pred_dataPath.endswith('.csv'):\n"," \n"," pred_data=pd.read_csv(pred_dataPath,encoding='ISO-8859-1')\n"," print(f'predicted data shape: {pred_data.shape}')\n"," print('total {} classes are predicted as top class'.format(len(pred_data['top1'].unique())))\n"," match=pd.concat([pred_data,label_data_nna],axis=1)\n"," print(f'merged data shape{match.shape}')\n"," correct_top1=match[match['top1']==match['label']].top1.unique()\n"," correct_top2=match[match['top2']==match['label']].top2.unique()\n"," correct_top3=match[match['top3']==match['label']].top3.unique()\n"," correct_all=list(set(list(correct_top1)+list(correct_top2)+list(correct_top3)))\n"," print('Correct top 1 label {}'.format(len(correct_top1)))\n"," print('Correct top 2 label {}'.format(len(correct_top2)))\n"," print('Correct top 3 label {}'.format(len(correct_top3)))\n"," print('Total top 3 Correct labels {}'.format(len(correct_all)))\n"," match['matched1']=np.where(match['top1']==match['label'],1,0)\n"," match['matched2']=np.where((match['top1']==match['label']) | (match['top2']==match['label']),1,0)\n"," match['matched3']=np.where((match['top1']==match['label']) | (match['top2']==match['label']) | (match['top3']==match['label']),1,0)\n"," \n"," marker=file.split('.')[0]\n"," from pathlib import Path\n"," out_dir=Path(out_dir)\n"," out_dir.mkdir(exist_ok=True)\n"," match.to_csv('{}/{}_matched.csv'.format(out_dir,marker),index=False)\n"," # display(match.head())\n"," \n"," print('---Below is F1 Score(weighted)--')\n"," top1_f1=round(f1_score(match['label'], match['top1'], average='weighted')*100,3)\n"," print('Top1 label F1 score(weighted): {}%'.format(top1_f1))\n"," top2_f1=round(f1_score(match['label'], match['top2'], average='weighted')*100,3)\n"," print('Top1 label F1 score(weighted): {}%'.format(top2_f1+top1_f1))\n"," top3_f1=round(f1_score(match['label'], match['top3'], 
average='weighted')*100,3)\n"," print('Top3 label F1 score(weighted): {}%'.format(top3_f1+top2_f1+top1_f1))\n"," print ('---Below is Sklearn accuracy---')\n"," top1_accuracy=round(accuracy_score(match['label'], match['top1'])*100,3)\n"," print('Top1 label accuracy score: {}%'.format(top1_accuracy))\n"," top2_accuracy=round(accuracy_score(match['label'], match['top2'])*100,3)\n"," print('Top2 label accuracy score: {}%'.format(top2_accuracy+top1_accuracy))\n"," top3_accuracy=round(accuracy_score(match['label'], match['top3'])*100,3)\n"," print('Top3 label accuracy score: {}%'.format(top3_accuracy+top2_accuracy+top1_accuracy))"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"IJkloBmvzn3O"},"source":["def convert_auc_autoGrade(orig_testPath,pred_testDir,label_Path,org_dataPath,out_dir):\n"," from pathlib import Path\n"," import pandas as pd\n"," from sklearn.metrics import roc_auc_score\n"," import numpy as np\n"," import os\n"," #read the original test data for the text and id\n"," df_test = pd.read_csv(orig_testPath, sep='\\t',engine='python')\n"," df_test['guid']=df_test.iloc[:,0].astype(str)\n"," print(f'original test file has shape {df_test.shape}')\n"," print('----')\n"," out_dir=Path(out_dir)\n"," Path.mkdir(out_dir,exist_ok=True)\n"," #read the results data for the probabilities\n"," for i, f in enumerate(os.listdir(pred_testDir)):\n"," if f.endswith('test_results.tsv'):\n"," print(i, f)\n"," \n"," df_result = pd.read_csv(os.path.join(pred_testDir,f), sep='\\t', header=None)\n"," print(f'predicted test file has shape {df_result.shape}')\n"," label_data=pd.read_csv(label_Path,usecols=['label'])\n"," auc=round(roc_auc_score(label_data['label'], df_result,multi_class='ovo',average='weighted')*100,3)\n"," print(f'average auc for 5 classes is {auc}%!')\n"," print('----')"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"9ziMy7hd0KEU"},"source":["data_dir='TEST_converted_autoGrade_MATHBERT'\n","label_Path='auto_grade/test_labels.csv'\n","out_dir='AUTO_GRADE_matched_MATHBERT'\n","match_top3_f1_acc_autoGrade(data_dir,label_Path,out_dir)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"vCb-0-9C0-Yc"},"source":["orig_testPath='auto_grade/test.tsv'\n","pred_testDir='MathBERT/AUTO-GRADE/'\n","org_dataPath='auto_grade/auto_grade_original_full_data.csv'\n","label_Path='auto_grade/test_labels.csv'\n","out_dir='TEST_converted_autoGrade_MATHBERT'\n","convert_auc_autoGrade(orig_testPath,pred_testDir,label_Path,org_dataPath,out_dir)\n"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"2fjMdkE3e2ge"},"source":["orig_testPath='TAPT/skill_code_DESC_TITLE_PROBLEM/CoLA/test.tsv'\n","pred_testDir='EVAL_RESULT/MathBERT/skillCode_TAPT/'\n","org_dataPath='further-pre-training/CORPUS/ALL_GRADES/DESC_TITLE_PROBLEM_combined.csv'\n","label_Path='TAPT/skill_code_DESC_TITLE_PROBLEM/CoLA/test_labels.csv'\n","out_dir='TEST_converted_DESC_TITLE_PROB_MATHTAPT_V2'\n","convert_auc_skillCode(orig_testPath,pred_testDir,label_Path,org_dataPath,out_dir)\n","# df_map_result.head()"],"execution_count":null,"outputs":[]}]} -------------------------------------------------------------------------------- /mathbert/modeling.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | """The main BERT model and related functions.""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import collections 22 | import copy 23 | import json 24 | import math 25 | import re 26 | import numpy as np 27 | import six 28 | import tensorflow as tf 29 | 30 | 31 | class BertConfig(object): 32 | """Configuration for `BertModel`.""" 33 | 34 | def __init__(self, 35 | vocab_size, 36 | hidden_size=768, 37 | num_hidden_layers=12, 38 | num_attention_heads=12, 39 | intermediate_size=3072, 40 | hidden_act="gelu", 41 | hidden_dropout_prob=0.1, 42 | attention_probs_dropout_prob=0.1, 43 | max_position_embeddings=512, 44 | type_vocab_size=16, 45 | initializer_range=0.02): 46 | """Constructs BertConfig. 47 | 48 | Args: 49 | vocab_size: Vocabulary size of `inputs_ids` in `BertModel`. 50 | hidden_size: Size of the encoder layers and the pooler layer. 51 | num_hidden_layers: Number of hidden layers in the Transformer encoder. 52 | num_attention_heads: Number of attention heads for each attention layer in 53 | the Transformer encoder. 54 | intermediate_size: The size of the "intermediate" (i.e., feed-forward) 55 | layer in the Transformer encoder. 56 | hidden_act: The non-linear activation function (function or string) in the 57 | encoder and pooler. 58 | hidden_dropout_prob: The dropout probability for all fully connected 59 | layers in the embeddings, encoder, and pooler. 60 | attention_probs_dropout_prob: The dropout ratio for the attention 61 | probabilities. 62 | max_position_embeddings: The maximum sequence length that this model might 63 | ever be used with. Typically set this to something large just in case 64 | (e.g., 512 or 1024 or 2048). 65 | type_vocab_size: The vocabulary size of the `token_type_ids` passed into 66 | `BertModel`. 67 | initializer_range: The stdev of the truncated_normal_initializer for 68 | initializing all weight matrices. 
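    Example (editorial note, illustrative only; 30522 is simply a common
    WordPiece vocabulary size):
      config = BertConfig(vocab_size=30522)
      same_config = BertConfig.from_dict(config.to_dict())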
69 | """ 70 | self.vocab_size = vocab_size 71 | self.hidden_size = hidden_size 72 | self.num_hidden_layers = num_hidden_layers 73 | self.num_attention_heads = num_attention_heads 74 | self.hidden_act = hidden_act 75 | self.intermediate_size = intermediate_size 76 | self.hidden_dropout_prob = hidden_dropout_prob 77 | self.attention_probs_dropout_prob = attention_probs_dropout_prob 78 | self.max_position_embeddings = max_position_embeddings 79 | self.type_vocab_size = type_vocab_size 80 | self.initializer_range = initializer_range 81 | 82 | @classmethod 83 | def from_dict(cls, json_object): 84 | """Constructs a `BertConfig` from a Python dictionary of parameters.""" 85 | config = BertConfig(vocab_size=None) 86 | for (key, value) in six.iteritems(json_object): 87 | config.__dict__[key] = value 88 | return config 89 | 90 | @classmethod 91 | def from_json_file(cls, json_file): 92 | """Constructs a `BertConfig` from a json file of parameters.""" 93 | with tf.gfile.GFile(json_file, "r") as reader: 94 | text = reader.read() 95 | return cls.from_dict(json.loads(text)) 96 | 97 | def to_dict(self): 98 | """Serializes this instance to a Python dictionary.""" 99 | output = copy.deepcopy(self.__dict__) 100 | return output 101 | 102 | def to_json_string(self): 103 | """Serializes this instance to a JSON string.""" 104 | return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n" 105 | 106 | 107 | class BertModel(object): 108 | """BERT model ("Bidirectional Encoder Representations from Transformers"). 109 | 110 | Example usage: 111 | 112 | ```python 113 | # Already been converted into WordPiece token ids 114 | input_ids = tf.constant([[31, 51, 99], [15, 5, 0]]) 115 | input_mask = tf.constant([[1, 1, 1], [1, 1, 0]]) 116 | token_type_ids = tf.constant([[0, 0, 1], [0, 2, 0]]) 117 | 118 | config = modeling.BertConfig(vocab_size=32000, hidden_size=512, 119 | num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024) 120 | 121 | model = modeling.BertModel(config=config, is_training=True, 122 | input_ids=input_ids, input_mask=input_mask, token_type_ids=token_type_ids) 123 | 124 | label_embeddings = tf.get_variable(...) 125 | pooled_output = model.get_pooled_output() 126 | logits = tf.matmul(pooled_output, label_embeddings) 127 | ... 128 | ``` 129 | """ 130 | 131 | def __init__(self, 132 | config, 133 | is_training, 134 | input_ids, 135 | input_mask=None, 136 | token_type_ids=None, 137 | use_one_hot_embeddings=False, 138 | scope=None): 139 | """Constructor for BertModel. 140 | 141 | Args: 142 | config: `BertConfig` instance. 143 | is_training: bool. true for training model, false for eval model. Controls 144 | whether dropout will be applied. 145 | input_ids: int32 Tensor of shape [batch_size, seq_length]. 146 | input_mask: (optional) int32 Tensor of shape [batch_size, seq_length]. 147 | token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length]. 148 | use_one_hot_embeddings: (optional) bool. Whether to use one-hot word 149 | embeddings or tf.embedding_lookup() for the word embeddings. 150 | scope: (optional) variable scope. Defaults to "bert". 151 | 152 | Raises: 153 | ValueError: The config is invalid or one of the input tensor shapes 154 | is invalid. 
155 | """ 156 | config = copy.deepcopy(config) 157 | if not is_training: 158 | config.hidden_dropout_prob = 0.0 159 | config.attention_probs_dropout_prob = 0.0 160 | 161 | input_shape = get_shape_list(input_ids, expected_rank=2) 162 | batch_size = input_shape[0] 163 | seq_length = input_shape[1] 164 | 165 | if input_mask is None: 166 | input_mask = tf.ones(shape=[batch_size, seq_length], dtype=tf.int32) 167 | 168 | if token_type_ids is None: 169 | token_type_ids = tf.zeros(shape=[batch_size, seq_length], dtype=tf.int32) 170 | 171 | with tf.variable_scope(scope, default_name="bert"): 172 | with tf.variable_scope("embeddings"): 173 | # Perform embedding lookup on the word ids. 174 | (self.embedding_output, self.embedding_table) = embedding_lookup( 175 | input_ids=input_ids, 176 | vocab_size=config.vocab_size, 177 | embedding_size=config.hidden_size, 178 | initializer_range=config.initializer_range, 179 | word_embedding_name="word_embeddings", 180 | use_one_hot_embeddings=use_one_hot_embeddings) 181 | 182 | # Add positional embeddings and token type embeddings, then layer 183 | # normalize and perform dropout. 184 | self.embedding_output = embedding_postprocessor( 185 | input_tensor=self.embedding_output, 186 | use_token_type=True, 187 | token_type_ids=token_type_ids, 188 | token_type_vocab_size=config.type_vocab_size, 189 | token_type_embedding_name="token_type_embeddings", 190 | use_position_embeddings=True, 191 | position_embedding_name="position_embeddings", 192 | initializer_range=config.initializer_range, 193 | max_position_embeddings=config.max_position_embeddings, 194 | dropout_prob=config.hidden_dropout_prob) 195 | 196 | with tf.variable_scope("encoder"): 197 | # This converts a 2D mask of shape [batch_size, seq_length] to a 3D 198 | # mask of shape [batch_size, seq_length, seq_length] which is used 199 | # for the attention scores. 200 | attention_mask = create_attention_mask_from_input_mask( 201 | input_ids, input_mask) 202 | 203 | # Run the stacked transformer. 204 | # `sequence_output` shape = [batch_size, seq_length, hidden_size]. 205 | self.all_encoder_layers = transformer_model( 206 | input_tensor=self.embedding_output, 207 | attention_mask=attention_mask, 208 | hidden_size=config.hidden_size, 209 | num_hidden_layers=config.num_hidden_layers, 210 | num_attention_heads=config.num_attention_heads, 211 | intermediate_size=config.intermediate_size, 212 | intermediate_act_fn=get_activation(config.hidden_act), 213 | hidden_dropout_prob=config.hidden_dropout_prob, 214 | attention_probs_dropout_prob=config.attention_probs_dropout_prob, 215 | initializer_range=config.initializer_range, 216 | do_return_all_layers=True) 217 | 218 | self.sequence_output = self.all_encoder_layers[-1] 219 | # The "pooler" converts the encoded sequence tensor of shape 220 | # [batch_size, seq_length, hidden_size] to a tensor of shape 221 | # [batch_size, hidden_size]. This is necessary for segment-level 222 | # (or segment-pair-level) classification tasks where we need a fixed 223 | # dimensional representation of the segment. 224 | with tf.variable_scope("pooler"): 225 | # We "pool" the model by simply taking the hidden state corresponding 226 | # to the first token. 
We assume that this has been pre-trained 227 | first_token_tensor = tf.squeeze(self.sequence_output[:, 0:1, :], axis=1) 228 | self.pooled_output = tf.layers.dense( 229 | first_token_tensor, 230 | config.hidden_size, 231 | activation=tf.tanh, 232 | kernel_initializer=create_initializer(config.initializer_range)) 233 | 234 | def get_pooled_output(self): 235 | return self.pooled_output 236 | 237 | def get_sequence_output(self): 238 | """Gets final hidden layer of encoder. 239 | 240 | Returns: 241 | float Tensor of shape [batch_size, seq_length, hidden_size] corresponding 242 | to the final hidden layer of the transformer encoder. 243 | """ 244 | return self.sequence_output 245 | 246 | def get_all_encoder_layers(self): 247 | return self.all_encoder_layers 248 | 249 | def get_embedding_output(self): 250 | """Gets output of the embedding lookup (i.e., input to the transformer). 251 | 252 | Returns: 253 | float Tensor of shape [batch_size, seq_length, hidden_size] corresponding 254 | to the output of the embedding layer, after summing the word 255 | embeddings with the positional embeddings and the token type embeddings, 256 | then performing layer normalization. This is the input to the transformer. 257 | """ 258 | return self.embedding_output 259 | 260 | def get_embedding_table(self): 261 | return self.embedding_table 262 | 263 | 264 | def gelu(x): 265 | """Gaussian Error Linear Unit. 266 | 267 | This is a smoother version of the ReLU. 268 | Original paper: https://arxiv.org/abs/1606.08415 269 | Args: 270 | x: float Tensor to perform activation. 271 | 272 | Returns: 273 | `x` with the GELU activation applied. 274 | """ 275 | cdf = 0.5 * (1.0 + tf.tanh( 276 | (np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))) 277 | return x * cdf 278 | 279 | 280 | def get_activation(activation_string): 281 | """Maps a string to a Python function, e.g., "relu" => `tf.nn.relu`. 282 | 283 | Args: 284 | activation_string: String name of the activation function. 285 | 286 | Returns: 287 | A Python function corresponding to the activation function. If 288 | `activation_string` is None, empty, or "linear", this will return None. 289 | If `activation_string` is not a string, it will return `activation_string`. 290 | 291 | Raises: 292 | ValueError: The `activation_string` does not correspond to a known 293 | activation. 294 | """ 295 | 296 | # We assume that anything that's not a string is already an activation 297 | # function, so we just return it. 
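# Editorial example (illustrative, not part of the original source); the
# branches below imply:
#   get_activation("gelu") is gelu             -> True
#   get_activation("linear") is None           -> True
#   get_activation(tf.nn.relu) is tf.nn.relu   -> True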
298 | if not isinstance(activation_string, six.string_types): 299 | return activation_string 300 | 301 | if not activation_string: 302 | return None 303 | 304 | act = activation_string.lower() 305 | if act == "linear": 306 | return None 307 | elif act == "relu": 308 | return tf.nn.relu 309 | elif act == "gelu": 310 | return gelu 311 | elif act == "tanh": 312 | return tf.tanh 313 | else: 314 | raise ValueError("Unsupported activation: %s" % act) 315 | 316 | 317 | def get_assignment_map_from_checkpoint(tvars, init_checkpoint): 318 | """Compute the union of the current variables and checkpoint variables.""" 319 | assignment_map = {} 320 | initialized_variable_names = {} 321 | 322 | name_to_variable = collections.OrderedDict() 323 | for var in tvars: 324 | name = var.name 325 | m = re.match("^(.*):\\d+$", name) 326 | if m is not None: 327 | name = m.group(1) 328 | name_to_variable[name] = var 329 | 330 | init_vars = tf.train.list_variables(init_checkpoint) 331 | 332 | assignment_map = collections.OrderedDict() 333 | for x in init_vars: 334 | (name, var) = (x[0], x[1]) 335 | if name not in name_to_variable: 336 | continue 337 | assignment_map[name] = name 338 | initialized_variable_names[name] = 1 339 | initialized_variable_names[name + ":0"] = 1 340 | 341 | return (assignment_map, initialized_variable_names) 342 | 343 | 344 | def dropout(input_tensor, dropout_prob): 345 | """Perform dropout. 346 | 347 | Args: 348 | input_tensor: float Tensor. 349 | dropout_prob: Python float. The probability of dropping out a value (NOT of 350 | *keeping* a dimension as in `tf.nn.dropout`). 351 | 352 | Returns: 353 | A version of `input_tensor` with dropout applied. 354 | """ 355 | if dropout_prob is None or dropout_prob == 0.0: 356 | return input_tensor 357 | 358 | output = tf.nn.dropout(input_tensor, 1.0 - dropout_prob) 359 | return output 360 | 361 | 362 | def layer_norm(input_tensor, name=None): 363 | """Run layer normalization on the last dimension of the tensor.""" 364 | return tf.contrib.layers.layer_norm( 365 | inputs=input_tensor, begin_norm_axis=-1, begin_params_axis=-1, scope=name) 366 | 367 | 368 | def layer_norm_and_dropout(input_tensor, dropout_prob, name=None): 369 | """Runs layer normalization followed by dropout.""" 370 | output_tensor = layer_norm(input_tensor, name) 371 | output_tensor = dropout(output_tensor, dropout_prob) 372 | return output_tensor 373 | 374 | 375 | def create_initializer(initializer_range=0.02): 376 | """Creates a `truncated_normal_initializer` with the given range.""" 377 | return tf.truncated_normal_initializer(stddev=initializer_range) 378 | 379 | 380 | def embedding_lookup(input_ids, 381 | vocab_size, 382 | embedding_size=128, 383 | initializer_range=0.02, 384 | word_embedding_name="word_embeddings", 385 | use_one_hot_embeddings=False): 386 | """Looks up words embeddings for id tensor. 387 | 388 | Args: 389 | input_ids: int32 Tensor of shape [batch_size, seq_length] containing word 390 | ids. 391 | vocab_size: int. Size of the embedding vocabulary. 392 | embedding_size: int. Width of the word embeddings. 393 | initializer_range: float. Embedding initialization range. 394 | word_embedding_name: string. Name of the embedding table. 395 | use_one_hot_embeddings: bool. If True, use one-hot method for word 396 | embeddings. If False, use `tf.gather()`. 397 | 398 | Returns: 399 | float Tensor of shape [batch_size, seq_length, embedding_size]. 400 | """ 401 | # This function assumes that the input is of shape [batch_size, seq_length, 402 | # num_inputs]. 
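# Editorial shape sketch (illustrative): for `input_ids` of shape [2, 128]
# and embedding_size=768, the returned `output` has shape [2, 128, 768] and
# `embedding_table` has shape [vocab_size, 768].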
403 | # 404 | # If the input is a 2D tensor of shape [batch_size, seq_length], we 405 | # reshape to [batch_size, seq_length, 1]. 406 | if input_ids.shape.ndims == 2: 407 | input_ids = tf.expand_dims(input_ids, axis=[-1]) 408 | 409 | embedding_table = tf.get_variable( 410 | name=word_embedding_name, 411 | shape=[vocab_size, embedding_size], 412 | initializer=create_initializer(initializer_range)) 413 | 414 | flat_input_ids = tf.reshape(input_ids, [-1]) 415 | if use_one_hot_embeddings: 416 | one_hot_input_ids = tf.one_hot(flat_input_ids, depth=vocab_size) 417 | output = tf.matmul(one_hot_input_ids, embedding_table) 418 | else: 419 | output = tf.gather(embedding_table, flat_input_ids) 420 | 421 | input_shape = get_shape_list(input_ids) 422 | 423 | output = tf.reshape(output, 424 | input_shape[0:-1] + [input_shape[-1] * embedding_size]) 425 | return (output, embedding_table) 426 | 427 | 428 | def embedding_postprocessor(input_tensor, 429 | use_token_type=False, 430 | token_type_ids=None, 431 | token_type_vocab_size=16, 432 | token_type_embedding_name="token_type_embeddings", 433 | use_position_embeddings=True, 434 | position_embedding_name="position_embeddings", 435 | initializer_range=0.02, 436 | max_position_embeddings=512, 437 | dropout_prob=0.1): 438 | """Performs various post-processing on a word embedding tensor. 439 | 440 | Args: 441 | input_tensor: float Tensor of shape [batch_size, seq_length, 442 | embedding_size]. 443 | use_token_type: bool. Whether to add embeddings for `token_type_ids`. 444 | token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length]. 445 | Must be specified if `use_token_type` is True. 446 | token_type_vocab_size: int. The vocabulary size of `token_type_ids`. 447 | token_type_embedding_name: string. The name of the embedding table variable 448 | for token type ids. 449 | use_position_embeddings: bool. Whether to add position embeddings for the 450 | position of each token in the sequence. 451 | position_embedding_name: string. The name of the embedding table variable 452 | for positional embeddings. 453 | initializer_range: float. Range of the weight initialization. 454 | max_position_embeddings: int. Maximum sequence length that might ever be 455 | used with this model. This can be longer than the sequence length of 456 | input_tensor, but cannot be shorter. 457 | dropout_prob: float. Dropout probability applied to the final output tensor. 458 | 459 | Returns: 460 | float tensor with same shape as `input_tensor`. 461 | 462 | Raises: 463 | ValueError: One of the tensor shapes or input values is invalid. 464 | """ 465 | input_shape = get_shape_list(input_tensor, expected_rank=3) 466 | batch_size = input_shape[0] 467 | seq_length = input_shape[1] 468 | width = input_shape[2] 469 | 470 | output = input_tensor 471 | 472 | if use_token_type: 473 | if token_type_ids is None: 474 | raise ValueError("`token_type_ids` must be specified if" 475 | "`use_token_type` is True.") 476 | token_type_table = tf.get_variable( 477 | name=token_type_embedding_name, 478 | shape=[token_type_vocab_size, width], 479 | initializer=create_initializer(initializer_range)) 480 | # This vocab will be small so we always do one-hot here, since it is always 481 | # faster for a small vocabulary. 
482 | flat_token_type_ids = tf.reshape(token_type_ids, [-1]) 483 | one_hot_ids = tf.one_hot(flat_token_type_ids, depth=token_type_vocab_size) 484 | token_type_embeddings = tf.matmul(one_hot_ids, token_type_table) 485 | token_type_embeddings = tf.reshape(token_type_embeddings, 486 | [batch_size, seq_length, width]) 487 | output += token_type_embeddings 488 | 489 | if use_position_embeddings: 490 | assert_op = tf.assert_less_equal(seq_length, max_position_embeddings) 491 | with tf.control_dependencies([assert_op]): 492 | full_position_embeddings = tf.get_variable( 493 | name=position_embedding_name, 494 | shape=[max_position_embeddings, width], 495 | initializer=create_initializer(initializer_range)) 496 | # Since the position embedding table is a learned variable, we create it 497 | # using a (long) sequence length `max_position_embeddings`. The actual 498 | # sequence length might be shorter than this, for faster training of 499 | # tasks that do not have long sequences. 500 | # 501 | # So `full_position_embeddings` is effectively an embedding table 502 | # for position [0, 1, 2, ..., max_position_embeddings-1], and the current 503 | # sequence has positions [0, 1, 2, ... seq_length-1], so we can just 504 | # perform a slice. 505 | position_embeddings = tf.slice(full_position_embeddings, [0, 0], 506 | [seq_length, -1]) 507 | num_dims = len(output.shape.as_list()) 508 | 509 | # Only the last two dimensions are relevant (`seq_length` and `width`), so 510 | # we broadcast among the first dimensions, which is typically just 511 | # the batch size. 512 | position_broadcast_shape = [] 513 | for _ in range(num_dims - 2): 514 | position_broadcast_shape.append(1) 515 | position_broadcast_shape.extend([seq_length, width]) 516 | position_embeddings = tf.reshape(position_embeddings, 517 | position_broadcast_shape) 518 | output += position_embeddings 519 | 520 | output = layer_norm_and_dropout(output, dropout_prob) 521 | return output 522 | 523 | 524 | def create_attention_mask_from_input_mask(from_tensor, to_mask): 525 | """Create 3D attention mask from a 2D tensor mask. 526 | 527 | Args: 528 | from_tensor: 2D or 3D Tensor of shape [batch_size, from_seq_length, ...]. 529 | to_mask: int32 Tensor of shape [batch_size, to_seq_length]. 530 | 531 | Returns: 532 | float Tensor of shape [batch_size, from_seq_length, to_seq_length]. 533 | """ 534 | from_shape = get_shape_list(from_tensor, expected_rank=[2, 3]) 535 | batch_size = from_shape[0] 536 | from_seq_length = from_shape[1] 537 | 538 | to_shape = get_shape_list(to_mask, expected_rank=2) 539 | to_seq_length = to_shape[1] 540 | 541 | to_mask = tf.cast( 542 | tf.reshape(to_mask, [batch_size, 1, to_seq_length]), tf.float32) 543 | 544 | # We don't assume that `from_tensor` is a mask (although it could be). We 545 | # don't actually care if we attend *from* padding tokens (only *to* padding 546 | # tokens), so we create a tensor of all ones. 547 | # 548 | # `broadcast_ones` = [batch_size, from_seq_length, 1] 549 | broadcast_ones = tf.ones( 550 | shape=[batch_size, from_seq_length, 1], dtype=tf.float32) 551 | 552 | # Here we broadcast along two dimensions to create the mask. 
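# Editorial shape sketch (illustrative):
#   [batch_size, from_seq_length, 1] * [batch_size, 1, to_seq_length]
#     -> [batch_size, from_seq_length, to_seq_length]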
553 | mask = broadcast_ones * to_mask 554 | 555 | return mask 556 | 557 | 558 | def attention_layer(from_tensor, 559 | to_tensor, 560 | attention_mask=None, 561 | num_attention_heads=1, 562 | size_per_head=512, 563 | query_act=None, 564 | key_act=None, 565 | value_act=None, 566 | attention_probs_dropout_prob=0.0, 567 | initializer_range=0.02, 568 | do_return_2d_tensor=False, 569 | batch_size=None, 570 | from_seq_length=None, 571 | to_seq_length=None): 572 | """Performs multi-headed attention from `from_tensor` to `to_tensor`. 573 | 574 | This is an implementation of multi-headed attention based on "Attention 575 | Is All You Need". If `from_tensor` and `to_tensor` are the same, then 576 | this is self-attention. Each timestep in `from_tensor` attends to the 577 | corresponding sequence in `to_tensor`, and returns a fixed-width vector. 578 | 579 | This function first projects `from_tensor` into a "query" tensor and 580 | `to_tensor` into "key" and "value" tensors. These are (effectively) a list 581 | of tensors of length `num_attention_heads`, where each tensor is of shape 582 | [batch_size, seq_length, size_per_head]. 583 | 584 | Then, the query and key tensors are dot-producted and scaled. These are 585 | softmaxed to obtain attention probabilities. The value tensors are then 586 | interpolated by these probabilities, then concatenated back to a single 587 | tensor and returned. 588 | 589 | In practice, the multi-headed attention is done with transposes and 590 | reshapes rather than actual separate tensors. 591 | 592 | Args: 593 | from_tensor: float Tensor of shape [batch_size, from_seq_length, 594 | from_width]. 595 | to_tensor: float Tensor of shape [batch_size, to_seq_length, to_width]. 596 | attention_mask: (optional) int32 Tensor of shape [batch_size, 597 | from_seq_length, to_seq_length]. The values should be 1 or 0. The 598 | attention scores will effectively be set to -infinity for any positions in 599 | the mask that are 0, and will be unchanged for positions that are 1. 600 | num_attention_heads: int. Number of attention heads. 601 | size_per_head: int. Size of each attention head. 602 | query_act: (optional) Activation function for the query transform. 603 | key_act: (optional) Activation function for the key transform. 604 | value_act: (optional) Activation function for the value transform. 605 | attention_probs_dropout_prob: (optional) float. Dropout probability of the 606 | attention probabilities. 607 | initializer_range: float. Range of the weight initializer. 608 | do_return_2d_tensor: bool. If True, the output will be of shape [batch_size 609 | * from_seq_length, num_attention_heads * size_per_head]. If False, the 610 | output will be of shape [batch_size, from_seq_length, num_attention_heads 611 | * size_per_head]. 612 | batch_size: (Optional) int. If the input is 2D, this might be the batch size 613 | of the 3D version of the `from_tensor` and `to_tensor`. 614 | from_seq_length: (Optional) If the input is 2D, this might be the seq length 615 | of the 3D version of the `from_tensor`. 616 | to_seq_length: (Optional) If the input is 2D, this might be the seq length 617 | of the 3D version of the `to_tensor`. 618 | 619 | Returns: 620 | float Tensor of shape [batch_size, from_seq_length, 621 | num_attention_heads * size_per_head]. (If `do_return_2d_tensor` is 622 | true, this will be of shape [batch_size * from_seq_length, 623 | num_attention_heads * size_per_head]). 624 | 625 | Raises: 626 | ValueError: Any of the arguments or tensor shapes are invalid. 
627 | """ 628 | 629 | def transpose_for_scores(input_tensor, batch_size, num_attention_heads, 630 | seq_length, width): 631 | output_tensor = tf.reshape( 632 | input_tensor, [batch_size, seq_length, num_attention_heads, width]) 633 | 634 | output_tensor = tf.transpose(output_tensor, [0, 2, 1, 3]) 635 | return output_tensor 636 | 637 | from_shape = get_shape_list(from_tensor, expected_rank=[2, 3]) 638 | to_shape = get_shape_list(to_tensor, expected_rank=[2, 3]) 639 | 640 | if len(from_shape) != len(to_shape): 641 | raise ValueError( 642 | "The rank of `from_tensor` must match the rank of `to_tensor`.") 643 | 644 | if len(from_shape) == 3: 645 | batch_size = from_shape[0] 646 | from_seq_length = from_shape[1] 647 | to_seq_length = to_shape[1] 648 | elif len(from_shape) == 2: 649 | if (batch_size is None or from_seq_length is None or to_seq_length is None): 650 | raise ValueError( 651 | "When passing in rank 2 tensors to attention_layer, the values " 652 | "for `batch_size`, `from_seq_length`, and `to_seq_length` " 653 | "must all be specified.") 654 | 655 | # Scalar dimensions referenced here: 656 | # B = batch size (number of sequences) 657 | # F = `from_tensor` sequence length 658 | # T = `to_tensor` sequence length 659 | # N = `num_attention_heads` 660 | # H = `size_per_head` 661 | 662 | from_tensor_2d = reshape_to_matrix(from_tensor) 663 | to_tensor_2d = reshape_to_matrix(to_tensor) 664 | 665 | # `query_layer` = [B*F, N*H] 666 | query_layer = tf.layers.dense( 667 | from_tensor_2d, 668 | num_attention_heads * size_per_head, 669 | activation=query_act, 670 | name="query", 671 | kernel_initializer=create_initializer(initializer_range)) 672 | 673 | # `key_layer` = [B*T, N*H] 674 | key_layer = tf.layers.dense( 675 | to_tensor_2d, 676 | num_attention_heads * size_per_head, 677 | activation=key_act, 678 | name="key", 679 | kernel_initializer=create_initializer(initializer_range)) 680 | 681 | # `value_layer` = [B*T, N*H] 682 | value_layer = tf.layers.dense( 683 | to_tensor_2d, 684 | num_attention_heads * size_per_head, 685 | activation=value_act, 686 | name="value", 687 | kernel_initializer=create_initializer(initializer_range)) 688 | 689 | # `query_layer` = [B, N, F, H] 690 | query_layer = transpose_for_scores(query_layer, batch_size, 691 | num_attention_heads, from_seq_length, 692 | size_per_head) 693 | 694 | # `key_layer` = [B, N, T, H] 695 | key_layer = transpose_for_scores(key_layer, batch_size, num_attention_heads, 696 | to_seq_length, size_per_head) 697 | 698 | # Take the dot product between "query" and "key" to get the raw 699 | # attention scores. 700 | # `attention_scores` = [B, N, F, T] 701 | attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) 702 | attention_scores = tf.multiply(attention_scores, 703 | 1.0 / math.sqrt(float(size_per_head))) 704 | 705 | if attention_mask is not None: 706 | # `attention_mask` = [B, 1, F, T] 707 | attention_mask = tf.expand_dims(attention_mask, axis=[1]) 708 | 709 | # Since attention_mask is 1.0 for positions we want to attend and 0.0 for 710 | # masked positions, this operation will create a tensor which is 0.0 for 711 | # positions we want to attend and -10000.0 for masked positions. 712 | adder = (1.0 - tf.cast(attention_mask, tf.float32)) * -10000.0 713 | 714 | # Since we are adding it to the raw scores before the softmax, this is 715 | # effectively the same as removing these entirely. 716 | attention_scores += adder 717 | 718 | # Normalize the attention scores to probabilities. 
719 | # `attention_probs` = [B, N, F, T] 720 | attention_probs = tf.nn.softmax(attention_scores) 721 | 722 | # This is actually dropping out entire tokens to attend to, which might 723 | # seem a bit unusual, but is taken from the original Transformer paper. 724 | attention_probs = dropout(attention_probs, attention_probs_dropout_prob) 725 | 726 | # `value_layer` = [B, T, N, H] 727 | value_layer = tf.reshape( 728 | value_layer, 729 | [batch_size, to_seq_length, num_attention_heads, size_per_head]) 730 | 731 | # `value_layer` = [B, N, T, H] 732 | value_layer = tf.transpose(value_layer, [0, 2, 1, 3]) 733 | 734 | # `context_layer` = [B, N, F, H] 735 | context_layer = tf.matmul(attention_probs, value_layer) 736 | 737 | # `context_layer` = [B, F, N, H] 738 | context_layer = tf.transpose(context_layer, [0, 2, 1, 3]) 739 | 740 | if do_return_2d_tensor: 741 | # `context_layer` = [B*F, N*H] 742 | context_layer = tf.reshape( 743 | context_layer, 744 | [batch_size * from_seq_length, num_attention_heads * size_per_head]) 745 | else: 746 | # `context_layer` = [B, F, N*H] 747 | context_layer = tf.reshape( 748 | context_layer, 749 | [batch_size, from_seq_length, num_attention_heads * size_per_head]) 750 | 751 | return context_layer 752 | 753 | 754 | def transformer_model(input_tensor, 755 | attention_mask=None, 756 | hidden_size=768, 757 | num_hidden_layers=12, 758 | num_attention_heads=12, 759 | intermediate_size=3072, 760 | intermediate_act_fn=gelu, 761 | hidden_dropout_prob=0.1, 762 | attention_probs_dropout_prob=0.1, 763 | initializer_range=0.02, 764 | do_return_all_layers=False): 765 | """Multi-headed, multi-layer Transformer from "Attention is All You Need". 766 | 767 | This is almost an exact implementation of the original Transformer encoder. 768 | 769 | See the original paper: 770 | https://arxiv.org/abs/1706.03762 771 | 772 | Also see: 773 | https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py 774 | 775 | Args: 776 | input_tensor: float Tensor of shape [batch_size, seq_length, hidden_size]. 777 | attention_mask: (optional) int32 Tensor of shape [batch_size, seq_length, 778 | seq_length], with 1 for positions that can be attended to and 0 in 779 | positions that should not be. 780 | hidden_size: int. Hidden size of the Transformer. 781 | num_hidden_layers: int. Number of layers (blocks) in the Transformer. 782 | num_attention_heads: int. Number of attention heads in the Transformer. 783 | intermediate_size: int. The size of the "intermediate" (a.k.a., feed 784 | forward) layer. 785 | intermediate_act_fn: function. The non-linear activation function to apply 786 | to the output of the intermediate/feed-forward layer. 787 | hidden_dropout_prob: float. Dropout probability for the hidden layers. 788 | attention_probs_dropout_prob: float. Dropout probability of the attention 789 | probabilities. 790 | initializer_range: float. Range of the initializer (stddev of truncated 791 | normal). 792 | do_return_all_layers: Whether to also return all layers or just the final 793 | layer. 794 | 795 | Returns: 796 | float Tensor of shape [batch_size, seq_length, hidden_size], the final 797 | hidden layer of the Transformer. 798 | 799 | Raises: 800 | ValueError: A Tensor shape or parameter is invalid. 
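  Example (editorial note, illustrative only; `embeddings` and `mask` are
  assumed to be tensors of shape [2, 128, 768] and [2, 128, 128]):
    final_layer = transformer_model(input_tensor=embeddings,
                                    attention_mask=mask)
    # final_layer: [2, 128, 768]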
801 | """ 802 | if hidden_size % num_attention_heads != 0: 803 | raise ValueError( 804 | "The hidden size (%d) is not a multiple of the number of attention " 805 | "heads (%d)" % (hidden_size, num_attention_heads)) 806 | 807 | attention_head_size = int(hidden_size / num_attention_heads) 808 | input_shape = get_shape_list(input_tensor, expected_rank=3) 809 | batch_size = input_shape[0] 810 | seq_length = input_shape[1] 811 | input_width = input_shape[2] 812 | 813 | # The Transformer performs sum residuals on all layers so the input needs 814 | # to be the same as the hidden size. 815 | if input_width != hidden_size: 816 | raise ValueError("The width of the input tensor (%d) != hidden size (%d)" % 817 | (input_width, hidden_size)) 818 | 819 | # We keep the representation as a 2D tensor to avoid re-shaping it back and 820 | # forth from a 3D tensor to a 2D tensor. Re-shapes are normally free on 821 | # the GPU/CPU but may not be free on the TPU, so we want to minimize them to 822 | # help the optimizer. 823 | prev_output = reshape_to_matrix(input_tensor) 824 | 825 | all_layer_outputs = [] 826 | for layer_idx in range(num_hidden_layers): 827 | with tf.variable_scope("layer_%d" % layer_idx): 828 | layer_input = prev_output 829 | 830 | with tf.variable_scope("attention"): 831 | attention_heads = [] 832 | with tf.variable_scope("self"): 833 | attention_head = attention_layer( 834 | from_tensor=layer_input, 835 | to_tensor=layer_input, 836 | attention_mask=attention_mask, 837 | num_attention_heads=num_attention_heads, 838 | size_per_head=attention_head_size, 839 | attention_probs_dropout_prob=attention_probs_dropout_prob, 840 | initializer_range=initializer_range, 841 | do_return_2d_tensor=True, 842 | batch_size=batch_size, 843 | from_seq_length=seq_length, 844 | to_seq_length=seq_length) 845 | attention_heads.append(attention_head) 846 | 847 | attention_output = None 848 | if len(attention_heads) == 1: 849 | attention_output = attention_heads[0] 850 | else: 851 | # In the case where we have other sequences, we just concatenate 852 | # them to the self-attention head before the projection. 853 | attention_output = tf.concat(attention_heads, axis=-1) 854 | 855 | # Run a linear projection of `hidden_size` then add a residual 856 | # with `layer_input`. 857 | with tf.variable_scope("output"): 858 | attention_output = tf.layers.dense( 859 | attention_output, 860 | hidden_size, 861 | kernel_initializer=create_initializer(initializer_range)) 862 | attention_output = dropout(attention_output, hidden_dropout_prob) 863 | attention_output = layer_norm(attention_output + layer_input) 864 | 865 | # The activation is only applied to the "intermediate" hidden layer. 866 | with tf.variable_scope("intermediate"): 867 | intermediate_output = tf.layers.dense( 868 | attention_output, 869 | intermediate_size, 870 | activation=intermediate_act_fn, 871 | kernel_initializer=create_initializer(initializer_range)) 872 | 873 | # Down-project back to `hidden_size` then add the residual. 
874 | with tf.variable_scope("output"): 875 | layer_output = tf.layers.dense( 876 | intermediate_output, 877 | hidden_size, 878 | kernel_initializer=create_initializer(initializer_range)) 879 | layer_output = dropout(layer_output, hidden_dropout_prob) 880 | layer_output = layer_norm(layer_output + attention_output) 881 | prev_output = layer_output 882 | all_layer_outputs.append(layer_output) 883 | 884 | if do_return_all_layers: 885 | final_outputs = [] 886 | for layer_output in all_layer_outputs: 887 | final_output = reshape_from_matrix(layer_output, input_shape) 888 | final_outputs.append(final_output) 889 | return final_outputs 890 | else: 891 | final_output = reshape_from_matrix(prev_output, input_shape) 892 | return final_output 893 | 894 | 895 | def get_shape_list(tensor, expected_rank=None, name=None): 896 | """Returns a list of the shape of tensor, preferring static dimensions. 897 | 898 | Args: 899 | tensor: A tf.Tensor object to find the shape of. 900 | expected_rank: (optional) int. The expected rank of `tensor`. If this is 901 | specified and the `tensor` has a different rank, an exception will be 902 | thrown. 903 | name: Optional name of the tensor for the error message. 904 | 905 | Returns: 906 | A list of dimensions of the shape of tensor. All static dimensions will 907 | be returned as python integers, and dynamic dimensions will be returned 908 | as tf.Tensor scalars. 909 | """ 910 | if name is None: 911 | name = tensor.name 912 | 913 | if expected_rank is not None: 914 | assert_rank(tensor, expected_rank, name) 915 | 916 | shape = tensor.shape.as_list() 917 | 918 | non_static_indexes = [] 919 | for (index, dim) in enumerate(shape): 920 | if dim is None: 921 | non_static_indexes.append(index) 922 | 923 | if not non_static_indexes: 924 | return shape 925 | 926 | dyn_shape = tf.shape(tensor) 927 | for index in non_static_indexes: 928 | shape[index] = dyn_shape[index] 929 | return shape 930 | 931 | 932 | def reshape_to_matrix(input_tensor): 933 | """Reshapes a >= rank 2 tensor to a rank 2 tensor (i.e., a matrix).""" 934 | ndims = input_tensor.shape.ndims 935 | if ndims < 2: 936 | raise ValueError("Input tensor must have at least rank 2. Shape = %s" % 937 | (input_tensor.shape)) 938 | if ndims == 2: 939 | return input_tensor 940 | 941 | width = input_tensor.shape[-1] 942 | output_tensor = tf.reshape(input_tensor, [-1, width]) 943 | return output_tensor 944 | 945 | 946 | def reshape_from_matrix(output_tensor, orig_shape_list): 947 | """Reshapes a rank 2 tensor back to its original rank >= 2 tensor.""" 948 | if len(orig_shape_list) == 2: 949 | return output_tensor 950 | 951 | output_shape = get_shape_list(output_tensor) 952 | 953 | orig_dims = orig_shape_list[0:-1] 954 | width = output_shape[-1] 955 | 956 | return tf.reshape(output_tensor, orig_dims + [width]) 957 | 958 | 959 | def assert_rank(tensor, expected_rank, name=None): 960 | """Raises an exception if the tensor rank is not of the expected rank. 961 | 962 | Args: 963 | tensor: A tf.Tensor to check the rank of. 964 | expected_rank: Python integer or list of integers, expected rank. 965 | name: Optional name of the tensor for the error message. 966 | 967 | Raises: 968 | ValueError: If the expected shape doesn't match the actual shape. 
969 | """ 970 | if name is None: 971 | name = tensor.name 972 | 973 | expected_rank_dict = {} 974 | if isinstance(expected_rank, six.integer_types): 975 | expected_rank_dict[expected_rank] = True 976 | else: 977 | for x in expected_rank: 978 | expected_rank_dict[x] = True 979 | 980 | actual_rank = tensor.shape.ndims 981 | if actual_rank not in expected_rank_dict: 982 | scope_name = tf.get_variable_scope().name 983 | raise ValueError( 984 | "For the tensor `%s` in scope `%s`, the actual rank " 985 | "`%d` (shape = %s) is not equal to the expected rank `%s`" % 986 | (name, scope_name, actual_rank, str(tensor.shape), str(expected_rank))) 987 | -------------------------------------------------------------------------------- /mathbert/run_classifier.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | """BERT finetuning runner.""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import collections 22 | import csv 23 | import os 24 | import modeling 25 | import optimization 26 | import tokenization 27 | import tensorflow as tf 28 | 29 | flags = tf.flags 30 | 31 | FLAGS = flags.FLAGS 32 | 33 | ## Required parameters 34 | flags.DEFINE_string( 35 | "data_dir", None, 36 | "The input data dir. Should contain the .tsv files (or other data files) " 37 | "for the task.") 38 | 39 | flags.DEFINE_string( 40 | "bert_config_file", None, 41 | "The config json file corresponding to the pre-trained BERT model. " 42 | "This specifies the model architecture.") 43 | 44 | flags.DEFINE_string("task_name", None, "The name of the task to train.") 45 | 46 | flags.DEFINE_string("vocab_file", None, 47 | "The vocabulary file that the BERT model was trained on.") 48 | 49 | flags.DEFINE_string( 50 | "output_dir", None, 51 | "The output directory where the model checkpoints will be written.") 52 | 53 | ## Other parameters 54 | 55 | flags.DEFINE_string( 56 | "init_checkpoint", None, 57 | "Initial checkpoint (usually from a pre-trained BERT model).") 58 | 59 | flags.DEFINE_bool( 60 | "do_lower_case", True, 61 | "Whether to lower case the input text. Should be True for uncased " 62 | "models and False for cased models.") 63 | 64 | flags.DEFINE_integer( 65 | "max_seq_length", 128, 66 | "The maximum total input sequence length after WordPiece tokenization. 
" 67 | "Sequences longer than this will be truncated, and sequences shorter " 68 | "than this will be padded.") 69 | 70 | flags.DEFINE_bool("do_train", False, "Whether to run training.") 71 | 72 | flags.DEFINE_bool("do_eval", False, "Whether to run eval on the dev set.") 73 | 74 | flags.DEFINE_bool( 75 | "do_predict", False, 76 | "Whether to run the model in inference mode on the test set.") 77 | 78 | flags.DEFINE_integer("train_batch_size", 32, "Total batch size for training.") 79 | 80 | flags.DEFINE_integer("eval_batch_size", 8, "Total batch size for eval.") 81 | 82 | flags.DEFINE_integer("predict_batch_size", 8, "Total batch size for predict.") 83 | 84 | flags.DEFINE_float("learning_rate", 5e-5, "The initial learning rate for Adam.") 85 | 86 | flags.DEFINE_float("num_train_epochs", 3.0, 87 | "Total number of training epochs to perform.") 88 | 89 | flags.DEFINE_float( 90 | "warmup_proportion", 0.1, 91 | "Proportion of training to perform linear learning rate warmup for. " 92 | "E.g., 0.1 = 10% of training.") 93 | 94 | flags.DEFINE_integer("save_checkpoints_steps", 1000, 95 | "How often to save the model checkpoint.") 96 | 97 | flags.DEFINE_integer("iterations_per_loop", 1000, 98 | "How many steps to make in each estimator call.") 99 | 100 | flags.DEFINE_bool("use_tpu", False, "Whether to use TPU or GPU/CPU.") 101 | 102 | tf.flags.DEFINE_string( 103 | "tpu_name", None, 104 | "The Cloud TPU to use for training. This should be either the name " 105 | "used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 " 106 | "url.") 107 | 108 | tf.flags.DEFINE_string( 109 | "tpu_zone", None, 110 | "[Optional] GCE zone where the Cloud TPU is located in. If not " 111 | "specified, we will attempt to automatically detect the GCE project from " 112 | "metadata.") 113 | 114 | tf.flags.DEFINE_string( 115 | "gcp_project", None, 116 | "[Optional] Project name for the Cloud TPU-enabled project. If not " 117 | "specified, we will attempt to automatically detect the GCE project from " 118 | "metadata.") 119 | 120 | tf.flags.DEFINE_string("master", None, "[Optional] TensorFlow master URL.") 121 | 122 | flags.DEFINE_integer( 123 | "num_tpu_cores", 8, 124 | "Only used if `use_tpu` is True. Total number of TPU cores to use.") 125 | 126 | 127 | class InputExample(object): 128 | """A single training/test example for simple sequence classification.""" 129 | 130 | def __init__(self, guid, text_a, text_b=None, label=None): 131 | """Constructs a InputExample. 132 | 133 | Args: 134 | guid: Unique id for the example. 135 | text_a: string. The untokenized text of the first sequence. For single 136 | sequence tasks, only this sequence must be specified. 137 | text_b: (Optional) string. The untokenized text of the second sequence. 138 | Only must be specified for sequence pair tasks. 139 | label: (Optional) string. The label of the example. This should be 140 | specified for train and dev examples, but not for test examples. 141 | """ 142 | self.guid = guid 143 | self.text_a = text_a 144 | self.text_b = text_b 145 | self.label = label 146 | 147 | 148 | class PaddingInputExample(object): 149 | """Fake example so the num input examples is a multiple of the batch size. 150 | 151 | When running eval/predict on the TPU, we need to pad the number of examples 152 | to be a multiple of the batch size, because the TPU requires a fixed batch 153 | size. The alternative is to drop the last batch, which is bad because it means 154 | the entire output data won't be generated. 
155 | 156 | We use this class instead of `None` because treating `None` as padding 157 | batches could cause silent errors. 158 | """ 159 | 160 | 161 | class InputFeatures(object): 162 | """A single set of features of data.""" 163 | 164 | def __init__(self, 165 | input_ids, 166 | input_mask, 167 | segment_ids, 168 | label_id, 169 | is_real_example=True): 170 | self.input_ids = input_ids 171 | self.input_mask = input_mask 172 | self.segment_ids = segment_ids 173 | self.label_id = label_id 174 | self.is_real_example = is_real_example 175 | 176 | 177 | class DataProcessor(object): 178 | """Base class for data converters for sequence classification data sets.""" 179 | 180 | def get_train_examples(self, data_dir): 181 | """Gets a collection of `InputExample`s for the train set.""" 182 | raise NotImplementedError() 183 | 184 | def get_dev_examples(self, data_dir): 185 | """Gets a collection of `InputExample`s for the dev set.""" 186 | raise NotImplementedError() 187 | 188 | def get_test_examples(self, data_dir): 189 | """Gets a collection of `InputExample`s for prediction.""" 190 | raise NotImplementedError() 191 | 192 | def get_labels(self): 193 | """Gets the list of labels for this data set.""" 194 | raise NotImplementedError() 195 | 196 | @classmethod 197 | def _read_tsv(cls, input_file, quotechar=None): 198 | """Reads a tab separated value file.""" 199 | with tf.gfile.Open(input_file, "r") as f: 200 | reader = csv.reader(f, delimiter="\t", quotechar=quotechar) 201 | lines = [] 202 | for line in reader: 203 | lines.append(line) 204 | return lines 205 | 206 | 207 | class XnliProcessor(DataProcessor): 208 | """Processor for the XNLI data set.""" 209 | 210 | def __init__(self): 211 | self.language = "zh" 212 | 213 | def get_train_examples(self, data_dir): 214 | """See base class.""" 215 | lines = self._read_tsv( 216 | os.path.join(data_dir, "multinli", 217 | "multinli.train.%s.tsv" % self.language)) 218 | examples = [] 219 | for (i, line) in enumerate(lines): 220 | if i == 0: 221 | continue 222 | guid = "train-%d" % (i) 223 | text_a = tokenization.convert_to_unicode(line[0]) 224 | text_b = tokenization.convert_to_unicode(line[1]) 225 | label = tokenization.convert_to_unicode(line[2]) 226 | if label == tokenization.convert_to_unicode("contradictory"): 227 | label = tokenization.convert_to_unicode("contradiction") 228 | examples.append( 229 | InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) 230 | return examples 231 | 232 | def get_dev_examples(self, data_dir): 233 | """See base class.""" 234 | lines = self._read_tsv(os.path.join(data_dir, "xnli.dev.tsv")) 235 | examples = [] 236 | for (i, line) in enumerate(lines): 237 | if i == 0: 238 | continue 239 | guid = "dev-%d" % (i) 240 | language = tokenization.convert_to_unicode(line[0]) 241 | if language != tokenization.convert_to_unicode(self.language): 242 | continue 243 | text_a = tokenization.convert_to_unicode(line[6]) 244 | text_b = tokenization.convert_to_unicode(line[7]) 245 | label = tokenization.convert_to_unicode(line[1]) 246 | examples.append( 247 | InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) 248 | return examples 249 | 250 | def get_labels(self): 251 | """See base class.""" 252 | return ["contradiction", "entailment", "neutral"] 253 | 254 | 255 | class MnliProcessor(DataProcessor): 256 | """Processor for the MultiNLI data set (GLUE version).""" 257 | 258 | def get_train_examples(self, data_dir): 259 | """See base class.""" 260 | return self._create_examples( 261 | 
self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") 262 | 263 | def get_dev_examples(self, data_dir): 264 | """See base class.""" 265 | return self._create_examples( 266 | self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")), 267 | "dev_matched") 268 | 269 | def get_test_examples(self, data_dir): 270 | """See base class.""" 271 | return self._create_examples( 272 | self._read_tsv(os.path.join(data_dir, "test_matched.tsv")), "test") 273 | 274 | def get_labels(self): 275 | """See base class.""" 276 | return ["contradiction", "entailment", "neutral"] 277 | 278 | def _create_examples(self, lines, set_type): 279 | """Creates examples for the training and dev sets.""" 280 | examples = [] 281 | for (i, line) in enumerate(lines): 282 | if i == 0: 283 | continue 284 | guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0])) 285 | text_a = tokenization.convert_to_unicode(line[8]) 286 | text_b = tokenization.convert_to_unicode(line[9]) 287 | if set_type == "test": 288 | label = "contradiction" 289 | else: 290 | label = tokenization.convert_to_unicode(line[-1]) 291 | examples.append( 292 | InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) 293 | return examples 294 | 295 | 296 | # ======MNLI adapted to auto-grade problem=== 297 | 298 | class AutoGradeProcessor(DataProcessor): 299 | """Processor for the auto-grading data set (MNLI format, 5 labels).""" 300 | 301 | def get_train_examples(self, data_dir): 302 | """See base class.""" 303 | return self._create_examples( 304 | self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") 305 | 306 | def get_dev_examples(self, data_dir): 307 | """See base class.""" 308 | return self._create_examples( 309 | self._read_tsv(os.path.join(data_dir, "dev.tsv")), 310 | "dev") 311 | 312 | def get_test_examples(self, data_dir): 313 | """See base class.""" 314 | return self._create_examples( 315 | self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") 316 | 317 | def get_labels(self): 318 | """See base class.""" 319 | return [str(x+1) for x in range(5)] 320 | # # ===== 321 | # def _create_examples(self, lines, set_type): 322 | # """Creates examples for the training and dev sets.""" 323 | # examples = [] 324 | # for (i, line) in enumerate(lines): 325 | # # Only the test set has a header 326 | # if set_type == "test" and i == 0: 327 | # continue 328 | # guid = "%s-%s" % (set_type, i) 329 | # if set_type == "test": 330 | # text_a = tokenization.convert_to_unicode(line[1]) 331 | # label = "0" 332 | # else: 333 | # text_a = tokenization.convert_to_unicode(line[3]) 334 | # label = tokenization.convert_to_unicode(line[1]) 335 | # examples.append( 336 | # InputExample(guid=guid, text_a=text_a, text_b=None, label=label)) 337 | # return examples 338 | 339 | # ======== 340 | def _create_examples(self, lines, set_type): 341 | """Creates examples for the training and dev sets.""" 342 | examples = [] 343 | for (i, line) in enumerate(lines): 344 | if i == 0: 345 | continue 346 | guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0])) 347 | text_a = tokenization.convert_to_unicode(line[1]) 348 | text_b = tokenization.convert_to_unicode(line[2]) 349 | if set_type == "test": 350 | label = "1" 351 | else: 352 | label = tokenization.convert_to_unicode(line[-1]) 353 | examples.append( 354 | InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) 355 | return examples 356 | 357 | 358 | # ===========KT NLI PROCESSOR==== 359 | class KTProcessor(DataProcessor): 360 | """Processor for the knowledge-tracing (KT) data set (MNLI format, binary labels).""" 361 | 
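  # Editorial note (illustrative): like AutoGradeProcessor above, this
  # processor expects TSV rows of the form [guid, text_a, text_b, ..., label]
  # with a header row; get_labels/_create_examples below restrict labels
  # to "0"/"1".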
362 |   def get_train_examples(self, data_dir):
363 |     """See base class."""
364 |     return self._create_examples(
365 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
366 | 
367 |   def get_dev_examples(self, data_dir):
368 |     """See base class."""
369 |     return self._create_examples(
370 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")),
371 |         "dev")
372 | 
373 |   def get_test_examples(self, data_dir):
374 |     """See base class."""
375 |     return self._create_examples(
376 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
377 | 
378 |   def get_labels(self):
379 |     """See base class."""
380 |     return [str(x) for x in range(2)]  # binary labels "0" and "1"
381 | 
382 |   def _create_examples(self, lines, set_type):
383 |     """Creates examples for the training and dev sets."""
384 |     examples = []
385 |     for (i, line) in enumerate(lines):
386 |       if i == 0:
387 |         continue
388 |       guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0]))
389 |       text_a = tokenization.convert_to_unicode(line[1])
390 |       text_b = tokenization.convert_to_unicode(line[2])
391 |       if set_type == "test":
392 |         label = "1"  # placeholder label; the test set is unlabeled
393 |       else:
394 |         label = tokenization.convert_to_unicode(line[-1])
395 |       examples.append(
396 |           InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
397 |     return examples
398 | 
399 | 
400 | # =======================
401 | class MrpcProcessor(DataProcessor):
402 |   """Processor for the MRPC data set (GLUE version)."""
403 | 
404 |   def get_train_examples(self, data_dir):
405 |     """See base class."""
406 |     return self._create_examples(
407 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
408 | 
409 |   def get_dev_examples(self, data_dir):
410 |     """See base class."""
411 |     return self._create_examples(
412 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
413 | 
414 |   def get_test_examples(self, data_dir):
415 |     """See base class."""
416 |     return self._create_examples(
417 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
418 | 
419 |   def get_labels(self):
420 |     """See base class."""
421 |     return ["0", "1"]
422 | 
423 |   def _create_examples(self, lines, set_type):
424 |     """Creates examples for the training and dev sets."""
425 |     examples = []
426 |     for (i, line) in enumerate(lines):
427 |       if i == 0:
428 |         continue
429 |       guid = "%s-%s" % (set_type, i)
430 |       text_a = tokenization.convert_to_unicode(line[3])
431 |       text_b = tokenization.convert_to_unicode(line[4])
432 |       if set_type == "test":
433 |         label = "0"
434 |       else:
435 |         label = tokenization.convert_to_unicode(line[0])
436 |       examples.append(
437 |           InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
438 |     return examples
439 | 
440 | # ====== CoLA-style processor variants (identical except for label count) ======
441 | 
442 | class ColaProcessor_152(DataProcessor):
443 |   """CoLA-style single-sentence processor with 152 labels."""
444 | 
445 |   def get_train_examples(self, data_dir):
446 |     """See base class."""
447 |     return self._create_examples(
448 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
449 | 
450 |   def get_dev_examples(self, data_dir):
451 |     """See base class."""
452 |     return self._create_examples(
453 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
454 | 
455 |   def get_test_examples(self, data_dir):
456 |     """See base class."""
457 |     return self._create_examples(
458 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
459 | 
460 |   def get_labels(self):
461 |     """See base class."""
462 |     return [str(x) for x in range(152)]
463 | 
464 |   def _create_examples(self, lines, set_type):
465 |     """Creates examples for the training and dev sets."""
466 |     examples = []
467 |     for (i, line) in enumerate(lines):
468 |       # Only the test set has a header
469 |       if set_type == "test" and i == 0:
470 |         continue
471 |       guid = "%s-%s" % (set_type, i)
472 |       if set_type == "test":
473 |         text_a = tokenization.convert_to_unicode(line[1])
474 |         label = "0"
475 |       else:
476 |         text_a = tokenization.convert_to_unicode(line[3])
477 |         label = tokenization.convert_to_unicode(line[1])
478 |       examples.append(
479 |           InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
480 |     return examples
481 | 
482 | 
483 | 
484 | 
485 | class ColaProcessor_385(DataProcessor):
486 |   """CoLA-style single-sentence processor with 385 labels."""
487 | 
488 |   def get_train_examples(self, data_dir):
489 |     """See base class."""
490 |     return self._create_examples(
491 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
492 | 
493 |   def get_dev_examples(self, data_dir):
494 |     """See base class."""
495 |     return self._create_examples(
496 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
497 | 
498 |   def get_test_examples(self, data_dir):
499 |     """See base class."""
500 |     return self._create_examples(
501 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
502 | 
503 |   def get_labels(self):
504 |     """See base class."""
505 |     return [str(x) for x in range(385)]
506 | 
507 |   def _create_examples(self, lines, set_type):
508 |     """Creates examples for the training and dev sets."""
509 |     examples = []
510 |     for (i, line) in enumerate(lines):
511 |       # Only the test set has a header
512 |       if set_type == "test" and i == 0:
513 |         continue
514 |       guid = "%s-%s" % (set_type, i)
515 |       if set_type == "test":
516 |         text_a = tokenization.convert_to_unicode(line[1])
517 |         label = "0"
518 |       else:
519 |         text_a = tokenization.convert_to_unicode(line[3])
520 |         label = tokenization.convert_to_unicode(line[1])
521 |       examples.append(
522 |           InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
523 |     return examples
524 | 
525 | 
526 | class ColaProcessor_272(DataProcessor):
527 |   """CoLA-style single-sentence processor with 272 labels."""
528 | 
529 |   def get_train_examples(self, data_dir):
530 |     """See base class."""
531 |     return self._create_examples(
532 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
533 | 
534 |   def get_dev_examples(self, data_dir):
535 |     """See base class."""
536 |     return self._create_examples(
537 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
538 | 
539 |   def get_test_examples(self, data_dir):
540 |     """See base class."""
541 |     return self._create_examples(
542 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
543 | 
544 |   def get_labels(self):
545 |     """See base class."""
546 |     return [str(x) for x in range(272)]
547 | 
548 |   def _create_examples(self, lines, set_type):
549 |     """Creates examples for the training and dev sets."""
550 |     examples = []
551 |     for (i, line) in enumerate(lines):
552 |       # Only the test set has a header
553 |       if set_type == "test" and i == 0:
554 |         continue
555 |       guid = "%s-%s" % (set_type, i)
556 |       if set_type == "test":
557 |         text_a = tokenization.convert_to_unicode(line[1])
558 |         label = "0"
559 |       else:
560 |         text_a = tokenization.convert_to_unicode(line[3])
561 |         label = tokenization.convert_to_unicode(line[1])
562 |       examples.append(
563 |           InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
564 |     return examples
565 | 
566 | class ColaProcessor_266(DataProcessor):
567 |   """CoLA-style single-sentence processor with 266 labels."""
568 | 
569 |   def get_train_examples(self, data_dir):
570 |     """See base class."""
571 |     return self._create_examples(
572 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
573 | 
574 |   def get_dev_examples(self, data_dir):
575 |     """See base class."""
576 |     return self._create_examples(
577 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
578 | 
579 |   def get_test_examples(self, data_dir):
580 |     """See base class."""
581 |     return self._create_examples(
582 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
583 | 
584 |   def get_labels(self):
585 |     """See base class."""
586 |     return [str(x) for x in range(266)]
587 | 
588 |   def _create_examples(self, lines, set_type):
589 |     """Creates examples for the training and dev sets."""
590 |     examples = []
591 |     for (i, line) in enumerate(lines):
592 |       # Only the test set has a header
593 |       if set_type == "test" and i == 0:
594 |         continue
595 |       guid = "%s-%s" % (set_type, i)
596 |       if set_type == "test":
597 |         text_a = tokenization.convert_to_unicode(line[1])
598 |         label = "0"
599 |       else:
600 |         text_a = tokenization.convert_to_unicode(line[3])
601 |         label = tokenization.convert_to_unicode(line[1])
602 |       examples.append(
603 |           InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
604 |     return examples
605 | 
606 | 
607 | class ColaProcessor_234(DataProcessor):
608 |   """CoLA-style single-sentence processor with 234 labels."""
609 | 
610 |   def get_train_examples(self, data_dir):
611 |     """See base class."""
612 |     return self._create_examples(
613 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
614 | 
615 |   def get_dev_examples(self, data_dir):
616 |     """See base class."""
617 |     return self._create_examples(
618 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
619 | 
620 |   def get_test_examples(self, data_dir):
621 |     """See base class."""
622 |     return self._create_examples(
623 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
624 | 
625 |   def get_labels(self):
626 |     """See base class."""
627 |     return [str(x) for x in range(234)]
628 | 
629 |   def _create_examples(self, lines, set_type):
630 |     """Creates examples for the training and dev sets."""
631 |     examples = []
632 |     for (i, line) in enumerate(lines):
633 |       # Only the test set has a header
634 |       if set_type == "test" and i == 0:
635 |         continue
636 |       guid = "%s-%s" % (set_type, i)
637 |       if set_type == "test":
638 |         text_a = tokenization.convert_to_unicode(line[1])
639 |         label = "0"
640 |       else:
641 |         text_a = tokenization.convert_to_unicode(line[3])
642 |         label = tokenization.convert_to_unicode(line[1])
643 |       examples.append(
644 |           InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
645 |     return examples
646 | 
647 | class ColaProcessor_135(DataProcessor):
648 |   """CoLA-style single-sentence processor with 135 labels."""
649 | 
650 |   def get_train_examples(self, data_dir):
651 |     """See base class."""
652 |     return self._create_examples(
653 |         self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
654 | 
655 |   def get_dev_examples(self, data_dir):
656 |     """See base class."""
657 |     return self._create_examples(
658 |         self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
659 | 
660 |   def get_test_examples(self, data_dir):
661 |     """See base class."""
662 |     return self._create_examples(
663 |         self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
664 | 
665 |   def get_labels(self):
666 |     """See base class."""
667 |     return [str(x) for x in range(135)]
668 | 
669 |   def _create_examples(self, lines, set_type):
670 |     """Creates examples for the training and dev sets."""
671 |     examples = []
672 |     for (i, line) in enumerate(lines):
673 |       # Only the test set has a header
674 |       if set_type == "test" and i == 0:
675 |         continue
676 |       guid = "%s-%s" % (set_type, i)
677 |       if set_type == "test":
678 |         text_a = tokenization.convert_to_unicode(line[1])
679 |         label = "0"
680 |       else:
681 |         text_a = tokenization.convert_to_unicode(line[3])
682 |         label = tokenization.convert_to_unicode(line[1])
683 |       examples.append(
684 |           InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
685 |     return examples
686 | 
687 | 
688 | 
689 | 
690 | # ====== Feature-conversion and model-building helpers ======
691 | def convert_single_example(ex_index, example, label_list, max_seq_length,
692 |                            tokenizer):
693 |   """Converts a single `InputExample` into a single `InputFeatures`."""
694 | 
695 |   if isinstance(example, PaddingInputExample):
696 |     return InputFeatures(
697 |         input_ids=[0] * max_seq_length,
698 |         input_mask=[0] * max_seq_length,
699 |         segment_ids=[0] * max_seq_length,
700 |         label_id=0,
701 |         is_real_example=False)
702 | 
703 |   label_map = {}
704 |   for (i, label) in enumerate(label_list):
705 |     label_map[label] = i
706 | 
707 |   tokens_a = tokenizer.tokenize(example.text_a)
708 |   tokens_b = None
709 |   if example.text_b:
710 |     tokens_b = tokenizer.tokenize(example.text_b)
711 | 
712 |   if tokens_b:
713 |     # Modifies `tokens_a` and `tokens_b` in place so that the total
714 |     # length is less than the specified length.
715 |     # Account for [CLS], [SEP], [SEP] with "- 3"
716 |     _truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
717 |   else:
718 |     # Account for [CLS] and [SEP] with "- 2"
719 |     if len(tokens_a) > max_seq_length - 2:
720 |       tokens_a = tokens_a[0:(max_seq_length - 2)]
721 | 
722 |   # The convention in BERT is:
723 |   # (a) For sequence pairs:
724 |   #  tokens:   [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
725 |   #  type_ids: 0     0  0    0    0     0       0 0     1  1  1  1   1 1
726 |   # (b) For single sequences:
727 |   #  tokens:   [CLS] the dog is hairy . [SEP]
728 |   #  type_ids: 0     0   0   0  0     0 0
729 |   #
730 |   # Where "type_ids" are used to indicate whether this is the first
731 |   # sequence or the second sequence. The embedding vectors for `type=0` and
732 |   # `type=1` were learned during pre-training and are added to the wordpiece
733 |   # embedding vector (and position vector). This is not *strictly* necessary
734 |   # since the [SEP] token unambiguously separates the sequences, but it makes
735 |   # it easier for the model to learn the concept of sequences.
736 |   #
737 |   # For classification tasks, the first vector (corresponding to [CLS]) is
738 |   # used as the "sentence vector". Note that this only makes sense because
739 |   # the entire model is fine-tuned.
740 |   tokens = []
741 |   segment_ids = []
742 |   tokens.append("[CLS]")
743 |   segment_ids.append(0)
744 |   for token in tokens_a:
745 |     tokens.append(token)
746 |     segment_ids.append(0)
747 |   tokens.append("[SEP]")
748 |   segment_ids.append(0)
749 | 
750 |   if tokens_b:
751 |     for token in tokens_b:
752 |       tokens.append(token)
753 |       segment_ids.append(1)
754 |     tokens.append("[SEP]")
755 |     segment_ids.append(1)
756 | 
757 |   input_ids = tokenizer.convert_tokens_to_ids(tokens)
758 | 
759 |   # The mask has 1 for real tokens and 0 for padding tokens. Only real
760 |   # tokens are attended to.
761 |   input_mask = [1] * len(input_ids)
762 | 
763 |   # Zero-pad up to the sequence length.
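  # A worked example with illustrative values: if max_seq_length = 8 and
  # tokens = ["[CLS]", "hi", "[SEP]", "yes", "[SEP]"], there are five real
  # ids, so the loop below appends three zeros to each list:
  #   input_mask  -> [1, 1, 1, 1, 1, 0, 0, 0]
  #   segment_ids -> [0, 0, 0, 1, 1, 0, 0, 0]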
764 | while len(input_ids) < max_seq_length: 765 | input_ids.append(0) 766 | input_mask.append(0) 767 | segment_ids.append(0) 768 | 769 | assert len(input_ids) == max_seq_length 770 | assert len(input_mask) == max_seq_length 771 | assert len(segment_ids) == max_seq_length 772 | 773 | label_id = label_map[example.label] 774 | if ex_index < 5: 775 | tf.logging.info("*** Example ***") 776 | tf.logging.info("guid: %s" % (example.guid)) 777 | tf.logging.info("tokens: %s" % " ".join( 778 | [tokenization.printable_text(x) for x in tokens])) 779 | tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids])) 780 | tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask])) 781 | tf.logging.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids])) 782 | tf.logging.info("label: %s (id = %d)" % (example.label, label_id)) 783 | 784 | feature = InputFeatures( 785 | input_ids=input_ids, 786 | input_mask=input_mask, 787 | segment_ids=segment_ids, 788 | label_id=label_id, 789 | is_real_example=True) 790 | return feature 791 | 792 | 793 | def file_based_convert_examples_to_features( 794 | examples, label_list, max_seq_length, tokenizer, output_file): 795 | """Convert a set of `InputExample`s to a TFRecord file.""" 796 | 797 | writer = tf.python_io.TFRecordWriter(output_file) 798 | 799 | for (ex_index, example) in enumerate(examples): 800 | if ex_index % 10000 == 0: 801 | tf.logging.info("Writing example %d of %d" % (ex_index, len(examples))) 802 | 803 | feature = convert_single_example(ex_index, example, label_list, 804 | max_seq_length, tokenizer) 805 | 806 | def create_int_feature(values): 807 | f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) 808 | return f 809 | 810 | features = collections.OrderedDict() 811 | features["input_ids"] = create_int_feature(feature.input_ids) 812 | features["input_mask"] = create_int_feature(feature.input_mask) 813 | features["segment_ids"] = create_int_feature(feature.segment_ids) 814 | features["label_ids"] = create_int_feature([feature.label_id]) 815 | features["is_real_example"] = create_int_feature( 816 | [int(feature.is_real_example)]) 817 | 818 | tf_example = tf.train.Example(features=tf.train.Features(feature=features)) 819 | writer.write(tf_example.SerializeToString()) 820 | writer.close() 821 | 822 | 823 | def file_based_input_fn_builder(input_file, seq_length, is_training, 824 | drop_remainder): 825 | """Creates an `input_fn` closure to be passed to TPUEstimator.""" 826 | 827 | name_to_features = { 828 | "input_ids": tf.FixedLenFeature([seq_length], tf.int64), 829 | "input_mask": tf.FixedLenFeature([seq_length], tf.int64), 830 | "segment_ids": tf.FixedLenFeature([seq_length], tf.int64), 831 | "label_ids": tf.FixedLenFeature([], tf.int64), 832 | "is_real_example": tf.FixedLenFeature([], tf.int64), 833 | } 834 | 835 | def _decode_record(record, name_to_features): 836 | """Decodes a record to a TensorFlow example.""" 837 | example = tf.parse_single_example(record, name_to_features) 838 | 839 | # tf.Example only supports tf.int64, but the TPU only supports tf.int32. 840 | # So cast all int64 to int32. 841 | for name in list(example.keys()): 842 | t = example[name] 843 | if t.dtype == tf.int64: 844 | t = tf.to_int32(t) 845 | example[name] = t 846 | 847 | return example 848 | 849 | def input_fn(params): 850 | """The actual input function.""" 851 | batch_size = params["batch_size"] 852 | 853 | # For training, we want a lot of parallel reading and shuffling. 
854 | # For eval, we want no shuffling and parallel reading doesn't matter. 855 | d = tf.data.TFRecordDataset(input_file) 856 | if is_training: 857 | d = d.repeat() 858 | d = d.shuffle(buffer_size=100) 859 | 860 | d = d.apply( 861 | tf.contrib.data.map_and_batch( 862 | lambda record: _decode_record(record, name_to_features), 863 | batch_size=batch_size, 864 | drop_remainder=drop_remainder)) 865 | 866 | return d 867 | 868 | return input_fn 869 | 870 | 871 | def _truncate_seq_pair(tokens_a, tokens_b, max_length): 872 | """Truncates a sequence pair in place to the maximum length.""" 873 | 874 | # This is a simple heuristic which will always truncate the longer sequence 875 | # one token at a time. This makes more sense than truncating an equal percent 876 | # of tokens from each, since if one sequence is very short then each token 877 | # that's truncated likely contains more information than a longer sequence. 878 | while True: 879 | total_length = len(tokens_a) + len(tokens_b) 880 | if total_length <= max_length: 881 | break 882 | if len(tokens_a) > len(tokens_b): 883 | tokens_a.pop() 884 | else: 885 | tokens_b.pop() 886 | 887 | 888 | def create_model(bert_config, is_training, input_ids, input_mask, segment_ids, 889 | labels, num_labels, use_one_hot_embeddings): 890 | """Creates a classification model.""" 891 | model = modeling.BertModel( 892 | config=bert_config, 893 | is_training=is_training, 894 | input_ids=input_ids, 895 | input_mask=input_mask, 896 | token_type_ids=segment_ids, 897 | use_one_hot_embeddings=use_one_hot_embeddings) 898 | 899 | # In the demo, we are doing a simple classification task on the entire 900 | # segment. 901 | # 902 | # If you want to use the token-level output, use model.get_sequence_output() 903 | # instead. 904 | output_layer = model.get_pooled_output() 905 | 906 | hidden_size = output_layer.shape[-1].value 907 | 908 | output_weights = tf.get_variable( 909 | "output_weights", [num_labels, hidden_size], 910 | initializer=tf.truncated_normal_initializer(stddev=0.02)) 911 | 912 | output_bias = tf.get_variable( 913 | "output_bias", [num_labels], initializer=tf.zeros_initializer()) 914 | 915 | with tf.variable_scope("loss"): 916 | if is_training: 917 | # I.e., 0.1 dropout 918 | output_layer = tf.nn.dropout(output_layer, keep_prob=0.9) 919 | 920 | logits = tf.matmul(output_layer, output_weights, transpose_b=True) 921 | logits = tf.nn.bias_add(logits, output_bias) 922 | probabilities = tf.nn.softmax(logits, axis=-1) 923 | log_probs = tf.nn.log_softmax(logits, axis=-1) 924 | 925 | one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32) 926 | 927 | per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1) 928 | loss = tf.reduce_mean(per_example_loss) 929 | 930 | return (loss, per_example_loss, logits, probabilities) 931 | 932 | 933 | def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate, 934 | num_train_steps, num_warmup_steps, use_tpu, 935 | use_one_hot_embeddings): 936 | """Returns `model_fn` closure for TPUEstimator.""" 937 | 938 | def model_fn(features, labels, mode, params): # pylint: disable=unused-argument 939 | """The `model_fn` for TPUEstimator.""" 940 | 941 | tf.logging.info("*** Features ***") 942 | for name in sorted(features.keys()): 943 | tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape)) 944 | 945 | input_ids = features["input_ids"] 946 | input_mask = features["input_mask"] 947 | segment_ids = features["segment_ids"] 948 | label_ids = features["label_ids"] 949 | is_real_example = 
None 950 | if "is_real_example" in features: 951 | is_real_example = tf.cast(features["is_real_example"], dtype=tf.float32) 952 | else: 953 | is_real_example = tf.ones(tf.shape(label_ids), dtype=tf.float32) 954 | 955 | is_training = (mode == tf.estimator.ModeKeys.TRAIN) 956 | 957 | (total_loss, per_example_loss, logits, probabilities) = create_model( 958 | bert_config, is_training, input_ids, input_mask, segment_ids, label_ids, 959 | num_labels, use_one_hot_embeddings) 960 | 961 | tvars = tf.trainable_variables() 962 | initialized_variable_names = {} 963 | scaffold_fn = None 964 | if init_checkpoint: 965 | (assignment_map, initialized_variable_names 966 | ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint) 967 | if use_tpu: 968 | 969 | def tpu_scaffold(): 970 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map) 971 | return tf.train.Scaffold() 972 | 973 | scaffold_fn = tpu_scaffold 974 | else: 975 | tf.train.init_from_checkpoint(init_checkpoint, assignment_map) 976 | 977 | tf.logging.info("**** Trainable Variables ****") 978 | for var in tvars: 979 | init_string = "" 980 | if var.name in initialized_variable_names: 981 | init_string = ", *INIT_FROM_CKPT*" 982 | tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape, 983 | init_string) 984 | 985 | output_spec = None 986 | if mode == tf.estimator.ModeKeys.TRAIN: 987 | 988 | train_op = optimization.create_optimizer( 989 | total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu) 990 | 991 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 992 | mode=mode, 993 | loss=total_loss, 994 | train_op=train_op, 995 | scaffold_fn=scaffold_fn) 996 | elif mode == tf.estimator.ModeKeys.EVAL: 997 | 998 | def metric_fn(per_example_loss, label_ids, logits, is_real_example): 999 | predictions = tf.argmax(logits, axis=-1, output_type=tf.int32) 1000 | accuracy = tf.metrics.accuracy( 1001 | labels=label_ids, predictions=predictions, weights=is_real_example) 1002 | loss = tf.metrics.mean(values=per_example_loss, weights=is_real_example) 1003 | return { 1004 | "eval_accuracy": accuracy, 1005 | "eval_loss": loss, 1006 | } 1007 | 1008 | eval_metrics = (metric_fn, 1009 | [per_example_loss, label_ids, logits, is_real_example]) 1010 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 1011 | mode=mode, 1012 | loss=total_loss, 1013 | eval_metrics=eval_metrics, 1014 | scaffold_fn=scaffold_fn) 1015 | else: 1016 | output_spec = tf.contrib.tpu.TPUEstimatorSpec( 1017 | mode=mode, 1018 | predictions={"probabilities": probabilities}, 1019 | scaffold_fn=scaffold_fn) 1020 | return output_spec 1021 | 1022 | return model_fn 1023 | 1024 | 1025 | # This function is not used by this file but is still used by the Colab and 1026 | # people who depend on it. 1027 | def input_fn_builder(features, seq_length, is_training, drop_remainder): 1028 | """Creates an `input_fn` closure to be passed to TPUEstimator.""" 1029 | 1030 | all_input_ids = [] 1031 | all_input_mask = [] 1032 | all_segment_ids = [] 1033 | all_label_ids = [] 1034 | 1035 | for feature in features: 1036 | all_input_ids.append(feature.input_ids) 1037 | all_input_mask.append(feature.input_mask) 1038 | all_segment_ids.append(feature.segment_ids) 1039 | all_label_ids.append(feature.label_id) 1040 | 1041 | def input_fn(params): 1042 | """The actual input function.""" 1043 | batch_size = params["batch_size"] 1044 | 1045 | num_examples = len(features) 1046 | 1047 | # This is for demo purposes and does NOT scale to large data sets. 
We do
1048 |     # not use Dataset.from_generator() because that uses tf.py_func which is
1049 |     # not TPU compatible. The right way to load data is with TFRecordReader.
1050 |     d = tf.data.Dataset.from_tensor_slices({
1051 |         "input_ids":
1052 |             tf.constant(
1053 |                 all_input_ids, shape=[num_examples, seq_length],
1054 |                 dtype=tf.int32),
1055 |         "input_mask":
1056 |             tf.constant(
1057 |                 all_input_mask,
1058 |                 shape=[num_examples, seq_length],
1059 |                 dtype=tf.int32),
1060 |         "segment_ids":
1061 |             tf.constant(
1062 |                 all_segment_ids,
1063 |                 shape=[num_examples, seq_length],
1064 |                 dtype=tf.int32),
1065 |         "label_ids":
1066 |             tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32),
1067 |     })
1068 | 
1069 |     if is_training:
1070 |       d = d.repeat()
1071 |       d = d.shuffle(buffer_size=100)
1072 | 
1073 |     d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
1074 |     return d
1075 | 
1076 |   return input_fn
1077 | 
1078 | 
1079 | # This function is not used by this file but is still used by the Colab and
1080 | # people who depend on it.
1081 | def convert_examples_to_features(examples, label_list, max_seq_length,
1082 |                                  tokenizer):
1083 |   """Convert a set of `InputExample`s to a list of `InputFeatures`."""
1084 | 
1085 |   features = []
1086 |   for (ex_index, example) in enumerate(examples):
1087 |     if ex_index % 10000 == 0:
1088 |       tf.logging.info("Writing example %d of %d" % (ex_index, len(examples)))
1089 | 
1090 |     feature = convert_single_example(ex_index, example, label_list,
1091 |                                      max_seq_length, tokenizer)
1092 | 
1093 |     features.append(feature)
1094 |   return features
1095 | 
1096 | 
1097 | def main(_):
1098 |   tf.logging.set_verbosity(tf.logging.INFO)
1099 | 
1100 |   processors = {
1101 |       "cola_385": ColaProcessor_385,
1102 |       "cola_272": ColaProcessor_272,
1103 |       "cola_234": ColaProcessor_234,
1104 |       "cola_135": ColaProcessor_135,
1105 |       "cola_266": ColaProcessor_266,
1106 |       "cola_152": ColaProcessor_152,
1107 |       "mnli": MnliProcessor,
1108 |       "auto_grade": AutoGradeProcessor,
1109 |       "kt": KTProcessor,
1110 |       "mrpc": MrpcProcessor,
1111 |       "xnli": XnliProcessor,
1112 |   }
1113 | 
1114 |   tokenization.validate_case_matches_checkpoint(FLAGS.do_lower_case,
1115 |                                                 FLAGS.init_checkpoint)
1116 | 
1117 |   if not FLAGS.do_train and not FLAGS.do_eval and not FLAGS.do_predict:
1118 |     raise ValueError(
1119 |         "At least one of `do_train`, `do_eval` or `do_predict` must be True.")
1120 | 
1121 |   bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
1122 | 
1123 |   if FLAGS.max_seq_length > bert_config.max_position_embeddings:
1124 |     raise ValueError(
1125 |         "Cannot use sequence length %d because the BERT model "
1126 |         "was only trained up to sequence length %d" %
1127 |         (FLAGS.max_seq_length, bert_config.max_position_embeddings))
1128 | 
1129 |   tf.gfile.MakeDirs(FLAGS.output_dir)
1130 | 
1131 |   task_name = FLAGS.task_name.lower()
1132 | 
1133 |   if task_name not in processors:
1134 |     raise ValueError("Task not found: %s" % (task_name))
1135 | 
1136 |   processor = processors[task_name]()
1137 | 
1138 |   label_list = processor.get_labels()
1139 | 
1140 |   tokenizer = tokenization.FullTokenizer(
1141 |       vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
1142 | 
1143 |   tpu_cluster_resolver = None
1144 |   if FLAGS.use_tpu and FLAGS.tpu_name:
1145 |     tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
1146 |         FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
1147 | 
1148 |   is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
1149 |   run_config = tf.contrib.tpu.RunConfig(
1150 | 
cluster=tpu_cluster_resolver, 1151 | master=FLAGS.master, 1152 | model_dir=FLAGS.output_dir, 1153 | save_checkpoints_steps=FLAGS.save_checkpoints_steps, 1154 | tpu_config=tf.contrib.tpu.TPUConfig( 1155 | iterations_per_loop=FLAGS.iterations_per_loop, 1156 | num_shards=FLAGS.num_tpu_cores, 1157 | per_host_input_for_training=is_per_host)) 1158 | 1159 | train_examples = None 1160 | num_train_steps = None 1161 | num_warmup_steps = None 1162 | if FLAGS.do_train: 1163 | train_examples = processor.get_train_examples(FLAGS.data_dir) 1164 | num_train_steps = int( 1165 | len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs) 1166 | num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion) 1167 | 1168 | model_fn = model_fn_builder( 1169 | bert_config=bert_config, 1170 | num_labels=len(label_list), 1171 | init_checkpoint=FLAGS.init_checkpoint, 1172 | learning_rate=FLAGS.learning_rate, 1173 | num_train_steps=num_train_steps, 1174 | num_warmup_steps=num_warmup_steps, 1175 | use_tpu=FLAGS.use_tpu, 1176 | use_one_hot_embeddings=FLAGS.use_tpu) 1177 | 1178 | # If TPU is not available, this will fall back to normal Estimator on CPU 1179 | # or GPU. 1180 | estimator = tf.contrib.tpu.TPUEstimator( 1181 | use_tpu=FLAGS.use_tpu, 1182 | model_fn=model_fn, 1183 | config=run_config, 1184 | train_batch_size=FLAGS.train_batch_size, 1185 | eval_batch_size=FLAGS.eval_batch_size, 1186 | predict_batch_size=FLAGS.predict_batch_size) 1187 | 1188 | if FLAGS.do_train: 1189 | train_file = os.path.join(FLAGS.output_dir, "train.tf_record") 1190 | file_based_convert_examples_to_features( 1191 | train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file) 1192 | tf.logging.info("***** Running training *****") 1193 | tf.logging.info(" Num examples = %d", len(train_examples)) 1194 | tf.logging.info(" Batch size = %d", FLAGS.train_batch_size) 1195 | tf.logging.info(" Num steps = %d", num_train_steps) 1196 | train_input_fn = file_based_input_fn_builder( 1197 | input_file=train_file, 1198 | seq_length=FLAGS.max_seq_length, 1199 | is_training=True, 1200 | drop_remainder=True) 1201 | estimator.train(input_fn=train_input_fn, max_steps=num_train_steps) 1202 | 1203 | if FLAGS.do_eval: 1204 | eval_examples = processor.get_dev_examples(FLAGS.data_dir) 1205 | num_actual_eval_examples = len(eval_examples) 1206 | if FLAGS.use_tpu: 1207 | # TPU requires a fixed batch size for all batches, therefore the number 1208 | # of examples must be a multiple of the batch size, or else examples 1209 | # will get dropped. So we pad with fake examples which are ignored 1210 | # later on. These do NOT count towards the metric (all tf.metrics 1211 | # support a per-instance weight, and these get a weight of 0.0). 1212 | while len(eval_examples) % FLAGS.eval_batch_size != 0: 1213 | eval_examples.append(PaddingInputExample()) 1214 | 1215 | eval_file = os.path.join(FLAGS.output_dir, "eval.tf_record") 1216 | file_based_convert_examples_to_features( 1217 | eval_examples, label_list, FLAGS.max_seq_length, tokenizer, eval_file) 1218 | 1219 | tf.logging.info("***** Running evaluation *****") 1220 | tf.logging.info(" Num examples = %d (%d actual, %d padding)", 1221 | len(eval_examples), num_actual_eval_examples, 1222 | len(eval_examples) - num_actual_eval_examples) 1223 | tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size) 1224 | 1225 | # This tells the estimator to run through the entire set. 1226 | eval_steps = None 1227 | # However, if running eval on the TPU, you will need to specify the 1228 | # number of steps. 
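    # For example (illustrative numbers only): with eval_batch_size = 8,
    # 1001 eval examples are padded to 1008 and eval_steps = 1008 // 8 = 126.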
1229 |     if FLAGS.use_tpu:
1230 |       assert len(eval_examples) % FLAGS.eval_batch_size == 0
1231 |       eval_steps = int(len(eval_examples) // FLAGS.eval_batch_size)
1232 | 
1233 |     eval_drop_remainder = True if FLAGS.use_tpu else False
1234 |     eval_input_fn = file_based_input_fn_builder(
1235 |         input_file=eval_file,
1236 |         seq_length=FLAGS.max_seq_length,
1237 |         is_training=False,
1238 |         drop_remainder=eval_drop_remainder)
1239 | 
1240 |     result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
1241 | 
1242 |     output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
1243 |     with tf.gfile.GFile(output_eval_file, "w") as writer:
1244 |       tf.logging.info("***** Eval results *****")
1245 |       for key in sorted(result.keys()):
1246 |         tf.logging.info("  %s = %s", key, str(result[key]))
1247 |         writer.write("%s = %s\n" % (key, str(result[key])))
1248 | 
1249 |   if FLAGS.do_predict:
1250 |     predict_examples = processor.get_test_examples(FLAGS.data_dir)
1251 |     num_actual_predict_examples = len(predict_examples)
1252 |     if FLAGS.use_tpu:
1253 |       # TPU requires a fixed batch size for all batches, therefore the number
1254 |       # of examples must be a multiple of the batch size, or else examples
1255 |       # will get dropped. So we pad with fake examples which are ignored
1256 |       # later on.
1257 |       while len(predict_examples) % FLAGS.predict_batch_size != 0:
1258 |         predict_examples.append(PaddingInputExample())
1259 | 
1260 |     predict_file = os.path.join(FLAGS.output_dir, "predict.tf_record")
1261 |     file_based_convert_examples_to_features(predict_examples, label_list,
1262 |                                             FLAGS.max_seq_length, tokenizer,
1263 |                                             predict_file)
1264 | 
1265 |     tf.logging.info("***** Running prediction *****")
1266 |     tf.logging.info("  Num examples = %d (%d actual, %d padding)",
1267 |                     len(predict_examples), num_actual_predict_examples,
1268 |                     len(predict_examples) - num_actual_predict_examples)
1269 |     tf.logging.info("  Batch size = %d", FLAGS.predict_batch_size)
1270 | 
1271 |     predict_drop_remainder = True if FLAGS.use_tpu else False
1272 |     predict_input_fn = file_based_input_fn_builder(
1273 |         input_file=predict_file,
1274 |         seq_length=FLAGS.max_seq_length,
1275 |         is_training=False,
1276 |         drop_remainder=predict_drop_remainder)
1277 | 
1278 |     result = estimator.predict(input_fn=predict_input_fn)
1279 | 
1280 |     output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
1281 |     with tf.gfile.GFile(output_predict_file, "w") as writer:
1282 |       num_written_lines = 0
1283 |       tf.logging.info("***** Predict results *****")
1284 |       for (i, prediction) in enumerate(result):
1285 |         probabilities = prediction["probabilities"]
1286 |         if i >= num_actual_predict_examples:
1287 |           break
1288 |         output_line = "\t".join(
1289 |             str(class_probability)
1290 |             for class_probability in probabilities) + "\n"
1291 |         writer.write(output_line)
1292 |         num_written_lines += 1
1293 |     assert num_written_lines == num_actual_predict_examples
1294 | 
1295 | 
1296 | if __name__ == "__main__":
1297 |   flags.mark_flag_as_required("data_dir")
1298 |   flags.mark_flag_as_required("task_name")
1299 |   flags.mark_flag_as_required("vocab_file")
1300 |   flags.mark_flag_as_required("bert_config_file")
1301 |   flags.mark_flag_as_required("output_dir")
1302 |   tf.app.run()
1303 | 
--------------------------------------------------------------------------------
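The processors and helpers above can also be exercised directly, outside of `main`. The snippet below is a minimal illustrative sketch only: `vocab.txt` and `data/` are hypothetical placeholder paths, and it assumes the module's names are imported as defined above.

```python
# Build InputFeatures for the auto-grading task by hand (illustrative only).
processor = AutoGradeProcessor()
label_list = processor.get_labels()  # ["1", "2", "3", "4", "5"]
tokenizer = tokenization.FullTokenizer(
    vocab_file="vocab.txt", do_lower_case=True)  # hypothetical vocab path
examples = processor.get_train_examples("data/")  # expects data/train.tsv
features = convert_examples_to_features(
    examples, label_list, max_seq_length=128, tokenizer=tokenizer)
print("converted %d examples; first label_id = %d"
      % (len(features), features[0].label_id))
```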