├── README.md
├── data
│   ├── data_orign
│   │   └── SemEval2014-Task4
│   │       ├── Website
│   │       │   ├── Task Description Aspect Based Sentiment Analysis (ABSA) ( SemEval-2014 Task 4.html
│   │       │   └── Task Description Aspect Based Sentiment Analysis (ABSA) ( SemEval-2014 Task 4_files
│   │       │       └── style.css
│   │       ├── data.png
│   │       ├── eval
│   │       │   └── semeval14-absa-base-eval-valid
│   │       │       ├── README.txt
│   │       │       ├── SemEvalSchema.xsd
│   │       │       ├── eval.jar
│   │       │       └── semeval_base.py
│   │       ├── test
│   │       │   ├── ABSA_TestData_PhaseA
│   │       │   │   ├── Laptops_Test_Data_PhaseA.xml
│   │       │   │   └── Restaurants_Test_Data_PhaseA.xml
│   │       │   ├── ABSA_TestData_PhaseB
│   │       │   │   ├── Laptops_Test_Data_phaseB.xml
│   │       │   │   └── Restaurants_Test_Data_phaseB.xml
│   │       │   ├── Laptop_Test.xml
│   │       │   └── Restaurants_Test.xml
│   │       ├── train
│   │       │   ├── Laptop_Train_v2.xml
│   │       │   ├── Restaurants_Train_v2.xml
│   │       │   └── SemEval14_ABSA_AnnotationGuidelines.pdf
│   │       └── trial
│   │           ├── laptops-trial.xml
│   │           └── restaurants-trial.xml
│   ├── data_processed
│   │   └── SemEval2014
│   │       ├── laptop-statistic.txt
│   │       ├── laptop-test-segment.json
│   │       ├── laptop-test.json
│   │       ├── laptop-test.xml
│   │       ├── laptop-train-segment.json
│   │       ├── laptop-train.json
│   │       ├── laptop-train.xml
│   │       ├── restaurants-statistic.txt
│   │       ├── restaurants-test-segment.json
│   │       ├── restaurants-test.json
│   │       ├── restaurants-test.xml
│   │       ├── restaurants-train-segment.json
│   │       ├── restaurants-train.json
│   │       └── restaurants-train.xml
│   ├── store
│   │   └── glove.840B.300d.txt
│   └── tmp
│       └── restaurants14_test.txt
├── data_processing
│   ├── MPQA.py
│   ├── Michell-en.py
│   ├── SemEval2014-Laptop.py
│   ├── SemEval2014-Restaurants.py
│   ├── SemEval2015-Restaurants.py
│   ├── SemEval2016-Restaurants.py
│   ├── Sentihood.py
│   ├── Twitter.py
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── clean.py
│   ├── clean.pyc
│   ├── process2segment.py
│   └── sample.py
├── data_utils.py
├── layers
│   ├── Attention.py
│   ├── Dynamic_GRU.py
│   ├── Dynamic_LSTM.py
│   ├── Dynamic_RNN.py
│   ├── Squeeze_embedding.py
│   └── __init__.py
├── models
│   ├── AE_CNN.py
│   ├── AE_ContextAvg.py
│   ├── ATAE_BiGRU.py
│   ├── ATAE_BiLSTM.py
│   ├── ATAE_GRU.py
│   ├── ATAE_LSTM.py
│   ├── AT_BiGRU.py
│   ├── AT_BiLSTM.py
│   ├── AT_GRU.py
│   ├── AT_LSTM.py
│   ├── BiGRU.py
│   ├── BiLSTM.py
│   ├── CABASC.py
│   ├── CNN.py
│   ├── ContextAvg.py
│   ├── GCAE.py
│   ├── GRU.py
│   ├── HAN.py
│   ├── HAN_PA.py
│   ├── HAN_test.py
│   ├── IAN.py
│   ├── LCRS.py
│   ├── LSTM.py
│   ├── MemNet.py
│   ├── PRET_MULT.py
│   ├── RAM.py
│   ├── TC_LSTM.py
│   ├── TD_LSTM.py
│   ├── TNet.py
│   └── __init__.py
├── results
│   ├── ans
│   │   ├── ContextAvg_laptop14_Adam_0.0005_-1_0.3_False_16_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.0005_-1_0.3_False_32_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.0005_-1_0.5_False_16_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.0005_-1_0.5_False_32_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.001_-1_0.3_False_16_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.001_-1_0.3_False_32_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.001_-1_0.5_False_16_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.001_-1_0.5_False_32_0.1.txt
│   │   ├── ContextAvg_laptop14_Adam_0.001_-1_0.7_False_16_0.1.txt
│   │   └── ContextAvg_laptop14_Adam_0.001_-1_0.7_False_32_0.1.txt
│   ├── attention_weight
│   │   └── weight.txt
│   ├── log
│   │   └── restaurants14_ContextAvg_dev_0.1.log
│   └── model
│       └── LSTM.model.txt
├── run.sh
├── train.py
└── train_han.py

--------------------------------------------------------------------------------
/data/data_orign/SemEval2014-Task4/data.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data/data_orign/SemEval2014-Task4/data.png

--------------------------------------------------------------------------------
/data/data_orign/SemEval2014-Task4/eval/semeval14-absa-base-eval-valid/README.txt:
--------------------------------------------------------------------------------
Aspect Based Sentiment Analysis (ABSA)
Task 4 of SemEval 2014
-----------------------------------------------------

This folder contains scripts/code for:

A. Running the ABSA baselines.
B. Evaluating the output of your system.
C. Validating the XML file that you will submit to ABSA 2014.


Running the Baselines
-----------------------

The semeval_base.py script is an implementation of the baselines of SemEval Task 4 (Aspect Based Sentiment Analysis).
A high-level description of them can be found at the following address:

http://alt.qcri.org/semeval2014/task4/data/uploads/baselinesystemdescription.pdf

By running python semeval_base.py from your shell, a list of possible options will be displayed.
(**Caution: We tested semeval_base.py only in Linux.)

Assuming that rest.xml and lap.xml are the training data for the restaurants and laptops
domain, respectively, we recommend you run the baselines as follows:

-- restaurants

python semeval_base.py --train rest.xml --task 5

It reads the given data (rest.xml) and splits it into a train (absa--train.xml) and a test part (absa--test.xml) using an 80:20 ratio.
Then, it tags the sentences of the test part with the found aspect terms and categories and stores the result to absa--test.predicted-stageI.xml.
absa--test.gold.xml contains the gold (correct) aspect terms and categories.


python semeval_base.py --train rest.xml --task 6

It reads the given data (rest.xml), splits it into a train (absa--train.xml) and a test part (absa--test.xml) using an 80:20 ratio.
Then, it finds the polarity for the aspect terms and categories of the test part and stores the result to absa--test.predicted-stageII.xml.
absa--test.gold.xml contains the gold (correct) polarities.

-- laptops

python semeval_base.py --train lap.xml --task 1

It reads the given data (lap.xml), splits it into a train (absa--train.xml) and a test part (absa--test.xml) using an 80:20 ratio.
Then, it tags the sentences of the test part with the found aspect terms and stores the result to absa--test.predicted-aspect.xml.
absa--test.gold.xml contains the gold (correct) aspect terms and categories.

python semeval_base.py --train lap.xml --task 3

It reads the given data (lap.xml), splits it into a train (absa--train.xml) and a test part (absa--test.xml) using an 80:20 ratio.
Then, it finds the polarity for the aspect terms of the test part and stores the result to absa--test.predicted-stageII.xml.
absa--test.gold.xml contains the gold (correct) polarities.


In all cases above, the baseline script calculates and displays evaluation scores (precision, recall, and F1 for aspect term and aspect category extraction; accuracy for aspect term and aspect category polarity detection).


Evaluation
-----------------------

java -cp ./eval.jar Main.Aspects test.xml ref.xml

It calculates and displays the precision, recall and F1 for aspect term and category extraction for a system that generated test.xml, comparing it to ref.xml that contains
the gold (correct) annotations. The same measures are also calculated and displayed by semeval_base.py.

java -cp ./eval.jar Main.Polarity test.xml ref.xml

In contrast to semeval_base.py, which calculates only the overall accuracy for the polarity detection task, the above command also calculates F1, precision and recall
for all labels (positive|negative|neutral|conflict). As previously, test.xml is the file that the system generated and ref.xml
is the one that contains the gold (correct) annotations.


Submit your system
-----------------------

The Aspect Based Sentiment Analysis task will run in two stages.

In the first stage, you will be provided with an XML file that will contain a set of sentences.
If you want to participate in this stage, you have to return a file tagged with the aspect terms and categories in the same way they are tagged in the training data.

In the second stage, we will provide you with the correct aspect terms and categories
and you will have to find their polarity (positive|negative|neutral|conflict) and tag them as in the training data.


Before uploading your results (for stage one or two), we highly recommend you validate (as shown below) the XML your system produced against the provided XSD schema (SemEvalSchema.xsd).
This way you will verify that your XML output is well-formed and can be processed/parsed by our evaluation scripts.

java -cp ./eval.jar Main.Valid test.xml SemEvalSchema.xsd
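
Putting the pieces together, a typical stage-one session on the restaurants data chains the steps above (file names as introduced earlier in this README):

python semeval_base.py --train rest.xml --task 5
java -cp ./eval.jar Main.Valid absa--test.predicted-stageI.xml SemEvalSchema.xsd
java -cp ./eval.jar Main.Aspects absa--test.predicted-stageI.xml absa--test.gold.xml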

--------------------------------------------------------------------------------
/data/data_orign/SemEval2014-Task4/eval/semeval14-absa-base-eval-valid/SemEvalSchema.xsd:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/data/data_orign/SemEval2014-Task4/eval/semeval14-absa-base-eval-valid/eval.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data/data_orign/SemEval2014-Task4/eval/semeval14-absa-base-eval-valid/eval.jar

--------------------------------------------------------------------------------
/data/data_orign/SemEval2014-Task4/train/SemEval14_ABSA_AnnotationGuidelines.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data/data_orign/SemEval2014-Task4/train/SemEval14_ABSA_AnnotationGuidelines.pdf

--------------------------------------------------------------------------------
/data/data_processed/SemEval2014/laptop-statistic.txt:
--------------------------------------------------------------------------------
train sample numbers: 1462
train average target number/sample: 1.58207934337
train total target number: 939
train average target length: 1.9190628328
train average sample length: 18.585499316
train negative,neutral,positive number: 866,460,987; rate: 0.374405533939,0.19887591872,0.426718547341
train top 10 aspects: screen:60;price:56;use:53;battery life:52;keyboard:50;battery:47;programs:37;features:35;software:33;warranty:31;
----------------------------------------
test sample numbers: 411
test average target number/sample: 1.55231143552
test total target number: 389
test average target length: 1.94344473008
test average sample length: 14.9562043796
test negative,neutral,positive number: 128,169,341; rate: 0.200626959248,0.264890282132,0.534482758621
test top 10 aspects: performance:15;price:15;works:14;os:13;features:11;windows 8:11;use:9;screen:9;size:8;keyboard:7;
----------------------------------------
train+test sample numbers: 1873
train+test average target number/sample: 1.5755472504
train+test total target number: 1170
train+test average target length: 2.00769230769
train+test average sample length: 17.7891083823
train+test negative,neutral,positive number: 994,629,1328; rate: 0.336834971196,0.213148085395,0.450016943409
train+test top 10 aspects: price:71;screen:69;use:62;battery life:58;keyboard:57;battery:53;features:46;performance:38;programs:37;software:37;
----------------------------------------

--------------------------------------------------------------------------------
/data/data_processed/SemEval2014/restaurants-statistic.txt:
--------------------------------------------------------------------------------
train sample numbers: 1978
train average target number/sample: 1.82103134479
train total target number: 1191
train average target length: 2.07220822838
train average sample length: 16.2856420627
train negative,neutral,positive number: 805,633,2164; rate: 0.223486951694,0.175735702388,0.600777345919
train top 10 aspects: food:360;service:225;prices:63;place:59;dinner:56;staff:55;menu:55;pizza:51;atmosphere:46;price:41;
----------------------------------------
test sample numbers: 600
test average target number/sample: 1.86666666667
test total target number: 520
test average target length: 1.99423076923
test average sample length: 15.415
test negative,neutral,positive number: 196,196,728; rate: 0.175,0.175,0.65
test top 10 aspects: food:125;service:74;menu:22;atmosphere:21;staff:21;place:19;prices:18;meal:14;sushi:14;drinks:13;
----------------------------------------
train+test sample numbers: 2578
train+test average target number/sample: 1.83165244375
train+test total target number: 1523
train+test average target length: 2.14576493762
train+test average sample length: 16.0830100853
train+test negative,neutral,positive number: 1001,829,2892; rate: 0.211986446421,0.17556120288,0.612452350699
train+test top 10 aspects: food:485;service:299;prices:81;place:78;menu:77;staff:76;atmosphere:67;dinner:63;pizza:59;meal:54;
----------------------------------------

--------------------------------------------------------------------------------
/data/store/glove.840B.300d.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data/store/glove.840B.300d.txt

--------------------------------------------------------------------------------
/data_processing/MPQA.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data_processing/MPQA.py

--------------------------------------------------------------------------------
/data_processing/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data_processing/__init__.py

--------------------------------------------------------------------------------
/data_processing/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data_processing/__init__.pyc

--------------------------------------------------------------------------------
/data_processing/clean.py:
--------------------------------------------------------------------------------
import re
import string
from bs4 import BeautifulSoup


def strip_punctuation(s):
    # print("output_2", s)
    # In Python 3, str.maketrans takes the characters to delete as its third argument
    return s.translate(str.maketrans("", "", string.punctuation))
    # return ''.join(c for c in s if c not in string.punctuation)


def process_text(x):
    x = x.lower()
    x = x.replace("&quot;", " ")
    x = x.replace('"', " ")
    x = BeautifulSoup(x, "lxml").text
    x = re.sub('[^A-Za-z0-9]+', ' ', x)
    x = x.strip().split(' ')
    # x = [strip_punctuation(y) for y in x]
    ans = []
    for y in x:
        if len(y) == 0:
            continue
        ans.append(y)
    # ptxt = nltk.word_tokenize(ptxt)
    return ans


def clean_str(string, max_seq_len=-1):
    string = string.replace('"', " ")
    string = string.replace("&quot;", " ")
    string = BeautifulSoup(string, "lxml").text
    string = re.sub(r"[^A-Za-z0-9(),!?\'\`]", " ", string)
    string = re.sub(r"\'s", " \'s", string)
    string = re.sub(r"\'ve", " \'ve", string)
    string = re.sub(r"n\'t", " n\'t", string)
    string = re.sub(r"\'re", " \'re", string)
    string = re.sub(r"\'d", " \'d", string)
    string = re.sub(r"\'ll", " \'ll", string)
    string = re.sub(r",", " , ", string)
    string = re.sub(r"!", " ! ", string)
    string = re.sub(r"\(", " \( ", string)
    string = re.sub(r"\)", " \) ", string)
    string = re.sub(r"\?", " \? ", string)
    string = re.sub(r"\s{2,}", " ", string)
    s = string.strip().lower().split(" ")
    if max_seq_len == -1:
        return s
    elif len(s) > max_seq_len:
        return s[0:max_seq_len]
    else:
        return s


if __name__ == '__main__':
    s = ''
    print(clean_str(s, 205))
    print(process_text(s))
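
# A concrete illustration of clean_str (hypothetical input, not part of the original file):
#   clean_str('The "screen" is great, isn\'t it?')
#   -> ['the', 'screen', 'is', 'great', ',', 'is', "n't", 'it', '\?']
# The last token is a literal backslash plus '?', because the replacement
# string " \? " inserts the backslash verbatim.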
", string) 46 | string = re.sub(r"\s{2,}", " ", string) 47 | s = string.strip().lower().split(" ") 48 | if max_seq_len == -1: 49 | return s 50 | elif len(s) > max_seq_len: 51 | return s[0:max_seq_len] 52 | else: 53 | return s 54 | 55 | 56 | if __name__ == '__main__': 57 | s = '' 58 | print(clean_str(s, 205)) 59 | print(process_text(s)) 60 | -------------------------------------------------------------------------------- /data_processing/clean.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/data_processing/clean.pyc -------------------------------------------------------------------------------- /layers/Attention.py: -------------------------------------------------------------------------------- 1 | import math 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | 6 | 7 | class Attention(nn.Module): 8 | def __init__(self, embed_dim, hidden_dim=None, out_dim=None, n_head=1, score_function='scaled_dot_product', 9 | dropout=0): 10 | ''' Attention Mechanism 11 | :param embed_dim: 12 | :param hidden_dim: 13 | :param out_dim: 14 | :param n_head: num of head (Multi-Head Attention) 15 | :param score_function: scaled_dot_product / mlp (concat) / bi_linear (general dot) 16 | :return (?, q_len, out_dim,) 17 | ''' 18 | super(Attention, self).__init__() 19 | if hidden_dim is None: 20 | hidden_dim = embed_dim // n_head 21 | if out_dim is None: 22 | out_dim = embed_dim 23 | self.embed_dim = embed_dim 24 | self.hidden_dim = hidden_dim 25 | self.n_head = n_head 26 | self.score_function = score_function 27 | self.w_kx = nn.Parameter(torch.FloatTensor(n_head, embed_dim, hidden_dim)) 28 | self.w_qx = nn.Parameter(torch.FloatTensor(n_head, embed_dim, hidden_dim)) 29 | self.proj = nn.Linear(n_head * hidden_dim, out_dim) 30 | self.dropout = nn.Dropout(dropout) 31 | if score_function == 'mlp': 32 | self.weight = nn.Parameter(torch.Tensor(hidden_dim * 2)) 33 | elif self.score_function == 'bi_linear': 34 | self.weight = nn.Parameter(torch.Tensor(hidden_dim, hidden_dim)) 35 | else: 36 | self.register_parameter('weight', None) 37 | self.reset_parameters() 38 | 39 | def reset_parameters(self): 40 | stdv = 1. / math.sqrt(self.hidden_dim) 41 | self.w_kx.data.uniform_(-stdv, stdv) 42 | self.w_qx.data.uniform_(-stdv, stdv) 43 | if self.weight is not None: 44 | self.weight.data.uniform_(-stdv, stdv) 45 | 46 | def forward(self, k, q): 47 | if len(q.shape) == 2: # q_len missing 48 | q = torch.unsqueeze(q, dim=1) 49 | if len(k.shape) == 2: # k_len missing 50 | k = torch.unsqueeze(k, dim=1) 51 | mb_size = k.shape[0] # ? 
    def forward(self, k, q):
        if len(q.shape) == 2:  # q_len missing
            q = torch.unsqueeze(q, dim=1)
        if len(k.shape) == 2:  # k_len missing
            k = torch.unsqueeze(k, dim=1)
        mb_size = k.shape[0]  # ?
        k_len = k.shape[1]
        q_len = q.shape[1]
        # k: (?, k_len, embed_dim,)
        # q: (?, q_len, embed_dim,)
        # kx: (n_head, ?*k_len, embed_dim) -> (n_head*?, k_len, hidden_dim)
        # qx: (n_head, ?*q_len, embed_dim) -> (n_head*?, q_len, hidden_dim)
        # score: (n_head*?, q_len, k_len,)
        # output: (?, q_len, out_dim,)
        kx = k.repeat(self.n_head, 1, 1).view(self.n_head, -1, self.embed_dim)  # (n_head, ?*k_len, embed_dim)
        qx = q.repeat(self.n_head, 1, 1).view(self.n_head, -1, self.embed_dim)  # (n_head, ?*q_len, embed_dim)
        kx = torch.bmm(kx, self.w_kx).view(-1, k_len, self.hidden_dim)  # (n_head*?, k_len, hidden_dim)
        qx = torch.bmm(qx, self.w_qx).view(-1, q_len, self.hidden_dim)  # (n_head*?, q_len, hidden_dim)
        if self.score_function == 'scaled_dot_product':
            kt = kx.permute(0, 2, 1)
            qkt = torch.bmm(qx, kt)
            score = torch.div(qkt, math.sqrt(self.hidden_dim))
        elif self.score_function == 'mlp':
            kxx = torch.unsqueeze(kx, dim=1).expand(-1, q_len, -1, -1)
            qxx = torch.unsqueeze(qx, dim=2).expand(-1, -1, k_len, -1)
            kq = torch.cat((kxx, qxx), dim=-1)  # (n_head*?, q_len, k_len, hidden_dim*2)
            score = F.tanh(torch.matmul(kq, self.weight))
        elif self.score_function == 'bi_linear':
            qw = torch.matmul(qx, self.weight)
            kt = kx.permute(0, 2, 1)
            score = torch.bmm(qw, kt)
        else:
            raise RuntimeError('invalid score_function')
        score = F.softmax(score, dim=-1)
        output = torch.bmm(score, kx)  # (n_head*?, q_len, hidden_dim)
        output = torch.cat(torch.split(output, mb_size, dim=0), dim=-1)  # (?, q_len, n_head*hidden_dim)
        output = self.proj(output)  # (?, q_len, out_dim)
        output = self.dropout(output)
        return output

class AttentionAspect(nn.Module):
    def __init__(self, embed_dim_k, embed_dim_q, hidden_dim_k=None, hidden_dim_q=None, out_dim=None, n_head=1,
                 score_function='scaled_dot_product', dropout=0):
        ''' Attention Mechanism
        :param embed_dim:
        :param hidden_dim:
        :param out_dim:
        :param n_head: num of head (Multi-Head Attention)
        :param score_function: scaled_dot_product / mlp (concat) / bi_linear (general dot)
        :return (?, q_len, out_dim,)
        '''
        super(AttentionAspect, self).__init__()
        if hidden_dim_k is None:
            hidden_dim_k = embed_dim_k // n_head
        if hidden_dim_q is None:
            hidden_dim_q = embed_dim_q // n_head
        if out_dim is None:
            out_dim = embed_dim_k
        self.embed_dim_k = embed_dim_k
        self.embed_dim_q = embed_dim_q
        self.hidden_dim_k = hidden_dim_k
        self.hidden_dim_q = hidden_dim_q
        self.n_head = n_head
        self.score_function = score_function
        self.w_kx = nn.Parameter(torch.FloatTensor(n_head, embed_dim_k, hidden_dim_k))
        self.w_qx = nn.Parameter(torch.FloatTensor(n_head, embed_dim_q, hidden_dim_q))
        self.proj = nn.Linear(n_head * hidden_dim_k, out_dim)
        self.dropout = nn.Dropout(dropout)
        if score_function == 'mlp':
            self.weight = nn.Parameter(torch.Tensor(hidden_dim_k + hidden_dim_q))
        elif self.score_function == 'bi_linear':
            self.weight = nn.Parameter(torch.Tensor(hidden_dim_q, hidden_dim_k))
        else:
            self.register_parameter('weight', None)
        self.reset_parameters()

    def reset_parameters(self):
        stdv_k = 1. / math.sqrt(self.hidden_dim_k)
        self.w_kx.data.uniform_(-stdv_k, stdv_k)
        stdv_q = 1. / math.sqrt(self.hidden_dim_q)
        self.w_qx.data.uniform_(-stdv_q, stdv_q)
        if self.weight is not None:
            stdv = 1. / math.sqrt((self.hidden_dim_k + self.hidden_dim_q) / 2)
            self.weight.data.uniform_(-stdv, stdv)

    def forward(self, k, q):
        if len(q.shape) == 2:  # q_len missing
            q = torch.unsqueeze(q, dim=1)
        if len(k.shape) == 2:  # k_len missing
            k = torch.unsqueeze(k, dim=1)
        mb_size = k.shape[0]  # ?
        k_len = k.shape[1]
        q_len = q.shape[1]
        # k: (?, k_len, embed_dim,)
        # q: (?, q_len, embed_dim,)
        # kx: (n_head, ?*k_len, embed_dim) -> (n_head*?, k_len, hidden_dim)
        # qx: (n_head, ?*q_len, embed_dim) -> (n_head*?, q_len, hidden_dim)
        # score: (n_head*?, q_len, k_len,)
        # output: (?, q_len, out_dim,)
        kx = k.repeat(self.n_head, 1, 1).view(self.n_head, -1, self.embed_dim_k)  # (n_head, ?*k_len, embed_dim)
        qx = q.repeat(self.n_head, 1, 1).view(self.n_head, -1, self.embed_dim_q)  # (n_head, ?*q_len, embed_dim)
        kx = torch.bmm(kx, self.w_kx).view(-1, k_len, self.hidden_dim_k)  # (n_head*?, k_len, hidden_dim)
        qx = torch.bmm(qx, self.w_qx).view(-1, q_len, self.hidden_dim_q)  # (n_head*?, q_len, hidden_dim)
        if self.score_function == 'scaled_dot_product':
            kt = kx.permute(0, 2, 1)
            qkt = torch.bmm(qx, kt)
            score = torch.div(qkt, math.sqrt(self.hidden_dim_k))
        elif self.score_function == 'mlp':
            kxx = torch.unsqueeze(kx, dim=1).expand(-1, q_len, -1, -1)
            qxx = torch.unsqueeze(qx, dim=2).expand(-1, -1, k_len, -1)
            kq = torch.cat((kxx, qxx), dim=-1)  # (n_head*?, q_len, k_len, hidden_dim*2)
            score = F.tanh(torch.matmul(kq, self.weight))
        elif self.score_function == 'bi_linear':
            qw = torch.matmul(qx, self.weight)
            kt = kx.permute(0, 2, 1)
            score = torch.bmm(qw, kt)
        else:
            raise RuntimeError('invalid score_function')
        score = F.softmax(score, dim=-1)
        output = torch.bmm(score, kx)  # (n_head*?, q_len, hidden_dim)
        output = torch.cat(torch.split(output, mb_size, dim=0), dim=-1)  # (?, q_len, n_head*hidden_dim)
        output = self.proj(output)  # (?, q_len, out_dim)
        output = self.dropout(output)
        return output, score


class BasicAttention(nn.Module):
    def __init__(self, hidden_dim=None, score_function='basic'):
        super(BasicAttention, self).__init__()
        self.hidden_dim = hidden_dim
        self.score_function = score_function
        if score_function == "basic":
            self.w = nn.Parameter(torch.Tensor(hidden_dim, hidden_dim))
            self.b = nn.Parameter(torch.Tensor(hidden_dim))
            self.u = nn.Parameter(torch.Tensor(hidden_dim, 1))
        if score_function == 'aspect':
            self.w = nn.Parameter(torch.Tensor(hidden_dim, hidden_dim))
        if score_function == 'simple':
            self.w = nn.Parameter(torch.Tensor(hidden_dim, 1))
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1. / math.sqrt(self.hidden_dim)
        if self.score_function == "basic":
            self.w.data.uniform_(-stdv, stdv)
            self.b.data.uniform_(-stdv, stdv)  # torch.Tensor() allocates uninitialised memory, so the bias needs init too
            self.u.data.uniform_(-stdv, stdv)
        elif self.score_function == "aspect":
            self.w.data.uniform_(-stdv, stdv)
        elif self.score_function == "simple":
            self.w.data.uniform_(-stdv, stdv)

    def forward(self, k, q=None):
        if self.score_function == "basic":
            # after this, we have (batch, dim1) with a diff weight per each cell
            attention_score = torch.matmul(k, self.w) + self.b
            attention_score = torch.tanh(attention_score)
            attention_score = torch.matmul(attention_score, self.u).squeeze()
        elif self.score_function == "aspect":
            if len(q.shape) == 2:  # q_len missing
                q = torch.unsqueeze(q, dim=2)
            attention_score = torch.matmul(k, self.w)
            attention_score = torch.bmm(attention_score, q).squeeze()
        elif self.score_function == "simple":
            attention_score = torch.matmul(k, self.w).squeeze()
        attention_score = F.softmax(attention_score, dim=1).view(k.size(0), k.size(1), 1)
        scored_x = k * attention_score
        # now, sum across dim 1 to get the expected feature vector
        condensed_x = torch.sum(scored_x, dim=1)
        return condensed_x, attention_score
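
A minimal usage sketch for the attention layers above (assumed shapes and parameters; not part of the original file):

import torch
from layers.Attention import Attention

attn = Attention(embed_dim=300, n_head=1, score_function='scaled_dot_product')
k = torch.rand(16, 20, 300)  # (batch, k_len, embed_dim), e.g. encoder hidden states
q = torch.rand(16, 300)      # (batch, embed_dim); forward() expands q_len to 1
out = attn(k, q)             # -> (16, 1, 300), one attended summary per sample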

--------------------------------------------------------------------------------
/layers/Dynamic_GRU.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
import numpy as np


class DynamicGRU(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=1, bias=True, batch_first=True, dropout=0,
                 bidirectional=False, only_use_last_hidden_state=False):
        """
        GRU which can hold variable-length sequences, used like TensorFlow's RNN(input, length, ...).

        :param input_size: The number of expected features in the input x
        :param hidden_size: The number of features in the hidden state h
        :param num_layers: Number of recurrent layers.
        :param bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
        :param batch_first: If True, then the input and output tensors are provided as (batch, seq, feature)
        :param dropout: If non-zero, introduces a dropout layer on the outputs of each RNN layer except the last layer
        :param bidirectional: If True, becomes a bidirectional RNN. Default: False
        """
        super(DynamicGRU, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.bias = bias
        self.batch_first = batch_first
        self.dropout = dropout
        self.bidirectional = bidirectional
        self.only_use_last_hidden_state = only_use_last_hidden_state
        self.GRU = nn.GRU(
            input_size=input_size,
            hidden_size=hidden_size,
            num_layers=num_layers,
            bias=bias,
            batch_first=batch_first,
            dropout=dropout,
            bidirectional=bidirectional
        )

    def forward(self, x, x_len):
        """
        sequence -> sort -> pad and pack -> process using RNN -> unpack -> unsort
        :param x: sequence embedding vectors
        :param x_len: numpy/tensor list
        :return:
        """
        """sort"""
        x_sort_idx = np.argsort(-x_len)
        x_unsort_idx = torch.LongTensor(np.argsort(x_sort_idx))
        x_len = x_len[x_sort_idx]
        x = x[torch.LongTensor(x_sort_idx)]
        """pack"""
        x_emb_p = torch.nn.utils.rnn.pack_padded_sequence(x, x_len, batch_first=self.batch_first)
        """process using RNN"""
        out_pack, ht = self.GRU(x_emb_p, None)
        """unsort: h"""
        ht = torch.transpose(ht, 0, 1)[
            x_unsort_idx]  # (num_layers * num_directions, batch, hidden_size) -> (batch, ...)
        ht = torch.transpose(ht, 0, 1)

        if self.only_use_last_hidden_state:
            return ht
        else:
            """unpack: out"""
            out = torch.nn.utils.rnn.pad_packed_sequence(out_pack, batch_first=self.batch_first)  # (sequence, lengths)
            out = out[0]
            out = out[x_unsort_idx]

            return out, ht

--------------------------------------------------------------------------------
/layers/Dynamic_LSTM.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
import numpy as np


class DynamicLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=1, bias=True, batch_first=True, dropout=0,
                 bidirectional=False, only_use_last_hidden_state=False):
        """
        LSTM which can hold variable-length sequences, used like TensorFlow's RNN(input, length, ...).

        :param input_size: The number of expected features in the input x
        :param hidden_size: The number of features in the hidden state h
        :param num_layers: Number of recurrent layers.
        :param bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
        :param batch_first: If True, then the input and output tensors are provided as (batch, seq, feature)
        :param dropout: If non-zero, introduces a dropout layer on the outputs of each RNN layer except the last layer
        :param bidirectional: If True, becomes a bidirectional RNN. Default: False
        """
        super(DynamicLSTM, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.bias = bias
        self.batch_first = batch_first
        self.dropout = dropout
        self.bidirectional = bidirectional
        self.only_use_last_hidden_state = only_use_last_hidden_state
        self.LSTM = nn.LSTM(
            input_size=input_size,
            hidden_size=hidden_size,
            num_layers=num_layers,
            bias=bias,
            batch_first=batch_first,
            dropout=dropout,
            bidirectional=bidirectional
        )

    def forward(self, x, x_len):
        """
        sequence -> sort -> pad and pack -> process using RNN -> unpack -> unsort

        :param x: sequence embedding vectors
        :param x_len: numpy/tensor list
        :return:
        """
        """sort"""
        x_sort_idx = np.argsort(-x_len)
        x_unsort_idx = torch.LongTensor(np.argsort(x_sort_idx))
        x_len = x_len[x_sort_idx]
        x = x[torch.LongTensor(x_sort_idx)]
        """pack"""
        x_emb_p = torch.nn.utils.rnn.pack_padded_sequence(x, x_len, batch_first=self.batch_first)
        """process using RNN"""
        out_pack, (ht, ct) = self.LSTM(x_emb_p, None)
        """unsort: h"""
        ht = torch.transpose(ht, 0, 1)[
            x_unsort_idx]  # (num_layers * num_directions, batch, hidden_size) -> (batch, ...)
        ht = torch.transpose(ht, 0, 1)

        if self.only_use_last_hidden_state:
            return ht
        else:
            """unpack: out"""
            out = torch.nn.utils.rnn.pad_packed_sequence(out_pack, batch_first=self.batch_first)  # (sequence, lengths)
            out = out[0]
            out = out[x_unsort_idx]
            """unsort: out c"""
            ct = torch.transpose(ct, 0, 1)[
                x_unsort_idx]  # (num_layers * num_directions, batch, hidden_size) -> (batch, ...)
            ct = torch.transpose(ct, 0, 1)

            return out, (ht, ct)

--------------------------------------------------------------------------------
/layers/Dynamic_RNN.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-


import torch
import torch.nn as nn
import numpy as np


class DynamicRNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=1, bias=True, batch_first=True, dropout=0,
                 bidirectional=False, only_use_last_hidden_state=False, rnn_type='LSTM'):
        """
        RNN wrapper (LSTM, GRU, or vanilla RNN) which can hold variable-length sequences,
        used like TensorFlow's RNN(input, length, ...).

        :param input_size: The number of expected features in the input x
        :param hidden_size: The number of features in the hidden state h
        :param num_layers: Number of recurrent layers.
        :param bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
        :param batch_first: If True, then the input and output tensors are provided as (batch, seq, feature)
        :param dropout: If non-zero, introduces a dropout layer on the outputs of each RNN layer except the last layer
        :param bidirectional: If True, becomes a bidirectional RNN. Default: False
        :param rnn_type: {LSTM, GRU, RNN}
        """
        super(DynamicRNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.bias = bias
        self.batch_first = batch_first
        self.dropout = dropout
        self.bidirectional = bidirectional
        self.only_use_last_hidden_state = only_use_last_hidden_state
        self.rnn_type = rnn_type

        if self.rnn_type == 'LSTM':
            self.RNN = nn.LSTM(
                input_size=input_size, hidden_size=hidden_size, num_layers=num_layers,
                bias=bias, batch_first=batch_first, dropout=dropout, bidirectional=bidirectional)
        elif self.rnn_type == 'GRU':
            self.RNN = nn.GRU(
                input_size=input_size, hidden_size=hidden_size, num_layers=num_layers,
                bias=bias, batch_first=batch_first, dropout=dropout, bidirectional=bidirectional)
        elif self.rnn_type == 'RNN':
            self.RNN = nn.RNN(
                input_size=input_size, hidden_size=hidden_size, num_layers=num_layers,
                bias=bias, batch_first=batch_first, dropout=dropout, bidirectional=bidirectional)

    def forward(self, x, x_len):
        """
        sequence -> sort -> pad and pack -> process using RNN -> unpack -> unsort

        :param x: sequence embedding vectors
        :param x_len: numpy/tensor list
        :return:
        """
        """sort"""
        x_sort_idx = np.argsort(-x_len)
        x_unsort_idx = torch.LongTensor(np.argsort(x_sort_idx))
        x_len = x_len[x_sort_idx]
        x = x[torch.LongTensor(x_sort_idx)]
        """pack"""
        x_emb_p = torch.nn.utils.rnn.pack_padded_sequence(x, x_len, batch_first=self.batch_first)

        # process using the selected RNN
        if self.rnn_type == 'LSTM':
            out_pack, (ht, ct) = self.RNN(x_emb_p, None)
        else:
            out_pack, ht = self.RNN(x_emb_p, None)
            ct = None
        """unsort: h"""
        ht = torch.transpose(ht, 0, 1)[
            x_unsort_idx]  # (num_layers * num_directions, batch, hidden_size) -> (batch, ...)
        ht = torch.transpose(ht, 0, 1)

        if self.only_use_last_hidden_state:
            return ht
        else:
            """unpack: out"""
            out = torch.nn.utils.rnn.pad_packed_sequence(out_pack, batch_first=self.batch_first)  # (sequence, lengths)
            out = out[0]
            out = out[x_unsort_idx]
            """unsort: out c"""
            if self.rnn_type == 'LSTM':
                ct = torch.transpose(ct, 0, 1)[
                    x_unsort_idx]  # (num_layers * num_directions, batch, hidden_size) -> (batch, ...)
                ct = torch.transpose(ct, 0, 1)

            return out, (ht, ct)
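
All three dynamic wrappers above share the sort -> pack -> run -> unpack -> unsort contract described in their docstrings; a small sketch with assumed sizes (0 is taken as the padding id, as elsewhere in this repo):

import numpy as np
import torch
from layers.Dynamic_RNN import DynamicRNN

rnn = DynamicRNN(input_size=300, hidden_size=128, batch_first=True, rnn_type='GRU')
x = torch.rand(4, 10, 300)       # padded batch: (batch, max_seq_len, input_size)
x_len = np.array([10, 7, 5, 2])  # true lengths; sorting/unsorting is handled internally
out, (h_n, _) = rnn(x, x_len)    # out: (4, 10, 128); h_n: (1, 4, 128)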

--------------------------------------------------------------------------------
/layers/Squeeze_embedding.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
import numpy as np


class SqueezeEmbedding(nn.Module):
    """
    Squeeze sequence embedding length to the longest one in the batch
    """

    def __init__(self, batch_first=True):
        super(SqueezeEmbedding, self).__init__()
        self.batch_first = batch_first

    def forward(self, x, x_len):
        """
        sequence -> sort -> pad and pack -> unpack -> unsort
        :param x: sequence embedding vectors
        :param x_len: numpy/tensor list
        :return:
        """
        """sort"""
        x_sort_idx = np.argsort(-x_len)
        x_unsort_idx = torch.LongTensor(np.argsort(x_sort_idx))
        x_len = x_len[x_sort_idx]
        x = x[torch.LongTensor(x_sort_idx)]
        """pack"""
        x_emb_p = torch.nn.utils.rnn.pack_padded_sequence(x, x_len, batch_first=self.batch_first)
        """unpack: out"""
        out = torch.nn.utils.rnn.pad_packed_sequence(x_emb_p, batch_first=self.batch_first)  # (sequence, lengths)
        out = out[0]
        """unsort"""
        out = out[x_unsort_idx]
        return out
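
A shape-level illustration with assumed sizes: a batch padded to a dataset-wide maximum of 50 is cut back to its longest real sequence.

import numpy as np
import torch
from layers.Squeeze_embedding import SqueezeEmbedding

squeeze = SqueezeEmbedding(batch_first=True)
x = torch.rand(4, 50, 300)      # padded to a global max length
x_len = np.array([12, 9, 4, 3])
out = squeeze(x, x_len)         # (4, 12, 300): trimmed to the batch maximum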

--------------------------------------------------------------------------------
/layers/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/layers/__init__.py

--------------------------------------------------------------------------------
/models/AE_CNN.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/models/AE_CNN.py

--------------------------------------------------------------------------------
/models/AE_ContextAvg.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Squeeze_embedding import SqueezeEmbedding


class AEContextAvg(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(AEContextAvg, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.squeeze_embedding = SqueezeEmbedding(batch_first=True)
        self.dense = nn.Linear(args.embed_dim * 2, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        aspect_indices = inputs[1]

        x = self.encoder(text_raw_indices)
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        emb = self.squeeze_embedding(x, x_len)
        output = torch.sum(emb, dim=1) / x_len.view(-1, 1).float()
        output = output.view(output.size(0), -1)

        aspect = self.encoder_aspect(aspect_indices)
        aspect_len = torch.sum(aspect_indices != 0, dim=-1)
        aspect_emb = self.squeeze_embedding(aspect, aspect_len)
        aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float()
        aspect_emb = aspect_emb.view(aspect_emb.size(0), -1)

        output = torch.cat((output, aspect_emb), dim=1)
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output
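
The model files in this directory all follow the same constructor and forward contract; a hypothetical end-to-end sketch (the args fields and sizes mirror what AEContextAvg reads; the random matrix stands in for a GloVe-initialised embedding):

import argparse
import numpy as np
import torch
from models.AE_ContextAvg import AEContextAvg

args = argparse.Namespace(embed_dim=300, polarities_dim=3, dropout=0.5, softmax=False)
emb = np.random.rand(5000, 300)                     # stand-in embedding matrix
model = AEContextAvg(args, embedding_matrix=emb, aspect_embedding_matrix=emb)

text_raw_indices = torch.randint(1, 5000, (8, 30))  # word ids; 0 is assumed to be padding
aspect_indices = torch.randint(1, 5000, (8, 3))
logits = model([text_raw_indices, aspect_indices])  # -> (8, 3) polarity scores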

--------------------------------------------------------------------------------
/models/ATAE_BiGRU.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Squeeze_embedding import SqueezeEmbedding
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention, AttentionAspect


class ATAE_BiGRU(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(ATAE_BiGRU, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.squeeze_embedding = SqueezeEmbedding(batch_first=True)
        self.gru = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout,
                              bidirectional=True, rnn_type="GRU")
        # self.attention = BasicAttention(hidden_dim=args.hidden_dim*2, score_function="aspect")
        self.attention = AttentionAspect(embed_dim_k=args.hidden_dim * 2, embed_dim_q=args.embed_dim,
                                         score_function="mlp")
        self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        aspect_indices = inputs[1]

        x = self.encoder(text_raw_indices)
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.gru(x, x_len)

        aspect = self.encoder_aspect(aspect_indices)
        aspect_len = torch.sum(aspect_indices != 0, dim=-1)
        aspect_emb = self.squeeze_embedding(aspect, aspect_len)
        aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float()
        aspect_emb = aspect_emb.view(aspect_emb.size(0), -1)

        output, _ = self.attention(output, aspect_emb)
        output = output.view(output.size(0), output.size(2))
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/ATAE_BiLSTM.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Squeeze_embedding import SqueezeEmbedding
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention, AttentionAspect


class ATAE_BiLSTM(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(ATAE_BiLSTM, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.squeeze_embedding = SqueezeEmbedding(batch_first=True)
        self.lstm = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout,
                               bidirectional=True, rnn_type="LSTM")
        self.attention = AttentionAspect(embed_dim_k=args.hidden_dim * 2, embed_dim_q=args.embed_dim,
                                         score_function="mlp")
        self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        aspect_indices = inputs[1]

        x = self.encoder(text_raw_indices)
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.lstm(x, x_len)

        aspect = self.encoder_aspect(aspect_indices)
        aspect_len = torch.sum(aspect_indices != 0, dim=-1)
        aspect_emb = self.squeeze_embedding(aspect, aspect_len)
        aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float()
        aspect_emb = aspect_emb.view(aspect_emb.size(0), -1)

        output, _ = self.attention(output, aspect_emb)
        output = output.view(output.size(0), output.size(2))
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/ATAE_GRU.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Squeeze_embedding import SqueezeEmbedding
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention, AttentionAspect


class ATAE_GRU(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(ATAE_GRU, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.squeeze_embedding = SqueezeEmbedding(batch_first=True)
        self.gru = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout,
                              rnn_type="GRU")
        # self.attention = BasicAttention(hidden_dim=args.hidden_dim, score_function="aspect")
        self.attention = AttentionAspect(embed_dim_k=args.hidden_dim, embed_dim_q=args.embed_dim,
                                         score_function="mlp")
        self.dense = nn.Linear(args.hidden_dim, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        aspect_indices = inputs[1]

        x = self.encoder(text_raw_indices)
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.gru(x, x_len)

        aspect = self.encoder_aspect(aspect_indices)
        aspect_len = torch.sum(aspect_indices != 0, dim=-1)
        aspect_emb = self.squeeze_embedding(aspect, aspect_len)
        aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float()
        aspect_emb = aspect_emb.view(aspect_emb.size(0), -1)

        output, _ = self.attention(output, aspect_emb)
        output = output.view(output.size(0), output.size(2))
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/ATAE_LSTM.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Squeeze_embedding import SqueezeEmbedding
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention, AttentionAspect


class ATAE_LSTM(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(ATAE_LSTM, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.squeeze_embedding = SqueezeEmbedding(batch_first=True)
        self.lstm = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout,
                               rnn_type="LSTM")
        # self.attention = BasicAttention(hidden_dim=args.hidden_dim, score_function="aspect")
        self.attention = AttentionAspect(embed_dim_k=args.hidden_dim, embed_dim_q=args.embed_dim,
                                         score_function="mlp")
        self.dense = nn.Linear(args.hidden_dim, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        aspect_indices = inputs[1]

        x = self.encoder(text_raw_indices)
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.lstm(x, x_len)

        aspect = self.encoder_aspect(aspect_indices)
        aspect_len = torch.sum(aspect_indices != 0, dim=-1)
        aspect_emb = self.squeeze_embedding(aspect, aspect_len)
        aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float()
        aspect_emb = aspect_emb.view(aspect_emb.size(0), -1)

        output, _ = self.attention(output, aspect_emb)
        output = output.view(output.size(0), output.size(2))
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/AT_BiGRU.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention


class AT_BiGRU(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(AT_BiGRU, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.gru = DynamicRNN(args.embed_dim, args.hidden_dim, bidirectional=True, num_layers=1, dropout=args.dropout,
                              rnn_type="GRU")
        self.attention = BasicAttention(hidden_dim=args.hidden_dim * 2)
        self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        x = self.dropout(self.encoder(text_raw_indices))
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.gru(x, x_len)
        output, _ = self.attention(k=output)
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/AT_BiLSTM.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention


class AT_BiLSTM(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(AT_BiLSTM, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.lstm = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout,
                               bidirectional=True, rnn_type="LSTM")
        self.attention = BasicAttention(hidden_dim=args.hidden_dim * 2)
        self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        x = self.dropout(self.encoder(text_raw_indices))
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.lstm(x, x_len)
        output, _ = self.attention(k=output)
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/AT_GRU.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention


class AT_GRU(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(AT_GRU, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.gru = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, dropout=args.dropout, rnn_type="GRU")
        self.attention = BasicAttention(hidden_dim=args.hidden_dim)
        self.dense = nn.Linear(args.hidden_dim, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        x = self.encoder(text_raw_indices)
        x = self.dropout(x)
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.gru(x, x_len)
        output, _ = self.attention(k=output)
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/AT_LSTM.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Dynamic_RNN import DynamicRNN
from layers.Attention import BasicAttention


class AT_LSTM(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(AT_LSTM, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.lstm = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout,
                               rnn_type="LSTM")
        self.attention = BasicAttention(hidden_dim=args.hidden_dim)
        self.dense = nn.Linear(args.hidden_dim, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        x = self.dropout(self.encoder(text_raw_indices))
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        output, (h_n, _) = self.lstm(x, x_len)
        output, _ = self.attention(k=output)
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/BiGRU.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Dynamic_RNN import DynamicRNN


class BiGRU(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(BiGRU, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.gru = DynamicRNN(args.embed_dim, args.hidden_dim, bidirectional=True, num_layers=1, dropout=args.dropout,
                              rnn_type="GRU")
        self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        x = self.dropout(self.encoder(text_raw_indices))
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        _, (h_n, _) = self.gru(x, x_len)
        output = torch.cat((h_n[0], h_n[1]), dim=1)
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/BiLSTM.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from layers.Dynamic_RNN import DynamicRNN


class BiLSTM(nn.Module):
    def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None):
        super(BiLSTM, self).__init__()
        self.args = args
        self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float))
        self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float))
        self.lstm = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout,
                               bidirectional=True, rnn_type="LSTM")
        self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim)
        self.softmax = nn.Softmax()
        self.dropout = nn.Dropout(args.dropout)

    def forward(self, inputs):
        text_raw_indices = inputs[0]
        x = self.dropout(self.encoder(text_raw_indices))
        x_len = torch.sum(text_raw_indices != 0, dim=-1)
        _, (h_n, _) = self.lstm(x, x_len)
        output = torch.cat((h_n[0], h_n[1]), dim=1)
        output = self.dropout(output)
        output = self.dense(output)
        if self.args.softmax:
            output = self.softmax(output)
        return output

--------------------------------------------------------------------------------
/models/CABASC.py:
-------------------------------------------------------------------------------- 1 | from layers.Attention import Attention 2 | import torch 3 | import torch.nn as nn 4 | import torch.nn.functional as F 5 | 6 | from layers.Squeeze_embedding import SqueezeEmbedding 7 | from layers.Dynamic_RNN import DynamicRNN 8 | 9 | 10 | class CABASC(nn.Module): 11 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 12 | super(CABASC, self).__init__() 13 | self.args = args 14 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 15 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 16 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 17 | self.attention = Attention(args.embed_dim, score_function='mlp') # content attention 18 | self.m_linear = nn.Linear(args.embed_dim, args.embed_dim, bias=False) 19 | self.mlp = nn.Linear(args.embed_dim, args.embed_dim) # W4 20 | self.dense = nn.Linear(args.embed_dim, args.polarities_dim) # W5 21 | # context attention layer 22 | self.rnn_l = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 23 | rnn_type='GRU') 24 | self.rnn_r = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 25 | rnn_type='GRU') 26 | self.mlp_l = nn.Linear(args.hidden_dim, 1) 27 | self.mlp_r = nn.Linear(args.hidden_dim, 1) 28 | self.dropout = nn.Dropout(args.dropout) 29 | self.softmax = nn.Softmax() 30 | 31 | def context_attention(self, x_l, x_r, memory, memory_len, aspect_len): 32 | # Context representation 33 | left_len, right_len = torch.sum(x_l != 0, dim=-1), torch.sum(x_r != 0, dim=-1) 34 | x_l, x_r = self.encoder(x_l), self.encoder(x_r) 35 | 36 | context_l, (_, _) = self.rnn_l(x_l, left_len) # left, right context : (batch size, max_len, embedds) 37 | context_r, (_, _) = self.rnn_r(x_r, right_len) 38 | 39 | # Attention weights : (batch_size, max_batch_len, 1) 40 | attn_l = F.sigmoid(self.mlp_l(context_l)) + 0.5 41 | attn_r = F.sigmoid(self.mlp_r(context_r)) + 0.5 42 | 43 | # apply weights one sample at a time 44 | for i in range(memory.size(0)): 45 | aspect_start = (left_len[i] - aspect_len[i]).item() 46 | aspect_end = left_len[i] 47 | # attention weights for each element in the sentence 48 | for idx in range(memory_len[i]): 49 | if idx < aspect_start: 50 | memory[i][idx] *= attn_l[i][idx] 51 | elif idx < aspect_end: 52 | memory[i][idx] *= attn_l[i][idx] + attn_r[i][idx - aspect_start] 53 | else: 54 | memory[i][idx] *= attn_r[i][idx - aspect_start] 55 | return memory 56 | 57 | def locationed_memory(self, memory, memory_len, left_len, aspect_len): 58 | # based on the absolute distance to the first border word of the aspect 59 | for i in range(memory.size(0)): 60 | for idx in range(memory_len[i]): 61 | aspect_start = left_len[i] - aspect_len[i] 62 | aspect_end = left_len[i] 63 | if idx < aspect_start: 64 | l = aspect_start.item() - idx 65 | elif idx <= aspect_end: 66 | l = 0 67 | else: 68 | l = idx - aspect_end.item() 69 | memory[i][idx] *= (1 - float(l) / int(memory_len[i])) 70 | return memory 71 | 72 | def forward(self, inputs): 73 | # inputs 74 | text_raw_indices, aspect_indices, x_l, x_r = inputs[0], inputs[1], inputs[2], inputs[3] 75 | memory_len = torch.sum(text_raw_indices != 0, dim=-1) 76 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 77 | left_len = torch.sum(x_l != 0, dim=-1) 78 | # aspect representation 79 | nonzeros_aspect = 
torch.tensor(aspect_len, dtype=torch.float).to(self.args.device) 80 | aspect = self.encoder_aspect(aspect_indices) 81 | aspect = self.squeeze_embedding(aspect, aspect_len) 82 | aspect = torch.sum(aspect, dim=1) 83 | aspect = torch.div(aspect, nonzeros_aspect.view(nonzeros_aspect.size(0), 1)) 84 | x = aspect.unsqueeze(dim=1) 85 | 86 | # memory module 87 | memory = self.encoder(text_raw_indices) 88 | memory = self.squeeze_embedding(memory, memory_len) 89 | 90 | # sentence representation 91 | nonzeros_memory = torch.tensor(memory_len, dtype=torch.float).to(self.args.device) 92 | v_s = torch.sum(memory, dim=1) 93 | v_s = torch.div(v_s, nonzeros_memory.view(nonzeros_memory.size(0), 1)) 94 | v_s = v_s.unsqueeze(dim=1) 95 | 96 | # position attention module (fix: the original tested the undefined name 'type', i.e. the Python builtin, so neither branch ever ran; 'atype' is an assumed args field defaulting to the full CABASC variant) 97 | if getattr(self.args, 'atype', 'cabasc') == 'c': 98 | memory = self.locationed_memory(memory, memory_len, left_len, aspect_len) 99 | elif getattr(self.args, 'atype', 'cabasc') == 'cabasc': 100 | # context attention 101 | memory = self.context_attention(x_l, x_r, memory, memory_len, aspect_len) 102 | # recalculate sentence rep with new memory 103 | v_s = torch.sum(memory, dim=1) 104 | v_s = torch.div(v_s, nonzeros_memory.view(nonzeros_memory.size(0), 1)) 105 | v_s = v_s.unsqueeze(dim=1) 106 | 107 | # content attention module 108 | for _ in range(self.args.hops): 109 | # x = self.x_linear(x) 110 | v_ts = self.attention(memory, x) 111 | 112 | # classifier 113 | v_ns = v_ts + v_s # embed the sentence 114 | v_ns = v_ns.view(v_ns.size(0), -1) 115 | v_ms = F.tanh(self.mlp(v_ns)) 116 | v_ms = self.dropout(v_ms) 117 | out = self.dense(v_ms) 118 | if self.args.softmax: 119 | out = self.softmax(out) 120 | return out 121 |
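The locationed_memory routine above (like the similar helpers in LCRS.py and MemNet.py later in this dump) down-weights each context word by the factor 1 - l/n, where l is the word's distance to the aspect and n the sentence length. A standalone re-implementation, written here purely for illustration, with a worked example:

# Illustrative re-implementation of the position weighting used above (not imported from the repo).
def position_weight(idx, aspect_start, aspect_end, sent_len):
    if idx < aspect_start:
        l = aspect_start - idx    # word lies left of the aspect
    elif idx <= aspect_end:
        l = 0                     # words inside the aspect keep full weight
    else:
        l = idx - aspect_end      # word lies right of the aspect
    return 1.0 - float(l) / sent_len

# For a 10-word sentence whose aspect spans positions 4-5:
# position 0 -> 0.6, position 3 -> 0.9, positions 4-5 -> 1.0, position 9 -> 0.6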
-------------------------------------------------------------------------------- /models/CNN.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | from layers.Squeeze_embedding import SqueezeEmbedding 5 | 6 | 7 | class CNN(nn.Module): 8 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 9 | super(CNN, self).__init__() 10 | self.args = args 11 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 12 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 13 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 14 | self.convs1 = nn.ModuleList([ 15 | nn.Conv2d(in_channels=1, out_channels=self.args.kernel_num, kernel_size=(K, self.args.embed_dim), bias=True) 16 | for K in self.args.kernel_sizes]) 17 | # A ModuleList registers the convolutions, so model.to(device) moves them and their parameters reach the optimizer; 18 | # the original kept them in a plain Python list and moved each to args.device by hand, leaving them out of parameters(). 19 | in_feat = len(self.args.kernel_sizes) * self.args.kernel_num 20 | self.dense = nn.Linear(in_feat, self.args.polarities_dim, bias=True) 21 | 22 | if args.batch_normalizations is True: 23 | self.convs1_bn = nn.BatchNorm2d(num_features=self.args.kernel_num) 24 | self.fc1 = nn.Linear(in_feat, in_feat // 2, bias=True) 25 | self.fc1_bn = nn.BatchNorm1d(num_features=in_feat // 2) 26 | self.fc2 = nn.Linear(in_feat // 2, self.args.polarities_dim) 27 | self.fc2_bn = nn.BatchNorm1d(num_features=self.args.polarities_dim) 28 | self.softmax = nn.Softmax() 29 | self.dropout = nn.Dropout(args.dropout) 30 | 31 | def forward(self, inputs): 32 | text_raw_indices = inputs[0] 33 | x = self.encoder(text_raw_indices) 34 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 35 | emb = self.squeeze_embedding(x, x_len) 36 | # emb = emb.transpose(0, 1) 37 | emb = self.dropout(emb) 38 | emb = emb.unsqueeze(1) 39 | if self.args.batch_normalizations is True: 40 | x = [self.convs1_bn(F.tanh(conv(emb))).squeeze(3) for conv in self.convs1] # [(N,Co,W), ...]*len(Ks) 41 | x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x] # [(N,Co), ...]*len(Ks) 42 | else: 43 | x = [F.relu(conv(emb)).squeeze(3) for conv in self.convs1] # [(N,Co,W), ...]*len(Ks) 44 | x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x] # [(N,Co), ...]*len(Ks) 45 | x = torch.cat(x, 1) 46 | x = self.dropout(x) # (N,len(Ks)*Co) 47 | if self.args.batch_normalizations is True: 48 | x = self.fc1_bn(self.fc1(x)) 49 | output = self.fc2_bn(self.fc2(F.tanh(x))) 50 | else: 51 | output = self.dense(x) 52 | if self.args.softmax: 53 | output = self.softmax(output) 54 | return output 55 | -------------------------------------------------------------------------------- /models/ContextAvg.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Squeeze_embedding import SqueezeEmbedding 4 | 5 | 6 | class ContextAvg(nn.Module): 7 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 8 | super(ContextAvg, self).__init__() 9 | self.args = args 10 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 11 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 12 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 13 | self.dense = nn.Linear(args.embed_dim, args.polarities_dim) 14 | self.softmax = nn.Softmax() 15 | self.dropout = nn.Dropout(args.dropout) 16 | 17 | def forward(self, inputs): 18 | text_raw_indices = inputs[0] 19 | x = self.encoder(text_raw_indices) 20 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 21 | emb = self.squeeze_embedding(x, x_len) 22 | output = torch.sum(emb, dim=1) / x_len.view(-1, 1).float() 23 | output = output.view(output.size(0), -1) 24 | output = self.dropout(output) 25 | output = self.dense(output) 26 | if self.args.softmax: 27 | output = self.softmax(output) 28 | return output 29 | -------------------------------------------------------------------------------- /models/GCAE.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | import torch.nn.functional as F 4 | from layers.Squeeze_embedding import SqueezeEmbedding 5 | 6 | 7 | class GCAE(nn.Module): 8 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 9 | super(GCAE, self).__init__() 10 | self.args = args 11 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 12 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 13 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 14 | self.convs1 = nn.ModuleList([nn.Conv1d(args.embed_dim, args.kernel_num, K) for K in args.kernel_sizes]) 15 | self.convs2 = nn.ModuleList([nn.Conv1d(args.embed_dim, args.kernel_num, K) for K in args.kernel_sizes]) 16 | self.convs3 = nn.ModuleList([nn.Conv1d(args.embed_dim, args.kernel_num, K, padding=K - 2) for K in [3]]) 17 | self.fc1 = nn.Linear(len(args.kernel_sizes) * args.kernel_num, args.polarities_dim) 18 | self.fc_aspect = nn.Linear(args.kernel_num, args.kernel_num) 19 | self.dropout = nn.Dropout(args.dropout) 20 | self.softmax = nn.Softmax() 21 | 22 | def forward(self, inputs): 23 | text_raw_indices, aspect_indices = inputs[0], inputs[1] 24 | x = 
self.encoder(text_raw_indices) 25 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 26 | emb = self.squeeze_embedding(x, x_len) 27 | 28 | aspect = self.encoder(aspect_indices) 29 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 30 | aspect_emb = self.squeeze_embedding(aspect, aspect_len) 31 | 32 | aa = [F.relu(conv(aspect_emb.transpose(1, 2))) for conv in self.convs3] # [(N,Co,L), ...]*len(Ks) 33 | aa = [F.max_pool1d(a, a.size(2)).squeeze(2) for a in aa] 34 | aspect_v = torch.cat(aa, 1) 35 | 36 | x = [F.tanh(conv(emb.transpose(1, 2))) for conv in self.convs1] # [(N,Co,L), ...]*len(Ks) 37 | y = [F.relu(conv(emb.transpose(1, 2)) + self.fc_aspect(aspect_v).unsqueeze(2)) for conv in self.convs2] 38 | x = [i * j for i, j in zip(x, y)] 39 | 40 | # pooling method 41 | x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x] # [(N,Co), ...]*len(Ks) 42 | # x = [F.adaptive_max_pool1d(i, 2) for i in x] 43 | # x = [i.view(i.size(0), -1) for i in x] 44 | 45 | x = torch.cat(x, 1) 46 | x = self.dropout(x) # (N,len(Ks)*Co) 47 | output = self.fc1(x) # (N,C) 48 | if self.args.softmax: 49 | output = self.softmax(output) 50 | return output 51 | -------------------------------------------------------------------------------- /models/GRU.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Dynamic_RNN import DynamicRNN 4 | 5 | 6 | class GRU(nn.Module): 7 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 8 | super(GRU, self).__init__() 9 | self.args = args 10 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 11 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 12 | self.gru = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, dropout=args.dropout, rnn_type="GRU") 13 | self.dense = nn.Linear(args.hidden_dim, args.polarities_dim) 14 | self.softmax = nn.Softmax() 15 | self.dropout = nn.Dropout(args.dropout) 16 | 17 | def forward(self, inputs): 18 | text_raw_indices = inputs[0] 19 | x = self.encoder(text_raw_indices) 20 | x = self.dropout(x) 21 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 22 | _, (h_n, _) = self.gru(x, x_len) 23 | output = h_n[0] 24 | output = self.dropout(output) 25 | output = self.dense(output) 26 | if self.args.softmax: 27 | output = self.softmax(output) 28 | return output 29 | -------------------------------------------------------------------------------- /models/HAN.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Dynamic_RNN import DynamicRNN 4 | from layers.Attention import BasicAttention 5 | from layers.Squeeze_embedding import SqueezeEmbedding 6 | 7 | 8 | def batch_matmul_bias(seq, weight, bias, nonlinearity=''): 9 | s = None 10 | bias_dim = bias.size() 11 | for i in range(seq.size(0)): 12 | _s = torch.mm(seq[i], weight) 13 | _s_bias = _s + bias.expand(bias_dim[0], _s.size()[0]).transpose(0, 1) 14 | if (nonlinearity == 'tanh'): 15 | _s_bias = torch.tanh(_s_bias) 16 | _s_bias = _s_bias.unsqueeze(0) 17 | if (s is None): 18 | s = _s_bias 19 | else: 20 | s = torch.cat((s, _s_bias), 0) 21 | return s.squeeze() 22 | 23 | 24 | def batch_matmul(seq, weight, nonlinearity=''): 25 | s = None 26 | for i in range(seq.size(0)): 27 | _s = torch.mm(seq[i], weight) 28 | if (nonlinearity == 'tanh'): 29 | _s = torch.tanh(_s) 30 | _s = _s.unsqueeze(0) 31 | if (s is None): 32 
| s = _s 33 | else: 34 | s = torch.cat((s, _s), 0) 35 | return s.squeeze() 36 | 37 | 38 | def attention_mul(rnn_outputs, att_weights): 39 | attn_vectors = None 40 | for i in range(rnn_outputs.size(0)): 41 | h_i = rnn_outputs[i] 42 | a_i = att_weights[i].unsqueeze(1).expand_as(h_i) 43 | h_i = a_i * h_i 44 | h_i = h_i.unsqueeze(0) 45 | if (attn_vectors is None): 46 | attn_vectors = h_i 47 | else: 48 | attn_vectors = torch.cat((attn_vectors, h_i), 0) 49 | return torch.sum(attn_vectors, 0) 50 | 51 | 52 | class HAN(nn.Module): 53 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 54 | super(HAN, self).__init__() 55 | self.position_dim = 100 56 | self.args = args 57 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 58 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 59 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 60 | self.word_rnn = DynamicRNN(args.embed_dim + self.position_dim, args.hidden_dim, num_layers=1, batch_first=True, 61 | dropout=args.dropout, bidirectional=True, rnn_type="LSTM") 62 | self.sentence_rnn = DynamicRNN(args.hidden_dim * 2 + self.position_dim, args.hidden_dim, num_layers=1, 63 | batch_first=True, 64 | dropout=args.dropout, bidirectional=True, rnn_type="LSTM") 65 | self.word_W = nn.Parameter( 66 | torch.Tensor(args.hidden_dim * 2 + self.position_dim, args.hidden_dim * 2 + self.position_dim)) 67 | self.word_bias = nn.Parameter(torch.Tensor(args.hidden_dim * 2 + self.position_dim, 1)) 68 | self.word_weight_proj = nn.Parameter(torch.Tensor(args.hidden_dim * 2 + self.position_dim, 1)) 69 | self.word_attention = BasicAttention(hidden_dim=args.hidden_dim * 2, score_function="basic") 70 | self.sentence_W = nn.Parameter( 71 | torch.Tensor(args.hidden_dim * 2 + self.position_dim, args.hidden_dim * 2 + self.position_dim)) 72 | self.sentence_bias = nn.Parameter(torch.Tensor(args.hidden_dim * 2 + self.position_dim, 1)) 73 | self.sentence_weight_proj = nn.Parameter(torch.Tensor(args.hidden_dim * 2 + self.position_dim, 1)) 74 | self.sentence_attention = BasicAttention(hidden_dim=args.hidden_dim * 2, score_function="basic") 75 | self.dense = nn.Linear(args.hidden_dim * 4, args.polarities_dim) 76 | self.word_position_embed = nn.Embedding(1005, self.position_dim) 77 | self.segment_position_embed = nn.Embedding(25, self.position_dim) 78 | self.softmax = nn.Softmax() 79 | self.dropout = nn.Dropout(args.dropout) 80 | 81 | def forward(self, inputs): 82 | text_raw_indices, aspect_indices, word_position, segment_position = inputs[0], inputs[1], inputs[2], inputs[3] 83 | batch_size = text_raw_indices.size(0) 84 | max_sentence_num = text_raw_indices.size(1) 85 | document_len = torch.sum(torch.sum(text_raw_indices, dim=-1) != 0, dim=-1) 86 | text_raw_indices = text_raw_indices.view(-1, text_raw_indices.size(2)) 87 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 88 | 89 | # aspect = self.encoder_aspect(aspect_indices) 90 | # aspect_len = torch.sum(aspect_indices != 0, dim=-1) 91 | # aspect_emb = self.squeeze_embedding(aspect, aspect_len) 92 | # aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float() 93 | # aspect_emb = aspect_emb.view(aspect_emb.size(0), -1) 94 | 95 | word_position = word_position.view(-1, word_position.size(2)) 96 | word_position_emd = self.word_position_embed(word_position) 97 | x = self.encoder(text_raw_indices) 98 | x = torch.cat((x, word_position_emd), dim=-1) 99 | x = self.dropout(x) 100 | 101 | 
sen_output = torch.zeros((x.size(0), torch.max(x_len), self.args.hidden_dim * 2)).to(self.args.device) 102 | sen_h_n = torch.zeros((2, x.size(0), self.args.hidden_dim)).to(self.args.device) 103 | sen_output[x_len > 0], (sen_h_n[:, x_len > 0, :], _) = self.word_rnn(x[x_len > 0], x_len[x_len > 0]) 104 | 105 | word_position_emd_tmp = torch.zeros((word_position.size(0), torch.max(x_len), self.position_dim)).to( 106 | self.args.device) 107 | word_position_emd_tmp[x_len > 0] = self.squeeze_embedding(word_position_emd[x_len > 0], x_len[x_len > 0]) 108 | 109 | # print("sen_output", sen_output) 110 | # atten_sen_output, _ = self.word_attention(k=sen_output) 111 | sen_squish = batch_matmul_bias(torch.cat((sen_output, word_position_emd_tmp), dim=-1).transpose(0, 1), 112 | self.word_W, self.word_bias, nonlinearity='tanh') 113 | sen_attn = batch_matmul(sen_squish, self.word_weight_proj) 114 | sen_attn_norm = self.softmax(sen_attn.transpose(1, 0)) 115 | atten_sen_output = attention_mul(sen_output.transpose(0, 1), sen_attn_norm.transpose(1, 0)) 116 | # print("atten_sen_output1", atten_sen_output) 117 | last_sen_output = torch.cat((sen_h_n[0], sen_h_n[1]), dim=1) 118 | # sen_output = torch.cat((atten_sen_output, last_sen_output), dim=1) 119 | sen_output = atten_sen_output 120 | # print("atten_sen_output2", atten_sen_output) 121 | sen_output = sen_output.view(batch_size, max_sentence_num, -1) 122 | # sen_output = self.dropout(sen_output) 123 | segment_position_emb = self.segment_position_embed(segment_position) 124 | # print("sen_output", sen_output) 125 | sen_output = torch.cat((sen_output, segment_position_emb), dim=-1) 126 | doc_output, (doc_h_n, _) = self.sentence_rnn(sen_output, document_len) 127 | # atten_doc_output, _ = self.sentence_attention(doc_output) 128 | segment_position_emb_tmp = self.squeeze_embedding(segment_position_emb, document_len) 129 | doc_squish = batch_matmul_bias(torch.cat((doc_output, segment_position_emb_tmp), dim=-1).transpose(0, 1), 130 | self.sentence_W, self.sentence_bias, nonlinearity='tanh') 131 | # print("doc_output", doc_output.size()) 132 | # print("doc_squish", doc_squish.size()) 133 | doc_squish = doc_squish.view(doc_output.size(1), doc_output.size(0), -1) 134 | doc_attn = batch_matmul(doc_squish, self.sentence_weight_proj) 135 | # print(doc_attn.size()) 136 | doc_attn = doc_attn.view(doc_output.size(1), doc_output.size(0)) 137 | doc_attn_norm = self.softmax(doc_attn.transpose(1, 0)) 138 | atten_doc_output = attention_mul(doc_output.transpose(0, 1), doc_attn_norm.transpose(1, 0)) 139 | last_doc_output = torch.cat((doc_h_n[0], doc_h_n[1]), dim=1) 140 | doc_output = torch.cat((atten_doc_output, last_doc_output), dim=1) 141 | # doc_output = atten_doc_output 142 | output = doc_output 143 | output = self.dropout(output) 144 | output = self.dense(output) 145 | if self.args.softmax: 146 | output = self.softmax(output) 147 | return output 148 | -------------------------------------------------------------------------------- /models/HAN_PA.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Dynamic_RNN import DynamicRNN 4 | from layers.Attention import BasicAttention 5 | from layers.Squeeze_embedding import SqueezeEmbedding 6 | 7 | 8 | def batch_matmul_bias(seq, weight, bias, nonlinearity=''): 9 | s = None 10 | bias_dim = bias.size() 11 | for i in range(seq.size(0)): 12 | _s = torch.mm(seq[i], weight) 13 | _s_bias = _s + bias.expand(bias_dim[0], _s.size()[0]).transpose(0, 1) 14 | if 
(nonlinearity == 'tanh'): 15 | _s_bias = torch.tanh(_s_bias) 16 | _s_bias = _s_bias.unsqueeze(0) 17 | if (s is None): 18 | s = _s_bias 19 | else: 20 | s = torch.cat((s, _s_bias), 0) 21 | return s.squeeze() 22 | 23 | 24 | def batch_matmul(seq, weight, nonlinearity=''): 25 | s = None 26 | for i in range(seq.size(0)): 27 | _s = torch.mm(seq[i], weight) 28 | if (nonlinearity == 'tanh'): 29 | _s = torch.tanh(_s) 30 | _s = _s.unsqueeze(0) 31 | if (s is None): 32 | s = _s 33 | else: 34 | s = torch.cat((s, _s), 0) 35 | return s.squeeze() 36 | 37 | 38 | def attention_mul(rnn_outputs, att_weights): 39 | attn_vectors = None 40 | for i in range(rnn_outputs.size(0)): 41 | h_i = rnn_outputs[i] 42 | a_i = att_weights[i].unsqueeze(1).expand_as(h_i) 43 | h_i = a_i * h_i 44 | h_i = h_i.unsqueeze(0) 45 | if (attn_vectors is None): 46 | attn_vectors = h_i 47 | else: 48 | attn_vectors = torch.cat((attn_vectors, h_i), 0) 49 | return torch.sum(attn_vectors, 0) 50 | 51 | 52 | class HAN(nn.Module): 53 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 54 | super(HAN, self).__init__() 55 | self.position_dim = 100 56 | self.args = args 57 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 58 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 59 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 60 | self.word_rnn = DynamicRNN(args.embed_dim + self.position_dim, args.hidden_dim, num_layers=1, batch_first=True, 61 | dropout=args.dropout, bidirectional=True, rnn_type="LSTM") 62 | self.aspect_rnn = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, 63 | dropout=args.dropout, bidirectional=False, rnn_type="LSTM") 64 | self.sentence_rnn = DynamicRNN(args.hidden_dim * 2 + self.position_dim, args.hidden_dim, num_layers=1, 65 | batch_first=True, 66 | dropout=args.dropout, bidirectional=True, rnn_type="LSTM") 67 | self.word_W = nn.Parameter( 68 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 69 | args.hidden_dim * 2 + self.position_dim + self.args.embed_dim)) 70 | self.word_bias = nn.Parameter(torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 71 | self.word_weight_proj = nn.Parameter( 72 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 73 | self.word_attention = BasicAttention(hidden_dim=args.hidden_dim * 2, score_function="basic") 74 | self.sentence_W = nn.Parameter( 75 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 76 | args.hidden_dim * 2 + self.position_dim + self.args.embed_dim)) 77 | self.sentence_bias = nn.Parameter( 78 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 79 | self.sentence_weight_proj = nn.Parameter( 80 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 81 | self.sentence_attention = BasicAttention(hidden_dim=args.hidden_dim * 2, score_function="basic") 82 | self.dense = nn.Linear(args.hidden_dim * 4, args.polarities_dim) 83 | self.word_position_embed = nn.Embedding(1005, self.position_dim) 84 | self.segment_position_embed = nn.Embedding(25, self.position_dim) 85 | self.softmax = nn.Softmax() 86 | self.dropout = nn.Dropout(args.dropout) 87 | 88 | def forward(self, inputs): 89 | text_raw_indices, aspect_indices, word_position, segment_position = inputs[0], inputs[1], inputs[2], inputs[3] 90 | batch_size = text_raw_indices.size(0) 91 | max_sentence_num = 
text_raw_indices.size(1) 92 | document_len = torch.sum(torch.sum(text_raw_indices, dim=-1) != 0, dim=-1) 93 | text_raw_indices = text_raw_indices.view(-1, text_raw_indices.size(2)) 94 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 95 | 96 | aspect = self.encoder_aspect(aspect_indices) 97 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 98 | aspect_output, (aspect_h_n, _) = self.aspect_rnn(aspect, aspect_len) 99 | aspect_emb = aspect_h_n[0] 100 | # aspect_emb = self.squeeze_embedding(aspect, aspect_len) 101 | # aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float() 102 | # aspect_emb = aspect_emb.view(aspect_emb.size(0), -1) 103 | # aspect_emb = self.dropout(aspect_emb) 104 | 105 | word_position = word_position.view(-1, word_position.size(2)) 106 | word_position_emd = self.word_position_embed(word_position) 107 | x = self.encoder(text_raw_indices) 108 | x = torch.cat((x, word_position_emd), dim=-1) 109 | x = self.dropout(x) 110 | 111 | sen_output = torch.zeros((x.size(0), torch.max(x_len), self.args.hidden_dim * 2)).to(self.args.device) 112 | sen_h_n = torch.zeros((2, x.size(0), self.args.hidden_dim)).to(self.args.device) 113 | sen_output[x_len > 0], (sen_h_n[:, x_len > 0, :], _) = self.word_rnn(x[x_len > 0], x_len[x_len > 0]) 114 | 115 | word_position_emd_tmp = torch.zeros((word_position.size(0), torch.max(x_len), self.position_dim)).to( 116 | self.args.device) 117 | word_position_emd_tmp[x_len > 0] = self.squeeze_embedding(word_position_emd[x_len > 0], x_len[x_len > 0]) 118 | 119 | word_aspect_emb = torch.zeros((sen_output.size(0), torch.max(x_len), aspect_emb.size(1))).to( 120 | self.args.device) 121 | word_aspect_emb[x_len > 0] = aspect_emb.repeat(1, torch.max(x_len) * max_sentence_num).view( 122 | aspect_emb.size(0) * max_sentence_num, -1, aspect_emb.size(1))[x_len>0] 123 | # print(aspect_emb.size(), word_position_emd_tmp.size(), word_aspect_emb.size()) 124 | word_position_with_aspect = torch.cat((word_position_emd_tmp, word_aspect_emb), dim=-1) 125 | # print("sen_output", sen_output) 126 | # atten_sen_output, _ = self.word_attention(k=sen_output) 127 | sen_squish = batch_matmul_bias(torch.cat((sen_output, word_position_with_aspect), dim=-1).transpose(0, 1), 128 | self.word_W, self.word_bias, nonlinearity='tanh') 129 | sen_attn = batch_matmul(sen_squish, self.word_weight_proj) 130 | sen_attn_norm = self.softmax(sen_attn.transpose(1, 0)) 131 | atten_sen_output = attention_mul(sen_output.transpose(0, 1), sen_attn_norm.transpose(1, 0)) 132 | # print("atten_sen_output1", atten_sen_output) 133 | last_sen_output = torch.cat((sen_h_n[0], sen_h_n[1]), dim=1) 134 | # sen_output = torch.cat((atten_sen_output, last_sen_output), dim=1) 135 | sen_output = atten_sen_output 136 | # print("atten_sen_output2", atten_sen_output) 137 | sen_output = sen_output.view(batch_size, max_sentence_num, -1) 138 | # print("1", sen_output.size()) 139 | # sen_output = self.dropout(sen_output) 140 | segment_position_emb = self.segment_position_embed(segment_position) 141 | # print("2", segment_position_emb.size()) 142 | # print("sen_output", sen_output) 143 | sen_output = torch.cat((sen_output, segment_position_emb), dim=-1) 144 | # print("3", sen_output.size()) 145 | doc_output, (doc_h_n, _) = self.sentence_rnn(sen_output, document_len) 146 | # atten_doc_output, _ = self.sentence_attention(doc_output) 147 | segment_position_emb_tmp = self.squeeze_embedding(segment_position_emb, document_len) 148 | segment_aspect_emb = aspect_emb.repeat(1, 
segment_position_emb_tmp.size(1)).view(aspect_emb.size(0), -1, 149 | aspect_emb.size(1)) 150 | segment_position_with_aspect = torch.cat((segment_position_emb_tmp, segment_aspect_emb), dim=-1) 151 | doc_squish = batch_matmul_bias(torch.cat((doc_output, segment_position_with_aspect), dim=-1).transpose(0, 1), 152 | self.sentence_W, self.sentence_bias, nonlinearity='tanh') 153 | # print("doc_output", doc_output.size()) 154 | # print("doc_squish", doc_squish.size()) 155 | doc_squish = doc_squish.view(doc_output.size(1), doc_output.size(0), -1) 156 | doc_attn = batch_matmul(doc_squish, self.sentence_weight_proj) 157 | # print(doc_attn.size()) 158 | doc_attn = doc_attn.view(doc_output.size(1), doc_output.size(0)) 159 | doc_attn_norm = self.softmax(doc_attn.transpose(1, 0)) 160 | atten_doc_output = attention_mul(doc_output.transpose(0, 1), doc_attn_norm.transpose(1, 0)) 161 | last_doc_output = torch.cat((doc_h_n[0], doc_h_n[1]), dim=1) 162 | doc_output = torch.cat((atten_doc_output, last_doc_output), dim=1) 163 | # doc_output = atten_doc_output 164 | output = doc_output 165 | output = self.dropout(output) 166 | output = self.dense(output) 167 | if self.args.softmax: 168 | output = self.softmax(output) 169 | return output 170 | -------------------------------------------------------------------------------- /models/HAN_test.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Dynamic_RNN import DynamicRNN 4 | from layers.Attention import BasicAttention 5 | from layers.Squeeze_embedding import SqueezeEmbedding 6 | 7 | 8 | def batch_matmul_bias(seq, weight, bias, nonlinearity=''): 9 | s = None 10 | bias_dim = bias.size() 11 | for i in range(seq.size(0)): 12 | _s = torch.mm(seq[i], weight) 13 | _s_bias = _s + bias.expand(bias_dim[0], _s.size()[0]).transpose(0, 1) 14 | if (nonlinearity == 'tanh'): 15 | _s_bias = torch.tanh(_s_bias) 16 | _s_bias = _s_bias.unsqueeze(0) 17 | if (s is None): 18 | s = _s_bias 19 | else: 20 | s = torch.cat((s, _s_bias), 0) 21 | return s.squeeze() 22 | 23 | 24 | def batch_matmul(seq, weight, nonlinearity=''): 25 | s = None 26 | for i in range(seq.size(0)): 27 | _s = torch.mm(seq[i], weight) 28 | if (nonlinearity == 'tanh'): 29 | _s = torch.tanh(_s) 30 | _s = _s.unsqueeze(0) 31 | if (s is None): 32 | s = _s 33 | else: 34 | s = torch.cat((s, _s), 0) 35 | return s.squeeze() 36 | 37 | 38 | def attention_mul(rnn_outputs, att_weights): 39 | attn_vectors = None 40 | for i in range(rnn_outputs.size(0)): 41 | h_i = rnn_outputs[i] 42 | a_i = att_weights[i].unsqueeze(1).expand_as(h_i) 43 | h_i = a_i * h_i 44 | h_i = h_i.unsqueeze(0) 45 | if (attn_vectors is None): 46 | attn_vectors = h_i 47 | else: 48 | attn_vectors = torch.cat((attn_vectors, h_i), 0) 49 | return torch.sum(attn_vectors, 0) 50 | 51 | 52 | class HAN(nn.Module): 53 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 54 | super(HAN, self).__init__() 55 | self.position_dim = 100 56 | self.args = args 57 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 58 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 59 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 60 | self.word_rnn = DynamicRNN(args.embed_dim + self.position_dim, args.hidden_dim, num_layers=1, batch_first=True, 61 | dropout=args.dropout, bidirectional=True, rnn_type="LSTM") 62 | self.aspect_rnn = DynamicRNN(args.embed_dim, 
args.hidden_dim, num_layers=1, batch_first=True, 63 | dropout=args.dropout, bidirectional=False, rnn_type="LSTM") 64 | self.sentence_rnn = DynamicRNN(args.hidden_dim * 2 + self.position_dim, args.hidden_dim, num_layers=1, 65 | batch_first=True, 66 | dropout=args.dropout, bidirectional=True, rnn_type="LSTM") 67 | self.word_W = nn.Parameter( 68 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 69 | args.hidden_dim * 2 + self.position_dim + self.args.embed_dim)) 70 | self.word_bias = nn.Parameter(torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 71 | self.word_weight_proj = nn.Parameter( 72 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 73 | self.word_attention = BasicAttention(hidden_dim=args.hidden_dim * 2, score_function="basic") 74 | self.sentence_W = nn.Parameter( 75 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 76 | args.hidden_dim * 2 + self.position_dim + self.args.embed_dim)) 77 | self.sentence_bias = nn.Parameter( 78 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 79 | self.sentence_weight_proj = nn.Parameter( 80 | torch.Tensor(args.hidden_dim * 2 + self.position_dim + self.args.embed_dim, 1)) 81 | self.sentence_attention = BasicAttention(hidden_dim=args.hidden_dim * 2, score_function="basic") 82 | self.dense = nn.Linear(args.hidden_dim * 4, args.polarities_dim) 83 | self.word_position_embed = nn.Embedding(1005, self.position_dim) 84 | self.segment_position_embed = nn.Embedding(25, self.position_dim) 85 | self.softmax = nn.Softmax() 86 | self.dropout = nn.Dropout(args.dropout) 87 | 88 | def forward(self, inputs): 89 | text_raw_indices, aspect_indices, word_position, segment_position = inputs[0], inputs[1], inputs[2], inputs[3] 90 | batch_size = text_raw_indices.size(0) 91 | max_sentence_num = text_raw_indices.size(1) 92 | document_len = torch.sum(torch.sum(text_raw_indices, dim=-1) != 0, dim=-1) 93 | text_raw_indices = text_raw_indices.view(-1, text_raw_indices.size(2)) 94 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 95 | 96 | aspect = self.encoder_aspect(aspect_indices) 97 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 98 | aspect_output, (aspect_h_n, _) = self.aspect_rnn(aspect, aspect_len) 99 | aspect_emb = aspect_h_n[0] 100 | # aspect_emb = self.squeeze_embedding(aspect, aspect_len) 101 | # aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float() 102 | # aspect_emb = aspect_emb.view(aspect_emb.size(0), -1) 103 | # aspect_emb = self.dropout(aspect_emb) 104 | 105 | word_position = word_position.view(-1, word_position.size(2)) 106 | word_position_emd = self.word_position_embed(word_position) 107 | x = self.encoder(text_raw_indices) 108 | x = torch.cat((x, word_position_emd), dim=-1) 109 | x = self.dropout(x) 110 | 111 | sen_output = torch.zeros((x.size(0), torch.max(x_len), self.args.hidden_dim * 2)).to(self.args.device) 112 | sen_h_n = torch.zeros((2, x.size(0), self.args.hidden_dim)).to(self.args.device) 113 | sen_output[x_len > 0], (sen_h_n[:, x_len > 0, :], _) = self.word_rnn(x[x_len > 0], x_len[x_len > 0]) 114 | 115 | word_position_emd_tmp = torch.zeros((word_position.size(0), torch.max(x_len), self.position_dim)).to( 116 | self.args.device) 117 | word_position_emd_tmp[x_len > 0] = self.squeeze_embedding(word_position_emd[x_len > 0], x_len[x_len > 0]) 118 | 119 | word_aspect_emb = torch.zeros((sen_output.size(0), torch.max(x_len), aspect_emb.size(1))).to( 120 | self.args.device) 121 | 
word_aspect_emb[x_len > 0] = aspect_emb.repeat(1, torch.max(x_len) * max_sentence_num).view( 122 | aspect_emb.size(0) * max_sentence_num, -1, aspect_emb.size(1))[x_len>0] 123 | # print(aspect_emb.size(), word_position_emd_tmp.size(), word_aspect_emb.size()) 124 | word_position_with_aspect = torch.cat((word_position_emd_tmp, word_aspect_emb), dim=-1) 125 | # print("sen_output", sen_output) 126 | # atten_sen_output, _ = self.word_attention(k=sen_output) 127 | sen_squish = batch_matmul_bias(torch.cat((sen_output, word_position_with_aspect), dim=-1).transpose(0, 1), 128 | self.word_W, self.word_bias, nonlinearity='tanh') 129 | sen_attn = batch_matmul(sen_squish, self.word_weight_proj) 130 | sen_attn_norm = self.softmax(sen_attn.transpose(1, 0)) 131 | atten_sen_output = attention_mul(sen_output.transpose(0, 1), sen_attn_norm.transpose(1, 0)) 132 | # print("atten_sen_output1", atten_sen_output) 133 | last_sen_output = torch.cat((sen_h_n[0], sen_h_n[1]), dim=1) 134 | # sen_output = torch.cat((atten_sen_output, last_sen_output), dim=1) 135 | sen_output = atten_sen_output 136 | # print("atten_sen_output2", atten_sen_output) 137 | sen_output = sen_output.view(batch_size, max_sentence_num, -1) 138 | # print("1", sen_output.size()) 139 | # sen_output = self.dropout(sen_output) 140 | segment_position_emb = self.segment_position_embed(segment_position) 141 | # print("2", segment_position_emb.size()) 142 | # print("sen_output", sen_output) 143 | sen_output = torch.cat((sen_output, segment_position_emb), dim=-1) 144 | # print("3", sen_output.size()) 145 | doc_output, (doc_h_n, _) = self.sentence_rnn(sen_output, document_len) 146 | # atten_doc_output, _ = self.sentence_attention(doc_output) 147 | segment_position_emb_tmp = self.squeeze_embedding(segment_position_emb, document_len) 148 | segment_aspect_emb = aspect_emb.repeat(1, segment_position_emb_tmp.size(1)).view(aspect_emb.size(0), -1, 149 | aspect_emb.size(1)) 150 | segment_position_with_aspect = torch.cat((segment_position_emb_tmp, segment_aspect_emb), dim=-1) 151 | doc_squish = batch_matmul_bias(torch.cat((doc_output, segment_position_with_aspect), dim=-1).transpose(0, 1), 152 | self.sentence_W, self.sentence_bias, nonlinearity='tanh') 153 | # print("doc_output", doc_output.size()) 154 | # print("doc_squish", doc_squish.size()) 155 | doc_squish = doc_squish.view(doc_output.size(1), doc_output.size(0), -1) 156 | doc_attn = batch_matmul(doc_squish, self.sentence_weight_proj) 157 | # print(doc_attn.size()) 158 | doc_attn = doc_attn.view(doc_output.size(1), doc_output.size(0)) 159 | doc_attn_norm = self.softmax(doc_attn.transpose(1, 0)) 160 | atten_doc_output = attention_mul(doc_output.transpose(0, 1), doc_attn_norm.transpose(1, 0)) 161 | last_doc_output = torch.cat((doc_h_n[0], doc_h_n[1]), dim=1) 162 | doc_output = torch.cat((atten_doc_output, last_doc_output), dim=1) 163 | # doc_output = atten_doc_output 164 | output = doc_output 165 | output = self.dropout(output) 166 | output = self.dense(output) 167 | if self.args.softmax: 168 | output = self.softmax(output) 169 | return output 170 | -------------------------------------------------------------------------------- /models/IAN.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Dynamic_RNN import DynamicRNN 4 | from layers.Attention import Attention 5 | 6 | 7 | class IAN(nn.Module): 8 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 9 | super(IAN, self).__init__() 10 | 
self.args = args 11 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 12 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 13 | self.lstm_context = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, 14 | dropout=args.dropout, rnn_type="LSTM") 15 | self.lstm_aspect = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, 16 | dropout=args.dropout, rnn_type="LSTM") 17 | self.attention_aspect = Attention(args.hidden_dim, score_function='bi_linear') 18 | self.attention_context = Attention(args.hidden_dim, score_function='bi_linear') 19 | self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim) 20 | self.dropout = nn.Dropout(args.dropout) 21 | self.softmax = nn.Softmax() 22 | 23 | def forward(self, inputs): 24 | text_raw_indices, aspect_indices = inputs[0], inputs[1] 25 | text_raw_len = torch.sum(text_raw_indices != 0, dim=-1) 26 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 27 | 28 | context = self.encoder(text_raw_indices) 29 | context = self.dropout(context) 30 | aspect = self.encoder(aspect_indices) 31 | aspect = self.dropout(aspect) 32 | context, (_, _) = self.lstm_context(context, text_raw_len) 33 | aspect, (_, _) = self.lstm_aspect(aspect, aspect_len) 34 | 35 | aspect_len = torch.tensor(aspect_len, dtype=torch.float).to(self.args.device) 36 | aspect = torch.sum(aspect, dim=1) 37 | aspect = torch.div(aspect, aspect_len.view(aspect_len.size(0), 1)) 38 | 39 | text_raw_len = torch.tensor(text_raw_len, dtype=torch.float).to(self.args.device) 40 | context = torch.sum(context, dim=1) 41 | context = torch.div(context, text_raw_len.view(text_raw_len.size(0), 1)) 42 | 43 | aspect_final = self.attention_aspect(aspect, context).squeeze(dim=1) 44 | context_final = self.attention_context(context, aspect).squeeze(dim=1) 45 | 46 | x = torch.cat((aspect_final, context_final), dim=-1) 47 | x = self.dropout(x) 48 | out = self.dense(x) 49 | if self.args.softmax: 50 | out = self.softmax(out) 51 | return out 52 | -------------------------------------------------------------------------------- /models/LCRS.py: -------------------------------------------------------------------------------- 1 | from layers.Dynamic_RNN import DynamicRNN 2 | from layers.Attention import Attention 3 | import torch 4 | import torch.nn as nn 5 | import torch.nn.functional as F 6 | 7 | 8 | class LCRS(nn.Module): 9 | def __init__(self, args, embedding_matrix, aspect_embedding_matrix=None, memory_weighter='no'): 10 | super(LCRS, self).__init__() 11 | self.args = args 12 | self.memory_weighter = memory_weighter 13 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 14 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 15 | self.blstm_l = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 16 | rnn_type='LSTM') 17 | self.blstm_c = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 18 | rnn_type='LSTM') 19 | self.blstm_r = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 20 | rnn_type='LSTM') 21 | self.dense = nn.Linear(args.hidden_dim * 4, args.polarities_dim) 22 | # target to context attention 23 | self.t2c_l_attention = Attention(args.hidden_dim, score_function='bi_linear') 24 | self.t2c_r_attention = Attention(args.hidden_dim, 
score_function='bi_linear') 25 | # context to target attention 26 | self.c2t_l_attention = Attention(args.hidden_dim, score_function='bi_linear') 27 | self.c2t_r_attention = Attention(args.hidden_dim, score_function='bi_linear') 28 | self.softmax = nn.Softmax() 29 | self.dropout = nn.Dropout(args.dropout) 30 | 31 | def locationed_memory(self, memory_l, memory_r, x_l_len, x_c_len, x_r_len, dep_offset=None): 32 | position = 'linear' 33 | memory_len = x_l_len + x_c_len + x_r_len 34 | # loop over samples in the batch 35 | for i in range(memory_l.size(0)): 36 | # Loop over memory slices 37 | for idx in range(memory_len[i]): 38 | aspect_start = x_l_len[i] + 1 # INCORRECT: ASSUME x_l_len = 0 THEN aspect_start = 0 39 | aspect_end = x_l_len[i] + x_c_len[i] 40 | if idx < aspect_start: # left locationed memory 41 | if position == 'linear': 42 | l = aspect_start.item() - idx 43 | memory_l[i][idx] *= (1 - float(l) / int(memory_len[i])) 44 | elif position == 'dependency': 45 | memory_l[i][idx] *= (1 - float(dep_offset[i][idx]) / int(memory_len[i])) 46 | elif idx > aspect_end: # right locationed memory 47 | if position == 'linear': 48 | l = idx - aspect_end.item() 49 | memory_r[i][idx] *= (1 - float(l) / int(memory_len[i])) 50 | elif position == 'dependency': 51 | memory_r[i][idx] *= (1 - float(dep_offset[i][idx]) / int(memory_len[i])) 52 | 53 | return memory_l, memory_r 54 | 55 | def forward(self, inputs): 56 | # raw indices for left, center, and right parts 57 | x_l, x_c, x_r = inputs[0], inputs[1], inputs[2]# , dep_offset, inputs[3] 58 | x_l_len = torch.sum(x_l != 0, dim=-1) 59 | x_c_len = torch.sum(x_c != 0, dim=-1) 60 | x_r_len = torch.sum(x_r != 0, dim=-1) 61 | 62 | # embedding layer 63 | x_l, x_c, x_r = self.encoder(x_l), self.encoder(x_c), self.encoder(x_r) 64 | 65 | # Memory module: 66 | # ---------------------------- 67 | # left memory 68 | if torch.max(x_l_len) == 0: 69 | memory_l = torch.zeros((x_l.size(0), 1, x_l.size(2))).to(self.args.device) 70 | else: 71 | memory_l = torch.zeros((x_l.size(0), torch.max(x_l_len), x_l.size(2))).to(self.args.device) #x_l[:, :torch.max(x_l_len), :] 72 | memory_l[x_l_len > 0], (_, _) = self.blstm_l(x_l[x_l_len>0], x_l_len[x_l_len>0]) 73 | # print(x_l[x_l_len>0].size(),x_l_len[x_l_len>0].size(),memory_l_tmp.size(), memory_l[x_l_len>0].size()) 74 | # center memory 75 | # memory_c = x_c[:, :torch.max(x_c_len), :] 76 | memory_c, (_, _) = self.blstm_c(x_c[x_c_len>0], x_c_len[x_c_len>0]) 77 | # print(memory_c.size(),memory_c_tmp.size()) 78 | # right memory 79 | # if x_r_len == 0: memory_r = x_r 80 | # if x_r_len > 0: 81 | # print((x_r.size(0), torch.max(x_r_len), x_r.size(2))) 82 | if torch.max(x_r_len) == 0: 83 | memory_r = torch.zeros((x_r.size(0), 1, x_r.size(2))).to(self.args.device) 84 | else: 85 | memory_r = torch.zeros((x_r.size(0), torch.max(x_r_len), x_r.size(2))).to(self.args.device) #x_r[:, :torch.max(x_r_len), :] 86 | memory_r[x_r_len>0], (_, _) = self.blstm_r(x_r[x_r_len>0], x_r_len[x_r_len>0]) 87 | 88 | # Target-Aware memory 89 | 90 | # locationed-memory 91 | if self.memory_weighter == 'position': 92 | memory_l, memory_r = self.locationed_memory(memory_l, memory_r, x_l_len, 93 | x_c_len, x_r_len) #, dep_offset 94 | # context-attended-memory 95 | if self.memory_weighter == 'cam': 96 | pass 97 | # ---------------------------- 98 | 99 | # Aspect vector representation 100 | x_c_len = torch.tensor(x_c_len, dtype=torch.float).to(self.args.device) 101 | v_c = torch.sum(memory_c, dim=1) 102 | v_c = torch.div(v_c, x_c_len.view(x_c_len.size(0), 1)) 103 | 104 | # 
Rotatory attention: 105 | # ---------------------------- 106 | # [1] Target2Context Attention 107 | v_l = self.t2c_l_attention(memory_l, v_c).squeeze(dim=1) # left vector representation 108 | v_r = self.t2c_r_attention(memory_r, v_c).squeeze(dim=1) # Right vector representation 109 | 110 | # [2] Context2Target Attention 111 | v_c_l = self.c2t_l_attention(memory_c, v_l).squeeze(dim=1) # Left-aware target 112 | v_c_r = self.c2t_r_attention(memory_c, v_r).squeeze(dim=1) # Right-aware target 113 | # ---------------------------- 114 | 115 | # sentence representation 116 | v_s = torch.cat((v_l, v_c_l, v_c_r, v_r), dim=-1) # dim : (1, 800) 117 | # v_s = torch.cat((v_l, v_c_l, v_c_r, v_r), dim = 0) # dim : (4, 300) 118 | 119 | # Classifier 120 | v_s = self.dropout(v_s) 121 | out = self.dense(v_s) 122 | if self.args.softmax: 123 | out = self.softmax(out) 124 | return out 125 | -------------------------------------------------------------------------------- /models/LSTM.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Dynamic_RNN import DynamicRNN 4 | 5 | 6 | class LSTM(nn.Module): 7 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 8 | super(LSTM, self).__init__() 9 | self.args = args 10 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 11 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 12 | self.lstm = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 13 | rnn_type="LSTM") 14 | self.dense = nn.Linear(args.hidden_dim, args.polarities_dim) 15 | self.softmax = nn.Softmax() 16 | self.dropout = nn.Dropout(args.dropout) 17 | 18 | def forward(self, inputs): 19 | text_raw_indices = inputs[0] 20 | x = self.dropout(self.encoder(text_raw_indices)) 21 | x_len = torch.sum(text_raw_indices != 0, dim=-1) 22 | _, (h_n, _) = self.lstm(x, x_len) 23 | output = h_n[0] 24 | output = self.dropout(output) 25 | output = self.dense(output) 26 | if self.args.softmax: 27 | output = self.softmax(output) 28 | return output 29 | -------------------------------------------------------------------------------- /models/MemNet.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn as nn 3 | from layers.Squeeze_embedding import SqueezeEmbedding 4 | from layers.Attention import Attention 5 | 6 | 7 | class MemNet(nn.Module): 8 | def locationed_memory(self, memory, memory_len, left_len, aspect_len): 9 | # here we just simply calculate the location vector in Model2's manner 10 | ''' 11 | Updated to calculate location as the absolute diference between context word and aspect 12 | ''' 13 | for i in range(memory.size(0)): 14 | for idx in range(memory_len[i]): 15 | aspect_start = left_len[i] - aspect_len[i] 16 | if idx < aspect_start: 17 | l = aspect_start.item() - idx # l: absolute distance to the aspect 18 | else: 19 | l = idx + 1 - aspect_start.item() 20 | memory[i][idx] *= (1 - float(l) / int(memory_len[i])) 21 | return memory 22 | 23 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 24 | super(MemNet, self).__init__() 25 | self.args = args 26 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 27 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 28 | 
self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 29 | self.attention = Attention(args.embed_dim, score_function='mlp') 30 | self.x_linear = nn.Linear(args.embed_dim, args.embed_dim) 31 | self.dense = nn.Linear(args.embed_dim, args.polarities_dim) 32 | self.dropout = nn.Dropout(args.dropout) 33 | self.softmax = nn.Softmax() 34 | 35 | def forward(self, inputs): 36 | text_raw_without_aspect_indices, aspect_indices, left_with_aspect_indices = inputs[0], inputs[1], inputs[2] 37 | left_len = torch.sum(left_with_aspect_indices != 0, dim=-1) 38 | memory_len = torch.sum(text_raw_without_aspect_indices != 0, dim=-1) 39 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 40 | nonzeros_aspect = torch.tensor(aspect_len, dtype=torch.float).to(self.args.device) 41 | 42 | memory = self.encoder(text_raw_without_aspect_indices) 43 | memory = self.squeeze_embedding(memory, memory_len) 44 | # memory = self.locationed_memory(memory, memory_len, left_len, aspect_len) 45 | aspect = self.encoder(aspect_indices) 46 | aspect = self.squeeze_embedding(aspect, aspect_len) 47 | aspect = torch.sum(aspect, dim=1) 48 | aspect = torch.div(aspect, nonzeros_aspect.view(nonzeros_aspect.size(0), 1)) 49 | x = aspect.unsqueeze(dim=1) 50 | for _ in range(self.args.hops): 51 | x = self.x_linear(x) 52 | out_at = self.attention(memory, x) 53 | x = out_at + x 54 | x = x.view(x.size(0), -1) 55 | # x = self.dropout(x) 56 | out = self.dense(x) 57 | if self.args.softmax: 58 | out = self.softmax(out) 59 | return out 60 | -------------------------------------------------------------------------------- /models/PRET_MULT.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/models/PRET_MULT.py -------------------------------------------------------------------------------- /models/RAM.py: -------------------------------------------------------------------------------- 1 | from layers.Dynamic_RNN import DynamicRNN 2 | from layers.Attention import Attention 3 | import torch 4 | import torch.nn as nn 5 | 6 | 7 | class RAM(nn.Module): 8 | def locationed_memory(self, memory, memory_len): 9 | # here we just simply calculate the location vector in Model2's manner 10 | for i in range(memory.size(0)): 11 | for idx in range(memory_len[i]): 12 | memory[i][idx] *= (1 - float(idx) / int(memory_len[i])) 13 | return memory 14 | 15 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 16 | super(RAM, self).__init__() 17 | self.args = args 18 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 19 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 20 | self.bi_lstm_context = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, 21 | bidirectional=True, dropout=args.dropout, rnn_type="LSTM") 22 | self.bi_lstm_aspect = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, 23 | bidirectional=True, dropout=args.dropout, rnn_type="LSTM") 24 | self.attention = Attention(args.hidden_dim * 2, score_function='mlp') 25 | self.gru_cell = nn.GRUCell(args.hidden_dim * 2, args.hidden_dim * 2) 26 | self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim) 27 | self.softmax = nn.Softmax() 28 | self.dropout = nn.Dropout(args.dropout) 29 | 30 | def forward(self, inputs): 31 | text_raw_indices, aspect_indices 
= inputs[0], inputs[1] 32 | memory_len = torch.sum(text_raw_indices != 0, dim=-1) 33 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 34 | nonzeros_aspect = torch.tensor(aspect_len, dtype=torch.float).to(self.args.device) 35 | 36 | memory = self.encoder(text_raw_indices) 37 | memory = self.dropout(memory) 38 | memory, (_, _) = self.bi_lstm_context(memory, memory_len) 39 | # memory = self.locationed_memory(memory, memory_len) 40 | aspect = self.encoder(aspect_indices) 41 | aspect = self.dropout(aspect) 42 | aspect, (_, _) = self.bi_lstm_aspect(aspect, aspect_len) 43 | aspect = torch.sum(aspect, dim=1) 44 | aspect = torch.div(aspect, nonzeros_aspect.view(nonzeros_aspect.size(0), 1)) 45 | 46 | et = aspect 47 | for _ in range(self.args.hops): 48 | it_al = self.attention(memory, et).squeeze(dim=1) 49 | et = self.gru_cell(it_al, et) 50 | et = self.dropout(et) 51 | out = self.dense(et) 52 | if self.args.softmax: 53 | out = self.softmax(out) 54 | return out 55 | -------------------------------------------------------------------------------- /models/TC_LSTM.py: -------------------------------------------------------------------------------- 1 | from layers.Dynamic_RNN import DynamicRNN 2 | from layers.Squeeze_embedding import SqueezeEmbedding 3 | import torch 4 | import torch.nn as nn 5 | 6 | 7 | class TC_LSTM(nn.Module): 8 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 9 | super(TC_LSTM, self).__init__() 10 | self.args = args 11 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 12 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 13 | self.squeeze_embedding = SqueezeEmbedding(batch_first=True) 14 | self.lstm_l = DynamicRNN(args.embed_dim*2, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 15 | rnn_type="LSTM") 16 | self.lstm_r = DynamicRNN(args.embed_dim*2, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 17 | rnn_type="LSTM") 18 | self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim) 19 | self.dropout = nn.Dropout(args.dropout) 20 | self.softmax = nn.Softmax() 21 | 22 | def forward(self, inputs): 23 | x_l, x_r, aspect_indices = inputs[0], inputs[1], inputs[2] 24 | x_l_len, x_r_len = torch.sum(x_l != 0, dim=-1), torch.sum(x_r != 0, dim=-1) 25 | x_l, x_r = self.encoder(x_l), self.encoder(x_r) 26 | 27 | aspect = self.encoder_aspect(aspect_indices) 28 | aspect_len = torch.sum(aspect_indices != 0, dim=-1) 29 | aspect_emb = self.squeeze_embedding(aspect, aspect_len) 30 | aspect_emb = torch.sum(aspect_emb, dim=1) / aspect_len.view(-1, 1).float() 31 | aspect_emb = aspect_emb.view(aspect_emb.size(0), -1) 32 | aspect_emb_l = aspect_emb.repeat(1, x_l.size(1)).view(aspect_emb.size(0), -1, aspect_emb.size(1)) 33 | aspect_emb_r = aspect_emb.repeat(1, x_r.size(1)).view(aspect_emb.size(0), -1, aspect_emb.size(1)) 34 | x_l, x_r = self.dropout(x_l), self.dropout(x_r) 35 | x_l = torch.cat((x_l, aspect_emb_l), -1) 36 | x_r = torch.cat((x_r, aspect_emb_r), -1) 37 | _, (h_n_l, _) = self.lstm_l(x_l, x_l_len) 38 | _, (h_n_r, _) = self.lstm_r(x_r, x_r_len) 39 | h_n = torch.cat((h_n_l[0], h_n_r[0]), dim=-1) 40 | output = self.dropout(h_n) 41 | output = self.dense(output) 42 | if self.args.softmax: 43 | output = self.softmax(output) 44 | return output 45 | -------------------------------------------------------------------------------- /models/TD_LSTM.py: 
-------------------------------------------------------------------------------- 1 | from layers.Dynamic_RNN import DynamicRNN 2 | import torch 3 | import torch.nn as nn 4 | 5 | 6 | class TD_LSTM(nn.Module): 7 | def __init__(self, args, embedding_matrix=None, aspect_embedding_matrix=None): 8 | super(TD_LSTM, self).__init__() 9 | self.args = args 10 | self.encoder = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) 11 | self.encoder_aspect = nn.Embedding.from_pretrained(torch.tensor(aspect_embedding_matrix, dtype=torch.float)) 12 | self.lstm_l = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 13 | rnn_type="LSTM") 14 | self.lstm_r = DynamicRNN(args.embed_dim, args.hidden_dim, num_layers=1, batch_first=True, dropout=args.dropout, 15 | rnn_type="LSTM") 16 | self.dense = nn.Linear(args.hidden_dim * 2, args.polarities_dim) 17 | self.dropout = nn.Dropout(args.dropout) 18 | self.softmax = nn.Softmax() 19 | 20 | def forward(self, inputs): 21 | x_l, x_r = inputs[0], inputs[1] 22 | x_l_len, x_r_len = torch.sum(x_l != 0, dim=-1), torch.sum(x_r != 0, dim=-1) 23 | x_l, x_r = self.encoder(x_l), self.encoder(x_r) 24 | x_l, x_r = self.dropout(x_l), self.dropout(x_r) 25 | _, (h_n_l, _) = self.lstm_l(x_l, x_l_len) 26 | _, (h_n_r, _) = self.lstm_r(x_r, x_r_len) 27 | h_n = torch.cat((h_n_l[0], h_n_r[0]), dim=-1) 28 | output = self.dropout(h_n) 29 | output = self.dense(output) 30 | if self.args.softmax: 31 | output = self.softmax(output) 32 | return output 33 | -------------------------------------------------------------------------------- /models/TNet.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/models/TNet.py -------------------------------------------------------------------------------- /models/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/models/__init__.py -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.0005_-1_0.3_False_16_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.0005 2 | batch_size: 16 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.6677115987460815 7 | recall: [0.6484375 0.27218935 0.87096774] 8 | precision: [0.46111111 0.67647059 0.76153846] 9 | f1: [0.53896104 0.38818565 0.8125855 ] 10 | macro_recall: 0.59719819701597 11 | macro_precision: 0.6330400536282889 12 | macro_f1: 0.5799107307618278 13 | micro_recall: 0.6677115987460815 14 | micro_precision: 0.6677115987460815 15 | micro_f1: 0.6677115987460815 16 | weighted_recall: 0.6677115987460815 17 | weighted_precision: 0.6787309827877779 18 | weighted_f1: 0.6452696610990549 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 2,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 
0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 0,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 0,2.0 82 | 62: 0,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 2,2.0 104 | 84: 2,2.0 105 | 85: 2,0.0 106 | 86: 0,1.0 107 | 87: 0,1.0 108 | 88: 0,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 0,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 2,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 0,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 0,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 1,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 2,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 0,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 
2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 1,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 2,1.0 317 | 297: 2,1.0 318 | 298: 1,1.0 319 | 299: 1,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 0,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 0,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 0,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 2,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 2,2.0 394 | 374: 2,2.0 395 | 375: 2,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 0,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 0,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 0,0.0 422 | 402: 2,2.0 423 | 403: 1,1.0 424 | 404: 2,2.0 425 | 405: 0,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 1,1.0 464 | 444: 1,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 
0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 0,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 1,0.0 487 | 467: 1,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 0,2.0 504 | 484: 0,1.0 505 | 485: 0,2.0 506 | 486: 0,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 0,1.0 522 | 502: 0,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 2,2.0 528 | 508: 2,2.0 529 | 509: 1,2.0 530 | 510: 0,1.0 531 | 511: 0,2.0 532 | 512: 0,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 1,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 0,1.0 557 | 537: 0,0.0 558 | 538: 0,2.0 559 | 539: 0,1.0 560 | 540: 2,1.0 561 | 541: 2,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 2,2.0 570 | 550: 2,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 0,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 1,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 0,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 2,1.0 642 | 622: 2,1.0 643 | 623: 2,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 0,2.0 657 | 637: 0,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.0005_-1_0.3_False_32_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.0005 2 
| batch_size: 32 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.6661442006269592 7 | recall: [0.6484375 0.27218935 0.86803519] 8 | precision: [0.45604396 0.67647059 0.7628866 ] 9 | f1: [0.53548387 0.38818565 0.81207133] 10 | macro_recall: 0.5962206799094206 11 | macro_precision: 0.6318003807391315 12 | macro_f1: 0.5785802851886767 13 | micro_recall: 0.6661442006269592 14 | micro_precision: 0.6661442006269592 15 | micro_f1: 0.6661442006269592 16 | weighted_recall: 0.6661442006269592 17 | weighted_precision: 0.6784349305365177 18 | weighted_f1: 0.6442972331386139 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 2,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 0,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 0,2.0 82 | 62: 0,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 2,2.0 104 | 84: 2,2.0 105 | 85: 2,0.0 106 | 86: 0,1.0 107 | 87: 0,1.0 108 | 88: 0,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 0,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 2,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 0,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 0,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 
197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 1,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 2,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 0,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 1,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 2,1.0 317 | 297: 2,1.0 318 | 298: 1,1.0 319 | 299: 1,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 0,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 0,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 0,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 0,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 2,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 0,2.0 394 | 374: 2,2.0 395 | 375: 2,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 0,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 
406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 0,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 0,0.0 422 | 402: 2,2.0 423 | 403: 1,1.0 424 | 404: 2,2.0 425 | 405: 0,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 1,1.0 464 | 444: 1,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 0,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 1,0.0 487 | 467: 1,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 0,2.0 504 | 484: 0,1.0 505 | 485: 0,2.0 506 | 486: 0,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 0,1.0 522 | 502: 0,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 2,2.0 528 | 508: 2,2.0 529 | 509: 1,2.0 530 | 510: 0,1.0 531 | 511: 0,2.0 532 | 512: 0,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 1,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 0,1.0 557 | 537: 0,0.0 558 | 538: 0,2.0 559 | 539: 0,1.0 560 | 540: 2,1.0 561 | 541: 2,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 2,2.0 570 | 550: 2,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 0,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 1,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 0,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 
615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 2,1.0 642 | 622: 2,1.0 643 | 623: 2,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 0,2.0 657 | 637: 0,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.0005_-1_0.5_False_16_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.0005 2 | batch_size: 16 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.6692789968652038 7 | recall: [0.6171875 0.29585799 0.87390029] 8 | precision: [0.46470588 0.68493151 0.75443038] 9 | f1: [0.53020134 0.41322314 0.80978261] 10 | macro_recall: 0.5956485938069375 11 | macro_precision: 0.6346892563163639 12 | macro_f1: 0.5844023638244663 13 | micro_recall: 0.6692789968652038 14 | micro_precision: 0.6692789968652038 15 | micro_f1: 0.6692789968652038 16 | weighted_recall: 0.6692789968652038 17 | weighted_precision: 0.6778942587654884 18 | weighted_f1: 0.6486463199390274 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 2,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 0,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 1,2.0 82 | 62: 1,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 2,2.0 104 | 84: 2,2.0 105 | 85: 2,0.0 106 | 86: 0,1.0 107 | 87: 0,1.0 108 | 88: 0,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 0,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 
114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 2,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 2,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 0,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 1,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 2,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 2,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 1,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 2,1.0 317 | 297: 2,1.0 318 | 298: 1,1.0 319 | 299: 1,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 0,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 
323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 1,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 0,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 2,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 2,2.0 394 | 374: 2,2.0 395 | 375: 2,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 0,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 0,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 2,0.0 422 | 402: 2,2.0 423 | 403: 1,1.0 424 | 404: 2,2.0 425 | 405: 0,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 1,1.0 464 | 444: 1,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 0,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 1,0.0 487 | 467: 1,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 0,2.0 504 | 484: 0,1.0 505 | 485: 0,2.0 506 | 486: 0,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 0,1.0 522 | 502: 0,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 2,2.0 528 | 508: 2,2.0 529 | 509: 1,2.0 530 | 510: 0,1.0 531 | 511: 0,2.0 532 | 512: 0,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 1,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 
532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 0,1.0 557 | 537: 0,0.0 558 | 538: 0,2.0 559 | 539: 0,1.0 560 | 540: 2,1.0 561 | 541: 2,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 2,2.0 570 | 550: 2,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 2,2.0 580 | 560: 2,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 0,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 1,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 0,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 1,1.0 614 | 594: 1,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 2,1.0 642 | 622: 2,1.0 643 | 623: 2,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 0,2.0 657 | 637: 0,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.0005_-1_0.5_False_32_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.0005 2 | batch_size: 32 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.664576802507837 7 | recall: [0.6171875 0.28994083 0.86803519] 8 | precision: [0.47878788 0.62820513 0.74936709] 9 | f1: [0.53924915 0.39676113 0.80434783] 10 | macro_recall: 0.5917211730060675 11 | macro_precision: 0.618786698533534 12 | macro_f1: 0.5801193688159582 13 | micro_recall: 0.664576802507837 14 | micro_precision: 0.664576802507837 15 | micro_f1: 0.664576802507837 16 | weighted_recall: 0.664576802507837 17 | weighted_precision: 0.6629869786311992 18 | weighted_f1: 0.6431961301874333 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 0,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 2,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 
67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 1,2.0 82 | 62: 1,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 0,2.0 104 | 84: 0,2.0 105 | 85: 2,0.0 106 | 86: 0,1.0 107 | 87: 0,1.0 108 | 88: 0,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 2,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 1,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 2,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 1,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 1,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 2,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 0,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 1,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 
281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 0,1.0 317 | 297: 0,1.0 318 | 298: 2,1.0 319 | 299: 2,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 1,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 2,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 0,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 2,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 0,2.0 394 | 374: 2,2.0 395 | 375: 2,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 2,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 2,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 0,0.0 422 | 402: 2,2.0 423 | 403: 0,1.0 424 | 404: 2,2.0 425 | 405: 1,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 2,1.0 464 | 444: 2,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 1,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 1,0.0 487 | 467: 1,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 
490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 0,2.0 504 | 484: 0,1.0 505 | 485: 2,2.0 506 | 486: 2,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 1,1.0 522 | 502: 1,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 2,2.0 528 | 508: 2,2.0 529 | 509: 1,2.0 530 | 510: 2,1.0 531 | 511: 2,2.0 532 | 512: 2,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 1,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 1,1.0 557 | 537: 1,0.0 558 | 538: 1,2.0 559 | 539: 1,1.0 560 | 540: 0,1.0 561 | 541: 0,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 0,2.0 570 | 550: 0,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 2,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 0,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 2,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 1,1.0 642 | 622: 1,1.0 643 | 623: 1,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 2,2.0 657 | 637: 2,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.001_-1_0.3_False_16_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.001 2 | batch_size: 16 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.6630094043887147 7 | recall: [0.6484375 0.24852071 0.87390029] 8 | precision: [0.46111111 0.66666667 0.75443038] 9 | f1: [0.53896104 0.36206897 0.80978261] 10 | macro_recall: 0.5902861677714345 11 | macro_precision: 
0.6274027191748711 12 | macro_f1: 0.5702708710579775 13 | micro_recall: 0.6630094043887147 14 | micro_precision: 0.6630094043887147 15 | micro_f1: 0.6630094043887147 16 | weighted_recall: 0.6630094043887147 17 | weighted_precision: 0.6723348720729777 18 | weighted_f1: 0.6368535074053984 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 2,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 0,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 0,2.0 82 | 62: 0,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 2,2.0 104 | 84: 2,2.0 105 | 85: 2,0.0 106 | 86: 0,1.0 107 | 87: 0,1.0 108 | 88: 0,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 0,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 2,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 0,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 0,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 2,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 
198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 2,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 0,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 2,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 2,1.0 317 | 297: 2,1.0 318 | 298: 1,1.0 319 | 299: 1,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 0,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 0,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 0,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 2,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 2,2.0 394 | 374: 2,2.0 395 | 375: 2,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 0,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 0,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 0,0.0 422 | 402: 2,2.0 423 | 403: 1,1.0 424 | 404: 2,2.0 425 | 405: 0,1.0 426 | 406: 2,2.0 427 | 
407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 2,1.0 464 | 444: 2,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 0,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 1,0.0 487 | 467: 1,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 0,2.0 504 | 484: 0,1.0 505 | 485: 0,2.0 506 | 486: 0,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 0,1.0 522 | 502: 0,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 2,2.0 528 | 508: 2,2.0 529 | 509: 1,2.0 530 | 510: 0,1.0 531 | 511: 0,2.0 532 | 512: 0,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 2,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 0,1.0 557 | 537: 0,0.0 558 | 538: 0,2.0 559 | 539: 0,1.0 560 | 540: 2,1.0 561 | 541: 2,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 2,2.0 570 | 550: 2,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 0,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 1,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 0,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 
616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 2,1.0 642 | 622: 2,1.0 643 | 623: 2,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 0,2.0 657 | 637: 0,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.001_-1_0.3_False_32_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.001 2 | batch_size: 32 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.6692789968652038 7 | recall: [0.65625 0.27218935 0.87096774] 8 | precision: [0.47191011 0.67647059 0.75765306] 9 | f1: [0.54901961 0.38818565 0.81036835] 10 | macro_recall: 0.5998023636826367 11 | macro_precision: 0.6353445872731115 12 | macro_f1: 0.582524537033745 13 | micro_recall: 0.6692789968652038 14 | micro_precision: 0.6692789968652038 15 | micro_f1: 0.6692789968652038 16 | weighted_recall: 0.6692789968652038 17 | weighted_precision: 0.6788208740930065 18 | weighted_f1: 0.6461026527045164 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 2,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 0,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 0,2.0 82 | 62: 0,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 2,2.0 104 | 84: 2,2.0 105 | 85: 2,0.0 106 | 86: 0,1.0 107 | 87: 0,1.0 108 | 88: 0,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 0,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 2,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 0,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 
156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 0,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 1,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 2,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 0,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 1,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 2,1.0 317 | 297: 2,1.0 318 | 298: 1,1.0 319 | 299: 1,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 0,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 1,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 
365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 0,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 2,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 2,2.0 394 | 374: 2,2.0 395 | 375: 0,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 2,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 0,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 0,0.0 422 | 402: 2,2.0 423 | 403: 1,1.0 424 | 404: 2,2.0 425 | 405: 0,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 1,1.0 464 | 444: 1,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 0,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 1,0.0 487 | 467: 1,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 0,2.0 504 | 484: 0,1.0 505 | 485: 0,2.0 506 | 486: 0,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 0,1.0 522 | 502: 0,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 2,2.0 528 | 508: 2,2.0 529 | 509: 1,2.0 530 | 510: 0,1.0 531 | 511: 0,2.0 532 | 512: 0,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 2,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 0,1.0 557 | 537: 0,0.0 558 | 538: 0,2.0 559 | 539: 0,1.0 560 | 540: 2,1.0 561 | 541: 2,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 2,2.0 570 | 550: 2,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 
574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 2,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 1,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 0,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 2,1.0 642 | 622: 2,1.0 643 | 623: 2,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 0,2.0 657 | 637: 0,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.001_-1_0.5_False_32_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.001 2 | batch_size: 32 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.6630094043887147 7 | recall: [0.6171875 0.30177515 0.85923754] 8 | precision: [0.48466258 0.61445783 0.74744898] 9 | f1: [0.54295533 0.4047619 0.7994543 ] 10 | macro_recall: 0.5927333948619619 11 | macro_precision: 0.6155231292014182 12 | macro_f1: 0.5823905095434329 13 | micro_recall: 0.6630094043887147 14 | micro_precision: 0.6630094043887147 15 | micro_f1: 0.6630094043887147 16 | weighted_recall: 0.6630094043887147 17 | weighted_precision: 0.659498879860099 18 | weighted_f1: 0.6434435095733569 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 0,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 2,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 1,2.0 82 | 62: 1,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 
71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 0,2.0 104 | 84: 0,2.0 105 | 85: 2,0.0 106 | 86: 1,1.0 107 | 87: 1,1.0 108 | 88: 1,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 2,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 1,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 2,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 1,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 1,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 1,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 0,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 1,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 
2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 0,1.0 317 | 297: 0,1.0 318 | 298: 2,1.0 319 | 299: 2,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 1,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 2,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 1,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 0,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 0,2.0 394 | 374: 2,2.0 395 | 375: 0,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 2,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 2,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 0,0.0 422 | 402: 2,2.0 423 | 403: 0,1.0 424 | 404: 2,2.0 425 | 405: 1,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 2,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 2,1.0 464 | 444: 2,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 1,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 0,0.0 487 | 467: 0,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 2,2.0 504 | 484: 0,1.0 505 | 485: 2,2.0 506 | 486: 2,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 
2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 1,1.0 522 | 502: 1,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 1,2.0 528 | 508: 1,2.0 529 | 509: 1,2.0 530 | 510: 2,1.0 531 | 511: 2,2.0 532 | 512: 2,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 1,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 1,1.0 557 | 537: 1,0.0 558 | 538: 1,2.0 559 | 539: 1,1.0 560 | 540: 0,1.0 561 | 541: 0,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 0,2.0 570 | 550: 0,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 2,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 0,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 2,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 1,1.0 642 | 622: 1,1.0 643 | 623: 1,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 2,2.0 657 | 637: 2,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.001_-1_0.7_False_16_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.001 2 | batch_size: 16 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.658307210031348 7 | recall: [0.6171875 0.27218935 0.86510264] 8 | precision: [0.49068323 0.60526316 0.73566085] 9 | f1: [0.5467128 0.3755102 0.79514825] 10 | macro_recall: 0.5848264961362046 11 | macro_precision: 0.6105357451962335 12 | macro_f1: 0.5724570849427452 13 | micro_recall: 0.658307210031348 14 | micro_precision: 0.658307210031348 15 | micro_f1: 0.658307210031348 16 | weighted_recall: 0.658307210031348 17 | weighted_precision: 0.6519706523942659 18 | weighted_f1: 0.6341473601955613 19 | end metrics ----------------------------------------------------------------- 20 | 
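Each file under results/ans (including the one whose per-example answers resume below) follows the same three-section layout, delimited by "end params", "end metrics", and "end ans": the run's hyperparameters, the evaluation metrics, and one "index: a,b" line per test example, where one column holds the predicted label and the other the gold label (the column order is not documented in the repo). Labels 0/1/2 are the three sentiment classes; the class supports implied by the recall vectors (128, 169, and 341 examples, summing to the 638 test cases) match the negative/neutral/positive counts of the standard SemEval-2014 laptop test split, suggesting 0 = negative, 1 = neutral, 2 = positive. The file names appear to encode model_dataset_optimizer_lr_maxseqlen_dropout_softmax_batchsize_dev, an inference from the params blocks and run.sh rather than anything documented. Note also that micro-averaged precision, recall, and F1 coincide with accuracy in single-label multi-class classification, which is why micro_* always equals acc in these files. A minimal sketch for re-deriving the reported numbers from such a file (the column-order assumption is flagged in the comments; requires scikit-learn):

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def parse_ans(path):
    """Parse a results/ans file into (predicted, gold) label lists."""
    preds, golds = [], []
    section = "params"
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("end params"):
                section = "metrics"
            elif line.startswith("end metrics"):
                section = "ans"
            elif line.startswith("end ans"):
                break
            elif section == "ans" and ":" in line:
                _, pair = line.split(":", 1)
                a, b = pair.split(",")
                preds.append(int(a))          # assumption: first column = prediction
                golds.append(int(float(b)))   # assumption: second column = gold label
    return preds, golds

preds, golds = parse_ans("results/ans/ContextAvg_laptop14_Adam_0.001_-1_0.3_False_32_0.1.txt")
print("acc:", accuracy_score(golds, preds))   # symmetric, so unaffected by column order
p, r, f1, _ = precision_recall_fscore_support(golds, preds, average="macro")
print("macro_precision:", p, "macro_recall:", r, "macro_f1:", f1)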
0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 0,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 2,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 0,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 1,2.0 82 | 62: 1,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 0,2.0 104 | 84: 0,2.0 105 | 85: 2,0.0 106 | 86: 1,1.0 107 | 87: 1,1.0 108 | 88: 1,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 2,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 1,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 2,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 2,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 0,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 1,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 
220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 1,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 0,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 2,1.0 268 | 248: 2,1.0 269 | 249: 2,1.0 270 | 250: 2,1.0 271 | 251: 2,1.0 272 | 252: 1,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 2,1.0 277 | 257: 2,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 0,1.0 317 | 297: 0,1.0 318 | 298: 2,1.0 319 | 299: 2,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 1,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 2,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 1,1.0 355 | 335: 1,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 1,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 367: 0,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 2,2.0 394 | 374: 2,2.0 395 | 375: 0,0.0 396 | 376: 1,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 2,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 2,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 0,0.0 422 | 402: 2,2.0 423 | 403: 0,1.0 424 | 404: 2,2.0 425 | 405: 1,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 
429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 2,1.0 464 | 444: 2,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 1,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 0,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 0,0.0 487 | 467: 0,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 2,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 2,2.0 504 | 484: 0,1.0 505 | 485: 2,2.0 506 | 486: 2,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 1,1.0 522 | 502: 1,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 1,2.0 528 | 508: 1,2.0 529 | 509: 1,2.0 530 | 510: 2,1.0 531 | 511: 2,2.0 532 | 512: 2,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 2,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 1,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 1,1.0 557 | 537: 1,0.0 558 | 538: 1,2.0 559 | 539: 1,1.0 560 | 540: 0,1.0 561 | 541: 0,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 0,2.0 570 | 550: 0,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 2,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 0,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 2,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 1,1.0 642 | 622: 1,1.0 643 | 623: 1,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 2,2.0 657 | 637: 2,2.0 658 | 
end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/ans/ContextAvg_laptop14_Adam_0.001_-1_0.7_False_32_0.1.txt: -------------------------------------------------------------------------------- 1 | lr: 0.001 2 | batch_size: 32 3 | opt: Adam 4 | max_sentence_len: -1 5 | end params ----------------------------------------------------------------- 6 | acc: 0.664576802507837 7 | recall: [0.625 0.26627219 0.87683284] 8 | precision: [0.45714286 0.68181818 0.75314861] 9 | f1: [0.52805281 0.38297872 0.8102981 ] 10 | macro_recall: 0.5893683446412975 11 | macro_precision: 0.6307032178568702 12 | macro_f1: 0.5737765438886044 13 | micro_recall: 0.664576802507837 14 | micro_precision: 0.664576802507837 15 | micro_f1: 0.664576802507837 16 | weighted_recall: 0.664576802507837 17 | weighted_precision: 0.674867141102543 18 | weighted_f1: 0.6404793361250124 19 | end metrics ----------------------------------------------------------------- 20 | 0: 2,2.0 21 | 1: 0,0.0 22 | 2: 2,2.0 23 | 3: 2,0.0 24 | 4: 2,0.0 25 | 5: 0,0.0 26 | 6: 0,2.0 27 | 7: 0,0.0 28 | 8: 0,1.0 29 | 9: 2,2.0 30 | 10: 2,2.0 31 | 11: 2,2.0 32 | 12: 2,2.0 33 | 13: 2,2.0 34 | 14: 2,2.0 35 | 15: 2,2.0 36 | 16: 0,0.0 37 | 17: 2,0.0 38 | 18: 2,2.0 39 | 19: 2,2.0 40 | 20: 0,0.0 41 | 21: 2,2.0 42 | 22: 0,0.0 43 | 23: 2,2.0 44 | 24: 2,2.0 45 | 25: 0,2.0 46 | 26: 2,2.0 47 | 27: 0,2.0 48 | 28: 2,2.0 49 | 29: 2,2.0 50 | 30: 2,2.0 51 | 31: 2,2.0 52 | 32: 0,0.0 53 | 33: 2,2.0 54 | 34: 2,2.0 55 | 35: 2,2.0 56 | 36: 0,2.0 57 | 37: 2,2.0 58 | 38: 2,2.0 59 | 39: 2,2.0 60 | 40: 0,0.0 61 | 41: 2,2.0 62 | 42: 2,2.0 63 | 43: 0,0.0 64 | 44: 0,0.0 65 | 45: 2,2.0 66 | 46: 2,2.0 67 | 47: 2,2.0 68 | 48: 2,2.0 69 | 49: 2,2.0 70 | 50: 2,2.0 71 | 51: 2,2.0 72 | 52: 2,2.0 73 | 53: 2,2.0 74 | 54: 0,0.0 75 | 55: 2,2.0 76 | 56: 2,0.0 77 | 57: 2,2.0 78 | 58: 2,2.0 79 | 59: 2,2.0 80 | 60: 2,2.0 81 | 61: 0,2.0 82 | 62: 0,1.0 83 | 63: 2,0.0 84 | 64: 1,2.0 85 | 65: 2,2.0 86 | 66: 2,2.0 87 | 67: 2,2.0 88 | 68: 2,0.0 89 | 69: 2,2.0 90 | 70: 2,0.0 91 | 71: 2,2.0 92 | 72: 2,2.0 93 | 73: 2,2.0 94 | 74: 1,2.0 95 | 75: 1,0.0 96 | 76: 2,2.0 97 | 77: 0,0.0 98 | 78: 2,1.0 99 | 79: 2,1.0 100 | 80: 2,2.0 101 | 81: 2,2.0 102 | 82: 2,2.0 103 | 83: 2,2.0 104 | 84: 2,2.0 105 | 85: 2,0.0 106 | 86: 0,1.0 107 | 87: 0,1.0 108 | 88: 0,0.0 109 | 89: 2,2.0 110 | 90: 2,2.0 111 | 91: 2,2.0 112 | 92: 2,2.0 113 | 93: 2,2.0 114 | 94: 2,2.0 115 | 95: 2,2.0 116 | 96: 2,2.0 117 | 97: 2,2.0 118 | 98: 2,2.0 119 | 99: 2,2.0 120 | 100: 0,0.0 121 | 101: 0,0.0 122 | 102: 2,2.0 123 | 103: 2,2.0 124 | 104: 2,2.0 125 | 105: 2,2.0 126 | 106: 0,1.0 127 | 107: 2,2.0 128 | 108: 2,2.0 129 | 109: 2,2.0 130 | 110: 2,2.0 131 | 111: 2,2.0 132 | 112: 0,0.0 133 | 113: 2,2.0 134 | 114: 0,0.0 135 | 115: 2,1.0 136 | 116: 2,0.0 137 | 117: 2,0.0 138 | 118: 2,0.0 139 | 119: 2,2.0 140 | 120: 2,2.0 141 | 121: 0,2.0 142 | 122: 0,2.0 143 | 123: 2,2.0 144 | 124: 0,0.0 145 | 125: 2,2.0 146 | 126: 2,2.0 147 | 127: 2,2.0 148 | 128: 2,2.0 149 | 129: 2,2.0 150 | 130: 2,2.0 151 | 131: 0,0.0 152 | 132: 2,2.0 153 | 133: 0,0.0 154 | 134: 2,2.0 155 | 135: 2,2.0 156 | 136: 2,2.0 157 | 137: 2,2.0 158 | 138: 2,2.0 159 | 139: 1,1.0 160 | 140: 1,2.0 161 | 141: 2,2.0 162 | 142: 2,0.0 163 | 143: 2,2.0 164 | 144: 0,1.0 165 | 145: 0,0.0 166 | 146: 0,0.0 167 | 147: 2,2.0 168 | 148: 2,2.0 169 | 149: 1,0.0 170 | 150: 2,2.0 171 | 151: 2,2.0 172 | 152: 0,0.0 173 | 153: 2,0.0 174 | 154: 2,2.0 175 | 155: 2,2.0 176 | 156: 2,2.0 177 | 157: 2,2.0 178 | 
158: 1,1.0 179 | 159: 1,1.0 180 | 160: 1,1.0 181 | 161: 1,1.0 182 | 162: 1,1.0 183 | 163: 2,0.0 184 | 164: 2,0.0 185 | 165: 2,1.0 186 | 166: 2,0.0 187 | 167: 2,0.0 188 | 168: 2,0.0 189 | 169: 2,1.0 190 | 170: 1,1.0 191 | 171: 2,2.0 192 | 172: 0,2.0 193 | 173: 2,2.0 194 | 174: 2,2.0 195 | 175: 2,2.0 196 | 176: 2,1.0 197 | 177: 2,1.0 198 | 178: 2,2.0 199 | 179: 2,2.0 200 | 180: 2,2.0 201 | 181: 2,1.0 202 | 182: 2,2.0 203 | 183: 2,2.0 204 | 184: 2,2.0 205 | 185: 2,2.0 206 | 186: 0,0.0 207 | 187: 2,1.0 208 | 188: 2,2.0 209 | 189: 2,2.0 210 | 190: 2,2.0 211 | 191: 2,2.0 212 | 192: 2,2.0 213 | 193: 2,1.0 214 | 194: 2,1.0 215 | 195: 2,1.0 216 | 196: 2,1.0 217 | 197: 0,0.0 218 | 198: 2,1.0 219 | 199: 2,0.0 220 | 200: 0,0.0 221 | 201: 0,0.0 222 | 202: 0,0.0 223 | 203: 0,0.0 224 | 204: 0,0.0 225 | 205: 2,2.0 226 | 206: 2,2.0 227 | 207: 2,2.0 228 | 208: 2,2.0 229 | 209: 2,2.0 230 | 210: 0,1.0 231 | 211: 2,1.0 232 | 212: 2,2.0 233 | 213: 2,2.0 234 | 214: 2,2.0 235 | 215: 2,2.0 236 | 216: 2,2.0 237 | 217: 2,2.0 238 | 218: 2,2.0 239 | 219: 1,1.0 240 | 220: 1,1.0 241 | 221: 0,0.0 242 | 222: 2,2.0 243 | 223: 2,2.0 244 | 224: 2,2.0 245 | 225: 2,2.0 246 | 226: 2,2.0 247 | 227: 2,2.0 248 | 228: 2,2.0 249 | 229: 2,2.0 250 | 230: 2,2.0 251 | 231: 2,2.0 252 | 232: 2,0.0 253 | 233: 0,2.0 254 | 234: 0,0.0 255 | 235: 0,2.0 256 | 236: 2,2.0 257 | 237: 2,2.0 258 | 238: 2,2.0 259 | 239: 2,2.0 260 | 240: 2,2.0 261 | 241: 2,2.0 262 | 242: 2,2.0 263 | 243: 2,2.0 264 | 244: 2,2.0 265 | 245: 0,2.0 266 | 246: 1,0.0 267 | 247: 1,1.0 268 | 248: 1,1.0 269 | 249: 1,1.0 270 | 250: 1,1.0 271 | 251: 1,1.0 272 | 252: 2,2.0 273 | 253: 2,2.0 274 | 254: 2,2.0 275 | 255: 2,2.0 276 | 256: 1,1.0 277 | 257: 1,0.0 278 | 258: 2,2.0 279 | 259: 2,2.0 280 | 260: 2,2.0 281 | 261: 2,2.0 282 | 262: 2,1.0 283 | 263: 2,2.0 284 | 264: 2,2.0 285 | 265: 0,1.0 286 | 266: 0,1.0 287 | 267: 0,1.0 288 | 268: 2,2.0 289 | 269: 0,1.0 290 | 270: 0,0.0 291 | 271: 2,2.0 292 | 272: 2,2.0 293 | 273: 2,2.0 294 | 274: 2,0.0 295 | 275: 2,2.0 296 | 276: 0,0.0 297 | 277: 0,0.0 298 | 278: 2,2.0 299 | 279: 0,0.0 300 | 280: 2,2.0 301 | 281: 2,2.0 302 | 282: 2,2.0 303 | 283: 2,2.0 304 | 284: 2,2.0 305 | 285: 2,2.0 306 | 286: 2,2.0 307 | 287: 2,2.0 308 | 288: 2,2.0 309 | 289: 0,0.0 310 | 290: 0,0.0 311 | 291: 0,0.0 312 | 292: 2,2.0 313 | 293: 2,2.0 314 | 294: 2,2.0 315 | 295: 2,2.0 316 | 296: 2,1.0 317 | 297: 2,1.0 318 | 298: 1,1.0 319 | 299: 1,1.0 320 | 300: 2,0.0 321 | 301: 0,1.0 322 | 302: 0,2.0 323 | 303: 2,2.0 324 | 304: 2,2.0 325 | 305: 2,2.0 326 | 306: 0,2.0 327 | 307: 0,2.0 328 | 308: 0,2.0 329 | 309: 0,0.0 330 | 310: 2,0.0 331 | 311: 2,1.0 332 | 312: 2,1.0 333 | 313: 2,1.0 334 | 314: 2,1.0 335 | 315: 0,2.0 336 | 316: 0,0.0 337 | 317: 2,2.0 338 | 318: 2,2.0 339 | 319: 2,2.0 340 | 320: 2,2.0 341 | 321: 2,2.0 342 | 322: 2,2.0 343 | 323: 2,2.0 344 | 324: 0,0.0 345 | 325: 0,1.0 346 | 326: 1,0.0 347 | 327: 2,2.0 348 | 328: 2,2.0 349 | 329: 2,2.0 350 | 330: 0,1.0 351 | 331: 0,1.0 352 | 332: 2,2.0 353 | 333: 2,2.0 354 | 334: 0,1.0 355 | 335: 0,0.0 356 | 336: 0,0.0 357 | 337: 0,1.0 358 | 338: 2,2.0 359 | 339: 2,2.0 360 | 340: 0,0.0 361 | 341: 2,1.0 362 | 342: 2,2.0 363 | 343: 2,2.0 364 | 344: 2,2.0 365 | 345: 2,2.0 366 | 346: 2,2.0 367 | 347: 0,0.0 368 | 348: 2,2.0 369 | 349: 2,2.0 370 | 350: 2,2.0 371 | 351: 2,1.0 372 | 352: 2,0.0 373 | 353: 2,2.0 374 | 354: 2,2.0 375 | 355: 1,2.0 376 | 356: 2,2.0 377 | 357: 2,2.0 378 | 358: 2,2.0 379 | 359: 2,2.0 380 | 360: 2,2.0 381 | 361: 1,2.0 382 | 362: 0,0.0 383 | 363: 0,1.0 384 | 364: 1,1.0 385 | 365: 2,1.0 386 | 366: 2,1.0 387 | 
367: 2,2.0 388 | 368: 0,0.0 389 | 369: 0,0.0 390 | 370: 2,2.0 391 | 371: 0,0.0 392 | 372: 2,2.0 393 | 373: 2,2.0 394 | 374: 2,2.0 395 | 375: 2,0.0 396 | 376: 2,1.0 397 | 377: 2,2.0 398 | 378: 2,2.0 399 | 379: 2,1.0 400 | 380: 2,1.0 401 | 381: 1,1.0 402 | 382: 0,1.0 403 | 383: 2,1.0 404 | 384: 2,1.0 405 | 385: 2,1.0 406 | 386: 2,1.0 407 | 387: 2,2.0 408 | 388: 0,1.0 409 | 389: 0,2.0 410 | 390: 2,1.0 411 | 391: 2,2.0 412 | 392: 1,2.0 413 | 393: 2,2.0 414 | 394: 0,1.0 415 | 395: 0,0.0 416 | 396: 0,1.0 417 | 397: 0,1.0 418 | 398: 0,1.0 419 | 399: 2,2.0 420 | 400: 2,2.0 421 | 401: 2,0.0 422 | 402: 2,2.0 423 | 403: 1,1.0 424 | 404: 2,2.0 425 | 405: 0,1.0 426 | 406: 2,2.0 427 | 407: 0,0.0 428 | 408: 0,0.0 429 | 409: 0,0.0 430 | 410: 2,2.0 431 | 411: 2,2.0 432 | 412: 2,2.0 433 | 413: 2,2.0 434 | 414: 2,2.0 435 | 415: 2,2.0 436 | 416: 2,1.0 437 | 417: 2,1.0 438 | 418: 2,1.0 439 | 419: 2,2.0 440 | 420: 2,2.0 441 | 421: 0,0.0 442 | 422: 1,2.0 443 | 423: 1,2.0 444 | 424: 1,2.0 445 | 425: 0,0.0 446 | 426: 0,0.0 447 | 427: 0,0.0 448 | 428: 2,2.0 449 | 429: 2,2.0 450 | 430: 2,2.0 451 | 431: 2,2.0 452 | 432: 2,2.0 453 | 433: 1,1.0 454 | 434: 1,1.0 455 | 435: 1,1.0 456 | 436: 1,1.0 457 | 437: 2,2.0 458 | 438: 0,2.0 459 | 439: 0,1.0 460 | 440: 0,1.0 461 | 441: 2,0.0 462 | 442: 2,0.0 463 | 443: 1,1.0 464 | 444: 1,1.0 465 | 445: 0,2.0 466 | 446: 0,1.0 467 | 447: 2,2.0 468 | 448: 0,0.0 469 | 449: 0,1.0 470 | 450: 0,1.0 471 | 451: 0,0.0 472 | 452: 2,2.0 473 | 453: 2,2.0 474 | 454: 1,1.0 475 | 455: 1,1.0 476 | 456: 0,0.0 477 | 457: 0,1.0 478 | 458: 0,1.0 479 | 459: 0,1.0 480 | 460: 0,1.0 481 | 461: 0,1.0 482 | 462: 2,1.0 483 | 463: 2,0.0 484 | 464: 0,1.0 485 | 465: 0,0.0 486 | 466: 1,0.0 487 | 467: 1,0.0 488 | 468: 2,2.0 489 | 469: 2,2.0 490 | 470: 2,2.0 491 | 471: 1,2.0 492 | 472: 2,0.0 493 | 473: 1,2.0 494 | 474: 0,0.0 495 | 475: 0,1.0 496 | 476: 2,2.0 497 | 477: 2,2.0 498 | 478: 2,1.0 499 | 479: 2,2.0 500 | 480: 2,2.0 501 | 481: 2,2.0 502 | 482: 2,1.0 503 | 483: 0,2.0 504 | 484: 0,1.0 505 | 485: 0,2.0 506 | 486: 0,2.0 507 | 487: 2,0.0 508 | 488: 0,0.0 509 | 489: 2,2.0 510 | 490: 2,2.0 511 | 491: 2,2.0 512 | 492: 2,1.0 513 | 493: 1,1.0 514 | 494: 1,1.0 515 | 495: 1,1.0 516 | 496: 1,1.0 517 | 497: 1,1.0 518 | 498: 2,0.0 519 | 499: 2,1.0 520 | 500: 2,1.0 521 | 501: 0,1.0 522 | 502: 0,1.0 523 | 503: 0,1.0 524 | 504: 2,0.0 525 | 505: 2,2.0 526 | 506: 2,2.0 527 | 507: 2,2.0 528 | 508: 2,2.0 529 | 509: 1,2.0 530 | 510: 0,1.0 531 | 511: 0,2.0 532 | 512: 0,1.0 533 | 513: 2,1.0 534 | 514: 2,1.0 535 | 515: 1,1.0 536 | 516: 2,2.0 537 | 517: 0,1.0 538 | 518: 0,1.0 539 | 519: 0,0.0 540 | 520: 0,1.0 541 | 521: 1,1.0 542 | 522: 1,1.0 543 | 523: 1,1.0 544 | 524: 0,1.0 545 | 525: 0,0.0 546 | 526: 0,0.0 547 | 527: 0,0.0 548 | 528: 0,1.0 549 | 529: 2,2.0 550 | 530: 0,0.0 551 | 531: 2,1.0 552 | 532: 2,1.0 553 | 533: 2,2.0 554 | 534: 0,0.0 555 | 535: 0,1.0 556 | 536: 0,1.0 557 | 537: 0,0.0 558 | 538: 0,2.0 559 | 539: 0,1.0 560 | 540: 2,1.0 561 | 541: 2,1.0 562 | 542: 1,2.0 563 | 543: 1,1.0 564 | 544: 1,1.0 565 | 545: 1,1.0 566 | 546: 1,1.0 567 | 547: 0,0.0 568 | 548: 2,1.0 569 | 549: 2,2.0 570 | 550: 2,2.0 571 | 551: 2,2.0 572 | 552: 2,2.0 573 | 553: 0,1.0 574 | 554: 0,0.0 575 | 555: 0,1.0 576 | 556: 2,1.0 577 | 557: 2,1.0 578 | 558: 2,2.0 579 | 559: 0,2.0 580 | 560: 0,0.0 581 | 561: 2,2.0 582 | 562: 2,2.0 583 | 563: 2,2.0 584 | 564: 2,2.0 585 | 565: 0,1.0 586 | 566: 0,0.0 587 | 567: 0,1.0 588 | 568: 0,1.0 589 | 569: 2,2.0 590 | 570: 2,2.0 591 | 571: 2,2.0 592 | 572: 2,2.0 593 | 573: 2,2.0 594 | 574: 2,2.0 595 | 575: 2,2.0 596 | 
576: 2,2.0 597 | 577: 2,2.0 598 | 578: 2,2.0 599 | 579: 2,2.0 600 | 580: 2,2.0 601 | 581: 0,1.0 602 | 582: 1,1.0 603 | 583: 2,0.0 604 | 584: 2,0.0 605 | 585: 2,2.0 606 | 586: 2,2.0 607 | 587: 2,0.0 608 | 588: 2,0.0 609 | 589: 2,0.0 610 | 590: 0,1.0 611 | 591: 0,0.0 612 | 592: 2,1.0 613 | 593: 0,1.0 614 | 594: 0,1.0 615 | 595: 1,1.0 616 | 596: 1,1.0 617 | 597: 2,2.0 618 | 598: 2,2.0 619 | 599: 2,2.0 620 | 600: 0,1.0 621 | 601: 2,2.0 622 | 602: 2,0.0 623 | 603: 2,1.0 624 | 604: 2,0.0 625 | 605: 2,2.0 626 | 606: 2,2.0 627 | 607: 2,2.0 628 | 608: 2,2.0 629 | 609: 2,2.0 630 | 610: 0,0.0 631 | 611: 0,0.0 632 | 612: 1,1.0 633 | 613: 1,2.0 634 | 614: 0,1.0 635 | 615: 0,1.0 636 | 616: 2,1.0 637 | 617: 2,2.0 638 | 618: 0,1.0 639 | 619: 0,0.0 640 | 620: 0,0.0 641 | 621: 2,1.0 642 | 622: 2,1.0 643 | 623: 2,2.0 644 | 624: 2,2.0 645 | 625: 2,0.0 646 | 626: 2,2.0 647 | 627: 2,2.0 648 | 628: 2,2.0 649 | 629: 2,2.0 650 | 630: 2,2.0 651 | 631: 2,2.0 652 | 632: 0,2.0 653 | 633: 0,1.0 654 | 634: 0,1.0 655 | 635: 0,1.0 656 | 636: 0,2.0 657 | 637: 0,2.0 658 | end ans ----------------------------------------------------------------- 659 | -------------------------------------------------------------------------------- /results/attention_weight/weight.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/results/attention_weight/weight.txt -------------------------------------------------------------------------------- /results/model/LSTM.model.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DrJZhou/Deep-Learning-for-Aspect-Level-Sentiment-Classification-Baselines/8594ebf6745531add1b385a264f3f0344d63f78f/results/model/LSTM.model.txt -------------------------------------------------------------------------------- /run.sh: -------------------------------------------------------------------------------- 1 | datasets=("restaurants14" "laptop14" "restaurants15" "restaurants16") 2 | model_names=("HAN") 3 | #model_names=("ATAE_LSTM" "ATAE_GRU" "ATAE_BiLSTM" "ATAE_BiGRU" "TC_LSTM" "GCAE" "CABASC" "LCRS" "RAM" "ContextAvg" "AEContextAvg" "TD_LSTM" "LSTM" "BiLSTM" "GRU" "BiGRU" "MemNet" "IAN") 4 | batchsizes=(16 32) 5 | dropouts=(0.5) 6 | max_seq_lens=(-1) 7 | optimizers=("Adam") 8 | learning_rates=(0.001 0.0005) 9 | devs=(0.1 0.2) 10 | for dataset in ${datasets[@]}; do 11 | for model_name in ${model_names[@]}; do 12 | for batchsize in ${batchsizes[@]}; do 13 | for optimizer in ${optimizers[@]}; do 14 | for dropout in ${dropouts[@]}; do 15 | for max_seq_len in ${max_seq_lens[@]}; do 16 | for learning_rate in ${learning_rates[@]}; do 17 | echo $model_name $dataset $batchsize $optimizer $learning_rate $max_seq_len $dropout ${dataset}_${model_name}.log 18 | python train_han.py --model_name $model_name --dataset $dataset --batch_size $batchsize --optimizer $optimizer --learning_rate $learning_rate --max_seq_len $max_seq_len --dropout $dropout >> results/log/${dataset}_${model_name}.log 19 | # python train_han.py --softmax --model_name $model_name --dataset $dataset --batch_size $batchsize --optimizer $optimizer --learning_rate $learning_rate --max_seq_len $max_seq_len --dropout $dropout >> results/log/${dataset}_${model_name}.log 20 | done 21 | done 22 | done 23 | done 24 | done 25 | done 26 | done 27 | 
--------------------------------------------------------------------------------
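run.sh above is the experiment driver: it sweeps the cross product of dataset, model, batch size, optimizer, dropout, maximum sequence length, and learning rate, appending each run's output to results/log/<dataset>_<model_name>.log. The devs array is declared but never used inside the loop, so the dev split presumably falls back to the default inside train_han.py. A minimal Python sketch of the same sweep, run from the repository root and assuming only the command-line flags run.sh itself passes to train_han.py:

import itertools
import os
import subprocess

datasets = ["restaurants14", "laptop14", "restaurants15", "restaurants16"]
model_names = ["HAN"]
batch_sizes = [16, 32]
optimizers = ["Adam"]
dropouts = [0.5]
max_seq_lens = [-1]
learning_rates = [0.001, 0.0005]

os.makedirs("results/log", exist_ok=True)  # the repo ships this directory, but be safe
for ds, model, bs, opt, dr, msl, lr in itertools.product(
        datasets, model_names, batch_sizes, optimizers,
        dropouts, max_seq_lens, learning_rates):
    cmd = ["python", "train_han.py",
           "--model_name", model, "--dataset", ds,
           "--batch_size", str(bs), "--optimizer", opt,
           "--learning_rate", str(lr), "--max_seq_len", str(msl),
           "--dropout", str(dr)]
    print(" ".join(cmd))
    with open(f"results/log/{ds}_{model}.log", "a") as log:
        subprocess.run(cmd, stdout=log)  # append mode mirrors run.sh's ">>" redirection

itertools.product keeps the seven nested shell loops flat, and opening the log file in append mode reproduces the accumulation behavior of the original script.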