├── README.md
├── best_models
│   ├── best_modelTextClassificationModel_BiGRU.pt
│   ├── best_modelTextClassificationModel_BiGRU_MHA.pt
│   ├── best_modelTextClassificationModel_BiLSTM_CNN_MHA.pt
│   ├── best_modelTextClassificationModel_BiLSTM_MHA.pt
│   ├── best_modelTextClassificationModel_CNN_MHA.pt
│   └── best_modelTextClassificationModel_GRU_MHA.pt
├── dataset.py
├── datasets
│   ├── README.txt
│   └── processed_data.csv
├── main.py
├── model.py
├── model_outputs
│   ├── TextClassificationModel_BiGRU.txt
│   ├── TextClassificationModel_BiGRU_MHA.txt
│   ├── TextClassificationModel_BiLSTM_CNN_MHA.txt
│   ├── TextClassificationModel_BiLSTM_MHA.txt
│   ├── TextClassificationModel_CNN_MHA.txt
│   └── TextClassificationModel_GRU_MHA.txt
├── slurm-788571.out
└── sources
    ├── Figure_1.png
    ├── Figure_2.png
    ├── Figure_3.png
    ├── Figure_4.png
    ├── Figure_5.png
    └── Figure_6.png

/README.md:
--------------------------------------------------------------------------------
# How several mainstream models perform on fine-grained classification of short product texts

A note up front: this is an archive. I will freely admit that these models are all fairly classical, and that many of them clearly failed to converge or clearly overfit.

Environment: CUDA 12 on an RTX 3090.

## 1. Dataset

The dataset mainly consists of comments from the Weibo review sections of products promoted in Li Jiaqi's livestreams.

**Some sample rows** (the text and label values are kept in their original Chinese, exactly as they appear in the CSV):

| Text | Category | Sentiment | ID |
| ---- | -------- | --------- | -- |
| 李佳琦争气很好很优秀,衣服也更符合大家的眼光。 | 主播品质 | positive | 1 |
| 说什么比李佳琦好,李佳琦直播间时尚沙漠,李佳琦老头眼光。 | 选品 | positive | 2 |
| 人家都说了,李佳琦直播间没办法上网红店衣服,只能选质量好一点的舒服的。 | 服饰类质量 | positive | 3 |
| 李佳琦直播间的产品选品就这? | 选品 | negative | 4 |
| 咱好歹是钻粉5,买的东西也不少,太失望了,品控太差了 | 选品 | negative | 5 |
| 李佳琦直播间现在选品是真的有问题啊,以前生活品都闭眼买质量绝对没问题、现在真的是. | 选品 | negative | 6 |
| 他怎么能做到闭眼吹这个品好的?? | 主播信任度 | negative | 7 |
| 我之前真的是随便点进去就选几样现在这种信任渐渐崩塌了我已经开始害怕 | 主播信任度 | negative | 8 |

**The target classes are listed below:**

| Cluster | Classes |
| ------- | ------- |
| 主播 (host) | 主播专业度、主播信任度、主播支持度、主播语速、主播品质、主播责任感、主播话术、主播公益事业 |
| 价格 (price) | 优惠、促销活动、福利、折扣、性价比、赠品、红包、满减活动、积分兑换 |
| 直播间 (livestream room) | 直播间氛围、直播间预告、直播设备、直播间画质 |
| 物流 (logistics) | 快递、发货、包装 |
| 售后 (after-sales) | 服务态度、客服、退款、退货 |
| 选品 (product selection) | 美妆类质量、日用品质量、生活用品质量、母婴品质量、电器质量、食品质量、饮品质量、服饰类质量、饰品质量、家具质量 |
| 系统 (platform) | 系统定价、系统问题、链接数量 |

In short, there are a lot of classes, and sentiment classification is performed at the same time.

## 2. Models

### 2.1 word2vec

If anyone wants to rerun this code locally (probably nobody will), then besides setting up CUDA and the usual Python libraries you will also need the pretrained word2vec vectors below:

[GitHub - Embedding/Chinese-Word-Vectors: 100+ Chinese Word Vectors 上百种预训练中文词向量](https://github.com/Embedding/Chinese-Word-Vectors)

I used their Weibo-specific pretrained word2vec model, i.e. [the SGNS Word + Character + Ngram vectors trained on Weibo](https://pan.baidu.com/s/1FHl_bQkYucvVk-j2KG4dxA).
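As a rough sketch, loading the vectors and building an embedding matrix looks like the following (this mirrors `load_embeddings` in `dataset.py`; the filename `sgns.weibo.bigram-char` is the unpacked archive name assumed in `main.py`):

```python
import gensim
import numpy as np
import torch

# Load the text-format word2vec file (large; this takes a while).
kv = gensim.models.KeyedVectors.load_word2vec_format('sgns.weibo.bigram-char', binary=False)

def build_matrix(vocab):
    """Align the pretrained vectors with a {token: index} vocab.
    Out-of-vocabulary tokens are left as zero vectors."""
    matrix = np.zeros((len(vocab), kv.vector_size))
    for word, idx in vocab.items():
        if word in kv:
            matrix[idx] = kv[word]
    return torch.tensor(matrix, dtype=torch.float32)
```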
### 2.2 Architectures

#### TextClassificationModel_CNN_MHA

| Parameter | Value |
| --------- | ----- |
| Model name | TextClassificationModel_CNN_MHA |
| Embedding dim | 300 |
| Conv kernel size | 3 |
| Conv filters | 64 |
| Attention heads | 8 |
| Sentiment output classes | 2 |
| Feature output classes | 10 |
| Pretrained embeddings | yes |

This model consists of an embedding layer, a convolutional layer, a multi-head self-attention layer, and several fully connected layers. It processes the text sequence together with the feature token and has two output heads, one for sentiment classification and one for feature classification.

#### TextClassificationModel_BiLSTM_CNN_MHA

| Parameter | Value |
| --------- | ----- |
| Model name | TextClassificationModel_BiLSTM_CNN_MHA |
| Embedding dim | 300 |
| LSTM hidden dim | 128 |
| Conv kernel size | 3 |
| Conv filters | 64 |
| Attention heads | 8 |
| Sentiment output classes | 2 |
| Feature output classes | 10 |
| Pretrained embeddings | yes |

This model consists of an embedding layer, a bidirectional LSTM layer, a convolutional layer, a multi-head self-attention layer, and several fully connected layers. It processes the text sequence together with the feature token and has two output heads, one for sentiment classification and one for feature classification.

#### TextClassificationModel_BiGRU

| Parameter | Value |
| --------- | ----- |
| Model name | TextClassificationModel_BiGRU |
| Embedding dim | 300 |
| GRU hidden dim | 32 |
| Attention heads | 8 (constructor argument, unused) |
| Sentiment output classes | 2 |
| Feature output classes | 10 |
| Pretrained embeddings | yes |

This model consists of an embedding layer, a bidirectional GRU layer, and fully connected layers. Despite the constructor argument, it is the only variant without a multi-head self-attention layer: the GRU outputs are mean-pooled directly. It processes the text sequence together with the feature token and has two output heads, one for sentiment classification and one for feature classification.

#### TextClassificationModel_BiLSTM_MHA

| Parameter | Value |
| --------- | ----- |
| Model name | TextClassificationModel_BiLSTM_MHA |
| Embedding dim | 300 |
| LSTM hidden dim | 128 |
| Attention heads | 8 |
| Sentiment output classes | 2 |
| Feature output classes | 10 |
| Pretrained embeddings | yes |

This model consists of an embedding layer, a bidirectional LSTM layer, a multi-head self-attention layer, and several fully connected layers. It processes the text sequence together with the feature token and has two output heads, one for sentiment classification and one for feature classification.

#### TextClassificationModel_GRU_MHA

| Parameter | Value |
| --------- | ----- |
| Model name | TextClassificationModel_GRU_MHA |
| Embedding dim | 300 |
| GRU hidden dim | 128 |
| Attention heads | 8 |
| Sentiment output classes | 2 |
| Feature output classes | 10 |
| Pretrained embeddings | yes |

This model consists of an embedding layer, a unidirectional GRU layer, a multi-head self-attention layer, and several fully connected layers. It processes the text sequence together with the feature token and has two output heads, one for sentiment classification and one for feature classification.

#### TextClassificationModel_BiGRU_MHA

| Parameter | Value |
| --------- | ----- |
| Model name | TextClassificationModel_BiGRU_MHA |
| Embedding dim | 300 |
| GRU hidden dim | 128 |
| Attention heads | 8 |
| Sentiment output classes | 2 |
| Feature output classes | 10 |
| Pretrained embeddings | yes |

This model consists of an embedding layer, a bidirectional GRU layer, a multi-head self-attention layer, and several fully connected layers. It processes the text sequence together with the feature token and has two output heads, one for sentiment classification and one for feature classification.

## 3. Model scores

Note that the emotion and feature labels are in fact predicted jointly; they are reported separately here only for readability. In general, a run with high emotion accuracy also has high feature accuracy.

### 3.1 Best emotion accuracy

| Model name | Best emotion accuracy |
| ---------- | --------------------- |
| TextClassificationModel_CNN_MHA | 0.8197 |
| TextClassificationModel_BiLSTM_CNN_MHA | 0.7793 |
| TextClassificationModel_BiGRU | 0.8495 |
| TextClassificationModel_BiLSTM_MHA | 0.8495 |
| TextClassificationModel_GRU_MHA | 0.8602 |
| TextClassificationModel_BiGRU_MHA | 0.8510 |

### 3.2 Best feature accuracy

| Model name | Best feature accuracy |
| ---------- | --------------------- |
| TextClassificationModel_CNN_MHA | 0.5202 |
| TextClassificationModel_BiLSTM_CNN_MHA | 0.7892 |
| TextClassificationModel_BiGRU | 0.8616 |
| TextClassificationModel_BiLSTM_MHA | 0.9595 |
| TextClassificationModel_GRU_MHA | 0.9865 |
| TextClassificationModel_BiGRU_MHA | 0.9929 |

### 3.3 Training curves

![Figure_6](sources/Figure_6.png)

![Figure_2](sources/Figure_2.png)

![Figure_1](sources/Figure_1.png)

![Figure_3](sources/Figure_3.png)

![Figure_4](sources/Figure_4.png)

![Figure_5](sources/Figure_5.png)

## 4. Summary

All of these mainstream models turn out to post quite respectable scores, some beyond my expectations, particularly given that they handle fine-grained classification and the two simultaneous tasks of feature classification and sentiment classification. Sentiment classification is singled out above because it is the only binary task, and a comparatively hard one.

My guess is that part of the strong performance comes from the dataset itself and from using the domain-specific pretrained word2vec model.
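To poke at one of the archived checkpoints, a minimal reload sketch looks like this. The constructor arguments mirror `main.py`; `vocab` and `feature_vocab` are assumed to be rebuilt the same way as there, and the `module.` prefix handling is only needed if the weights were saved from an `nn.DataParallel` wrapper:

```python
import torch
from model import TextClassificationModel_GRU_MHA

# Rebuild the architecture with the same sizes used in main.py.
model = TextClassificationModel_GRU_MHA(len(vocab), len(feature_vocab), 300, 128, 8, 2, len(feature_vocab))
state = torch.load('best_models/best_modelTextClassificationModel_GRU_MHA.pt', map_location='cpu')
state = {k.replace('module.', '', 1): v for k, v in state.items()}  # strip DataParallel prefix if present
model.load_state_dict(state)
model.eval()
```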
--------------------------------------------------------------------------------
/best_models/best_modelTextClassificationModel_BiGRU.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/best_models/best_modelTextClassificationModel_BiGRU.pt
--------------------------------------------------------------------------------
/best_models/best_modelTextClassificationModel_BiGRU_MHA.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/best_models/best_modelTextClassificationModel_BiGRU_MHA.pt
--------------------------------------------------------------------------------
/best_models/best_modelTextClassificationModel_BiLSTM_CNN_MHA.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/best_models/best_modelTextClassificationModel_BiLSTM_CNN_MHA.pt
--------------------------------------------------------------------------------
/best_models/best_modelTextClassificationModel_BiLSTM_MHA.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/best_models/best_modelTextClassificationModel_BiLSTM_MHA.pt
--------------------------------------------------------------------------------
/best_models/best_modelTextClassificationModel_CNN_MHA.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/best_models/best_modelTextClassificationModel_CNN_MHA.pt
--------------------------------------------------------------------------------
/best_models/best_modelTextClassificationModel_GRU_MHA.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/best_models/best_modelTextClassificationModel_GRU_MHA.pt
--------------------------------------------------------------------------------
/dataset.py:
--------------------------------------------------------------------------------
import torch
from torch.utils.data import Dataset
import jieba
import pandas as pd
import gensim
import numpy as np

def read_csv(file_path):
    # Read the annotated CSV and binarize the sentiment column.
    df = pd.read_csv(file_path)
    texts = df['文本'].tolist()
    features = df['分类'].tolist()
    emotions = [1 if emotion == 'positive' else 0 for emotion in df['情感'].tolist()]
    return texts, features, emotions

class TextDataset(Dataset):
    def __init__(self, texts, features, emotions, tokenizer, vocab, feature_vocab):
        self.texts = texts
        self.features = features
        self.emotions = emotions
        self.tokenizer = tokenizer
        self.vocab = vocab
        self.feature_vocab = feature_vocab

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, index):
        # Map tokens and labels to integer ids.
        tokens = self.tokenizer(self.texts[index])
        text_tensor = torch.tensor([self.vocab[token] for token in tokens], dtype=torch.long)
        feature_tensor = torch.tensor(self.feature_vocab[self.features[index]], dtype=torch.long)
        emotion_tensor = torch.tensor(self.emotions[index], dtype=torch.long)
        return text_tensor, feature_tensor, emotion_tensor

def build_vocab_from_iterator(iterator):
    # Assign each previously unseen token the next free integer id.
    vocab = {}
    for tokens in iterator:
        for token in tokens:
            if token not in vocab:
                vocab[token] = len(vocab)
    return vocab

def build_feature_vocab(features):
    feature_vocab = {}
    for feature in features:
        if feature not in feature_vocab:
            feature_vocab[feature] = len(feature_vocab)
    return feature_vocab

def get_tokenizer():
    # jieba word segmentation wrapped as a callable tokenizer.
    return lambda text: list(jieba.cut(text))

def load_embeddings(embedding_file, vocab):
    # Align the pretrained word2vec vectors with the corpus vocab;
    # out-of-vocabulary tokens stay as zero vectors.
    pretrained_model = gensim.models.KeyedVectors.load_word2vec_format(embedding_file, binary=False)
    embedding_matrix = np.zeros((len(vocab), pretrained_model.vector_size))

    for word, idx in vocab.items():
        if word in pretrained_model:
            embedding_matrix[idx] = pretrained_model[word]

    return torch.tensor(embedding_matrix, dtype=torch.float32)
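# A minimal smoke test for the pipeline above, assuming the sample CSV shipped
# in datasets/ and a working jieba install (illustrative only; paths are
# relative to the repo root).
if __name__ == '__main__':
    texts, features, emotions = read_csv('datasets/processed_data.csv')
    tokenizer = get_tokenizer()
    vocab = build_vocab_from_iterator(tokenizer(t) for t in texts)
    feature_vocab = build_feature_vocab(features)
    dataset = TextDataset(texts, features, emotions, tokenizer, vocab, feature_vocab)
    text_t, feature_t, emotion_t = dataset[0]
    print(len(dataset), text_t.shape, feature_t.item(), emotion_t.item())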
--------------------------------------------------------------------------------
/datasets/README.txt:
--------------------------------------------------------------------------------
Due to copyright concerns, the full dataset is not provided here for the time being.
--------------------------------------------------------------------------------
/datasets/processed_data.csv:
--------------------------------------------------------------------------------
文本,分类,情感,编号
李佳琦争气很好很优秀,衣服也更符合大家的眼光。,主播品质,positive,1
说什么比李佳琦好,李佳琦直播间时尚沙漠,李佳琦老头眼光。,选品,positive,2
人家都说了,李佳琦直播间没办法上网红店衣服,只能选质量好一点的舒服的。,服饰类质量,positive,3
李佳琦直播间的产品选品就这?,选品,negative,4
咱好歹是钻粉5,买的东西也不少,太失望了,品控太差了,选品,negative,5
李佳琦直播间现在选品是真的有问题啊,以前生活品都闭眼买质量绝对没问题、现在真的是.,选品,negative,6
他怎么能做到闭眼吹这个品好的??,主播信任度,negative,7
我之前真的是随便点进去就选几样现在这种信任渐渐崩塌了我已经开始害怕,主播信任度,negative,8
几次一来我也关注了哦对再加一句话,老李头你给我遵纪守法啊,该税就税,别再玩消失了,主播支持度,negative,9
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import torch
import torch.optim as optim
from torch.utils.data import DataLoader
from sklearn.model_selection import train_test_split
import torch.nn as nn
from dataset import read_csv, TextDataset, build_vocab_from_iterator, build_feature_vocab, get_tokenizer, load_embeddings
from model import TextClassificationModel_BiLSTM_MHA
from model import TextClassificationModel_CNN_MHA
from model import TextClassificationModel_GRU_MHA
from model import TextClassificationModel_BiGRU
from model import TextClassificationModel_BiLSTM_CNN_MHA
from model import TextClassificationModel_BiGRU_MHA

def pad_collate(batch):
    # Right-pad every text in the batch with zeros to the longest length.
    (texts, features, emotions) = zip(*batch)
    text_lengths = [len(t) for t in texts]
    text_padded = torch.zeros(len(texts), max(text_lengths), dtype=torch.long)
    for i, t in enumerate(texts):
        text_padded[i, :len(t)] = t
    return text_padded, torch.stack(features), torch.stack(emotions)
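# Worked example (hypothetical batch): two samples with token ids [5, 9, 2]
# and [7, 4] are padded to the (2, 3) LongTensor [[5, 9, 2], [7, 4, 0]];
# the feature and emotion labels are stacked into 1-D LongTensors of length 2.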
def train(model, dataloader, optimizer, criterion, device):
    model.train()
    epoch_loss = 0
    for batch in dataloader:
        text, feature_label, emotion_label = batch
        text = text.to(device)
        feature_label = feature_label.to(device)
        emotion_label = emotion_label.to(device)
        optimizer.zero_grad()
        emotion_output, feature_output = model(text, feature_label)
        # The two heads are trained jointly with a summed cross-entropy loss.
        emotion_loss = criterion(emotion_output, emotion_label)
        feature_loss = criterion(feature_output, feature_label)
        loss = emotion_loss + feature_loss
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    return epoch_loss / len(dataloader)

def evaluate(model, dataloader, criterion, device):
    model.eval()
    epoch_loss = 0
    emotion_correct = 0
    feature_correct = 0
    total = 0
    all_emotion_preds = []
    all_feature_preds = []

    with torch.no_grad():
        for batch in dataloader:
            text, feature_label, emotion_label = batch
            text = text.to(device)
            feature_label = feature_label.to(device)
            emotion_label = emotion_label.to(device)
            emotion_output, feature_output = model(text, feature_label)
            emotion_loss = criterion(emotion_output, emotion_label)
            feature_loss = criterion(feature_output, feature_label)
            loss = emotion_loss + feature_loss
            epoch_loss += loss.item()
            emotion_correct += (emotion_output.argmax(1) == emotion_label).sum().item()
            feature_correct += (feature_output.argmax(1) == feature_label).sum().item()
            total += emotion_label.size(0)

            # Collect per-sample predictions; the test loader is not shuffled,
            # so these line up with test_texts in main().
            emotion_preds = torch.argmax(emotion_output, dim=1)
            feature_preds = torch.argmax(feature_output, dim=1)

            all_emotion_preds.extend(emotion_preds.tolist())
            all_feature_preds.extend(feature_preds.tolist())

    return epoch_loss / len(dataloader), emotion_correct / total, feature_correct / total, all_emotion_preds, all_feature_preds

def main():
    datasetPath = '/dev/shm/datasets/'
    data_file = datasetPath + 'processed_data.csv'
    embedding_file = datasetPath + 'sgns.weibo.bigram-char'
    texts, features, emotions = read_csv(data_file)
    tokenizer = get_tokenizer()
    tokenized_texts = [tokenizer(text) for text in texts]
    vocab = build_vocab_from_iterator(tokenized_texts)
    feature_vocab = build_feature_vocab(features)

    train_texts, test_texts, train_features, test_features, train_emotions, test_emotions = train_test_split(texts, features, emotions, test_size=0.2, random_state=42)

    train_dataset = TextDataset(train_texts, train_features, train_emotions, tokenizer, vocab, feature_vocab)
    test_dataset = TextDataset(test_texts, test_features, test_emotions, tokenizer, vocab, feature_vocab)

    train_dataloader = DataLoader(train_dataset, batch_size=64, shuffle=True, collate_fn=pad_collate)
    test_dataloader = DataLoader(test_dataset, batch_size=64, shuffle=False, collate_fn=pad_collate)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    num_gpus = torch.cuda.device_count()

    print(f"Using device: {device}")

    embeddings = load_embeddings(embedding_file, vocab)
    modelList = list()

    # All models are moved to the device inside the training loop below.
    modelList.append(
        TextClassificationModel_CNN_MHA(
            len(vocab),
            len(feature_vocab),
            300,
            8,
            2,
            len(feature_vocab),
            embeddings=embeddings,
            kernel_size=3,
            num_filters=64))
    modelList.append(
        TextClassificationModel_BiLSTM_CNN_MHA(
            len(vocab),
            len(feature_vocab),
            300,
            128,
            8,
            2,
            len(feature_vocab),
            embeddings=embeddings,
            kernel_size=3,
            num_filters=64))
    modelList.append(TextClassificationModel_BiGRU(len(vocab), len(feature_vocab), 300, 32, 8, 2, len(feature_vocab), embeddings=embeddings))
    modelList.append(TextClassificationModel_BiLSTM_MHA(len(vocab), len(feature_vocab), 300, 128, 8, 2, len(feature_vocab), embeddings=embeddings))
    modelList.append(TextClassificationModel_GRU_MHA(len(vocab), len(feature_vocab), 300, 128, 8, 2, len(feature_vocab), embeddings=embeddings))
    modelList.append(TextClassificationModel_BiGRU_MHA(len(vocab), len(feature_vocab), 300, 128, 8, 2, len(feature_vocab), embeddings=embeddings))
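    # All six constructors share the same positional layout (see model.py):
    # (vocab_size, feature_size, embed_dim, [hidden_dim,] attn_heads,
    #  num_classes, feature_classes, embeddings=..., [kernel_size=..., num_filters=...]);
    # CNN_MHA has no recurrent hidden_dim, and only the CNN variants take
    # kernel_size / num_filters.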
    for model in modelList:
        name = model.name
        print("IN:", name)

        # Wrap in DataParallel once when several GPUs are available, then
        # move the (possibly wrapped) model to the device. The original code
        # wrapped the model in DataParallel twice here.
        if num_gpus > 1:
            print("Using", num_gpus, "GPUs")
            model = nn.DataParallel(model)
        model = model.to(device)

        optimizer = optim.Adam(model.parameters(), lr=0.0002)
        criterion = torch.nn.CrossEntropyLoss()

        num_epochs = 100
        best_test_loss = float('inf')

        for epoch in range(num_epochs):
            train_loss = train(model, train_dataloader, optimizer, criterion, device)
            test_loss, test_emotion_accuracy, test_feature_accuracy, test_emotion_preds, test_feature_preds = evaluate(
                model, test_dataloader, criterion, device)
            print(
                f'Test Loss: {test_loss:.4f}, Train Loss: {train_loss:.4f}, Test Emotion Accuracy: {test_emotion_accuracy:.4f}, Test Feature Accuracy: {test_feature_accuracy:.4f}')

            # Keep the checkpoint and per-sample predictions of the best epoch.
            if test_loss < best_test_loss:
                best_test_loss = test_loss
                torch.save(model.state_dict(), 'best_model' + name + '.pt')
                with open(name + '.txt', 'w', encoding='utf-8') as file:
                    for i, (text, true_feature, true_emotion) in enumerate(zip(test_texts, test_features, test_emotions)):
                        pred_emotion = 'positive' if test_emotion_preds[i] == 1 else 'negative'
                        pred_feature = list(feature_vocab.keys())[list(feature_vocab.values()).index(test_feature_preds[i])]
                        file.write(
                            f"Text: {text}, True Feature: {true_feature}, True Emotion: {true_emotion}, Predicted Feature: {pred_feature}, Predicted Emotion: {pred_emotion}\n")


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
/model.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.modules.activation import MultiheadAttention

# NOTE: MultiheadAttention is instantiated with its default batch_first=False,
# while the forward passes below feed it (batch, seq, embed) tensors, so the
# batch and sequence axes are swapped relative to the usual convention. This
# is left untouched to match the archived checkpoints and results.

class TextClassificationModel_BiLSTM_CNN_MHA(nn.Module):
    def __init__(self, vocab_size, feature_size, embed_dim, lstm_hidden_dim, attn_heads, num_classes, feature_classes, embeddings=None, kernel_size=3, num_filters=64):
        super(TextClassificationModel_BiLSTM_CNN_MHA, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.name = 'TextClassificationModel_BiLSTM_CNN_MHA'
        if embeddings is not None:
            # Use the pretrained word2vec matrix and keep it frozen.
            self.embedding.weight = nn.Parameter(embeddings)
            self.embedding.weight.requires_grad = False
        self.feature_embedding = nn.Embedding(feature_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden_dim, batch_first=True, bidirectional=True)
        self.conv1d = nn.Conv1d(lstm_hidden_dim * 2, num_filters, kernel_size)
        self.max_pool = nn.MaxPool1d(kernel_size)
        self.self_attn = MultiheadAttention(num_filters, attn_heads)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(num_filters, lstm_hidden_dim)
        self.fc2_emotion = nn.Linear(lstm_hidden_dim, num_classes)
        self.fc2_feature = nn.Linear(lstm_hidden_dim, feature_classes)

    def forward(self, text, features):
        embedded_text = self.embedding(text)
        # The feature label is embedded and appended to the text as one
        # extra token before encoding.
        embedded_features = self.feature_embedding(features).unsqueeze(1)
        embedded = torch.cat((embedded_text, embedded_features), dim=1)
        lstm_output, _ = self.lstm(embedded)
        cnn_input = lstm_output.permute(0, 2, 1)
        cnn_output = F.relu(self.conv1d(cnn_input))
        max_pool_output = self.max_pool(cnn_output).permute(0, 2, 1)
        attn_output, _ = self.self_attn(max_pool_output, max_pool_output, max_pool_output)
        feature_vector = torch.mean(attn_output, dim=1)
        feature_vector = self.dropout(feature_vector)
        fc1_output = F.relu(self.fc1(feature_vector))
        fc1_output = self.dropout(fc1_output)
        emotion_logits = self.fc2_emotion(fc1_output)
        feature_logits = self.fc2_feature(fc1_output)
        return emotion_logits, feature_logits
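# Shape walk-through for the class above (batch B, text length T, defaults
# embed_dim=300, lstm_hidden_dim=128, num_filters=64, kernel_size=3):
#   cat(text, feature token) -> (B, T+1, 300)
#   BiLSTM                   -> (B, T+1, 256)
#   Conv1d (k=3)             -> (B, 64, T-1)
#   MaxPool1d (k=3)          -> (B, 64, (T-1)//3), permuted to (B, (T-1)//3, 64)
#   attention + mean-pool    -> (B, 64) -> fc1 -> (B, 128) -> the two heads.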
class TextClassificationModel_BiLSTM_MHA(nn.Module):
    def __init__(self, vocab_size, feature_size, embed_dim, lstm_hidden_dim, attn_heads, num_classes, feature_classes, embeddings=None):
        super(TextClassificationModel_BiLSTM_MHA, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.name = 'TextClassificationModel_BiLSTM_MHA'
        if embeddings is not None:
            self.embedding.weight = nn.Parameter(embeddings)
            self.embedding.weight.requires_grad = False
        self.feature_embedding = nn.Embedding(feature_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden_dim, batch_first=True, bidirectional=True)
        self.self_attn = MultiheadAttention(lstm_hidden_dim * 2, attn_heads)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(lstm_hidden_dim * 2, lstm_hidden_dim)
        self.fc2_emotion = nn.Linear(lstm_hidden_dim, num_classes)
        self.fc2_feature = nn.Linear(lstm_hidden_dim, feature_classes)

    def forward(self, text, features):
        embedded_text = self.embedding(text)
        embedded_features = self.feature_embedding(features).unsqueeze(1)
        embedded = torch.cat((embedded_text, embedded_features), dim=1)
        lstm_output, _ = self.lstm(embedded)
        attn_output, _ = self.self_attn(lstm_output, lstm_output, lstm_output)
        feature_vector = torch.mean(attn_output, dim=1)
        feature_vector = self.dropout(feature_vector)
        fc1_output = F.relu(self.fc1(feature_vector))
        fc1_output = self.dropout(fc1_output)
        emotion_logits = self.fc2_emotion(fc1_output)
        feature_logits = self.fc2_feature(fc1_output)
        return emotion_logits, feature_logits

class TextClassificationModel_CNN_MHA(nn.Module):
    def __init__(self, vocab_size, feature_size, embed_dim, attn_heads, num_classes, feature_classes, embeddings=None, kernel_size=3, num_filters=64):
        super(TextClassificationModel_CNN_MHA, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.name = 'TextClassificationModel_CNN_MHA'
        if embeddings is not None:
            self.embedding.weight = nn.Parameter(embeddings)
            self.embedding.weight.requires_grad = False
        self.feature_embedding = nn.Embedding(feature_size, embed_dim)
        self.conv1d = nn.Conv1d(embed_dim, num_filters, kernel_size)
        self.max_pool = nn.MaxPool1d(kernel_size)
        self.self_attn = MultiheadAttention(num_filters, attn_heads)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(num_filters, embed_dim)
        self.fc2_emotion = nn.Linear(embed_dim, num_classes)
        self.fc2_feature = nn.Linear(embed_dim, feature_classes)

    def forward(self, text, features):
        embedded_text = self.embedding(text)
        embedded_features = self.feature_embedding(features).unsqueeze(1)
        embedded = torch.cat((embedded_text, embedded_features), dim=1)
        cnn_output = F.relu(self.conv1d(embedded.permute(0, 2, 1)))
        max_pool_output = self.max_pool(cnn_output).permute(0, 2, 1)
        attn_output, _ = self.self_attn(max_pool_output, max_pool_output, max_pool_output)
        feature_vector = torch.mean(attn_output, dim=1)
        feature_vector = self.dropout(feature_vector)
        fc1_output = F.relu(self.fc1(feature_vector))
        fc1_output = self.dropout(fc1_output)
        emotion_logits = self.fc2_emotion(fc1_output)
        feature_logits = self.fc2_feature(fc1_output)
        return emotion_logits, feature_logits
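# Output-length check for the convolution above: with no padding and stride 1,
# L_out = L_in - kernel_size + 1. A hypothetical 20-token text plus the feature
# token gives 21 - 3 + 1 = 19 positions after Conv1d, and MaxPool1d (whose
# stride defaults to its kernel size) leaves 19 // 3 = 6 positions.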
class TextClassificationModel_GRU_MHA(nn.Module):
    def __init__(self, vocab_size, feature_size, embed_dim, gru_hidden_dim, attn_heads, num_classes, feature_classes, embeddings=None):
        super(TextClassificationModel_GRU_MHA, self).__init__()
        self.name = 'TextClassificationModel_GRU_MHA'
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        if embeddings is not None:
            self.embedding.weight = nn.Parameter(embeddings)
            self.embedding.weight.requires_grad = False
        self.feature_embedding = nn.Embedding(feature_size, embed_dim)
        self.gru = nn.GRU(embed_dim, gru_hidden_dim, batch_first=True)
        self.self_attn = MultiheadAttention(gru_hidden_dim, attn_heads)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(gru_hidden_dim, gru_hidden_dim)
        self.fc2_emotion = nn.Linear(gru_hidden_dim, num_classes)
        self.fc2_feature = nn.Linear(gru_hidden_dim, feature_classes)

    def forward(self, text, features):
        embedded_text = self.embedding(text)
        embedded_features = self.feature_embedding(features).unsqueeze(1)
        embedded = torch.cat((embedded_text, embedded_features), dim=1)
        gru_output, _ = self.gru(embedded)
        attn_output, _ = self.self_attn(gru_output, gru_output, gru_output)
        feature_vector = torch.mean(attn_output, dim=1)
        feature_vector = self.dropout(feature_vector)
        fc1_output = F.relu(self.fc1(feature_vector))
        fc1_output = self.dropout(fc1_output)
        emotion_logits = self.fc2_emotion(fc1_output)
        feature_logits = self.fc2_feature(fc1_output)
        return emotion_logits, feature_logits

class TextClassificationModel_BiGRU_MHA(nn.Module):
    def __init__(self, vocab_size, feature_size, embed_dim, gru_hidden_dim, attn_heads, num_classes, feature_classes, embeddings=None):
        super(TextClassificationModel_BiGRU_MHA, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.name = 'TextClassificationModel_BiGRU_MHA'
        if embeddings is not None:
            self.embedding.weight = nn.Parameter(embeddings)
            self.embedding.weight.requires_grad = False
        self.feature_embedding = nn.Embedding(feature_size, embed_dim)
        self.gru = nn.GRU(embed_dim, gru_hidden_dim, batch_first=True, bidirectional=True)
        self.self_attn = MultiheadAttention(gru_hidden_dim * 2, attn_heads)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(gru_hidden_dim * 2, gru_hidden_dim)
        self.fc2_emotion = nn.Linear(gru_hidden_dim, num_classes)
        self.fc2_feature = nn.Linear(gru_hidden_dim, feature_classes)

    def forward(self, text, features):
        embedded_text = self.embedding(text)
        embedded_features = self.feature_embedding(features).unsqueeze(1)
        embedded = torch.cat((embedded_text, embedded_features), dim=1)
        gru_output, _ = self.gru(embedded)
        attn_output, _ = self.self_attn(gru_output, gru_output, gru_output)
        feature_vector = torch.mean(attn_output, dim=1)
        feature_vector = self.dropout(feature_vector)
        fc1_output = F.relu(self.fc1(feature_vector))
        fc1_output = self.dropout(fc1_output)
        emotion_logits = self.fc2_emotion(fc1_output)
        feature_logits = self.fc2_feature(fc1_output)
        return emotion_logits, feature_logits
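# Minimal shape check for the class above (hypothetical sizes, matching the
# defaults in main.py):
#   model = TextClassificationModel_BiGRU_MHA(1000, 10, 300, 128, 8, 2, 10)
#   e, f = model(torch.randint(0, 1000, (4, 20)), torch.randint(0, 10, (4,)))
#   e.shape == (4, 2); f.shape == (4, 10)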
class TextClassificationModel_BiGRU(nn.Module):
    def __init__(self, vocab_size, feature_size, embed_dim, gru_hidden_dim, attn_heads, num_classes, feature_classes, embeddings=None):
        # attn_heads is accepted for interface compatibility but unused:
        # this variant has no attention layer and mean-pools the GRU outputs.
        super(TextClassificationModel_BiGRU, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.name = 'TextClassificationModel_BiGRU'
        if embeddings is not None:
            self.embedding.weight = nn.Parameter(embeddings)
            self.embedding.weight.requires_grad = False
        self.feature_embedding = nn.Embedding(feature_size, embed_dim)
        self.gru = nn.GRU(embed_dim, gru_hidden_dim, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(gru_hidden_dim * 2, gru_hidden_dim)
        self.fc2_emotion = nn.Linear(gru_hidden_dim, num_classes)
        self.fc2_feature = nn.Linear(gru_hidden_dim, feature_classes)

    def forward(self, text, features):
        embedded_text = self.embedding(text)
        embedded_features = self.feature_embedding(features).unsqueeze(1)
        embedded = torch.cat((embedded_text, embedded_features), dim=1)
        gru_output, _ = self.gru(embedded)
        feature_vector = torch.mean(gru_output, dim=1)
        feature_vector = self.dropout(feature_vector)
        fc1_output = F.relu(self.fc1(feature_vector))
        fc1_output = self.dropout(fc1_output)
        emotion_logits = self.fc2_emotion(fc1_output)
        feature_logits = self.fc2_feature(fc1_output)
        return emotion_logits, feature_logits
--------------------------------------------------------------------------------
/slurm-788571.out:
--------------------------------------------------------------------------------
1 | Building prefix dict from the default dictionary ...
2 | Dumping model to file cache /tmp/jieba.cache
3 | Loading model cost 0.608 seconds.
4 | Prefix dict has been built successfully.
5 | Using device: cuda 6 | IN: TextClassificationModel_CNN_MHA 7 | Test Loss: 3.8629, Train Loss: 4.3759, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 8 | Test Loss: 3.8529, Train Loss: 3.9306, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 9 | Test Loss: 3.8250, Train Loss: 3.8909, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1249 10 | Test Loss: 3.7853, Train Loss: 3.8684, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 11 | Test Loss: 3.7583, Train Loss: 3.8063, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 12 | Test Loss: 3.7214, Train Loss: 3.7292, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 13 | Test Loss: 3.6904, Train Loss: 3.6437, Test Emotion Accuracy: 0.7111, Test Feature Accuracy: 0.1391 14 | Test Loss: 3.5887, Train Loss: 3.5344, Test Emotion Accuracy: 0.7161, Test Feature Accuracy: 0.1789 15 | Test Loss: 3.5300, Train Loss: 3.3685, Test Emotion Accuracy: 0.7615, Test Feature Accuracy: 0.1852 16 | Test Loss: 3.5386, Train Loss: 3.2574, Test Emotion Accuracy: 0.7828, Test Feature Accuracy: 0.1845 17 | Test Loss: 3.3680, Train Loss: 3.1797, Test Emotion Accuracy: 0.7779, Test Feature Accuracy: 0.2023 18 | Test Loss: 3.3492, Train Loss: 3.0993, Test Emotion Accuracy: 0.7850, Test Feature Accuracy: 0.2321 19 | Test Loss: 3.1874, Train Loss: 3.0180, Test Emotion Accuracy: 0.7956, Test Feature Accuracy: 0.2697 20 | Test Loss: 3.1983, Train Loss: 2.9014, Test Emotion Accuracy: 0.7928, Test Feature Accuracy: 0.2661 21 | Test Loss: 3.1462, Train Loss: 2.8651, Test Emotion Accuracy: 0.7970, Test Feature Accuracy: 0.3286 22 | Test Loss: 3.0693, Train Loss: 2.7680, Test Emotion Accuracy: 0.8048, Test Feature Accuracy: 0.3400 23 | Test Loss: 2.9927, Train Loss: 2.7418, Test Emotion Accuracy: 0.7906, Test Feature Accuracy: 0.3634 24 | Test Loss: 3.0128, Train Loss: 2.6574, Test Emotion Accuracy: 0.8055, Test Feature Accuracy: 0.3683 25 | Test Loss: 2.9175, Train Loss: 2.5833, Test Emotion Accuracy: 0.8048, Test Feature Accuracy: 0.3847 26 | Test Loss: 2.9033, Train Loss: 2.5484, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.3953 27 | Test Loss: 2.8995, Train Loss: 2.5169, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.4024 28 | Test Loss: 2.8556, Train Loss: 2.5098, Test Emotion Accuracy: 0.8027, Test Feature Accuracy: 0.4344 29 | Test Loss: 2.7720, Train Loss: 2.3921, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.4521 30 | Test Loss: 2.8649, Train Loss: 2.3911, Test Emotion Accuracy: 0.8105, Test Feature Accuracy: 0.4500 31 | Test Loss: 2.8758, Train Loss: 2.3105, Test Emotion Accuracy: 0.7885, Test Feature Accuracy: 0.4556 32 | Test Loss: 2.7658, Train Loss: 2.2305, Test Emotion Accuracy: 0.8091, Test Feature Accuracy: 0.4698 33 | Test Loss: 2.7871, Train Loss: 2.2243, Test Emotion Accuracy: 0.7942, Test Feature Accuracy: 0.4649 34 | Test Loss: 2.6947, Train Loss: 2.1675, Test Emotion Accuracy: 0.8062, Test Feature Accuracy: 0.4918 35 | Test Loss: 2.7305, Train Loss: 2.1473, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.4862 36 | Test Loss: 2.7733, Train Loss: 2.1308, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.4833 37 | Test Loss: 2.7371, Train Loss: 2.0559, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.4890 38 | Test Loss: 2.6408, Train Loss: 2.0296, Test Emotion Accuracy: 0.8169, Test Feature Accuracy: 0.5060 39 | Test Loss: 2.7324, Train Loss: 1.9844, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.5117 40 | Test Loss: 2.6885, Train 
Loss: 1.8679, Test Emotion Accuracy: 0.8041, Test Feature Accuracy: 0.5089 41 | Test Loss: 2.6783, Train Loss: 1.9492, Test Emotion Accuracy: 0.8190, Test Feature Accuracy: 0.5082 42 | Test Loss: 2.7588, Train Loss: 1.8864, Test Emotion Accuracy: 0.8169, Test Feature Accuracy: 0.5117 43 | Test Loss: 2.7089, Train Loss: 1.8552, Test Emotion Accuracy: 0.8211, Test Feature Accuracy: 0.5231 44 | Test Loss: 2.7589, Train Loss: 1.8383, Test Emotion Accuracy: 0.8183, Test Feature Accuracy: 0.5259 45 | Test Loss: 2.8290, Train Loss: 1.7848, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.5266 46 | Test Loss: 2.7825, Train Loss: 1.7370, Test Emotion Accuracy: 0.8112, Test Feature Accuracy: 0.5358 47 | Test Loss: 2.8695, Train Loss: 1.7242, Test Emotion Accuracy: 0.8219, Test Feature Accuracy: 0.5366 48 | Test Loss: 2.8063, Train Loss: 1.6549, Test Emotion Accuracy: 0.8176, Test Feature Accuracy: 0.5380 49 | Test Loss: 2.8374, Train Loss: 1.6356, Test Emotion Accuracy: 0.8297, Test Feature Accuracy: 0.5486 50 | Test Loss: 2.9272, Train Loss: 1.6262, Test Emotion Accuracy: 0.7963, Test Feature Accuracy: 0.5387 51 | Test Loss: 2.9225, Train Loss: 1.6165, Test Emotion Accuracy: 0.8226, Test Feature Accuracy: 0.5479 52 | Test Loss: 2.9966, Train Loss: 1.6188, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.5380 53 | Test Loss: 2.8547, Train Loss: 1.5342, Test Emotion Accuracy: 0.8169, Test Feature Accuracy: 0.5564 54 | Test Loss: 3.0012, Train Loss: 1.5547, Test Emotion Accuracy: 0.8297, Test Feature Accuracy: 0.5479 55 | Test Loss: 3.1026, Train Loss: 1.5118, Test Emotion Accuracy: 0.8282, Test Feature Accuracy: 0.5415 56 | Test Loss: 3.0501, Train Loss: 1.4723, Test Emotion Accuracy: 0.8297, Test Feature Accuracy: 0.5557 57 | IN: TextClassificationModel_BiLSTM_CNN_MHA 58 | Test Loss: 3.9943, Train Loss: 4.4943, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 59 | Test Loss: 3.8459, Train Loss: 4.0210, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 60 | Test Loss: 3.8164, Train Loss: 3.9615, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 61 | Test Loss: 3.8142, Train Loss: 3.9176, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 62 | Test Loss: 3.5400, Train Loss: 3.8591, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1192 63 | Test Loss: 3.1953, Train Loss: 3.3563, Test Emotion Accuracy: 0.6835, Test Feature Accuracy: 0.1838 64 | Test Loss: 3.0752, Train Loss: 3.1954, Test Emotion Accuracy: 0.6671, Test Feature Accuracy: 0.1831 65 | Test Loss: 3.0337, Train Loss: 3.1119, Test Emotion Accuracy: 0.7048, Test Feature Accuracy: 0.1909 66 | Test Loss: 2.8522, Train Loss: 3.0099, Test Emotion Accuracy: 0.7708, Test Feature Accuracy: 0.3009 67 | Test Loss: 2.6951, Train Loss: 2.8349, Test Emotion Accuracy: 0.7537, Test Feature Accuracy: 0.3463 68 | Test Loss: 2.5837, Train Loss: 2.7203, Test Emotion Accuracy: 0.7878, Test Feature Accuracy: 0.3570 69 | Test Loss: 2.4904, Train Loss: 2.5994, Test Emotion Accuracy: 0.8020, Test Feature Accuracy: 0.3648 70 | Test Loss: 2.4537, Train Loss: 2.5076, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.3563 71 | Test Loss: 2.3316, Train Loss: 2.4130, Test Emotion Accuracy: 0.8233, Test Feature Accuracy: 0.3882 72 | Test Loss: 2.2709, Train Loss: 2.3289, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.4173 73 | Test Loss: 2.1850, Train Loss: 2.2768, Test Emotion Accuracy: 0.8190, Test Feature Accuracy: 0.4216 74 | Test Loss: 2.1356, Train Loss: 2.1961, Test Emotion Accuracy: 
0.8169, Test Feature Accuracy: 0.4351 75 | Test Loss: 2.0837, Train Loss: 2.1228, Test Emotion Accuracy: 0.8219, Test Feature Accuracy: 0.4386 76 | Test Loss: 2.0544, Train Loss: 2.0781, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.4485 77 | Test Loss: 2.0165, Train Loss: 2.0501, Test Emotion Accuracy: 0.8176, Test Feature Accuracy: 0.4436 78 | Test Loss: 2.0002, Train Loss: 2.0228, Test Emotion Accuracy: 0.8204, Test Feature Accuracy: 0.4592 79 | Test Loss: 1.9680, Train Loss: 1.9857, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.4528 80 | Test Loss: 1.9208, Train Loss: 1.9468, Test Emotion Accuracy: 0.8233, Test Feature Accuracy: 0.4564 81 | Test Loss: 1.8933, Train Loss: 1.9142, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.4613 82 | Test Loss: 1.9448, Train Loss: 1.8953, Test Emotion Accuracy: 0.8126, Test Feature Accuracy: 0.4819 83 | Test Loss: 1.8840, Train Loss: 1.8767, Test Emotion Accuracy: 0.8190, Test Feature Accuracy: 0.4741 84 | Test Loss: 1.8610, Train Loss: 1.8425, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.4833 85 | Test Loss: 1.8204, Train Loss: 1.7996, Test Emotion Accuracy: 0.8155, Test Feature Accuracy: 0.4883 86 | Test Loss: 1.7976, Train Loss: 1.7860, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.5174 87 | Test Loss: 1.7701, Train Loss: 1.7679, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.5131 88 | Test Loss: 1.7820, Train Loss: 1.7655, Test Emotion Accuracy: 0.8119, Test Feature Accuracy: 0.4904 89 | Test Loss: 1.7497, Train Loss: 1.7265, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.5266 90 | Test Loss: 1.7539, Train Loss: 1.7529, Test Emotion Accuracy: 0.8141, Test Feature Accuracy: 0.5436 91 | Test Loss: 1.6751, Train Loss: 1.6863, Test Emotion Accuracy: 0.8183, Test Feature Accuracy: 0.5713 92 | Test Loss: 1.6425, Train Loss: 1.6349, Test Emotion Accuracy: 0.8055, Test Feature Accuracy: 0.6004 93 | Test Loss: 1.6166, Train Loss: 1.6131, Test Emotion Accuracy: 0.8112, Test Feature Accuracy: 0.6125 94 | Test Loss: 1.6314, Train Loss: 1.5644, Test Emotion Accuracy: 0.7928, Test Feature Accuracy: 0.6111 95 | Test Loss: 1.6070, Train Loss: 1.4925, Test Emotion Accuracy: 0.8141, Test Feature Accuracy: 0.5848 96 | Test Loss: 1.6074, Train Loss: 1.4957, Test Emotion Accuracy: 0.8155, Test Feature Accuracy: 0.6097 97 | Test Loss: 1.5366, Train Loss: 1.4868, Test Emotion Accuracy: 0.8169, Test Feature Accuracy: 0.6480 98 | Test Loss: 1.5317, Train Loss: 1.4379, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.6487 99 | Test Loss: 1.4629, Train Loss: 1.4188, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.6764 100 | Test Loss: 1.4392, Train Loss: 1.3674, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.6969 101 | Test Loss: 1.5116, Train Loss: 1.3786, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.6678 102 | Test Loss: 1.4143, Train Loss: 1.3518, Test Emotion Accuracy: 0.8006, Test Feature Accuracy: 0.6891 103 | Test Loss: 1.4408, Train Loss: 1.3321, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.7069 104 | Test Loss: 1.4189, Train Loss: 1.3043, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.7140 105 | Test Loss: 1.4258, Train Loss: 1.3026, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.6820 106 | Test Loss: 1.4612, Train Loss: 1.3038, Test Emotion Accuracy: 0.8126, Test Feature Accuracy: 0.6764 107 | Test Loss: 1.3528, Train Loss: 1.2597, Test Emotion Accuracy: 0.8105, Test Feature Accuracy: 0.7260 108 | IN: TextClassificationModel_BiGRU 109 
| Test Loss: 4.6569, Train Loss: 4.7395, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.0987 110 | Test Loss: 4.2490, Train Loss: 4.5065, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1164 111 | Test Loss: 3.8521, Train Loss: 4.1671, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1093 112 | Test Loss: 3.6989, Train Loss: 3.9358, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1128 113 | Test Loss: 3.4895, Train Loss: 3.7582, Test Emotion Accuracy: 0.6757, Test Feature Accuracy: 0.1100 114 | Test Loss: 3.3489, Train Loss: 3.5637, Test Emotion Accuracy: 0.7346, Test Feature Accuracy: 0.1994 115 | Test Loss: 3.2211, Train Loss: 3.4231, Test Emotion Accuracy: 0.7260, Test Feature Accuracy: 0.2960 116 | Test Loss: 3.0969, Train Loss: 3.3200, Test Emotion Accuracy: 0.6927, Test Feature Accuracy: 0.3180 117 | Test Loss: 2.9887, Train Loss: 3.2090, Test Emotion Accuracy: 0.7040, Test Feature Accuracy: 0.3542 118 | Test Loss: 2.8897, Train Loss: 3.1067, Test Emotion Accuracy: 0.7090, Test Feature Accuracy: 0.3655 119 | Test Loss: 2.7770, Train Loss: 3.0112, Test Emotion Accuracy: 0.7260, Test Feature Accuracy: 0.3691 120 | Test Loss: 2.6810, Train Loss: 2.9102, Test Emotion Accuracy: 0.7310, Test Feature Accuracy: 0.3719 121 | Test Loss: 2.5654, Train Loss: 2.8193, Test Emotion Accuracy: 0.7502, Test Feature Accuracy: 0.4038 122 | Test Loss: 2.4521, Train Loss: 2.7193, Test Emotion Accuracy: 0.8027, Test Feature Accuracy: 0.4698 123 | Test Loss: 2.3535, Train Loss: 2.6071, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.5295 124 | Test Loss: 2.2644, Train Loss: 2.5414, Test Emotion Accuracy: 0.8112, Test Feature Accuracy: 0.5962 125 | Test Loss: 2.1761, Train Loss: 2.4639, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.6111 126 | Test Loss: 2.1016, Train Loss: 2.3785, Test Emotion Accuracy: 0.8148, Test Feature Accuracy: 0.6324 127 | Test Loss: 2.0471, Train Loss: 2.3018, Test Emotion Accuracy: 0.8204, Test Feature Accuracy: 0.6352 128 | Test Loss: 1.9733, Train Loss: 2.2519, Test Emotion Accuracy: 0.8183, Test Feature Accuracy: 0.6309 129 | Test Loss: 1.9266, Train Loss: 2.1917, Test Emotion Accuracy: 0.8183, Test Feature Accuracy: 0.6579 130 | Test Loss: 1.8715, Train Loss: 2.1488, Test Emotion Accuracy: 0.8211, Test Feature Accuracy: 0.6927 131 | Test Loss: 1.8216, Train Loss: 2.0873, Test Emotion Accuracy: 0.8247, Test Feature Accuracy: 0.7026 132 | Test Loss: 1.7722, Train Loss: 2.0439, Test Emotion Accuracy: 0.8254, Test Feature Accuracy: 0.7182 133 | Test Loss: 1.7351, Train Loss: 1.9937, Test Emotion Accuracy: 0.8240, Test Feature Accuracy: 0.7402 134 | Test Loss: 1.6853, Train Loss: 1.9608, Test Emotion Accuracy: 0.8290, Test Feature Accuracy: 0.7331 135 | Test Loss: 1.6532, Train Loss: 1.9225, Test Emotion Accuracy: 0.8275, Test Feature Accuracy: 0.7608 136 | Test Loss: 1.6238, Train Loss: 1.8794, Test Emotion Accuracy: 0.8268, Test Feature Accuracy: 0.7679 137 | Test Loss: 1.5797, Train Loss: 1.8332, Test Emotion Accuracy: 0.8318, Test Feature Accuracy: 0.7729 138 | Test Loss: 1.5447, Train Loss: 1.8006, Test Emotion Accuracy: 0.8325, Test Feature Accuracy: 0.7757 139 | Test Loss: 1.5099, Train Loss: 1.7661, Test Emotion Accuracy: 0.8325, Test Feature Accuracy: 0.7771 140 | Test Loss: 1.4878, Train Loss: 1.7292, Test Emotion Accuracy: 0.8297, Test Feature Accuracy: 0.7793 141 | Test Loss: 1.4566, Train Loss: 1.6824, Test Emotion Accuracy: 0.8332, Test Feature Accuracy: 0.7757 142 | Test Loss: 1.4321, Train Loss: 1.6496, Test Emotion 
Accuracy: 0.8325, Test Feature Accuracy: 0.7779 143 | Test Loss: 1.4128, Train Loss: 1.6437, Test Emotion Accuracy: 0.8311, Test Feature Accuracy: 0.7786 144 | Test Loss: 1.4057, Train Loss: 1.6282, Test Emotion Accuracy: 0.8275, Test Feature Accuracy: 0.7793 145 | Test Loss: 1.3745, Train Loss: 1.6149, Test Emotion Accuracy: 0.8325, Test Feature Accuracy: 0.7800 146 | Test Loss: 1.3549, Train Loss: 1.5447, Test Emotion Accuracy: 0.8346, Test Feature Accuracy: 0.7821 147 | Test Loss: 1.3395, Train Loss: 1.5354, Test Emotion Accuracy: 0.8353, Test Feature Accuracy: 0.7821 148 | Test Loss: 1.3245, Train Loss: 1.5304, Test Emotion Accuracy: 0.8339, Test Feature Accuracy: 0.7828 149 | Test Loss: 1.3101, Train Loss: 1.5016, Test Emotion Accuracy: 0.8346, Test Feature Accuracy: 0.7864 150 | Test Loss: 1.2945, Train Loss: 1.4865, Test Emotion Accuracy: 0.8353, Test Feature Accuracy: 0.7814 151 | Test Loss: 1.2856, Train Loss: 1.4641, Test Emotion Accuracy: 0.8361, Test Feature Accuracy: 0.7864 152 | Test Loss: 1.2812, Train Loss: 1.4446, Test Emotion Accuracy: 0.8368, Test Feature Accuracy: 0.7842 153 | Test Loss: 1.2802, Train Loss: 1.4127, Test Emotion Accuracy: 0.8375, Test Feature Accuracy: 0.7928 154 | Test Loss: 1.2635, Train Loss: 1.4218, Test Emotion Accuracy: 0.8353, Test Feature Accuracy: 0.7857 155 | Test Loss: 1.2563, Train Loss: 1.3853, Test Emotion Accuracy: 0.8396, Test Feature Accuracy: 0.7906 156 | Test Loss: 1.2330, Train Loss: 1.3833, Test Emotion Accuracy: 0.8396, Test Feature Accuracy: 0.7942 157 | Test Loss: 1.2280, Train Loss: 1.3745, Test Emotion Accuracy: 0.8396, Test Feature Accuracy: 0.7991 158 | Test Loss: 1.2185, Train Loss: 1.3555, Test Emotion Accuracy: 0.8382, Test Feature Accuracy: 0.7991 159 | IN: TextClassificationModel_BiLSTM_MHA 160 | Test Loss: 3.9264, Train Loss: 4.3928, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 161 | Test Loss: 3.8289, Train Loss: 3.9937, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1065 162 | Test Loss: 3.7979, Train Loss: 3.9226, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 163 | Test Loss: 3.7901, Train Loss: 3.9144, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1086 164 | Test Loss: 3.7626, Train Loss: 3.8803, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 165 | Test Loss: 3.7474, Train Loss: 3.7889, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 166 | Test Loss: 3.2377, Train Loss: 3.5023, Test Emotion Accuracy: 0.6806, Test Feature Accuracy: 0.1810 167 | Test Loss: 3.0302, Train Loss: 3.1555, Test Emotion Accuracy: 0.7445, Test Feature Accuracy: 0.2044 168 | Test Loss: 2.8703, Train Loss: 3.0004, Test Emotion Accuracy: 0.7857, Test Feature Accuracy: 0.2505 169 | Test Loss: 2.5369, Train Loss: 2.7498, Test Emotion Accuracy: 0.8219, Test Feature Accuracy: 0.3634 170 | Test Loss: 2.4005, Train Loss: 2.5357, Test Emotion Accuracy: 0.8119, Test Feature Accuracy: 0.3804 171 | Test Loss: 2.2291, Train Loss: 2.3207, Test Emotion Accuracy: 0.7991, Test Feature Accuracy: 0.3918 172 | Test Loss: 2.1134, Train Loss: 2.2269, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.4010 173 | Test Loss: 2.0323, Train Loss: 2.1081, Test Emotion Accuracy: 0.7899, Test Feature Accuracy: 0.4329 174 | Test Loss: 1.9619, Train Loss: 2.0334, Test Emotion Accuracy: 0.7991, Test Feature Accuracy: 0.4720 175 | Test Loss: 1.8651, Train Loss: 1.9526, Test Emotion Accuracy: 0.8091, Test Feature Accuracy: 0.5131 176 | Test Loss: 1.8445, Train Loss: 1.9040, Test Emotion Accuracy: 
0.8006, Test Feature Accuracy: 0.5366 177 | Test Loss: 1.8084, Train Loss: 1.8456, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.5224 178 | Test Loss: 1.7544, Train Loss: 1.7927, Test Emotion Accuracy: 0.8006, Test Feature Accuracy: 0.5486 179 | Test Loss: 1.7472, Train Loss: 1.7815, Test Emotion Accuracy: 0.7999, Test Feature Accuracy: 0.5465 180 | Test Loss: 1.7227, Train Loss: 1.7018, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.5749 181 | Test Loss: 1.6571, Train Loss: 1.6757, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.5735 182 | Test Loss: 1.6744, Train Loss: 1.6459, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.5827 183 | Test Loss: 1.6003, Train Loss: 1.6437, Test Emotion Accuracy: 0.8034, Test Feature Accuracy: 0.6210 184 | Test Loss: 1.5614, Train Loss: 1.5664, Test Emotion Accuracy: 0.7885, Test Feature Accuracy: 0.6345 185 | Test Loss: 1.5571, Train Loss: 1.5180, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.6395 186 | Test Loss: 1.5314, Train Loss: 1.5714, Test Emotion Accuracy: 0.8027, Test Feature Accuracy: 0.6224 187 | Test Loss: 1.5334, Train Loss: 1.4939, Test Emotion Accuracy: 0.7857, Test Feature Accuracy: 0.6224 188 | Test Loss: 1.4856, Train Loss: 1.4585, Test Emotion Accuracy: 0.8034, Test Feature Accuracy: 0.6473 189 | Test Loss: 1.4602, Train Loss: 1.4601, Test Emotion Accuracy: 0.8041, Test Feature Accuracy: 0.6906 190 | Test Loss: 1.4661, Train Loss: 1.4132, Test Emotion Accuracy: 0.8006, Test Feature Accuracy: 0.6906 191 | Test Loss: 1.4743, Train Loss: 1.4039, Test Emotion Accuracy: 0.8041, Test Feature Accuracy: 0.6622 192 | Test Loss: 1.3826, Train Loss: 1.3661, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.6955 193 | Test Loss: 1.4353, Train Loss: 1.3699, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.6877 194 | Test Loss: 1.3718, Train Loss: 1.3301, Test Emotion Accuracy: 0.7835, Test Feature Accuracy: 0.7076 195 | Test Loss: 1.3213, Train Loss: 1.3298, Test Emotion Accuracy: 0.8091, Test Feature Accuracy: 0.7104 196 | Test Loss: 1.3380, Train Loss: 1.3232, Test Emotion Accuracy: 0.8041, Test Feature Accuracy: 0.7012 197 | Test Loss: 1.3632, Train Loss: 1.2843, Test Emotion Accuracy: 0.8091, Test Feature Accuracy: 0.6991 198 | Test Loss: 1.3190, Train Loss: 1.2916, Test Emotion Accuracy: 0.8048, Test Feature Accuracy: 0.7282 199 | Test Loss: 1.3019, Train Loss: 1.2805, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.7218 200 | Test Loss: 1.2884, Train Loss: 1.2336, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.7076 201 | Test Loss: 1.2701, Train Loss: 1.2272, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.7189 202 | Test Loss: 1.2366, Train Loss: 1.2072, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.7346 203 | Test Loss: 1.2391, Train Loss: 1.1913, Test Emotion Accuracy: 0.8062, Test Feature Accuracy: 0.7402 204 | Test Loss: 1.2264, Train Loss: 1.1721, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.7239 205 | Test Loss: 1.2108, Train Loss: 1.1740, Test Emotion Accuracy: 0.8091, Test Feature Accuracy: 0.7402 206 | Test Loss: 1.2185, Train Loss: 1.1842, Test Emotion Accuracy: 0.7793, Test Feature Accuracy: 0.7275 207 | Test Loss: 1.2061, Train Loss: 1.1674, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.7331 208 | Test Loss: 1.1899, Train Loss: 1.1432, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.7367 209 | Test Loss: 1.2370, Train Loss: 1.1433, Test Emotion Accuracy: 0.8062, Test Feature Accuracy: 0.7268 210 | IN: 
TextClassificationModel_GRU_MHA 211 | Test Loss: 3.9194, Train Loss: 4.4221, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 212 | Test Loss: 3.8218, Train Loss: 3.9896, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 213 | Test Loss: 3.8410, Train Loss: 3.9444, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 214 | Test Loss: 3.7956, Train Loss: 3.9039, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 215 | Test Loss: 3.7774, Train Loss: 3.8829, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 216 | Test Loss: 3.7142, Train Loss: 3.8493, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079 217 | Test Loss: 3.6686, Train Loss: 3.7217, Test Emotion Accuracy: 0.6657, Test Feature Accuracy: 0.1143 218 | Test Loss: 3.5775, Train Loss: 3.5764, Test Emotion Accuracy: 0.6764, Test Feature Accuracy: 0.1746 219 | Test Loss: 3.3950, Train Loss: 3.3748, Test Emotion Accuracy: 0.7587, Test Feature Accuracy: 0.1796 220 | Test Loss: 3.3404, Train Loss: 3.2367, Test Emotion Accuracy: 0.7850, Test Feature Accuracy: 0.1909 221 | Test Loss: 3.3236, Train Loss: 3.1694, Test Emotion Accuracy: 0.7786, Test Feature Accuracy: 0.1923 222 | Test Loss: 3.2988, Train Loss: 3.0783, Test Emotion Accuracy: 0.7807, Test Feature Accuracy: 0.2150 223 | Test Loss: 3.2823, Train Loss: 3.0189, Test Emotion Accuracy: 0.7913, Test Feature Accuracy: 0.2051 224 | Test Loss: 3.2536, Train Loss: 2.9686, Test Emotion Accuracy: 0.7942, Test Feature Accuracy: 0.2257 225 | Test Loss: 3.1656, Train Loss: 2.9383, Test Emotion Accuracy: 0.7970, Test Feature Accuracy: 0.2292 226 | Test Loss: 3.1030, Train Loss: 2.8697, Test Emotion Accuracy: 0.7942, Test Feature Accuracy: 0.2271 227 | Test Loss: 3.0471, Train Loss: 2.8092, Test Emotion Accuracy: 0.8041, Test Feature Accuracy: 0.2328 228 | Test Loss: 2.9973, Train Loss: 2.7289, Test Emotion Accuracy: 0.8148, Test Feature Accuracy: 0.2378 229 | Test Loss: 2.9485, Train Loss: 2.6642, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.2605 230 | Test Loss: 2.7997, Train Loss: 2.5745, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.2825 231 | Test Loss: 2.7365, Train Loss: 2.5104, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.3144 232 | Test Loss: 2.6165, Train Loss: 2.4816, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.3456 233 | Test Loss: 2.5426, Train Loss: 2.3920, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.3534 234 | Test Loss: 2.4952, Train Loss: 2.3227, Test Emotion Accuracy: 0.8119, Test Feature Accuracy: 0.3712 235 | Test Loss: 2.4559, Train Loss: 2.2649, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.3712 236 | Test Loss: 2.3682, Train Loss: 2.2326, Test Emotion Accuracy: 0.8141, Test Feature Accuracy: 0.4159 237 | Test Loss: 2.3444, Train Loss: 2.1221, Test Emotion Accuracy: 0.8055, Test Feature Accuracy: 0.4116 238 | Test Loss: 2.2808, Train Loss: 2.0614, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.4826 239 | Test Loss: 2.2388, Train Loss: 1.9880, Test Emotion Accuracy: 0.8105, Test Feature Accuracy: 0.4975 240 | Test Loss: 2.1847, Train Loss: 1.9576, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.4826 241 | Test Loss: 2.1397, Train Loss: 1.8877, Test Emotion Accuracy: 0.8062, Test Feature Accuracy: 0.5209 242 | Test Loss: 2.1226, Train Loss: 1.8441, Test Emotion Accuracy: 0.8148, Test Feature Accuracy: 0.5089 243 | Test Loss: 2.1036, Train Loss: 1.7993, Test Emotion Accuracy: 0.8126, Test Feature Accuracy: 0.5160 244 | Test Loss: 2.0975, 
Train Loss: 1.7630, Test Emotion Accuracy: 0.8155, Test Feature Accuracy: 0.5330
245 | Test Loss: 2.0967, Train Loss: 1.7376, Test Emotion Accuracy: 0.8148, Test Feature Accuracy: 0.5316
246 | Test Loss: 2.0037, Train Loss: 1.6979, Test Emotion Accuracy: 0.8233, Test Feature Accuracy: 0.5507
247 | Test Loss: 2.0065, Train Loss: 1.6774, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.5522
248 | Test Loss: 2.0174, Train Loss: 1.6724, Test Emotion Accuracy: 0.8155, Test Feature Accuracy: 0.5465
249 | Test Loss: 1.9827, Train Loss: 1.6249, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.5444
250 | Test Loss: 1.9544, Train Loss: 1.6109, Test Emotion Accuracy: 0.8105, Test Feature Accuracy: 0.5770
251 | Test Loss: 1.9931, Train Loss: 1.5577, Test Emotion Accuracy: 0.8219, Test Feature Accuracy: 0.5571
252 | Test Loss: 1.9319, Train Loss: 1.5163, Test Emotion Accuracy: 0.8226, Test Feature Accuracy: 0.5678
253 | Test Loss: 1.8665, Train Loss: 1.4834, Test Emotion Accuracy: 0.8141, Test Feature Accuracy: 0.5898
254 | Test Loss: 1.9364, Train Loss: 1.4445, Test Emotion Accuracy: 0.8148, Test Feature Accuracy: 0.5869
255 | Test Loss: 1.8664, Train Loss: 1.3955, Test Emotion Accuracy: 0.8183, Test Feature Accuracy: 0.6026
256 | Test Loss: 1.8427, Train Loss: 1.3287, Test Emotion Accuracy: 0.8176, Test Feature Accuracy: 0.6288
257 | Test Loss: 1.7316, Train Loss: 1.2862, Test Emotion Accuracy: 0.8219, Test Feature Accuracy: 0.6700
258 | Test Loss: 1.7944, Train Loss: 1.2236, Test Emotion Accuracy: 0.8233, Test Feature Accuracy: 0.6799
259 | Test Loss: 1.7351, Train Loss: 1.1992, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.6962
260 | Test Loss: 1.6778, Train Loss: 1.1384, Test Emotion Accuracy: 0.8219, Test Feature Accuracy: 0.6941
261 | IN: TextClassificationModel_BiGRU_MHA
262 | Test Loss: 3.8542, Train Loss: 4.3565, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079
263 | Test Loss: 3.8265, Train Loss: 3.9834, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079
264 | Test Loss: 3.8095, Train Loss: 3.9266, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079
265 | Test Loss: 3.6557, Train Loss: 3.8578, Test Emotion Accuracy: 0.6650, Test Feature Accuracy: 0.1079
266 | Test Loss: 3.1155, Train Loss: 3.4786, Test Emotion Accuracy: 0.7182, Test Feature Accuracy: 0.2832
267 | Test Loss: 2.6911, Train Loss: 3.0066, Test Emotion Accuracy: 0.7750, Test Feature Accuracy: 0.3272
268 | Test Loss: 2.4061, Train Loss: 2.6514, Test Emotion Accuracy: 0.7935, Test Feature Accuracy: 0.3854
269 | Test Loss: 1.9733, Train Loss: 2.2754, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.5302
270 | Test Loss: 1.6603, Train Loss: 1.9491, Test Emotion Accuracy: 0.8013, Test Feature Accuracy: 0.6813
271 | Test Loss: 1.4618, Train Loss: 1.6943, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.7317
272 | Test Loss: 1.3173, Train Loss: 1.5225, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.7708
273 | Test Loss: 1.2284, Train Loss: 1.3753, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.7814
274 | Test Loss: 1.1350, Train Loss: 1.2824, Test Emotion Accuracy: 0.8112, Test Feature Accuracy: 0.7935
275 | Test Loss: 1.0887, Train Loss: 1.2071, Test Emotion Accuracy: 0.8190, Test Feature Accuracy: 0.7984
276 | Test Loss: 1.0447, Train Loss: 1.1401, Test Emotion Accuracy: 0.8176, Test Feature Accuracy: 0.8474
277 | Test Loss: 0.9859, Train Loss: 1.0987, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.8559
278 | Test Loss: 0.9197, Train Loss: 1.0254, Test Emotion Accuracy: 0.7970, Test Feature Accuracy: 0.8808
279 | Test Loss: 0.9127, Train Loss: 0.9850, Test Emotion Accuracy: 0.8126, Test Feature Accuracy: 0.8850
280 | Test Loss: 0.8827, Train Loss: 0.9419, Test Emotion Accuracy: 0.8048, Test Feature Accuracy: 0.8935
281 | Test Loss: 0.8706, Train Loss: 0.9390, Test Emotion Accuracy: 0.7835, Test Feature Accuracy: 0.9077
282 | Test Loss: 0.8342, Train Loss: 0.8713, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.9006
283 | Test Loss: 0.7978, Train Loss: 0.8580, Test Emotion Accuracy: 0.8091, Test Feature Accuracy: 0.9177
284 | Test Loss: 0.7729, Train Loss: 0.8467, Test Emotion Accuracy: 0.8062, Test Feature Accuracy: 0.9297
285 | Test Loss: 0.7523, Train Loss: 0.8215, Test Emotion Accuracy: 0.8155, Test Feature Accuracy: 0.9418
286 | Test Loss: 0.7653, Train Loss: 0.7839, Test Emotion Accuracy: 0.8041, Test Feature Accuracy: 0.9383
287 | Test Loss: 0.7331, Train Loss: 0.7711, Test Emotion Accuracy: 0.8091, Test Feature Accuracy: 0.9304
288 | Test Loss: 0.7187, Train Loss: 0.7662, Test Emotion Accuracy: 0.8141, Test Feature Accuracy: 0.9454
289 | Test Loss: 0.7082, Train Loss: 0.7374, Test Emotion Accuracy: 0.8190, Test Feature Accuracy: 0.9503
290 | Test Loss: 0.7169, Train Loss: 0.7403, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.9496
291 | Test Loss: 0.6945, Train Loss: 0.7300, Test Emotion Accuracy: 0.8133, Test Feature Accuracy: 0.9503
292 | Test Loss: 0.6863, Train Loss: 0.7232, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.9517
293 | Test Loss: 0.6957, Train Loss: 0.7180, Test Emotion Accuracy: 0.8098, Test Feature Accuracy: 0.9517
294 | Test Loss: 0.6889, Train Loss: 0.6925, Test Emotion Accuracy: 0.8077, Test Feature Accuracy: 0.9468
295 | Test Loss: 0.6816, Train Loss: 0.6662, Test Emotion Accuracy: 0.8048, Test Feature Accuracy: 0.9581
296 | Test Loss: 0.6805, Train Loss: 0.6671, Test Emotion Accuracy: 0.8119, Test Feature Accuracy: 0.9517
297 | Test Loss: 0.6725, Train Loss: 0.6603, Test Emotion Accuracy: 0.8084, Test Feature Accuracy: 0.9595
298 | Test Loss: 0.6667, Train Loss: 0.6693, Test Emotion Accuracy: 0.8169, Test Feature Accuracy: 0.9588
299 | Test Loss: 0.6475, Train Loss: 0.6635, Test Emotion Accuracy: 0.8112, Test Feature Accuracy: 0.9666
300 | Test Loss: 0.6732, Train Loss: 0.6451, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.9645
301 | Test Loss: 0.6663, Train Loss: 0.6319, Test Emotion Accuracy: 0.8148, Test Feature Accuracy: 0.9581
302 | Test Loss: 0.6589, Train Loss: 0.6416, Test Emotion Accuracy: 0.8105, Test Feature Accuracy: 0.9652
303 | Test Loss: 0.6424, Train Loss: 0.6114, Test Emotion Accuracy: 0.8119, Test Feature Accuracy: 0.9674
304 | Test Loss: 0.6453, Train Loss: 0.6184, Test Emotion Accuracy: 0.8070, Test Feature Accuracy: 0.9702
305 | Test Loss: 0.6473, Train Loss: 0.6111, Test Emotion Accuracy: 0.8126, Test Feature Accuracy: 0.9666
306 | Test Loss: 0.6293, Train Loss: 0.6075, Test Emotion Accuracy: 0.8105, Test Feature Accuracy: 0.9688
307 | Test Loss: 0.6159, Train Loss: 0.6047, Test Emotion Accuracy: 0.8155, Test Feature Accuracy: 0.9737
308 | Test Loss: 0.6384, Train Loss: 0.5931, Test Emotion Accuracy: 0.8148, Test Feature Accuracy: 0.9695
309 | Test Loss: 0.6305, Train Loss: 0.5916, Test Emotion Accuracy: 0.8162, Test Feature Accuracy: 0.9744
310 | Test Loss: 0.6096, Train Loss: 0.5897, Test Emotion Accuracy: 0.8126, Test Feature Accuracy: 0.9759
311 | Test Loss: 0.6342, Train Loss: 0.5812, Test Emotion Accuracy: 0.7906, Test Feature Accuracy: 0.9709
312 |
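A note for anyone reading these raw logs: each epoch line reports one summed loss and two accuracies because every model in this repo has two output heads (2 emotion classes, 10 feature classes). The actual loop lives in main.py and is not reproduced in this dump; what follows is only a minimal PyTorch sketch of an evaluation step that would print lines in this format. The names `model`, `test_loader`, the `train_loss` argument, and the two-head return value are assumptions for illustration, not the repo's actual code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def evaluate(model, test_loader, device, train_loss):
    """Hypothetical evaluation step that prints one log line per epoch.

    Assumes model(texts) returns (emotion_logits, feature_logits) with
    2 emotion classes and 10 feature classes; train_loss is whatever the
    preceding training pass averaged.
    """
    model.eval()
    total_loss, emo_hits, feat_hits, n = 0.0, 0, 0, 0
    for texts, emo_y, feat_y in test_loader:
        texts, emo_y, feat_y = texts.to(device), emo_y.to(device), feat_y.to(device)
        emo_logits, feat_logits = model(texts)
        # Summed multi-task cross-entropy over the two heads (an assumption).
        loss = F.cross_entropy(emo_logits, emo_y) + F.cross_entropy(feat_logits, feat_y)
        total_loss += loss.item() * texts.size(0)
        # Per-head accuracy: count argmax hits over the whole test set.
        emo_hits += (emo_logits.argmax(dim=1) == emo_y).sum().item()
        feat_hits += (feat_logits.argmax(dim=1) == feat_y).sum().item()
        n += texts.size(0)
    print(f"Test Loss: {total_loss / n:.4f}, Train Loss: {train_loss:.4f}, "
          f"Test Emotion Accuracy: {emo_hits / n:.4f}, "
          f"Test Feature Accuracy: {feat_hits / n:.4f}")
```

Under this summed-objective assumption, a randomly initialized two-head model would start near ln 2 + ln 10 ≈ 3.0 test loss, which is roughly the scale of the first BiGRU_MHA epochs above.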
--------------------------------------------------------------------------------
/sources/Figure_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/sources/Figure_1.png
--------------------------------------------------------------------------------
/sources/Figure_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/sources/Figure_2.png
--------------------------------------------------------------------------------
/sources/Figure_3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/sources/Figure_3.png
--------------------------------------------------------------------------------
/sources/Figure_4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/sources/Figure_4.png
--------------------------------------------------------------------------------
/sources/Figure_5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/sources/Figure_5.png
--------------------------------------------------------------------------------
/sources/Figure_6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedForestLonvor/Performance-in-Fine-Classification-of-Short-Text-in-Commodity-Finance/c1bc97e941d684c47262fc1bb7a617fee90b0769/sources/Figure_6.png
--------------------------------------------------------------------------------