This is a paper reading list on Document-level Relation Extraction.

Our list is still incomplete and the categorization may not always be appropriate. We will keep adding papers and improving the list. Any suggestions are welcome!

## Doc RE Papers

### 2021

* EMNLP 2021 [Learning Logic Rules for Document-level Relation Extraction](https://aclanthology.org/2021.emnlp-main.95/)

* ACL 2021 [Three Sentences Are All You Need — Local Path Enhanced Document Relation Extraction](https://arxiv.org/abs/2106.01793), [code](https://github.com/AndrewZhe/Three-Sentences-Are-All-You-Need), Quzhe Huang, Shengqi Zhu, Yansong Feng, Yuan Ye, Yuxuan Lai and Dongyan Zhao
<br> 👉 Method: Uses heuristic rules to select at most three sentences for each entity pair; a minimal selection sketch is shown below.
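
A minimal sketch of this kind of heuristic selection, assuming we already know which sentences mention the head entity, the tail entity, and a possible bridge entity; `select_sentences` and its rules are illustrative, not the authors' exact heuristics:

```python
# Hedged sketch: heuristic selection of at most three sentences for one entity pair.

def select_sentences(head_sents, tail_sents, bridge_sents, max_sents=3):
    """head_sents / tail_sents: indices of sentences mentioning the head / tail entity.
    bridge_sents: indices of sentences linked to the pair via a bridge entity or coreference.
    """
    # Rule 1: if head and tail co-occur in a sentence, that sentence is usually enough.
    co_occur = sorted(set(head_sents) & set(tail_sents))
    if co_occur:
        return co_occur[:max_sents]

    # Rule 2: otherwise pick one sentence for the head, one for the tail,
    # and optionally one bridging sentence that connects them.
    selected = []
    if head_sents:
        selected.append(min(head_sents))
    if tail_sents:
        selected.append(min(tail_sents))
    for s in bridge_sents:
        if len(selected) >= max_sents:
            break
        if s not in selected:
            selected.append(s)
    return sorted(selected[:max_sents])


print(select_sentences(head_sents=[1, 4], tail_sents=[6], bridge_sents=[2]))  # [1, 2, 6]
```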
* ACL 2021 Findings [SIRE: Separate Intra- and Inter-sentential Reasoning for Document-level Relation Extraction](https://arxiv.org/abs/2106.01709), [code](https://github.com/DreamInvoker/SIRE), Shuang Zeng, Yuting Wu, Baobao Chang
<br> 👉 Method: Separates intra- and inter-sentential relations: for intra-sentential pairs, the sentence is used for the mention-pair representation and all mention pairs are aggregated into an entity-pair representation; for inter-sentential pairs, the graph from GAIN is used. In addition, a novel logical reasoning method is applied.

* ACL 2021 Findings [Discriminative Reasoning for Document-level Relation Extraction](https://arxiv.org/abs/2106.01562), [code](https://github.com/xwjim/DRN), Wang Xu, Kehai Chen, Tiejun Zhao
<br> 👉 Method: Represents three types of reasoning paths for each entity pair: the intra-sentence reasoning path, the logical reasoning path, and the coreference reasoning path.

* ACL 2021 Findings [MRN: A Locally and Globally Mention-Based Reasoning Network for Document-Level Relation Extraction](https://aclanthology.org/2021.findings-acl.117.pdf), Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren and Donghong Ji

* EACL 2021 [An End-to-end Model for Entity-level Relation Extraction using Multi-instance Learning](https://arxiv.org/abs/2102.05980), Markus Eberts, Adrian Ulges
<br> 👉 Method: A model for joint mention detection and doc RE: fine-tunes BERT on four tasks, namely entity mention localization, coreference resolution, entity classification, and relation classification.

* IJCAI 2021 [Document-level Relation Extraction as Semantic Segmentation](https://arxiv.org/abs/2106.03618), [code](https://github.com/zjunlp/DocuNet)
<br> 👉 Method: Captures the correlation between relations with a U-Net over the entity-pair matrix, which enlarges the receptive field; a simplified sketch is shown below.
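
As a rough illustration of treating the entity-pair table as an image, the sketch below builds an entity-pair feature map and applies stacked convolutions so that the score of pair (i, j) can depend on neighbouring pairs such as (i, k) and (k, j). The real model uses a U-shaped encoder-decoder; `PairMatrixConvScorer` and its sizes are simplified assumptions:

```python
import torch
import torch.nn as nn

class PairMatrixConvScorer(nn.Module):
    """Score all entity pairs by convolving over the pair matrix (simplified sketch)."""
    def __init__(self, hidden, channels=64, num_rels=97):
        super().__init__()
        self.pair_proj = nn.Linear(2 * hidden, channels)
        self.conv = nn.Sequential(                  # stand-in for the U-shaped network
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(channels, num_rels)

    def forward(self, entity_embs):                 # [num_entities, hidden]
        n, h = entity_embs.shape
        head = entity_embs.unsqueeze(1).expand(n, n, h)
        tail = entity_embs.unsqueeze(0).expand(n, n, h)
        pair = self.pair_proj(torch.cat([head, tail], dim=-1))             # [n, n, c]
        feat = self.conv(pair.permute(2, 0, 1).unsqueeze(0).contiguous())  # [1, c, n, n]
        feat = feat.squeeze(0).permute(1, 2, 0)                            # [n, n, c]
        return self.classifier(feat)                                       # [n, n, num_rels]

print(PairMatrixConvScorer(hidden=768)(torch.randn(5, 768)).shape)  # torch.Size([5, 5, 97])
```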
* AAAI 2021 [Document-Level Relation Extraction with Reconstruction](https://arxiv.org/abs/2012.11384), Wang Xu, Kehai Chen, Tiejun Zhao
<br> 👉 Method: Builds a graph as in EoG, uses an LSTM to compute the probability of an "inference meta path", and maximizes the path probability of related entity pairs with a BCE loss.

* AAAI 2021 [Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling](https://arxiv.org/abs/2010.11304), Wenxuan Zhou, Kevin Huang, Tengyu Ma, Jing Huang
<br> 👉 Method: Improves BERT with entity markers, log-sum-exp pooling, and a group bilinear classifier; adds an adaptive-threshold class and directly uses the transformer's attention matrix to aggregate words into a document context representation. A sketch of the pooling and the adaptive-thresholding loss follows.
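
Two of these components are easy to sketch from the description above: log-sum-exp pooling of mention embeddings into an entity embedding, and an adaptive-thresholding loss with a learnable threshold (TH) class. The code is a paraphrase of the idea rather than the authors' implementation; shapes and the TH-at-index-0 convention are assumptions:

```python
import torch
import torch.nn.functional as F

def logsumexp_pool(mention_embs):
    """Pool mention embeddings [num_mentions, hidden] into one entity embedding [hidden]."""
    return torch.logsumexp(mention_embs, dim=0)

def adaptive_threshold_loss(logits, labels):
    """Adaptive thresholding: class 0 is the threshold (TH) class.

    logits: [batch, num_classes]; labels: multi-hot floats with labels[:, 0] == 0.
    Positive relations are pushed above TH, negative relations below it.
    """
    th_mask = torch.zeros_like(labels)
    th_mask[:, 0] = 1.0

    # Rank every positive class against TH.
    pos_logits = logits.masked_fill((labels + th_mask) == 0, -1e30)
    loss_pos = -(F.log_softmax(pos_logits, dim=-1) * labels).sum(dim=-1)

    # Rank TH above every negative class.
    neg_logits = logits.masked_fill(labels == 1, -1e30)
    loss_neg = -(F.log_softmax(neg_logits, dim=-1) * th_mask).sum(dim=-1)

    return (loss_pos + loss_neg).mean()

logits = torch.randn(4, 97)                        # 96 relations + TH class at index 0
labels = torch.zeros(4, 97); labels[0, 5] = 1.0    # pair 0 expresses relation 5
print(adaptive_threshold_loss(logits, labels))
# At inference time, predict every class whose logit exceeds the TH logit:
# preds = logits > logits[:, :1]
```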
* AAAI 2021 [Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction](https://arxiv.org/abs/2102.10249), Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, Zhendong Mao
<br> 👉 Method: Adds a structured attentive bias when computing attention scores, so that BERT attends more to coreferent tokens (see the sketch below).
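
A simplified version of the idea, with one learnable additive bias per structural dependency type added to the raw attention scores (the paper's actual formulation is richer), might look like this:

```python
import math
import torch
import torch.nn as nn

class StructureBiasedSelfAttention(nn.Module):
    """Single-head self-attention with an additive bias per structural dependency type."""
    def __init__(self, hidden, num_struct_types):
        super().__init__()
        self.q = nn.Linear(hidden, hidden)
        self.k = nn.Linear(hidden, hidden)
        self.v = nn.Linear(hidden, hidden)
        # One learnable scalar per dependency type (e.g. 0 = none, 1 = coreferent mentions).
        self.struct_bias = nn.Parameter(torch.zeros(num_struct_types))
        self.scale = math.sqrt(hidden)

    def forward(self, x, struct_ids):
        # x: [seq, hidden]; struct_ids: [seq, seq] dependency type of each token pair.
        scores = self.q(x) @ self.k(x).T / self.scale   # [seq, seq]
        scores = scores + self.struct_bias[struct_ids]  # structure-aware bias
        return torch.softmax(scores, dim=-1) @ self.v(x)

x = torch.randn(6, 64)
struct_ids = torch.zeros(6, 6, dtype=torch.long)
struct_ids[0, 3] = struct_ids[3, 0] = 1                 # tokens of coreferent mentions
out = StructureBiasedSelfAttention(64, num_struct_types=2)(x, struct_ids)
print(out.shape)                                        # torch.Size([6, 64])
```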
* AAAI 2021 [Multi-view Inference for Relation Extraction with Uncertain Knowledge](https://www.researchgate.net/publication/347879225_Multi-view_Inference_for_Relation_Extraction_with_Uncertain_Knowledge), Bo Li, Wei Ye, Canming Huang, Shikun Zhang
<br> 👉 Method: Uses KG concept knowledge: three attention aggregations (e2c, c2e, m2e) produce contextual and global entity-pair representations, another attention aggregation produces a sentence representation, and everything is concatenated for the final classification.

* PAKDD 2021 [Densely Connected Graph Attention Network Based on Iterative Path Reasoning for Document-Level Relation Extraction](https://link.springer.com/content/pdf/10.1007%2F978-3-030-75765-6_22.pdf), Hongya Zhang, Zhen Huang, Zhenzhen Li, Dongsheng Li, and Feng Liu
<br> 👉 Method: A densely connected graph attention network (DCGAT) for structural representation, with the same inference module as EoG.

* PAKDD 2021 [SaGCN: Structure-Aware Graph Convolution Network for Document-Level Relation Extraction](https://link.springer.com/content/pdf/10.1007%2F978-3-030-75768-7_30.pdf), Shuangji Yang, Taolin Zhang, Danning Su, Nan Hu, Wei Nong, and Xiaofeng He
<br> 👉 Method: A graph with an explicit structure (dependency tree) and an implicit structure (from a HardKuma distribution) for structural representation, plus a graph for inference.

* ECML-PKDD 2021 [NA-Aware Machine Reading Comprehension for Document-Level Relation Extraction](https://2021.ecmlpkdd.org/wp-content/uploads/2021/07/sub_591.pdf), Zhenyu Zhang, Bowen Yu, Xiaobo Shu, and Tingwen Liu
<br> 👉 Method: Uses an MRC-style encoder, aggregates mentions with a directional attention flow, and adds a NOTA (none-of-the-above) class to the label set.

* ICASSP 2021 [Multi-Granularity Heterogeneous Graph for Document-Level Relation Extraction](https://ieeexplore.ieee.org/abstract/document/9414755)
<br> 👉 Method: R-GCN for graph reasoning, and entity-aware attention for the final relation representation.

* IEEE, [Multi-Scale Feature and Metric Learning for Relation Extraction](https://arxiv.org/abs/2107.13425), Mi Zhang, Tieyun Qian

* Knowledge-Based Systems, [Document-level relation extraction using evidence reasoning on RST-GRAPH](https://www.sciencedirect.com/science/article/pii/S0950705121005360), Hailin Wang, Ke Qin, Guoming Lu, Jin Yin, Rufai Yusuf Zakari, Jim Wilson Owusu

* Information Sciences, [Document-level relation extraction with entity-selection attention](https://www.sciencedirect.com/science/article/abs/pii/S0020025521003285), Changsen Yuan, Heyan Huang, Chong Feng, Ge Shi, and Xiaochi Wei
<br> 👉 Method: Selects the essential sentence-level and document-level features from the document with inter-sentence attention and combines them through document gating.

* Pattern Recognition Letters [Document-level Relation Extraction via Graph Transformer Networks and Temporal Convolutional Networks](https://www.sciencedirect.com/science/article/pii/S016786552100218X), Yong Shi, Yang Xiao, Pei Quan, MingLong Lei, and Lingfeng Niu
<br> 👉 Method: Temporal convolutional networks as the encoder, a graph built with heuristic rules, and graph transformer networks for path generation.

* arXiv 2021 [SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction](https://arxiv.org/pdf/2109.12093.pdf), Yuxin Xiao, Zecheng Zhang, Yuning Mao, Carl Yang, Jiawei Han

* arXiv 2021 [Modular Self-Supervision for Document-Level Relation Extraction](https://arxiv.org/pdf/2109.05362), Sheng Zhang, Cliff Wong, Naoto Usuyama, Sarthak Jain, Tristan Naumann, Hoifung Poon
<br> 👉 Method: Decomposes document-level relation extraction into relation detection and argument resolution.

* arXiv 2021 [Eider: Evidence-enhanced Document-level Relation Extraction](https://arxiv.org/abs/2106.08657), Yiqing Xie, Jiaming Shen, Sha Li, Yuning Mao, Jiawei Han
<br> 👉 Method: Predicts evidence sentences to construct a pseudo document, and uses a blend layer to combine the predictions on the original and pseudo documents.

* arXiv 2021 [BERT-GT: Cross-sentence n-ary relation extraction with BERT and Graph Transformer](https://arxiv.org/abs/2101.04158), Po-Ting Lai, Zhiyong Lu
<br> 👉 Method: Densely connects a Graph Transformer (neighbor attention) and a Transformer to improve BERT.

* arXiv 2021 [MrGCN: Mirror Graph Convolution Network for Relation Extraction with Long-Term Dependencies](http://arxiv.org/abs/2101.00124), Xiao Guo, I-Hung Hsu, Wael AbdAlmageed, Premkumar Natarajan, Nanyun Peng
<br> 👉 Method: Pooling-unpooling around GCN layers to enlarge the receptive field (U-Net style).

* arXiv 2021 [Mention-centered Graph Neural Network for Document-level Relation Extraction](http://arxiv.org/abs/2103.08200v1), Jiaxin Pan, Min Peng, Yiyan Zhang
<br> 👉 Method: Introduces mention-mention edge weights when building the graph.

### 2020

* ACL 2020 [Reasoning with Latent Structure Refinement for Document-Level Relation Extraction](https://arxiv.org/abs/2005.06312), Guoshun Nan, Zhijiang Guo, Ivan Sekulić, Wei Lu
<br> 👉 Method: Latent Structure + DCGCN

* EMNLP 2020 [Double Graph Based Reasoning for Document-level Relation Extraction](https://arxiv.org/abs/2009.13752), Shuang Zeng, Runxin Xu, Baobao Chang, Lei Li
<br> 👉 Method: Mention graph (mention + document node) + entity graph (paths of at most 2 hops)

* EMNLP 2020 [Global-to-Local Neural Networks for Document-Level Relation Extraction](https://www.aclweb.org/anthology/2020.emnlp-main.303), Difeng Wang, Wei Hu, Ermei Cao, Weijian Sun
<br> 👉 Method: EoG + R-GCN + entity-pair attention

* EMNLP 2020 [Denoising Relation Extraction from Document-level Distant Supervision](https://arxiv.org/abs/2011.03888), Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, Leyu Lin
<br> 👉 Method: Uses distantly supervised data to pre-train the model for DocRE with three tasks: Mention-Entity Matching, Relation Detection, and Relational Fact Alignment.

* COLING 2020 [Graph Enhanced Dual Attention Network for Document-Level Relation Extraction](https://www.aclweb.org/anthology/2020.coling-main.136/), Bo Li, Wei Ye, Zhonghao Sheng, Rui Xie, Xiangyu Xi, Shikun Zhang
<br> 👉 Method: Bi-directional attention between sentences and relation instances + attention duality + supporting-evidence guidance.

* COLING 2020 [Global Context-enhanced Graph Convolutional Networks for Document-level Relation Extraction](https://www.aclweb.org/anthology/2020.coling-main.461/), Huiwei Zhou, Yibin Xu, Zhe Liu, Weihong Yao, Chengkun Lang, Haibin Jiang
<br> 👉 Method: An entity graph with an attention gate and an attention adjacency matrix for entity representation, another entity graph with multi-head attention as the adjacency matrix for reasoning, and a densely connected node-and-edge GCN.

* COLING 2020 [Document-level Relation Extraction with Dual-tier Heterogeneous Graph](https://www.aclweb.org/anthology/2020.coling-main.143/), Zhenyu Zhang, Bowen Yu, Xiaobo Shu, Tingwen Liu, Hengzhu Tang, Yubin Wang and Li Guo
<br> 👉 Method: A structure modeling graph + a relation reasoning graph + weighted R-GCN.

* PAKDD 2020 [HIN: Hierarchical Inference Network for Document-Level Relation Extraction](https://arxiv.org/abs/2003.12754), Hengzhu Tang, Yanan Cao, Zhenyu Zhang, Jiangxia Cao, Fang Fang, Shi Wang, Pengfei Yin
<br> 👉 Method: Hierarchical (entity- and document-level) inference representations, built with LSTMs, attention, and various concatenations, for each entity pair.

* ICKG 2020 [Improving Document-level Relation Extraction via Contextualizing Mention Representations and Weighting Mention Pairs](https://ieeexplore.ieee.org/document/9194547), [code](https://github.com/nefujiangping/EncAttAgg), Ping Jiang, Xian-Ling Mao, Binbin Bian and Heyan Huang

* arXiv 2020 [Coarse-to-Fine Entity Representations for Document-level Relation Extraction](https://arxiv.org/abs/2012.02507), Damai Dai, Jing Ren, Shuang Zeng, Baobao Chang, Zhifang Sui
<br> 👉 Method: A word graph for the coarse representation, a Bi-GRU for path encoding, and an attention aggregator.

* arXiv 2020 [Entity and Evidence Guided Relation Extraction for DocRED](https://arxiv.org/abs/2008.12283), Kevin Huang, Guangtao Wang, Tengyu Ma, Jing Huang
<br> 👉 Method: Converts a document into N {head entity; document} samples for BERT, uses a bilinear layer for RE, and a double bilinear layer for evidence prediction as an auxiliary task.

* IEEE Access 2020 [A Novel Document-Level Relation Extraction Method Based on BERT and Entity Information](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9098945), Xiaoyu Han and Lei Wang
<br> 👉 Method: Uses markers to wrap each mention as {[entity type] mention [entity X]}, then applies BERT + a bilinear classifier; a minimal marker sketch is shown below.
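
A minimal sketch of the marker idea with Hugging Face `transformers`: wrap each mention with its type before it and an entity marker after it, read the marker embeddings out of BERT, and score the pair with a bilinear layer. The marker tokens, the example sentence, and the 97-class output are illustrative assumptions:

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")

# Illustrative marker scheme: "[TYPE] mention [Ei]" for the i-th entity.
text = "[PER] Bill Gates [E1] founded [ORG] Microsoft [E2] in 1975."
tokenizer.add_tokens(["[PER]", "[ORG]", "[E1]", "[E2]"])
encoder.resize_token_embeddings(len(tokenizer))

inputs = tokenizer(text, return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state[0]          # [seq_len, hidden]

def marker_embedding(marker):
    """Return the contextual embedding of a marker token."""
    token_id = tokenizer.convert_tokens_to_ids(marker)
    position = inputs["input_ids"][0].tolist().index(token_id)
    return hidden[position]

bilinear = nn.Bilinear(encoder.config.hidden_size, encoder.config.hidden_size, 97)
scores = bilinear(marker_embedding("[E1]"), marker_embedding("[E2]"))
print(scores.shape)                                      # torch.Size([97])
```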
### 2019

* ACL 2019 [Inter-sentence Relation Extraction with Document-level Graph Convolutional Neural Network](https://www.aclweb.org/anthology/P19-1423/), Sunil Kumar Sahu, Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou
<br> 👉 Method: GCN + bi-affine classification (see the sketch below).
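
A generic sketch of this combination, one GCN layer over a document graph followed by a bi-affine scorer for an entity pair; the normalization, graph construction, and sizes are assumptions rather than the paper's exact setup:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: [num_nodes, dim]; adj: [num_nodes, num_nodes] adjacency with self-loops.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(adj @ x) / deg)    # mean aggregation over neighbours

class BiaffineScorer(nn.Module):
    def __init__(self, dim, num_rels):
        super().__init__()
        self.U = nn.Parameter(torch.randn(num_rels, dim + 1, dim + 1) * 0.02)

    def forward(self, head, tail):
        # Append a constant 1 so the bi-affine form also covers linear and bias terms.
        h = torch.cat([head, head.new_ones(1)])
        t = torch.cat([tail, tail.new_ones(1)])
        return torch.einsum("i,rij,j->r", h, self.U, t)  # [num_rels]

nodes, adj = torch.randn(7, 128), torch.eye(7)
adj[0, 3] = adj[3, 0] = 1.0                              # e.g. a mention-mention edge
h = GCNLayer(128)(nodes, adj)
print(BiaffineScorer(128, num_rels=97)(h[0], h[3]).shape)  # torch.Size([97])
```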
* ACL 2019 [Attention Guided Graph Convolutional Networks for Relation Extraction](https://www.aclweb.org/anthology/P19-1024.pdf), Zhijiang Guo, Yan Zhang, Wei Lu
<br> 👉 Method: Multi-head attention produces multiple graphs with different adjacency matrices, followed by a dense GCN.

* EMNLP 2019 [Connecting the Dots: Document-level Neural Relation Extraction with Edge-oriented Graphs](https://arxiv.org/abs/1909.00228), Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou
<br> 👉 Method: Edge-oriented graph + an iterative inference mechanism.

* NAACL 2019 [Document-Level N-ary Relation Extraction with Multiscale Representation Learning](https://arxiv.org/abs/1904.02347), Robin Jia, Cliff Wong, Hoifung Poon
<br> 👉 Method: Multiscale representations of mentions and entities: an LSTM for paragraph-level mention representations, and log-sum-exp pooling for entity representations.

* arXiv 2019 [Fine-tune Bert for DocRED with Two-step Process](https://arxiv.org/abs/1909.11898), Hong Wang, Christfried Focke, Rob Sylvester, Nilesh Mishra, William Wang
<br> 👉 Method: BERT + bilinear classifier with two-step training: binary classification for relation detection, followed by multi-class classification for the relation type (see the sketch below).
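
The two-step idea can be sketched as two classifiers applied in sequence; `detector` and `classifier` are illustrative stand-ins that would be trained in separate stages (binary BCE for detection first, relation-type classification on detected pairs second):

```python
import torch
import torch.nn as nn

hidden, num_rels = 768, 96

# Step 1: does this entity pair express any relation at all?
detector = nn.Bilinear(hidden, hidden, 1)
# Step 2: which relation type(s); only applied to pairs that pass step 1.
classifier = nn.Bilinear(hidden, hidden, num_rels)

def predict(head, tail, threshold=0.5):
    if torch.sigmoid(detector(head, tail)) < threshold:
        return []                                   # step 1: no relation detected
    probs = torch.sigmoid(classifier(head, tail))   # step 2: relation types
    return (probs > threshold).nonzero(as_tuple=True)[0].tolist()

head, tail = torch.randn(hidden), torch.randn(hidden)
print(predict(head, tail))
```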
### 2018

* EMNLP 2018 [N-ary Relation Extraction using Graph-State LSTM](https://www.aclweb.org/anthology/D18-1246), Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea

* NAACL 2018 [Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction](https://www.aclweb.org/anthology/N18-1080/), Patrick Verga, Emma Strubell, and Andrew McCallum
<br> 👉 Method: Transformer + CNN encoder, head and tail MLPs for position-specific representations, and bilinear + log-sum-exp pooling for the entity-pair representation (see the sketch below).
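
A simplified sketch of this scoring scheme: project mention embeddings with separate head/tail MLPs, score every head-mention/tail-mention pair with a per-relation bilinear form, and pool the pair scores with log-sum-exp; dimensions and names are assumptions:

```python
import torch
import torch.nn as nn

class MentionPairScorer(nn.Module):
    def __init__(self, hidden, num_rels):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.tail_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.bilinear = nn.Parameter(torch.randn(num_rels, hidden, hidden) * 0.02)

    def forward(self, head_mentions, tail_mentions):
        # head_mentions: [m1, hidden]; tail_mentions: [m2, hidden]
        h = self.head_mlp(head_mentions)
        t = self.tail_mlp(tail_mentions)
        # One score per (head mention, tail mention, relation): [m1, m2, num_rels]
        pair_scores = torch.einsum("ih,rhk,jk->ijr", h, self.bilinear, t)
        # Aggregate mention-pair scores into a single entity-pair score per relation.
        return torch.logsumexp(pair_scores.reshape(-1, pair_scores.size(-1)), dim=0)

scorer = MentionPairScorer(hidden=256, num_rels=97)
print(scorer(torch.randn(3, 256), torch.randn(2, 256)).shape)  # torch.Size([97])
```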
### 2017

* TACL 2017 [Cross-Sentence N-ary Relation Extraction with Graph LSTMs](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/tacl17.pdf), Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih

## Related Papers

The tasks of these papers are somewhat relevant to Doc RE (e.g. cross-sentence RE, GCN for RE, reasoning for MRC), and the papers are of much value as well.

### 2021

* ACL 2021 [Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution](https://ugentt2k.github.io/papers/2021/verlinden2021.pdf)
<br> 👉 Task: joint information extraction (uses the DocRED dataset)
<br> 👉 Method: Injects KB-text and KB-graph information into the model, using span representations for NER, coreference resolution, and RE.

* EACL 2021 [Two Training Strategies for Improving Relation Extraction over Universal Graph](https://arxiv.org/abs/2102.06540), Qin Dai, Naoya Inoue, Ryo Takahashi and Kentaro Inui
<br> 👉 Task: DSRE (distantly supervised RE)
<br> 👉 Method: Merges text into the KG and improves the "select path" stage with path-type (textual, hybrid, and KG paths) adaptive pretraining and complexity-ranking-guided attention.

### 2019

* ACL 2019 [Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs](https://www.aclweb.org/anthology/P19-1260.pdf), Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, and Bowen Zhou

* ACL 2019 [Graph Neural Networks with Generated Parameters for Relation Extraction](https://www.aclweb.org/anthology/P19-1128.pdf), Hao Zhu, Yankai Lin, Zhiyuan Liu, Jie Fu, Tat-seng Chua, and Maosong Sun

* EMNLP 2019 [KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning](https://www.aclweb.org/anthology/D19-1282.pdf), Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren

* NAACL 2019 [Question Answering by Reasoning Across Documents with Graph Convolutional Networks](https://arxiv.org/abs/1808.09920), Nicola De Cao, Wilker Aziz, Ivan Titov

* NAACL 2019 [BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering](https://arxiv.org/abs/1904.04969), Yu Cao, Meng Fang, and Dacheng Tao

* NAACL 2019 [Long-tail relation extraction via knowledge graph embeddings and graph convolution networks](https://www.aclweb.org/anthology/N19-1306.pdf), Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, and Huajun Chen
<br> 👉 Task: RE
<br> 👉 Method: Uses KG knowledge to improve performance on long-tail instances in RE: KG embeddings and a GCN learn relational knowledge, then attention aggregation produces the final relation representation.

* AAAI 2019 [Neural Relation Extraction within and across Sentence Boundaries](https://arxiv.org/abs/1810.05102), Pankaj Gupta, Subburam Rajaram, Bernt Andrassy, Hinrich Schutze, Thomas Runkler
<br> 👉 Task: cross-sentence RE
<br> 👉 Method: Uses an RNN to model the dependency subtree.

### 2018

* EMNLP 2018 [Graph Convolution over Pruned Dependency Trees Improves Relation Extraction](https://www.aclweb.org/anthology/D18-1244/), Yuhao Zhang, Peng Qi, and Christopher D Manning

* Journal of Biomedical Informatics 2018, An effective neural model extracting document level chemical-induced disease relations from biomedical literature, Wei Zheng, Hongfei Lin, Zhiheng Li, Xiaoxia Liu, Zhengguang Li, Bo Xu, Yijia Zhang, Zhihao Yang, and Jian Wang

* NeurIPS 2018 [Recurrent Relational Networks](https://arxiv.org/abs/1711.08028), Rasmus Palm, Ulrich Paquet, Ole Winther

* ACL 2018 [A Walk-based Model on Entity Graphs for Relation Extraction](https://www.aclweb.org/anthology/P18-2014/), Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou

### 2017

* EACL 2017 [Distant Supervision for Relation Extraction beyond the Sentence Boundary](https://www.aclweb.org/anthology/E17-1110), Chris Quirk and Hoifung Poon

* EMNLP 2017 [Incorporating relation paths in neural relation extraction](https://www.aclweb.org/anthology/D17-1186), Wenyuan Zeng, Yankai Lin, Zhiyuan Liu, and Maosong Sun