├── sara_hooker.png
├── ana_marasovic.png
├── _config.yml
├── program.md
├── cfp.txt
└── index.md

/sara_hooker.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TatianaShavrina/blackboxnlp.github.io/main/sara_hooker.png
--------------------------------------------------------------------------------
/ana_marasovic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TatianaShavrina/blackboxnlp.github.io/main/ana_marasovic.png
--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-dinky
2 | title: "Analyzing and interpreting neural networks for NLP"
3 | description: "Revealing the content of the neural black box: workshop on the analysis and interpretation of neural networks for Natural Language Processing."
4 | show_downloads: false
5 | 
6 | 
7 | 
--------------------------------------------------------------------------------
/program.md:
--------------------------------------------------------------------------------
1 | 
2 | [Main page](index.md)
3 | 
4 | This year's BlackboxNLP workshop will have a hybrid programme. The first half of the programme is entirely virtual;
5 | the second half will be hosted both on-site and online. All plenary sessions will be held (either livestreamed or broadcast) on Zoom
6 | and followed by a live Q&A (unless the schedule indicates otherwise). Questions can be asked in the Zoom chat during the presentations.
7 | Poster sessions will be held in Gather.town.
8 | **The links to the Zoom session and Gather.town space can be found on the EMNLP page of the workshop** (only accessible with conference registration).
9 | 
10 | You can find the programme [here](https://docs.google.com/spreadsheets/d/1SLJ07nMi6VoVOg5iEPSOv5v4cq_WopD8T0FJlr7X_wU/edit?usp=sharing).
11 | 
48 | 
--------------------------------------------------------------------------------
/cfp.txt:
--------------------------------------------------------------------------------
1 | BlackboxNLP 2021: Analyzing and interpreting neural networks for NLP -- EMNLP 2021
2 | When: November 11, 2021
3 | Where: Hybrid - Punta Cana & Online
4 | Website: https://blackboxnlp.github.io
5 | 
6 | Workshop description
7 | -----------------------------
8 | Neural networks have rapidly become a central component in NLP systems in
9 | the last few years. The improvement in accuracy and performance brought by
10 | the introduction of neural networks has typically come at the cost of our
11 | understanding of the system: How do we assess what representations and
12 | computations the network learns? The goal of this workshop is to
13 | bring together people who are attempting to peek inside the neural network
14 | black box, taking inspiration from machine learning, psychology,
15 | linguistics, and neuroscience.
16 | 
17 | Topics of interest include, but are not limited to:
18 | * Applying analysis techniques from neuroscience to analyze
19 | high-dimensional vector representations in artificial neural networks;
20 | * Analyzing the network’s response to strategically chosen input in order
21 | to infer the linguistic generalizations that the network has acquired;
22 | * Examining network performance on simplified or formal languages;
23 | * Proposing modifications to neural architectures that increase their
24 | interpretability;
25 | * Testing whether interpretable information can be decoded from
26 | intermediate representations;
27 | * Explaining specific model predictions made by neural networks;
28 | * Generating and evaluating the quality of adversarial examples in NLP;
29 | * Developing open-source tools for analyzing neural networks in NLP;
30 | * Evaluating the analysis results: how do we know that the analysis is
31 | valid?
32 | 
33 | BlackboxNLP 2021 is the fourth BlackboxNLP workshop. The programme and
34 | proceedings of the previous editions, which were held at EMNLP 2018, ACL
35 | 2019 and EMNLP 2020, can be found on the workshop website.
36 | 
37 | Submissions
38 | -----------------
39 | We call for two types of papers:
40 | 1) Archival papers. These are papers reporting on completed, original and
41 | unpublished research, with a maximum length of 8 pages + references. Papers
42 | shorter than this maximum are also welcome. Accepted papers are expected to
43 | be presented at the workshop and will be published in the workshop
44 | proceedings. They should report on obtained results rather than intended
45 | work. These papers will undergo double-blind peer review, and should thus
46 | be anonymized.
47 | 2) Extended abstracts. These may report on work in progress or may be cross
48 | submissions that have already appeared in a non-NLP venue. Extended
49 | abstracts have a maximum length of 2 pages + references. These submissions are
50 | non-archival in order to allow submission to another venue. The selection
51 | will not be based on a double-blind review and thus submissions of this
52 | type need not be anonymized.
53 | Submissions should follow the official EMNLP 2021 style guidelines. The
54 | submission site is:
55 | https://www.softconf.com/emnlp2021/BlackboxNLP
56 | 
57 | Our workshop also welcomes submissions through ACL
58 | Rolling Review (https://aclrollingreview.org/). Authors of any papers
59 | that are submitted to ARR before July 15, 2021 and have their reviews ready
60 | may submit their papers and reviews for consideration for the workshop up to
61 | one week before our notification date, i.e. by August 27, 2021.
62 | 
63 | 
64 | Contact
65 | ---------------------
66 | Please contact the organizers at blackboxnlp@googlegroups.com for any questions.
67 | 
68 | Important dates
69 | ---------------------
70 | August 5, 2021 – Submission deadline.
71 | TBD – Retraction of workshop papers accepted for EMNLP.
72 | August 27, 2021 – Deadline for submitting ARR-reviewed papers.
73 | September 3, 2021 – Notification of acceptance.
74 | September 15, 2021 – Camera-ready papers due.
75 | November 11, 2021 – Workshop.
76 | Note: All deadlines are 11:59 PM UTC-12:00.
77 | 
78 | 
--------------------------------------------------------------------------------
/index.md:
--------------------------------------------------------------------------------
1 | # BlackboxNLP 2021
2 | 
3 | The fourth edition of BlackboxNLP will be co-located with EMNLP 2021.
The workshop programme is available [here](https://blackboxnlp.github.io/program.html).
4 | 
5 | ## Important dates
6 | 
7 | - ~~August 5, 2021 -- Submission deadline.~~
8 | - ~~August 27, 2021 -- Retraction of workshop papers accepted for EMNLP.~~
9 | - ~~September 3, 2021 -- Notification of acceptance.~~
10 | - ~~September 15, 2021 -- Camera-ready papers due.~~
11 | - November 11, 2021 -- Workshop.
12 | 
13 | ## Workshop description
14 | 
15 | Neural networks have rapidly become a central component in NLP systems in the last few years.
16 | The improvement in accuracy and performance brought by the introduction of neural networks has typically come at the cost of our understanding of the system: How do we assess what representations and computations the network learns?
17 | The goal of this workshop is to bring together people who are attempting to peek inside the neural network black box, taking inspiration from machine learning, psychology, linguistics, and neuroscience.
18 | The topics of the workshop will include, but are not limited to:
19 | 
20 | - Applying analysis techniques from neuroscience to analyse high-dimensional vector representations (such as Haxby et al., 2001; Kriegeskorte, 2008) in artificial neural networks;
21 | - Analyzing the network's response to strategically chosen inputs in order to infer the linguistic generalizations that the network has acquired (e.g., Linzen et al., 2016; Hupkes et al., 2020);
22 | - Examining the performance of the network on simplified or formal languages (e.g., Hupkes et al., 2018; Lake et al., 2018);
23 | - Proposing modifications to neural network architectures that can make them more interpretable (e.g., Palangi et al., 2018);
24 | - Scaling up neural network analysis techniques developed in the connectionist literature in the 1990s (Elman, 1991);
25 | - Testing whether interpretable information can be decoded from intermediate representations (e.g., Adi et al., 2017; Chrupala et al., 2017; Hupkes et al., 2017; Conneau et al., 2018);
26 | - Translating insights on neural network interpretation from the vision domain (e.g., Zeiler & Fergus, 2014) to language;
27 | - Explaining model predictions (e.g., Lei et al., 2016; Alvarez-Melis & Jaakkola, 2017): What are ways to explain specific decisions made by neural networks?
28 | - Adversarial examples in NLP (e.g., Ebrahimi et al., 2018; Belinkov & Bisk, 2018): How to generate them and how to evaluate their quality?
29 | - Open-source tools for analyzing neural networks in NLP (e.g., Strobelt et al., 2018; Rikters, 2018);
30 | - Evaluation of analysis results: How do we know that the analysis is valid?
31 | - Analysing the linguistic properties captured by contextualised word representations (e.g., Aina et al., 2019; Bommasani et al., 2020);
32 | - Analysing learning and inference mechanisms of neural networks, such as memory and attention (e.g., Abnar and Zuidema, 2020; Serrano and Smith, 2019; Haviv et al., 2019).
33 | 
34 | BlackboxNLP 2021 is the fourth BlackboxNLP workshop.
35 | The programme and proceedings of the previous editions, which were held at EMNLP 2018, ACL 2019 and EMNLP 2020, can be found [here](https://blackboxnlp.github.io/2018/), [here](https://blackboxnlp.github.io/2019/) and [here](https://blackboxnlp.github.io/2020/).
36 | 
37 | The official call for papers is available [here](cfp.txt).
38 | 
39 | ## Paper submission
40 | 
41 | We accept two types of papers:
42 | 
43 | - Archival papers.
These are papers reporting on completed, original and unpublished research, with a maximum length of 8 pages + references. Papers shorter than this maximum are also welcome. An optional appendix may appear after the references in the same PDF file. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings. They should report on obtained results rather than intended work. These papers will undergo double-blind peer review, and should thus be anonymized. Archival papers will be included in the workshop proceedings and the ACL Anthology.
44 | 
45 | - Extended abstracts. These may report on work in progress or may be cross submissions that have already appeared in a non-NLP venue. Extended abstracts have a maximum length of 2 pages + references. These submissions are non-archival in order to allow submission to another venue. The selection will not be based on a double-blind review and thus submissions of this type need not be anonymized. Abstracts will be posted on the workshop website but will not be included in the proceedings.
46 | 
47 | Both papers and abstracts should follow the official EMNLP 2021 style guidelines and should be submitted via softconf:
48 | 
49 | [https://www.softconf.com/emnlp2021/BlackboxNLP](https://www.softconf.com/emnlp2021/BlackboxNLP)
50 | 
51 | Accepted submissions will be presented at the workshop: most as posters, some as oral presentations (determined by the program committee).
52 | 
53 | ## Dual submissions and preprints
54 | Dual submissions with the main conference are allowed, but authors must declare dual submission by entering the paper's main conference submission ID.
55 | The reviews for the main conference submission will be automatically forwarded to the workshop and taken into consideration when your paper is evaluated.
56 | Authors of dual-submission papers accepted to the main conference should retract them from the workshop by September 20.
57 | 
58 | Papers posted to preprint servers such as arXiv can be submitted
59 | without any restrictions on when they were posted.
60 | 
61 | ## Camera-ready information
62 | Authors of accepted archival papers should upload the final version of their paper to the submission system by the camera-ready deadline. Authors may use one extra page to address reviewer comments, for a total of nine pages.
63 | 
64 | 
65 | ## Invited speakers
66 | 
67 | ### [Sara Hooker](https://www.sarahooker.me/)
68 | 
69 | Sara Hooker is a research scientist at Google Brain working on training models that go beyond test-set accuracy to fulfill multiple desiderata. Her research interests gravitate towards interpretability, model compression and fairness. She is a founding organizer of the cross-institutional Trustworthy ML Initiative, a forum and seminar series dedicated to trustworthy machine learning research. Her current work centers on building tools that help with human-in-the-loop audits of model behavior.
70 | 
71 | ### [Ana Marasović](https://www.anamarasovic.com/)
72 | 
73 | Ana Marasović is a postdoctoral researcher at the Allen Institute for AI (AllenNLP Team) and at the University of Washington (Noah's ARK). Her research interests span natural language processing, explainable AI, and multimodality. She is currently focused on developing and evaluating models that provide readable explanations of their decision process for tasks requiring advanced reasoning abilities. She received her Ph.D.
from Heidelberg University, where she worked in the NLP Group on learning with limited labeled data for discourse-oriented tasks.
74 | 
75 | ### [Willem Zuidema](https://staff.fnwi.uva.nl/w.zuidema/)
76 | 
77 | Willem Zuidema is an associate professor of computational linguistics and cognitive science at the Institute for Logic, Language and Computation, University of Amsterdam. His lab works on deep learning models for NLP, with a focus on interpretability, bias, cognitive and neural relevance, and the relation between language and music. Zuidema and his students were early contributors to deep learning models in NLP, with work on neural parsing (from 2008), tree-shaped neural networks (from 2012), and diagnostic classification/probing (from 2016). Recent work includes the integration of formal logic and deep learning, representational stability analysis, contextual decomposition and knowledge distillation.
78 | 
79 | 
80 | ## Organizers
81 | 
82 | ### Jasmijn Bastings
83 | Jasmijn Bastings (bastings[-at-]google.com) is a researcher at Google Amsterdam, having joined Google in Berlin in late 2019. She holds a PhD from the ILLC, University of Amsterdam, on the topic of Interpretable and Linguistically-informed Deep Learning for NLP. Recently, Jasmijn has been focusing on explainability, fairness and robustness within natural language processing. She authored two BlackboxNLP papers (2018, 2020) on generalisation and saliency methods, as well as an ACL paper (2019) on interpretable neural predictions using differentiable binary variables.
84 | 
85 | ### Yonatan Belinkov
86 | Yonatan Belinkov (belinkov@technion.ac.il) is an assistant professor at the Henry and Marilyn Taub Faculty of Computer Science at the Technion.
87 | He was previously a Postdoctoral Fellow at the Harvard School of Engineering and Applied Sciences (SEAS) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
88 | His recent research focuses on interpretability and robustness of neural network models of language.
89 | His research has been published at leading NLP and ML venues.
90 | His PhD dissertation at MIT analyzed internal language representations in deep learning models.
91 | He has been awarded the Harvard Mind, Brain, and Behavior Postdoctoral Fellowship and the Azrieli Early Career Faculty Fellowship.
92 | He co-organised the second and third editions of BlackboxNLP and the first and second machine translation robustness tasks at WMT.
93 | 
94 | ### Dieuwke Hupkes
95 | Dieuwke Hupkes (dieuwkehupkes@fb.com) is a research scientist at Facebook AI Research and the scientific manager of the Amsterdam unit of the [ELLIS society](https://ellis.eu/).
96 | The main focus of her research is understanding how neural networks can understand and learn the structures that occur in natural language.
97 | Developing methods to interpret and interact with neural networks has therefore been an important area of focus in her research.
98 | She authored several articles directly relevant to the workshop, two of them published in a top AI journal (the Journal of Artificial Intelligence Research), and she co-organized a workshop on compositionality, neural networks, and the brain, held at the Lorentz Center in the summer of 2019.
99 | 
100 | ### Emmanuel Dupoux
101 | Emmanuel Dupoux (emmanuel.dupoux@gmail.com) is a full professor at the Ecole des Hautes Etudes en Sciences Sociales (EHESS) and directs the Cognitive Machine Learning team at the Ecole Normale Supérieure (ENS) in Paris and INRIA.
102 | Since 2018, he has been a part-time research scientist at Facebook AI Research.
103 | His research mixes developmental science, cognitive neuroscience, and machine learning, with a focus on the reverse engineering of infant language and cognitive development using unsupervised or weakly supervised learning.
104 | He has directed the CNRS Laboratoire de Sciences Cognitives et Psycholinguistique for 10 years.
105 | He is the recipient of an Advanced ERC grant, the organiser of the Zero Resource Speech Challenge (2015, 2017, 2019, 2020) and the Intuitive Physics Benchmark (2019), and in 2017 led a Jelinek Summer Workshop at CMU on multimodal speech learning.
106 | 
107 | ### Yuval Pinter
108 | Yuval Pinter (me@yuvalpinter.com) is an incoming Senior Lecturer in the Computer Science department at Ben-Gurion University.
109 | He authored three papers on the topic of NLP neural model interpretation, looking into attention modules and character-level LSTMs.
110 | He co-organised the TREC Live QA competition for its three years of existence (2015–2017), including administering the real-time challenge, and served as publicity and social media co-chair at NAACL 2019.
111 | 
112 | ### Hassan Sajjad
113 | Hassan Sajjad (hsajjad@hbku.edu.qa) is a research scientist at the Arabic Language Technologies group, Qatar Computing Research Institute - HBKU.
114 | His recent research focuses on developing methods to analyze and interpret neural network models both at the representation level and at the individual neuron level.
115 | His work on the analysis of deep models has been recognized at prestigious research venues such as ACL, NAACL, ICLR, and AAAI.
116 | 
117 | ### Mario Giulianelli
118 | Mario Giulianelli (m.giulianelli@uva.nl) is a PhD student at the University of Amsterdam.
119 | He investigates whether neural networks can be employed as computational models of language learning and use, and works on proposing interpretable and controllable neural architectures that more explicitly emulate the processes underlying human language cognition.
120 | He authored two articles investigating the propensity of language models to capture syntactic and semantic phenomena, presented at EMNLP and ACL.
121 | At the first edition of BlackboxNLP, he won the Best Paper Award for his work on probing and improving an LSTM’s ability to track number agreement information.
122 | 
123 | ## Program committee
124 | 
125 | ## Workshop programme
126 | 
127 | The programme is available [here](https://blackboxnlp.github.io/program.html).
128 | 
129 | 
130 | ## Anti-Harassment Policy
131 | BlackboxNLP 2021 adheres to the [ACL Anti-Harassment Policy](https://www.aclweb.org/adminwiki/index.php?title=Anti-Harassment_Policy).
132 | 
--------------------------------------------------------------------------------