├── CreativeCommonsLicense.txt ├── README.md ├── TAALED_1_3_1_Py3.zip ├── TAALED_1_3_1_Py3 ├── TAALED_1_3_1.py ├── TAALED_1_4_1.py └── dep_files │ ├── adj_lem_list.txt │ ├── en_core_web_sm │ ├── accuracy.json │ ├── meta.json │ ├── ner │ │ ├── cfg │ │ ├── lower_model │ │ ├── moves │ │ ├── tok2vec_model │ │ └── upper_model │ ├── parser │ │ ├── cfg │ │ ├── lower_model │ │ ├── moves │ │ ├── tok2vec_model │ │ └── upper_model │ ├── tagger │ │ ├── cfg │ │ ├── model │ │ └── tag_map │ ├── tokenizer │ └── vocab │ │ ├── key2row │ │ ├── lexemes.bin │ │ ├── strings.json │ │ └── vectors │ └── real_words.txt └── TAALED_Index_Description.xlsx /CreativeCommonsLicense.txt: -------------------------------------------------------------------------------- 1 | Use of TAALED is governed by the following license: 2 | 3 | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License 4 | Please see https://creativecommons.org/licenses/by-nc-sa/4.0/ for more information (or keep reading) 5 | 6 | This is a human-readable summary of (and not a substitute for) the license. 7 | You are free to: 8 | Share — copy and redistribute the material in any medium or format 9 | Adapt — remix, transform, and build upon the material 10 | The licensor cannot revoke these freedoms as long as you follow the license terms. 11 | Under the following terms: 12 | Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. 13 | NonCommercial — You may not use the material for commercial purposes. 14 | ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. 15 | No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. 
16 | 17 | 18 | By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. 19 | 20 | Section 1 – Definitions. 21 | 22 | Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. 23 | Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. 24 | BY-NC-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License. 25 | Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. 
26 | Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. 27 | Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. 28 | License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution, NonCommercial, and ShareAlike. 29 | Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. 30 | Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. 31 | Licensor means the individual(s) or entity(ies) granting rights under this Public License. 32 | NonCommercial means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange. 33 | Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. 
34 | Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. 35 | You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. 36 | Section 2 – Scope. 37 | 38 | License grant. 39 | Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: 40 | reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and 41 | produce, reproduce, and Share Adapted Material for NonCommercial purposes only. 42 | Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 43 | Term. The term of this Public License is specified in Section 6(a). 44 | Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material. 45 | Downstream recipients. 46 | Offer from the Licensor – Licensed Material. 
Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. 47 | Additional offer from the Licensor – Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply. 48 | No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 49 | No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). 50 | Other rights. 51 | 52 | Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 53 | Patent and trademark rights are not licensed under this Public License. 54 | To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes. 
55 | Section 3 – License Conditions. 56 | 57 | Your exercise of the Licensed Rights is expressly made subject to the following conditions. 58 | 59 | Attribution. 60 | 61 | If You Share the Licensed Material (including in modified form), You must: 62 | 63 | retain the following if it is supplied by the Licensor with the Licensed Material: 64 | identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); 65 | a copyright notice; 66 | a notice that refers to this Public License; 67 | a notice that refers to the disclaimer of warranties; 68 | a URI or hyperlink to the Licensed Material to the extent reasonably practicable; 69 | indicate if You modified the Licensed Material and retain an indication of any previous modifications; and 70 | indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 71 | You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 72 | If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. 73 | ShareAlike. 74 | In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply. 75 | 76 | The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License. 77 | You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. 
You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material. 78 | You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply. 79 | Section 4 – Sui Generis Database Rights. 80 | 81 | Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: 82 | 83 | for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only; 84 | if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and 85 | You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. 86 | For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. 87 | Section 5 – Disclaimer of Warranties and Limitation of Liability. 88 | 89 | Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. 
Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You. 90 | To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You. 91 | The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. 92 | Section 6 – Term and Termination. 93 | 94 | This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. 95 | Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 96 | 97 | automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 98 | upon express reinstatement by the Licensor. 99 | For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. 100 | For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. 101 | Sections 1, 5, 6, 7, and 8 survive termination of this Public License. 102 | Section 7 – Other Terms and Conditions. 
103 | 104 | The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. 105 | Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. 106 | Section 8 – Interpretation. 107 | 108 | For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. 109 | To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. 110 | No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. 111 | Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # The Tool for the Automatic Analysis of Lexical Diversity (TAALED) 1.4.1 2 | This tool was developed by Kristopher Kyle in collaboration with Scott Crossley and Scott Jarvis. 3 | See https://www.linguisticanalysistools.org/taaled.html for more information. 4 | 5 | # Quick Start Guide 6 | For Mac OSX, download the TAALED_1_4_1_Mac.zip file (found under the "releases" tab). 
After extracting the TAALED program from the .zip file, double-click the icon to open the program. Note that you may have to change your security preferences to open TAALED. 7 | For Windows 10 (64-bit versions only), download the TAALED_1_4_1_win.zip file (found under the "releases" tab). After extracting the TAALED program from the .zip file, double-click the icon to open the program. 8 | Note that after clicking on the TAALED icon, a terminal (Mac) or command prompt (Windows) window will open. TAALED will take 10-20 seconds to initialize, depending on the speed of your computer. 9 | 10 | The source code for TAALED 1.3.1 is also available. For TAALED to work properly, the user must have spaCy installed (see https://spacy.io/usage/ for instructions). 11 | 12 | After downloading and extracting the TAALED_1_4_1_Py3.zip file, the user can start the program using the following command: 13 | python TAALED_1_4_1.py 14 | 15 | Also note that the underlying functions for calculating lexical diversity are distributed as the Python package "lexical-diversity". See https://github.com/kristopherkyle/lexical_diversity for more information. 16 | 17 | # Licensing 18 | TAALED is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. 19 | See https://creativecommons.org/licenses/by-nc-sa/4.0/ for more information. 20 | 21 | # Version Details 22 | Version 1.4.1 comprises a bug fix. In version 1.3.1, words that began with capital letters were considered non-words (and were ignored). This issue is fixed in version 1.4.1.
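The README notes that TAALED's underlying calculations live in the "lexical-diversity" package. As a rough illustration of what indices such as simple TTR, root TTR, and MATTR compute, here is a minimal sketch in plain Python. This is not TAALED's actual implementation (TAALED first lemmatizes and filters tokens with spaCy, and its window size and index variants differ); it only shows the arithmetic behind the index names.

```python
import math

def simple_ttr(tokens):
    # Type-token ratio: number of unique types divided by total tokens
    return len(set(tokens)) / len(tokens)

def root_ttr(tokens):
    # Root TTR (Guiraud's index): types divided by the square root of tokens
    return len(set(tokens)) / math.sqrt(len(tokens))

def mattr(tokens, window=50):
    # Moving-average TTR: mean TTR across all overlapping windows of fixed size.
    # For texts shorter than the window, fall back to simple TTR (an assumption
    # made for this sketch, not necessarily TAALED's behavior).
    if len(tokens) < window:
        return simple_ttr(tokens)
    ttrs = [simple_ttr(tokens[i:i + window])
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

tokens = "the cat sat on the mat and the dog sat too".split()
print(round(simple_ttr(tokens), 3))  # 8 types / 11 tokens -> 0.727
```

Because simple TTR falls as texts get longer, transformed indices like root TTR and windowed indices like MATTR are generally preferred when comparing texts of different lengths.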
23 | 24 | -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3.zip -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/TAALED_1_3_1.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | 4 | import tkinter as tk 5 | import tkinter.font 6 | import tkinter.filedialog 7 | import tkinter.constants 8 | import queue 9 | from tkinter import messagebox 10 | import os 11 | import sys 12 | import re 13 | import platform 14 | import glob 15 | import math 16 | from collections import Counter 17 | from threading import Thread 18 | 19 | #This creates a queue through which the core TAALED program can communicate with the GUI 20 | dataQueue = queue.Queue() 21 | 22 | #This creates the message for the progress box (and puts it in the dataQueue) 23 | progress = "...Waiting for Data to Process" 24 | dataQueue.put(progress) 25 | 26 | #Def1 is the core TAALED program; args is information passed to TAALED 27 | def start_thread(def1, arg1, arg2, arg3): 28 | t = Thread(target=def1, args=(arg1, arg2, arg3)) 29 | t.start() 30 | 31 | #This allows a packaged GUI to find the resource files.
32 | def resource_path(relative): 33 | if hasattr(sys, "_MEIPASS"): 34 | return os.path.join(sys._MEIPASS, relative) 35 | return os.path.join(relative) 36 | 37 | color = "#758fa8" 38 | 39 | prog_name = "TAALED 1.3.1" 40 | 41 | if platform.system() == "Darwin": 42 | system = "M" 43 | title_size = 16 44 | font_size = 14 45 | geom_size = "525x500" 46 | elif platform.system() == "Windows": 47 | system = "W" 48 | title_size = 12 49 | font_size = 12 50 | geom_size = "525x475" 51 | elif platform.system() == "Linux": 52 | system = "L" 53 | title_size = 14 54 | font_size = 12 55 | geom_size = "525x385" 56 | 57 | def start_watcher(def2, count, folder): 58 | t2 = Thread(target=def2, args =(count,folder)) 59 | t2.start() 60 | 61 | class MyApp: #this is the class for the gui and the text analysis 62 | def __init__(self, parent): 63 | 64 | helv14= tkinter.font.Font(family= "Helvetica Neue", size=font_size) 65 | times14= tkinter.font.Font(family= "Lucida Grande", size=font_size) 66 | helv16= tkinter.font.Font(family= "Helvetica Neue", size = title_size, weight = "bold", slant = "italic") 67 | 68 | #This defines the GUI parent (ish) 69 | 70 | self.myParent = parent 71 | 72 | self.var_dict = {} 73 | 74 | #This creates the header text - Task:work with this to make more pretty! 
75 | self.spacer1= tk.Label(parent, text= "Tool for the Automatic Analysis of Lexical Diversity", font = helv16, background = color) 76 | self.spacer1.pack() 77 | 78 | #This creates a frame for the meat of the GUI 79 | self.thestuff= tk.Frame(parent, background =color) 80 | self.thestuff.pack() 81 | 82 | self.myContainer1= tk.Frame(self.thestuff, background = color) 83 | self.myContainer1.pack(side = tk.RIGHT, expand= tk.TRUE) 84 | self.instruct = tk.Button(self.myContainer1, text = "Instructions", justify = tk.LEFT) 85 | self.instruct.pack() 86 | self.instruct.bind("<Button-1>", self.instruct_mess) 87 | 88 | self.opt_frame = tk.LabelFrame(self.myContainer1, text= "Options and index selection", background = color) 89 | self.opt_frame.pack(fill = tk.X,expand=tk.TRUE) 90 | 91 | self.options_frame = tk.LabelFrame(self.opt_frame, text= "Word analysis options", background = color) 92 | self.options_frame.pack(fill = tk.X,expand=tk.TRUE) 93 | 94 | #insert checkboxes here 95 | self.aw_choice_var = tk.IntVar() 96 | self.aw_choice = tk.Checkbutton(self.options_frame, text="All words", variable=self.aw_choice_var,background = color) 97 | self.aw_choice.grid(row=1,column=1, sticky = "W") 98 | self.var_dict["aw"] = self.aw_choice_var 99 | 100 | self.cw_choice_var = tk.IntVar() 101 | self.cw_choice = tk.Checkbutton(self.options_frame, text="Content words", variable=self.cw_choice_var,background = color) 102 | self.cw_choice.grid(row=1,column=2, sticky = "W") 103 | self.var_dict["cw"] = self.cw_choice_var 104 | 105 | self.fw_choice_var = tk.IntVar() 106 | self.fw_choice = tk.Checkbutton(self.options_frame, text="Function words", variable=self.fw_choice_var,background = color) 107 | self.fw_choice.grid(row=1,column=3, sticky = "W") 108 | self.var_dict["fw"] = self.fw_choice_var 109 | 110 | self.indices_frame = tk.LabelFrame(self.opt_frame, text= "Index selection", background = color) 111 | self.indices_frame.pack(fill = tk.X,expand=tk.TRUE) 112 | 113 | self.simple_ttr_var = tk.IntVar() 114
| self.simple_ttr = tk.Checkbutton(self.indices_frame, text="Simple TTR", variable=self.simple_ttr_var,background = color) 115 | self.simple_ttr.grid(row=1,column=1, sticky = "W") 116 | self.var_dict["simple_ttr"] = self.simple_ttr_var 117 | 118 | self.root_ttr_var = tk.IntVar() 119 | self.root_ttr = tk.Checkbutton(self.indices_frame, text="Root TTR", variable=self.root_ttr_var,background = color) 120 | self.root_ttr.grid(row=1,column=2, sticky = "W") 121 | self.var_dict["root_ttr"] = self.root_ttr_var 122 | 123 | self.bi_log_ttr_var = tk.IntVar() 124 | self.bi_log_ttr = tk.Checkbutton(self.indices_frame, text="Log TTR", variable=self.bi_log_ttr_var,background = color) 125 | self.bi_log_ttr.grid(row=1,column=3, sticky = "W") 126 | self.var_dict["log_ttr"] = self.bi_log_ttr_var 127 | 128 | self.maas_ttr_var = tk.IntVar() 129 | self.maas_ttr = tk.Checkbutton(self.indices_frame, text="Maas", variable=self.maas_ttr_var,background = color) 130 | self.maas_ttr.grid(row=1,column=4, sticky = "W") 131 | self.var_dict["maas_ttr"] = self.maas_ttr_var 132 | 133 | self.mattr_var = tk.IntVar() 134 | self.mattr = tk.Checkbutton(self.indices_frame, text="MATTR", variable=self.mattr_var,background = color) 135 | self.mattr.grid(row=2,column=1, sticky = "W") 136 | self.var_dict["mattr"] = self.mattr_var 137 | 138 | self.msttr_var = tk.IntVar() 139 | self.msttr = tk.Checkbutton(self.indices_frame, text="MSTTR", variable=self.msttr_var,background = color) 140 | self.msttr.grid(row=2,column=2, sticky = "W") 141 | self.var_dict["msttr"] = self.msttr_var 142 | 143 | self.hdd_var = tk.IntVar() 144 | self.hdd = tk.Checkbutton(self.indices_frame, text="HD-D", variable=self.hdd_var,background = color) 145 | self.hdd.grid(row=2,column=3, sticky = "W") 146 | self.var_dict["hdd"] = self.hdd_var 147 | 148 | self.mtld_orig_var = tk.IntVar() 149 | self.mtld_orig = tk.Checkbutton(self.indices_frame, text="MTLD Original", variable=self.mtld_orig_var,background = color) 150 | 
self.mtld_orig.grid(row=3,column=1, sticky = "W") 151 | self.var_dict["mltd"] = self.mtld_orig_var 152 | 153 | self.mltd_ma_var = tk.IntVar() 154 | self.mltd_ma = tk.Checkbutton(self.indices_frame, text="MTLD MA Bi", variable=self.mltd_ma_var,background = color) 155 | self.mltd_ma.grid(row=3,column=2, sticky = "W") 156 | self.var_dict["mltd_ma"] = self.mltd_ma_var 157 | 158 | self.mltd_wrap_var = tk.IntVar() 159 | self.mltd_wrap = tk.Checkbutton(self.indices_frame, text="MTLD MA Wrap", variable=self.mltd_wrap_var,background = color) 160 | self.mltd_wrap.grid(row=3,column=3, sticky = "W") 161 | self.var_dict["mtld_wrap"] = self.mltd_wrap_var 162 | 163 | self.secondframe= tk.LabelFrame(self.myContainer1, text= "Data input", background = color) 164 | self.secondframe.pack(fill = tk.X,expand=tk.TRUE) 165 | 166 | 167 | #Creates default dirname so if statement in Process Texts can check to see 168 | #if a directory name has been chosen 169 | self.dirname = "" 170 | 171 | #This creates a label for the first program input (Input Directory) 172 | self.inputdirlabel =tk.LabelFrame(self.secondframe, height = "1", width= "45", padx = "4", text = "Your selected input folder:", background = color) 173 | self.inputdirlabel.pack(fill = tk.X) 174 | 175 | #Creates label that informs user which directory has been chosen 176 | directoryprompt = "(No Folder Chosen)" 177 | self.inputdirchosen = tk.Label(self.inputdirlabel, height= "1", width= "45", justify=tk.LEFT, padx = "4", anchor = tk.W, font= helv14, text = directoryprompt) 178 | self.inputdirchosen.pack(side = tk.LEFT) 179 | 180 | #This Places the first button under the instructions. 
181 | self.button1 = tk.Button(self.inputdirlabel) 182 | self.button1.configure(text= "Select") 183 | self.button1.pack(side = tk.LEFT, padx = 5) 184 | 185 | self.button1.bind("<Button-1>", self.button1Click) 186 | 187 | self.outdirname = "" 188 | 189 | self.out_optframe = tk.LabelFrame(self.secondframe, height = "1", width= "45", padx = "4", text = "Output Options", background = color) 190 | self.out_optframe.pack() 191 | 192 | self.ind_out_var = tk.IntVar() 193 | self.ind_out = tk.Checkbutton(self.out_optframe, text="Individual Item Output", variable=self.ind_out_var,background = color) 194 | self.ind_out.pack(side = tk.LEFT) 195 | self.ind_out.deselect() 196 | self.var_dict["indout"] = self.ind_out_var 197 | 198 | #Creates a label for the second program input (Output Directory) 199 | self.outputdirlabel = tk.LabelFrame(self.secondframe, height = "1", width= "45", padx = "4", text = "Your selected output filename:", background = color) 200 | self.outputdirlabel.pack(fill = tk.X) 201 | 202 | #Creates a label that informs the user which directory has been chosen 203 | outdirectoryprompt = "(No Output Filename Chosen)" 204 | self.outputdirchosen = tk.Label(self.outputdirlabel, height= "1", width= "45", justify=tk.LEFT, padx = "4", anchor = tk.W, font= helv14, text = outdirectoryprompt) 205 | self.outputdirchosen.pack(side = tk.LEFT) 206 | 207 | self.button2 = tk.Button(self.outputdirlabel) 208 | self.button2["text"]= "Select" 209 | #This tells the button what to do if clicked.
210 | self.button2.bind("<Button-1>", self.button2Click) 211 | self.button2.pack(side = tk.LEFT, padx = 5) 212 | 213 | self.BottomSpace= tk.LabelFrame(self.myContainer1, text = "Run Program", background = color) 214 | self.BottomSpace.pack() 215 | 216 | self.button3= tk.Button(self.BottomSpace) 217 | self.button3["text"] = "Process Texts" 218 | self.button3.bind("<Button-1>", self.runprogram) 219 | self.button3.pack() 220 | 221 | self.progresslabelframe = tk.LabelFrame(self.BottomSpace, text= "Program Status", background = color) 222 | self.progresslabelframe.pack(expand= tk.TRUE) 223 | 224 | self.progress= tk.Label(self.progresslabelframe, height= "1", width= "45", justify=tk.LEFT, padx = "4", anchor = tk.W, font= helv14, text=progress) 225 | self.progress.pack() 226 | 227 | self.poll(self.progress) 228 | 229 | def poll(self, function): 230 | 231 | self.myParent.after(10, self.poll, function) 232 | try: 233 | function.config(text = dataQueue.get(block=False)) 234 | 235 | except queue.Empty: 236 | pass 237 | 238 | def instruct_mess(self, event): 239 | messagebox.showinfo("Instructions", "1. Click the 'Index Selection and Options' button to select desired indices\n\n2. Choose the input folder (where your files are)\n\n3. If desired, select additional output options (see user manual for details)\n\n4. Choose your output filename\n\n5. Press the 'Process Texts' button") 240 | 241 | def entry1Return(self,event): 242 | input= self.entry1.get() 243 | self.input2 = input + ".csv" 244 | self.filechosenchosen.config(text = self.input2) 245 | self.filechosenchosen.update_idletasks() 246 | 247 | #Following is an example of how we can update the information from users...
248 | def button1Click(self, event): 249 | #import Tkinter, 250 | if sys.version_info[0] == 2: 251 | import tkFileDialog 252 | self.dirname = tkFileDialog.askdirectory(parent=root,initialdir="/",title='Please select a directory') 253 | 254 | if sys.version_info[0] == 3: 255 | import tkinter.filedialog 256 | self.dirname = tkinter.filedialog.askdirectory(parent=root,initialdir="/",title='Please select a directory') 257 | 258 | self.displayinputtext = '.../'+self.dirname.split('/')[-1] 259 | self.inputdirchosen.config(text = self.displayinputtext) 260 | 261 | 262 | def button2Click(self, event): 263 | self.outdirname = tkinter.filedialog.asksaveasfilename(parent=root, defaultextension = ".csv", initialfile = "results",title='Choose Output Filename') 264 | 265 | #print(self.outdirname) 266 | if self.outdirname == "": 267 | self.displayoutputtext = "(No Output Filename Chosen)" 268 | else: self.displayoutputtext = '.../' + self.outdirname.split('/')[-1] 269 | self.outputdirchosen.config(text = self.displayoutputtext) 270 | 271 | def runprogram(self, event): 272 | self.poll(self.progress) 273 | start_thread(main, self.dirname, self.outdirname, self.var_dict) 274 | 275 | 276 | #### THIS IS THE BEGINNING OF THE PROGRAM #### 277 | def main(indir, outdir, var_dict): 278 | 279 | import tkinter.messagebox 280 | if indir == "": 281 | tkinter.messagebox.showinfo("Supply Information", "Choose Input Directory") 282 | if outdir == "": 283 | tkinter.messagebox.showinfo("Choose Output Directory", "Choose Output Directory") 284 | 285 | 286 | if indir != "" and outdir != "": 287 | dataQueue.put("Starting TAALED...") 288 | 289 | import spacy 290 | from spacy.util import set_data_path 291 | set_data_path(resource_path('dep_files/en_core_web_sm')) 292 | nlp = spacy.load(resource_path('dep_files/en_core_web_sm')) 293 | 294 | #thus begins the text analysis portion of the program 295 | adj_word_list = open(resource_path("dep_files/adj_lem_list.txt"), "r",errors = 
'ignore').read().split("\n")[:-1] 296 | real_word_list = open(resource_path("dep_files/real_words.txt"), "r",errors = 'ignore').read().split("\n")[:-1] 297 | 298 | ### THESE ARE PERTINENT FOR ALL IMPORTANT INDICES #### 299 | noun_tags = ["NN", "NNS", "NNP", "NNPS"] #consider whether to identify gerunds 300 | proper_n = ["NNP", "NNPS"] 301 | no_proper = ["NN", "NNS"] 302 | pronouns = ["PRP", "PRP$"] 303 | adjectives = ["JJ", "JJR", "JJS"] 304 | verbs = ["VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "MD"] 305 | adverbs = ["RB", "RBR", "RBS"] 306 | content = ["NN", "NNS", "NNP", "NNPS","JJ", "JJR", "JJS"] #note that this is a preliminary list 307 | prelim_not_function = ["NN", "NNS", "NNP", "NNPS","JJ", "JJR", "JJS", "RB", "RBR", "RBS", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "MD"] 308 | pronoun_dict = {"me":"i","him":"he","her":"she"} 309 | punctuation = "`` '' ' . , ? ! ) ( % / - _ -LRB- -RRB- SYM : ;".split(" ") 310 | punctuation.append('"') 311 | 312 | #This function deals with denominator issues that can kill the program: 313 | def safe_divide(numerator, denominator): 314 | if denominator == 0 or denominator == 0.0: 315 | index = 0 316 | else: index = numerator/denominator 317 | return index 318 | 319 | def indexer(header_list, index_list, name, index): 320 | header_list.append(name) 321 | index_list.append(index) 322 | 323 | def tag_processor_spaCy(raw_text): #uses default spaCy 2.016 324 | 325 | lemma_list = [] 326 | content_list = [] 327 | function_list = [] 328 | 329 | tagged_text = nlp(raw_text) 330 | 331 | for sent in tagged_text.sents: 332 | for token in sent: 333 | if token.tag_ in punctuation: 334 | continue 335 | if token.text not in real_word_list: 336 | continue 337 | 338 | 339 | if token.tag_ in content: 340 | if token.tag_ in noun_tags: 341 | content_list.append(token.lemma_ + "_cw_nn") 342 | lemma_list.append(token.lemma_ + "_cw_nn") 343 | else: 344 | content_list.append(token.lemma_ + "_cw") 345 | lemma_list.append(token.lemma_ + "_cw") 346 | 347 | if 
token.tag_ not in prelim_not_function: 348 | if token.tag_ in pronouns: 349 | if token.text.lower() in pronoun_dict: 350 | function_list.append(pronoun_dict[token.text.lower()] + "_fw") 351 | lemma_list.append(pronoun_dict[token.text.lower()] + "_fw") 352 | else: 353 | function_list.append(token.text.lower() + "_fw") 354 | lemma_list.append(token.text.lower() + "_fw") 355 | else: 356 | function_list.append(token.lemma_ + "_fw") 357 | lemma_list.append(token.lemma_ + "_fw") 358 | 359 | if token.tag_ in verbs: 360 | if token.dep_ == "aux": 361 | function_list.append(token.lemma_ + "_fw") 362 | lemma_list.append(token.lemma_ + "_fw") 363 | 364 | elif token.lemma_ == "be": 365 | function_list.append(token.lemma_ + "_fw") 366 | lemma_list.append(token.lemma_ + "_fw") 367 | 368 | else: 369 | content_list.append(token.lemma_ + "_cw_vb") 370 | lemma_list.append(token.lemma_ + "_cw_vb") 371 | 372 | if token.tag_ in adverbs: 373 | if (token.lemma_[-2:] == "ly" and token.lemma_[:-2] in adj_word_list) or (token.lemma_[-4:] == "ally" and token.lemma_[:-4] in adj_word_list): 374 | content_list.append(token.lemma_ + "_cw") 375 | lemma_list.append(token.lemma_ + "_cw") 376 | else: 377 | function_list.append(token.lemma_ + "_fw") 378 | lemma_list.append(token.lemma_ + "_fw") 379 | #print(raw_token, lemma_list[-1]) 380 | 381 | return {"lemma" : lemma_list, "content" : content_list, "function":function_list} 382 | 383 | def lex_density(cw_text, fw_text): 384 | n_cw = len(cw_text) 385 | n_fw = len(fw_text) 386 | n_all = n_cw + n_fw 387 | 388 | n_type_cw = len(set(cw_text)) 389 | n_type_fw = len(set(fw_text)) 390 | n_all_type = n_type_cw + n_type_fw 391 | 392 | lex_dens_all = safe_divide(n_cw,n_all) #percentage of content words 393 | lex_dens_cw_fw = safe_divide(n_cw,n_fw) #ratio content words to function words 394 | 395 | lex_dens_all_type = safe_divide(n_type_cw,n_all_type) #percentage of content words 396 | lex_dens_cw_fw_type = safe_divide(n_type_cw,n_type_fw) #ratio content words 
to function words 397 | 398 | return [lex_dens_all, lex_dens_all_type] 399 | 400 | def ttr(text): 401 | ntokens = len(text) 402 | ntypes = len(set(text)) 403 | 404 | simple_ttr = safe_divide(ntypes,ntokens) 405 | root_ttr = safe_divide(ntypes, math.sqrt(ntokens)) 406 | log_ttr = safe_divide(math.log10(ntypes), math.log10(ntokens)) 407 | maas_ttr = safe_divide((math.log10(ntokens)-math.log10(ntypes)), math.pow(math.log10(ntokens),2)) 408 | 409 | return [simple_ttr,root_ttr,log_ttr,maas_ttr] 410 | 411 | def simple_ttr(text): 412 | ntokens = len(text) 413 | ntypes = len(set(text)) 414 | 415 | return safe_divide(ntypes,ntokens) 416 | 417 | def root_ttr(text): 418 | ntokens = len(text) 419 | ntypes = len(set(text)) 420 | 421 | return safe_divide(ntypes, math.sqrt(ntokens)) 422 | 423 | def log_ttr(text): 424 | ntokens = len(text) 425 | ntypes = len(set(text)) 426 | 427 | return safe_divide(math.log10(ntypes), math.log10(ntokens)) 428 | 429 | def maas_ttr(text): 430 | ntokens = len(text) 431 | ntypes = len(set(text)) 432 | 433 | return safe_divide((math.log10(ntokens)-math.log10(ntypes)), math.pow(math.log10(ntokens),2)) 434 | 435 | 436 | def mattr(text, window_length = 50): #from TAACO 2.0.4 437 | 438 | if len(text) < (window_length + 1): 439 | ma_ttr = safe_divide(len(set(text)),len(text)) 440 | 441 | else: 442 | sum_ttr = 0 443 | denom = 0 444 | for x in range(len(text)): 445 | small_text = text[x:(x + window_length)] 446 | if len(small_text) < window_length: 447 | break 448 | denom += 1 449 | sum_ttr+= safe_divide(len(set(small_text)),float(window_length)) 450 | ma_ttr = safe_divide(sum_ttr,denom) 451 | 452 | return ma_ttr 453 | 454 | def msttr(text, window_length = 50): 455 | 456 | if len(text) < (window_length + 1): 457 | ms_ttr = safe_divide(len(set(text)),len(text)) 458 | 459 | else: 460 | sum_ttr = 0 461 | denom = 0 462 | 463 | n_segments = int(safe_divide(len(text),window_length)) 464 | seed = 0 465 | for x in range(n_segments): 466 | sub_text = 
text[seed:seed+window_length] 467 | #print sub_text 468 | sum_ttr += safe_divide(len(set(sub_text)), len(sub_text)) 469 | denom+=1 470 | seed+=window_length 471 | 472 | ms_ttr = safe_divide(sum_ttr, denom) 473 | 474 | return ms_ttr 475 | 476 | def hdd(text): 477 | #requires Counter import 478 | def choose(n, k): #calculate binomial 479 | """ 480 | A fast way to calculate binomial coefficients by Andrew Dalke (contrib). 481 | """ 482 | if 0 <= k <= n: 483 | ntok = 1 484 | ktok = 1 485 | for t in range(1, min(k, n - k) + 1): #this was changed to "range" from "xrange" for py3 486 | ntok *= n 487 | ktok *= t 488 | n -= 1 489 | return ntok // ktok 490 | else: 491 | return 0 492 | 493 | def hyper(successes, sample_size, population_size, freq): #calculate hypergeometric distribution 494 | #probability a word will occur at least once in a sample of a particular size 495 | try: 496 | prob_1 = 1.0 - (float((choose(freq, successes) * choose((population_size - freq),(sample_size - successes)))) / float(choose(population_size, sample_size))) 497 | prob_1 = prob_1 * (1/sample_size) 498 | except ZeroDivisionError: 499 | prob_1 = 0 500 | 501 | return prob_1 502 | 503 | prob_sum = 0.0 504 | ntokens = len(text) 505 | types_list = list(set(text)) 506 | frequency_dict = Counter(text) 507 | 508 | for items in types_list: 509 | prob = hyper(0,42,ntokens,frequency_dict[items]) #random sample is 42 items in length 510 | prob_sum += prob 511 | 512 | return prob_sum 513 | 514 | 515 | def mtld_original(input, min = 10): 516 | def mtlder(text): 517 | factor = 0 518 | factor_lengths = 0 519 | start = 0 520 | for x in range(len(text)): 521 | factor_text = text[start:x+1] 522 | if x+1 == len(text): 523 | factor += safe_divide((1 - ttr(factor_text)[0]),(1 - .72)) 524 | factor_lengths += len(factor_text) 525 | else: 526 | if ttr(factor_text)[0] < .720 and len(factor_text) >= min: 527 | factor += 1 528 | factor_lengths += len(factor_text) 529 | start = x+1 530 | else: 531 | continue 532 | 533 | 
mtld = safe_divide(factor_lengths,factor) 534 | return mtld 535 | input_reversed = list(reversed(input)) 536 | mtld_full = safe_divide((mtlder(input)+mtlder(input_reversed)),2) 537 | return mtld_full 538 | 539 | def mtld_bi_directional_ma(text, min = 10): 540 | def mtld_ma(text, min = 10): 541 | factor = 0 542 | factor_lengths = 0 543 | for x in range(len(text)): 544 | sub_text = text[x:] 545 | breaker = 0 546 | for y in range(len(sub_text)): 547 | if breaker == 0: 548 | factor_text = sub_text[:y+1] 549 | if ttr(factor_text)[0] < .720 and len(factor_text) >= min: 550 | factor += 1 551 | factor_lengths += len(factor_text) 552 | breaker = 1 553 | else: 554 | continue 555 | mtld = safe_divide(factor_lengths,factor) 556 | return mtld 557 | 558 | forward = mtld_ma(text) 559 | backward = mtld_ma(list(reversed(text))) 560 | 561 | mtld_bi = safe_divide((forward + backward), 2) #average of forward and backward mtld 562 | 563 | return mtld_bi 564 | 565 | 566 | def mtld_ma_wrap(text, min = 10): 567 | factor = 0 568 | factor_lengths = 0 569 | start = 0 570 | double_text = text + text #allows wraparound 571 | for x in range(len(text)): 572 | breaker = 0 573 | sub_text = double_text[x:] 574 | for y in range(len(sub_text)): 575 | if breaker == 0: 576 | factor_text = sub_text[:y+1] 577 | if ttr(factor_text)[0] < .720 and len(factor_text) >= min: 578 | factor += 1 579 | factor_lengths += len(factor_text) 580 | breaker = 1 581 | else: 582 | continue 583 | mtld = safe_divide(factor_lengths,factor) 584 | return mtld 585 | 586 | 587 | #### END DEFINED FUNCTIONS #### 588 | 589 | for keys in var_dict: 590 | try: 591 | if var_dict[keys].get() == 1: 592 | var_dict[keys] = 1 593 | else: var_dict[keys] = 0 594 | except AttributeError: 595 | continue 596 | 597 | inputfile = indir + "/*.txt" 598 | outf=open(outdir, "w") 599 | 600 | filenames = glob.glob(inputfile) 601 | file_number = 0 602 | 603 | if var_dict["indout"] == 1: 604 | directory = outdir[:-4] + "_diagnostic/" #this is for 
diagnostic file 605 | if not os.path.exists(directory): 606 | os.makedirs(directory) 607 | 608 | for the_file in os.listdir(directory): #this cleans out the old diagnostic file (if applicable) 609 | file_path = os.path.join(directory, the_file) 610 | os.unlink(file_path) 611 | 612 | 613 | nfiles = len(filenames) 614 | file_counter = 1 615 | 616 | 617 | for filename in filenames: 618 | 619 | if system == "M" or system == "L": 620 | simple_filename = filename.split("/")[-1] 621 | 622 | if system == "W": 623 | simple_filename = filename.split("\\")[-1] 624 | if "/" in simple_filename: 625 | simple_filename = simple_filename.split("/")[-1] 626 | 627 | #print(simple_filename) 628 | 629 | if var_dict["indout"] == 1: 630 | basic_diag_file_name = directory + simple_filename[:-4] + "_processed.txt" 631 | basic_diag_file = open(basic_diag_file_name, "w") 632 | 633 | index_list = [simple_filename] 634 | header_list = ["filename"] 635 | 636 | #updates Program Status 637 | filename1 = ("Processing: " + str(file_counter) + " of " + str(nfiles) + " files") 638 | dataQueue.put(filename1) 639 | root.update_idletasks() 640 | 641 | file_counter+=1 642 | 643 | if system == "M" or system == "L": 644 | filename_2 = filename.split("/")[-1] 645 | elif system == "W": 646 | filename_2 = filename.split("\\")[-1] 647 | 648 | raw_text= open(filename, "r", errors = 'ignore').read() 649 | raw_text = re.sub('\s+',' ',raw_text) 650 | #while " " in raw_text: 651 | #raw_text = raw_text.replace(" ", " ") 652 | 653 | refined_lemma_dict = tag_processor_spaCy(raw_text) 654 | 655 | lemma_text_aw = refined_lemma_dict["lemma"] 656 | 657 | lemma_text_cw = refined_lemma_dict["content"] 658 | lemma_text_fw = refined_lemma_dict["function"] 659 | 660 | indexer(header_list, index_list, "basic_ntokens", len(lemma_text_aw)) 661 | indexer(header_list, index_list, "basic_ntypes", len(set(lemma_text_aw))) 662 | indexer(header_list, index_list, "basic_ncontent_tokens", len(lemma_text_cw)) 663 | indexer(header_list, 
index_list, "basic_ncontent_types", len(set(lemma_text_cw))) 664 | indexer(header_list, index_list, "basic_nfunction_tokens", len(lemma_text_fw)) 665 | indexer(header_list, index_list, "basic_nfunction_types", len(set(lemma_text_fw))) 666 | 667 | indexer(header_list, index_list, "lexical_density_types", lex_density(lemma_text_cw, lemma_text_fw)[1]) 668 | indexer(header_list, index_list, "lexical_density_tokens", lex_density(lemma_text_cw, lemma_text_fw)[0]) 669 | 670 | if var_dict["simple_ttr"] == 1: 671 | if var_dict["aw"] ==1: 672 | indexer(header_list, index_list, "simple_ttr_aw", ttr(lemma_text_aw)[0]) 673 | 674 | if var_dict["cw"] ==1: 675 | indexer(header_list, index_list, "simple_ttr_cw", ttr(lemma_text_cw)[0]) 676 | if var_dict["fw"] ==1: 677 | indexer(header_list, index_list, "simple_ttr_fw", ttr(lemma_text_fw)[0]) 678 | 679 | if var_dict["root_ttr"] == 1: 680 | if var_dict["aw"] ==1: 681 | indexer(header_list, index_list, "root_ttr_aw", ttr(lemma_text_aw)[1]) 682 | 683 | if var_dict["cw"] ==1: 684 | indexer(header_list, index_list, "root_ttr_cw", ttr(lemma_text_cw)[1]) 685 | if var_dict["fw"] ==1: 686 | indexer(header_list, index_list, "root_ttr_fw", ttr(lemma_text_fw)[1]) 687 | 688 | if var_dict["log_ttr"] == 1: 689 | if var_dict["aw"] ==1: 690 | indexer(header_list, index_list, "log_ttr_aw", ttr(lemma_text_aw)[2]) 691 | 692 | if var_dict["cw"] ==1: 693 | indexer(header_list, index_list, "log_ttr_cw", ttr(lemma_text_cw)[2]) 694 | if var_dict["fw"] ==1: 695 | indexer(header_list, index_list, "log_ttr_fw", ttr(lemma_text_fw)[2]) 696 | 697 | if var_dict["maas_ttr"] == 1: 698 | if var_dict["aw"] ==1: 699 | indexer(header_list, index_list, "maas_ttr_aw", ttr(lemma_text_aw)[3]) 700 | 701 | if var_dict["cw"] ==1: 702 | indexer(header_list, index_list, "maas_ttr_cw", ttr(lemma_text_cw)[3]) 703 | if var_dict["fw"] ==1: 704 | indexer(header_list, index_list, "maas_ttr_fw", ttr(lemma_text_fw)[3]) 705 | 706 | if var_dict["mattr"] == 1: 707 | if var_dict["aw"] ==1: 
708 | indexer(header_list, index_list, "mattr50_aw", mattr(lemma_text_aw,50)) 709 | 710 | if var_dict["cw"] ==1: 711 | indexer(header_list, index_list, "mattr50_cw", mattr(lemma_text_cw,50)) 712 | if var_dict["fw"] ==1: 713 | indexer(header_list, index_list, "mattr50_fw", mattr(lemma_text_fw,50)) 714 | 715 | if var_dict["msttr"] == 1: 716 | if var_dict["aw"] ==1: 717 | indexer(header_list, index_list, "msttr50_aw", msttr(lemma_text_aw,50)) 718 | if var_dict["cw"] ==1: 719 | indexer(header_list, index_list, "msttr50_cw", msttr(lemma_text_cw,50)) 720 | if var_dict["fw"] ==1: 721 | indexer(header_list, index_list, "msttr50_fw", msttr(lemma_text_fw,50)) 722 | 723 | if var_dict["hdd"] == 1: 724 | if var_dict["aw"] ==1: 725 | indexer(header_list, index_list, "hdd42_aw", hdd(lemma_text_aw)) 726 | 727 | if var_dict["cw"] ==1: 728 | indexer(header_list, index_list, "hdd42_cw", hdd(lemma_text_cw)) 729 | if var_dict["fw"] ==1: 730 | indexer(header_list, index_list, "hdd42_fw", hdd(lemma_text_fw)) 731 | 732 | if var_dict["mltd"] == 1: 733 | if var_dict["aw"] ==1: 734 | indexer(header_list, index_list, "mtld_original_aw", mtld_original(lemma_text_aw)) 735 | 736 | if var_dict["cw"] ==1: 737 | indexer(header_list, index_list, "mtld_original_cw", mtld_original(lemma_text_cw)) 738 | if var_dict["fw"] ==1: 739 | indexer(header_list, index_list, "mtld_original_fw", mtld_original(lemma_text_fw)) 740 | 741 | if var_dict["mltd_ma"] == 1: 742 | if var_dict["aw"] ==1: 743 | indexer(header_list, index_list, "mtld_ma_bi_aw", mtld_bi_directional_ma(lemma_text_aw)) 744 | 745 | if var_dict["cw"] ==1: 746 | indexer(header_list, index_list, "mtld_ma_bi_cw", mtld_bi_directional_ma(lemma_text_cw)) 747 | if var_dict["fw"] ==1: 748 | indexer(header_list, index_list, "mtld_ma_bi_fw", mtld_bi_directional_ma(lemma_text_fw)) 749 | 750 | if var_dict["mtld_wrap"] == 1: 751 | if var_dict["aw"] ==1: 752 | indexer(header_list, index_list, "mtld_ma_wrap_aw", mtld_ma_wrap(lemma_text_aw)) 753 | 754 | if 
var_dict["cw"] ==1: 755 | indexer(header_list, index_list, "mtld_ma_wrap_cw", mtld_ma_wrap(lemma_text_cw)) 756 | if var_dict["fw"] ==1: 757 | indexer(header_list, index_list, "mtld_ma_wrap_fw", mtld_ma_wrap(lemma_text_fw)) 758 | 759 | #### output for user ### 760 | if var_dict["indout"] == 1: 761 | 762 | basic_diag_file.write("tokens\n\n") 763 | 764 | for diags in lemma_text_aw: 765 | try: 766 | basic_diag_file.write(diags+"\n") 767 | except UnicodeEncodeError: 768 | basic_diag_file.write("encoding error!\n") 769 | 770 | basic_diag_file.write("\ntypes\n\n") 771 | 772 | for diags in list(set(lemma_text_aw)): 773 | try: 774 | basic_diag_file.write(diags+"\n") 775 | except UnicodeEncodeError: 776 | basic_diag_file.write("encoding error!\n") 777 | 778 | basic_diag_file.flush() 779 | 780 | ### end output for user ### 781 | 782 | if file_number == 0: 783 | header_out = ",".join(header_list) + "\n" 784 | outf.write(header_out) 785 | file_number +=1 786 | 787 | out_list = [] 788 | for vars in index_list: 789 | out_list.append(str(vars)) 790 | outstring = ",".join(out_list) + "\n" 791 | outf.write(outstring) 792 | 793 | 794 | nfiles = len(filenames) 795 | finishmessage = ("Processed " + str(nfiles) + " Files") 796 | dataQueue.put(finishmessage) 797 | if system == "M": 798 | messagebox.showinfo("Finished!", "TAALED has converted your files to numbers.\n\n Now the real work begins!") 799 | 800 | 801 | if __name__ == '__main__': 802 | root = tk.Tk() 803 | root.wm_title(prog_name) 804 | root.configure(background = color) 805 | root.geometry(geom_size) 806 | myapp = MyApp(root) 807 | root.mainloop() -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/TAALED_1_4_1.py: -------------------------------------------------------------------------------- 1 | from __future__ import division 2 | import sys 3 | 4 | #import spacy #this is for if spaCy is used 5 | import tkinter as tk 6 | import tkinter.font 7 | import tkinter.filedialog 8 | 
import tkinter.constants 9 | import queue 10 | from tkinter import messagebox 11 | 12 | import os 13 | import sys 14 | import re 15 | import platform 16 | #import shutil 17 | #import subprocess 18 | import glob 19 | import math 20 | from collections import Counter 21 | try: 22 | import xml.etree.cElementTree as ET 23 | except ImportError: 24 | import xml.etree.ElementTree as ET 25 | 26 | #V1.2 includes a number of lemmatization fixes 27 | #v4 fixes a bug that excluded all upper-case words 28 | 29 | ###THIS IS NEW IN V1.3.py ### 30 | from threading import Thread 31 | 32 | #This creates a queue through which the core TAALED program can communicate with the GUI 33 | dataQueue = queue.Queue() 34 | 35 | #This creates the message for the progress box (and puts it in the dataQueue) 36 | progress = "...Waiting for Data to Process" 37 | dataQueue.put(progress) 38 | 39 | #def1 is the core TAALED function; the remaining args are the information passed to it 40 | def start_thread(def1, arg1, arg2, arg3): 41 | t = Thread(target=def1, args=(arg1, arg2, arg3)) 42 | t.start() 43 | 44 | #This allows for a packaged gui to find the resource files.
45 | def resource_path(relative): 46 | if hasattr(sys, "_MEIPASS"): 47 | return os.path.join(sys._MEIPASS, relative) 48 | return os.path.join(relative) 49 | 50 | color = "#758fa8" 51 | 52 | prog_name = "TAALED 1.4.1" 53 | 54 | if platform.system() == "Darwin": 55 | system = "M" 56 | title_size = 16 57 | font_size = 14 58 | geom_size = "525x500" 59 | elif platform.system() == "Windows": 60 | system = "W" 61 | title_size = 12 62 | font_size = 12 63 | geom_size = "525x475" 64 | elif platform.system() == "Linux": 65 | system = "L" 66 | title_size = 14 67 | font_size = 12 68 | geom_size = "525x385" 69 | 70 | def start_watcher(def2, count, folder): 71 | t2 = Thread(target=def2, args =(count,folder)) 72 | t2.start() 73 | 74 | class MyApp: #this is the class for the gui and the text analysis 75 | def __init__(self, parent): 76 | 77 | helv14= tkinter.font.Font(family= "Helvetica Neue", size=font_size) 78 | times14= tkinter.font.Font(family= "Lucida Grande", size=font_size) 79 | helv16= tkinter.font.Font(family= "Helvetica Neue", size = title_size, weight = "bold", slant = "italic") 80 | 81 | #This defines the GUI parent (ish) 82 | 83 | self.myParent = parent 84 | 85 | self.var_dict = {} 86 | 87 | #This creates the header text - Task:work with this to make more pretty! 
88 | self.spacer1= tk.Label(parent, text= "Tool for the Automatic Analysis of Lexical Diversity", font = helv16, background = color) 89 | self.spacer1.pack() 90 | 91 | #This creates a frame for the meat of the GUI 92 | self.thestuff= tk.Frame(parent, background =color) 93 | self.thestuff.pack() 94 | 95 | self.myContainer1= tk.Frame(self.thestuff, background = color) 96 | self.myContainer1.pack(side = tk.RIGHT, expand= tk.TRUE) 97 | self.instruct = tk.Button(self.myContainer1, text = "Instructions", justify = tk.LEFT) 98 | self.instruct.pack() 99 | self.instruct.bind("", self.instruct_mess) 100 | 101 | self.opt_frame = tk.LabelFrame(self.myContainer1, text= "Options and index selection", background = color) 102 | self.opt_frame.pack(fill = tk.X,expand=tk.TRUE) 103 | 104 | self.options_frame = tk.LabelFrame(self.opt_frame, text= "Word analysis options", background = color) 105 | self.options_frame.pack(fill = tk.X,expand=tk.TRUE) 106 | 107 | #insert checkboxes here 108 | self.aw_choice_var = tk.IntVar() 109 | self.aw_choice = tk.Checkbutton(self.options_frame, text="All words", variable=self.aw_choice_var,background = color) 110 | self.aw_choice.grid(row=1,column=1, sticky = "W") 111 | self.var_dict["aw"] = self.aw_choice_var 112 | 113 | self.cw_choice_var = tk.IntVar() 114 | self.cw_choice = tk.Checkbutton(self.options_frame, text="Content words", variable=self.cw_choice_var,background = color) 115 | self.cw_choice.grid(row=1,column=2, sticky = "W") 116 | self.var_dict["cw"] = self.cw_choice_var 117 | 118 | self.fw_choice_var = tk.IntVar() 119 | self.fw_choice = tk.Checkbutton(self.options_frame, text="Function words", variable=self.fw_choice_var,background = color) 120 | self.fw_choice.grid(row=1,column=3, sticky = "W") 121 | self.var_dict["fw"] = self.fw_choice_var 122 | 123 | self.indices_frame = tk.LabelFrame(self.opt_frame, text= "Index selection", background = color) 124 | self.indices_frame.pack(fill = tk.X,expand=tk.TRUE) 125 | 126 | self.simple_ttr_var = 
tk.IntVar() 127 | self.simple_ttr = tk.Checkbutton(self.indices_frame, text="Simple TTR", variable=self.simple_ttr_var,background = color) 128 | self.simple_ttr.grid(row=1,column=1, sticky = "W") 129 | self.var_dict["simple_ttr"] = self.simple_ttr_var 130 | 131 | self.root_ttr_var = tk.IntVar() 132 | self.root_ttr = tk.Checkbutton(self.indices_frame, text="Root TTR", variable=self.root_ttr_var,background = color) 133 | self.root_ttr.grid(row=1,column=2, sticky = "W") 134 | self.var_dict["root_ttr"] = self.root_ttr_var 135 | 136 | self.bi_log_ttr_var = tk.IntVar() 137 | self.bi_log_ttr = tk.Checkbutton(self.indices_frame, text="Log TTR", variable=self.bi_log_ttr_var,background = color) 138 | self.bi_log_ttr.grid(row=1,column=3, sticky = "W") 139 | self.var_dict["log_ttr"] = self.bi_log_ttr_var 140 | 141 | self.maas_ttr_var = tk.IntVar() 142 | self.maas_ttr = tk.Checkbutton(self.indices_frame, text="Maas", variable=self.maas_ttr_var,background = color) 143 | self.maas_ttr.grid(row=1,column=4, sticky = "W") 144 | self.var_dict["maas_ttr"] = self.maas_ttr_var 145 | 146 | self.mattr_var = tk.IntVar() 147 | self.mattr = tk.Checkbutton(self.indices_frame, text="MATTR", variable=self.mattr_var,background = color) 148 | self.mattr.grid(row=2,column=1, sticky = "W") 149 | self.var_dict["mattr"] = self.mattr_var 150 | 151 | self.msttr_var = tk.IntVar() 152 | self.msttr = tk.Checkbutton(self.indices_frame, text="MSTTR", variable=self.msttr_var,background = color) 153 | self.msttr.grid(row=2,column=2, sticky = "W") 154 | self.var_dict["msttr"] = self.msttr_var 155 | 156 | self.hdd_var = tk.IntVar() 157 | self.hdd = tk.Checkbutton(self.indices_frame, text="HD-D", variable=self.hdd_var,background = color) 158 | self.hdd.grid(row=2,column=3, sticky = "W") 159 | self.var_dict["hdd"] = self.hdd_var 160 | 161 | self.mtld_orig_var = tk.IntVar() 162 | self.mtld_orig = tk.Checkbutton(self.indices_frame, text="MTLD Original", variable=self.mtld_orig_var,background = color) 163 | 
self.mtld_orig.grid(row=3,column=1, sticky = "W") 164 | self.var_dict["mltd"] = self.mtld_orig_var 165 | 166 | self.mltd_ma_var = tk.IntVar() 167 | self.mltd_ma = tk.Checkbutton(self.indices_frame, text="MTLD MA Bi", variable=self.mltd_ma_var,background = color) 168 | self.mltd_ma.grid(row=3,column=2, sticky = "W") 169 | self.var_dict["mltd_ma"] = self.mltd_ma_var 170 | 171 | self.mltd_wrap_var = tk.IntVar() 172 | self.mltd_wrap = tk.Checkbutton(self.indices_frame, text="MTLD MA Wrap", variable=self.mltd_wrap_var,background = color) 173 | self.mltd_wrap.grid(row=3,column=3, sticky = "W") 174 | self.var_dict["mtld_wrap"] = self.mltd_wrap_var 175 | 176 | self.secondframe= tk.LabelFrame(self.myContainer1, text= "Data input", background = color) 177 | self.secondframe.pack(fill = tk.X,expand=tk.TRUE) 178 | 179 | 180 | #Creates default dirname so if statement in Process Texts can check to see 181 | #if a directory name has been chosen 182 | self.dirname = "" 183 | 184 | #This creates a label for the first program input (Input Directory) 185 | self.inputdirlabel =tk.LabelFrame(self.secondframe, height = "1", width= "45", padx = "4", text = "Your selected input folder:", background = color) 186 | self.inputdirlabel.pack(fill = tk.X) 187 | 188 | #Creates label that informs user which directory has been chosen 189 | directoryprompt = "(No Folder Chosen)" 190 | self.inputdirchosen = tk.Label(self.inputdirlabel, height= "1", width= "45", justify=tk.LEFT, padx = "4", anchor = tk.W, font= helv14, text = directoryprompt) 191 | self.inputdirchosen.pack(side = tk.LEFT) 192 | 193 | #This Places the first button under the instructions. 194 | self.button1 = tk.Button(self.inputdirlabel) 195 | self.button1.configure(text= "Select") 196 | self.button1.pack(side = tk.LEFT, padx = 5) 197 | 198 | #This tells the button what to do when clicked. Currently, only a left-click 199 | #makes the button do anything (e.g. ). 
The second argument is a "def" 200 | #That is defined later in the program. 201 | self.button1.bind("<Button-1>", self.button1Click) 202 | #This creates the Output Directory button. 203 | 204 | self.outdirname = "" 205 | 206 | self.out_optframe = tk.LabelFrame(self.secondframe, height = "1", width= "45", padx = "4", text = "Output Options", background = color) 207 | self.out_optframe.pack() 208 | 209 | self.ind_out_var = tk.IntVar() 210 | self.ind_out = tk.Checkbutton(self.out_optframe, text="Individual Item Output", variable=self.ind_out_var,background = color) 211 | self.ind_out.pack(side = tk.LEFT) 212 | self.ind_out.deselect() 213 | self.var_dict["indout"] = self.ind_out_var 214 | 215 | #Creates a label for the second program input (Output Directory) 216 | self.outputdirlabel = tk.LabelFrame(self.secondframe, height = "1", width= "45", padx = "4", text = "Your selected output filename:", background = color) 217 | self.outputdirlabel.pack(fill = tk.X) 218 | 219 | #Creates a label that informs user which directory has been chosen 220 | outdirectoryprompt = "(No Output Filename Chosen)" 221 | self.outputdirchosen = tk.Label(self.outputdirlabel, height= "1", width= "45", justify=tk.LEFT, padx = "4", anchor = tk.W, font= helv14, text = outdirectoryprompt) 222 | self.outputdirchosen.pack(side = tk.LEFT) 223 | 224 | self.button2 = tk.Button(self.outputdirlabel) 225 | self.button2["text"]= "Select" 226 | #This tells the button what to do if clicked.
227 | self.button2.bind("<Button-1>", self.button2Click) 228 | self.button2.pack(side = tk.LEFT, padx = 5) 229 | 230 | self.BottomSpace= tk.LabelFrame(self.myContainer1, text = "Run Program", background = color) 231 | self.BottomSpace.pack() 232 | 233 | self.button3= tk.Button(self.BottomSpace) 234 | self.button3["text"] = "Process Texts" 235 | self.button3.bind("<Button-1>", self.runprogram) 236 | self.button3.pack() 237 | 238 | self.progresslabelframe = tk.LabelFrame(self.BottomSpace, text= "Program Status", background = color) 239 | self.progresslabelframe.pack(expand= tk.TRUE) 240 | 241 | self.progress= tk.Label(self.progresslabelframe, height= "1", width= "45", justify=tk.LEFT, padx = "4", anchor = tk.W, font= helv14, text=progress) 242 | self.progress.pack() 243 | 244 | self.poll(self.progress) 245 | 246 | def poll(self, function): 247 | 248 | self.myParent.after(10, self.poll, function) 249 | try: 250 | function.config(text = dataQueue.get(block=False)) 251 | 252 | except queue.Empty: 253 | pass 254 | 255 | def instruct_mess(self, event): 256 | messagebox.showinfo("Instructions", "1. Click the 'Index Selection and Options' button to select desired indices\n\n2. Choose the input folder (where your files are)\n\n3. If desired, select additional output options (see user manual for details)\n\n4. Choose your output filename\n\n5. Press the 'Process Texts' button") 257 | 258 | def entry1Return(self,event): 259 | input= self.entry1.get() 260 | self.input2 = input + ".csv" 261 | self.filechosenchosen.config(text = self.input2) 262 | self.filechosenchosen.update_idletasks() 263 | 264 | #Following is an example of how we can update the information from users...
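The "Process Texts" button wired above hands control to `main` (defined next), which computes the type-token indices selected via the checkboxes. The four classic TTR variants implemented by the `ttr` helper further down reduce to: simple = types/tokens, root = types/sqrt(tokens), log = log(types)/log(tokens), and Maas = (log(tokens) - log(types))/log(tokens)^2. A standalone sketch of those formulas (the toy token list is illustrative, not from TAALED's data):

```python
import math

def safe_divide(numerator, denominator):
    # Mirrors the program's guard against zero denominators.
    return 0 if denominator == 0 else numerator / denominator

def ttr_family(tokens):
    ntokens = len(tokens)
    ntypes = len(set(tokens))
    return {
        "simple": safe_divide(ntypes, ntokens),
        "root": safe_divide(ntypes, math.sqrt(ntokens)),
        "log": safe_divide(math.log10(ntypes), math.log10(ntokens)),
        "maas": safe_divide(math.log10(ntokens) - math.log10(ntypes),
                            math.pow(math.log10(ntokens), 2)),
    }

# Toy lemmas in the program's "_cw"/"_fw" tagged form (illustrative values)
toy = ["the_fw", "cat_cw_nn", "see_cw_vb", "the_fw"]
scores = ttr_family(toy)
print(scores["simple"])  # prints 0.75 (3 types over 4 tokens)
```

Unlike the other three, lower Maas values indicate greater lexical diversity.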
265 | def button1Click(self, event): 266 | #import Tkinter, 267 | if sys.version_info[0] == 2: 268 | import tkFileDialog 269 | self.dirname = tkFileDialog.askdirectory(parent=root,initialdir="/",title='Please select a directory') 270 | 271 | if sys.version_info[0] == 3: 272 | import tkinter.filedialog 273 | self.dirname = tkinter.filedialog.askdirectory(parent=root,initialdir="/",title='Please select a directory') 274 | 275 | self.displayinputtext = '.../'+self.dirname.split('/')[-1] 276 | self.inputdirchosen.config(text = self.displayinputtext) 277 | 278 | #newmsg= "Chosen" 279 | #self.inputdirchosen.config(text = newmsg) 280 | #self.inputdirchosen.update_idletasks() 281 | 282 | def button2Click(self, event): 283 | #self.outdirname = tkFileDialog.askdirectory(parent=root,initialdir="/",title='Please select a directory') 284 | #if sys.version_info[0] == 2: self.outdirname = tkFileDialog.asksaveasfilename(parent=root, defaultextension = ".csv", initialfile = "results",title='Choose Output Filename') 285 | #if sys.version_info[0] == 3: self.outdirname = tkinter.filedialog.asksaveasfilename(parent=root, defaultextension = ".csv", initialfile = "results",title='Choose Output Filename') 286 | self.outdirname = tkinter.filedialog.asksaveasfilename(parent=root, defaultextension = ".csv", initialfile = "results",title='Choose Output Filename') 287 | 288 | #print(self.outdirname) 289 | if self.outdirname == "": 290 | self.displayoutputtext = "(No Output Filename Chosen)" 291 | else: self.displayoutputtext = '.../' + self.outdirname.split('/')[-1] 292 | self.outputdirchosen.config(text = self.displayoutputtext) 293 | 294 | def runprogram(self, event): 295 | self.poll(self.progress) 296 | start_thread(main, self.dirname, self.outdirname, self.var_dict) 297 | 298 | 299 | #### THIS IS BEGINNING OF PROGRAM ### 300 | def main(indir, outdir, var_dict): 301 | 302 | import tkinter.messagebox 303 | if indir == "": #string equality, not identity ("is") 304 | tkinter.messagebox.showinfo("Supply Information", "Choose Input Directory") 305 | if outdir == "": 306 | tkinter.messagebox.showinfo("Choose Output Directory", "Choose Output Directory") 307 | 308 | 309 | if indir != "" and outdir != "": 310 | dataQueue.put("Starting TAALED...") 311 | 312 | import spacy 313 | from spacy.util import set_data_path 314 | set_data_path(resource_path('en_core_web_sm')) 315 | nlp = spacy.load(resource_path('en_core_web_sm')) 316 | 317 | #thus begins the text analysis portion of the program 318 | adj_word_list = open(resource_path("dep_files/adj_lem_list.txt"), "r",errors = 'ignore').read().split("\n")[:-1] 319 | real_word_list = open(resource_path("dep_files/real_words.txt"), "r",errors = 'ignore').read().split("\n")[:-1] #these are lowered 320 | 321 | ### THESE ARE PERTINENT FOR ALL IMPORTANT INDICES #### 322 | noun_tags = ["NN", "NNS", "NNP", "NNPS"] #consider whether to identify gerunds 323 | proper_n = ["NNP", "NNPS"] 324 | no_proper = ["NN", "NNS"] 325 | pronouns = ["PRP", "PRP$"] 326 | adjectives = ["JJ", "JJR", "JJS"] 327 | verbs = ["VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "MD"] 328 | adverbs = ["RB", "RBR", "RBS"] 329 | content = ["NN", "NNS", "NNP", "NNPS","JJ", "JJR", "JJS"] #note that this is a preliminary list 330 | prelim_not_function = ["NN", "NNS", "NNP", "NNPS","JJ", "JJR", "JJS", "RB", "RBR", "RBS", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "MD"] 331 | pronoun_dict = {"me":"i","him":"he","her":"she"} 332 | punctuation = "`` '' ' . , ? !
) ( % / - _ -LRB- -RRB- SYM : ;".split(" ") 333 | punctuation.append('"') 334 | 335 | #This function deals with denominator issues that can kill the program: 336 | def safe_divide(numerator, denominator): 337 | if denominator == 0 or denominator == 0.0: 338 | index = 0 339 | else: index = numerator/denominator 340 | return index 341 | 342 | def indexer(header_list, index_list, name, index): 343 | header_list.append(name) 344 | index_list.append(index) 345 | 346 | def tag_processor_spaCy(raw_text): #uses default spaCy 2.016 347 | 348 | lemma_list = [] 349 | content_list = [] 350 | function_list = [] 351 | 352 | tagged_text = nlp(raw_text) 353 | 354 | for sent in tagged_text.sents: 355 | for token in sent: 356 | if token.tag_ in punctuation: 357 | continue 358 | if token.text.lower() not in real_word_list: #lowered because real_word_list is lowered 359 | continue 360 | 361 | 362 | if token.tag_ in content: 363 | if token.tag_ in noun_tags: 364 | content_list.append(token.lemma_ + "_cw_nn") 365 | lemma_list.append(token.lemma_ + "_cw_nn") 366 | else: 367 | content_list.append(token.lemma_ + "_cw") 368 | lemma_list.append(token.lemma_ + "_cw") 369 | 370 | if token.tag_ not in prelim_not_function: 371 | if token.tag_ in pronouns: 372 | if token.text.lower() in pronoun_dict: 373 | function_list.append(pronoun_dict[token.text.lower()] + "_fw") 374 | lemma_list.append(pronoun_dict[token.text.lower()] + "_fw") 375 | else: 376 | function_list.append(token.text.lower() + "_fw") 377 | lemma_list.append(token.text.lower() + "_fw") 378 | else: 379 | function_list.append(token.lemma_ + "_fw") 380 | lemma_list.append(token.lemma_ + "_fw") 381 | 382 | if token.tag_ in verbs: 383 | if token.dep_ == "aux": 384 | function_list.append(token.lemma_ + "_fw") 385 | lemma_list.append(token.lemma_ + "_fw") 386 | 387 | elif token.lemma_ == "be": 388 | function_list.append(token.lemma_ + "_fw") 389 | lemma_list.append(token.lemma_ + "_fw") 390 | 391 | else: 392 | 
content_list.append(token.lemma_ + "_cw_vb") 393 | lemma_list.append(token.lemma_ + "_cw_vb") 394 | 395 | if token.tag_ in adverbs: 396 | if (token.lemma_[-2:] == "ly" and token.lemma_[:-2] in adj_word_list) or (token.lemma_[-4:] == "ally" and token.lemma_[:-4] in adj_word_list): 397 | content_list.append(token.lemma_ + "_cw") 398 | lemma_list.append(token.lemma_ + "_cw") 399 | else: 400 | function_list.append(token.lemma_ + "_fw") 401 | lemma_list.append(token.lemma_ + "_fw") 402 | #print(raw_token, lemma_list[-1]) 403 | 404 | return {"lemma" : lemma_list, "content" : content_list, "function":function_list} 405 | 406 | def lex_density(cw_text, fw_text): 407 | n_cw = len(cw_text) 408 | n_fw = len(fw_text) 409 | n_all = n_cw + n_fw 410 | 411 | n_type_cw = len(set(cw_text)) 412 | n_type_fw = len(set(fw_text)) 413 | n_all_type = n_type_cw + n_type_fw 414 | 415 | lex_dens_all = safe_divide(n_cw,n_all) #percentage of content words 416 | lex_dens_cw_fw = safe_divide(n_cw,n_fw) #ratio content words to function words 417 | 418 | lex_dens_all_type = safe_divide(n_type_cw,n_all_type) #percentage of content words 419 | lex_dens_cw_fw_type = safe_divide(n_type_cw,n_type_fw) #ratio content words to function words 420 | 421 | return [lex_dens_all, lex_dens_all_type] 422 | 423 | def ttr(text): 424 | ntokens = len(text) 425 | ntypes = len(set(text)) 426 | 427 | simple_ttr = safe_divide(ntypes,ntokens) 428 | root_ttr = safe_divide(ntypes, math.sqrt(ntokens)) 429 | log_ttr = safe_divide(math.log10(ntypes), math.log10(ntokens)) 430 | maas_ttr = safe_divide((math.log10(ntokens)-math.log10(ntypes)), math.pow(math.log10(ntokens),2)) 431 | 432 | return [simple_ttr,root_ttr,log_ttr,maas_ttr] 433 | 434 | def simple_ttr(text): 435 | ntokens = len(text) 436 | ntypes = len(set(text)) 437 | 438 | return safe_divide(ntypes,ntokens) 439 | 440 | def root_ttr(text): 441 | ntokens = len(text) 442 | ntypes = len(set(text)) 443 | 444 | return safe_divide(ntypes, math.sqrt(ntokens)) 445 | 446 | def 
log_ttr(text): 447 | ntokens = len(text) 448 | ntypes = len(set(text)) 449 | 450 | return safe_divide(math.log10(ntypes), math.log10(ntokens)) 451 | 452 | def maas_ttr(text): 453 | ntokens = len(text) 454 | ntypes = len(set(text)) 455 | 456 | return safe_divide((math.log10(ntokens)-math.log10(ntypes)), math.pow(math.log10(ntokens),2)) 457 | 458 | 459 | def mattr(text, window_length = 50): #from TAACO 2.0.4 460 | 461 | if len(text) < (window_length + 1): 462 | ma_ttr = safe_divide(len(set(text)),len(text)) 463 | 464 | else: 465 | sum_ttr = 0 466 | denom = 0 467 | for x in range(len(text)): 468 | small_text = text[x:(x + window_length)] 469 | if len(small_text) < window_length: 470 | break 471 | denom += 1 472 | sum_ttr+= safe_divide(len(set(small_text)),float(window_length)) 473 | ma_ttr = safe_divide(sum_ttr,denom) 474 | 475 | return ma_ttr 476 | 477 | def msttr(text, window_length = 50): 478 | 479 | if len(text) < (window_length + 1): 480 | ms_ttr = safe_divide(len(set(text)),len(text)) 481 | 482 | else: 483 | sum_ttr = 0 484 | denom = 0 485 | 486 | n_segments = int(safe_divide(len(text),window_length)) 487 | seed = 0 488 | for x in range(n_segments): 489 | sub_text = text[seed:seed+window_length] 490 | #print sub_text 491 | sum_ttr += safe_divide(len(set(sub_text)), len(sub_text)) 492 | denom+=1 493 | seed+=window_length 494 | 495 | ms_ttr = safe_divide(sum_ttr, denom) 496 | 497 | return ms_ttr 498 | 499 | def hdd(text): 500 | #requires Counter import 501 | def choose(n, k): #calculate binomial 502 | """ 503 | A fast way to calculate binomial coefficients by Andrew Dalke (contrib). 
504 | """ 505 | if 0 <= k <= n: 506 | ntok = 1 507 | ktok = 1 508 | for t in range(1, min(k, n - k) + 1): #this was changed to "range" from "xrange" for py3 509 | ntok *= n 510 | ktok *= t 511 | n -= 1 512 | return ntok // ktok 513 | else: 514 | return 0 515 | 516 | def hyper(successes, sample_size, population_size, freq): #calculate hypergeometric distribution 517 | #probability a word will occur at least once in a sample of a particular size 518 | try: 519 | prob_1 = 1.0 - (float((choose(freq, successes) * choose((population_size - freq),(sample_size - successes)))) / float(choose(population_size, sample_size))) 520 | prob_1 = prob_1 * (1/sample_size) 521 | except ZeroDivisionError: 522 | prob_1 = 0 523 | 524 | return prob_1 525 | 526 | prob_sum = 0.0 527 | ntokens = len(text) 528 | types_list = list(set(text)) 529 | frequency_dict = Counter(text) 530 | 531 | for items in types_list: 532 | prob = hyper(0,42,ntokens,frequency_dict[items]) #random sample is 42 items in length 533 | prob_sum += prob 534 | 535 | return prob_sum 536 | 537 | 538 | def mtld_original(input, min = 10): 539 | def mtlder(text): 540 | factor = 0 541 | factor_lengths = 0 542 | start = 0 543 | for x in range(len(text)): 544 | factor_text = text[start:x+1] 545 | if x+1 == len(text): 546 | factor += safe_divide((1 - ttr(factor_text)[0]),(1 - .72)) 547 | factor_lengths += len(factor_text) 548 | else: 549 | if ttr(factor_text)[0] < .720 and len(factor_text) >= min: 550 | factor += 1 551 | factor_lengths += len(factor_text) 552 | start = x+1 553 | else: 554 | continue 555 | 556 | mtld = safe_divide(factor_lengths,factor) 557 | return mtld 558 | input_reversed = list(reversed(input)) 559 | mtld_full = safe_divide((mtlder(input)+mtlder(input_reversed)),2) 560 | return mtld_full 561 | 562 | def mtld_bi_directional_ma(text, min = 10): 563 | def mtld_ma(text, min = 10): 564 | factor = 0 565 | factor_lengths = 0 566 | for x in range(len(text)): 567 | sub_text = text[x:] 568 | breaker = 0 569 | for y in 
range(len(sub_text)): 570 | if breaker == 0: 571 | factor_text = sub_text[:y+1] 572 | if ttr(factor_text)[0] < .720 and len(factor_text) >= min: 573 | factor += 1 574 | factor_lengths += len(factor_text) 575 | breaker = 1 576 | else: 577 | continue 578 | mtld = safe_divide(factor_lengths,factor) 579 | return mtld 580 | 581 | forward = mtld_ma(text) 582 | backward = mtld_ma(list(reversed(text))) 583 | 584 | mtld_bi = safe_divide((forward + backward), 2) #average of forward and backward mtld 585 | 586 | return mtld_bi 587 | 588 | 589 | def mtld_ma_wrap(text, min = 10): 590 | factor = 0 591 | factor_lengths = 0 592 | start = 0 593 | double_text = text + text #allows wraparound 594 | for x in range(len(text)): 595 | breaker = 0 596 | sub_text = double_text[x:] 597 | for y in range(len(sub_text)): 598 | if breaker == 0: 599 | factor_text = sub_text[:y+1] 600 | if ttr(factor_text)[0] < .720 and len(factor_text) >= min: 601 | factor += 1 602 | factor_lengths += len(factor_text) 603 | breaker = 1 604 | else: 605 | continue 606 | mtld = safe_divide(factor_lengths,factor) 607 | return mtld 608 | 609 | 610 | #### END DEFINED FUNCTIONS #### 611 | 612 | for keys in var_dict: 613 | try: 614 | if var_dict[keys].get() == 1: 615 | var_dict[keys] = 1 616 | else: var_dict[keys] = 0 617 | except AttributeError: 618 | continue 619 | 620 | inputfile = indir + "/*.txt" 621 | outf=open(outdir, "w") 622 | 623 | filenames = glob.glob(inputfile) 624 | file_number = 0 625 | 626 | if var_dict["indout"] == 1: 627 | directory = outdir[:-4] + "_diagnostic/" #this is for diagnostic file 628 | if not os.path.exists(directory): 629 | os.makedirs(directory) 630 | 631 | for the_file in os.listdir(directory): #this cleans out the old diagnostic file (if applicable) 632 | file_path = os.path.join(directory, the_file) 633 | os.unlink(file_path) 634 | 635 | 636 | nfiles = len(filenames) 637 | file_counter = 1 638 | 639 | 640 | for filename in filenames: 641 | 642 | if system == "M" or system == "L": 643 | 
simple_filename = filename.split("/")[-1] 644 | 645 | if system == "W": 646 | simple_filename = filename.split("\\")[-1] 647 | if "/" in simple_filename: 648 | simple_filename = simple_filename.split("/")[-1] 649 | 650 | #print(simple_filename) 651 | 652 | if var_dict["indout"] == 1: 653 | basic_diag_file_name = directory + simple_filename[:-4] + "_processed.txt" 654 | basic_diag_file = open(basic_diag_file_name, "w") 655 | 656 | index_list = [simple_filename] 657 | header_list = ["filename"] 658 | 659 | #updates Program Status 660 | filename1 = ("Processing: " + str(file_counter) + " of " + str(nfiles) + " files") 661 | dataQueue.put(filename1) 662 | root.update_idletasks() 663 | 664 | file_counter+=1 665 | 666 | if system == "M" or system == "L": 667 | filename_2 = filename.split("/")[-1] 668 | elif system == "W": 669 | filename_2 = filename.split("\\")[-1] 670 | 671 | raw_text= open(filename, "r", errors = 'ignore').read() 672 | raw_text = re.sub('\s+',' ',raw_text) 673 | #while " " in raw_text: 674 | #raw_text = raw_text.replace(" ", " ") 675 | 676 | refined_lemma_dict = tag_processor_spaCy(raw_text) 677 | 678 | lemma_text_aw = refined_lemma_dict["lemma"] 679 | 680 | lemma_text_cw = refined_lemma_dict["content"] 681 | lemma_text_fw = refined_lemma_dict["function"] 682 | 683 | indexer(header_list, index_list, "basic_ntokens", len(lemma_text_aw)) 684 | indexer(header_list, index_list, "basic_ntypes", len(set(lemma_text_aw))) 685 | indexer(header_list, index_list, "basic_ncontent_tokens", len(lemma_text_cw)) 686 | indexer(header_list, index_list, "basic_ncontent_types", len(set(lemma_text_cw))) 687 | indexer(header_list, index_list, "basic_nfunction_tokens", len(lemma_text_fw)) 688 | indexer(header_list, index_list, "basic_nfunction_types", len(set(lemma_text_fw))) 689 | 690 | indexer(header_list, index_list, "lexical_density_types", lex_density(lemma_text_cw, lemma_text_fw)[1]) 691 | indexer(header_list, index_list, "lexical_density_tokens", 
lex_density(lemma_text_cw, lemma_text_fw)[0]) 692 | 693 | if var_dict["simple_ttr"] == 1: 694 | if var_dict["aw"] ==1: 695 | indexer(header_list, index_list, "simple_ttr_aw", ttr(lemma_text_aw)[0]) 696 | 697 | if var_dict["cw"] ==1: 698 | indexer(header_list, index_list, "simple_ttr_cw", ttr(lemma_text_cw)[0]) 699 | if var_dict["fw"] ==1: 700 | indexer(header_list, index_list, "simple_ttr_fw", ttr(lemma_text_fw)[0]) 701 | 702 | if var_dict["root_ttr"] == 1: 703 | if var_dict["aw"] ==1: 704 | indexer(header_list, index_list, "root_ttr_aw", ttr(lemma_text_aw)[1]) 705 | 706 | if var_dict["cw"] ==1: 707 | indexer(header_list, index_list, "root_ttr_cw", ttr(lemma_text_cw)[1]) 708 | if var_dict["fw"] ==1: 709 | indexer(header_list, index_list, "root_ttr_fw", ttr(lemma_text_fw)[1]) 710 | 711 | if var_dict["log_ttr"] == 1: 712 | if var_dict["aw"] ==1: 713 | indexer(header_list, index_list, "log_ttr_aw", ttr(lemma_text_aw)[2]) 714 | 715 | if var_dict["cw"] ==1: 716 | indexer(header_list, index_list, "log_ttr_cw", ttr(lemma_text_cw)[2]) 717 | if var_dict["fw"] ==1: 718 | indexer(header_list, index_list, "log_ttr_fw", ttr(lemma_text_fw)[2]) 719 | 720 | if var_dict["maas_ttr"] == 1: 721 | if var_dict["aw"] ==1: 722 | indexer(header_list, index_list, "maas_ttr_aw", ttr(lemma_text_aw)[3]) 723 | 724 | if var_dict["cw"] ==1: 725 | indexer(header_list, index_list, "maas_ttr_cw", ttr(lemma_text_cw)[3]) 726 | if var_dict["fw"] ==1: 727 | indexer(header_list, index_list, "maas_ttr_fw", ttr(lemma_text_fw)[3]) 728 | 729 | if var_dict["mattr"] == 1: 730 | if var_dict["aw"] ==1: 731 | indexer(header_list, index_list, "mattr50_aw", mattr(lemma_text_aw,50)) 732 | 733 | if var_dict["cw"] ==1: 734 | indexer(header_list, index_list, "mattr50_cw", mattr(lemma_text_cw,50)) 735 | if var_dict["fw"] ==1: 736 | indexer(header_list, index_list, "mattr50_fw", mattr(lemma_text_fw,50)) 737 | 738 | if var_dict["msttr"] == 1: 739 | if var_dict["aw"] ==1: 740 | indexer(header_list, index_list, 
"msttr50_aw", msttr(lemma_text_aw,50)) 741 | if var_dict["cw"] ==1: 742 | indexer(header_list, index_list, "msttr50_cw", msttr(lemma_text_cw,50)) 743 | if var_dict["fw"] ==1: 744 | indexer(header_list, index_list, "msttr50_fw", msttr(lemma_text_fw,50)) 745 | 746 | if var_dict["hdd"] == 1: 747 | if var_dict["aw"] ==1: 748 | indexer(header_list, index_list, "hdd42_aw", hdd(lemma_text_aw)) 749 | 750 | if var_dict["cw"] ==1: 751 | indexer(header_list, index_list, "hdd42_cw", hdd(lemma_text_cw)) 752 | if var_dict["fw"] ==1: 753 | indexer(header_list, index_list, "hdd42_fw", hdd(lemma_text_fw)) 754 | 755 | if var_dict["mltd"] == 1: 756 | if var_dict["aw"] ==1: 757 | indexer(header_list, index_list, "mtld_original_aw", mtld_original(lemma_text_aw)) 758 | 759 | if var_dict["cw"] ==1: 760 | indexer(header_list, index_list, "mtld_original_cw", mtld_original(lemma_text_cw)) 761 | if var_dict["fw"] ==1: 762 | indexer(header_list, index_list, "mtld_original_fw", mtld_original(lemma_text_fw)) 763 | 764 | if var_dict["mltd_ma"] == 1: 765 | if var_dict["aw"] ==1: 766 | indexer(header_list, index_list, "mtld_ma_bi_aw", mtld_bi_directional_ma(lemma_text_aw)) 767 | 768 | if var_dict["cw"] ==1: 769 | indexer(header_list, index_list, "mtld_ma_bi_cw", mtld_bi_directional_ma(lemma_text_cw)) 770 | if var_dict["fw"] ==1: 771 | indexer(header_list, index_list, "mtld_ma_bi_fw", mtld_bi_directional_ma(lemma_text_fw)) 772 | 773 | if var_dict["mtld_wrap"] == 1: 774 | if var_dict["aw"] ==1: 775 | indexer(header_list, index_list, "mtld_ma_wrap_aw", mtld_ma_wrap(lemma_text_aw)) 776 | 777 | if var_dict["cw"] ==1: 778 | indexer(header_list, index_list, "mtld_ma_wrap_cw", mtld_ma_wrap(lemma_text_cw)) 779 | if var_dict["fw"] ==1: 780 | indexer(header_list, index_list, "mtld_ma_wrap_fw", mtld_ma_wrap(lemma_text_fw)) 781 | 782 | #### output for user ### 783 | if var_dict["indout"] == 1: 784 | 785 | basic_diag_file.write("tokens\n\n") 786 | 787 | for diags in lemma_text_aw: 788 | try: 789 | 
basic_diag_file.write(diags+"\n")
790 |                 except UnicodeEncodeError:
791 |                     basic_diag_file.write("encoding error!\n")
792 | 
793 |             basic_diag_file.write("\ntypes\n\n")
794 | 
795 |             for diags in list(set(lemma_text_aw)):
796 |                 try:
797 |                     basic_diag_file.write(diags+"\n")
798 |                 except UnicodeEncodeError:
799 |                     basic_diag_file.write("encoding error!\n")
800 | 
801 |             basic_diag_file.flush()
802 | 
803 |         ### end output for user ###
804 | 
805 |         if file_number == 0:
806 |             header_out = ",".join(header_list) + "\n"
807 |             outf.write(header_out)
808 |             file_number +=1
809 | 
810 |         out_list = []
811 |         for vars in index_list:
812 |             out_list.append(str(vars))
813 |         outstring = ",".join(out_list) + "\n"
814 |         outf.write(outstring)
815 | 
816 |     outf.close() #ensure the results file is flushed to disk before reporting completion
817 |     nfiles = len(filenames)
818 |     finishmessage = ("Processed " + str(nfiles) + " Files")
819 |     dataQueue.put(finishmessage)
820 |     if system == "M":
821 |         messagebox.showinfo("Finished!", "TAALED has converted your files to numbers.\n\n Now the real work begins!")
822 | 
823 | 
824 | if __name__ == '__main__':
825 |     root = tk.Tk()
826 |     root.wm_title(prog_name)
827 |     root.configure(background = color)
828 |     root.geometry(geom_size)
829 |     myapp = MyApp(root)
830 |     root.mainloop()
--------------------------------------------------------------------------------
/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/accuracy.json:
--------------------------------------------------------------------------------
1 | {
2 | "uas":91.7237657538,
3 | "las":89.800872413,
4 | "ents_p":84.9664503965,
5 | "ents_r":85.6312524451,
6 | "ents_f":85.2975560875,
7 | "tags_acc":97.0403350292,
8 | "token_acc":99.8698372794
9 | }
--------------------------------------------------------------------------------
/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/meta.json:
--------------------------------------------------------------------------------
1 | {
2 | "lang":"en",
3 | "pipeline":[
4 | "tagger",
5 | "parser",
6 | "ner"
7 | ],
8 | "accuracy":{
9 | "token_acc":99.8698372794,
10 | 
"ents_p":84.9664503965, 11 | "ents_r":85.6312524451, 12 | "uas":91.7237657538, 13 | "tags_acc":97.0403350292, 14 | "ents_f":85.2975560875, 15 | "las":89.800872413 16 | }, 17 | "name":"core_web_sm", 18 | "license":"CC BY-SA 3.0", 19 | "author":"Explosion AI", 20 | "url":"https://explosion.ai", 21 | "vectors":{ 22 | "keys":0, 23 | "width":0, 24 | "vectors":0 25 | }, 26 | "sources":[ 27 | "OntoNotes 5", 28 | "Common Crawl" 29 | ], 30 | "version":"2.0.0", 31 | "spacy_version":">=2.0.0a18", 32 | "parent_package":"spacy", 33 | "speed":{ 34 | "gpu":null, 35 | "nwords":291344, 36 | "cpu":5122.3040471407 37 | }, 38 | "email":"contact@explosion.ai", 39 | "description":"English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. Assigns word vectors, context-specific token vectors, POS tags, dependency parse and named entities." 40 | } -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/cfg: -------------------------------------------------------------------------------- 1 | { 2 | "beam_width":1, 3 | "beam_density":0.0, 4 | "pretrained_dims":0, 5 | "cnn_maxout_pieces":3, 6 | "nr_class":73, 7 | "hidden_depth":1, 8 | "token_vector_width":128, 9 | "hidden_width":200, 10 | "maxout_pieces":2, 11 | "hist_size":0, 12 | "hist_width":0 13 | } -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/lower_model: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/lower_model -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/moves: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/moves -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/tok2vec_model: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/tok2vec_model -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/upper_model: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/ner/upper_model -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/cfg: -------------------------------------------------------------------------------- 1 | { 2 | "beam_width":1, 3 | "beam_density":0.0, 4 | "pretrained_dims":0, 5 | "cnn_maxout_pieces":3, 6 | "nr_class":111, 7 | "hidden_depth":1, 8 | "token_vector_width":128, 9 | "hidden_width":200, 10 | "maxout_pieces":2, 11 | "hist_size":0, 12 | "hist_width":0 13 | } -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/lower_model: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/lower_model -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/moves: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/moves -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/tok2vec_model: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/tok2vec_model -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/upper_model: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/parser/upper_model -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/tagger/cfg: -------------------------------------------------------------------------------- 1 | { 2 | "cnn_maxout_pieces":2, 3 | "pretrained_dims":0 4 | } -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/tagger/model: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/tagger/model -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/tagger/tag_map: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/tagger/tag_map 
-------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/tokenizer: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/tokenizer -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/vocab/key2row: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/vocab/key2row -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/vocab/lexemes.bin: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/vocab/lexemes.bin -------------------------------------------------------------------------------- /TAALED_1_3_1_Py3/dep_files/en_core_web_sm/vocab/vectors: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_1_3_1_Py3/dep_files/en_core_web_sm/vocab/vectors -------------------------------------------------------------------------------- /TAALED_Index_Description.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kristopherkyle/TAALED/61dfe70a794f729a14c23f4b3ba870a888877012/TAALED_Index_Description.xlsx --------------------------------------------------------------------------------
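
The TAALED source dumped above computes its diversity indices inside a Tkinter GUI, which makes them awkward to try in isolation. As a sanity check on the math, here is a minimal, GUI-free sketch of three of those indices (the simple/root/log/Maas TTR family, MATTR, and bidirectional MTLD), reproduced from the source with the same thresholds (window of 50, MTLD factor cutoff of .72). Function names mirror the source; the `min_len` parameter name is my renaming (the source calls it `min`, shadowing the builtin), and this is an illustrative sketch, not a substitute for running TAALED itself.

```python
import math

def safe_divide(numerator, denominator):
    # Guards the indices against empty texts: a zero denominator yields 0.
    if denominator == 0:
        return 0
    return numerator / denominator

def ttr(text):
    """Return [simple, root, log, Maas] TTR for a list of token strings."""
    ntokens = len(text)
    ntypes = len(set(text))
    simple_ttr = safe_divide(ntypes, ntokens)
    root_ttr = safe_divide(ntypes, math.sqrt(ntokens))
    log_ttr = safe_divide(math.log10(ntypes), math.log10(ntokens))
    maas_ttr = safe_divide(math.log10(ntokens) - math.log10(ntypes),
                           math.pow(math.log10(ntokens), 2))
    return [simple_ttr, root_ttr, log_ttr, maas_ttr]

def mattr(text, window_length=50):
    """Moving-average TTR: mean simple TTR over every full-length window."""
    if len(text) < window_length + 1:
        return safe_divide(len(set(text)), len(text))
    sum_ttr = 0.0
    denom = 0
    for x in range(len(text) - window_length + 1):
        window = text[x:x + window_length]
        sum_ttr += safe_divide(len(set(window)), float(window_length))
        denom += 1
    return safe_divide(sum_ttr, denom)

def mtld_original(tokens, min_len=10):
    """Bidirectional MTLD: mean tokens per factor, a factor closing when the
    running TTR drops below .72; the leftover stretch earns partial credit."""
    def mtlder(text):
        factor = 0.0
        factor_lengths = 0
        start = 0
        for x in range(len(text)):
            factor_text = text[start:x + 1]
            if x + 1 == len(text):
                # Partial factor for whatever text remains at the end.
                factor += safe_divide(1 - ttr(factor_text)[0], 1 - .72)
                factor_lengths += len(factor_text)
            elif ttr(factor_text)[0] < .720 and len(factor_text) >= min_len:
                factor += 1
                factor_lengths += len(factor_text)
                start = x + 1
        return safe_divide(factor_lengths, factor)
    # Average the forward and backward passes, as in the source.
    return safe_divide(mtlder(tokens) + mtlder(list(reversed(tokens))), 2)
```

On a maximally repetitive text like `["a", "b"] * 20`, each MTLD factor closes after exactly ten tokens, which makes the index easy to verify by hand; a real TAALED run feeds these functions the `_cw`/`_fw`-suffixed lemma lists produced by `tag_processor_spaCy`.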