├── .Rprofile
├── .gitignore
├── CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE.md
├── R
│   ├── funs_data-cleaning.R
│   ├── funs_knitting.R
│   ├── funs_notebook.R
│   ├── models_analysis.R
│   ├── models_details.R
│   ├── models_lhr.R
│   └── models_pts.R
├── README.Rmd
├── README.html
├── README.md
├── _packages.R
├── _targets.R
├── analysis
│   ├── 01_data-overview.Rmd
│   ├── 02_descriptive-analysis.Rmd
│   ├── 03_environment-repression.Rmd
│   ├── 03_laws-repression.Rmd
│   ├── 03_model-details.Rmd
│   ├── 03_modeling-choices.Rmd
│   ├── 04_predictions.Rmd
│   ├── Makefile
│   ├── _site.yml
│   ├── html
│   │   ├── fixes.css
│   │   └── footer.html
│   ├── index.Rmd
│   ├── options.R
│   └── output
├── cautioning-canary.Rproj
├── data
│   ├── DO-NOT-EDIT-ANY-FILES-IN-HERE-BY-HAND
│   ├── derived_data
│   │   ├── panel.csv
│   │   ├── panel.rds
│   │   ├── panel_lagged.csv
│   │   └── panel_lagged.rds
│   └── raw_data
│       ├── Chaudhry restrictions
│       │   └── SC_Expanded.dta
│       ├── Civicus
│       │   ├── civicus_2021-03-19.json
│       │   └── index_2021-03-19.html
│       ├── Country_Year_V-Dem_Full+others_R_v10
│       │   ├── V-Dem Cautionary Notes v10.pdf
│       │   ├── V-Dem Codebook v10.pdf
│       │   ├── V-Dem Suggested Citation v10.pdf
│       │   ├── V-Dem-CY-Full+Others-v10.rds
│       │   └── What's New.pdf
│       ├── Latent Human Rights Protection Scores
│       │   └── HumanRightsProtectionScores_v4.01.csv
│       ├── Political Terror Scale
│       │   ├── PTS-2019.RData
│       │   └── PTS-Codebook-V120.pdf
│       ├── UCDP PRIO
│       │   └── ucdp-prio-acd-191.csv
│       ├── UN data
│       │   ├── UNdata_Export_20210118_034054729.csv
│       │   ├── UNdata_Export_20210118_034311252.csv
│       │   └── WPP2019_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx
│       └── ne_110m_admin_0_countries
│           ├── ne_110m_admin_0_countries.README.html
│           ├── ne_110m_admin_0_countries.VERSION.txt
│           ├── ne_110m_admin_0_countries.cpg
│           ├── ne_110m_admin_0_countries.dbf
│           ├── ne_110m_admin_0_countries.prj
│           ├── ne_110m_admin_0_countries.shp
│           └── ne_110m_admin_0_countries.shx
├── img
│   ├── data_large_color.png
│   └── materials_large_color.png
├── lib
│   ├── graphics.R
│   └── presentation_graphs.R
├── manuscript
│   ├── _output.yaml
│   ├── appendix.Rmd
│   ├── bibliography.bib
│   ├── manuscript.Rmd
│   ├── output
│   │   ├── extracted-citations.bib
│   │   └── html-support
│   │       ├── ath-1.0.0
│   │       │   └── ath-clean.css
│   │       ├── header-attrs-2.11
│   │       │   └── header-attrs.js
│   │       ├── header-attrs-2.7
│   │       │   └── header-attrs.js
│   │       ├── kePrint-0.0.1
│   │       │   └── kePrint.js
│   │       └── lightable-0.0.1
│   │           └── lightable.css
│   └── pandoc
│       ├── bin
│       │   ├── anonymize.py
│       │   └── replacements.csv
│       ├── csl
│       │   ├── american-political-science-association.csl
│       │   ├── apa.csl
│       │   ├── apsa-no-bib.csl
│       │   ├── chicago-author-date.csl
│       │   ├── chicago-fullnote-no-bib.csl
│       │   ├── chicago-syllabus-no-bib.csl
│       │   └── chicago-syllabus.csl
│       ├── css
│       │   └── ath-clean.css
│       └── templates
│           ├── ath-manuscript.docx
│           ├── html.html
│           ├── odt-manuscript.odt
│           ├── odt.odt
│           ├── reference-manuscript.odt
│           ├── reference.odt
│           ├── xelatex-manuscript.tex
│           └── xelatex.tex
├── renv.lock
└── renv
    ├── .gitignore
    ├── activate.R
    └── settings.dcf
/.Rprofile:
--------------------------------------------------------------------------------
1 | source("renv/activate.R")
2 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # R stuff
2 | .Rproj.user
3 | .Rhistory
4 | .RData
5 | .Ruserdata
6 |
7 | # Site output
8 | analysis/_site/*
9 | analysis/site_libs/*
10 |
11 | # Targets stuff
12 | _targets
13 |
14 | # knitr and caching stuff
15 | */*_files/*
16 | */*_cache/*
17 | analysis/cache/*
18 |
19 | # Manuscript output
20 | manuscript/*.log
21 | manuscript/output/*.docx
22 | manuscript/output/*.html
23 | manuscript/output/*.pdf
24 | manuscript/output/*.tex
25 | manuscript/output/figures/*
26 | manuscript/pandoc/bin/vc*
27 | manuscript/*.ent
28 |
29 | # Other folders
30 | admin/*
31 | manuscript/submissions/*
32 | analysis/• sandbox/*
33 |
34 | # Miscellaneous
35 | *.~lock.*
36 | .DS_Store
37 | .dropbox
38 | Icon?
39 |
--------------------------------------------------------------------------------
/CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Code of Conduct
2 |
3 | As contributors and maintainers of this project, we pledge to respect all people who
4 | contribute through reporting issues, posting feature requests, updating documentation,
5 | submitting pull requests or patches, and other activities.
6 |
7 | We are committed to making participation in this project a harassment-free experience for
8 | everyone, regardless of level of experience, gender, gender identity and expression,
9 | sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion.
10 |
11 | Examples of unacceptable behavior by participants include the use of sexual language or
12 | imagery, derogatory comments or personal attacks, trolling, public or private harassment,
13 | insults, or other unprofessional conduct.
14 |
15 | Project maintainers have the right and responsibility to remove, edit, or reject comments,
16 | commits, code, wiki edits, issues, and other contributions that are not aligned to this
17 | Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed
18 | from the project team.
19 |
20 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by
21 | opening an issue or contacting one or more of the project maintainers.
22 |
23 | This Code of Conduct is adapted from the Contributor Covenant
 24 | (http://contributor-covenant.org), version 1.0.0, available at
25 | http://contributor-covenant.org/version/1/0/0/
26 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing
2 |
3 | We love pull requests from everyone. By participating in this project, you
4 | agree to abide by our [code of conduct](CONDUCT.md).
5 |
6 | ## Getting Started
7 |
 8 | * Make sure you have a [GitHub account](https://github.com/signup/free). If you are not familiar with git and GitHub, take a look at a tutorial to get started.
9 | * [Submit a post for your issue](https://github.com///issues/), assuming one does not already exist.
10 | * Clearly describe your issue, including steps to reproduce when it is a bug, or some justification for a proposed improvement.
11 | * [Fork](https://github.com///#fork-destination-box) the repository on GitHub to make a copy of the repository on your account. Or use this line in your shell terminal:
12 |
13 | `git clone git@github.com:your-username/.git`
14 |
15 | ## Making changes
16 |
17 | * Edit the files, save often, and make commits of logical units, where each commit indicates one concept
18 | * Follow our [style guide](http://adv-r.had.co.nz/Style.html).
19 | * Make sure you write [good commit messages](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
20 | * Make sure you have added the necessary tests for your code changes.
 21 | * Run _all_ the tests using `devtools::check()` to ensure nothing else was accidentally broken.
 22 | * If you need help or are unsure about anything, post an update to [your issue](https://github.com///issues/).
23 |
24 | ## Submitting your changes
25 |
26 | Push to your fork and [submit a pull request](https://github.com///compare/).
27 |
 28 | At this point you're waiting on us. We try to at least comment on pull requests
 29 | within a few days (typically within one business day). We may suggest
 30 | some changes, improvements, or alternatives.
31 |
32 | Some things you can do that will increase the chance that your pull request is accepted:
33 |
34 | * Engage in discussion on [your issue](https://github.com///issues/).
 35 | * Be familiar with the background literature cited in the [README](README.Rmd).
36 | * Write tests that pass.
37 | * Follow our [code style guide](http://adv-r.had.co.nz/Style.html).
38 | * Write a [good commit message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
39 |
40 |
41 |
42 |
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | # MIT License
2 |
3 | Copyright (c) 2021 Andrew Heiss and Suparna Chaudhry
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/R/funs_knitting.R:
--------------------------------------------------------------------------------
1 | library(bib2df)
2 | library(rvest)
3 | library(xml2)
4 | library(stringi)
5 |
6 | render_html <- function(input, output, csl, support_folder, ...) {
7 | # Add CSS file as a dependency so it goes into the _files directory
8 | dep <- htmltools::htmlDependency(
9 | name = "ath",
10 | version = "1.0.0",
11 | "pandoc/css",
12 | stylesheet = "ath-clean.css"
13 | )
14 | extra_dependencies <- list(dep)
15 |
16 | # IMPORTANT
17 | # When knitting to docx, bookdown deletes the _files directory for whatever
18 | # reason, so if you knit to HTML and then docx, you get a nice *_files
 19 | # directory that then disappears when the Word file is done. One way around
20 | # this is to specify lib_dir here in rmarkdown::render()
21 |
22 | out <- rmarkdown::render(
23 | input = input,
24 | output_file = output,
25 | bookdown::html_document2(
26 | template = "pandoc/templates/html.html",
27 | pandoc_args = c("--metadata", "link-citations=true",
28 | "--metadata", "linkReferences=true",
29 | paste0("--csl=", csl)),
30 | md_extensions = "+raw_tex+smart-autolink_bare_uris+ascii_identifiers",
31 | toc = TRUE,
32 | number_sections = FALSE,
33 | self_contained = FALSE,
34 | theme = NULL,
35 | extra_dependencies = extra_dependencies,
36 | lib_dir = support_folder
37 | ),
38 | encoding = "UTF-8"
39 | )
40 |
41 | return(fs::path_rel(out))
42 | }
43 |
44 | render_pdf <- function(input, output, bibstyle, ...) {
45 | out <- rmarkdown::render(
46 | input = input,
47 | output_file = output,
48 | bookdown::pdf_document2(
49 | template = "pandoc/templates/xelatex.tex",
50 | latex_engine = "xelatex",
51 | dev = "cairo_pdf",
52 | pandoc_args = c("--top-level-division=section",
53 | "--shift-heading-level-by=0",
54 | "-V", bibstyle,
55 | "-V", "chapterstyle=hikma-article"),
56 | md_extensions = "+raw_tex+smart-autolink_bare_uris",
57 | toc = FALSE,
58 | keep_tex = FALSE,
59 | citation_package = "biblatex"
60 | ),
61 | encoding = "UTF-8"
62 | )
63 |
64 | return(fs::path_rel(out))
65 | }
66 |
67 | render_pdf_ms <- function(input, output, bibstyle, ...) {
68 | out <- rmarkdown::render(
69 | input = input,
70 | output_file = output,
71 | bookdown::pdf_document2(
72 | template = "pandoc/templates/xelatex-manuscript.tex",
73 | latex_engine = "xelatex",
74 | dev = "cairo_pdf",
75 | pandoc_args = c("--top-level-division=section",
76 | "--shift-heading-level-by=0",
77 | "-V", bibstyle),
78 | md_extensions = "+raw_tex+smart-autolink_bare_uris",
79 | toc = FALSE,
80 | keep_tex = FALSE,
81 | citation_package = "biblatex"
82 | ),
83 | encoding = "UTF-8"
84 | )
85 |
86 | return(fs::path_rel(out))
87 | }
88 |
89 | render_docx <- function(input, output, csl, ...) {
90 | out <- rmarkdown::render(
91 | input = input,
92 | output_file = output,
93 | bookdown::word_document2(
94 | reference_docx = "pandoc/templates/ath-manuscript.docx",
95 | pandoc_args = c(paste0("--csl=", csl)),
96 | md_extensions = "+raw_tex+smart-autolink_bare_uris",
97 | toc = FALSE,
98 | number_sections = FALSE
99 | ),
100 | encoding = "UTF-8"
101 | )
102 |
103 | return(fs::path_rel(out))
104 | }
105 |
106 |
107 | extract_bib <- function(input_rmd, input_bib, output, ...) {
108 | # Load the document
109 | document <- readLines(input_rmd)
110 |
111 | # Find all the citation-looking things in the document. This picks up e-mail
112 | # addresses (like example@gsu.edu) and it picks up bookdown-style references,
113 | # like \@ref(fig:thing). I filter out the "ref" entries, but leave the e-mail
114 | # addresses because there's too much possible variation there to easily remove
115 | # them. As long as there aren't citation keys like "gsu" or "gmail", it should
116 | # be fine. I also remove pandoc-crossref-style references like @fig:thing,
117 | # @tbl:thing, @eq:thing, and @sec:thing
118 | found_citations <- document %>%
119 | map(~as_tibble(str_match_all(., "@([[:alnum:]:&!~=_+-]+)")[[1]][,2])) %>%
120 | bind_rows() %>%
121 | filter(value != "ref",
122 | !str_starts(value, "fig:"),
123 | !str_starts(value, "tbl:"),
124 | !str_starts(value, "eq:"),
125 | !str_starts(value, "sec:")) %>%
126 | distinct(value) %>%
127 | pull(value)
128 |
129 | # Load the bibliography and convert to a data frame
130 | # When year is the last key in an entry and the closing entry brace is on the
131 | # same line, like `Year = {2006}}`, bib2df() parses the year as "2006}" and
132 | # includes the closing }, which then causes warnings about year not being an
133 | # integer. So here I remove all curly braces from the YEAR column
134 | suppressWarnings(suppressMessages({
135 | bib_df <- bib2df(input_bib) %>%
136 | mutate(YEAR = str_remove_all(YEAR, "\\{|\\}"))
137 | }))
138 |
139 | # In biblatex, entries can be cross-referenced using the crossref key. When
140 | # including something that's cross-referenced, like an incollection item, the
141 | # containing item should also be extracted
142 | if (any(names(bib_df) == "CROSSREF")) {
143 | crossrefs <- bib_df %>%
144 | filter(BIBTEXKEY %in% found_citations) %>%
145 | filter(!is.na(CROSSREF)) %>%
146 | pull(CROSSREF)
147 | } else {
148 | crossrefs <- as.character(0)
149 | }
150 |
151 | # Write a simplified bibtex file to disk (no BibDesk-specific entries) and only
152 | # include citations that appear in the document or that are cross-referenced
153 | bib_df %>%
154 | filter(BIBTEXKEY %in% c(found_citations, crossrefs)) %>%
155 | select(-starts_with("BDSK"), -RATING, -READ,
156 | -starts_with("DATE."), -KEYWORDS) %>%
157 | arrange(CATEGORY, BIBTEXKEY) %>%
158 | df2bib(output)
159 | }
160 |
161 |
 162 | # stringi::stri_count_words() counts, um, words. That's its whole job. But it
 163 | # doesn't count them like Microsoft Word counts them, and academic publishing
 164 | # portals tend to care about Word-like counts. For instance, Word considers
 165 | # hyphenated words to be one word, while stringi counts them as two (and even
 166 | # worse, stringi counts / as word boundaries, so URLs can severely inflate your
 167 | # word count).
168 | #
169 | # Also, academic writing typically doesn't count the title, abstract, table
170 | # text, figure captions, or equations as words in the manuscript (and it
171 | # SHOULDN'T count bibliographies, but it always seems to ugh).
172 | #
173 | # This parses the rendered HTML, removes extra elements, adjusts the text so
174 | # that stringi treats it more like Word, and then finally provides a more
175 | # accurate word count.
176 | count_words <- function(html) {
177 | # Read the HTML file
178 | ms_raw <- read_html(html)
179 |
180 | # Extract just the article, ignoring title, abstract, etc.
181 | ms <- ms_raw %>%
182 | html_nodes("article")
183 |
184 | # Get rid of figures, tables, and math
185 | xml_remove(ms %>% html_nodes("figure"))
186 | xml_remove(ms %>% html_nodes("table"))
187 | xml_remove(ms %>% html_nodes(".display")) # Block math
 188 | xml_replace(ms %>% html_nodes(".inline"), read_xml("<math>MATH</math>")) %>%
189 | invisible()
190 |
191 | # Go through each child element in the article and extract it
192 | ms_cleaned_list <- map(html_children(ms), ~ {
193 | .x %>%
194 | html_text(trim = TRUE) %>%
195 | # ICU counts hyphenated words as multiple words, so replace - with DASH
196 | str_replace_all("\\-", "DASH") %>%
197 | # ICU also counts / as multiple words, so URLs go crazy. Replace / with SLASH
198 | str_replace_all("\\/", "SLASH") %>%
199 | # ICU *also* counts [things] in brackets multiple times, so kill those too
200 | str_replace_all("\\[|\\]", "") %>%
201 | # Other things to ignore
202 | str_replace_all("×", "")
203 | })
204 |
205 | # Get count of words! (close enough to Word)
206 | final_count <- sum(stri_count_words(ms_cleaned_list))
207 |
208 | class(final_count) <- c("wordcount", "numeric")
209 |
210 | return(final_count)
211 | }
212 |
213 | print.wordcount <- function(x) {
214 | cat("*", scales::comma(x), "words in manuscript\n")
215 | }
216 |
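# Illustrative usage (hypothetical path; in this project the HTML file is
# produced by render_html() in the targets pipeline):
#
# count_words("manuscript/output/manuscript.html")
# #> * 8,000 words in manuscript   (example output, not a real count)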
--------------------------------------------------------------------------------
/R/funs_notebook.R:
--------------------------------------------------------------------------------
1 | # This is adapted from TJ Mahr's {notestar}
2 | # (https://github.com/tjmahr/notestar/blob/main/R/tar-notebook.R).
3 | #
4 | # He did all the hard work figuring out how to dynamically generate targets
5 | # based on a bunch of files, while also checking for targets dependencies with
6 | # tarchetypes::tar_knitr_deps(), based on this issue in {tarchetypes}:
7 | # https://github.com/ropensci/tarchetypes/issues/23
8 | #
9 | # I just adapted it for an R Markdown website
10 |
11 | notebook_rmd_collate <- function(dir_notebook = "analysis") {
12 | index <- file.path(dir_notebook, "index.Rmd")
13 | posts <- list.files(
14 | path = dir_notebook,
15 | pattern = "\\d.+.Rmd",
16 | full.names = TRUE
17 | )
18 | c(index, posts)
19 | }
20 |
21 | rmd_to_html <- function(x) gsub("[.]Rmd$", ".html", x = x)
22 | html_to_rmd <- function(x) gsub("[.]html$", ".Rmd", x = x)
23 |
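# lazy_list() builds a named list where later entries can refer to earlier
# ones through the .data pronoun (e.g. rmd_page is computed from rmd_file),
# by eval_tidy()-ing each quoted expression against the list built so far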
24 | lazy_list <- function(...) {
25 | q <- rlang::enexprs(..., .named = TRUE, .check_assign = TRUE)
26 | data <- list()
27 | for (x in seq_along(q)) {
28 | data[names(q[x])] <- list(rlang::eval_tidy(q[[x]], data = data))
29 | }
30 | data
31 | }
32 |
33 | knit_notebook_page <- function(rmd_in, html_out) {
34 | rmarkdown::render_site(rmd_in, encoding = "UTF-8")
35 | html_out
36 | }
37 |
38 | tar_notebook_pages <- function(
39 | dir_notebook = "analysis",
40 | dir_html = "analysis/_site",
41 | yaml_config = "analysis/_site.yml"
42 | ) {
43 |
44 | rmds <- notebook_rmd_collate(dir_notebook)
45 |
46 | values <- lazy_list(
47 | rmd_file = !! rmds,
48 | rmd_page_raw = basename(.data$rmd_file),
49 | rmd_page = make.names(.data$rmd_page_raw),
50 | sym_rmd_page = rlang::syms(.data$rmd_page),
51 | rmd_deps = lapply(.data$rmd_file, tarchetypes::tar_knitr_deps_expr),
52 | html_page = rmd_to_html(.data$rmd_page),
53 | html_page_raw = rmd_to_html(.data$rmd_page_raw),
54 | html_file = file.path(!! dir_html, .data$html_page_raw)
55 | )
56 |
57 | list(
58 | # Add _site.yml as a dependency
59 | # Have to use tar_target_raw() instead of tar_target() so that yaml_config is usable
60 | tar_target_raw("site_yml", yaml_config, format = "file"),
61 |
62 | # Prepare targets for each of the notebook pages
63 | tarchetypes::tar_eval_raw(
64 | quote(
65 | targets::tar_target(rmd_page, c(rmd_file, site_yml), format = "file")
66 | ),
67 | values = values
68 | ),
69 |
70 | tarchetypes::tar_eval_raw(
71 | quote(
72 | targets::tar_target(
73 | html_page,
74 | command = {
75 | rmd_deps
76 | sym_rmd_page
77 | knit_notebook_page(rmd_file, html_file);
78 | html_file
79 | },
80 | format = "file"
81 | )
82 | ),
83 | values = values
84 | )
85 | )
86 | }
87 |
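# Illustrative use in _targets.R (a sketch; the actual pipeline file isn't
# shown here): splice the dynamically generated page targets into the main
# target list
#
# list(
#   ...,  # data, model, and manuscript targets
#   tar_notebook_pages()
# )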
88 | copy_notebook_supporting_files <- function(rmd, ...) {
89 | rmarkdown::render_site(rmd, encoding = "UTF-8")
90 | }
91 |
--------------------------------------------------------------------------------
/R/models_analysis.R:
--------------------------------------------------------------------------------
1 | generate_mfx <- function(models, is_categorical = FALSE) {
2 | models <- models %>%
3 | mutate(plot_var_nice = fct_inorder(plot_var_nice, ordered = TRUE))
4 |
5 | mfx <- models %>%
6 | mutate(fx = map2(model, plot_var,
7 | ~conditional_effects(.x, effects = .y,
8 | categorical = is_categorical)[[1]])) %>%
9 | select(-model) %>%
10 | unnest(fx)
11 |
12 | return(mfx)
13 | }
14 |
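# Illustrative usage (a sketch; assumes a tibble like the one built in the
# targets pipeline, with a list-column of brms fits named `model` plus
# `plot_var` and `plot_var_nice` columns):
#
# mfx <- generate_mfx(model_df)
# mfx_pts <- generate_mfx(model_df_pts, is_categorical = TRUE)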
--------------------------------------------------------------------------------
/R/models_details.R:
--------------------------------------------------------------------------------
1 | create_model_df <- function() {
2 | models <- tribble(
3 | ~model, ~outcome_var, ~explan_var,
4 | # Models for the political terror score (PTS)
5 | "m_pts_baseline", "Political terror", "Baseline",
6 | "m_pts_total", "Political terror", "Total legal barriers",
7 | "m_pts_total_new", "Political terror", "New legal barriers",
8 | "m_pts_advocacy", "Political terror", "Barriers to advocacy",
9 | "m_pts_entry", "Political terror", "Barriers to entry",
10 | "m_pts_funding", "Political terror", "Barriers to funding",
11 | "m_pts_v2csreprss", "Political terror", "Civil society repression",
12 |
13 | # Models for latent human rights (latent_hr_mean)
14 | "m_lhr_baseline", "Latent human rights", "Baseline",
15 | "m_lhr_total", "Latent human rights", "Total legal barriers",
16 | "m_lhr_total_new", "Latent human rights", "New legal barriers",
17 | "m_lhr_advocacy", "Latent human rights", "Barriers to advocacy",
18 | "m_lhr_entry", "Latent human rights", "Barriers to entry",
19 | "m_lhr_funding", "Latent human rights", "Barriers to funding",
20 | "m_lhr_v2csreprss", "Latent human rights", "Civil society repression",
21 |
 22 |     # Models for PTS using training data
23 | "m_pts_baseline_train", "Political terror", "Baseline",
24 | "m_pts_total_train", "Political terror", "Total legal barriers",
25 | "m_pts_advocacy_train", "Political terror", "Barriers to advocacy",
26 | "m_pts_entry_train", "Political terror", "Barriers to entry",
27 | "m_pts_funding_train", "Political terror", "Barriers to funding",
28 | "m_pts_v2csreprss_train", "Political terror", "Civil society repression",
29 |
30 | # Models for latent human rights using training data
31 | "m_lhr_baseline_train", "Latent human rights", "Baseline",
32 | "m_lhr_total_train", "Latent human rights", "Total legal barriers",
33 | "m_lhr_total_new_train", "Latent human rights", "New legal barriers",
34 | "m_lhr_advocacy_train", "Latent human rights", "Barriers to advocacy",
35 | "m_lhr_entry_train", "Latent human rights", "Barriers to entry",
36 | "m_lhr_funding_train", "Latent human rights", "Barriers to funding",
37 | "m_lhr_v2csreprss_train", "Latent human rights", "Civil society repression"
38 | ) %>%
39 | mutate(family = ifelse(str_detect(model, "_pts"), "Ordered logit", "OLS"),
40 | training = ifelse(str_detect(model, "_train"), "Training", "Full data"))
41 |
42 | return(models)
43 | }
44 |
45 | # Running modelsummary() on Bayesian models takes *forever* because of all the
46 | # calculations involved in creating the confidence intervals and all the GOF
47 | # statistics. With
48 | # https://github.com/vincentarelbundock/modelsummary/commit/55d0d91, though,
49 | # it's now possible to build the base model with modelsummary(..., output =
50 | # "modelsummary_list", estimate = "", statistic = ""), save that as an
51 | # intermediate object, and then feed it through modelsummary() again with
52 | # whatever other output you want. The modelsummary_list-based object thus acts
53 | # like an output-agnostic ur-model.
54 |
55 | build_modelsummary <- function(models) {
56 | msl <- modelsummary::modelsummary(models,
57 | output = "modelsummary_list",
58 | statistic = "[{conf.low}, {conf.high}]")
59 | return(msl)
60 | }
61 |
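# Illustrative second pass (a sketch): feed the cached modelsummary_list
# object back through modelsummary() with whatever output format you need,
# reusing the already-computed estimates instead of re-summarizing the
# Bayesian models
#
# msl <- build_modelsummary(models)
# modelsummary::modelsummary(msl, output = "kableExtra",
#                            coef_map = unlist(build_coef_list()))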
62 |
63 | build_coef_list <- function() {
64 | list(
65 | "b_barriers_total" = "Total legal barriers",
66 | "b_barriers_total_lag1" = "Total legal barriers (t - 1)",
67 | "b_barriers_total_new" = "New legal barriers",
68 | "b_barriers_total_new_lag1" = "New legal barriers (t - 1)",
69 | "b_advocacy" = "Barriers to advocacy",
70 | "b_advocacy_lag1" = "Barriers to advocacy (t - 1)",
71 | "b_entry" = "Barriers to entry",
72 | "b_entry_lag1" = "Barriers to entry (t - 1)",
73 | "b_funding" = "Barriers to funding",
74 | "b_funding_lag1" = "Barriers to funding (t - 1)",
75 | "b_v2csreprss" = "Civil society repression",
76 | "b_v2csreprss_lag1" = "Civil society repression (t - 1)",
77 | "b_PTS_factorLevel2" = "PTS = 2",
78 | "b_PTS_factorLevel3" = "PTS = 3",
79 | "b_PTS_factorLevel4" = "PTS = 4",
80 | "b_PTS_factorLevel5" = "PTS = 5",
81 | "b_latent_hr_mean" = "Latent human rights (t)",
82 | "b_v2x_polyarchy" = "Polyarchy index",
83 | "b_gdpcap_log" = "Log GDP per capita",
84 | "b_un_trade_pct_gdp" = "Trade as % of GDP",
85 | "b_armed_conflictTRUE" = "Armed conflict",
86 | "b_Intercept.1." = "Cutpoint 1/2",
87 | "b_Intercept.2." = "Cutpoint 2/3",
88 | "b_Intercept.3." = "Cutpoint 3/4",
89 | "b_Intercept.4." = "Cutpoint 4/5",
90 | "b_Intercept" = "Intercept"
91 | )
92 | }
93 |
--------------------------------------------------------------------------------
/R/models_lhr.R:
--------------------------------------------------------------------------------
1 | # Settings ----------------------------------------------------------------
2 |
3 | lhr_setup <- function() {
4 | options(worker_options)
5 |
6 | # Settings
7 | CHAINS <- 4
8 | ITER <- 2000
9 | WARMUP <- 1000
10 | BAYES_SEED <- 4045 # From random.org
11 | threads <- getOption("mc.threads")
12 |
13 | # Priors
14 | priors_vague <- c(set_prior("normal(0, 10)", class = "Intercept"),
15 | set_prior("normal(0, 3)", class = "b"),
16 | set_prior("cauchy(0, 1)", class = "sd"))
17 |
18 | return(list(chains = CHAINS, iter = ITER, warmup = WARMUP, seed = BAYES_SEED,
19 | threads = threads, priors_vague = priors_vague))
20 | }
21 |
22 |
23 | # Regular models ----------------------------------------------------------
24 |
25 | f_lhr_baseline <- function(dat) {
26 | lhr_settings <- lhr_setup()
27 |
28 | dat <- dat %>% filter(laws)
29 |
30 | model <- brm(
31 | bf(latent_hr_mean_lead1 ~ latent_hr_mean +
32 | v2x_polyarchy +
33 | gdpcap_log +
34 | un_trade_pct_gdp +
35 | armed_conflict +
36 | (1 | gwcode)
37 | ),
38 | family = gaussian(),
39 | prior = lhr_settings$priors_vague,
40 | control = list(adapt_delta = 0.9),
41 | data = dat,
42 | threads = threading(lhr_settings$threads),
43 | chains = lhr_settings$chains, iter = lhr_settings$iter,
44 | warmup = lhr_settings$warmup, seed = lhr_settings$seed)
45 |
46 | return(model)
47 | }
48 |
49 | f_lhr_total <- function(dat) {
50 | lhr_settings <- lhr_setup()
51 |
52 | dat <- dat %>% filter(laws)
53 |
54 | model <- brm(
55 | bf(latent_hr_mean_lead1 ~ barriers_total + barriers_total_lag1 +
56 | latent_hr_mean +
57 | v2x_polyarchy +
58 | gdpcap_log +
59 | un_trade_pct_gdp +
60 | armed_conflict +
61 | (1 | gwcode)
62 | ),
63 | family = gaussian(),
64 | prior = lhr_settings$priors_vague,
65 | control = list(adapt_delta = 0.9),
66 | data = dat,
67 | threads = threading(lhr_settings$threads),
68 | chains = lhr_settings$chains, iter = lhr_settings$iter,
69 | warmup = lhr_settings$warmup, seed = lhr_settings$seed)
70 |
71 | return(model)
72 | }
73 |
74 | f_lhr_total_new <- function(dat) {
75 | lhr_settings <- lhr_setup()
76 |
77 | dat <- dat %>% filter(laws)
78 |
79 | model <- brm(
80 | bf(latent_hr_mean_lead1 ~ barriers_total_new + barriers_total_new_lag1 +
81 | latent_hr_mean +
82 | v2x_polyarchy +
83 | gdpcap_log +
84 | un_trade_pct_gdp +
85 | armed_conflict +
86 | (1 | gwcode)
87 | ),
88 | family = gaussian(),
89 | prior = lhr_settings$priors_vague,
90 | control = list(adapt_delta = 0.9),
91 | data = dat,
92 | threads = threading(lhr_settings$threads),
93 | chains = lhr_settings$chains, iter = lhr_settings$iter,
94 | warmup = lhr_settings$warmup, seed = lhr_settings$seed)
95 |
96 | return(model)
97 | }
98 |
99 | f_lhr_advocacy <- function(dat) {
100 | lhr_settings <- lhr_setup()
101 |
102 | dat <- dat %>% filter(laws)
103 |
104 | model <- brm(
105 | bf(latent_hr_mean_lead1 ~ advocacy + advocacy_lag1 +
106 | latent_hr_mean +
107 | v2x_polyarchy +
108 | gdpcap_log +
109 | un_trade_pct_gdp +
110 | armed_conflict +
111 | (1 | gwcode)
112 | ),
113 | family = gaussian(),
114 | prior = lhr_settings$priors_vague,
115 | control = list(adapt_delta = 0.9),
116 | data = dat,
117 | threads = threading(lhr_settings$threads),
118 | chains = lhr_settings$chains, iter = lhr_settings$iter,
119 | warmup = lhr_settings$warmup, seed = lhr_settings$seed)
120 |
121 | return(model)
122 | }
123 |
124 | f_lhr_entry <- function(dat) {
125 | lhr_settings <- lhr_setup()
126 |
127 | dat <- dat %>% filter(laws)
128 |
129 | model <- brm(
130 | bf(latent_hr_mean_lead1 ~ entry + entry_lag1 +
131 | latent_hr_mean +
132 | v2x_polyarchy +
133 | gdpcap_log +
134 | un_trade_pct_gdp +
135 | armed_conflict +
136 | (1 | gwcode)
137 | ),
138 | family = gaussian(),
139 | prior = lhr_settings$priors_vague,
140 | control = list(adapt_delta = 0.9),
141 | data = dat,
142 | threads = threading(lhr_settings$threads),
143 | chains = lhr_settings$chains, iter = lhr_settings$iter,
144 | warmup = lhr_settings$warmup, seed = lhr_settings$seed)
145 |
146 | return(model)
147 | }
148 |
149 | f_lhr_funding <- function(dat) {
150 | lhr_settings <- lhr_setup()
151 |
152 | dat <- dat %>% filter(laws)
153 |
154 | model <- brm(
155 | bf(latent_hr_mean_lead1 ~ funding + funding_lag1 +
156 | latent_hr_mean +
157 | v2x_polyarchy +
158 | gdpcap_log +
159 | un_trade_pct_gdp +
160 | armed_conflict +
161 | (1 | gwcode)
162 | ),
163 | family = gaussian(),
164 | prior = lhr_settings$priors_vague,
165 | control = list(adapt_delta = 0.9),
166 | data = dat,
167 | threads = threading(lhr_settings$threads),
168 | chains = lhr_settings$chains, iter = lhr_settings$iter,
169 | warmup = lhr_settings$warmup, seed = lhr_settings$seed)
170 |
171 | return(model)
172 | }
173 |
174 | f_lhr_v2csreprss <- function(dat) {
175 | lhr_settings <- lhr_setup()
176 |
177 | model <- brm(
178 | bf(latent_hr_mean_lead1 ~ v2csreprss + v2csreprss_lag1 +
179 | latent_hr_mean +
180 | v2x_polyarchy +
181 | gdpcap_log +
182 | un_trade_pct_gdp +
183 | armed_conflict +
184 | (1 | gwcode)
185 | ),
186 | family = gaussian(),
187 | prior = lhr_settings$priors_vague,
188 | control = list(adapt_delta = 0.9),
189 | data = dat,
190 | threads = threading(lhr_settings$threads),
191 | chains = lhr_settings$chains, iter = lhr_settings$iter,
192 | warmup = lhr_settings$warmup, seed = lhr_settings$seed)
193 |
194 | return(model)
195 | }
196 |
--------------------------------------------------------------------------------
/R/models_pts.R:
--------------------------------------------------------------------------------
1 | # Settings ----------------------------------------------------------------
2 |
3 | # Run this inside each model function instead of outside so that future workers
4 | # use these options internally
5 | pts_setup <- function() {
6 | options(worker_options)
7 |
8 | # Settings
9 | CHAINS <- 4
10 | ITER <- 2000
11 | WARMUP <- 1000
12 | BAYES_SEED <- 2009 # From random.org
13 | threads <- getOption("mc.threads")
14 |
15 | # Priors
16 | priors_vague <- c(set_prior("normal(0, 3)", class = "Intercept"),
17 | set_prior("normal(0, 3)", class = "b"),
18 | set_prior("cauchy(0, 1)", class = "sd"))
19 |
20 | return(list(chains = CHAINS, iter = ITER, warmup = WARMUP, seed = BAYES_SEED,
21 | threads = threads, priors_vague = priors_vague))
22 | }
23 |
24 |
25 | # Regular models ----------------------------------------------------------
26 |
27 | f_pts_baseline <- function(dat) {
28 | pts_settings <- pts_setup()
29 |
30 | dat <- dat %>% filter(laws)
31 |
32 | model <- brm(
33 | bf(PTS_factor_lead1 ~ PTS_factor +
34 | v2x_polyarchy +
35 | gdpcap_log +
36 | un_trade_pct_gdp +
37 | armed_conflict +
38 | (1 | gwcode)
39 | ),
40 | family = cumulative(),
41 | prior = pts_settings$priors_vague,
42 | data = dat,
43 | threads = threading(pts_settings$threads),
44 | chains = pts_settings$chains, iter = pts_settings$iter,
45 | warmup = pts_settings$warmup, seed = pts_settings$seed)
46 |
47 | return(model)
48 | }
49 |
50 | f_pts_total <- function(dat) {
51 | pts_settings <- pts_setup()
52 |
53 | dat <- dat %>% filter(laws)
54 |
55 | model <- brm(
56 | bf(PTS_factor_lead1 ~ barriers_total + barriers_total_lag1 +
57 | PTS_factor +
58 | v2x_polyarchy +
59 | gdpcap_log +
60 | un_trade_pct_gdp +
61 | armed_conflict +
62 | (1 | gwcode)
63 | ),
64 | family = cumulative(),
65 | prior = pts_settings$priors_vague,
66 | data = dat,
67 | threads = threading(pts_settings$threads),
68 | chains = pts_settings$chains, iter = pts_settings$iter,
69 | warmup = pts_settings$warmup, seed = pts_settings$seed)
70 |
71 | return(model)
72 | }
73 |
74 | f_pts_total_new <- function(dat) {
75 | pts_settings <- pts_setup()
76 |
77 | dat <- dat %>% filter(laws)
78 |
79 | model <- brm(
80 | bf(PTS_factor_lead1 ~ barriers_total_new + barriers_total_new_lag1 +
81 | PTS_factor +
82 | v2x_polyarchy +
83 | gdpcap_log +
84 | un_trade_pct_gdp +
85 | armed_conflict +
86 | (1 | gwcode)
87 | ),
88 | family = cumulative(),
89 | prior = pts_settings$priors_vague,
90 | data = dat,
91 | threads = threading(pts_settings$threads),
92 | chains = pts_settings$chains, iter = pts_settings$iter,
93 | warmup = pts_settings$warmup, seed = pts_settings$seed)
94 |
95 | return(model)
96 | }
97 |
98 | f_pts_advocacy <- function(dat) {
99 | pts_settings <- pts_setup()
100 |
101 | dat <- dat %>% filter(laws)
102 |
103 | model <- brm(
104 | bf(PTS_factor_lead1 ~ advocacy + advocacy_lag1 +
105 | PTS_factor +
106 | v2x_polyarchy +
107 | gdpcap_log +
108 | un_trade_pct_gdp +
109 | armed_conflict +
110 | (1 | gwcode)
111 | ),
112 | family = cumulative(),
113 | prior = pts_settings$priors_vague,
114 | data = dat,
115 | threads = threading(pts_settings$threads),
116 | chains = pts_settings$chains, iter = pts_settings$iter,
117 | warmup = pts_settings$warmup, seed = pts_settings$seed)
118 |
119 | return(model)
120 | }
121 |
122 | f_pts_entry <- function(dat) {
123 | pts_settings <- pts_setup()
124 |
125 | dat <- dat %>% filter(laws)
126 |
127 | model <- brm(
128 | bf(PTS_factor_lead1 ~ entry + entry_lag1 +
129 | PTS_factor +
130 | v2x_polyarchy +
131 | gdpcap_log +
132 | un_trade_pct_gdp +
133 | armed_conflict +
134 | (1 | gwcode)
135 | ),
136 | family = cumulative(),
137 | prior = pts_settings$priors_vague,
138 | data = dat,
139 | threads = threading(pts_settings$threads),
140 | chains = pts_settings$chains, iter = pts_settings$iter,
141 | warmup = pts_settings$warmup, seed = pts_settings$seed)
142 |
143 | return(model)
144 | }
145 |
146 | f_pts_funding <- function(dat) {
147 | pts_settings <- pts_setup()
148 |
149 | dat <- dat %>% filter(laws)
150 |
151 | model <- brm(
152 | bf(PTS_factor_lead1 ~ funding + funding_lag1 +
153 | PTS_factor +
154 | v2x_polyarchy +
155 | gdpcap_log +
156 | un_trade_pct_gdp +
157 | armed_conflict +
158 | (1 | gwcode)
159 | ),
160 | family = cumulative(),
161 | prior = pts_settings$priors_vague,
162 | data = dat,
163 | threads = threading(pts_settings$threads),
164 | chains = pts_settings$chains, iter = pts_settings$iter,
165 | warmup = pts_settings$warmup, seed = pts_settings$seed)
166 |
167 | return(model)
168 | }
169 |
170 | f_pts_v2csreprss <- function(dat) {
171 | pts_settings <- pts_setup()
172 |
173 | model <- brm(
174 | bf(PTS_factor_lead1 ~ v2csreprss + v2csreprss_lag1 +
175 | PTS_factor +
176 | v2x_polyarchy +
177 | gdpcap_log +
178 | un_trade_pct_gdp +
179 | armed_conflict +
180 | (1 | gwcode)
181 | ),
182 | family = cumulative(),
183 | prior = pts_settings$priors_vague,
184 | data = dat,
185 | threads = threading(pts_settings$threads),
186 | chains = pts_settings$chains, iter = pts_settings$iter,
187 | warmup = pts_settings$warmup, seed = pts_settings$seed)
188 |
189 | return(model)
190 | }
191 |
--------------------------------------------------------------------------------
/README.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | output: github_document
3 | ---
4 |
5 |
6 |
7 | ```{r, echo = FALSE}
8 | knitr::opts_chunk$set(
9 | collapse = TRUE,
10 | comment = "#>",
11 | fig.path = "README-"
12 | )
13 | # Please put your title here to include it in the file below.
14 | Title <- "NGO Repression as a Predictor of Worsening Human Rights Abuses"
15 | Authors <- "Suparna Chaudhry and Andrew Heiss"
16 | Year <- "2022"
17 | ```
18 |
19 | # `r Title`
20 |
21 | [Suparna Chaudhry](http://www.suparnachaudhry.com/) • Department of International Affairs • Lewis \& Clark College
22 | [Andrew Heiss](https://www.andrewheiss.com/) • Andrew Young School of Policy Studies • Georgia State University
23 |
24 | ---
25 |
26 | [](https://dx.doi.org/10.17605/OSF.IO/MTR6X) [](https://doi.org/10.5281/zenodo.5715402)
27 |
28 | > `r Authors`. `r Year`. ["`r Title`,"](https://doi.org/10.1177/0899764020971045) *Journal of Human Rights* (forthcoming).
29 |
30 | **All this project's materials are free and open:**
31 |
32 | - [Download the data](#data)
33 | - [See the analysis notebook website](https://stats.andrewheiss.com/cautioning-canary/)
34 |
35 |  
36 |
37 | ---
38 |
39 | ## Abstract
40 |
 41 | An increasing number of countries have recently cracked down on non-governmental organizations (NGOs). Much of this crackdown is sanctioned by law and represents a bureaucratic form of repression that could indicate more severe human rights abuses in the future. This is especially the case for democracies, which, unlike autocracies, may not aggressively attack civic space. We explore whether crackdowns on NGOs predict broader human rights repression. Anti-NGO laws are among the most subtle means of repression and attract less domestic and international condemnation than the use of violence. Using original data on NGO repression, we test whether NGO crackdown predicts political terror and violations of physical integrity rights and civil liberties. We find that while de jure anti-NGO laws provide little information in predicting future repression, their patterns of implementation, or de facto civil society repression, predict worsening respect for physical integrity rights and civil liberties.
42 |
43 | ---
44 |
45 | This repository contains the data and code for our paper. Our pre-print is online here:
46 |
 47 | > `r Authors`. `r Year`. "`r Title`." Accessed `r format(Sys.time(), '%B %e, %Y')`. Online at
48 |
49 | ## How to download and replicate
50 |
51 | You can either [download the compendium as a ZIP file](/archive/master.zip) or use GitHub to clone or fork the compendium repository (see the green "Clone or download" button at the top of the GitHub page).
52 |
53 | We use the [**renv** package](https://rstudio.github.io/renv/articles/renv.html) to create a stable version-specific library of packages, and we use the [**targets** package](https://docs.ropensci.org/targets/) to manage all file dependencies and run the analysis. ([See this for a short helpful walkthrough of **targets**.](https://books.ropensci.org/targets/walkthrough.html)).
54 |
55 | To reproduce the findings and re-run the analysis, do the following:
56 |
57 | 1. Download and install these fonts (if you’re using Windows, make sure you right click on the font files and choose “Install for all users” when installing these fonts):
58 | - [Cochineal](https://fontesk.com/cochineal-typeface/)
59 | - [Inter](https://fonts.google.com/specimen/Inter)
60 | - [Linux Libertine O](https://www.cufonfonts.com/font/linux-libertine-o) (also [here](https://sourceforge.net/projects/linuxlibertine/))
61 | - [Libertinus Math](https://github.com/alerque/libertinus)
62 | - [InconsolataGo](https://github.com/ryanoasis/nerd-fonts/tree/master/patched-fonts/InconsolataGo)
63 | 2. [Install R](https://cloud.r-project.org/) (and preferably [RStudio](https://www.rstudio.com/products/rstudio/download/#download)).
64 | - If you're using macOS, [install XQuartz too](https://www.xquartz.org/), so that you have access to the Cairo graphics library
65 | - If you’re using Windows, [install RTools too](https://cran.r-project.org/bin/windows/Rtools/) and add it to your PATH so that you can install packages from source if needed
66 | 3. Open `cautioning-canary.Rproj` to open an [RStudio Project](https://r4ds.had.co.nz/workflow-projects.html).
67 | 4. Make sure you have a working installation of LaTeX:
68 | - *Easy-and-recommended way*: Install the [**tinytex** package](https://yihui.org/tinytex/) by running `install.packages("tinytex")` in the R console, then running `tinytex::install_tinytex()`
 69 |     - *Easy-but-requires-huge-4+-GB-download way*: Download a complete TeX distribution ([MacTeX for macOS](http://www.tug.org/mactex/); [MiKTeX for Windows](https://miktex.org/))
70 | 5. If it's not installed already, R *should* try to install the **renv** package when you open the RStudio Project for the first time. If you don't see a message about package installation, install it yourself by running `install.packages("renv")` in the R console.
71 | 6. Run `renv::restore()` in the R console to install all the required packages for this project.
72 | 7. Run `targets::tar_make()` in the R console to automatically download all data files, process the data, run the analysis, and compile the paper and appendix.
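
In short, once the fonts, R, and a LaTeX distribution are installed, the whole replication workflow condenses to two commands in the R console:

``` r
# Restore the version-locked package library recorded in renv.lock
renv::restore()

# Run the full pipeline: download and clean data, fit models, build the
# analysis notebook site, and compile the manuscript and appendix
targets::tar_make()
```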
73 |
74 | Running `targets::tar_make()` will create several helpful outputs:
75 |
76 | 1. All project data in `data/`
77 | 2. An analysis notebook website in `analysis/_site/index.html`
78 | 3. PDF, HTML, and Word versions of the manuscript in `manuscript/output/`
79 |
80 |
81 | ## Data
82 |
83 |
84 | **NB**: If you're reproducing this project, it is best if you rely on the **targets** package and all the functions in `R/funs_data-cleaning.R` to handle the cleaning, processing, tidying, and merging. All the analysis in the project depends on having a **targets**-created object named `panel`.
85 |
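For example, once `targets::tar_make()` has finished, you can load the combined dataset straight from the pipeline in an R session (a minimal sketch):

``` r
library(targets)

# Load the pipeline-built `panel` object into the current environment
tar_load(panel)

# Or read it into a different name
panel_df <- tar_read(panel)
```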
86 |
87 | For reference, though, we export CSV and RDS files of the final dataset:
88 |
89 | - [**`data/derived_data/panel.csv`**](data/derived_data/panel.csv) and [**`data/derived_data/panel.rds`**](data/derived_data/panel.rds): CSV and RDS versions of our final combined dataset
90 | - [**`data/derived_data/panel_lagged.csv`**](data/derived_data/panel_lagged.csv) and [**`data/derived_data/panel_lagged.rds`**](data/derived_data/panel_lagged.rds): CSV and RDS versions of our final combined dataset with lagged and leaded versions of variables
91 |
92 | This data is derived from many different original data sources. See the ["Process and merge data" page of our analysis notebook](https://stats.andrewheiss.com/cautioning-canary/01_data-overview.html) for details about both the original data and the merging process.
93 |
94 | - **Chaudhry NGO restrictions**: We use counts of anti-NGO legal barriers from [the replication data](https://doi.org/10.7910/DVN/JHOGNX) for Suparna Chaudhry's "The Assault on Civil Society: Explaining State Repression of NGOs" (*International Organization*, 2022).
95 | - `data/raw_data/Chaudhry restrictions/SC_Expanded.dta`
96 | - **Political Terror Scores**: We use data from the [Political Terror Scale (PTS) project](http://www.politicalterrorscale.org/) to measure state repression. This project uses reports from the US State Department, Amnesty International, and Human Rights Watch and codes political repression on a scale of 1-5.
97 | - `data/raw_data/Political Terror Scale/PTS-2019.RData`, v2019
98 | - **Latent Human Rights Protection Scores**: We use Chris Fariss's [Latent Human Rights Protection Scores](https://doi.org/10.7910/DVN/RQ85GK), which are estimates from fancy Bayesian models that capture a country's respect for physical integrity rights.
99 | - `data/raw_data/Latent Human Rights Protection Scores/HumanRightsProtectionScores_v4.01.csv`, v4.01
100 | - **Varieties of Democracy data**: We use a bunch of variables from the [Varieties of Democracy (V-Dem) project](https://www.v-dem.net/en/).
101 | - `data/raw_data/Country_Year_V-Dem_Full+others_R_v10/V-Dem-CY-Full+Others-v10.rds`, v10
 102 | - **UN data**: We use [the **WDI** package](https://vincentarelbundock.github.io/WDI/) to collect data from the World Bank. However, we don't use WDI data for GDP and trade as a % of GDP because the WDI data is incomplete (especially pre-1990, which is an issue in this project). To get around that, we create our own GDP and trade measures using data directly from the UN (at [UNData](https://data.un.org/)). The UN doesn't have a neat API like the World Bank, so you have to go to their website and export the data manually. We collect three variables:
103 | - [GDP at constant 2015 prices](http://data.un.org/Data.aspx?q=gdp&d=SNAAMA&f=grID%3a102%3bcurrID%3aUSD%3bpcFlag%3a0) (`data/raw_data/UN data/UNdata_Export_20210118_034054729.csv`)
104 | - [GDP at current prices](http://data.un.org/Data.aspx?q=gdp&d=SNAAMA&f=grID%3a101%3bcurrID%3aUSD%3bpcFlag%3a0) (`data/raw_data/UN data/UNdata_Export_20210118_034311252.csv`)
105 | - [Population](https://population.un.org/wpp/Download/Standard/Population/) (`data/raw_data/UN data/WPP2019_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx`)
106 | - **UCDP/PRIO Armed Conflict Data**: We use [UCDP/PRIO Armed Conflict data](https://ucdp.uu.se/downloads/index.html#armedconflict) to create an indicator marking if a country-year was involved in armed conflict that resulted in at least 25 battle-related deaths.
107 | - `data/raw_data/UCDP PRIO/ucdp-prio-acd-191.csv`, v19.1
108 | - **Natural Earth shapefiles**: We use the ["Admin 0 - Countries" 1:110m cultural shapefiles](https://www.naturalearthdata.com/downloads/110m-cultural-vectors/) for maps.
109 | - `data/raw_data/ne_110m_admin_0_countries/`
 110 | - **2020 Civicus Monitor ratings**: CIVICUS rates countries [using a 5-item scale of civic space openness](https://monitor.civicus.org/widgets/world/), but getting their data in a machine-readable format is a little tricky. We downloaded [the standalone embeddable widget](https://monitor.civicus.org/widgets/world/) as an HTML file with `wget https://monitor.civicus.org/widgets/world/` and saved it as `index_2021-03-19.html`. We then extracted the `COUNTRIES_DATA` variable embedded in a `<script>` tag in that file and saved it as `civicus_2021-03-19.json`.
--------------------------------------------------------------------------------
/manuscript/pandoc/templates/html.html:
--------------------------------------------------------------------------------
 16 |
17 | $if(quotes)$
18 |
19 | $endif$
20 |
21 | $if(highlighting-css)$
22 |
25 | $endif$
26 | $for(css)$
27 |
28 | $endfor$
29 | $if(math)$
30 | $math$
31 | $endif$
32 | $for(header-includes)$
33 | $header-includes$
34 | $endfor$
35 |
36 |
37 | $for(include-before)$
38 | $include-before$
39 | $endfor$
40 | $if(title)$
41 |
42 | $if(date)$
43 |