├── .gitignore ├── 01.Rmd ├── 01.md ├── 01_cache ├── gfm │ ├── __packages │ ├── fit_e93d26babf75bd225595c3d8a92a3864.RData │ ├── fit_e93d26babf75bd225595c3d8a92a3864.rdb │ ├── fit_e93d26babf75bd225595c3d8a92a3864.rdx │ ├── s2_e022447ca1d20267476b0b0c0ae7c732.RData │ ├── s2_e022447ca1d20267476b0b0c0ae7c732.rdb │ ├── s2_e022447ca1d20267476b0b0c0ae7c732.rdx │ ├── s3_db0ac210cc58f324df7512072a32e61a.RData │ ├── s3_db0ac210cc58f324df7512072a32e61a.rdb │ ├── s3_db0ac210cc58f324df7512072a32e61a.rdx │ ├── s_346b959d17b9572fd6775dce43b019cc.RData │ ├── s_346b959d17b9572fd6775dce43b019cc.rdb │ ├── s_346b959d17b9572fd6775dce43b019cc.rdx │ ├── updated_fit_fafd8da0cfd5a47242ec51c37df91427.RData │ ├── updated_fit_fafd8da0cfd5a47242ec51c37df91427.rdb │ └── updated_fit_fafd8da0cfd5a47242ec51c37df91427.rdx └── html │ ├── __packages │ ├── fit_86be471e90214988c62f408f9830018a.RData │ ├── fit_86be471e90214988c62f408f9830018a.rdb │ ├── fit_86be471e90214988c62f408f9830018a.rdx │ ├── s2_3ad7016b9475c362730407f7a71ecfcc.RData │ ├── s2_3ad7016b9475c362730407f7a71ecfcc.rdb │ ├── s2_3ad7016b9475c362730407f7a71ecfcc.rdx │ ├── s3_7f92671df94d4bdfffdb99f828dd602f.RData │ ├── s3_7f92671df94d4bdfffdb99f828dd602f.rdb │ ├── s3_7f92671df94d4bdfffdb99f828dd602f.rdx │ ├── s_c5e70d28ce82b96676fd3a66683021a2.RData │ ├── s_c5e70d28ce82b96676fd3a66683021a2.rdb │ ├── s_c5e70d28ce82b96676fd3a66683021a2.rdx │ ├── updated_fit_b6032fef620c7fc98e9ea4b838434bb4.RData │ ├── updated_fit_b6032fef620c7fc98e9ea4b838434bb4.rdb │ └── updated_fit_b6032fef620c7fc98e9ea4b838434bb4.rdx ├── 01_files ├── figure-gfm │ ├── unnamed-chunk-15-1.png │ ├── unnamed-chunk-17-1.png │ ├── unnamed-chunk-2-1.png │ ├── unnamed-chunk-20-1.png │ ├── unnamed-chunk-21-1.png │ ├── unnamed-chunk-26-1.png │ └── unnamed-chunk-6-1.png └── figure-html │ ├── unnamed-chunk-15-1.png │ ├── unnamed-chunk-17-1.png │ ├── unnamed-chunk-2-1.png │ ├── unnamed-chunk-20-1.png │ ├── unnamed-chunk-21-1.png │ ├── unnamed-chunk-26-1.png │ └── unnamed-chunk-6-1.png ├── 02.Rmd ├── 02.md ├── 02_cache └── gfm │ ├── __packages │ ├── fit_45cd95835a57d8dbaf6f90d131dd104f.RData │ ├── fit_45cd95835a57d8dbaf6f90d131dd104f.rdb │ ├── fit_45cd95835a57d8dbaf6f90d131dd104f.rdx │ ├── s3_0b4ed38c928734dead22ae1ac38eabed.RData │ ├── s3_0b4ed38c928734dead22ae1ac38eabed.rdb │ ├── s3_0b4ed38c928734dead22ae1ac38eabed.rdx │ ├── s4_1d0dccb20a40342c9ebfe44a80aad635.RData │ ├── s4_1d0dccb20a40342c9ebfe44a80aad635.rdb │ ├── s4_1d0dccb20a40342c9ebfe44a80aad635.rdx │ ├── s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.RData │ ├── s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.rdb │ ├── s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.rdx │ ├── s6_0555490575dbe08f16d6608918182537.RData │ ├── s6_0555490575dbe08f16d6608918182537.rdb │ ├── s6_0555490575dbe08f16d6608918182537.rdx │ ├── s7_9c445b4e8ff04faf70df920cd182ab8c.RData │ ├── s7_9c445b4e8ff04faf70df920cd182ab8c.rdb │ ├── s7_9c445b4e8ff04faf70df920cd182ab8c.rdx │ ├── s8_66bc9bedede54f05c4d24d3e9973fcab.RData │ ├── s8_66bc9bedede54f05c4d24d3e9973fcab.rdb │ ├── s8_66bc9bedede54f05c4d24d3e9973fcab.rdx │ ├── s9_4a259be02ecb0f2156e377f564006d53.RData │ ├── s9_4a259be02ecb0f2156e377f564006d53.rdb │ └── s9_4a259be02ecb0f2156e377f564006d53.rdx ├── 02_files └── figure-gfm │ ├── unnamed-chunk-10-1.png │ ├── unnamed-chunk-12-1.png │ ├── unnamed-chunk-14-1.png │ ├── unnamed-chunk-16-1.png │ ├── unnamed-chunk-18-1.png │ ├── unnamed-chunk-20-1.png │ ├── unnamed-chunk-21-1.png │ ├── unnamed-chunk-3-1.png │ ├── unnamed-chunk-4-1.png │ ├── unnamed-chunk-6-1.png │ └── unnamed-chunk-7-1.png ├── 03.Rmd ├── 03.md ├── 
03_cache └── gfm │ ├── __packages │ ├── fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.RData │ ├── fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.rdb │ ├── fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.rdx │ ├── fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.RData │ ├── fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.rdb │ ├── fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.rdx │ ├── fit3_f8d85acb30dd54c124fcf373fea6b570.RData │ ├── fit3_f8d85acb30dd54c124fcf373fea6b570.rdb │ ├── fit3_f8d85acb30dd54c124fcf373fea6b570.rdx │ ├── sim1_96946f136648945121ced4d3160fceea.RData │ ├── sim1_96946f136648945121ced4d3160fceea.rdb │ ├── sim1_96946f136648945121ced4d3160fceea.rdx │ ├── sim2_bc5eae669806636b31ed0aac62f82b2b.RData │ ├── sim2_bc5eae669806636b31ed0aac62f82b2b.rdb │ ├── sim2_bc5eae669806636b31ed0aac62f82b2b.rdx │ ├── sim3_ee2f7c006cf328b7064a28faa5f40b4b.RData │ ├── sim3_ee2f7c006cf328b7064a28faa5f40b4b.rdb │ ├── sim3_ee2f7c006cf328b7064a28faa5f40b4b.rdx │ ├── sim4_3481489fae5ab3281241ea9fb758206f.RData │ ├── sim4_3481489fae5ab3281241ea9fb758206f.rdb │ └── sim4_3481489fae5ab3281241ea9fb758206f.rdx ├── 03_files └── figure-gfm │ ├── unnamed-chunk-1-1.png │ ├── unnamed-chunk-10-1.png │ ├── unnamed-chunk-11-1.png │ ├── unnamed-chunk-18-1.png │ ├── unnamed-chunk-2-1.png │ ├── unnamed-chunk-23-1.png │ ├── unnamed-chunk-26-1.png │ ├── unnamed-chunk-31-1.png │ └── unnamed-chunk-33-1.png ├── Bayesian_power.Rproj ├── LICENSE └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | .Rproj.user 2 | .Rhistory 3 | .RData 4 | .Ruserdata 5 | -------------------------------------------------------------------------------- /01.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'Bayesian power analysis: Part I' 3 | subtitle: "Prepare to reject $H_0$ with simulation." 4 | output: 5 | github_document 6 | --- 7 | 8 | ## tl;dr 9 | 10 | If you'd like to learn how to do Bayesian power calculations using **brms**, stick around for this 5-part blog series. Here with part I, we'll set the foundation. 11 | 12 | ## Power is hard, especially for Bayesians. 13 | 14 | Many journals, funding agencies, and dissertation committees require power calculations for your primary analyses. Frequentists have a variety of tools available to perform these calculations (e.g., [here](https://rpsychologist.com/analytical-and-simulation-based-power-analyses-for-mixed-design-anovas)). Bayesians, however, have a more difficult time of it. Most of our research questions and data issues are sufficiently complicated that we cannot solve the problems by hand. We need Markov chain Monte Carlo methods to iteratively sample from the posterior to summarize the parameters from our models. Same deal for power. If you'd like to compute the power for a given combination of $N$, likelihood $p(\text{data} | \theta)$, and set of priors $p (\theta)$, you'll need to simulate. 15 | 16 | It's been one of my recent career goals to learn how to do this. You know how they say: *The best way to learn is to teach*. This series of blog posts is the evidence of me learning by teaching. It will be an exploration of what a Bayesian power simulation workflow might look like. The overall statistical framework will be **R**, with an emphasis on code style based on the [**tidyverse**](https://www.tidyverse.org). We'll be fitting our Bayesian models with Bürkner's [**brms**](https://github.com/paul-buerkner/brms). 17 | 18 | What this series is not, however, is an introduction to statistical power itself. 
Keep reading if you're ready to roll up your sleeves, put on your applied hat, and learn how to get things done. If you're more interested in an introduction to power itself, see the references in the next section. 19 | 20 | ## I make assumptions. 21 | 22 | For this series, I'm presuming you are familiar with linear regression, familiar with the basic differences between frequentist and Bayesian approaches to statistics, and have a basic sense of what we mean by statistical power. Here are some resources if you'd like to shore up on any of these. 23 | 24 | * To learn about Bayesian regression, I recommend the introductory textbooks by either McElreath ([here](https://xcelab.net/rm/statistical-rethinking/)) or Kruschke ([here](http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/)). Both authors host blogs ([here](http://elevanth.org/blog/) and [here](http://doingbayesiandataanalysis.blogspot.com), respectively). If you go with McElreath, do check out his [online lectures](https://www.youtube.com/channel/UCNJK6_DZvcMqNSzQdEkzvzA/playlists) and [my project](https://bookdown.org/connect/#/apps/1850/access) translating his text to **brms** and **tidyverse** code. I'm working on a [similar project](https://github.com/ASKurz/Doing-Bayesian-Data-Analysis-in-brms-and-the-tidyverse) for Kruschke's text, but it still has a ways to go before I release it in full. 25 | * If you're unfamiliar with statistical power, Kruschke covered it in chapter 13 of his text. You might also check out [this review paper](https://www3.nd.edu/~kkelley/publications/articles/Maxwell_Kelley_Rausch_2008.pdf) by Maxwell, Kelley, and Rausch. There's always, of course, the original work by Cohen (e.g., [here](https://www.worldcat.org/title/statistical-power-analysis-for-the-behavioral-sciences/oclc/17877467)). 26 | * For even more **brms**-related resources, you can find vignettes and documentation [here](https://cran.r-project.org/web/packages/brms/index.html). 27 | * For **tidyverse** introductions, your best bets are [*R4DS*](https://r4ds.had.co.nz) and [*The tidyverse style guide*](https://style.tidyverse.org). 28 | * We'll be simulating data. If that's new to you, both Kruschke and McElreath cover that a little in their texts. You can find nice online tutorials [here](https://debruine.github.io/tutorials/sim-data.html) and [here](https://aosmith.rbind.io/2018/08/29/getting-started-simulating-data/), too. 29 | * We'll also be making a couple of custom functions. If that's new, you might check out [*R4DS*, chapter 19](https://r4ds.had.co.nz/functions.html) or [chapter 14](https://bookdown.org/rdpeng/rprogdatascience/functions.html) of Roger Peng's *R Programming for Data Science*. 30 | 31 | ## We need to warm up before jumping into power. 32 | 33 | Let's load our primary statistical packages. 34 | 35 | ```{r, message = F, warning = F} 36 | library(tidyverse) 37 | library(brms) 38 | library(broom) 39 | ``` 40 | 41 | Consider a case where you have some dependent variable $Y$ that you'd like to compare between two groups, which we'll call treatment and control. Here we presume $Y$ is continuous and, for the sake of simplicity, is in a standardized metric for the control condition. Letting $c$ stand for control and $i$ index the data row for a given case, we might write that as $y_{i, c} \sim \text{Normal} (0, 1)$. The mean for our treatment condition is 0.5, with the standard deviation still in the standardized metric. In the social sciences, a standardized mean difference of 0.5 would typically be considered a medium effect size. Here's what that'd look like.
42 | 43 | ```{r, fig.width = 6, fig.height = 2.25} 44 | # set our theme because, though I love the default ggplot theme, I hate gridlines 45 | theme_set(theme_grey() + 46 | theme(panel.grid = element_blank())) 47 | 48 | # define the means 49 | mu_c <- 0 50 | mu_t <- .5 51 | 52 | # set up the data 53 | tibble(x = seq(from = -4, to = 5, by = .01)) %>% 54 | mutate(c = dnorm(x, mean = mu_c, sd = 1), 55 | t = dnorm(x, mean = mu_t, sd = 1)) %>% 56 | 57 | # plot 58 | ggplot(aes(x = x, ymin = 0)) + 59 | geom_ribbon(aes(ymax = c), 60 | size = 0, alpha = 1/3, fill = "grey25") + 61 | geom_ribbon(aes(ymax = t), 62 | size = 0, alpha = 1/3, fill = "blue2") + 63 | geom_text(data = tibble(x = c(-.5, 1), 64 | y = .385, 65 | label = c("control", "treatment"), 66 | hjust = 1:0), 67 | aes(y = y, label = label, color = label, hjust = hjust), 68 | size = 5, show.legend = F) + 69 | scale_x_continuous(NULL, breaks = -4:5) + 70 | scale_y_continuous(NULL, breaks = NULL) + 71 | scale_color_manual(values = c("grey25", "blue2")) 72 | ``` 73 | 74 | Sure, those distributions have a lot of overlap. But their means are clearly different and we'd like to make sure we plan on collecting enough data to do a good job showing that. A power analysis will help. 75 | 76 | Within the conventional frequentist paradigm, power is the probability of rejecting the null hypothesis $H_0$ in favor of the alternative hypothesis $H_1$, given the alternative hypothesis is true. In this case, the typical null hypothesis is 77 | 78 | $$H_0\text{: } \mu_c = \mu_t,$$ 79 | 80 | or put differently 81 | 82 | $$ 83 | H_0\text{: } \mu_t - \mu_c = 0. 84 | $$ 85 | 86 | And the alternative hypothesis is often just 87 | 88 | $$H_1\text{: } \mu_c \neq \mu_t,$$ 89 | 90 | or otherwise put 91 | 92 | $$ 93 | H_1\text{: } \mu_t - \mu_c \neq 0. 94 | $$ 95 | 96 | Within the regression framework, we'll be comparing $\mu$s using the formula 97 | 98 | $$ 99 | \begin{align*} 100 | y_i & \sim \text{Normal} (\mu_i, \sigma) \\ 101 | \mu_i & = \beta_0 + \beta_1 \text{treatment}_i, 102 | \end{align*} 103 | $$ 104 | 105 | where $\text{treatment}$ is a dummy variable coded 0 = control, 1 = treatment and varies across cases indexed by $i$. In this setup, $\beta_0$ is the estimate for $\mu_c$ and $\beta_1$ is the estimate of the difference between condition means, $\mu_t - \mu_c$. Thus our focal parameter, the one we care about the most in our power analysis, will be $\beta_1$. 106 | 107 | Within the frequentist paradigm, we typically compare these hypotheses using a $p$-value for $H_0$ with the critical value, $\alpha$, set to .05. Thus, power is the probability we'll have $p < .05$ when it is indeed the case that $\mu_c \neq \mu_t$. We won't actually be computing $p$-values in this project, but we will use 95% intervals. Recall that the result of a Bayesian analysis, the posterior distribution, is the probability of the parameters, given the data, $p (\theta | \text{data})$. With our 95% Bayesian credible intervals, we'll be able to describe the parameter space over which our estimate of $\mu_t - \mu_c$ is 95% probable. That is, for our power analysis, we're interested in the probability our 95% credible intervals for $\beta_1$ do not contain 0 within their bounds when we know a priori $\mu_c \neq \mu_t$. 108 | 109 | The reason we know $\mu_c \neq \mu_t$ is because we'll be simulating the data that way. What our power analysis will help us determine is how many cases we'll need to achieve a predetermined level of power. The conventional threshold is .8.
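
Before we start simulating, it can help to anchor expectations with a quick analytic benchmark. For this simple two-group design, the frequentist answer is available in closed form via base **R**'s `power.t.test()`. To be clear, this isn't part of the Bayesian workflow we're building; it's just a reference point, and it suggests something like 64 cases per group.

```{r}
# a frequentist benchmark, not part of the Bayesian workflow:
# n per group for .8 power to detect d = 0.5 with a two-sample t-test
power.t.test(delta = .5, sd = 1, sig.level = .05, power = .8)
```

Keep that ballpark in mind as we work through the simulations.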
110 | 111 | ### Dry run number 1. 112 | 113 | To make this all concrete, let's start with a simple example. We'll simulate a single set of data, fit a Bayesian regression model, and examine the results for the critical parameter $\beta_1$. For the sake of simplicity, let's keep our two groups, treatment and control, the same size. We'll start with $n = 50$ for each condition. 114 | 115 | ```{r} 116 | n <- 50 117 | ``` 118 | 119 | We already decided above that 120 | 121 | $$ 122 | \begin{align*} 123 | y_{i, c} & \sim \text{Normal} (0, 1) \text{ and}\\ 124 | y_{i, t} & \sim \text{Normal} (0.5, 1). 125 | \end{align*} 126 | $$ 127 | 128 | Here's how we might simulate data along those lines. 129 | 130 | ```{r} 131 | set.seed(1) 132 | 133 | d <- 134 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 135 | mutate(treatment = ifelse(group == "control", 0, 1), 136 | y = ifelse(group == "control", 137 | rnorm(n, mean = mu_c, sd = 1), 138 | rnorm(n, mean = mu_t, sd = 1))) 139 | 140 | glimpse(d) 141 | ``` 142 | 143 | In case it wasn’t clear, the two variables `group` and `treatment` are redundant. Whereas the former is composed of names, the latter is the dummy-variable equivalent. The main event was how we used the `rnorm()` function to simulate the normally-distributed values for `y`. 144 | 145 | Before we fit our model, we need to decide on priors. To give us ideas, here are the **brms** defaults for our model and data. 146 | 147 | ```{r} 148 | get_prior(data = d, 149 | family = gaussian, 150 | y ~ 0 + intercept + treatment) 151 | ``` 152 | 153 | A few things: Notice that here we’re using the `0 + intercept` syntax. This is because **brms** handles the priors for the default intercept under the presumption you’ve mean-centered all your predictor variables. However, since our `treatment` variable is a dummy, that assumption won’t fly. The `0 + intercept` syntax allows us to treat the model intercept as just another $\beta$ parameter, which makes no assumptions about centering. Along those lines, you’ll notice **brms** currently defaults to improper flat priors for the $\beta$ parameters (i.e., those for which `class = b`). And finally, the default prior on $\sigma$ is a permissive `student_t(3, 0, 10)`. By default, **brms** also sets the lower bound for $\sigma$ parameters at zero, making that a half-$t$ distribution. If you’re confused by these details, spend some time with the [**brms** reference manual](https://cran.r-project.org/web/packages/brms/brms.pdf), particularly the `brm` and `brmsformula` sections. 154 | 155 | In this project, we'll be primarily using two kinds of priors: default flat priors and weakly-regularizing priors. Hopefully flat priors are self-explanatory. They let the likelihood dominate the posterior and tend to produce results similar to those from frequentist estimators. 156 | 157 | As for weakly-regularizing priors, McElreath covered them in his text. They're mentioned a bit in the **Stan** team's [*Prior Choice Recommendations*](https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations) wiki, and you can learn even more from Gelman, Simpson, and Betancourt's [*The prior can only be understood in the context of the likelihood*](http://www.stat.columbia.edu/~gelman/research/published/entropy-19-00555-v2.pdf). These priors aren’t strongly informative and aren't really representative of our research hypotheses. But they're not flat, either. Rather, with just a little bit of knowledge about the data, these priors are set to keep the MCMC chains on target.
Since our `y` variable has a mean near zero and a standard deviation near 1, and since our sole predictor, `treatment`, is a dummy, setting $\text{Normal} (0, 2)$ as the prior for both $\beta$ parameters might be a good place to start. The prior is permissive enough that it will let the likelihood dominate the posterior, but it also rules out ridiculous parts of the parameter space (e.g., a standardized mean difference of 20, an intercept of -93). And since we know the data are on the unit scale, we might just center our half-Student-$t$ prior on 1 and give it a gentle scale of 1. 158 | 159 | Feel free to disagree and use your own priors. The point here is to settle on the priors you'd use in the future with the real data. Select ones you'd feel comfortable defending to a skeptical reviewer. 160 | 161 | Here's how we might fit the model. 162 | 163 | ```{r fit, cache = T, warning = F, message = F, results = "hide"} 164 | fit <- 165 | brm(data = d, 166 | family = gaussian, 167 | y ~ 0 + intercept + treatment, 168 | prior = c(prior(normal(0, 2), class = b), 169 | prior(student_t(3, 1, 1), class = sigma)), 170 | seed = 1) 171 | ``` 172 | 173 | Before we look at the summary, we might check the chains in a trace plot. 174 | 175 | ```{r, fig.width = 8, fig.height = 4} 176 | plot(fit) 177 | ``` 178 | 179 | Yep, the chains all look good. Here's the parameter summary. 180 | 181 | ```{r} 182 | print(fit) 183 | ``` 184 | 185 | The 95% credible intervals for our $\beta_1$ parameter, termed `treatment` in the output, are well above zero. 186 | 187 | Another way to look at the model summary is with the handy `broom::tidy()` function. 188 | 189 | ```{r} 190 | tidy(fit, prob = .95) 191 | ``` 192 | 193 | It's important to keep in mind that by default, `tidy()` returns 90% intervals for `brm()` fit objects. To get the conventional 95% intervals, you'll need to specify `prob = .95`. The intervals are presented for each parameter in the `lower` and `upper` columns. Once we start simulating in bulk, the `tidy()` function will come in handy. You'll see. 194 | 195 | ### You can reuse a fit. 196 | 197 | Especially with simple models like this, a lot of the time we spend waiting for `brms::brm()` to return the model is wrapped up in compilation. This is because **brms** is a collection of user-friendly functions designed to fit models with [**Stan**](https://mc-stan.org). With each new model, `brm()` translates your model into **Stan** code, which is then translated to C++ and compiled (see [here](https://cran.r-project.org/web/packages/brms/vignettes/brms_overview.pdf) or [here](https://cran.r-project.org/web/packages/brms/brms.pdf)). However, we can use the `update()` function to update a previously-compiled fit object with new data. This cuts out the compilation time and allows us to get directly to sampling. Here’s how to do it. 198 | 199 | ```{r updated_fit, cache = T, warning = F, message = F, results = "hide"} 200 | # set a new seed 201 | set.seed(2) 202 | 203 | # simulate new data based on that new seed 204 | d <- 205 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 206 | mutate(treatment = ifelse(group == "control", 0, 1), 207 | y = ifelse(group == "control", 208 | rnorm(n, mean = mu_c, sd = 1), 209 | rnorm(n, mean = mu_t, sd = 1))) 210 | 211 | updated_fit <- 212 | update(fit, 213 | newdata = d, 214 | seed = 2) 215 | ``` 216 | 217 | Behold the `tidy()` summary of our updated model. 218 | 219 | ```{r} 220 | tidy(updated_fit, prob = .95) 221 | ``` 222 | 223 | Well how about that?
In this case, our 95% credible intervals for $\beta_1$ did include 0 within their bounds. Though the posterior mean, 0.30, is still well away from zero, here we'd fail to reject $H_0$ at the conventional level. This is why we simulate. 224 | 225 | To recap, we've 226 | 227 | a. determined our primary data type, 228 | b. cast our research question in terms of a regression model, 229 | c. identified the parameter of interest, 230 | d. settled on defensible priors, 231 | e. picked an initial sample size, 232 | f. fit an initial model with a single simulated data set, and 233 | g. practiced reusing that fit with `update()`. 234 | 235 | We're more than halfway there! It's time to do our first power simulation. 236 | 237 | ## Simulate to determine power. 238 | 239 | In this post, we'll play with three ways to do a Bayesian power simulation. They'll all be similar, but hopefully you'll learn a bit as we transition from one to the next. Though if you're impatient and all this seems remedial, you could probably just skip down to the final example, Version 3. 240 | 241 | ### Version 1: Let's introduce making a custom model-fitting function. 242 | 243 | For our power analysis, we'll need to simulate a large number of data sets, each of which we'll fit a model to. Here we'll make a custom function, `sim_d()`, that will simulate new data sets just like before. Our function will have two parameters: we'll set our seeds with `seed` and determine how many cases we'd like per group with `n`. 244 | 245 | ```{r} 246 | sim_d <- function(seed, n) { 247 | 248 | mu_t <- .5 249 | mu_c <- 0 250 | 251 | set.seed(seed) 252 | 253 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 254 | mutate(treatment = ifelse(group == "control", 0, 1), 255 | y = ifelse(group == "control", 256 | rnorm(n, mean = mu_c, sd = 1), 257 | rnorm(n, mean = mu_t, sd = 1))) 258 | } 259 | ``` 260 | 261 | Here's a quick example of how our function works. 262 | 263 | ```{r} 264 | sim_d(seed = 123, n = 2) 265 | ``` 266 | 267 | Now we're ready to get down to business. We’re going to be saving our simulation results in a nested data frame, `s`. Initially, `s` will have one column of `seed` values. These will serve a dual function. First, they are the values we’ll be feeding into the `seed` argument of our custom data-generating function, `sim_d()`. Second, since the `seed` values serially increase, they also stand in as iteration indexes. 268 | 269 | For our second step, we add the data simulations and save them in a nested column, `d`. In the first argument of the `purrr::map()` function, we indicate we want to iterate over the values in `seed`. In the second argument, we indicate we want to serially plug those `seed` values into the first argument within the `sim_d()` function. That argument, recall, is the well-named `seed` argument. With the final argument in `map()`, `n = 50`, we hard code 50 into the `n` argument of `sim_d()`. 270 | 271 | For the third step, we expand our `purrr::map()` skills from above to `purrr::map2()`, which allows us to iteratively insert two arguments into a function. Within this paradigm, the two arguments are generically termed `.x` and `.y`. Thus our approach will be `.x = d, .y = seed`. For our function, we specify `~update(fit, newdata = .x, seed = .y)`. Thus we'll be iteratively inserting our simulated `d` data into the `newdata` argument and will be simultaneously inserting our `seed` values into the `seed` argument.
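
If the `map()`/`map2()` syntax is hard to follow in the abstract, here's a tiny standalone example with arbitrary values. It isn't part of our simulation; it just shows how the `.x` and `.y` slots line up with the columns we feed in.

```{r}
# a toy example of the map()/map2() pattern (values are arbitrary):
tibble(seed = 1:3) %>% 
  mutate(d = map(seed, ~ rnorm(2, mean = .x)),     # one varying input: .x is the seed
         m = map2_dbl(d, seed, ~ mean(.x) + .y))   # two varying inputs: .x is d, .y is the seed
```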
272 | 273 | Also notice that the number of iterations we'll be working with is determined by the number of rows in the `seed` column. We are defining that number as `n_sim`. Since this is just a blog post, I'm going to take it easy and use 100. But if this were a real power analysis for one of your projects, something like 1000 would be better. 274 | 275 | Finally, you don’t have to do this, but I'm timing my simulation by saving `Sys.time()` values at the beginning and end of the simulation. 276 | 277 | ```{r s, cache = T, warning = F, message = F, results = "hide"} 278 | # how many simulations would you like? 279 | n_sim <- 100 280 | 281 | # this will help us track time 282 | t1 <- Sys.time() 283 | 284 | # here's the main event! 285 | s <- 286 | tibble(seed = 1:n_sim) %>% 287 | mutate(d = map(seed, sim_d, n = 50)) %>% 288 | mutate(fit = map2(d, seed, ~update(fit, newdata = .x, seed = .y))) 289 | 290 | t2 <- Sys.time() 291 | ``` 292 | 293 | The entire simulation took just a couple minutes on my several-year-old laptop. 294 | 295 | ```{r} 296 | t2 - t1 297 | ``` 298 | 299 | Your mileage may vary. 300 | 301 | Let's take a look at what we've done. 302 | 303 | ```{r} 304 | head(s) 305 | ``` 306 | 307 | In our 100-row nested tibble, we have all our simulated data sets in the `d` column and all of our **brms** fit objects nested in the `fit` column. Next we'll use `broom::tidy()` and a little wrangling to extract the parameter of interest, `b_treatment` (i.e., $\beta_1$), from each simulation. 308 | 309 | ```{r} 310 | s %>% 311 | mutate(treatment = map(fit, tidy, prob = .95)) %>% 312 | unnest(treatment) %>% 313 | filter(term == "b_treatment") %>% 314 | head() 315 | ``` 316 | 317 | As an aside, I know I'm moving kinda fast with all this wacky `purrr::map()`/`purrr::map2()` stuff. If you're new to using the **tidyverse** for iterating and saving the results in nested data structures, I recommend fixing an adult beverage and cozying up with Hadley Wickham's presentation, [*Managing many models*](https://www.youtube.com/watch?v=rz3_FDVt9eg). And if you really hate it, both Kruschke's and McElreath's texts contain many examples of how to iterate in a more base **R** sort of way. 318 | 319 | Anyway, here's what those 100 $\beta_1$ summaries look like in bulk. 320 | 321 | ```{r, fig.width = 8, fig.height = 3} 322 | s %>% 323 | mutate(treatment = map(fit, tidy, prob = .95)) %>% 324 | unnest(treatment) %>% 325 | filter(term == "b_treatment") %>% 326 | 327 | ggplot(aes(x = seed, y = estimate, ymin = lower, ymax = upper)) + 328 | geom_hline(yintercept = c(0, .5), color = "white") + 329 | geom_pointrange(fatten = 1/2) + 330 | labs(x = "seed (i.e., simulation index)", 331 | y = expression(beta[1])) 332 | ``` 333 | 334 | The horizontal lines show the idealized effect size (0.5) and the null hypothesis (0). Already, it's apparent that most of our intervals exclude zero, suggesting the null hypothesis is not credible. Several, however, do straddle it. Here's how to quantify that. 335 | 336 | ```{r} 337 | s %>% 338 | mutate(treatment = map(fit, tidy, prob = .95)) %>% 339 | unnest(treatment) %>% 340 | filter(term == "b_treatment") %>% 341 | mutate(check = ifelse(lower > 0, 1, 0)) %>% 342 | summarise(power = mean(check)) 343 | ``` 344 | 345 | With the second `mutate()` line, we used a logical statement within `ifelse()` to code all instances where the lower limit of the 95% interval was greater than 0 as a 1, with the rest as 0. That left us with a vector of 1s and 0s, which we saved as `check`.
In the `summarise()` line, we took the mean of that column, which returned our Bayesian power estimate. 346 | 347 | That is, in 67 of our 100 simulations, an $n = 50$ per group was enough to produce a 95% Bayesian credible interval that did not straddle 0. 348 | 349 | I should probably point out that a 95% interval for which `upper < 0` would have also been consistent with the alternative hypothesis of $\mu_c \neq \mu_t$. However, I didn't bother to work that option into the definition of our `check` variable because I knew from the outset that that would be a highly unlikely result. But if you'd like to work more rigor into your checks, by all means do. 350 | 351 | And if you've gotten this far and have been following along with code of your own, congratulations! You did it! You've estimated the power of a Bayesian model with a given $n$. Now let's refine our approach. 352 | 353 | ### Version 2: We should probably be more careful with memory. 354 | 355 | I really like that our `s` object contains all our `brm()` fits. It makes it really handy to do global diagnostics like making sure our $\hat R$ values are all within a respectable range. 356 | 357 | ```{r, fig.width = 3, fig.height = 2.5} 358 | s %>% 359 | mutate(rhat = map(fit, rhat)) %>% 360 | unnest(rhat) %>% 361 | 362 | ggplot(aes(x = rhat)) + 363 | geom_histogram(bins = 20) 364 | ``` 365 | 366 | Man, those $\hat R$ values look sweet. Anyway, holding on to all those fits can take up a lot of memory. If the only thing you're interested in is the parameter summaries, a better approach might be to do the model refitting and parameter extraction in one step. That way you only save the parameter summaries. Here's how you might do that. 367 | 368 | ```{r s2, cache = T, warning = F, message = F, results = "hide"} 369 | t3 <- Sys.time() 370 | 371 | s2 <- 372 | tibble(seed = 1:n_sim) %>% 373 | mutate(d = map(seed, sim_d, n = 50)) %>% 374 | # here's the new part 375 | mutate(tidy = map2(d, seed, ~update(fit, newdata = .x, seed = .y) %>% 376 | tidy(prob = .95) %>% 377 | filter(term == "b_treatment"))) 378 | 379 | t4 <- Sys.time() 380 | ``` 381 | 382 | Like before, this only took a couple minutes. 383 | 384 | ```{r} 385 | t4 - t3 386 | ``` 387 | 388 | As a point of comparison, here are the sizes of the results from our first approach compared with those from the second. 389 | 390 | ```{r} 391 | object.size(s) 392 | object.size(s2) 393 | ``` 394 | 395 | That's a big difference. Hopefully you get the idea. With more complicated models and 10+ times the number of simulations, size will eventually matter. 396 | 397 | Anyway, here are the results. 398 | 399 | ```{r, fig.width = 8, fig.height = 3} 400 | s2 %>% 401 | unnest(tidy) %>% 402 | 403 | ggplot(aes(x = seed, y = estimate, ymin = lower, ymax = upper)) + 404 | geom_hline(yintercept = c(0, .5), color = "white") + 405 | geom_pointrange(fatten = 1/2) + 406 | labs(x = "seed (i.e., simulation index)", 407 | y = expression(beta[1])) 408 | ``` 409 | 410 | Same basic deal, but with a lower memory burden. 411 | 412 | ### Version 3: Still talking about memory, we can be even stingier. 413 | 414 | So far, both of our simulation attempts resulted in our saving the simulated data sets. That's a really nice option if you ever want to go back and take a look at those simulated data. For example, you might want to inspect a random subset of the data simulations with box plots.
415 | 416 | ```{r, fig.width = 6, fig.height = 5} 417 | set.seed(1) 418 | 419 | s2 %>% 420 | sample_n(12) %>% 421 | unnest(d) %>% 422 | 423 | ggplot(aes(x = group, y = y)) + 424 | geom_boxplot(aes(fill = group), 425 | alpha = 2/3, show.legend = F) + 426 | scale_fill_manual(values = c("grey25", "blue2")) + 427 | xlab(NULL) + 428 | facet_wrap(~seed) 429 | ``` 430 | 431 | In this case, it's no big deal if we keep the data around or not. The data sets are fairly small and we're only simulating 100 of them. But in cases where the data are larger and you're doing 1000s of simulations, keeping the data could become a memory drain. 432 | 433 | If you're willing to forgo the luxury of inspecting your data simulations, it might make sense to run our power analysis in a way that avoids saving them. One way to do so would be to just wrap the data simulation and model fitting all in one function. We'll call it `sim_d_and_fit()`. 434 | 435 | ```{r} 436 | sim_d_and_fit <- function(seed, n) { 437 | 438 | mu_t <- .5 439 | mu_c <- 0 440 | 441 | set.seed(seed) 442 | 443 | d <- 444 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 445 | mutate(treatment = ifelse(group == "control", 0, 1), 446 | y = ifelse(group == "control", 447 | rnorm(n, mean = mu_c, sd = 1), 448 | rnorm(n, mean = mu_t, sd = 1))) 449 | 450 | update(fit, 451 | newdata = d, 452 | seed = seed) %>% 453 | tidy(prob = .95) %>% 454 | filter(term == "b_treatment") 455 | } 456 | ``` 457 | 458 | Now iterate 100 times once more. 459 | 460 | ```{r s3, cache = T, warning = F, message = F, results = "hide"} 461 | t5 <- Sys.time() 462 | 463 | s3 <- 464 | tibble(seed = 1:n_sim) %>% 465 | mutate(tidy = map(seed, sim_d_and_fit, n = 50)) %>% 466 | unnest(tidy) 467 | 468 | t6 <- Sys.time() 469 | ``` 470 | 471 | That was pretty quick. 472 | 473 | ```{r} 474 | t6 - t5 475 | ``` 476 | 477 | Here's what it returned. 478 | 479 | ```{r} 480 | head(s3) 481 | ``` 482 | 483 | By wrapping our data simulation, model fitting, and parameter extraction steps all in one function, we simplified the output such that we're no longer holding on to the data simulations or the **brms** fit objects. We just have the parameter summaries and the `seed`, making the product even smaller. 484 | 485 | ```{r} 486 | object.size(s) 487 | object.size(s2) 488 | object.size(s3) 489 | ``` 490 | 491 | But the primary results are the same. 492 | 493 | ```{r, fig.width = 8, fig.height = 3} 494 | s3 %>% 495 | ggplot(aes(x = seed, y = estimate, ymin = lower, ymax = upper)) + 496 | geom_hline(yintercept = c(0, .5), color = "white") + 497 | geom_pointrange(fatten = 1/2) + 498 | labs(x = "seed (i.e., simulation index)", 499 | y = expression(beta[1])) 500 | ``` 501 | 502 | We still get the same power estimate, too. 503 | 504 | ```{r} 505 | s3 %>% 506 | mutate(check = ifelse(lower > 0, 1, 0)) %>% 507 | summarise(power = mean(check)) 508 | ``` 509 | 510 | ## Next steps 511 | 512 | *But my goal was to figure out what $n$ will get me power of .8 or more!*, you say. Fair enough. Try increasing `n` to 65 or something. 513 | 514 | Welcome to the world of simulation. Since our Bayesian models are complicated, we don't have the luxury of plugging a few values into some quick power formula. Just as simulation is an iterative process, settling on the right values to simulate over might well be an iterative process, too. 515 | 516 | ## Wrap-up 517 | 518 | Anyway, that's the essence of the **brms/tidyverse** workflow for Bayesian power analysis. You follow these steps: 519 | 520 | 1.
Determine your primary data type. 521 | 2. Determine your primary regression model and parameter(s) of interest. 522 | 3. Pick defensible priors for all parameters. 523 | 4. Select a sample size. 524 | 5. Fit an initial model and save the fit object. 525 | 6. Simulate some large number of data sets all following your prechosen form and use the `update()` function to iteratively fit the models. 526 | 7. Extract the parameter(s) of interest. 527 | 8. Summarize. 528 | 529 | In addition, we played with a few approaches based on logistical concerns like memory. In the next post, part II, we'll see how the precision-oriented approach offers an alternative to null-hypothesis-oriented power. 530 | 531 | ## Session info 532 | 533 | ```{r} 534 | sessionInfo() 535 | ``` 536 | 537 | ```{r, eval = F} 538 | # If you increase n to 65, the power becomes about .84 539 | n_sim <- 100 540 | 541 | t7 <- Sys.time() 542 | 543 | s4 <- 544 | tibble(seed = 1:n_sim) %>% 545 | mutate(tidy = map(seed, sim_d_and_fit, n = 65)) 546 | 547 | t8 <- Sys.time() 548 | 549 | t8 - t7 550 | 551 | object.size(s4) 552 | 553 | s4 %>% 554 | unnest(tidy) %>% 555 | mutate(check = ifelse(lower > 0, 1, 0)) %>% 556 | summarise(power = mean(check)) 557 | ```
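
As a parting aside, keep in mind that with only 100 iterations, the power estimate itself is noisy. Here's a back-of-the-envelope sketch of its Monte Carlo standard error using the binomial formula; this is my own addition, not part of the workflow above, but it helps motivate the advice to use something like 1000 iterations for a real project.

```{r}
# Monte Carlo standard error of the power estimate:
# 67 "successes" out of 100 simulations
p_hat <- .67
sqrt(p_hat * (1 - p_hat) / 100)  # about 0.047
```

In other words, a 100-iteration estimate of .67 could easily wander by several percentage points from run to run.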
-------------------------------------------------------------------------------- /02.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Bayesian power analysis: Part II. Some might prefer precision to power." 3 | output: 4 | github_document 5 | --- 6 | 7 | ## tl;dr 8 | 9 | When researchers decide on a sample size for an upcoming project, there are more things to consider than null-hypothesis-oriented power. Bayesian researchers might like to frame their concerns in terms of precision. Stick around to learn what and how. 10 | 11 | ## If all you want to do is reject $H_0$, why not just drop Bayes altogether? 12 | 13 | If you read my last post, you may have found yourself thinking: *Sure, last time you avoided computing $p$-values with your 95% Bayesian credible intervals. But weren't you still acting like an NHSTesting frequentist with all that $H_0 / H_1$ talk?* 14 | 15 | Solid criticism. We didn't even bother discussing all the type-I versus type-II error details. Yet they, too, were lurking in the background in the way we just chose the typical .8 power benchmark. That's not to say frequentist null-hypothesis-oriented approaches to sample size planning aren't legitimate. They're certainly congruent with what most reviewers would expect. But this all seems at odds with a model-oriented Bayesian approach. Happily, we have other options to explore. 16 | 17 | ## Let's just pick up where we left off. 18 | 19 | Load our primary statistical packages. 20 | 21 | ```{r, message = F, warning = F} 22 | library(tidyverse) 23 | library(brms) 24 | library(broom) 25 | ``` 26 | 27 | As a recap, here's how we performed the last simulation-based Bayesian power analysis from part I. First, we simulated a single data set and fit an initial model. 28 | 29 | ```{r fit, cache = T, warning = F, message = F, results = "hide"} 30 | # define the means 31 | mu_c <- 0 32 | mu_t <- .5 33 | 34 | # determine the group size 35 | n <- 50 36 | 37 | # simulate the data 38 | set.seed(1) 39 | d <- 40 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 41 | mutate(treatment = ifelse(group == "control", 0, 1), 42 | y = ifelse(group == "control", 43 | rnorm(n, mean = mu_c, sd = 1), 44 | rnorm(n, mean = mu_t, sd = 1))) 45 | # fit the model 46 | fit <- 47 | brm(data = d, 48 | family = gaussian, 49 | y ~ 0 + intercept + treatment, 50 | prior = c(prior(normal(0, 2), class = b), 51 | prior(student_t(3, 1, 1), class = sigma)), 52 | seed = 1) 53 | ``` 54 | 55 | Next, we made a custom function that both simulated data sets and used the `update()` function to update that initial fit in order to avoid additional compilation time.
56 | 57 | ```{r} 58 | sim_d_and_fit <- function(seed, n) { 59 | 60 | mu_t <- .5 61 | mu_c <- 0 62 | 63 | set.seed(seed) 64 | 65 | d <- 66 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 67 | mutate(treatment = ifelse(group == "control", 0, 1), 68 | y = ifelse(group == "control", 69 | rnorm(n, mean = mu_c, sd = 1), 70 | rnorm(n, mean = mu_t, sd = 1))) 71 | 72 | update(fit, 73 | newdata = d, 74 | seed = seed) %>% 75 | tidy(prob = .95) %>% 76 | filter(term == "b_treatment") 77 | } 78 | ``` 79 | 80 | Then we finally iterated over `n_sim <- 100` times. 81 | 82 | ```{r s3, cache = T, warning = F, message = F, results = "hide"} 83 | n_sim <- 100 84 | 85 | s3 <- 86 | tibble(seed = 1:n_sim) %>% 87 | mutate(tidy = map(seed, sim_d_and_fit, n = 50)) %>% 88 | unnest(tidy) 89 | ``` 90 | 91 | The results looked like so: 92 | 93 | ```{r, fig.width = 8, fig.height = 3} 94 | theme_set(theme_grey() + 95 | theme(panel.grid = element_blank())) 96 | 97 | s3 %>% 98 | ggplot(aes(x = seed, y = estimate, ymin = lower, ymax = upper)) + 99 | geom_hline(yintercept = c(0, .5), color = "white") + 100 | geom_pointrange(fatten = 1/2) + 101 | labs(x = "seed (i.e., simulation index)", 102 | y = expression(beta[1])) 103 | ``` 104 | 105 | Time to build. 106 | 107 | ## We might evaluate "power" by widths. 108 | 109 | Instead of just ordering the point-ranges by their `seed` values, we might instead arrange them by the `lower` levels. 110 | 111 | ```{r, fig.width = 8, fig.height = 3} 112 | s3 %>% 113 | ggplot(aes(x = reorder(seed, lower), y = estimate, ymin = lower, ymax = upper)) + 114 | geom_hline(yintercept = c(0, .5), color = "white") + 115 | geom_pointrange(fatten = 1/2) + 116 | scale_x_discrete("reordered by the lower level of the 95% intervals", breaks = NULL) + 117 | ylab(expression(beta[1])) + 118 | coord_cartesian(ylim = c(-.5, 1.3)) 119 | ``` 120 | 121 | Notice how this arrangement highlights the differences in widths among the intervals. The wider the interval, the less precise the estimate. Some intervals were wider than others, but all tended to hover in a similar range. We might quantify those ranges by computing a `width` variable. 122 | 123 | ```{r, fig.width = 4, fig.height = 2.5} 124 | s3 <- 125 | s3 %>% 126 | mutate(width = upper - lower) 127 | 128 | head(s3) 129 | ``` 130 | 131 | Here's the `width` distribution. 132 | 133 | ```{r, fig.width = 4, fig.height = 2.5} 134 | s3 %>% 135 | ggplot(aes(x = width)) + 136 | geom_histogram(binwidth = .01) 137 | ``` 138 | 139 | The widths of our 95% intervals range from 0.6 to 0.95, with the bulk sitting around 0.8. Let's focus a bit and take a random sample from one of the simulation iterations. 140 | 141 | ```{r, fig.width = 4, fig.height = 1.125} 142 | set.seed(1) 143 | 144 | s3 %>% 145 | sample_n(1) %>% 146 | mutate(seed = seed %>% as.character()) %>% 147 | 148 | ggplot(aes(x = seed, y = estimate, ymin = lower, ymax = upper)) + 149 | geom_hline(yintercept = c(0, .5), color = "white") + 150 | geom_pointrange() + 151 | coord_flip() + 152 | labs(x = "seed #", 153 | y = expression(beta[1])) 154 | ``` 155 | 156 | Though the posterior mean suggests the most probable value for $\beta_1$ is about 0.6, the intervals suggest values from about 0.2 to almost 1 are within the 95% probability range. That's a wide spread. Within psychology, a standardized mean difference of 0.2 would typically be considered small, whereas a difference of 1 would be large enough to raise a skeptical eyebrow or two. 
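
To put that single wide interval in context, we might also summarize all 100 widths numerically. This quick sketch is my addition to complement the histogram above; it relies only on the `width` column we just computed.

```{r}
# numeric summaries of the 100 interval widths:
s3 %>% 
  summarise(min_width    = min(width),
            median_width = median(width),
            max_width    = max(width))
```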
157 | 158 | So instead of focusing on rejecting a null hypothesis like $H_0\text{: } \mu_\text{control} = \mu_\text{treatment}$, we might instead use our power simulation to determine the sample size we need to have most of our 95% intervals come in at a certain level of precision. This has been termed the accuracy in parameter estimation (AIPE; [Maxwell, Kelley, & Rausch, 2008](https://www3.nd.edu/~kkelley/publications/articles/Maxwell_Kelley_Rausch_2008.pdf); see also [Kruschke, 2014](http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/)) approach to sample size planning. 159 | 160 | Thinking in terms of AIPE, in terms of precision, let's say we wanted widths of 0.7 or smaller. Here's how we did with `s3`. 161 | 162 | ```{r} 163 | s3 %>% 164 | mutate(check = ifelse(width < .7, 1, 0)) %>% 165 | summarise(`width power` = mean(check)) 166 | ``` 167 | 168 | We did terrible. I'm not sure the term "width power" is even a thing. But hopefully you get the point. Our baby 100-iteration simulation suggests we have about a .08 probability of achieving 95% CI widths of 0.7 or smaller with $n = 50$ per group. Though we're pretty good at excluding zero, we don't tend to do so with precision above that. 169 | 170 | That last bit about excluding zero brings up an important point. Once we're concerned about width size, about precision, the null hypothesis is no longer of direct relevance. And since we're no longer wed to thinking in terms of the null hypothesis, there's no real need to stick with a .8 threshold for evaluating width power (okay, I'll stop using that term). Now if we wanted to stick with .8, we could. Though a little nonsensical, the .8 criterion would give our AIPE analyses a sense of familiarity with traditional power analyses, which some reviewers might appreciate. But in his text, Kruschke mentioned several other alternatives. One would be to set a maximum value for our CI widths and simulate to find the $n$ necessary so all our simulations pass that criterion. Another would follow Joseph, Wolfson, and du Berger ([1995a](http://www.medicine.mcgill.ca/epidemiology/Joseph/publications%5CMethodological%5Css_binom.pdf), [1995b](http://www.med.mcgill.ca/epidemiology/Joseph/publications/Methodological/ss_hpd.pdf)), who suggested we shoot for an $n$ that produces widths that pass that criterion on average. Here's how we did based on the average-width criterion. 171 | 172 | ```{r} 173 | s3 %>% 174 | summarise(`average width` = mean(width)) 175 | ``` 176 | 177 | Close. Let's see how increasing our sample size to 75 per group affects these metrics. 178 | 179 | ```{r s4, cache = T, warning = F, message = F, results = "hide"} 180 | s4 <- 181 | tibble(seed = 1:n_sim) %>% 182 | mutate(tidy = map(seed, sim_d_and_fit, n = 75)) %>% 183 | unnest(tidy) %>% 184 | mutate(width = upper - lower) 185 | ``` 186 | 187 | Here's what our new batch of 95% intervals looks like.
188 | 189 | ```{r, fig.width = 8, fig.height = 3} 190 | s4 %>% 191 | ggplot(aes(x = reorder(seed, lower), y = estimate, ymin = lower, ymax = upper)) + 192 | geom_hline(yintercept = c(0, .5), color = "white") + 193 | geom_pointrange(fatten = 1/2) + 194 | scale_x_discrete("reordered by the lower level of the 95% intervals", breaks = NULL) + 195 | ylab(expression(beta[1])) + 196 | # this kept the scale on the y-axis the same as the simulation with n = 50 197 | coord_cartesian(ylim = c(-.5, 1.3)) 198 | ``` 199 | 200 | Some of the intervals are still more precise than others, but they all now hover more tightly around their true data-generating value of 0.5. Here's our updated power for producing interval widths smaller than 0.7. 201 | 202 | ```{r} 203 | s4 %>% 204 | mutate(check = ifelse(width < .7, 1, 0)) %>% 205 | summarise(`proportion below 0.7` = mean(check), 206 | `average width` = mean(width)) 207 | ``` 208 | 209 | If we hold to the NHST-oriented .8 threshold, we did great and are even "overpowered". We didn't quite meet Kruschke's strict limiting-worst-precision threshold, but we got close enough that we'd have a good sense of what range of $n$ values we might evaluate over next. As far as the mean-precision criterion goes, we did great and even beat it by about 0.06. 210 | 211 | Here's a look at how this batch of widths is distributed. 212 | 213 | ```{r, fig.width = 4, fig.height = 2.5} 214 | s4 %>% 215 | ggplot(aes(x = width)) + 216 | geom_histogram(binwidth = .02) + 217 | geom_rug(size = 1/6) 218 | ``` 219 | 220 | Let's see if we can nail down the $n$s for our three AIPE criteria. Since we're so close to fulfilling Kruschke's limiting-worst-precision criterion, we'll start there. I'm thinking $n = 85$ should just about do it. 221 | 222 | ```{r s5, cache = T, warning = F, message = F, results = "hide"} 223 | s5 <- 224 | tibble(seed = 1:n_sim) %>% 225 | mutate(tidy = map(seed, sim_d_and_fit, n = 85)) %>% 226 | unnest(tidy) %>% 227 | mutate(width = upper - lower) 228 | ``` 229 | 230 | Did we pass? 231 | 232 | ```{r} 233 | s5 %>% 234 | mutate(check = ifelse(width < .7, 1, 0)) %>% 235 | summarise(`proportion below 0.7` = mean(check)) 236 | ``` 237 | 238 | Success! We might look at how they're distributed. 239 | 240 | ```{r, fig.width = 4, fig.height = 2.5} 241 | s5 %>% 242 | ggplot(aes(x = width)) + 243 | geom_histogram(binwidth = .01) + 244 | geom_rug(size = 1/6) 245 | ``` 246 | 247 | Two of our simulated widths were pretty close to the 0.7 boundary. If we were to do a proper simulation with 1000+ iterations, I'd worry one or two would creep over that boundary. So perhaps $n = 90$ would be a better candidate for a large-scale simulation. 248 | 249 | If we just wanted to meet the mean-precision criterion, we might look at something like $n = 65$. 250 | 251 | ```{r s6, cache = T, warning = F, message = F, results = "hide"} 252 | s6 <- 253 | tibble(seed = 1:n_sim) %>% 254 | mutate(tidy = map(seed, sim_d_and_fit, n = 65)) %>% 255 | unnest(tidy) %>% 256 | mutate(width = upper - lower) 257 | ``` 258 | 259 | Did we pass? 260 | 261 | ```{r} 262 | s6 %>% 263 | summarise(`average width` = mean(width)) 264 | ``` 265 | 266 | We got it! It looks like something like $n = 65$ would be a good candidate for a larger-scale simulation. Here's the distribution.
267 | 268 | ```{r, fig.width = 4, fig.height = 2.5} 269 | s6 %>% 270 | ggplot(aes(x = width)) + 271 | geom_histogram(binwidth = .02) + 272 | geom_rug(size = 1/6) 273 | ``` 274 | 275 | For our final possible criterion, getting .8 of the widths below the threshold, we'll want an $n$ somewhere between 65 and 85. 70, perhaps? 276 | 277 | ```{r s7, cache = T, warning = F, message = F, results = "hide"} 278 | s7 <- 279 | tibble(seed = 1:n_sim) %>% 280 | mutate(tidy = map(seed, sim_d_and_fit, n = 70)) %>% 281 | unnest(tidy) %>% 282 | mutate(width = upper - lower) 283 | ``` 284 | 285 | Did we pass? 286 | 287 | ```{r} 288 | s7 %>% 289 | mutate(check = ifelse(width < .7, 1, 0)) %>% 290 | summarise(`proportion below 0.7` = mean(check)) 291 | ``` 292 | 293 | Yep. Here's the distribution. 294 | 295 | ```{r, fig.width = 4, fig.height = 2.5} 296 | s7 %>% 297 | ggplot(aes(x = width)) + 298 | geom_histogram(binwidth = .02) + 299 | geom_rug(size = 1/6) 300 | ``` 301 | 302 | ## How are we defining our widths? 303 | 304 | In frequentist analyses, we typically work with 95% confidence intervals because of their close connection to the conventional $p < .05$ threshold. Another consequence of dropping our focus on rejecting $H_0$ is that it no longer seems necessary to evaluate our posteriors with 95% intervals. And as it turns out, some Bayesians aren't fans of the 95% interval. McElreath, for example, defiantly used 89% intervals in his [text](http://xcelab.net/rm/statistical-rethinking/). In contrast, Gelman has [blogged](https://statmodeling.stat.columbia.edu/2016/11/05/why-i-prefer-50-to-95-intervals/) on his fondness for 50% intervals. Just for kicks, let's follow Gelman's lead and practice evaluating an $n$ based on 50% intervals. This will require us to update our `sim_d_and_fit()` function to allow us to change the `prob` setting in the `broom::tidy()` function. 305 | 306 | ```{r} 307 | sim_d_and_fit <- function(seed, n, prob) { 308 | 309 | mu_c <- 0 310 | mu_t <- 0.5 311 | 312 | set.seed(seed) 313 | 314 | d <- 315 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 316 | mutate(treatment = ifelse(group == "control", 0, 1), 317 | y = ifelse(group == "control", 318 | rnorm(n, mean = mu_c, sd = 1), 319 | rnorm(n, mean = mu_t, sd = 1))) 320 | 321 | update(fit, 322 | newdata = d, 323 | seed = seed) %>% 324 | tidy(prob = prob) %>% 325 | filter(term == "b_treatment") 326 | } 327 | ``` 328 | 329 | Now simulate to examine those 50% intervals. We'll start with the original $n = 50$. 330 | 331 | ```{r s8, cache = T, warning = F, message = F, results = "hide"} 332 | n_sim <- 100 333 | 334 | s8 <- 335 | tibble(seed = 1:n_sim) %>% 336 | mutate(tidy = map(seed, sim_d_and_fit, n = 50, prob = .5)) %>% 337 | unnest(tidy) %>% 338 | mutate(width = upper - lower) 339 | ``` 340 | 341 | Here is the distribution of our 50% interval widths. 342 | 343 | ```{r, fig.width = 4, fig.height = 2.5} 344 | s8 %>% 345 | mutate(width = upper - lower) %>% 346 | 347 | ggplot(aes(x = width)) + 348 | geom_histogram(binwidth = .01) + 349 | geom_rug(size = 1/6) 350 | ``` 351 | 352 | Since we've gone from 95% to 50% intervals, it should be no surprise that their widths are substantially more narrow. Accordingly, we should evaluate them with a higher standard. Perhaps it's more reasonable to ask for an average width of 0.1. Let's see how close $n = 150$ gets us.
353 | 354 | ```{r s9, cache = T, warning = F, message = F, results = "hide"} 355 | s9 <- 356 | tibble(seed = 1:n_sim) %>% 357 | mutate(tidy = map(seed, sim_d_and_fit, n = 150, prob = .5)) %>% 358 | unnest(tidy) %>% 359 | mutate(width = upper - lower) 360 | ``` 361 | 362 | Look at the distribution. 363 | 364 | ```{r, fig.width = 4, fig.height = 2.5} 365 | s9 %>% 366 | ggplot(aes(x = width)) + 367 | geom_histogram(binwidth = .0025) + 368 | geom_rug(size = 1/6) 369 | ``` 370 | 371 | Nope, we're not there yet. Perhaps $n = 200$ or $250$ is the ticket. This is an iterative process. Anyway, once we're talking that AIPE/precision/interval-width talk, we can get all kinds of creative with which intervals we're even interested in. As far as I can tell, the topic is wide open for fights and collaborations between statisticians, methodologists, and substantive researchers to find sensible ways forward. 372 | 373 | Maybe you should write a dissertation on it. 374 | 375 | Regardless, stay tuned for part III where we'll liberate ourselves from the tyranny of the Gauss. 376 | 377 | ## Session info 378 | 379 | ```{r} 380 | sessionInfo() 381 | ``` 382 | 383 | -------------------------------------------------------------------------------- /02.md: -------------------------------------------------------------------------------- 1 | Bayesian power analysis: Part II. Some might prefer precision to power. 2 | ================ 3 | 4 | ## tl;dr 5 | 6 | When researchers decide on a sample size for an upcoming project, there 7 | are more things to consider than null-hypothesis-oriented power. 8 | Bayesian researchers might like to frame their concerns in terms of 9 | precision. Stick around to learn what and 10 | how. 11 | 12 | ## If all you want to do is reject \(H_0\), why not just drop Bayes altogether? 13 | 14 | If you read my last post, you may have found yourself thinking: *Sure, 15 | last time you avoided computing \(p\)-values with your 95% Bayesian 16 | credible intervals. But weren’t you still acting like a NHSTesting 17 | frequentist with all that \(H_0 / H_1\) talk?* 18 | 19 | Solid criticism. We didn’t even bother discussing all the type-I versus 20 | type-II error details. Yet they too were lurking in the background the 21 | way we just chose the typical .8 power benchmark. That’s not to say 22 | frequentist null-hypothesis-oriented approaches to sample size planning 23 | aren’t legitimate. They’re certainly congruent with what most reviewers 24 | would expect. But this all seems at odds with a model-oriented Bayesian 25 | approach. Happily, we have other options to explore. 26 | 27 | ## Let’s just pick up where we left off. 28 | 29 | Load our primary statistical packages. 30 | 31 | ``` r 32 | library(tidyverse) 33 | library(brms) 34 | library(broom) 35 | ``` 36 | 37 | As a recap, here’s how we performed the last simulation-based Bayesian 38 | power analysis from part I. First, we simulated a single data set and 39 | fit an initial model. 
40 | 41 | ``` r 42 | # define the means 43 | mu_c <- 0 44 | mu_t <- .5 45 | 46 | # determine the group size 47 | n <- 50 48 | 49 | # simulate the data 50 | set.seed(1) 51 | d <- 52 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 53 | mutate(treatment = ifelse(group == "control", 0, 1), 54 | y = ifelse(group == "control", 55 | rnorm(n, mean = mu_c, sd = 1), 56 | rnorm(n, mean = mu_t, sd = 1))) 57 | # fit the model 58 | fit <- 59 | brm(data = d, 60 | family = gaussian, 61 | y ~ 0 + intercept + treatment, 62 | prior = c(prior(normal(0, 2), class = b), 63 | prior(student_t(3, 1, 1), class = sigma)), 64 | seed = 1) 65 | ``` 66 | 67 | Next, we made a custom function that both simulated data sets and used 68 | the `update()` function to update that initial fit in order to avoid 69 | additional compilation time. 70 | 71 | ``` r 72 | sim_d_and_fit <- function(seed, n) { 73 | 74 | mu_t <- .5 75 | mu_c <- 0 76 | 77 | set.seed(seed) 78 | 79 | d <- 80 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 81 | mutate(treatment = ifelse(group == "control", 0, 1), 82 | y = ifelse(group == "control", 83 | rnorm(n, mean = mu_c, sd = 1), 84 | rnorm(n, mean = mu_t, sd = 1))) 85 | 86 | update(fit, 87 | newdata = d, 88 | seed = seed) %>% 89 | tidy(prob = .95) %>% 90 | filter(term == "b_treatment") 91 | } 92 | ``` 93 | 94 | Then we finally iterated over `n_sim <- 100` times. 95 | 96 | ``` r 97 | n_sim <- 100 98 | 99 | s3 <- 100 | tibble(seed = 1:n_sim) %>% 101 | mutate(tidy = map(seed, sim_d_and_fit, n = 50)) %>% 102 | unnest(tidy) 103 | ``` 104 | 105 | The results looked like so: 106 | 107 | ``` r 108 | theme_set(theme_grey() + 109 | theme(panel.grid = element_blank())) 110 | 111 | s3 %>% 112 | ggplot(aes(x = seed, y = estimate, ymin = lower, ymax = upper)) + 113 | geom_hline(yintercept = c(0, .5), color = "white") + 114 | geom_pointrange(fatten = 1/2) + 115 | labs(x = "seed (i.e., simulation index)", 116 | y = expression(beta[1])) 117 | ``` 118 | 119 | ![](02_files/figure-gfm/unnamed-chunk-3-1.png) 120 | 121 | Time to build. 122 | 123 | ## We might evaluate “power” by widths. 124 | 125 | Instead of just ordering the point-ranges by their `seed` values, we 126 | might instead arrange them by the `lower` levels. 127 | 128 | ``` r 129 | s3 %>% 130 | ggplot(aes(x = reorder(seed, lower), y = estimate, ymin = lower, ymax = upper)) + 131 | geom_hline(yintercept = c(0, .5), color = "white") + 132 | geom_pointrange(fatten = 1/2) + 133 | scale_x_discrete("reordered by the lower level of the 95% intervals", breaks = NULL) + 134 | ylab(expression(beta[1])) + 135 | coord_cartesian(ylim = c(-.5, 1.3)) 136 | ``` 137 | 138 | ![](02_files/figure-gfm/unnamed-chunk-4-1.png) 139 | 140 | Notice how this arrangement highlights the differences in widths among 141 | the intervals. The wider the interval, the less precise the estimate. 142 | Some intervals were wider than others, but all tended to hover in a 143 | similar range. We might quantify those ranges by computing a `width` 144 | variable. 
145 | 146 | ``` r 147 | s3 <- 148 | s3 %>% 149 | mutate(width = upper - lower) 150 | 151 | head(s3) 152 | ``` 153 | 154 | ## # A tibble: 6 x 7 155 | ## seed term estimate std.error lower upper width 156 | ## <int> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> 157 | ## 1 1 b_treatment 0.518 0.184 0.159 0.892 0.733 158 | ## 2 2 b_treatment 0.297 0.237 -0.176 0.765 0.941 159 | ## 3 3 b_treatment 0.641 0.178 0.296 0.982 0.686 160 | ## 4 4 b_treatment 0.224 0.178 -0.124 0.582 0.706 161 | ## 5 5 b_treatment 0.436 0.190 0.0560 0.796 0.740 162 | ## 6 6 b_treatment 0.300 0.206 -0.106 0.694 0.799 163 | 164 | Here’s the `width` distribution. 165 | 166 | ``` r 167 | s3 %>% 168 | ggplot(aes(x = width)) + 169 | geom_histogram(binwidth = .01) 170 | ``` 171 | 172 | ![](02_files/figure-gfm/unnamed-chunk-6-1.png) 173 | 174 | The widths of our 95% intervals range from 0.6 to 0.95, with the bulk 175 | sitting around 0.8. Let’s focus a bit and take a random sample from one 176 | of the simulation iterations. 177 | 178 | ``` r 179 | set.seed(1) 180 | 181 | s3 %>% 182 | sample_n(1) %>% 183 | mutate(seed = seed %>% as.character()) %>% 184 | 185 | ggplot(aes(x = seed, y = estimate, ymin = lower, ymax = upper)) + 186 | geom_hline(yintercept = c(0, .5), color = "white") + 187 | geom_pointrange() + 188 | coord_flip() + 189 | labs(x = "seed #", 190 | y = expression(beta[1])) 191 | ``` 192 | 193 | ![](02_files/figure-gfm/unnamed-chunk-7-1.png) 194 | 195 | Though the posterior mean suggests the most probable value for 196 | \(\beta_1\) is about 0.6, the intervals suggest values from about 0.2 to 197 | almost 1 are within the 95% probability range. That’s a wide spread. 198 | Within psychology, a standardized mean difference of 0.2 would typically 199 | be considered small, whereas a difference of 1 would be large enough to 200 | raise a skeptical eyebrow or two. 201 | 202 | So instead of focusing on rejecting a null hypothesis like 203 | \(H_0\text{: } \mu_\text{control} = \mu_\text{treatment}\), we might 204 | instead use our power simulation to determine the sample size we need to 205 | have most of our 95% intervals come in at a certain level of precision. 206 | This has been termed the accuracy in parameter estimation (AIPE; 207 | [Maxwell, Kelley, & 208 | Rausch, 2008](https://www3.nd.edu/~kkelley/publications/articles/Maxwell_Kelley_Rausch_2008.pdf); 209 | see also 210 | [Kruschke, 2014](http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/)) 211 | approach to sample size planning. 212 | 213 | Thinking in terms of AIPE, in terms of precision, let’s say we wanted 214 | widths of 0.7 or smaller. Here’s how we did with `s3`. 215 | 216 | ``` r 217 | s3 %>% 218 | mutate(check = ifelse(width < .7, 1, 0)) %>% 219 | summarise(`width power` = mean(check)) 220 | ``` 221 | 222 | ## # A tibble: 1 x 1 223 | ## `width power` 224 | ## <dbl> 225 | ## 1 0.08 226 | 227 | We did terrible. I’m not sure the term “width power” is even a thing. 228 | But hopefully you get the point. Our baby 100-iteration simulation 229 | suggests we have about a .08 probability of achieving 95% CI widths of 230 | 0.7 or smaller with \(n = 50\) per group. Though we’re pretty good at 231 | excluding zero, we don’t tend to do so with precision above that. 232 | 233 | That last bit about excluding zero brings up an important point. Once 234 | we’re concerned about width size, about precision, the null hypothesis 235 | is no longer of direct relevance.
And since we’re no longer wed to 236 | thinking in terms of the null hypothesis, there’s no real need to stick 237 | with a .8 threshold for evaluating width power (okay, I’ll stop using 238 | that term). Now if we wanted to stick with .8, we could. Though a little 239 | nonsensical, the .8 criterion would give our AIPE analyses a sense of 240 | familiarity with traditional power analyses, which some reviewers might 241 | appreciate. But in his text, Kruschke mentioned several other 242 | alternatives. One would be to set a maximum value for our CI widths and 243 | simulate to find the \(n\) necessary so all our simulations pass that 244 | criterion. Another would follow Joseph, Wolfson, and du Berger 245 | ([1995a](http://www.medicine.mcgill.ca/epidemiology/Joseph/publications%5CMethodological%5Css_binom.pdf), 246 | [1995b](http://www.med.mcgill.ca/epidemiology/Joseph/publications/Methodological/ss_hpd.pdf)), 247 | who suggested we shoot for an \(n\) that produces widths that pass that 248 | criterion on average. Here’s how we did based on the average-width 249 | criterion. 250 | 251 | ``` r 252 | s3 %>% 253 | summarise(`average width` = mean(width)) 254 | ``` 255 | 256 | ## # A tibble: 1 x 1 257 | ## `average width` 258 | ## <dbl> 259 | ## 1 0.784 260 | 261 | Close. Let’s see how increasing our sample size to 75 per group affects 262 | these metrics. 263 | 264 | ``` r 265 | s4 <- 266 | tibble(seed = 1:n_sim) %>% 267 | mutate(tidy = map(seed, sim_d_and_fit, n = 75)) %>% 268 | unnest(tidy) %>% 269 | mutate(width = upper - lower) 270 | ``` 271 | 272 | Here’s what our new batch of 95% intervals looks like. 273 | 274 | ``` r 275 | s4 %>% 276 | ggplot(aes(x = reorder(seed, lower), y = estimate, ymin = lower, ymax = upper)) + 277 | geom_hline(yintercept = c(0, .5), color = "white") + 278 | geom_pointrange(fatten = 1/2) + 279 | scale_x_discrete("reordered by the lower level of the 95% intervals", breaks = NULL) + 280 | ylab(expression(beta[1])) + 281 | # this kept the scale on the y-axis the same as the simulation with n = 50 282 | coord_cartesian(ylim = c(-.5, 1.3)) 283 | ``` 284 | 285 | ![](02_files/figure-gfm/unnamed-chunk-10-1.png) 286 | 287 | Some of the intervals are still more precise than others, but they all 288 | now hover more tightly around their true data-generating value of 0.5. 289 | Here’s our updated power for producing interval widths smaller than 0.7. 290 | 291 | ``` r 292 | s4 %>% 293 | mutate(check = ifelse(width < .7, 1, 0)) %>% 294 | summarise(`proportion below 0.7` = mean(check), 295 | `average width` = mean(width)) 296 | ``` 297 | 298 | ## # A tibble: 1 x 2 299 | ## `proportion below 0.7` `average width` 300 | ## <dbl> <dbl> 301 | ## 1 0.94 0.637 302 | 303 | If we hold to the NHST-oriented .8 threshold, we did great and are even 304 | “overpowered”. We didn’t quite meet Kruschke’s strict 305 | limiting-worst-precision threshold, but we got close enough that we’d 306 | have a good sense of what range of \(n\) values we might evaluate over 307 | next. As far as the mean-precision criterion goes, we did great and even 308 | beat it by about 0.06. 309 | 310 | Here’s a look at how this batch of widths is distributed. 311 | 312 | ``` r 313 | s4 %>% 314 | ggplot(aes(x = width)) + 315 | geom_histogram(binwidth = .02) + 316 | geom_rug(size = 1/6) 317 | ``` 318 | 319 | ![](02_files/figure-gfm/unnamed-chunk-12-1.png) 320 | 321 | Let’s see if we can nail down the \(n\)s for our three AIPE criteria.
322 | Since we’re so close to fulfilling Kruschke’s limiting-worst-precision 323 | criterion, we’ll start there. I’m thinking \(n = 85\) should just about 324 | do it. 325 | 326 | ``` r 327 | s5 <- 328 | tibble(seed = 1:n_sim) %>% 329 | mutate(tidy = map(seed, sim_d_and_fit, n = 85)) %>% 330 | unnest(tidy) %>% 331 | mutate(width = upper - lower) 332 | ``` 333 | 334 | Did we pass? 335 | 336 | ``` r 337 | s5 %>% 338 | mutate(check = ifelse(width < .7, 1, 0)) %>% 339 | summarise(`proportion below 0.7` = mean(check)) 340 | ``` 341 | 342 | ## # A tibble: 1 x 1 343 | ## `proportion below 0.7` 344 | ## <dbl> 345 | ## 1 1 346 | 347 | Success\! We might look at how they’re distributed. 348 | 349 | ``` r 350 | s5 %>% 351 | ggplot(aes(x = width)) + 352 | geom_histogram(binwidth = .01) + 353 | geom_rug(size = 1/6) 354 | ``` 355 | 356 | ![](02_files/figure-gfm/unnamed-chunk-14-1.png) 357 | 358 | Two of our simulated widths were pretty close to the 0.7 boundary. If we 359 | were to do a proper simulation with 1000+ iterations, I’d worry one or 360 | two would creep over that boundary. So perhaps \(n = 90\) would be a 361 | better candidate for a large-scale simulation. 362 | 363 | If we just wanted to meet the mean-precision criterion, we might look at 364 | something like \(n = 65\). 365 | 366 | ``` r 367 | s6 <- 368 | tibble(seed = 1:n_sim) %>% 369 | mutate(tidy = map(seed, sim_d_and_fit, n = 65)) %>% 370 | unnest(tidy) %>% 371 | mutate(width = upper - lower) 372 | ``` 373 | 374 | Did we pass? 375 | 376 | ``` r 377 | s6 %>% 378 | summarise(`average width` = mean(width)) 379 | ``` 380 | 381 | ## # A tibble: 1 x 1 382 | ## `average width` 383 | ## <dbl> 384 | ## 1 0.686 385 | 386 | We got it\! It looks like something like \(n = 65\) would be a good 387 | candidate for a larger-scale simulation. Here’s the distribution. 388 | 389 | ``` r 390 | s6 %>% 391 | ggplot(aes(x = width)) + 392 | geom_histogram(binwidth = .02) + 393 | geom_rug(size = 1/6) 394 | ``` 395 | 396 | ![](02_files/figure-gfm/unnamed-chunk-16-1.png) 397 | 398 | For our final possible criterion, getting .8 of the widths below the 399 | threshold, we’ll want an \(n\) somewhere between 65 and 85. 70, perhaps? 400 | 401 | ``` r 402 | s7 <- 403 | tibble(seed = 1:n_sim) %>% 404 | mutate(tidy = map(seed, sim_d_and_fit, n = 70)) %>% 405 | unnest(tidy) %>% 406 | mutate(width = upper - lower) 407 | ``` 408 | 409 | Did we pass? 410 | 411 | ``` r 412 | s7 %>% 413 | mutate(check = ifelse(width < .7, 1, 0)) %>% 414 | summarise(`proportion below 0.7` = mean(check)) 415 | ``` 416 | 417 | ## # A tibble: 1 x 1 418 | ## `proportion below 0.7` 419 | ## <dbl> 420 | ## 1 0.82 421 | 422 | Yep. Here’s the distribution. 423 | 424 | ``` r 425 | s7 %>% 426 | ggplot(aes(x = width)) + 427 | geom_histogram(binwidth = .02) + 428 | geom_rug(size = 1/6) 429 | ``` 430 | 431 | ![](02_files/figure-gfm/unnamed-chunk-18-1.png) 432 | 433 | ## How are we defining our widths? 434 | 435 | In frequentist analyses, we typically work with 95% confidence intervals 436 | because of their close connection to the conventional \(p < .05\) 437 | threshold. Another consequence of dropping our focus on rejecting 438 | \(H_0\) is that it no longer seems necessary to evaluate our posteriors 439 | with 95% intervals. And as it turns out, some Bayesians aren’t fans of 440 | the 95% interval. McElreath, for example, defiantly used 89% intervals 441 | in his [text](http://xcelab.net/rm/statistical-rethinking/).
In 442 | contrast, Gelman has 443 | [blogged](https://statmodeling.stat.columbia.edu/2016/11/05/why-i-prefer-50-to-95-intervals/) 444 | on his fondness for 50% intervals. Just for kicks, let’s follow Gelman’s 445 | lead and practice evaluating an \(n\) based on 50% intervals. This will 446 | require us to update our `sim_d_and_fit()` function to allow us to 447 | change the `prob` setting in the `broom::tidy()` function. 448 | 449 | ``` r 450 | sim_d_and_fit <- function(seed, n, prob) { 451 | 452 | mu_c <- 0 453 | mu_t <- 0.5 454 | 455 | set.seed(seed) 456 | 457 | d <- 458 | tibble(group = rep(c("control", "treatment"), each = n)) %>% 459 | mutate(treatment = ifelse(group == "control", 0, 1), 460 | y = ifelse(group == "control", 461 | rnorm(n, mean = mu_c, sd = 1), 462 | rnorm(n, mean = mu_t, sd = 1))) 463 | 464 | update(fit, 465 | newdata = d, 466 | seed = seed) %>% 467 | tidy(prob = prob) %>% 468 | filter(term == "b_treatment") 469 | } 470 | ``` 471 | 472 | Now simulate to examine those 50% intervals. We’ll start with the 473 | original \(n = 50\). 474 | 475 | ``` r 476 | n_sim <- 100 477 | 478 | s8 <- 479 | tibble(seed = 1:n_sim) %>% 480 | mutate(tidy = map(seed, sim_d_and_fit, n = 50, prob = .5)) %>% 481 | unnest(tidy) %>% 482 | mutate(width = upper - lower) 483 | ``` 484 | 485 | Here is the distribution of our 50% interval widths. 486 | 487 | ``` r 488 | s8 %>% 489 | mutate(width = upper - lower) %>% 490 | 491 | ggplot(aes(x = width)) + 492 | geom_histogram(binwidth = .01) + 493 | geom_rug(size = 1/6) 494 | ``` 495 | 496 | ![](02_files/figure-gfm/unnamed-chunk-20-1.png) 497 | 498 | Since we’ve gone from 95% to 50% intervals, it should be no surprise 499 | that their widths are substantially more narrow. Accordingly, we should 500 | evaluate them with a higher standard. Perhaps it’s more reasonable to 501 | ask for an average width of 0.1. Let’s see how close \(n = 150\) gets 502 | us. 503 | 504 | ``` r 505 | s9 <- 506 | tibble(seed = 1:n_sim) %>% 507 | mutate(tidy = map(seed, sim_d_and_fit, n = 150, prob = .5)) %>% 508 | unnest(tidy) %>% 509 | mutate(width = upper - lower) 510 | ``` 511 | 512 | Look at the distribution. 513 | 514 | ``` r 515 | s9 %>% 516 | ggplot(aes(x = width)) + 517 | geom_histogram(binwidth = .0025) + 518 | geom_rug(size = 1/6) 519 | ``` 520 | 521 | ![](02_files/figure-gfm/unnamed-chunk-21-1.png) 522 | 523 | Nope, we’re not there yet. Perhaps \(n = 200\) or \(250\) is the ticket. 524 | This is an iterative process. Anyway, once we’re talking that 525 | AIPE/precision/interval-width talk, we can get all kinds of creative 526 | with which intervals we’re even interested in. As far as I can tell, the 527 | topic is wide open for fights and collaborations between statisticians, 528 | methodologists, and substantive researchers to find sensible ways 529 | forward. 530 | 531 | Maybe you should write a dissertation on it. 532 | 533 | Regardless, stay tuned for part III where we’ll liberate ourselves from 534 | the tyranny of the Gauss.
535 | 536 | ## Session info 537 | 538 | ``` r 539 | sessionInfo() 540 | ``` 541 | 542 | ## R version 3.6.0 (2019-04-26) 543 | ## Platform: x86_64-apple-darwin15.6.0 (64-bit) 544 | ## Running under: macOS High Sierra 10.13.6 545 | ## 546 | ## Matrix products: default 547 | ## BLAS: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib 548 | ## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib 549 | ## 550 | ## locale: 551 | ## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 552 | ## 553 | ## attached base packages: 554 | ## [1] stats graphics grDevices utils datasets methods base 555 | ## 556 | ## other attached packages: 557 | ## [1] broom_0.5.2 brms_2.9.0 Rcpp_1.0.1 forcats_0.4.0 558 | ## [5] stringr_1.4.0 dplyr_0.8.1 purrr_0.3.2 readr_1.3.1 559 | ## [9] tidyr_0.8.3 tibble_2.1.3 ggplot2_3.2.0 tidyverse_1.2.1 560 | ## 561 | ## loaded via a namespace (and not attached): 562 | ## [1] nlme_3.1-139 matrixStats_0.54.0 xts_0.11-2 563 | ## [4] lubridate_1.7.4 threejs_0.3.1 httr_1.4.0 564 | ## [7] rstan_2.18.2 tools_3.6.0 backports_1.1.4 565 | ## [10] utf8_1.1.4 R6_2.4.0 DT_0.7 566 | ## [13] lazyeval_0.2.2 colorspace_1.4-1 withr_2.1.2 567 | ## [16] prettyunits_1.0.2 processx_3.3.1 tidyselect_0.2.5 568 | ## [19] gridExtra_2.3 Brobdingnag_1.2-6 compiler_3.6.0 569 | ## [22] cli_1.1.0 rvest_0.3.4 xml2_1.2.0 570 | ## [25] shinyjs_1.0 labeling_0.3 colourpicker_1.0 571 | ## [28] scales_1.0.0 dygraphs_1.1.1.6 mvtnorm_1.0-11 572 | ## [31] callr_3.2.0 ggridges_0.5.1 StanHeaders_2.18.1-10 573 | ## [34] digest_0.6.19 rmarkdown_1.13 base64enc_0.1-3 574 | ## [37] pkgconfig_2.0.2 htmltools_0.3.6 htmlwidgets_1.3 575 | ## [40] rlang_0.4.0 readxl_1.3.1 rstudioapi_0.10 576 | ## [43] shiny_1.3.2 generics_0.0.2 zoo_1.8-6 577 | ## [46] jsonlite_1.6 crosstalk_1.0.0 gtools_3.8.1 578 | ## [49] inline_0.3.15 magrittr_1.5 loo_2.1.0 579 | ## [52] bayesplot_1.7.0 Matrix_1.2-17 fansi_0.4.0 580 | ## [55] munsell_0.5.0 abind_1.4-5 stringi_1.4.3 581 | ## [58] yaml_2.2.0 pkgbuild_1.0.3 plyr_1.8.4 582 | ## [61] grid_3.6.0 parallel_3.6.0 promises_1.0.1 583 | ## [64] crayon_1.3.4 miniUI_0.1.1.1 lattice_0.20-38 584 | ## [67] haven_2.1.0 hms_0.4.2 zeallot_0.1.0 585 | ## [70] ps_1.3.0 knitr_1.23 pillar_1.4.1 586 | ## [73] igraph_1.2.4.1 markdown_1.0 shinystan_2.5.0 587 | ## [76] reshape2_1.4.3 stats4_3.6.0 rstantools_1.5.1 588 | ## [79] glue_1.3.1 evaluate_0.14 modelr_0.1.4 589 | ## [82] vctrs_0.1.0 httpuv_1.5.1 cellranger_1.1.0 590 | ## [85] gtable_0.3.0 assertthat_0.2.1 xfun_0.8 591 | ## [88] mime_0.7 xtable_1.8-4 coda_0.19-2 592 | ## [91] later_0.8.0 rsconnect_0.8.13 shinythemes_1.1.2 593 | ## [94] bridgesampling_0.6-0 594 | -------------------------------------------------------------------------------- /02_cache/gfm/__packages: -------------------------------------------------------------------------------- 1 | base 2 | tidyverse 3 | ggplot2 4 | tibble 5 | tidyr 6 | readr 7 | purrr 8 | dplyr 9 | stringr 10 | forcats 11 | Rcpp 12 | brms 13 | broom 14 | -------------------------------------------------------------------------------- /02_cache/gfm/fit_45cd95835a57d8dbaf6f90d131dd104f.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/fit_45cd95835a57d8dbaf6f90d131dd104f.RData -------------------------------------------------------------------------------- /02_cache/gfm/fit_45cd95835a57d8dbaf6f90d131dd104f.rdb: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/fit_45cd95835a57d8dbaf6f90d131dd104f.rdb -------------------------------------------------------------------------------- /02_cache/gfm/fit_45cd95835a57d8dbaf6f90d131dd104f.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/fit_45cd95835a57d8dbaf6f90d131dd104f.rdx -------------------------------------------------------------------------------- /02_cache/gfm/s3_0b4ed38c928734dead22ae1ac38eabed.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s3_0b4ed38c928734dead22ae1ac38eabed.RData -------------------------------------------------------------------------------- /02_cache/gfm/s3_0b4ed38c928734dead22ae1ac38eabed.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s3_0b4ed38c928734dead22ae1ac38eabed.rdb -------------------------------------------------------------------------------- /02_cache/gfm/s3_0b4ed38c928734dead22ae1ac38eabed.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s3_0b4ed38c928734dead22ae1ac38eabed.rdx -------------------------------------------------------------------------------- /02_cache/gfm/s4_1d0dccb20a40342c9ebfe44a80aad635.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s4_1d0dccb20a40342c9ebfe44a80aad635.RData -------------------------------------------------------------------------------- /02_cache/gfm/s4_1d0dccb20a40342c9ebfe44a80aad635.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s4_1d0dccb20a40342c9ebfe44a80aad635.rdb -------------------------------------------------------------------------------- /02_cache/gfm/s4_1d0dccb20a40342c9ebfe44a80aad635.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s4_1d0dccb20a40342c9ebfe44a80aad635.rdx -------------------------------------------------------------------------------- /02_cache/gfm/s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.RData -------------------------------------------------------------------------------- /02_cache/gfm/s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.rdb 
-------------------------------------------------------------------------------- /02_cache/gfm/s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s5_fc8a23cc74beb54bc2ee6ed7ebb07ebd.rdx -------------------------------------------------------------------------------- /02_cache/gfm/s6_0555490575dbe08f16d6608918182537.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s6_0555490575dbe08f16d6608918182537.RData -------------------------------------------------------------------------------- /02_cache/gfm/s6_0555490575dbe08f16d6608918182537.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s6_0555490575dbe08f16d6608918182537.rdb -------------------------------------------------------------------------------- /02_cache/gfm/s6_0555490575dbe08f16d6608918182537.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s6_0555490575dbe08f16d6608918182537.rdx -------------------------------------------------------------------------------- /02_cache/gfm/s7_9c445b4e8ff04faf70df920cd182ab8c.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s7_9c445b4e8ff04faf70df920cd182ab8c.RData -------------------------------------------------------------------------------- /02_cache/gfm/s7_9c445b4e8ff04faf70df920cd182ab8c.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s7_9c445b4e8ff04faf70df920cd182ab8c.rdb -------------------------------------------------------------------------------- /02_cache/gfm/s7_9c445b4e8ff04faf70df920cd182ab8c.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s7_9c445b4e8ff04faf70df920cd182ab8c.rdx -------------------------------------------------------------------------------- /02_cache/gfm/s8_66bc9bedede54f05c4d24d3e9973fcab.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s8_66bc9bedede54f05c4d24d3e9973fcab.RData -------------------------------------------------------------------------------- /02_cache/gfm/s8_66bc9bedede54f05c4d24d3e9973fcab.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s8_66bc9bedede54f05c4d24d3e9973fcab.rdb -------------------------------------------------------------------------------- /02_cache/gfm/s8_66bc9bedede54f05c4d24d3e9973fcab.rdx: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s8_66bc9bedede54f05c4d24d3e9973fcab.rdx -------------------------------------------------------------------------------- /02_cache/gfm/s9_4a259be02ecb0f2156e377f564006d53.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s9_4a259be02ecb0f2156e377f564006d53.RData -------------------------------------------------------------------------------- /02_cache/gfm/s9_4a259be02ecb0f2156e377f564006d53.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s9_4a259be02ecb0f2156e377f564006d53.rdb -------------------------------------------------------------------------------- /02_cache/gfm/s9_4a259be02ecb0f2156e377f564006d53.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_cache/gfm/s9_4a259be02ecb0f2156e377f564006d53.rdx -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-10-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-10-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-12-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-12-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-14-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-14-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-16-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-16-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-18-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-18-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-20-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-20-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-21-1.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-21-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-3-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-3-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-4-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-4-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-6-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-6-1.png -------------------------------------------------------------------------------- /02_files/figure-gfm/unnamed-chunk-7-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/02_files/figure-gfm/unnamed-chunk-7-1.png -------------------------------------------------------------------------------- /03.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Bayesian power analysis: Part III. All the world isn't a Gauss." 3 | output: 4 | github_document 5 | --- 6 | 7 | ## tl;dr 8 | 9 | So far we've covered Bayesian power simulations from both a null hypothesis orientation and a parameter width perspective. In both instances, we kept things simple and stayed with Gaussian (i.e., normally distributed) data. But not all data follow that form, so it might do us well to expand our skill set a bit. In this post, we'll cover how we might perform power simulations with count and binary data. For the count data, we'll use the Poisson likelihood. For the binary, we'll use the binomial. 10 | 11 | ## The Poisson distribution is handy for counts. 12 | 13 | In the social sciences, count data arise when we ask questions like: 14 | 15 | * How many sexual partners have you had? 16 | * How many pets do you have at home? 17 | * How many cigarettes did you smoke, yesterday? 18 | 19 | The values these data will take are discrete [^1] in that you've either slept with 9 or 10 people, but definitely not 9.5. The values cannot go below zero in that even if you quit smoking cold turkey 15 years ago and have been a health nut since, you still could not have smoked -3 cigarettes, yesterday. Zero is as low as it goes. 20 | 21 | The canonical distribution for data of this type--non-negative integers--is the Poisson. It's named after the French mathematician Siméon Denis Poisson, [who had quite the confident stare in his youth](https://upload.wikimedia.org/wikipedia/commons/e/e8/E._Marcellot_Siméon-Denis_Poisson_1804.jpg). The Poisson distribution has one parameter, $\lambda$, which controls both its mean and variance. Although the numbers the Poisson describes are counts, the $\lambda$ parameter does not need to be an integer.
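As a quick aside of my own (this check isn't in the original post), you can confirm that equal-mean-and-variance property with a pile of draws.

```{r}
# for the Poisson, the mean and the variance both equal lambda, so a
# large batch of draws should return values near the lambda we set
set.seed(1)

draws <- rpois(n = 1e5, lambda = 3.2)

mean(draws)  # should be close to 3.2
var(draws)   # should be close to 3.2, too
```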
For example, here's the plot of 1000 draws from a Poisson for which $\lambda = 3.2$. 22 | 23 | ```{r, fig.width = 4, fig.height = 2.5, warning = F, message = F} 24 | library(tidyverse) 25 | 26 | theme_set(theme_gray() + theme(panel.grid = element_blank())) 27 | 28 | tibble(x = rpois(n = 1e3, lambda = 3.2)) %>% 29 | mutate(x = factor(x)) %>% 30 | 31 | ggplot(aes(x = x)) + 32 | geom_bar() 33 | ``` 34 | 35 | In case you missed it, the key function for generating those data was `rpois()`. I'm not going to go into a full-blown tutorial on the Poisson distribution or on count regression. For more thorough introductions, check out Atkins et al's [*A tutorial on count regression and zero-altered count models for longitudinal substance use data*](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3513584/pdf/nihms396181.pdf), chapters 9 through 11 in McElreath's [*Statistical Rethinking*](https://xcelab.net/rm/statistical-rethinking/), or, if you really want to dive in, Agresti's [*Foundations of Linear and Generalized Linear Models*](https://www.wiley.com/en-us/Foundations+of+Linear+and+Generalized+Linear+Models-p-9781118730034). 36 | 37 | For our power example, let's say you were interested in drinking. Using data from [the National Epidemiologic Survey on Alcohol and Related Conditions](https://pubs.niaaa.nih.gov/publications/AA70/AA70.htm), Christopher Ingraham presented [a data visualization](https://www.washingtonpost.com/news/wonk/wp/2014/09/25/think-you-drink-a-lot-this-chart-will-tell-you/?utm_term=.b81599bbbe25) of the average number of alcoholic drinks American adults consume, per week. By decile, the numbers were: 38 | 39 | 1. 0.00 40 | 2. 0.00 41 | 3. 0.00 42 | 4. 0.02 43 | 5. 0.14 44 | 6. 0.63 45 | 7. 2.17 46 | 8. 6.25 47 | 9. 15.28 48 | 10. 73.85 49 | 50 | Let's say you wanted to run a study where you planned on comparing two demographic groups by their weekly drinking levels. Let's further say you suspected one of those groups drank like American adults in the 7^th^ decile and the other drank like American adults in the 8^th^. We'll call them low and high drinkers, respectively. For convenience, let's further presume you'll be able to recruit equal numbers of participants from both groups. The question for our power analysis is to determine how many you'd need per group to detect reliable differences. Using $n = 50$ as a starting point, here’s what the data for our hypothetical groups might look like. 51 | 52 | ```{r, fig.width = 4, fig.height = 4} 53 | mu_7 <- 2.17 54 | mu_8 <- 6.25 55 | 56 | n <- 50 57 | 58 | set.seed(3) 59 | 60 | d <- 61 | tibble(low = rpois(n = n, lambda = mu_7), 62 | high = rpois(n = n, lambda = mu_8)) %>% 63 | gather(group, count) 64 | 65 | d %>% 66 | mutate(count = factor(count)) %>% 67 | 68 | ggplot(aes(x = count)) + 69 | geom_bar() + 70 | facet_wrap(~group, ncol = 1) 71 | ``` 72 | 73 | This will be our primary data type. Our next step is to determine how to express our research question as a regression model. Like with our two-group Gaussian models, we can predict counts in terms of an intercept (i.e., standing for the expected value on the reference group) and slope (i.e., standing for the expected difference between the reference group and the comparison group).
If we coded our two groups by a `high` variable for which 0 stood for low drinkers and 1 stood for high drinkers, the basic model would follow the form 74 | 75 | $$ 76 | \begin{align*} 77 | \text{drinks_per_week}_i & \sim \text{Poisson}(\lambda_i) \\ 78 | \operatorname{log} (\lambda_i) & = \beta_0 + \beta_1 \text{high}_i. 79 | \end{align*} 80 | $$ 81 | 82 | Here's how to set the data up for that model. 83 | 84 | ```{r} 85 | d <- 86 | d %>% 87 | mutate(high = ifelse(group == "low", 0, 1)) 88 | ``` 89 | 90 | 91 | If you were attending closely to our model formula, you noticed we ran into a detail. Count regression, such as with the Poisson likelihood, tends to use the log link. *Why?* you ask. Recall that counts need to be 0 and above. Same deal for our $\lambda$ parameter. In order to make sure our models don't yield silly estimates for $\lambda$, like -2 or something, we typically use the log link. You don't have to, of course. [The world is your playground](https://i.kym-cdn.com/entries/icons/original/000/008/342/ihave.jpg). But this is the method most of your colleagues are likely to use and it's the one I suggest you use until you have compelling reasons to do otherwise. 92 | 93 | So then since we're now fitting a model with a log link, it might seem challenging to pick good priors. As a place to start, we can use the `brms::get_prior()` function to see the **brms** defaults. 94 | 95 | ```{r, warning = F, message = F} 96 | library(brms) 97 | 98 | get_prior(data = d, 99 | family = poisson, 100 | count ~ 0 + intercept + high) 101 | ``` 102 | 103 | Hopefully two things popped out. First, there's no prior of `class = sigma`. Since the Poisson distribution only has one parameter $\lambda$, we don't need to set a prior for $\sigma$. Our model won't have one. Second, because we're continuing to use the `0 + intercept` syntax for our model intercept, both our intercept and slope are of prior `class = b` and those currently have default flat priors with **brms**. To be sure, flat priors aren't the best. But maybe if this was your first time playing around with a Poisson model, default flat priors might seem like a safe place to start. [Feel free to disagree](https://xkcd.com/386/). In the meantime, here's how to fit that default Poisson model with `brms::brm()`. 104 | 105 | ```{r fit1, cache = T, message = F, warning = F, results = "hide"} 106 | fit1 <- 107 | brm(data = d, 108 | family = poisson, 109 | count ~ 0 + intercept + high, 110 | seed = 3) 111 | ``` 112 | 113 | ```{r} 114 | print(fit1) 115 | ``` 116 | 117 | Since we used the log link, our model results are in the log metric, too. If you'd like them in the metric of the data, you'd work directly with the posterior samples and exponentiate. 118 | 119 | ```{r} 120 | post <- 121 | posterior_samples(fit1) %>% 122 | mutate(`beta_0 (i.e., low)` = exp(b_intercept), 123 | `beta_1 (i.e., difference score for high)` = exp(b_high)) 124 | ``` 125 | 126 | We can then just summarize our parameters of interest. 127 | 128 | ```{r} 129 | post %>% 130 | select(starts_with("beta_")) %>% 131 | gather() %>% 132 | group_by(key) %>% 133 | summarise(mean = mean(value), 134 | lower = quantile(value, prob = .025), 135 | upper = quantile(value, prob = .975)) 136 | ``` 137 | 138 | For the sake of simulation, it'll be easier if we press on with evaluating the parameters on the log metric, though. If you're working within a null-hypothesis-oriented power paradigm, you'll be happy to know zero is still the number to beat for evaluating our 95% intervals for $\beta_1$, even when that parameter is in the log metric. Here it is, again.
If you're working within a null-hypothesis oriented power paradigm, you'll be happy to know zero is still the number to beat for evaluating our 95% intervals for $\beta_1$, even when that parameter is in the log metric. Here it is, again. 139 | 140 | ```{r, warning = F, message = F} 141 | library(broom) 142 | 143 | tidy(fit1, prob = .95) %>% 144 | filter(term == "b_high") 145 | ``` 146 | 147 | So our first fit suggests we're on good footing to run a quick power simulation holding $n = 50$. As in the prior blog posts, our lives will be simpler if we set up a custom simulation function. Since we'll be using it to simulate the data and fit the model in one step, let's call it `sim_data_fit()`. 148 | 149 | ```{r} 150 | sim_data_fit <- function(seed, n) { 151 | 152 | n <- n 153 | 154 | set.seed(seed) 155 | 156 | d <- 157 | tibble(high = rep(0:1, each = n), 158 | count = c(rpois(n = n, lambda = mu_7), 159 | rpois(n = n, lambda = mu_8))) 160 | 161 | update(fit1, 162 | newdata = d, 163 | seed = seed) %>% 164 | tidy(prob = .95) %>% 165 | filter(term == "b_high") %>% 166 | select(lower:upper) 167 | 168 | } 169 | ``` 170 | 171 | Here's the simulation for a simple 100 iterations. 172 | 173 | ```{r sim1, cache = T, warning = F, message = F, results = "hide"} 174 | sim1 <- 175 | tibble(seed = 1:100) %>% 176 | mutate(ci = map(seed, sim_data_fit, n = 50)) %>% 177 | unnest() 178 | ``` 179 | 180 | That went quick--less than 3 minutes on my old laptop. Here's what those 100 $\beta_1$ intervals look like in bulk. 181 | 182 | ```{r, fig.width = 8, fig.height = 3} 183 | sim1 %>% 184 | 185 | ggplot(aes(x = seed, ymin = lower, ymax = upper)) + 186 | geom_hline(yintercept = 0, color = "white") + 187 | geom_linerange() + 188 | labs(x = "seed (i.e., simulation index)", 189 | y = expression(beta[1])) 190 | ``` 191 | 192 | None of them are anywhere near the null value 0. So it appears we're well above .8 power to reject the typical $H_0$ with $n = 50$. Here's the distribution of their widths. 193 | 194 | ```{r, fig.width = 4, fig.height = 2.5} 195 | sim1 %>% 196 | mutate(width = upper - lower) %>% 197 | 198 | ggplot(aes(x = width)) + 199 | geom_histogram(binwidth = 0.01) 200 | ``` 201 | 202 | What if we wanted a mean width of 0.25 on the log scale? We might try the simulation with $n = 150$. 203 | 204 | ```{r sim2, cache = T, warning = F, message = F, results = "hide"} 205 | sim2 <- 206 | tibble(seed = 1:100) %>% 207 | mutate(ci = map(seed, sim_data_fit, n = 150)) %>% 208 | unnest() 209 | ``` 210 | 211 | Here we'll summarize the widths both in terms of their mean and what proportion were smaller than 0.25. 212 | 213 | ```{r} 214 | sim2 %>% 215 | mutate(width = upper - lower) %>% 216 | summarise(`mean width` = mean(width), 217 | `below 0.25` = mean(width < 0.25)) 218 | ``` 219 | 220 | If we wanted to focus on the mean, we did pretty good. Perhaps set the $n = 155$ and simulate a full 1000+ iterations for a serious power analysis. But if we wanted to make the stricter criteria of all below 0.25, we'd need to up the $n$ quite a bit more. And of course, once you have a little experience working with Poisson models, you might do the power simulations with more ambitious priors. For example, if your count values are lower than like 1000, there's a good chance a `normal(0, 6)` prior on your $\beta$ parameters will be nearly flat within the reasonable neighborhoods of the parameter space. 
221 | 222 | ```{r, fig.width = 5, fig.height = 2, eval = F, echo = F} 223 | # here's a quick visualization of why that's so 224 | tibble(sd = 0:6, 225 | `sd * 2` = sd * 2) %>% 226 | gather() %>% 227 | 228 | ggplot(aes(x = value, y = exp(value))) + 229 | geom_point() + 230 | facet_wrap(~key, scales = "free") 231 | ``` 232 | 233 | Now that you've got a sense of how to work with the Poisson likelihood, it's time to move on to the binomial. 234 | 235 | ## The binomial is handy for binary data. 236 | 237 | Binary data are even weirder than counts. They typically only take on two values: 0 and 1. Sometimes 0 is a stand-in for "no" and 1 for "yes" (e.g., *Are you an expert in Bayesian power analysis?* For me that would be `0`). You can also have data of this kind if you asked people whether they'd like to choose option A or B. With those kinds of data, you might code A as 0 and B as 1. Binomial data also often stand in for trials where 0 = "fail" and 1 = "success." For example, if you answered "Yes" to the question *Are all data normally distributed?* we'd mark your answer down as a `0`. 238 | 239 | Though 0s and 1s are popular, sometimes binomial data appear in their aggregated form. Let's say I gave you 10 algebra questions and you got 7 of them right. Here's one way to encode those data. 240 | 241 | ```{r} 242 | n <- 10 243 | z <- 7 244 | 245 | rep(0:1, times = c(n - z, z)) 246 | ``` 247 | 248 | In that example, `n` stood for the total number of trials and `z` was the number you got correct (i.e., the number of times we encoded your response as a 1). A more compact way to encode that data is with two columns, one for `z` and the other for `n`. 249 | 250 | ```{r} 251 | tibble(z = z, 252 | n = n) 253 | ``` 254 | 255 | So then if you gave those same 10 questions to four of your friends, we could encode the results like this. 256 | 257 | ```{r} 258 | set.seed(3) 259 | 260 | tibble(id = letters[1:5], 261 | z = rpois(n = 5, lambda = 5), 262 | n = n) 263 | ``` 264 | 265 | If you're `b`, it appears you're the smart one in the group. 266 | 267 | Anyway, whether working with binomial or aggregated binomial data, we're interested in the probability a given trial will be 1. 268 | 269 | ### Logistic regression with binary data. 270 | 271 | Taking binary data as a starting point, given data $d$ that includes a variable $y$ where the value in the $i^\text{th}$ row is a 0 or a 1, we'd like to know $p(y_i = 1 | d)$. The binomial distribution will help us get that estimate for $p$. We'll do so within the context of a logistic regression model following the form 272 | 273 | $$ 274 | \begin{align*} 275 | y_i & \sim \text{Binomial} (n = 1, p_i) \\ 276 | \operatorname{logit} (p_i) & = \beta_0, 277 | \end{align*} 278 | $$ 279 | 280 | where the logit function is defined as the log odds 281 | 282 | $$ 283 | \operatorname{logit} (p_i) = \operatorname{log} \bigg (\frac{p_i}{1 - p_i} \bigg), 284 | $$ 285 | 286 | and thus 287 | 288 | $$ 289 | \operatorname{log} \bigg (\frac{p_i}{1 - p_i} \bigg) = \beta_0. 290 | $$ 291 | 292 | In those formulas, $\beta_0$ is the intercept. In a binomial model with no predictors [^2], the intercept $\beta_0$ is just the estimate for $p$, but in the log-odds metric. So yes, similar to the Poisson models, we typically use a link function with our binomial models. Instead of the log link, we use the logit because it constrains the posterior for $p$ to values between 0 and 1. Here's a quick numeric sketch of that mapping, an aside of mine using base **R**'s `qlogis()` (the logit) and `plogis()` (its inverse).
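```{r}
# base R has both functions built in: qlogis() is the logit and
# plogis() is its inverse
qlogis(c(.25, .5, .75))  # probabilities expressed as log odds

plogis(0)                # a log odds of 0 maps back to a probability of .5
```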
As above with the Poisson, I'm not going to go into a full-blown tutorial on the binomial distribution or on logistic regression. For more thorough introductions, check out chapters 9 and 10 in McElreath's [*Statistical Rethinking*](https://xcelab.net/rm/statistical-rethinking/) or Agresti's [*Foundations of Linear and Generalized Linear Models*](https://www.wiley.com/en-us/Foundations+of+Linear+and+Generalized+Linear+Models-p-9781118730034).

Time to simulate some data. Let's say we'd like to estimate the probability someone will hit a ball in a baseball game. Nowadays, batting averages for professional baseball players tend to be around .25 (see [here](http://www.baseball-almanac.com/hitting/hibavg4.shtml)). So if we wanted to simulate 50 at-bats, we might do so like this.

```{r}
set.seed(3)

d <- tibble(y = rbinom(n = 50, size = 1, prob = .25))

str(d)
```

Here's what those data look like in a bar plot.

```{r, fig.width = 3, fig.height = 2}
d %>% 
  mutate(y = factor(y)) %>% 
  
  ggplot(aes(x = y)) +
  geom_bar()
```

Here's the **brms** default for our intercept-only logistic regression model.

```{r}
get_prior(data = d,
          family = binomial,
          y | trials(1) ~ 1)
```

That's a really liberal prior. We might be a little more gutsy and put a more skeptical `normal(0, 2)` prior on that intercept. Within the context of our logit link, that still puts 95% of the prior probability for $p$ between about .02 and .98, which is almost the entire parameter space. Here's how to fit the model with the `brm()` function.

```{r, eval = F, echo = F}
# proof of how permissive that `normal(0, 2)` prior is
inv_logit_scaled(c(-4, 4))
```


```{r fit2, cache = T, message = F, warning = F, results = "hide"}
fit2 <-
  brm(data = d,
      family = binomial,
      y | trials(1) ~ 1,
      prior(normal(0, 2), class = Intercept),
      seed = 3)
```

In the `brm()` formula syntax, including a `|` bar on the left side of a formula indicates we have extra supplementary information about our criterion variable. In this case, that information is that each `y` value corresponds to a single trial (i.e., `trials(1)`), which itself corresponds to the $n = 1$ portion of the statistical formula, above. Here are the results.

```{r}
print(fit2)
```

Remember that that intercept is on the scale of the logit link, the log odds. We can transform it with the `brms::inv_logit_scaled()` function.

```{r}
fixef(fit2)["Intercept", 1] %>% 
  inv_logit_scaled()
```

If we want the full posterior distribution, we'll need to work with the posterior draws themselves.

```{r, fig.width = 4, fig.height = 2.25}
posterior_samples(fit2) %>% 
  transmute(p = inv_logit_scaled(b_Intercept)) %>% 
  
  ggplot(aes(x = p)) +
  geom_density(fill = "grey25", size = 0) +
  scale_x_continuous("probability of a hit", limits = c(0, 1)) +
  scale_y_continuous(NULL, breaks = NULL)
```

Looks like the null hypothesis of $p = .5$ is not credible for this simulation.
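If you'd like a number to go with that visual impression, one quick option is the proportion of posterior draws for which $p$ exceeds the null value. This is just a side check, not part of the power workflow proper.

```{r}
# the proportion of posterior draws for which p exceeds .5
posterior_samples(fit2) %>% 
  transmute(p = inv_logit_scaled(b_Intercept)) %>% 
  summarise(`proportion above .5` = mean(p > .5))
```

Given where that density sits, the proportion should be at or very near zero.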
If we'd like the posterior median and percentile-based 95% intervals, we might use the `median_qi()` function from the handy [**tidybayes** package](https://mjskay.github.io/tidybayes/index.html).

```{r, warning = F, message = F}
library(tidybayes)

posterior_samples(fit2) %>% 
  transmute(p = inv_logit_scaled(b_Intercept)) %>% 
  median_qi()
```

Yep, .5 was not within those intervals. Let's see what happens when we do a mini power analysis with 100 iterations. First we set up our simulation function.

```{r}
sim_data_fit <- function(seed, n_player) {
  
  n_trials <- 1
  prob_hit <- .25
  
  set.seed(seed)
  
  d <- tibble(y = rbinom(n = n_player,
                         size = n_trials,
                         prob = prob_hit))
  
  update(fit2,
         newdata = d,
         seed = seed) %>% 
    posterior_samples() %>% 
    transmute(p = inv_logit_scaled(b_Intercept)) %>% 
    median_qi() %>% 
    select(.lower:.upper)
  
}
```

Simulate.

```{r sim3, cache = T, warning = F, message = F, results = "hide"}
sim3 <-
  tibble(seed = 1:100) %>% 
  mutate(ci = map(seed, sim_data_fit, n_player = 50)) %>% 
  unnest()
```

You might plot the intervals.

```{r, fig.width = 8, fig.height = 3}
sim3 %>% 
  ggplot(aes(x = seed, ymin = .lower, ymax = .upper)) +
  geom_hline(yintercept = c(.25, .5), color = "white") +
  geom_linerange() +
  xlab("seed (i.e., simulation index)") +
  scale_y_continuous("probability of hitting the ball", limits = c(0, 1))
```

Like one of my old coworkers used to say: *Purtier 'n a hog!* Here we'll summarize the results in terms of their conventional power, their mean width, and the proportion of widths narrower than .25. *Why .25?* I don't know. Without a substantively-informed alternative, it's as good a criterion as any.

```{r}
sim3 %>% 
  mutate(width = .upper - .lower) %>% 
  summarise(`conventional power` = mean(.upper < .5),
            `mean width`         = mean(width),
            `width below .25`    = mean(width < .25))
```

Depending on your study needs, you'd adjust your sample size accordingly, do a mini simulation or two first, and then follow up with a proper 1000+ power simulation.

I should point out that whereas we evaluated the power of the Poisson model with the parameters on the scale of the link function, we evaluated the power for our logistic regression model after transforming the intercept back into the probability metric. Both methods are fine. The way you run your power simulation should be based on how you want to interpret and report your results.

We should also acknowledge that this was our first example of a power simulation that wasn't based on some group comparison. Comparing groups is fine and normal and important. And it's also the case that power matters for more than group-based analyses. Our simulation-based approach to Bayesian power analyses is fine for both.

### Aggregated binomial regression.

It's no more difficult to simulate and work with aggregated binomial data. But since the mechanics for `brms::brm()` and thus the down-the-road simulation setup are a little different, we should practice. With our new setup, we'll consider a new example.
Since .25 is the typical batting average, it might make better sense to define the null hypothesis like this:

$$H_0 \text{: } p = .25.$$

Consider a case where we had some intervention for which we expected a new batting average of .35. How many trials would we need, then, to either reject $H_0$ or at least estimate $p$ with a satisfactory degree of precision? Here's what the statistical formula for the implied aggregated binomial model might look like:

$$
\begin{align*}
y_i & \sim \text{Binomial} (n, p_i) \\
\operatorname{logit} (p_i) & = \beta_0.
\end{align*}
$$

The big change is that we no longer define $n$ as 1. Let's say we wanted our aggregated binomial data set to contain the summary statistics for $n = 100$ trials. Here's what that might look like.

```{r}
n_trials <- 100
prob_hit <- .35

set.seed(3)

d <- tibble(n_trials = n_trials,
            y = rbinom(n = 1,
                       size = n_trials,
                       prob = prob_hit))

d
```

Now we have two columns. The first, `n_trials`, indicates how many cases or trials we're summarizing. The second, `y`, indicates how many successes/1s/hits we might expect given $p = .35$. This is the aggregated binomial equivalent of a 100-row vector composed of 32 1s and 68 0s.

Now, before we discuss fitting the model with **brms**, let's talk priors. Since we've updated our definition of $H_0$, it might make sense to update the prior for $\beta_0$. As it turns out, setting that prior to `normal(-1, 0.5)` puts the prior mode at about .25 on the probability space, but with fairly wide 95% intervals ranging from about .12 to .5. Though centered on our updated null value, this prior is still quite permissive given our hypothesized $p = .35$.

```{r, fig.width = 3, fig.height = 2.25, echo = F, eval = F}
# here's the evidence for that claim
set.seed(1)
proof <-
  tibble(y = rnorm(1e6, -1, .5)) %>% 
  mutate(y = inv_logit_scaled(y))

proof %>% 
  ggplot(aes(x = y, y = 0)) +
  geom_halfeyeh(point_interval = mode_qi, .width = c(.5, .95)) +
  scale_x_continuous("hit probability", limits = c(0, 1)) +
  scale_y_continuous(NULL, breaks = NULL) +
  labs(subtitle = "This is the consequence of the\nnormal(-1, 0.5) prior after converting it\nback to the probability metric.")

proof %>% 
  mode_qi()

rm(proof)
```

To fit an aggregated binomial model with the `brm()` function, we again use the `| trials()` syntax, where the value that goes in `trials()` is either a fixed number or a variable in the data indexing $n$. Our approach will be the latter.

```{r fit3, cache = T, message = F, warning = F, results = "hide"}
fit3 <-
  brm(data = d,
      family = binomial,
      y | trials(n_trials) ~ 1,
      prior(normal(-1, 0.5), class = Intercept),
      seed = 3)
```

Inspect the summary.

```{r}
print(fit3)
```

After a transformation, here's what that looks like in a plot.
```{r, fig.width = 4, fig.height = 2}
posterior_samples(fit3) %>% 
  transmute(p = inv_logit_scaled(b_Intercept)) %>% 
  
  ggplot(aes(x = p, y = 0)) +
  geom_halfeyeh(.width = c(.5, .95)) +
  scale_x_continuous("probability of a hit", limits = c(0, 1)) +
  scale_y_continuous(NULL, breaks = NULL)
```

Based on a single simulation, it looks like $n = 100$ won't quite be enough to reject $H_0 \text{: } p = .25$ with a conventional 2-sided 95% interval. But it does look like we're in the ballpark and that our basic data + model setup will work for a larger-scale simulation. Here's an example of how you might update our custom simulation function.

```{r}
sim_data_fit <- function(seed, n_trials) {
  
  prob_hit <- .35
  
  set.seed(seed)
  
  d <- tibble(y = rbinom(n = 1,
                         size = n_trials,
                         prob = prob_hit),
              n_trials = n_trials)
  
  update(fit3,
         newdata = d,
         seed = seed) %>% 
    posterior_samples() %>% 
    transmute(p = inv_logit_scaled(b_Intercept)) %>% 
    median_qi() %>% 
    select(.lower:.upper)
  
}
```

Simulate, this time trying out $n = 120$.

```{r sim4, cache = T, warning = F, message = F, results = "hide"}
sim4 <-
  tibble(seed = 1:100) %>% 
  mutate(ci = map(seed, sim_data_fit, n_trials = 120)) %>% 
  unnest()
```

Plot the intervals.

```{r, fig.width = 8, fig.height = 3}
sim4 %>% 
  ggplot(aes(x = seed, ymin = .lower, ymax = .upper)) +
  geom_hline(yintercept = c(.25, .35), color = "white") +
  geom_linerange() +
  xlab("seed (i.e., simulation index)") +
  scale_y_continuous("probability of hitting the ball",
                     limits = c(0, 1), breaks = c(0, .25, .35, 1))
```

Overall, those intervals look pretty good. They're fairly narrow and are hovering around the data-generating $p = .35$. But it seems many are still crossing the .25 threshold. Let's see the results of a formal summary.

```{r}
sim4 %>% 
  mutate(width = .upper - .lower) %>% 
  summarise(`conventional power` = mean(.lower > .25),
            `mean width`         = mean(width),
            `width below .2`     = mean(width < .2))
```

All widths were narrower than .2 and the mean width was about .16. In the abstract that might seem reasonably precise. But we're still not precise enough to reject $H_0$ with a conventional power level. Depending on your needs, adjust the $n$ accordingly and simulate again.

## Session info

```{r}
sessionInfo()
```

## Footnotes

[^1]: Yes, one can smoke half a cigarette or drink 1/3 of a drink. Ideally, we'd have the exact amount of nicotine in your blood at a given moment and over time and the same for the amount of alcohol in your system relative to your blood volume and such. But in practice, substance use researchers just don't tend to have access to data of that quality. Instead, we're typically stuck with simple counts. And I look forward to the day the right team of engineers, computer scientists, and substance use researchers (and whoever else I forgot to mention) release the cheap, non-invasive technology we need to passively measure these things.
Until then: *How many standard servings of alcohol did you drink, last night?*

[^2]: In case this is all new to you and you had the question in your mind: Yes, you can add predictors to the logistic regression model. Say we had a model with two predictors, $x_1$ and $x_2$. Our statistical model would then follow the form $\operatorname{logit} (p_i) = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i}$.

--------------------------------------------------------------------------------
/03.md:
--------------------------------------------------------------------------------

Bayesian power analysis: Part III. All the world isn’t a Gauss.
================

## tl;dr

So far we’ve covered Bayesian power simulations from both a null hypothesis orientation and a parameter width perspective. In both instances, we kept things simple and stayed with Gaussian (i.e., normally distributed) data. But not all data follow that form, so it might do us well to expand our skill set a bit. In this post, we’ll cover how we might perform power simulations with count and binary data. For the count data, we’ll use the Poisson likelihood. For the binary, we’ll use the binomial.

## The Poisson distribution is handy for counts.

In the social sciences, count data arise when we ask questions like:

  - How many sexual partners have you had?
  - How many pets do you have at home?
  - How many cigarettes did you smoke, yesterday?

The values these data will take are discrete \[1\] in that you’ve either slept with 9 or 10 people, but definitely not 9.5. The values cannot go below zero in that even if you quit smoking cold turkey 15 years ago and have been a health nut since, you still could not have smoked -3 cigarettes, yesterday. Zero is as low as it goes.

The canonical distribution for data of this type–non-negative integers–is the Poisson. It’s named after the French mathematician Siméon Denis Poisson, [who had quite the confident stare in his youth](https://upload.wikimedia.org/wikipedia/commons/e/e8/E._Marcellot_Siméon-Denis_Poisson_1804.jpg). The Poisson distribution has one parameter, \(\lambda\), which controls both its mean and variance. Although the numbers the Poisson describes are counts, the \(\lambda\) parameter does not need to be an integer. For example, here’s the plot of 1000 draws from a Poisson for which \(\lambda = 3.2\).

``` r
library(tidyverse)

theme_set(theme_gray() + theme(panel.grid = element_blank()))

tibble(x = rpois(n = 1e3, lambda = 3.2)) %>% 
  mutate(x = factor(x)) %>% 
  
  ggplot(aes(x = x)) +
  geom_bar()
```

![](03_files/figure-gfm/unnamed-chunk-1-1.png)

In case you missed it, the key function for generating those data was `rpois()`. I’m not going to go into a full-blown tutorial on the Poisson distribution or on count regression.
For more thorough introductions, check out Atkins et al’s [*A tutorial on count regression and zero-altered count models for longitudinal substance use data*](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3513584/pdf/nihms396181.pdf), chapters 9 through 11 in McElreath’s [*Statistical Rethinking*](https://xcelab.net/rm/statistical-rethinking/), or, if you really want to dive in, Agresti’s [*Foundations of Linear and Generalized Linear Models*](https://www.wiley.com/en-us/Foundations+of+Linear+and+Generalized+Linear+Models-p-9781118730034).

For our power example, let’s say you were interested in drinking. Using data from [the National Epidemiologic Survey on Alcohol and Related Conditions](https://pubs.niaaa.nih.gov/publications/AA70/AA70.htm), Christopher Ingraham presented [a data visualization](https://www.washingtonpost.com/news/wonk/wp/2014/09/25/think-you-drink-a-lot-this-chart-will-tell-you/?utm_term=.b81599bbbe25) of the average number of alcoholic drinks American adults consume, per week. By decile, the numbers were:

1.  0.00
2.  0.00
3.  0.00
4.  0.02
5.  0.14
6.  0.63
7.  2.17
8.  6.25
9.  15.28
10. 73.85

Let’s say you wanted to run a study where you planned on comparing two demographic groups by their weekly drinking levels. Let’s further say you suspected one of those groups drank like American adults in the 7th decile and the other drank like American adults in the 8th. We’ll call them low and high drinkers, respectively. For convenience, let’s further presume you’ll be able to recruit equal numbers of participants from both groups. The question for our power analysis is to determine how many you’d need per group to detect reliable differences. Using \(n = 50\) as a starting point, here’s what the data for our hypothetical groups might look like.

``` r
mu_7 <- 2.17
mu_8 <- 6.25

n <- 50

set.seed(3)

d <-
  tibble(low  = rpois(n = n, lambda = mu_7),
         high = rpois(n = n, lambda = mu_8)) %>% 
  gather(group, count)

d %>% 
  mutate(count = factor(count)) %>% 
  
  ggplot(aes(x = count)) +
  geom_bar() +
  facet_wrap(~group, ncol = 1)
```

![](03_files/figure-gfm/unnamed-chunk-2-1.png)

This will be our primary data type. Our next step is to determine how to express our research question as a regression model. Like with our two-group Gaussian models, we can predict counts in terms of an intercept (i.e., standing for the expected value on the reference group) and slope (i.e., standing for the expected difference between the reference group and the comparison group). If we coded our two groups by a `high` variable for which 0 stood for low drinkers and 1 stood for high drinkers, the basic model would follow the form

\[
\begin{align*}
\text{drinks_per_week}_i & \sim \text{Poisson}(\lambda_i) \\
\operatorname{log} (\lambda_i) & = \beta_0 + \beta_1 \text{high}_i.
\end{align*}
\]

Here’s how to set the data up for that model.

``` r
d <-
  d %>% 
  mutate(high = ifelse(group == "low", 0, 1))
```

If you were attending closely to our model formula, you noticed we ran into a detail.
Count regression, such as with the Poisson likelihood, tends to use the log link. *Why?* you ask. Recall that counts need to be 0 and above. Same deal for our \(\lambda\) parameter. In order to make sure our models don’t yield silly estimates for \(\lambda\), like -2 or something, we typically use the log link. You don’t have to, of course. [The world is your playground](https://i.kym-cdn.com/entries/icons/original/000/008/342/ihave.jpg). But this is the method most of your colleagues are likely to use and it’s the one I suggest you use until you have compelling reasons to do otherwise.

So then since we’re now fitting a model with a log link, it might seem challenging to pick good priors. As a place to start, we can use the `brms::get_prior()` function to see the **brms** defaults.

``` r
library(brms)

get_prior(data = d,
          family = poisson,
          count ~ 0 + intercept + high)
```

    ##   prior class      coef group resp dpar nlpar bound
    ## 1           b                                      
    ## 2           b      high                            
    ## 3           b intercept

Hopefully two things popped out. First, there’s no prior of `class = sigma`. Since the Poisson distribution only has one parameter \(\lambda\), we don’t need to set a prior for \(\sigma\). Our model won’t have one. Second, because we’re continuing to use the `0 + intercept` syntax for our model intercept, both our intercept and slope are of prior `class = b` and those currently have default flat priors with **brms**. To be sure, flat priors aren’t the best. But maybe if this was your first time playing around with a Poisson model, default flat priors might seem like a safe place to start. [Feel free to disagree](https://xkcd.com/386/). In the meantime, here’s how to fit that default Poisson model with `brms::brm()`.

``` r
fit1 <-
  brm(data = d,
      family = poisson,
      count ~ 0 + intercept + high,
      seed = 3)
```

``` r
print(fit1)
```

    ##  Family: poisson 
    ##   Links: mu = log 
    ## Formula: count ~ 0 + intercept + high 
    ##    Data: d (Number of observations: 100) 
    ## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
    ##          total post-warmup samples = 4000
    ## 
    ## Population-Level Effects: 
    ##           Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
    ## intercept     0.59      0.11     0.37     0.81        984 1.00
    ## high          1.27      0.12     1.02     1.50       1003 1.00
    ## 
    ## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
    ## is a crude measure of effective sample size, and Rhat is the potential 
    ## scale reduction factor on split chains (at convergence, Rhat = 1).

Since we used the log link, our model results are in the log metric, too. If you’d like them in the metric of the data, you’d work directly with the posterior samples and exponentiate.

``` r
post <-
  posterior_samples(fit1) %>% 
  mutate(`beta_0 (i.e., low)`                       = exp(b_intercept),
         `beta_1 (i.e., difference score for high)` = exp(b_high))
```

We can then just summarize our parameters of interest.
``` r
post %>% 
  select(starts_with("beta_")) %>% 
  gather() %>% 
  group_by(key) %>% 
  summarise(mean  = mean(value),
            lower = quantile(value, prob = .025),
            upper = quantile(value, prob = .975))
```

    ## # A tibble: 2 x 4
    ##   key                                        mean lower upper
    ##   <chr>                                     <dbl> <dbl> <dbl>
    ## 1 beta_0 (i.e., low)                         1.81  1.45  2.24
    ## 2 beta_1 (i.e., difference score for high)   3.58  2.78  4.50

For the sake of simulation, it’ll be easier if we press on with evaluating the parameters on the log metric, though. If you’re working within a null-hypothesis oriented power paradigm, you’ll be happy to know zero is still the number to beat for evaluating our 95% intervals for \(\beta_1\), even when that parameter is in the log metric. Here it is, again.

``` r
library(broom)

tidy(fit1, prob = .95) %>% 
  filter(term == "b_high")
```

    ##     term estimate std.error   lower    upper
    ## 1 b_high 1.266527 0.1232635 1.02143 1.504801

So our first fit suggests we’re on good footing to run a quick power simulation holding \(n = 50\). As in the prior blog posts, our lives will be simpler if we set up a custom simulation function. Since we’ll be using it to simulate the data and fit the model in one step, let’s call it `sim_data_fit()`.

``` r
sim_data_fit <- function(seed, n) {
  
  set.seed(seed)
  
  d <-
    tibble(high  = rep(0:1, each = n),
           count = c(rpois(n = n, lambda = mu_7),
                     rpois(n = n, lambda = mu_8)))
  
  update(fit1,
         newdata = d,
         seed = seed) %>% 
    tidy(prob = .95) %>% 
    filter(term == "b_high") %>% 
    select(lower:upper)
  
}
```

Here’s the simulation for a simple 100 iterations.

``` r
sim1 <-
  tibble(seed = 1:100) %>% 
  mutate(ci = map(seed, sim_data_fit, n = 50)) %>% 
  unnest()
```

That went quickly–less than 3 minutes on my old laptop. Here’s what those 100 \(\beta_1\) intervals look like in bulk.

``` r
sim1 %>% 
  
  ggplot(aes(x = seed, ymin = lower, ymax = upper)) +
  geom_hline(yintercept = 0, color = "white") +
  geom_linerange() +
  labs(x = "seed (i.e., simulation index)",
       y = expression(beta[1]))
```

![](03_files/figure-gfm/unnamed-chunk-10-1.png)

None of them are anywhere near the null value 0. So it appears we’re well above .8 power to reject the typical \(H_0\) with \(n = 50\). Here’s the distribution of their widths.

``` r
sim1 %>% 
  mutate(width = upper - lower) %>% 
  
  ggplot(aes(x = width)) +
  geom_histogram(binwidth = 0.01)
```

![](03_files/figure-gfm/unnamed-chunk-11-1.png)

What if we wanted a mean width of 0.25 on the log scale? We might try the simulation with \(n = 150\).

``` r
sim2 <-
  tibble(seed = 1:100) %>% 
  mutate(ci = map(seed, sim_data_fit, n = 150)) %>% 
  unnest()
```

Here we’ll summarize the widths both in terms of their mean and what proportion were smaller than 0.25.
``` r
sim2 %>% 
  mutate(width = upper - lower) %>% 
  summarise(`mean width` = mean(width),
            `below 0.25` = mean(width < 0.25))
```

    ## # A tibble: 1 x 2
    ##   `mean width` `below 0.25`
    ##          <dbl>        <dbl>
    ## 1        0.253          0.4

If we wanted to focus on the mean, we did pretty well. Perhaps set \(n = 155\) and simulate a full 1000+ iterations for a serious power analysis. But if we wanted to adopt the stricter criterion that all widths fall below 0.25, we’d need to up the \(n\) quite a bit more. And of course, once you have a little experience working with Poisson models, you might do the power simulations with more ambitious priors. For example, if your count values are lower than about 1,000, there’s a good chance a `normal(0, 6)` prior on your \(\beta\) parameters will be nearly flat within the reasonable neighborhoods of the parameter space.

Now that you’ve got a sense of how to work with the Poisson likelihood, it’s time to move on to the binomial.

## The binomial is handy for binary data.

Binary data are even weirder than counts. They typically only take on two values: 0 and 1. Sometimes 0 is a stand-in for “no” and 1 for “yes” (e.g., *Are you an expert in Bayesian power analysis?* For me that would be `0`). You can also have data of this kind if you asked people whether they’d like to choose option A or B. With those kinds of data, you might code A as 0 and B as 1. Binomial data also often stand in for trials where 0 = “fail” and 1 = “success.” For example, if you answered “Yes” to the question *Are all data normally distributed?* we’d mark your answer down as a `0`.

Though 0s and 1s are popular, sometimes binomial data appear in their aggregated form. Let’s say I gave you 10 algebra questions and you got 7 of them right. Here’s one way to encode those data.

``` r
n <- 10
z <- 7

rep(0:1, times = c(n - z, z))
```

    ## [1] 0 0 0 1 1 1 1 1 1 1

In that example, `n` stood for the total number of trials and `z` was the number you got correct (i.e., the number of times we encoded your response as a 1). A more compact way to encode that data is with two columns, one for `z` and the other for `n`.

``` r
tibble(z = z,
       n = n)
```

    ## # A tibble: 1 x 2
    ##       z     n
    ##   <dbl> <dbl>
    ## 1     7    10

So then if you and four of your friends all took those same 10 questions, we could encode the group’s results like this.

``` r
set.seed(3)

tibble(id = letters[1:5],
       z  = rpois(n = 5, lambda = 5),
       n  = n)
```

    ## # A tibble: 5 x 3
    ##   id        z     n
    ##   <chr> <int> <dbl>
    ## 1 a         3    10
    ## 2 b         7    10
    ## 3 c         4    10
    ## 4 d         4    10
    ## 5 e         5    10

If you’re `b`, it appears you’re the smart one in the group.

Anyway, whether working with binomial or aggregated binomial data, we’re interested in the probability a given trial will be 1.

### Logistic regression with binary data.

Taking binary data as a starting point, given data \(d\) that include a variable \(y\) where the value in the \(i^\text{th}\) row is a 0 or a 1, we’d like to know \(p(y_i = 1 | d)\). The binomial distribution will help us get that estimate for \(p\).
We’ll do so within the context of a logistic regression model following the form

\[
\begin{align*}
y_i & \sim \text{Binomial} (n = 1, p_i) \\
\operatorname{logit} (p_i) & = \beta_0,
\end{align*}
\]

where the logit function is defined as the log odds

\[
\operatorname{logit} (p_i) = \operatorname{log} \bigg (\frac{p_i}{1 - p_i} \bigg),
\]

and thus

\[
\operatorname{log} \bigg (\frac{p_i}{1 - p_i} \bigg) = \beta_0.
\]

In those formulas, \(\beta_0\) is the intercept. In a binomial model with no predictors \[2\], the intercept \(\beta_0\) is just the estimate for \(p\), but in the log-odds metric. So yes, similar to the Poisson models, we typically use a link function with our binomial models. Instead of the log link, we use the logit because it constrains the posterior for \(p\) to values between 0 and 1. Just as the null value for a probability is .5, the null value for the parameters within a logistic regression model is typically 0.

As above with the Poisson, I’m not going to go into a full-blown tutorial on the binomial distribution or on logistic regression. For more thorough introductions, check out chapters 9 and 10 in McElreath’s [*Statistical Rethinking*](https://xcelab.net/rm/statistical-rethinking/) or Agresti’s [*Foundations of Linear and Generalized Linear Models*](https://www.wiley.com/en-us/Foundations+of+Linear+and+Generalized+Linear+Models-p-9781118730034).

Time to simulate some data. Let’s say we’d like to estimate the probability someone will hit a ball in a baseball game. Nowadays, batting averages for professional baseball players tend to be around .25 (see [here](http://www.baseball-almanac.com/hitting/hibavg4.shtml)). So if we wanted to simulate 50 at-bats, we might do so like this.

``` r
set.seed(3)

d <- tibble(y = rbinom(n = 50, size = 1, prob = .25))

str(d)
```

    ## Classes 'tbl_df', 'tbl' and 'data.frame':    50 obs. of  1 variable:
    ##  $ y: int  0 1 0 0 0 0 0 0 0 0 ...

Here’s what those data look like in a bar plot.

``` r
d %>% 
  mutate(y = factor(y)) %>% 
  
  ggplot(aes(x = y)) +
  geom_bar()
```

![](03_files/figure-gfm/unnamed-chunk-18-1.png)

Here’s the **brms** default for our intercept-only logistic regression model.

``` r
get_prior(data = d,
          family = binomial,
          y | trials(1) ~ 1)
```

    ## Intercept ~ student_t(3, 0, 10)

That’s a really liberal prior. We might be a little more gutsy and put a more skeptical `normal(0, 2)` prior on that intercept. Within the context of our logit link, that still puts 95% of the prior probability for \(p\) between about .02 and .98, which is almost the entire parameter space. Here’s how to fit the model with the `brm()` function.

``` r
fit2 <-
  brm(data = d,
      family = binomial,
      y | trials(1) ~ 1,
      prior(normal(0, 2), class = Intercept),
      seed = 3)
```

In the `brm()` formula syntax, including a `|` bar on the left side of a formula indicates we have extra supplementary information about our criterion variable.
In this case, that information is that each `y` 528 | value corresponds to a single trial (i.e., `trials(1)`), which itself 529 | corresponds to the \(n = 1\) portion of the statistical formula, above. 530 | Here are the results. 531 | 532 | ``` r 533 | print(fit2) 534 | ``` 535 | 536 | ## Family: binomial 537 | ## Links: mu = logit 538 | ## Formula: y | trials(1) ~ 1 539 | ## Data: d (Number of observations: 50) 540 | ## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1; 541 | ## total post-warmup samples = 4000 542 | ## 543 | ## Population-Level Effects: 544 | ## Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat 545 | ## Intercept -1.39 0.36 -2.12 -0.73 1462 1.00 546 | ## 547 | ## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 548 | ## is a crude measure of effective sample size, and Rhat is the potential 549 | ## scale reduction factor on split chains (at convergence, Rhat = 1). 550 | 551 | Remember that that intercept is on the scale of the logit link, the log 552 | odds. We can transform it with the `brms::inv_logit_scaled()` function. 553 | 554 | ``` r 555 | fixef(fit2)["Intercept", 1] %>% 556 | inv_logit_scaled() 557 | ``` 558 | 559 | ## [1] 0.1991036 560 | 561 | If we want the full posterior distribution, we’ll need to work with the 562 | posterior draws themselves. 563 | 564 | ``` r 565 | posterior_samples(fit2) %>% 566 | transmute(p = inv_logit_scaled(b_Intercept)) %>% 567 | 568 | ggplot(aes(x = p)) + 569 | geom_density(fill = "grey25", size = 0) + 570 | scale_x_continuous("probability of a hit", limits = c(0, 1)) + 571 | scale_y_continuous(NULL, breaks = NULL) 572 | ``` 573 | 574 | ![](03_files/figure-gfm/unnamed-chunk-23-1.png) 575 | 576 | Looks like the null hypothesis of \(p = .5\) is not credible for this 577 | simulation. If we’d like the posterior median and percentile-based 95% 578 | intervals, we might use the `median_qi()` function from the handy 579 | [**tidybayes** package](https://mjskay.github.io/tidybayes/index.html). 580 | 581 | ``` r 582 | library(tidybayes) 583 | 584 | posterior_samples(fit2) %>% 585 | transmute(p = inv_logit_scaled(b_Intercept)) %>% 586 | median_qi() 587 | ``` 588 | 589 | ## p .lower .upper .width .point .interval 590 | ## 1 0.2011897 0.1076116 0.3252849 0.95 median qi 591 | 592 | Yep, .5 was not within those intervals. Let’s see what happens when we 593 | do a mini power analysis with 100 iterations. First we set up our 594 | simulation function. 595 | 596 | ``` r 597 | sim_data_fit <- function(seed, n_player) { 598 | 599 | n_trials <- 1 600 | prob_hit <- .25 601 | 602 | set.seed(seed) 603 | 604 | d <- tibble(y = rbinom(n = n_player, 605 | size = n_trials, 606 | prob = prob_hit)) 607 | 608 | update(fit2, 609 | newdata = d, 610 | seed = seed) %>% 611 | posterior_samples() %>% 612 | transmute(p = inv_logit_scaled(b_Intercept)) %>% 613 | median_qi() %>% 614 | select(.lower:.upper) 615 | 616 | } 617 | ``` 618 | 619 | Simulate. 620 | 621 | ``` r 622 | sim3 <- 623 | tibble(seed = 1:100) %>% 624 | mutate(ci = map(seed, sim_data_fit, n_player = 50)) %>% 625 | unnest() 626 | ``` 627 | 628 | You might plot the intervals. 
``` r
sim3 %>% 
  ggplot(aes(x = seed, ymin = .lower, ymax = .upper)) +
  geom_hline(yintercept = c(.25, .5), color = "white") +
  geom_linerange() +
  xlab("seed (i.e., simulation index)") +
  scale_y_continuous("probability of hitting the ball", limits = c(0, 1))
```

![](03_files/figure-gfm/unnamed-chunk-26-1.png)

Like one of my old coworkers used to say: *Purtier ’n a hog\!* Here we’ll summarize the results in terms of their conventional power, their mean width, and the proportion of widths narrower than .25. *Why .25?* I don’t know. Without a substantively-informed alternative, it’s as good a criterion as any.

``` r
sim3 %>% 
  mutate(width = .upper - .lower) %>% 
  summarise(`conventional power` = mean(.upper < .5),
            `mean width`         = mean(width),
            `width below .25`    = mean(width < .25))
```

    ## # A tibble: 1 x 3
    ##   `conventional power` `mean width` `width below .25`
    ##                  <dbl>        <dbl>             <dbl>
    ## 1                 0.95        0.231               0.8

Depending on your study needs, you’d adjust your sample size accordingly, do a mini simulation or two first, and then follow up with a proper 1000+ power simulation.

I should point out that whereas we evaluated the power of the Poisson model with the parameters on the scale of the link function, we evaluated the power for our logistic regression model after transforming the intercept back into the probability metric. Both methods are fine. The way you run your power simulation should be based on how you want to interpret and report your results.

We should also acknowledge that this was our first example of a power simulation that wasn’t based on some group comparison. Comparing groups is fine and normal and important. And it’s also the case that power matters for more than group-based analyses. Our simulation-based approach to Bayesian power analyses is fine for both.

### Aggregated binomial regression.

It’s no more difficult to simulate and work with aggregated binomial data. But since the mechanics for `brms::brm()` and thus the down-the-road simulation setup are a little different, we should practice. With our new setup, we’ll consider a new example. Since .25 is the typical batting average, it might make better sense to define the null hypothesis like this:

\[H_0 \text{: } p = .25.\]

Consider a case where we had some intervention for which we expected a new batting average of .35. How many trials would we need, then, to either reject \(H_0\) or at least estimate \(p\) with a satisfactory degree of precision? Here’s what the statistical formula for the implied aggregated binomial model might look like:

\[
\begin{align*}
y_i & \sim \text{Binomial} (n, p_i) \\
\operatorname{logit} (p_i) & = \beta_0.
\end{align*}
\]

The big change is that we no longer define \(n\) as 1. Let’s say we wanted our aggregated binomial data set to contain the summary statistics for \(n = 100\) trials. Here’s what that might look like.
``` r
n_trials <- 100
prob_hit <- .35

set.seed(3)

d <- tibble(n_trials = n_trials,
            y = rbinom(n = 1,
                       size = n_trials,
                       prob = prob_hit))

d
```

    ## # A tibble: 1 x 2
    ##   n_trials     y
    ##      <dbl> <int>
    ## 1      100    32

Now we have two columns. The first, `n_trials`, indicates how many cases or trials we’re summarizing. The second, `y`, indicates how many successes/1s/hits we might expect given \(p = .35\). This is the aggregated binomial equivalent of a 100-row vector composed of 32 1s and 68 0s.

Now, before we discuss fitting the model with **brms**, let’s talk priors. Since we’ve updated our definition of \(H_0\), it might make sense to update the prior for \(\beta_0\). As it turns out, setting that prior to `normal(-1, 0.5)` puts the prior mode at about .25 on the probability space, but with fairly wide 95% intervals ranging from about .12 to .5. Though centered on our updated null value, this prior is still quite permissive given our hypothesized \(p = .35\).

To fit an aggregated binomial model with the `brm()` function, we again use the `| trials()` syntax, where the value that goes in `trials()` is either a fixed number or a variable in the data indexing \(n\). Our approach will be the latter.

``` r
fit3 <-
  brm(data = d,
      family = binomial,
      y | trials(n_trials) ~ 1,
      prior(normal(-1, 0.5), class = Intercept),
      seed = 3)
```

Inspect the summary.

``` r
print(fit3)
```

    ##  Family: binomial 
    ##   Links: mu = logit 
    ## Formula: y | trials(n_trials) ~ 1 
    ##    Data: d (Number of observations: 1) 
    ## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
    ##          total post-warmup samples = 4000
    ## 
    ## Population-Level Effects: 
    ##           Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
    ## Intercept    -0.81      0.20    -1.19    -0.42       1396 1.00
    ## 
    ## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
    ## is a crude measure of effective sample size, and Rhat is the potential 
    ## scale reduction factor on split chains (at convergence, Rhat = 1).

After a transformation, here’s what that looks like in a plot.
``` r
sim_data_fit <- function(seed, n_trials) {
  
  prob_hit <- .35
  
  set.seed(seed)
  
  d <- tibble(y = rbinom(n = 1,
                         size = n_trials,
                         prob = prob_hit),
              n_trials = n_trials)
  
  update(fit3,
         newdata = d,
         seed = seed) %>% 
    posterior_samples() %>% 
    transmute(p = inv_logit_scaled(b_Intercept)) %>% 
    median_qi() %>% 
    select(.lower:.upper)
  
}
```

Simulate, this time trying out \(n = 120\).

``` r
sim4 <-
  tibble(seed = 1:100) %>% 
  mutate(ci = map(seed, sim_data_fit, n_trials = 120)) %>% 
  unnest()
```

Plot the intervals.

``` r
sim4 %>% 
  ggplot(aes(x = seed, ymin = .lower, ymax = .upper)) +
  geom_hline(yintercept = c(.25, .35), color = "white") +
  geom_linerange() +
  xlab("seed (i.e., simulation index)") +
  scale_y_continuous("probability of hitting the ball",
                     limits = c(0, 1), breaks = c(0, .25, .35, 1))
```

![](03_files/figure-gfm/unnamed-chunk-33-1.png)

Overall, those intervals look pretty good. They’re fairly narrow and are hovering around the data-generating \(p = .35\). But it seems many are still crossing the .25 threshold. Let’s see the results of a formal summary.

``` r
sim4 %>% 
  mutate(width = .upper - .lower) %>% 
  summarise(`conventional power` = mean(.lower > .25),
            `mean width`         = mean(width),
            `width below .2`     = mean(width < .2))
```

    ## # A tibble: 1 x 3
    ##   `conventional power` `mean width` `width below .2`
    ##                  <dbl>        <dbl>            <dbl>
    ## 1                 0.56        0.155                1

All widths were narrower than .2 and the mean width was about .16. In the abstract that might seem reasonably precise. But we’re still not precise enough to reject \(H_0\) with a conventional power level. Depending on your needs, adjust the \(n\) accordingly and simulate again.
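If you’d rather not guess and check one \(n\) at a time, you could also map the simulation over several candidate sample sizes in a single pipeline. Here’s a rough sketch reusing our `sim_data_fit()` function–we didn’t run this for the post, and fair warning, it refits the model 300 times, so it’ll take a while.

``` r
# a sketch: run the 100-iteration simulation at several candidate sample
# sizes at once, then summarize power and precision by n_trials
crossing(seed = 1:100, n_trials = c(120, 150, 180)) %>% 
  mutate(ci = map2(seed, n_trials, sim_data_fit)) %>% 
  unnest() %>% 
  mutate(width = .upper - .lower) %>% 
  group_by(n_trials) %>% 
  summarise(`conventional power` = mean(.lower > .25),
            `mean width`         = mean(width))
```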
862 | 863 | ## Session info 864 | 865 | ``` r 866 | sessionInfo() 867 | ``` 868 | 869 | ## R version 3.6.0 (2019-04-26) 870 | ## Platform: x86_64-apple-darwin15.6.0 (64-bit) 871 | ## Running under: macOS High Sierra 10.13.6 872 | ## 873 | ## Matrix products: default 874 | ## BLAS: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib 875 | ## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib 876 | ## 877 | ## locale: 878 | ## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 879 | ## 880 | ## attached base packages: 881 | ## [1] stats graphics grDevices utils datasets methods base 882 | ## 883 | ## other attached packages: 884 | ## [1] tidybayes_1.1.0 broom_0.5.2 brms_2.9.0 Rcpp_1.0.1 885 | ## [5] forcats_0.4.0 stringr_1.4.0 dplyr_0.8.1 purrr_0.3.2 886 | ## [9] readr_1.3.1 tidyr_0.8.3 tibble_2.1.3 ggplot2_3.2.0 887 | ## [13] tidyverse_1.2.1 888 | ## 889 | ## loaded via a namespace (and not attached): 890 | ## [1] colorspace_1.4-1 ggridges_0.5.1 891 | ## [3] rsconnect_0.8.13 ggstance_0.3.2 892 | ## [5] markdown_1.0 base64enc_0.1-3 893 | ## [7] rstudioapi_0.10 rstan_2.18.2 894 | ## [9] svUnit_0.7-12 DT_0.7 895 | ## [11] fansi_0.4.0 mvtnorm_1.0-11 896 | ## [13] lubridate_1.7.4 xml2_1.2.0 897 | ## [15] bridgesampling_0.6-0 knitr_1.23 898 | ## [17] shinythemes_1.1.2 zeallot_0.1.0 899 | ## [19] bayesplot_1.7.0 jsonlite_1.6 900 | ## [21] shiny_1.3.2 compiler_3.6.0 901 | ## [23] httr_1.4.0 backports_1.1.4 902 | ## [25] assertthat_0.2.1 Matrix_1.2-17 903 | ## [27] lazyeval_0.2.2 cli_1.1.0 904 | ## [29] later_0.8.0 htmltools_0.3.6 905 | ## [31] prettyunits_1.0.2 tools_3.6.0 906 | ## [33] igraph_1.2.4.1 coda_0.19-2 907 | ## [35] gtable_0.3.0 glue_1.3.1 908 | ## [37] reshape2_1.4.3 cellranger_1.1.0 909 | ## [39] vctrs_0.1.0 nlme_3.1-139 910 | ## [41] crosstalk_1.0.0 xfun_0.8 911 | ## [43] ps_1.3.0 rvest_0.3.4 912 | ## [45] mime_0.7 miniUI_0.1.1.1 913 | ## [47] gtools_3.8.1 zoo_1.8-6 914 | ## [49] scales_1.0.0 colourpicker_1.0 915 | ## [51] hms_0.4.2 promises_1.0.1 916 | ## [53] Brobdingnag_1.2-6 parallel_3.6.0 917 | ## [55] inline_0.3.15 shinystan_2.5.0 918 | ## [57] yaml_2.2.0 gridExtra_2.3 919 | ## [59] loo_2.1.0 StanHeaders_2.18.1-10 920 | ## [61] stringi_1.4.3 dygraphs_1.1.1.6 921 | ## [63] pkgbuild_1.0.3 rlang_0.4.0 922 | ## [65] pkgconfig_2.0.2 matrixStats_0.54.0 923 | ## [67] evaluate_0.14 lattice_0.20-38 924 | ## [69] rstantools_1.5.1 htmlwidgets_1.3 925 | ## [71] labeling_0.3 tidyselect_0.2.5 926 | ## [73] processx_3.3.1 plyr_1.8.4 927 | ## [75] magrittr_1.5 R6_2.4.0 928 | ## [77] generics_0.0.2 pillar_1.4.1 929 | ## [79] haven_2.1.0 withr_2.1.2 930 | ## [81] xts_0.11-2 abind_1.4-5 931 | ## [83] modelr_0.1.4 crayon_1.3.4 932 | ## [85] arrayhelpers_1.0-20160527 utf8_1.1.4 933 | ## [87] rmarkdown_1.13 grid_3.6.0 934 | ## [89] readxl_1.3.1 callr_3.2.0 935 | ## [91] threejs_0.3.1 digest_0.6.19 936 | ## [93] xtable_1.8-4 httpuv_1.5.1 937 | ## [95] stats4_3.6.0 munsell_0.5.0 938 | ## [97] shinyjs_1.0 939 | 940 | ## Footnotes 941 | 942 | 1. Yes, one can smoke half a cigarette or drink 1/3 of a drink. 943 | Ideally, we’d have the exact amount of nicotine in your blood at a 944 | given moment and over time and the same for the amount of alcohol in 945 | your system relative to your blood volume and such. But in practice, 946 | substance use researchers just don’t tend to have access to data of 947 | that quality. Instead, we’re typically stuck with simple counts. 
And I look forward to the day the right team of engineers, computer scientists, and substance use researchers (and whoever else I forgot to mention) release the cheap, non-invasive technology we need to passively measure these things. Until then: *How many standard servings of alcohol did you drink, last night?*

2.  In case this is all new to you and you had the question in your mind: Yes, you can add predictors to the logistic regression model. Say we had a model with two predictors, \(x_1\) and \(x_2\). Our statistical model would then follow the form \(\operatorname{logit} (p_i) = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i}\).

--------------------------------------------------------------------------------
/03_cache/gfm/__packages:
--------------------------------------------------------------------------------
base
tidyverse
ggplot2
tibble
tidyr
readr
purrr
dplyr
stringr
forcats
Rcpp
brms
broom
tidybayes
--------------------------------------------------------------------------------
/03_cache/gfm/fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.RData:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.RData
--------------------------------------------------------------------------------
/03_cache/gfm/fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.rdb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.rdb
--------------------------------------------------------------------------------
/03_cache/gfm/fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.rdx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit1_1dc1cbc6cebdbc94fe037ed0eacaaae5.rdx
--------------------------------------------------------------------------------
/03_cache/gfm/fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.RData:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.RData
--------------------------------------------------------------------------------
/03_cache/gfm/fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.rdb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.rdb
--------------------------------------------------------------------------------
/03_cache/gfm/fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.rdx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit2_40fbc1f7d1f1063c7b1ce9eb0a5133e8.rdx
--------------------------------------------------------------------------------
/03_cache/gfm/fit3_f8d85acb30dd54c124fcf373fea6b570.RData:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit3_f8d85acb30dd54c124fcf373fea6b570.RData -------------------------------------------------------------------------------- /03_cache/gfm/fit3_f8d85acb30dd54c124fcf373fea6b570.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit3_f8d85acb30dd54c124fcf373fea6b570.rdb -------------------------------------------------------------------------------- /03_cache/gfm/fit3_f8d85acb30dd54c124fcf373fea6b570.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/fit3_f8d85acb30dd54c124fcf373fea6b570.rdx -------------------------------------------------------------------------------- /03_cache/gfm/sim1_96946f136648945121ced4d3160fceea.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim1_96946f136648945121ced4d3160fceea.RData -------------------------------------------------------------------------------- /03_cache/gfm/sim1_96946f136648945121ced4d3160fceea.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim1_96946f136648945121ced4d3160fceea.rdb -------------------------------------------------------------------------------- /03_cache/gfm/sim1_96946f136648945121ced4d3160fceea.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim1_96946f136648945121ced4d3160fceea.rdx -------------------------------------------------------------------------------- /03_cache/gfm/sim2_bc5eae669806636b31ed0aac62f82b2b.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim2_bc5eae669806636b31ed0aac62f82b2b.RData -------------------------------------------------------------------------------- /03_cache/gfm/sim2_bc5eae669806636b31ed0aac62f82b2b.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim2_bc5eae669806636b31ed0aac62f82b2b.rdb -------------------------------------------------------------------------------- /03_cache/gfm/sim2_bc5eae669806636b31ed0aac62f82b2b.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim2_bc5eae669806636b31ed0aac62f82b2b.rdx -------------------------------------------------------------------------------- /03_cache/gfm/sim3_ee2f7c006cf328b7064a28faa5f40b4b.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim3_ee2f7c006cf328b7064a28faa5f40b4b.RData 
-------------------------------------------------------------------------------- /03_cache/gfm/sim3_ee2f7c006cf328b7064a28faa5f40b4b.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim3_ee2f7c006cf328b7064a28faa5f40b4b.rdb -------------------------------------------------------------------------------- /03_cache/gfm/sim3_ee2f7c006cf328b7064a28faa5f40b4b.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim3_ee2f7c006cf328b7064a28faa5f40b4b.rdx -------------------------------------------------------------------------------- /03_cache/gfm/sim4_3481489fae5ab3281241ea9fb758206f.RData: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim4_3481489fae5ab3281241ea9fb758206f.RData -------------------------------------------------------------------------------- /03_cache/gfm/sim4_3481489fae5ab3281241ea9fb758206f.rdb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim4_3481489fae5ab3281241ea9fb758206f.rdb -------------------------------------------------------------------------------- /03_cache/gfm/sim4_3481489fae5ab3281241ea9fb758206f.rdx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_cache/gfm/sim4_3481489fae5ab3281241ea9fb758206f.rdx -------------------------------------------------------------------------------- /03_files/figure-gfm/unnamed-chunk-1-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_files/figure-gfm/unnamed-chunk-1-1.png -------------------------------------------------------------------------------- /03_files/figure-gfm/unnamed-chunk-10-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_files/figure-gfm/unnamed-chunk-10-1.png -------------------------------------------------------------------------------- /03_files/figure-gfm/unnamed-chunk-11-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_files/figure-gfm/unnamed-chunk-11-1.png -------------------------------------------------------------------------------- /03_files/figure-gfm/unnamed-chunk-18-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ASKurz/Bayesian_power/2744c3c4c416297b43b5db5071dd81aee1396141/03_files/figure-gfm/unnamed-chunk-18-1.png -------------------------------------------------------------------------------- /03_files/figure-gfm/unnamed-chunk-2-1.png: -------------------------------------------------------------------------------- 
--------------------------------------------------------------------------------
/Bayesian_power.Rproj:
--------------------------------------------------------------------------------
1 | Version: 1.0
2 | 
3 | RestoreWorkspace: Default
4 | SaveWorkspace: Default
5 | AlwaysSaveHistory: Default
6 | 
7 | EnableCodeIndexing: Yes
8 | UseSpacesForTab: Yes
9 | NumSpacesForTab: 2
10 | Encoding: UTF-8
11 | 
12 | RnwWeave: Sweave
13 | LaTeX: pdfLaTeX
14 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
[Remaining standard GPLv3 license text omitted.]
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | I'm planning to release a series of blog posts walking through Bayesian power analysis. It'll be based on simulations with [**brms**](https://github.com/paul-buerkner/brms) models; for a flavor of the workflow, see the sketch after this file. The current plan is to release the content in 5 posts:
2 | 
3 | * Part I: Intro to simulation, focus on rejecting the null
4 | * Part II: Alternative approach: focusing on interval widths
5 | * Part III: Putting Gauss aside to explore models with Poisson and binomial likelihoods
6 | * Part IV: Introduction to simulations with multilevel models
7 | * Part V: Using frequentist simulations to inform Bayesian simulations
8 | 
9 | Here you can find the `.Rmd` and `.md` files for each part. Unfortunately, `github_document` files still do a poor job rendering LaTeX equations. So please forgive those; they'll look better once I move the posts over to my blog.
10 | 
11 | In the meantime, constructive peer reviews are welcome!
12 | 
--------------------------------------------------------------------------------
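To give a concrete flavor of the simulate-fit-summarize loop the README describes, here is a minimal sketch in R. It is not the posts' actual code: the two-group design, the true effect (d = 0.5), the group size (n = 50), the `normal(0, 2)` prior, and the 100 iterations are all illustrative assumptions.

```r
# A minimal sketch of a simulation-based power analysis with brms.
# All specifics (design, effect size, n, prior, iterations) are illustrative.
library(tidyverse)
library(brms)

# simulate a two-group data set with a true standardized effect of `d`
sim_d <- function(seed, n = 50, d = 0.5) {
  set.seed(seed)
  tibble(treatment = rep(0:1, each = n)) %>% 
    mutate(y = rnorm(n * 2, mean = d * treatment, sd = 1))
}

# fit the model once so later iterations can reuse the compiled Stan code
fit <- brm(y ~ 1 + treatment,
           data  = sim_d(seed = 1),
           prior = prior(normal(0, 2), class = b),
           seed  = 1)

# re-fit the model to 100 fresh simulated data sets (slow!), saving the
# 95% interval for the treatment effect from each fit
sims <- tibble(seed = 1:100) %>% 
  mutate(b = map(seed, ~ update(fit, newdata = sim_d(seed = .x), seed = .x) %>% 
                   fixef() %>% 
                   as_tibble(rownames = "parameter") %>% 
                   filter(parameter == "treatment"))) %>% 
  unnest(b)

# estimate power as the proportion of intervals excluding zero
sims %>% 
  summarise(power = mean(Q2.5 > 0))
```

Counting the proportion of 95% intervals that exclude zero is the reject-$H_0$ criterion of Part I; Part II swaps that criterion for the widths of the intervals themselves.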