├── 2019.md
├── 2020.md
├── README.md
└── scratch
    ├── fairml-hacknight-1.ipynb
    └── fairml-hacknight-2.ipynb
/2019.md: -------------------------------------------------------------------------------- 1 | # Fair ML Reading Group 2019 2 | 🌏 Melbourne, Australia 3 | 🏠 Hosted by [Silverpond](https://silverpond.com.au/) 4 | 🤖 Facilitated by [Laura](https://twitter.com/summerscope) 5 | 📬 Listserv at [groups.io/g/fair-ml](https://groups.io/g/fair-ml) 6 | 7 | ## Description 8 | A multi-disciplinary group reading papers on the topic of fairness and ethics in Machine Learning and Data Science. Our syllabus so far has focused on setting the scene and getting a feel for the overall approaches. We'll dive deeper into specific approaches in the months to come. 9 | 10 | --- 11 | ## Reading history 12 | 13 | May 16 14 | **Lipstick on a Pig** 15 | [https://arxiv.org/abs/1903.03862](https://arxiv.org/abs/1903.03862) 16 | 17 | May 23 18 | **Fairness Definitions Explained** 19 | [http://www.ece.ubc.ca/~mjulia/publications/Fairness_Definitions_Explained_2018.pdf](http://www.ece.ubc.ca/~mjulia/publications/Fairness_Definitions_Explained_2018.pdf) 20 | 21 | May 30 22 | **50 Years of Test (Un)fairness: Lessons for Machine Learning** 23 | [https://arxiv.org/abs/1811.10104v2](https://arxiv.org/abs/1811.10104v2) 24 | 25 | June 6 26 | **Counterfactual Fairness (1st half)** 27 | [https://arxiv.org/abs/1703.06856v3](https://arxiv.org/abs/1703.06856v3) 28 | 29 | June 13 30 | **Counterfactual Fairness (2nd half)** 31 | [https://arxiv.org/abs/1703.06856v3](https://arxiv.org/abs/1703.06856v3) 32 | 33 | June 20 34 | **Fairness in Machine Learning: Lessons from Political Philosophy** 35 | [https://arxiv.org/abs/1712.03586v2](https://arxiv.org/abs/1712.03586v2) 36 | 37 | June 27 38 | _Week off_ 39 | 40 | July 4 41 | **Delayed Impact of Fair Machine Learning** 42 | [https://arxiv.org/pdf/1803.04383.pdf](https://arxiv.org/pdf/1803.04383.pdf) 43 | 44 | July 11 45 | **The Ethics of AI Ethics -- 
An Evaluation of Guidelines** 46 | [https://arxiv.org/abs/1903.03425](https://arxiv.org/abs/1903.03425) 47 | & 48 | [https://ai-hr.cyber.harvard.edu/primp-viz.html](https://ai-hr.cyber.harvard.edu/primp-viz.html) 49 | (whitepaper to come) 50 | 51 | July 18 52 | **Does ACM’s Code of Ethics Change Ethical Decision Making 53 | in Software Development?** 54 | [https://people.engr.ncsu.edu/ermurph3/papers/fse18nier.pdf](https://people.engr.ncsu.edu/ermurph3/papers/fse18nier.pdf) 55 | 56 | July 25 57 | **The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation** 58 | [https://arxiv.org/abs/1802.07228](https://arxiv.org/abs/1802.07228) 59 | 60 | August 1 61 | **Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems** 62 | [https://arxiv.org/pdf/1907.09615.pdf](https://arxiv.org/pdf/1907.09615.pdf) 63 | 64 | 65 | _Counterfactual cluster_ 66 | 67 | October 3 68 | **The Sensitivity of Counterfactual Fairness to Unmeasured Confounding** 69 | [https://arxiv.org/abs/1907.01040](https://arxiv.org/abs/1907.01040) 70 | 71 | October 11 72 | **Equal Opportunity and Affirmative Action via Counterfactual Predictions (1st Half)** 73 | [https://arxiv.org/abs/1905.10870v2](https://arxiv.org/abs/1905.10870v2) 74 | 75 | 76 | October 23 77 | **Equal Opportunity and Affirmative Action via Counterfactual Predictions (2nd Half)** 78 | [https://arxiv.org/abs/1905.10870v2](https://arxiv.org/abs/1905.10870v2) 79 | 80 | 81 | November 1 82 | **Counterfactual Fairness Hack Night** 83 | _No paper_ 84 | 85 | November 7 86 | **Path-specific Counterfactual Fairness** 87 | [https://arxiv.org/pdf/1802.08139v1.pdf](https://arxiv.org/pdf/1802.08139v1.pdf) 88 | 89 | November 15 90 | **Counterfactual Fairness Hack Night** 91 | _No paper_ 92 | 93 | November 21 94 | **Machine Behaviour** 95 | [https://www.nature.com/articles/s41586-019-1138-y.pdf](https://www.nature.com/articles/s41586-019-1138-y.pdf) 96 | 97 | November 29 98 | 
**Counterfactual Fairness Hack Night** 99 | _No paper_ 100 | 101 | December 5 102 | **Counterfactual Risk Assessments, Evaluation, and Fairness** 103 | [https://arxiv.org/pdf/1909.00066v1.pdf](https://arxiv.org/pdf/1909.00066v1.pdf) 104 | 105 | December 13 106 | **Contrastive Fairness in Machine Learning** 107 | [https://arxiv.org/pdf/1905.07360v4.pdf](https://arxiv.org/pdf/1905.07360v4.pdf) 108 | -------------------------------------------------------------------------------- /2020.md: -------------------------------------------------------------------------------- 1 | # Fair ML Reading Group 2020 2 | 🌏 Melbourne, Australia 3 | 🖥 Hosted remotely 4 | 🤖 Facilitated by [Laura](https://twitter.com/summerscope) 5 | 📬 Listserv at [groups.io/g/fair-ml](https://groups.io/g/fair-ml) 6 | 🗓 [Google calendar](https://calendar.google.com/calendar?cid=MWVxa29iam90NHB0YXMzNjQxZXRvN2lkZjhAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ) 7 | 8 | ## Description 9 | A multi-disciplinary group reading papers on the topic of fairness and ethics in Machine Learning and Data Science. 
10 | 11 | --- 12 | ## Reading history 13 | 14 | January 16 15 | **On Consequentialism and Fairness** 16 | [https://arxiv.org/abs/2001.00329](https://arxiv.org/abs/2001.00329) 17 | 18 | January 24 19 | **A Framework for Understanding Unintended Consequences of Machine Learning** 20 | [https://arxiv.org/abs/1901.10002](https://arxiv.org/abs/1901.10002) 21 | 22 | January 30 23 | **Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI** 24 | [https://cyber.harvard.edu/publication/2020/principled-ai](https://cyber.harvard.edu/publication/2020/principled-ai) 25 | 26 | February 14 27 | **Bayesian Modeling of Intersectional Fairness: The Variance of Bias** 28 | [http://arxiv.org/abs/1811.07255v2](http://arxiv.org/abs/1811.07255v2) 29 | 30 | February 21 31 | **Autonomous technology – sources of confusion: a model for explanation and prediction of conceptual shifts** 32 | [https://www.researchgate.net/publication/259204283_Autonomous_technology_-_sources_of_confusion_A_model_for_explanation_and_prediction_of_conceptual_shifts](https://www.researchgate.net/publication/259204283_Autonomous_technology_-_sources_of_confusion_A_model_for_explanation_and_prediction_of_conceptual_shifts) 33 | 34 | February 28 35 | **The hidden assumptions behind counterfactual explanations and principal reasons** 36 | [https://dl.acm.org/doi/10.1145/3351095.3372830](https://dl.acm.org/doi/10.1145/3351095.3372830) 37 | 38 | March 5 39 | **Bayesian Modeling of Intersectional Fairness: The Variance of Bias** _(Revisited)_ 40 | [http://arxiv.org/abs/1811.07255v2](http://arxiv.org/abs/1811.07255v2) 41 | 42 | March 13 43 | **Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness** 44 | [https://arxiv.org/abs/1906.02896](https://arxiv.org/abs/1906.02896) + [https://github.com/wwoods/adversarial-explanations-cifar](https://github.com/wwoods/adversarial-explanations-cifar) 45 | 46 | March 20 
47 | **Principles for the Application of Human Intelligence** 48 | [https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/](https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/) 49 | 50 | April 2 51 | **Leverage Points: Places to Intervene in a System** 52 | Online: [http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/](http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/) 53 | PDF: [http://donellameadows.org/wp-content/userfiles/Leverage_Points.pdf](http://donellameadows.org/wp-content/userfiles/Leverage_Points.pdf) 54 | 55 | April 17 56 | **Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness** 57 | [https://arxiv.org/abs/1906.02896](https://arxiv.org/abs/1906.02896) 58 | 59 | April 30 60 | _COVIDsafe app discussion, no paper_ 61 | 62 | May 15 63 | **Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction** 64 | [https://estsjournal.org/index.php/ests/article/view/260](https://estsjournal.org/index.php/ests/article/view/260) 65 | 66 | May 28 & June 4 67 | **Explanation in Artificial Intelligence: Insights from the Social Sciences** 68 | [https://arxiv.org/abs/1706.07269](https://arxiv.org/abs/1706.07269) 69 | 70 | June 12 71 | **Desanctifying the charisma of numbers** 72 | [https://www.tandfonline.com/doi/full/10.1080/17530350.2018.1527710](https://www.tandfonline.com/doi/full/10.1080/17530350.2018.1527710) 73 | 74 | June 25 75 | **Gender Shades** 76 | [http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf](http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf) 77 | 78 | July 10 79 | **Combating Anti-Blackness in the AI Community** 80 | [https://arxiv.org/pdf/2006.16879.pdf](https://arxiv.org/pdf/2006.16879.pdf) 81 | 82 | August 7 83 | **RoboTed: a case study in Ethical Risk Assessment** 84 | 
[https://arxiv.org/abs/2007.15864v1](https://arxiv.org/abs/2007.15864v1) 85 | 86 | August 20 87 | **From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices** 88 | [https://arxiv.org/abs/1905.06876](https://arxiv.org/abs/1905.06876) 89 | 90 | Sept 17 91 | **SCRUPLES: A Corpus of Community Ethical Judgments on 32,000 Real-life Anecdotes** 92 | [https://arxiv.org/abs/2008.09094](https://arxiv.org/abs/2008.09094) 93 | 94 | 95 | Oct 1 96 | **FACE: Feasible and Actionable Counterfactual Explanations** 97 | [https://arxiv.org/abs/1909.09369v2](https://arxiv.org/abs/1909.09369v2) 98 | 99 | Oct 15 100 | **Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages** 101 | [https://arxiv.org/abs/2010.02353](https://arxiv.org/abs/2010.02353) 102 | 103 | Nov 13 104 | **Survey on Causal-based Machine Learning Fairness Notions** 105 | [https://arxiv.org/abs/2010.09553v1](https://arxiv.org/abs/2010.09553v1) 106 | 107 | _Bonus paper_ 108 | **Against Scale: Provocations and Resistances to Scale Thinking** 109 | [https://arxiv.org/abs/2010.08850](https://arxiv.org/abs/2010.08850) 110 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Fair ML Reading Group 2021 2 | 🌏 Melbourne, Australia 3 | 🖥 Hosted remotely 4 | 🤖 Facilitated by [Laura](https://twitter.com/summerscope) 5 | 📬 Listserv at [groups.io/g/fair-ml](https://groups.io/g/fair-ml) 6 | 🗓 [Google calendar](https://calendar.google.com/calendar?cid=MWVxa29iam90NHB0YXMzNjQxZXRvN2lkZjhAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ) 7 | 8 | ## Description 9 | A multi-disciplinary group reading papers on the topic of fairness and ethics in Machine Learning and Data Science. 
10 | 11 | 2020 papers — [https://github.com/summerscope/fair-ml-reading-group/blob/master/2020.md](https://github.com/summerscope/fair-ml-reading-group/blob/master/2020.md) 12 | 13 | 2019 papers — [https://github.com/summerscope/fair-ml-reading-group/blob/master/2019.md](https://github.com/summerscope/fair-ml-reading-group/blob/master/2019.md) 14 | 15 | --- 16 | ## Reading history 17 | 18 | March 3 19 | **On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?** 20 | [http://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf](http://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf) 21 | 22 | March 18 23 | **"This Whole Thing Smacks of Gender": Algorithmic Exclusion in Bioimpedance-based Body Composition Analysis** 24 | [https://arxiv.org/abs/2101.08325](https://arxiv.org/abs/2101.08325) 25 | 26 | March 31 27 | **PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies** 28 | [https://www.researchgate.net/publication/330070686_PROBAST_A_Tool_to_Assess_the_Risk_of_Bias_and_Applicability_of_Prediction_Model_Studies](https://www.researchgate.net/publication/330070686_PROBAST_A_Tool_to_Assess_the_Risk_of_Bias_and_Applicability_of_Prediction_Model_Studies) 29 | 30 | _PROBAST in use_ 31 | **Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans** 32 | [https://www.nature.com/articles/s42256-021-00307-0](https://www.nature.com/articles/s42256-021-00307-0) 33 | 34 | April 15 35 | **A Unified Approach to Interpreting Model Predictions** _(SHAP paper)_ 36 | [https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html](https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html) 37 | 
[https://proceedings.neurips.cc/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf](https://proceedings.neurips.cc/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf) 38 | 39 | April 28 40 | **Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?** 41 | [https://arxiv.org/abs/1812.05239](https://arxiv.org/abs/1812.05239) 42 | 43 | May 13 44 | **Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI** 45 | [https://arxiv.org/abs/2010.07487](https://arxiv.org/abs/2010.07487) 46 | 47 | May 26 48 | **Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities** 49 | [https://explainml-tutorial.github.io/aaai21](https://explainml-tutorial.github.io/aaai21) 50 | 51 | _Bonus Paper_ 52 | **The Myth of Complete AI-Fairness** 53 | [https://arxiv.org/abs/2104.12544](https://arxiv.org/abs/2104.12544) 54 | 55 | June 10 56 | **Fair Bayesian Optimization** 57 | [https://arxiv.org/abs/2006.05109](https://arxiv.org/abs/2006.05109) 58 | 59 | June 23 60 | **Ethical considerations in multilabel text classifications** 61 | [https://smartygrants.com.au/research/ethical-considerations-in-multilabel-text-classifications](https://smartygrants.com.au/research/ethical-considerations-in-multilabel-text-classifications) 62 | 63 | July 8 64 | **It's COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks** 65 | [https://arxiv.org/abs/2106.05498](https://arxiv.org/abs/2106.05498) 66 | 67 | July 21 68 | **Artificial Intelligence and the Purpose of Social Systems** 69 | [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3850456](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3850456) 70 | 71 | August 5 72 | **Leave-one-out Unfairness** 73 | [https://arxiv.org/abs/2107.10171v1](https://arxiv.org/abs/2107.10171v1) 74 | -------------------------------------------------------------------------------- /scratch/fairml-hacknight-2.ipynb: 
-------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 65, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "# import the libraries we will use\n", 10 | "import numpy as np # for array and matrix maths\n", 11 | "import pandas as pd # for working with tables of data\n", 12 | "import matplotlib.pyplot as plt # for making plots" 13 | ] 14 | }, 15 | { 16 | "cell_type": "code", 17 | "execution_count": 66, 18 | "metadata": {}, 19 | "outputs": [], 20 | "source": [ 21 | "# Load the Kaggle german credit data: https://www.kaggle.com/uciml/german-credit/data\n", 22 | "df_kaggle = pd.read_csv(\"~/data/german/german_credit_data.csv\", index_col=0)\n", 23 | "# Load the original UCI german credit data: https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29\n", 24 | "df_uci = pd.read_csv(\"~/data/german/german.data\", header=None, sep=\"\\s+\")\n", 25 | "\n", 26 | "# Add the last column from the UCI data to the kaggle data to get labels\n", 27 | "last_column_uci = df_uci.iloc[:, -1]\n", 28 | "df = df_kaggle.assign(Good=last_column_uci)" 29 | ] 30 | }, 31 | { 32 | "cell_type": "code", 33 | "execution_count": 67, 34 | "metadata": {}, 35 | "outputs": [], 36 | "source": [ 37 | "category_cols = ['Sex', 'Job', 'Housing', 'Purpose']\n", 38 | "target_col = 'Good'\n", 39 | "\n", 40 | "for c in category_cols + [target_col]:\n", 41 | " df[c] = df[c].astype('category')\n", 42 | "\n", 43 | "def uci_class_as_boolean(x):\n", 44 | " if x == 1:\n", 45 | " return True\n", 46 | " elif x == 2:\n", 47 | " return False\n", 48 | " else:\n", 49 | " return x\n", 50 | " \n", 51 | "df[target_col] = df[target_col].apply(uci_class_as_boolean)\n", 52 | "df = df[pd.notna(df['Saving accounts'])]" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": 68, 58 | "metadata": {}, 59 | "outputs": [ 60 | { 61 | "data": { 62 | "text/plain": [ 63 | "array(['little', 'quite 
rich', 'rich', 'moderate'], dtype=object)" 64 | ] 65 | }, 66 | "execution_count": 68, 67 | "metadata": {}, 68 | "output_type": "execute_result" 69 | } 70 | ], 71 | "source": [ 72 | "df['Saving accounts'].unique()" 73 | ] 74 | }, 75 | { 76 | "cell_type": "code", 77 | "execution_count": 69, 78 | "metadata": {}, 79 | "outputs": [], 80 | "source": [ 81 | "account_value_lookup = {\n", 82 | " np.nan: 0,\n", 83 | " 'little': 1,\n", 84 | " 'moderate': 2,\n", 85 | " 'rich': 3,\n", 86 | " 'quite rich': 4\n", 87 | "}\n", 88 | "\n", 89 | "def dollar_dollar(x):\n", 90 | " return account_value_lookup[x]\n", 91 | "\n", 92 | "df['Saving amount'] = df['Saving accounts'].apply(dollar_dollar)" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": 70, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "import pystan" 102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": 49, 107 | "metadata": {}, 108 | "outputs": [], 109 | "source": [ 110 | "model_code = \"\"\"\n", 111 | "data {\n", 112 | " int N;\n", 113 | "\n", 114 | " int y[N];\n", 115 | "\n", 116 | " int female[N];\n", 117 | " vector[N] bank_balance;\n", 118 | " \n", 119 | " real money_mean_prior_std;\n", 120 | " real money_per_u_prior_std;\n", 121 | "\n", 122 | " real good_alpha_prior_std;\n", 123 | " real good_beta_prior_std;\n", 124 | " \n", 125 | " real bank_balance_std;\n", 126 | "}\n", 127 | "\n", 128 | "parameters {\n", 129 | " real money_mean_male;\n", 130 | " real money_mean_female;\n", 131 | " real money_per_u;\n", 132 | "\n", 133 | " real good_alpha;\n", 134 | " real good_beta;\n", 135 | "\n", 136 | " vector[N] u;\n", 137 | "}\n", 138 | "\n", 139 | "model {\n", 140 | " money_mean_female ~ normal(0, money_mean_prior_std);\n", 141 | " money_mean_male ~ normal(0, money_mean_prior_std);\n", 142 | " money_per_u ~ normal(0, money_per_u_prior_std);\n", 143 | "\n", 144 | " good_alpha ~ normal(0, good_alpha_prior_std);\n", 145 | " good_beta ~ normal(0, 
good_beta_prior_std);\n", 146 | "\n", 147 | " u ~ normal(0, 1);\n", 148 | "\n", 149 | " for (n in 1:N) {\n", 150 | " real pred_balance;\n", 151 | " if (female[n]) {\n", 152 | " pred_balance = money_mean_female + money_per_u*u[n];\n", 153 | " } else {\n", 154 | " pred_balance = money_mean_male + money_per_u*u[n];\n", 155 | " }\n", 156 | " bank_balance[n] ~ normal(pred_balance, bank_balance_std);\n", 157 | " }\n", 158 | " \n", 159 | " y ~ bernoulli_logit(bank_balance * good_beta + good_alpha + u);\n", 160 | "}\n", 161 | "\"\"\"" 162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": 50, 167 | "metadata": {}, 168 | "outputs": [ 169 | { 170 | "name": "stderr", 171 | "output_type": "stream", 172 | "text": [ 173 | "INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_dd88a3a980ba9f56451dd3bed951ee71 NOW.\n" 174 | ] 175 | } 176 | ], 177 | "source": [ 178 | "model = pystan.StanModel(model_code=model_code)" 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": 71, 184 | "metadata": {}, 185 | "outputs": [], 186 | "source": [ 187 | "data = dict(\n", 188 | " N = len(df),\n", 189 | " y = df['Good'].values.astype(int),\n", 190 | " female = (df['Sex'] == 'female').values.astype(int),\n", 191 | " bank_balance = df['Saving amount'].values,\n", 192 | " money_mean_prior_std = 4.0,\n", 193 | " money_per_u_prior_std = 4.0,\n", 194 | " good_alpha_prior_std = 1.0,\n", 195 | " good_beta_prior_std = 1.0,\n", 196 | " bank_balance_std = 1.0)" 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": 72, 202 | "metadata": {}, 203 | "outputs": [], 204 | "source": [ 205 | "fit = model.sampling(data)" 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": 73, 211 | "metadata": {}, 212 | "outputs": [ 213 | { 214 | "name": "stderr", 215 | "output_type": "stream", 216 | "text": [ 217 | "WARNING:pystan:Truncated summary with the 'fit.__repr__' method. 
For the full summary use 'print(fit)'\n" 218 | ] 219 | }, 220 | { 221 | "data": { 222 | "text/plain": [ 223 | "\n", 224 | "Warning: Shown data is truncated to 100 parameters\n", 225 | "For the full summary use 'print(fit)'\n", 226 | "\n", 227 | "Inference for Stan model: anon_model_dd88a3a980ba9f56451dd3bed951ee71.\n", 228 | "4 chains, each with iter=2000; warmup=1000; thin=1; \n", 229 | "post-warmup draws per chain=1000, total post-warmup draws=4000.\n", 230 | "\n", 231 | " mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat\n", 232 | "money_mean_male 1.48 5.6e-4 0.04 1.4 1.46 1.48 1.51 1.57 5815 1.0\n", 233 | "money_mean_female 1.45 8.3e-4 0.06 1.33 1.41 1.45 1.49 1.58 5680 1.0\n", 234 | "money_per_u 2.8e-4 3.0e-3 0.08 -0.16 -0.05 4.9e-4 0.05 0.16 729 1.01\n", 235 | "good_alpha 0.24 6.4e-3 0.2 -0.16 0.1 0.24 0.38 0.63 1011 1.0\n", 236 | "good_beta 0.44 4.3e-3 0.13 0.19 0.35 0.44 0.53 0.71 946 1.0\n", 237 | "u[1] -0.53 0.01 0.9 -2.3 -1.12 -0.53 0.07 1.26 5278 1.0\n", 238 | "u[2] 0.29 0.01 0.9 -1.4 -0.33 0.28 0.9 2.08 6670 1.0\n", 239 | "u[3] 0.3 0.01 0.92 -1.45 -0.32 0.29 0.92 2.11 5295 1.0\n", 240 | "u[4] -0.51 0.01 0.91 -2.31 -1.1 -0.5 0.08 1.24 6117 1.0\n", 241 | "u[5] 0.13 0.01 0.95 -1.65 -0.55 0.11 0.78 2.01 4342 1.0\n", 242 | "u[6] 0.3 0.01 0.91 -1.46 -0.31 0.28 0.9 2.09 6452 1.0\n", 243 | "u[7] 0.17 0.01 0.94 -1.67 -0.47 0.16 0.8 2.08 4833 1.0\n", 244 | "u[8] -0.53 0.01 0.91 -2.31 -1.16 -0.52 0.08 1.24 6207 1.0\n", 245 | "u[9] -0.52 0.01 0.88 -2.22 -1.13 -0.52 0.08 1.22 5723 1.0\n", 246 | "u[10] -0.54 0.01 0.92 -2.34 -1.16 -0.56 0.05 1.28 5942 1.0\n", 247 | "u[11] 0.29 0.01 0.93 -1.51 -0.35 0.31 0.92 2.12 5965 1.0\n", 248 | "u[12] -0.52 0.01 0.92 -2.39 -1.12 -0.51 0.09 1.35 5598 1.0\n", 249 | "u[13] 0.31 0.01 0.9 -1.44 -0.29 0.31 0.9 2.13 5668 1.0\n", 250 | "u[14] -0.6 0.01 0.92 -2.4 -1.21 -0.58 -5.7e-3 1.19 4635 1.0\n", 251 | "u[15] -0.53 0.02 0.94 -2.35 -1.17 -0.54 0.09 1.33 3875 1.0\n", 252 | "u[16] 0.13 0.01 1.01 -1.84 -0.57 0.12 0.81 2.14 5041 1.0\n", 
253 | "u[17] 0.3 0.01 0.93 -1.51 -0.33 0.3 0.92 2.14 5434 1.0\n", 254 | "u[18] 0.15 0.02 0.98 -1.71 -0.52 0.14 0.81 2.1 4216 1.0\n", 255 | "u[19] 0.29 0.01 0.92 -1.54 -0.31 0.28 0.9 2.09 5577 1.0\n", 256 | "u[20] 0.24 0.01 0.94 -1.61 -0.41 0.26 0.89 2.09 5714 1.0\n", 257 | "u[21] 0.28 0.01 0.92 -1.51 -0.34 0.27 0.88 2.09 6240 1.0\n", 258 | "u[22] 0.31 0.01 0.9 -1.52 -0.29 0.3 0.9 2.08 6650 1.0\n", 259 | "u[23] 0.18 0.01 0.93 -1.64 -0.45 0.17 0.79 2.08 5469 1.0\n", 260 | "u[24] 0.3 0.01 0.91 -1.45 -0.3 0.3 0.9 2.1 5616 1.0\n", 261 | "u[25] -0.53 0.01 0.91 -2.32 -1.15 -0.54 0.08 1.22 5183 1.0\n", 262 | "u[26] 0.2 0.01 0.95 -1.59 -0.45 0.18 0.84 2.09 5468 1.0\n", 263 | "u[27] 0.3 0.01 0.89 -1.45 -0.31 0.3 0.89 2.03 4764 1.0\n", 264 | "u[28] 0.23 0.01 0.93 -1.6 -0.38 0.21 0.83 2.04 5499 1.0\n", 265 | "u[29] 0.31 0.01 0.92 -1.51 -0.31 0.3 0.91 2.16 5161 1.0\n", 266 | "u[30] -0.52 0.01 0.92 -2.36 -1.13 -0.52 0.1 1.27 5481 1.0\n", 267 | "u[31] 0.3 0.01 0.92 -1.51 -0.34 0.3 0.91 2.16 4953 1.0\n", 268 | "u[32] -0.52 0.01 0.9 -2.25 -1.15 -0.54 0.1 1.22 4316 1.0\n", 269 | "u[33] 0.3 0.01 0.92 -1.43 -0.35 0.28 0.92 2.09 6209 1.0\n", 270 | "u[34] 0.31 0.01 0.93 -1.56 -0.32 0.32 0.93 2.22 6050 1.0\n", 271 | "u[35] 0.14 0.01 1.0 -1.82 -0.54 0.15 0.8 2.1 5184 1.0\n", 272 | "u[36] 0.12 0.01 0.97 -1.72 -0.55 0.11 0.76 2.03 4398 1.0\n", 273 | "u[37] 0.32 0.01 0.92 -1.46 -0.29 0.3 0.94 2.21 5476 1.0\n", 274 | "u[38] 0.24 0.01 0.91 -1.56 -0.37 0.21 0.84 2.1 5473 1.0\n", 275 | "u[39] -0.51 0.01 0.95 -2.4 -1.15 -0.5 0.12 1.3 6250 1.0\n", 276 | "u[40] 0.3 0.01 0.94 -1.48 -0.34 0.29 0.95 2.15 6216 1.0\n", 277 | "u[41] 0.14 0.01 0.97 -1.75 -0.51 0.13 0.8 2.03 4293 1.0\n", 278 | "u[42] 0.14 0.01 0.97 -1.72 -0.53 0.12 0.78 2.04 4791 1.0\n", 279 | "u[43] 0.3 0.01 0.93 -1.55 -0.34 0.3 0.92 2.08 4516 1.0\n", 280 | "u[44] 0.23 0.01 0.94 -1.62 -0.43 0.24 0.86 2.08 6334 1.0\n", 281 | "u[45] 0.31 0.01 0.9 -1.43 -0.33 0.29 0.93 2.05 5752 1.0\n", 282 | "u[46] 0.3 0.01 0.93 -1.53 -0.31 0.3 0.94 2.1 
4859 1.0\n", 283 | "u[47] -0.53 0.01 0.9 -2.32 -1.14 -0.53 0.06 1.22 4956 1.0\n", 284 | "u[48] 0.29 0.01 0.9 -1.45 -0.3 0.29 0.89 2.09 5197 1.0\n", 285 | "u[49] 0.32 0.01 0.96 -1.57 -0.33 0.32 0.97 2.18 5379 1.0\n", 286 | "u[50] -0.53 0.01 0.91 -2.31 -1.12 -0.53 0.09 1.25 5218 1.0\n", 287 | "u[51] 0.31 0.01 0.91 -1.46 -0.29 0.3 0.88 2.18 4540 1.0\n", 288 | "u[52] -0.53 0.01 0.9 -2.26 -1.16 -0.53 0.09 1.2 5749 1.0\n", 289 | "u[53] -0.51 0.01 0.9 -2.24 -1.15 -0.51 0.12 1.24 5366 1.0\n", 290 | "u[54] 0.29 0.01 0.89 -1.4 -0.3 0.3 0.88 2.0 6103 1.0\n", 291 | "u[55] 0.29 0.01 0.94 -1.52 -0.35 0.28 0.91 2.18 4867 1.0\n", 292 | "u[56] 0.18 0.01 0.92 -1.57 -0.45 0.16 0.81 2.0 5432 1.0\n", 293 | "u[57] -0.51 0.01 0.91 -2.31 -1.12 -0.51 0.09 1.24 6780 1.0\n", 294 | "u[58] 0.3 0.01 0.92 -1.5 -0.31 0.29 0.9 2.15 5246 1.0\n", 295 | "u[59] 0.31 0.01 0.92 -1.47 -0.32 0.29 0.92 2.1 6423 1.0\n", 296 | "u[60] 0.3 0.01 0.94 -1.49 -0.34 0.28 0.95 2.18 5013 1.0\n", 297 | "u[61] 0.31 0.01 0.94 -1.47 -0.33 0.31 0.94 2.14 6496 1.0\n", 298 | "u[62] -0.52 0.01 0.9 -2.28 -1.13 -0.5 0.06 1.23 5040 1.0\n", 299 | "u[63] 0.29 0.01 0.93 -1.51 -0.33 0.27 0.89 2.21 5467 1.0\n", 300 | "u[64] 0.29 0.01 0.93 -1.48 -0.35 0.28 0.92 2.14 7999 1.0\n", 301 | "u[65] 0.16 0.01 0.98 -1.73 -0.51 0.17 0.83 2.06 4994 1.0\n", 302 | "u[66] 0.25 0.01 0.92 -1.48 -0.4 0.24 0.88 2.1 6068 1.0\n", 303 | "u[67] 0.31 0.01 0.9 -1.41 -0.33 0.3 0.92 2.07 4622 1.0\n", 304 | "u[68] 0.31 0.01 0.94 -1.55 -0.3 0.3 0.92 2.23 5713 1.0\n", 305 | "u[69] 0.3 0.01 0.94 -1.53 -0.34 0.29 0.93 2.19 6019 1.0\n", 306 | "u[70] 0.3 0.01 0.92 -1.46 -0.32 0.28 0.9 2.16 5700 1.0\n", 307 | "u[71] -0.59 0.01 0.91 -2.37 -1.22 -0.6 0.01 1.25 5757 1.0\n", 308 | "u[72] 0.26 0.01 0.91 -1.51 -0.36 0.25 0.85 2.08 5667 1.0\n", 309 | "u[73] -0.52 0.01 0.93 -2.36 -1.15 -0.53 0.1 1.26 6392 1.0\n", 310 | "u[74] 0.31 0.01 0.93 -1.5 -0.32 0.3 0.93 2.18 6045 1.0\n", 311 | "u[75] 0.31 0.01 0.93 -1.56 -0.32 0.32 0.92 2.18 6069 1.0\n", 312 | "u[76] 0.19 0.01 0.95 
-1.64 -0.45 0.17 0.81 2.09 5075 1.0\n", 313 | "u[77] -0.51 0.01 0.89 -2.31 -1.13 -0.51 0.09 1.21 4635 1.0\n", 314 | "u[78] 0.23 0.01 0.92 -1.52 -0.4 0.24 0.87 2.07 5501 1.0\n", 315 | "u[79] 0.32 0.01 0.94 -1.49 -0.31 0.3 0.95 2.2 5235 1.0\n", 316 | "u[80] 0.24 0.01 0.97 -1.62 -0.41 0.22 0.88 2.14 6393 1.0\n", 317 | "u[81] 0.3 0.01 0.92 -1.5 -0.32 0.3 0.9 2.09 5308 1.0\n", 318 | "u[82] 0.32 0.01 0.91 -1.43 -0.29 0.31 0.91 2.13 5417 1.0\n", 319 | "u[83] 0.3 0.01 0.91 -1.47 -0.29 0.28 0.91 2.1 5297 1.0\n", 320 | "u[84] -0.53 0.01 0.93 -2.29 -1.18 -0.51 0.11 1.28 6209 1.0\n", 321 | "u[85] -0.54 0.01 0.91 -2.33 -1.14 -0.54 0.06 1.25 5407 1.0\n", 322 | "u[86] 0.29 0.01 0.92 -1.47 -0.33 0.28 0.9 2.1 5875 1.0\n", 323 | "u[87] 0.16 0.01 0.97 -1.75 -0.48 0.17 0.8 2.06 4411 1.0\n", 324 | "u[88] 0.23 0.01 0.96 -1.63 -0.43 0.23 0.88 2.1 5672 1.0\n", 325 | "u[89] 0.31 0.01 0.92 -1.47 -0.3 0.31 0.9 2.11 5755 1.0\n", 326 | "u[90] 0.3 0.01 0.91 -1.5 -0.31 0.31 0.91 2.07 5656 1.0\n", 327 | "u[91] -0.52 0.01 0.89 -2.26 -1.11 -0.52 0.05 1.31 6019 1.0\n", 328 | "u[92] 0.14 0.01 0.95 -1.67 -0.51 0.14 0.77 2.06 4439 1.0\n", 329 | "u[93] -0.72 0.02 0.99 -2.59 -1.4 -0.74 -0.06 1.26 3663 1.0\n", 330 | "u[94] 0.15 0.02 0.93 -1.71 -0.47 0.15 0.77 2.04 3514 1.0\n", 331 | "lp__ -1188 0.48 20.2 -1228 -1201 -1188 -1175 -1150 1782 1.0\n", 332 | "\n", 333 | "Samples were drawn using NUTS at Fri Nov 29 20:26:52 2019.\n", 334 | "For each parameter, n_eff is a crude measure of effective sample size,\n", 335 | "and Rhat is the potential scale reduction factor on split chains (at \n", 336 | "convergence, Rhat=1)." 
337 | ] 338 | }, 339 | "execution_count": 73, 340 | "metadata": {}, 341 | "output_type": "execute_result" 342 | } 343 | ], 344 | "source": [ 345 | "fit" 346 | ] 347 | }, 348 | { 349 | "cell_type": "code", 350 | "execution_count": null, 351 | "metadata": {}, 352 | "outputs": [], 353 | "source": [] 354 | } 355 | ], 356 | "metadata": { 357 | "kernelspec": { 358 | "display_name": "Python (stan env)", 359 | "language": "python", 360 | "name": "stan" 361 | }, 362 | "language_info": { 363 | "codemirror_mode": { 364 | "name": "ipython", 365 | "version": 3 366 | }, 367 | "file_extension": ".py", 368 | "mimetype": "text/x-python", 369 | "name": "python", 370 | "nbconvert_exporter": "python", 371 | "pygments_lexer": "ipython3", 372 | "version": "3.7.4" 373 | } 374 | }, 375 | "nbformat": 4, 376 | "nbformat_minor": 4 377 | } 378 | --------------------------------------------------------------------------------
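The Stan model in `scratch/fairml-hacknight-2.ipynb` encodes the counterfactual-fairness setup from the papers above: sex and a latent confounder `u` jointly generate the observed bank balance, while the `Good` outcome depends only on balance and `u`. The NumPy sketch below illustrates, on synthetic data, the counterfactual check this structure enables. The group means (~1.48 male, ~1.45 female) and the logit coefficients (alpha ~0.24, beta ~0.44) are taken from the posterior means in the notebook's fit output; the 0.5 weight on `u` is a made-up value for illustration, since the fitted `money_per_u` is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent confounder, independent of sex (the Stan model's u ~ normal(0, 1)).
u = rng.normal(0.0, 1.0, n)
female = rng.integers(0, 2, n)

# Observed balance depends on the sex-specific mean and on u.
# Group means mirror the notebook's posterior means; the 0.5 weight
# on u is a hypothetical value chosen so u visibly matters.
money_mean = np.where(female == 1, 1.45, 1.48)
noise = rng.normal(0.0, 0.2, n)
balance = money_mean + 0.5 * u + noise

# Counterfactual balance: swap each person's group mean while holding
# u and the noise fixed, i.e. "same person, other sex".
money_mean_cf = np.where(female == 1, 1.48, 1.45)
balance_cf = money_mean_cf + 0.5 * u + noise

def predict_good(b, alpha=0.24, beta=0.44):
    """Logistic predictor on balance (alpha, beta from the fit's posterior means)."""
    return 1.0 / (1.0 + np.exp(-(alpha + beta * b)))

# A predictor that reads the observed balance shifts when sex is flipped;
# a predictor built on u alone would not move at all.
gap = np.abs(predict_good(balance) - predict_good(balance_cf)).max()
print(f"max |p - p_cf| for the balance-based predictor: {gap:.4f}")
```

Because only the group mean changes under the flip, the balance moves by just +-0.03 here and the predicted probability by well under a percentage point; the point is that any nonzero gap means the balance-based predictor is not counterfactually fair, while a predictor on `u` alone is fair by construction.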