{% if site.github.is_project_page %}
Go to Homepage
View on GitHub
{% endif %}
{% if site.show_downloads %}
Download .zip
Download .tar.gz
{% endif %}

{{ content }}
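The Liquid conditionals above are driven by site configuration: `site.show_downloads` comes from `_config.yml`, while `site.github.is_project_page` is populated automatically by the `jekyll-github-metadata` plugin on GitHub Pages. A minimal `_config.yml` sketch with the relevant keys (the theme name is an assumption inferred from the `.project-name`/`.main-content` selectors in `style.scss`; values are illustrative):

```yaml
# _config.yml (sketch; values illustrative)
theme: jekyll-theme-cayman   # assumed theme; also read as {{ site.theme }} in style.scss
show_downloads: true         # gates the "Download .zip / .tar.gz" buttons above
```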
--------------------------------------------------------------------------------
/assets/css/style.scss:
--------------------------------------------------------------------------------
---
---

// The empty front matter above is required: it tells Jekyll to run this
// file through Liquid and Sass, so the theme import below is resolved
// rather than served verbatim.
@import "{{ site.theme }}";

// Override the theme's type scale and content width on large screens.
@media screen and (min-width: 80em) {
  .project-name {
    font-size: 4.75rem;
  }
}

@media screen and (min-width: 64em) and (max-width: 80em) {
  .project-name {
    font-size: 3.25rem;
  }
}

@media screen and (min-width: 80em) {
  .main-content {
    max-width: 80rem;
    padding: 2rem 6rem;
    margin: 0 auto;
    font-size: 1.1rem;
  }
}

@media screen and (min-width: 64em) and (max-width: 80em) {
  .main-content {
    max-width: 64rem;
    padding: 2rem 6rem;
    margin: 0 auto;
    font-size: 1.1rem;
  }
}
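The four blocks above repeat the same two breakpoints. A possible tightening, grouping both selectors under each breakpoint (a sketch only; behavior should match the original, with the same caveat that both queries match at exactly 80em and the later rule wins):

```scss
@media screen and (min-width: 80em) {
  .project-name { font-size: 4.75rem; }
  .main-content {
    max-width: 80rem;
    padding: 2rem 6rem;
    margin: 0 auto;
    font-size: 1.1rem;
  }
}

@media screen and (min-width: 64em) and (max-width: 80em) {
  .project-name { font-size: 3.25rem; }
  .main-content {
    max-width: 64rem;
    padding: 2rem 6rem;
    margin: 0 auto;
    font-size: 1.1rem;
  }
}
```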
--------------------------------------------------------------------------------
/conf-2023.md:
--------------------------------------------------------------------------------
## 2023

### [AAAI 2023](https://dblp.uni-trier.de/db/conf/aaai/aaai2023.html)

### [AIES 2023](https://www.aies-conference.com/2023/)

### [CIKM 2023](https://dblp.uni-trier.de/db/conf/cikm/cikm2023.html)

### [FAccT 2023](https://dblp.uni-trier.de/db/conf/fat/fat2023.html)

- [Machine Explanations and Human Understanding.](https://doi.org/10.1145/3593013.3593970)
- [Broadening AI Ethics Narratives: An Indic Art View.](https://doi.org/10.1145/3593013.3593971)
- [How to Explain and Justify Almost Any Decision: Potential Pitfalls for Accountability in AI Decision-Making.](https://doi.org/10.1145/3593013.3593972)
- ['We are adults and deserve control of our phones': Examining the risks and opportunities of a right to repair for mobile apps.](https://doi.org/10.1145/3593013.3593973)
- [Fairness in machine learning from the perspective of sociology of statistics: How machine learning is becoming scientific by turning its back on metrological realism.](https://doi.org/10.1145/3593013.3593974)
- [Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans.](https://doi.org/10.1145/3593013.3593975)
- [Optimization's Neglected Normative Commitments.](https://doi.org/10.1145/3593013.3593976)
- [Welfarist Moral Grounding for Transparent AI.](https://doi.org/10.1145/3593013.3593977)
- [Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application.](https://doi.org/10.1145/3593013.3593978)
- [Multi-dimensional Discrimination in Law and Machine Learning - A Comparative Overview.](https://doi.org/10.1145/3593013.3593979)
- [Reconciling Individual Probability Forecasts.](https://doi.org/10.1145/3593013.3593980)
- [The Gradient of Generative AI Release: Methods and Considerations.](https://doi.org/10.1145/3593013.3593981)
- [In the Name of Fairness: Assessing the Bias in Clinical Record De-identification.](https://doi.org/10.1145/3593013.3593982)
- ["How Biased are Your Features?": Computing Fairness Influence Functions with Global Sensitivity Analysis.](https://doi.org/10.1145/3593013.3593983)
- [Preventing Discriminatory Decision-making in Evolving Data Streams.](https://doi.org/10.1145/3593013.3593984)
- [WEIRD FAccTs: How Western, Educated, Industrialized, Rich, and Democratic is FAccT?](https://doi.org/10.1145/3593013.3593985)
- [Trustworthy AI and the Logics of Intersectional Resistance.](https://doi.org/10.1145/3593013.3593986)
- [In her Shoes: Gendered Labelling in Crowdsourced Safety Perceptions Data from India.](https://doi.org/10.1145/3593013.3593987)
- [The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions.](https://doi.org/10.1145/3593013.3593988)
- ["I wouldn't say offensive but...": Disability-Centered Perspectives on Large Language Models.](https://doi.org/10.1145/3593013.3593989)
- [Walking the Walk of AI Ethics: Organizational Challenges and the Individualization of Risk among Ethics Entrepreneurs.](https://doi.org/10.1145/3593013.3593990)
- [Algorithmic Transparency from the South: Examining the state of algorithmic transparency in Chile's public administration algorithms.](https://doi.org/10.1145/3593013.3593991)
- [Who Should Pay When Machines Cause Harm? Laypeople's Expectations of Legal Damages for Machine-Caused Harm.](https://doi.org/10.1145/3593013.3593992)
- [Diagnosing AI Explanation Methods with Folk Concepts of Behavior.](https://doi.org/10.1145/3593013.3593993)
- [Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study.](https://doi.org/10.1145/3593013.3593994)
- [The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices.](https://doi.org/10.1145/3593013.3593995)
- [Making Intelligence: Ethical Values in IQ and ML Benchmarks.](https://doi.org/10.1145/3593013.3593996)
- [Saliency Cards: A Framework to Characterize and Compare Saliency Methods.](https://doi.org/10.1145/3593013.3593997)
- [Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints.](https://doi.org/10.1145/3593013.3593998)
- [Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument.](https://doi.org/10.1145/3593013.3593999)
- ['Affordances' for Machine Learning.](https://doi.org/10.1145/3593013.3594000)
- [Explainable AI is Dead, Long Live Explainable AI!: Hypothesis-driven Decision Support using Evaluative AI.](https://doi.org/10.1145/3593013.3594001)
- [Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML.](https://doi.org/10.1145/3593013.3594002)
- [Simplicity Bias Leads to Amplified Performance Disparities.](https://doi.org/10.1145/3593013.3594003)
- [On the Independence of Association Bias and Empirical Fairness in Language Models.](https://doi.org/10.1145/3593013.3594004)
- [Envisioning Equitable Speech Technologies for Black Older Adults.](https://doi.org/10.1145/3593013.3594005)
- [Group-Fair Classification with Strategic Agents.](https://doi.org/10.1145/3593013.3594006)
- [The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice.](https://doi.org/10.1145/3593013.3594007)
- [Domain Adaptive Decision Trees: Implications for Accuracy and Fairness.](https://doi.org/10.1145/3593013.3594008)
- [Algorithmic Transparency and Accountability through Crowdsourcing: A Study of the NYC School Admission Lottery.](https://doi.org/10.1145/3593013.3594009)
- [Rethinking Transparency as a Communicative Constellation.](https://doi.org/10.1145/3593013.3594010)
- [On the Praxes and Politics of AI Speech Emotion Recognition.](https://doi.org/10.1145/3593013.3594011)
- [It's about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?](https://doi.org/10.1145/3593013.3594012)
- [Does AI-Assisted Fact-Checking Disproportionately Benefit Majority Groups Online?](https://doi.org/10.1145/3593013.3594013)
- [Algorithms as Social-Ecological-Technological Systems: an Environmental Justice Lens on Algorithmic Audits.](https://doi.org/10.1145/3593013.3594014)
- [The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government.](https://doi.org/10.1145/3593013.3594015)
- [AI's Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia.](https://doi.org/10.1145/3593013.3594016)
- [A Theory of Auditability for Allocation and Social Choice Mechanisms.](https://doi.org/10.1145/3593013.3594017)
- [Representation in AI Evaluations.](https://doi.org/10.1145/3593013.3594019)
- [Detecting disparities in police deployments using dashcam data.](https://doi.org/10.1145/3593013.3594020)
- [Delayed and Indirect Impacts of Link Recommendations.](https://doi.org/10.1145/3593013.3594021)
- [Striving for Affirmative Algorithmic Futures: How the Social Sciences can Promote more Equitable and Just Algorithmic System Design.](https://doi.org/10.1145/3593013.3594022)
- [Can Workers Meaningfully Consent to Workplace Wellbeing Technologies?](https://doi.org/10.1145/3593013.3594023)
- [Invigorating Ubuntu Ethics in AI for healthcare: Enabling equitable care.](https://doi.org/10.1145/3593013.3594024)
- [Honor Ethics: The Challenge of Globalizing Value Alignment in AI.](https://doi.org/10.1145/3593013.3594026)
- [Power and Resistance in the Twitter Bias Discourse.](https://doi.org/10.1145/3593013.3594027)
- [Runtime Monitoring of Dynamic Fairness Properties.](https://doi.org/10.1145/3593013.3594028)
- [Data Collaboratives with the Use of Decentralised Learning.](https://doi.org/10.1145/3593013.3594029)
- [Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy.](https://doi.org/10.1145/3593013.3594030)
- [Care and Coordination in Algorithmic Systems: An Economies of Worth Approach.](https://doi.org/10.1145/3593013.3594031)
- [You Sound Depressed: A Case Study on Sonde Health's Diagnostic Use of Voice Analysis AI.](https://doi.org/10.1145/3593013.3594032)
- [Harms from Increasingly Agentic Algorithmic Systems.](https://doi.org/10.1145/3593013.3594033)
- [How Redundant are Redundant Encodings? Blindness in the Wild and Racial Disparity when Race is Unobserved.](https://doi.org/10.1145/3593013.3594034)
- [On the Site of Predictive Justice.](https://doi.org/10.1145/3593013.3594035)
- [Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making.](https://doi.org/10.1145/3593013.3594036)
- [Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice.](https://doi.org/10.1145/3593013.3594037)
- [Your Browsing History May Cost You: A Framework for Discovering Differential Pricing in Non-Transparent Markets.](https://doi.org/10.1145/3593013.3594038)
- [Add-Remove-or-Relabel: Practitioner-Friendly Bias Mitigation via Influential Fairness.](https://doi.org/10.1145/3593013.3594039)
- [FairAssign: Stochastically Fair Driver Assignment in Gig Delivery Platforms.](https://doi.org/10.1145/3593013.3594040)
- [Algorithmic Decisions, Desire for Control, and the Preference for Human Review over Algorithmic Review.](https://doi.org/10.1145/3593013.3594041)
- [Gender Animus Can Still Exist Under Favorable Disparate Impact: a Cautionary Tale from Online P2P Lending.](https://doi.org/10.1145/3593013.3594042)
- ["I Think You Might Like This": Exploring Effects of Confidence Signal Patterns on Trust in and Reliance on Conversational Recommender Systems.](https://doi.org/10.1145/3593013.3594043)
- [Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law is not a Decision Tree.](https://doi.org/10.1145/3593013.3594044)
- [On (assessing) the fairness of risk score models.](https://doi.org/10.1145/3593013.3594045)
- [UNFair: Search Engine Manipulation, Undetectable by Amortized Inequity.](https://doi.org/10.1145/3593013.3594046)
- [Datafication Genealogies beyond Algorithmic Fairness: Making Up Racialised Subjects.](https://doi.org/10.1145/3593013.3594047)
- [Maximal fairness.](https://doi.org/10.1145/3593013.3594048)
- [Augmented Datasheets for Speech Datasets and Ethical Decision-Making.](https://doi.org/10.1145/3593013.3594049)
- [To Be High-Risk, or Not To Be - Semantic Specifications and Implications of the AI Act's High-Risk AI Applications and Harmonised Standards.](https://doi.org/10.1145/3593013.3594050)
- [Implementing Fairness Constraints in Markets Using Taxes and Subsidies.](https://doi.org/10.1145/3593013.3594051)
- [AI in the Public Eye: Investigating Public AI Literacy Through AI Art.](https://doi.org/10.1145/3593013.3594052)
- [Questioning the ability of feature-based explanations to empower non-experts in robo-advised financial decision-making.](https://doi.org/10.1145/3593013.3594053)
- [On the Impact of Explanations on Understanding of Algorithmic Decision-Making.](https://doi.org/10.1145/3593013.3594054)
- [Addressing contingency in algorithmic (mis)information classification: Toward a responsible machine learning agenda.](https://doi.org/10.1145/3593013.3594055)
- ["We try to empower them" - Exploring Future Technologies to Support Migrant Jobseekers.](https://doi.org/10.1145/3593013.3594056)
- [Robustness Implies Fairness in Causal Algorithmic Recourse.](https://doi.org/10.1145/3593013.3594057)
- [Bias on Demand: A Modelling Framework That Generates Synthetic Data With Bias.](https://doi.org/10.1145/3593013.3594058)
- [ACROCPoLis: A Descriptive Framework for Making Sense of Fairness.](https://doi.org/10.1145/3593013.3594059)
- [Towards Labor Transparency in Situated Computational Systems Impact Research.](https://doi.org/10.1145/3593013.3594060)
- [Measuring and mitigating voting access disparities: a study of race and polling locations in Florida and North Carolina.](https://doi.org/10.1145/3593013.3594061)
- [Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?](https://doi.org/10.1145/3593013.3594062)
- [Ethical considerations in the early detection of Alzheimer's disease using speech and AI.](https://doi.org/10.1145/3593013.3594063)
- [Co-Design Perspectives on Algorithm Transparency Reporting: Guidelines and Prototypes.](https://doi.org/10.1145/3593013.3594064)
- [More Data Types More Problems: A Temporal Analysis of Complexity, Stability, and Sensitivity in Privacy Policies.](https://doi.org/10.1145/3593013.3594065)
- [Emotions and Dynamic Assemblages: A Study of Automated Social Security Using Qualitative Longitudinal Research.](https://doi.org/10.1145/3593013.3594066)
- [Regulating ChatGPT and other Large Generative AI Models.](https://doi.org/10.1145/3593013.3594067)
- [On the Richness of Calibration.](https://doi.org/10.1145/3593013.3594068)
- [The role of explainable AI in the context of the AI Act.](https://doi.org/10.1145/3593013.3594069)
- [The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers.](https://doi.org/10.1145/3593013.3594070)
- [Going public: the role of public participation approaches in commercial AI labs.](https://doi.org/10.1145/3593013.3594071)
- [Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias.](https://doi.org/10.1145/3593013.3594072)
- [Understanding accountability in algorithmic supply chains.](https://doi.org/10.1145/3593013.3594073)
- [Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK.](https://doi.org/10.1145/3593013.3594074)
- [Disentangling and Operationalizing AI Fairness at LinkedIn.](https://doi.org/10.1145/3593013.3594075)
- [Enhancing AI fairness through impact assessment in the European Union: a legal and computer science perspective.](https://doi.org/10.1145/3593013.3594076)
- ["I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation.](https://doi.org/10.1145/3593013.3594078)
- [AI Regulation Is (not) All You Need.](https://doi.org/10.1145/3593013.3594079)
- [Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation.](https://doi.org/10.1145/3593013.3594080)
- [The Devil is in the Details: Interrogating Values Embedded in the Allegheny Family Screening Tool.](https://doi.org/10.1145/3593013.3594081)
- [A Systematic Review of Ethics Disclosures in Predictive Mental Health Research.](https://doi.org/10.1145/3593013.3594082)
- [An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature.](https://doi.org/10.1145/3593013.3594083)
- [A Sociotechnical Audit: Assessing Police Use of Facial Recognition.](https://doi.org/10.1145/3593013.3594084)
- [Fairer Together: Mitigating Disparate Exposure in Kemeny Rank Aggregation.](https://doi.org/10.1145/3593013.3594085)
- [Can Querying for Bias Leak Protected Attributes? Achieving Privacy With Smooth Sensitivity.](https://doi.org/10.1145/3593013.3594086)
- [Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies.](https://doi.org/10.1145/3593013.3594087)
- [(Anti)-Intentional Harms: The Conceptual Pitfalls of Emotion AI in Education.](https://doi.org/10.1145/3593013.3594088)
- [Organizational Governance of Emerging Technologies: AI Adoption in Healthcare.](https://doi.org/10.1145/3593013.3594089)
- [Navigating the Audit Landscape: A Framework for Developing Transparent and Auditable XR.](https://doi.org/10.1145/3593013.3594090)
- [Group fairness without demographics using social networks.](https://doi.org/10.1145/3593013.3594091)
- [Taking Algorithms to Courts: A Relational Approach to Algorithmic Accountability.](https://doi.org/10.1145/3593013.3594092)
- [Co-Designing for Transparency: Lessons from Building a Document Organization Tool in the Criminal Justice Domain.](https://doi.org/10.1145/3593013.3594093)
- [Examining risks of racial biases in NLP tools for child protective services.](https://doi.org/10.1145/3593013.3594094)
- [Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale.](https://doi.org/10.1145/3593013.3594095)
- [What's fair is... fair? Presenting JustEFAB, an ethical framework for operationalizing medical ethics and social justice in the integration of clinical machine learning: JustEFAB.](https://doi.org/10.1145/3593013.3594096)
- [Personalized Pricing with Group Fairness Constraint.](https://doi.org/10.1145/3593013.3594097)
- [Auditing Cross-Cultural Consistency of Human-Annotated Labels for Recommendation Systems.](https://doi.org/10.1145/3593013.3594098)
- [The Progression of Disparities within the Criminal Justice System: Differential Enforcement and Risk Assessment Instruments.](https://doi.org/10.1145/3593013.3594099)
- [The Misuse of AUC: What High Impact Risk Assessment Gets Wrong.](https://doi.org/10.1145/3593013.3594100)
- [Counterfactual Prediction Under Outcome Measurement Error.](https://doi.org/10.1145/3593013.3594101)
- [Improving Fairness in AI Models on Electronic Health Records: The Case for Federated Learning Methods.](https://doi.org/10.1145/3593013.3594102)
- [Arbitrary Decisions are a Hidden Cost of Differentially Private Training.](https://doi.org/10.1145/3593013.3594103)
- [Interrogating the T in FAccT.](https://doi.org/10.1145/3593013.3594104)
- [Reducing Access Disparities in Networks using Edge Augmentation.](https://doi.org/10.1145/3593013.3594105)
- [The Many Faces of Fairness: Exploring the Institutional Logics of Multistakeholder Microlending Recommendation.](https://doi.org/10.1145/3593013.3594106)
- [Cross-Institutional Transfer Learning for Educational Models: Implications for Model Performance, Fairness, and Equity.](https://doi.org/10.1145/3593013.3594107)
- [Help or Hinder? Evaluating the Impact of Fairness Metrics and Algorithms in Visualizations for Consensus Ranking.](https://doi.org/10.1145/3593013.3594108)
- [Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks.](https://doi.org/10.1145/3593013.3594109)
- [Representation, Self-Determination, and Refusal: Queer People's Experiences with Targeted Advertising.](https://doi.org/10.1145/3593013.3594110)
- [Capturing Humans' Mental Models of AI: An Item Response Theory Approach.](https://doi.org/10.1145/3593013.3594111)
- [Representation Online Matters: Practical End-to-End Diversification in Search and Recommender Systems.](https://doi.org/10.1145/3593013.3594112)
- [Using Supervised Learning to Estimate Inequality in the Size and Persistence of Income Shocks.](https://doi.org/10.1145/3593013.3594113)
- [Skin Deep: Investigating Subjectivity in Skin Tone Annotations for Computer Vision Benchmark Datasets.](https://doi.org/10.1145/3593013.3594114)
- [Discrimination through Image Selection by Job Advertisers on Facebook.](https://doi.org/10.1145/3593013.3594115)
- [On The Impact of Machine Learning Randomness on Group Fairness.](https://doi.org/10.1145/3593013.3594116)
- [Detection and Mitigation of Algorithmic Bias via Predictive Parity.](https://doi.org/10.1145/3593013.3594117)
- [Fairness Auditing in Urban Decisions using LP-based Data Combination.](https://doi.org/10.1145/3593013.3594118)
- [The Slow Violence of Surveillance Capitalism: How Online Behavioral Advertising Harms People.](https://doi.org/10.1145/3593013.3594119)
- [Bias as Boundary Object: Unpacking The Politics Of An Austerity Algorithm Using Bias Frameworks.](https://doi.org/10.1145/3593013.3594120)
- [Legal Taxonomies of Machine Bias: Revisiting Direct Discrimination.](https://doi.org/10.1145/3593013.3594121)
- [Achieving Diversity in Counterfactual Explanations: a Review and Discussion.](https://doi.org/10.1145/3593013.3594122)
- [Disparities in Text-to-Image Model Concept Possession Across Languages.](https://doi.org/10.1145/3593013.3594123)
- [Reconciling Governmental Use of Online Targeting With Democracy.](https://doi.org/10.1145/3593013.3594133)
- [Queer In AI: A Case Study in Community-Led Participatory AI.](https://doi.org/10.1145/3593013.3594134)

### [ICLR 2023](https://dblp.uni-trier.de/db/conf/iclr/iclr2023.html)

### [ICML 2023](https://dblp.uni-trier.de/db/conf/icml/icml2023.html)

### [IJCAI 2023](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2023.html)

### [KDD 2023](https://dblp.uni-trier.de/db/conf/kdd/kdd2023.html)

### [NeurIPS 2023](https://dblp.uni-trier.de/db/conf/nips/neurips2023.html)

### [SDM 2023](https://dblp.uni-trier.de/db/conf/sdm/sdm2023.html)

### [AISTATS 2023](https://dblp.uni-trier.de/db/conf/aistats/aistats2023.html)

- [Toward Fairness in Text Generation via Mutual Information Minimization based on Importance Sampling.](https://proceedings.mlr.press/v206/wang23c.html)
- [Mean Parity Fair Regression in RKHS.](https://proceedings.mlr.press/v206/wei23a.html)
- [Fair Representation Learning with Unreliable Labels.](https://proceedings.mlr.press/v206/zhang23g.html)
- [Efficient fair PCA for fair representation learning.](https://proceedings.mlr.press/v206/kleindessner23a.html)
- [Revisiting Fair-PAC Learning and the Axioms of Cardinal Welfare.](https://proceedings.mlr.press/v206/cousins23a.html)
- [Scalable Spectral Clustering with Group Fairness Constraints.](https://proceedings.mlr.press/v206/wang23h.html)
- [Fast Feature Selection with Fairness Constraints.](https://proceedings.mlr.press/v206/quinzan23a.html)
- [Improved Approximation for Fair Correlation Clustering.](https://proceedings.mlr.press/v206/ahmadian23a.html)
- [MMD-B-Fair: Learning Fair Representations with Statistical Testing.](https://proceedings.mlr.press/v206/deka23a.html)
- [Doubly Fair Dynamic Pricing.](https://proceedings.mlr.press/v206/xu23i.html)
- [Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints.](https://proceedings.mlr.press/v206/yao23b.html)
- [Reinforcement Learning with Stepwise Fairness Constraints.](https://proceedings.mlr.press/v206/deng23a.html)
- [Uncertainty Estimates of Predictions via a General Bias-Variance Decomposition.](https://proceedings.mlr.press/v206/gruber23a.html)

### [WWW 2023](https://dblp.uni-trier.de/db/conf/www/www2023.html)

### [WSDM 2023](https://dblp.uni-trier.de/db/conf/wsdm/wsdm2023.html)

### Others 2023

--------------------------------------------------------------------------------
/conference.md:
--------------------------------------------------------------------------------
1 | # Conference Papers
2 |
3 | ## 2022
4 |
5 | ### [AAAI 2022](https://dblp.uni-trier.de/db/conf/aaai/aaai2022.html)
6 |
7 | - [A Random CNN Sees Objects: One Inductive Bias of CNN and Its Applications.](https://ojs.aaai.org/index.php/AAAI/article/view/19894)
8 | - [Resistance Training Using Prior Bias: Toward Unbiased Scene Graph Generation.](https://ojs.aaai.org/index.php/AAAI/article/view/19896)
9 | - [Unbiased IoU for Spherical Image Object Detection.](https://ojs.aaai.org/index.php/AAAI/article/view/19929)
10 | - [LAGConv: Local-Context Adaptive Convolution Kernels with Global Harmonic Bias for Pansharpening.](https://ojs.aaai.org/index.php/AAAI/article/view/19996)
11 | - [A Causal Debiasing Framework for Unsupervised Salient Object Detection.](https://ojs.aaai.org/index.php/AAAI/article/view/20052)
12 | - [Debiased Batch Normalization via Gaussian Process for Generalizable Person Re-identification.](https://ojs.aaai.org/index.php/AAAI/article/view/20065)
13 | - [Information-Theoretic Bias Reduction via Causal View of Spurious Correlation.](https://ojs.aaai.org/index.php/AAAI/article/view/20115)
14 | - [Cross-Domain Empirical Risk Minimization for Unbiased Long-Tailed Classification.](https://ojs.aaai.org/index.php/AAAI/article/view/20271)
15 | - [Unifying Knowledge Base Completion with PU Learning to Mitigate the Observation Bias.](https://ojs.aaai.org/index.php/AAAI/article/view/20332)
16 | - [Locally Fair Partitioning.](https://ojs.aaai.org/index.php/AAAI/article/view/20401)
17 | - [Fair and Truthful Giveaway Lotteries.](https://ojs.aaai.org/index.php/AAAI/article/view/20405)
18 | - [Truthful and Fair Mechanisms for Matroid-Rank Valuations.](https://ojs.aaai.org/index.php/AAAI/article/view/20407)
19 | - [A Little Charity Guarantees Fair Connected Graph Partitioning.](https://ojs.aaai.org/index.php/AAAI/article/view/20420)
20 | - [Weighted Fairness Notions for Indivisible Items Revisited.](https://ojs.aaai.org/index.php/AAAI/article/view/20425)
21 | - [Fair and Efficient Allocations of Chores under Bivalued Preferences.](https://ojs.aaai.org/index.php/AAAI/article/view/20436)
22 | - [Improved Maximin Guarantees for Subadditive and Fractionally Subadditive Fair Allocation Problem.](https://ojs.aaai.org/index.php/AAAI/article/view/20453)
23 | - [On Testing for Discrimination Using Causal Models.](https://ojs.aaai.org/index.php/AAAI/article/view/20494)
24 | - [Online Certification of Preference-Based Fairness for Personalized Recommender Systems.](https://ojs.aaai.org/index.php/AAAI/article/view/20606)
25 | - [Modification-Fair Cluster Editing.](https://ojs.aaai.org/index.php/AAAI/article/view/20617)
26 | - [Recovering the Propensity Score from Biased Positive Unlabeled Data.](https://ojs.aaai.org/index.php/AAAI/article/view/20624)
27 | - [Achieving Counterfactual Fairness for Causal Bandit.](https://ojs.aaai.org/index.php/AAAI/article/view/20653)
28 | - [Group-Aware Threshold Adaptation for Fair Classification.](https://ojs.aaai.org/index.php/AAAI/article/view/20657)
29 | - [Spatial Frequency Bias in Convolutional Generative Adversarial Networks.](https://ojs.aaai.org/index.php/AAAI/article/view/20675)
30 | - [A Computable Definition of the Spectral Bias.](https://ojs.aaai.org/index.php/AAAI/article/view/20677)
31 | - [Gradient Based Activations for Accurate Bias-Free Learning.](https://ojs.aaai.org/index.php/AAAI/article/view/20687)
32 | - [Fast and Efficient MMD-Based Fair PCA via Optimization over Stiefel Manifold.](https://ojs.aaai.org/index.php/AAAI/article/view/20699)
33 | - [Covered Information Disentanglement: Model Transparency via Unbiased Permutation Importance.](https://ojs.aaai.org/index.php/AAAI/article/view/20769)
34 | - [On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints.](https://ojs.aaai.org/index.php/AAAI/article/view/20770)
35 | - [Powering Finetuning in Few-Shot Learning: Domain-Agnostic Bias Reduction with Selected Sampling.](https://ojs.aaai.org/index.php/AAAI/article/view/20823)
36 | - [Controlling Underestimation Bias in Reinforcement Learning via Quasi-median Operation.](https://ojs.aaai.org/index.php/AAAI/article/view/20840)
37 | - [Cooperative Multi-Agent Fairness and Equivariant Policies.](https://ojs.aaai.org/index.php/AAAI/article/view/21166)
38 | - [Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness.](https://ojs.aaai.org/index.php/AAAI/article/view/21182)
39 | - [Towards Debiasing DNN Models from Spurious Feature Influence.](https://ojs.aaai.org/index.php/AAAI/article/view/21185)
40 | - [Algorithmic Fairness Verification with Graphical Models.](https://ojs.aaai.org/index.php/AAAI/article/view/21187)
41 | - [Achieving Long-Term Fairness in Sequential Decision Making.](https://ojs.aaai.org/index.php/AAAI/article/view/21188)
42 | - [Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values.](https://ojs.aaai.org/index.php/AAAI/article/view/21189)
43 | - [On the Fairness of Causal Algorithmic Recourse.](https://ojs.aaai.org/index.php/AAAI/article/view/21192)
44 | - [Mitigating Reporting Bias in Semi-supervised Temporal Commonsense Inference with Probabilistic Soft Logic.](https://ojs.aaai.org/index.php/AAAI/article/view/21288)
45 | - [Attention Biasing and Context Augmentation for Zero-Shot Control of Encoder-Decoder Transformers for Natural Language Generation.](https://ojs.aaai.org/index.php/AAAI/article/view/21319)
46 | - [KATG: Keyword-Bias-Aware Adversarial Text Generation for Text Classification.](https://ojs.aaai.org/index.php/AAAI/article/view/21380)
47 | - [Debiasing NLU Models via Causal Intervention and Counterfactual Reasoning.](https://ojs.aaai.org/index.php/AAAI/article/view/21389)
48 | - [Socially Fair Mitigation of Misinformation on Social Networks via Constraint Stochastic Optimization.](https://ojs.aaai.org/index.php/AAAI/article/view/21436)
49 | - [Interpreting Gender Bias in Neural Machine Translation: Multilingual Architecture Matters.](https://ojs.aaai.org/index.php/AAAI/article/view/21442)
50 | - [Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving.](https://ojs.aaai.org/index.php/AAAI/article/view/21443)
51 | - [Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search.](https://ojs.aaai.org/index.php/AAAI/article/view/21445)
52 | - [FairFoody: Bringing In Fairness in Food Delivery.](https://ojs.aaai.org/index.php/AAAI/article/view/21447)
53 | - [Gradual (In)Compatibility of Fairness Criteria.](https://ojs.aaai.org/index.php/AAAI/article/view/21450)
54 | - [Unmasking the Mask - Evaluating Social Biases in Masked Language Models.](https://ojs.aaai.org/index.php/AAAI/article/view/21453)
55 | - [CrossWalk: Fairness-Enhanced Node Representation Learning.](https://ojs.aaai.org/index.php/AAAI/article/view/21454)
56 | - [Fair Conformal Predictors for Applications in Medical Imaging.](https://ojs.aaai.org/index.php/AAAI/article/view/21459)
57 | - [Investigations of Performance and Bias in Human-AI Teamwork in Hiring.](https://ojs.aaai.org/index.php/AAAI/article/view/21468)
58 | - [Fairness by "Where": A Statistically-Robust and Model-Agnostic Bi-level Learning Framework.](https://ojs.aaai.org/index.php/AAAI/article/view/21481)
59 | - [Longitudinal Fairness with Censorship.](https://ojs.aaai.org/index.php/AAAI/article/view/21484)
60 | - [Target Languages (vs. Inductive Biases) for Learning to Act and Plan.](https://ojs.aaai.org/index.php/AAAI/article/view/21497)
61 | - [Anatomizing Bias in Facial Analysis.](https://ojs.aaai.org/index.php/AAAI/article/view/21500)
62 | - [Combating Sampling Bias: A Self-Training Method in Credit Risk Models.](https://ojs.aaai.org/index.php/AAAI/article/view/21528)
63 | - [Reproducibility as a Mechanism for Teaching Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence.](https://ojs.aaai.org/index.php/AAAI/article/view/21558)
64 | - [Deep Representation Debiasing via Mutual Information Minimization and Maximization (Student Abstract).](https://ojs.aaai.org/index.php/AAAI/article/view/21619)
65 | - [LITMUS Predictor: An AI Assistant for Building Reliable, High-Performing and Fair Multilingual NLP Systems.](https://ojs.aaai.org/index.php/AAAI/article/view/21736)
66 |
67 | ### [AIES 2022](https://www.aies-conference.com/2022/)
68 |
69 | - [The Limits of Fairness.](https://doi.org/10.1145/3514094.3539568)
70 | - [Beyond Fairness and Explanation: Foundations of Trustworthiness of Artificial Agents.](https://doi.org/10.1145/3514094.3539570)
71 | - [Long-term Dynamics of Fairness Intervention in Connection Recommender Systems.](https://doi.org/10.1145/3514094.3534173)
72 | - [SCALES: From Fairness Principles to Constrained Decision-Making.](https://doi.org/10.1145/3514094.3534190)
73 | - [Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation.](https://doi.org/10.1145/3514094.3534158)
74 | - [FINS Auditing Framework: Group Fairness for Subset Selections.](https://doi.org/10.1145/3514094.3534160)
75 | - [Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics.](https://doi.org/10.1145/3514094.3534162)
76 | - [Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations.](https://doi.org/10.1145/3514094.3534159)
77 | - [Does AI De-Bias Recruitment?: Race, Gender, and AI's 'Eradication of Differences Between Groups'.](https://doi.org/10.1145/3514094.3534151)
78 | - [An Ontology for Fairness Metrics.](https://doi.org/10.1145/3514094.3534137)
79 | - [Understanding Decision Subjects' Fairness Perceptions and Retention in Repeated Interactions with AI-Based Decision Systems.](https://doi.org/10.1145/3514094.3534201)
80 | - [FairCanary: Rapid Continuous Explainable Fairness.](https://doi.org/10.1145/3514094.3534157)
81 | - [Learning Fairer Interventions.](https://doi.org/10.1145/3514094.3534172)
82 | - [Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy.](https://doi.org/10.1145/3514094.3534188)
83 | - [Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation.](https://doi.org/10.1145/3514094.3534154)
84 | - [Data-Centric Factors in Algorithmic Fairness.](https://doi.org/10.1145/3514094.3534147)
85 | - [Towards Better Detection of Biased Language with Scarce, Noisy, and Biased Annotations.](https://doi.org/10.1145/3514094.3534142)
86 | - [Investigating Debiasing Effects on Classification and Explainability.](https://doi.org/10.1145/3514094.3534170)
87 | - [Contrastive Counterfactual Fairness in Algorithmic Decision-Making.](https://doi.org/10.1145/3514094.3534143)
88 | - [Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals.](https://doi.org/10.1145/3514094.3534176)
89 | - [A Dynamic Decision-Making Framework Promoting Long-Term Fairness.](https://doi.org/10.1145/3514094.3534127)
90 | - [A Bio-Inspired Framework for Machine Bias Interpretation.](https://doi.org/10.1145/3514094.3534126)
91 | - [Algorithms that "Don't See Color": Measuring Biases in Lookalike and Special Ad Audiences.](https://doi.org/10.1145/3514094.3534135)
92 | - [From Coded Bias to Existential Threat: Expert Frames and the Epistemic Politics of AI Governance.](https://doi.org/10.1145/3514094.3534161)
93 | - [Strategic Best Response Fairness in Fair Machine Learning.](https://doi.org/10.1145/3514094.3534194)
94 | - [Explainability's Gain is Optimality's Loss?: How Explanations Bias Decision-making.](https://doi.org/10.1145/3514094.3534156)
95 | - [Enhancing Fairness in Face Detection in Computer Vision Systems by Demographic Bias Mitigation.](https://doi.org/10.1145/3514094.3534153)
96 | - [Identifying Bias in Data Using Two-Distribution Hypothesis Tests.](https://doi.org/10.1145/3514094.3534169)
97 | - [Why is my System Biased?: Rating of AI Systems through a Causal Lens.](https://doi.org/10.1145/3514094.3539556)
98 | - [Socially-Aware Artificial Intelligence for Fair Mobility.](https://doi.org/10.1145/3514094.3539545)
99 | - [Bias in Artificial Intelligence Models in Financial Services.](https://doi.org/10.1145/3514094.3539561)
100 | - [Bias in Hate Speech and Toxicity Detection.](https://doi.org/10.1145/3514094.3539519)
101 | - [What's (Not) Ideal about Fair Machine Learning?](https://doi.org/10.1145/3514094.3539543)
102 | - [Fair, Robust, and Data-Efficient Machine Learning in Healthcare.](https://doi.org/10.1145/3514094.3539552)
103 |
104 | ### [CIKM 2022](https://dblp.uni-trier.de/db/conf/cikm/cikm2022.html)
105 |
106 | - [RAGUEL: Recourse-Aware Group Unfairness Elimination.](https://doi.org/10.1145/3511808.3557424)
107 | - [Quantifying and Mitigating Popularity Bias in Conversational Recommender Systems.](https://doi.org/10.1145/3511808.3557423)
108 | - [Debiased Balanced Interleaving at Amazon Search.](https://doi.org/10.1145/3511808.3557123)
109 | - [Mitigating Biases in Student Performance Prediction via Attention-Based Personalized Federated Learning.](https://doi.org/10.1145/3511808.3557108)
110 | - [Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions.](https://doi.org/10.1145/3511808.3557155)
111 | - [Towards Fairer Classifier via True Fairness Score Path.](https://doi.org/10.1145/3511808.3557109)
112 | - [Incorporating Fairness in Large-scale Evacuation Planning.](https://doi.org/10.1145/3511808.3557075)
113 | - [Causal Intervention for Sentiment De-biasing in Recommendation.](https://doi.org/10.1145/3511808.3557558)
114 | - [Debiasing Neighbor Aggregation for Graph Neural Network in Recommender Systems.](https://doi.org/10.1145/3511808.3557576)
115 | - [Do Graph Neural Networks Build Fair User Models? Assessing Disparate Impact and Mistreatment in Behavioural User Profiling.](https://doi.org/10.1145/3511808.3557584)
116 | - [Balancing Utility and Exposure Fairness for Integrated Ranking with Reinforcement Learning.](https://doi.org/10.1145/3511808.3557551)
117 | - [Visual Encoding and Debiasing for CTR Prediction.](https://doi.org/10.1145/3511808.3557721)
118 | - [How Does the Crowd Impact the Model? A Tool for Raising Awareness of Social Bias in Crowdsourced Training Data.](https://doi.org/10.1145/3511808.3557178)
119 |
120 | ### [FAT\* 2022](https://dblp.uni-trier.de/db/conf/fat/fat2022.html)
121 |
122 |
123 | ### [ICLR 2022](https://dblp.uni-trier.de/db/conf/iclr/iclr2022.html)
124 |
125 | - [Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning.](https://openreview.net/forum?id=js62_xuLDDv)
126 | - [Fair Normalizing Flows.](https://openreview.net/forum?id=BrFIKuxrZE)
127 | - [Distributionally Robust Fair Principal Components via Geodesic Descents.](https://openreview.net/forum?id=9NVd-DMtThY)
128 | - [FairCal: Fairness Calibration for Face Verification.](https://openreview.net/forum?id=nRj0NcmSuxb)
129 | - [Fairness Guarantees under Demographic Shift.](https://openreview.net/forum?id=wbPObLm6ueA)
130 | - [Generalized Demographic Parity for Group Fairness.](https://openreview.net/forum?id=YigKlMJwjye)
131 | - [Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling.](https://openreview.net/forum?id=-llS6TiOew)
132 |
133 | ### [ICDM 2022](https://dblp.uni-trier.de/db/conf/icdm/icdm2022.html)
134 |
135 | ### [ICML 2022](https://dblp.uni-trier.de/db/conf/icml/icml2022.html)
136 |
137 | - [Active Sampling for Min-Max Fairness.](https://proceedings.mlr.press/v162/abernethy22a.html)
138 | - [Fair and Fast k-Center Clustering for Data Summarization.](https://proceedings.mlr.press/v162/angelidakis22a.html)
139 | - [On the Hidden Biases of Policy Mirror Ascent in Continuous Action Spaces.](https://proceedings.mlr.press/v162/bedi22a.html)
140 | - [Skin Deep Unlearning: Artefact and Instrument Debiasing in the Context of Melanoma Classification.](https://proceedings.mlr.press/v162/bevan22a.html)
141 | - [Fairness with Adaptive Weights.](https://proceedings.mlr.press/v162/chai22a.html)
142 | - [The Poisson Binomial Mechanism for Unbiased Federated Learning with Secure Aggregation.](https://proceedings.mlr.press/v162/chen22s.html)
143 | - [RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests.](https://proceedings.mlr.press/v162/chernozhukov22a.html)
144 | - [Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model.](https://proceedings.mlr.press/v162/conti22a.html)
145 | - [Fair Generalized Linear Models with a Convex Penalty.](https://proceedings.mlr.press/v162/do22a.html)
146 | - [Fast rates for noisy interpolation require rethinking the effect of inductive bias.](https://proceedings.mlr.press/v162/donhauser22a.html)
147 | - [Inductive Biases and Variable Creation in Self-Attention Mechanisms.](https://proceedings.mlr.press/v162/edelman22a.html)
148 | - [Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness.](https://proceedings.mlr.press/v162/foster22a.html)
149 | - [Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology.](https://proceedings.mlr.press/v162/hofmann22a.html)
150 | - [Input-agnostic Certified Group Fairness via Gaussian Parameter Smoothing.](https://proceedings.mlr.press/v162/jin22g.html)
151 | - [Learning fair representation with a parametric integral probability metric.](https://proceedings.mlr.press/v162/kim22b.html)
152 | - [Implicit Bias of Linear Equivariant Networks.](https://proceedings.mlr.press/v162/lawrence22a.html)
153 | - [Achieving Fairness at No Utility Cost via Data Reweighing with Influence.](https://proceedings.mlr.press/v162/li22p.html)
154 | - [Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension.](https://proceedings.mlr.press/v162/loureiro22a.html)
155 | - [ModLaNets: Learning Generalisable Dynamics via Modularity and Physical Inductive Bias.](https://proceedings.mlr.press/v162/lu22c.html)
156 | - [Rethinking Fano's Inequality in Ensemble Learning.](https://proceedings.mlr.press/v162/morishita22a.html)
157 | - [Implicit Bias of the Step Size in Linear Diagonal Neural Networks.](https://proceedings.mlr.press/v162/nacson22a.html)
158 | - [The Primacy Bias in Deep Reinforcement Learning.](https://proceedings.mlr.press/v162/nikishin22a.html)
159 | - [Causal Conceptions of Fairness and their Consequences.](https://proceedings.mlr.press/v162/nilforoshan22a.html)
160 | - [Debiaser Beware: Pitfalls of Centering Regularized Transport Maps.](https://proceedings.mlr.press/v162/pooladian22a.html)
161 | - [A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1.](https://proceedings.mlr.press/v162/salim22a.html)
162 | - [Understanding Contrastive Learning Requires Incorporating Inductive Biases.](https://proceedings.mlr.press/v162/saunshi22a.html)
163 | - [Selective Regression under Fairness Criteria.](https://proceedings.mlr.press/v162/shah22a.html)
164 | - [Metric-Fair Active Learning.](https://proceedings.mlr.press/v162/shen22b.html)
165 | - [Fair Representation Learning through Implicit Path Alignment.](https://proceedings.mlr.press/v162/shui22a.html)
166 |
167 | ### [IJCAI 2022](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2022.html)
168 |
169 | - [Individual Fairness Guarantees for Neural Networks.](https://doi.org/10.24963/ijcai.2022/92)
170 | - [How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?](https://doi.org/10.24963/ijcai.2022/93)
171 | - [SoFaiR: Single Shot Fair Representation Learning.](https://doi.org/10.24963/ijcai.2022/97)
172 | - [Fairness without the Sensitive Attribute via Causal Variational Autoencoder.](https://doi.org/10.24963/ijcai.2022/98)
173 | - [Counterfactual Interpolation Augmentation (CIA): A Unified Approach to Enhance Fairness and Explainability of DNN.](https://doi.org/10.24963/ijcai.2022/103)
174 | - [Post-processing of Differentially Private Data: A Fairness Perspective.](https://doi.org/10.24963/ijcai.2022/559)
175 | - [Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey.](https://doi.org/10.24963/ijcai.2022/766)
176 | - [Extending Decision Tree to Handle Multiple Fairness Criteria.](https://doi.org/10.24963/ijcai.2022/822)
177 |
178 | ### [KDD 2022](https://dblp.uni-trier.de/db/conf/kdd/kdd2022.html)
179 |
180 | - [Avoiding Biases due to Similarity Assumptions in Node Embeddings.](https://doi.org/10.1145/3534678.3539287)
181 | - [Scalar is Not Enough: Vectorization-based Unbiased Learning to Rank.](https://doi.org/10.1145/3534678.3539468)
182 | - [A Generalized Doubly Robust Learning Framework for Debiasing Post-Click Conversion Rate Prediction.](https://doi.org/10.1145/3534678.3539270)
183 | - [Debiasing the Cloze Task in Sequential Recommendation with Bidirectional Transformers.](https://doi.org/10.1145/3534678.3539430)
184 | - [On Structural Explanation of Bias in Graph Neural Networks.](https://doi.org/10.1145/3534678.3539319)
185 | - [Fair Labeled Clustering.](https://doi.org/10.1145/3534678.3539451)
186 | - [Fair Representation Learning: An Alternative to Mutual Information.](https://doi.org/10.1145/3534678.3539302)
187 | - [UD-GNN: Uncertainty-aware Debiased Training on Semi-Homophilous Graphs.](https://doi.org/10.1145/3534678.3539483)
188 | - [Learning Fair Representation via Distributional Contrastive Disentanglement.](https://doi.org/10.1145/3534678.3539232)
189 | - [Fair and Interpretable Models for Survival Analysis.](https://doi.org/10.1145/3534678.3539259)
190 | - [Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking.](https://doi.org/10.1145/3534678.3539353)
191 | - [Balancing Bias and Variance for Active Weakly Supervised Learning.](https://doi.org/10.1145/3534678.3539264)
192 | - [GUIDE: Group Equality Informed Individual Fairness in Graph Neural Networks.](https://doi.org/10.1145/3534678.3539346)
193 | - [Clustering with Fair-Center Representation: Parameterized Approximation Algorithms and Heuristics.](https://doi.org/10.1145/3534678.3539487)
194 | - [Make Fairness More Fair: Fair Item Utility Estimation and Exposure Re-Distribution.](https://doi.org/10.1145/3534678.3539354)
195 | - [Partial Label Learning with Discrimination Augmentation.](https://doi.org/10.1145/3534678.3539363)
196 | - [Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage.](https://doi.org/10.1145/3534678.3539404)
197 | - [Debiasing Learning for Membership Inference Attacks Against Recommender Systems.](https://doi.org/10.1145/3534678.3539392)
198 | - [Invariant Preference Learning for General Debiasing in Recommendation.](https://doi.org/10.1145/3534678.3539439)
199 | - [Comprehensive Fair Meta-learned Recommender System.](https://doi.org/10.1145/3534678.3539269)
200 | - [Counteracting User Attention Bias in Music Streaming Recommendation via Reward Modification.](https://doi.org/10.1145/3534678.3539393)
201 | - [Adaptive Fairness-Aware Online Meta-Learning for Changing Environments.](https://doi.org/10.1145/3534678.3539420)
202 | - [Optimizing Long-Term Efficiency and Fairness in Ride-Hailing via Joint Order Dispatching and Driver Repositioning.](https://doi.org/10.1145/3534678.3539060)
203 | - [CausalMTA: Eliminating the User Confounding Bias for Causal Multi-touch Attribution.](https://doi.org/10.1145/3534678.3539108)
204 | - [Deconfounding Duration Bias in Watch-time Prediction for Video Recommendation.](https://doi.org/10.1145/3534678.3539092)
205 | - [Why Data Scientists Prefer Glassbox Machine Learning: Algorithms, Differential Privacy, Editing and Bias Mitigation.](https://doi.org/10.1145/3534678.3542627)
206 | - [The Battlefront of Combating Misinformation and Coping with Media Bias.](https://doi.org/10.1145/3534678.3542615)
207 | - [Algorithmic Fairness on Graphs: Methods and Trends.](https://doi.org/10.1145/3534678.3542599)
208 | - [Temporal Graph Learning for Financial World: Algorithms, Scalability, Explainability & Fairness.](https://doi.org/10.1145/3534678.3542619)
209 |
210 | ### [NeurIPS 2022](https://dblp.uni-trier.de/db/conf/nips/neurips2022.html)
211 |
212 | - [A Large Scale Search Dataset for Unbiased Learning to Rank.](http://papers.nips.cc/paper_files/paper/2022/hash/07f560092a0edceabf55af32a40eaee3-Abstract-Datasets_and_Benchmarks.html)
213 | - [Counterfactual Fairness with Partially Known Causal Graph.](http://papers.nips.cc/paper_files/paper/2022/hash/08887999616116910fccec17a63584b5-Abstract-Conference.html)
214 | - [Adaptive Data Debiasing through Bounded Exploration.](http://papers.nips.cc/paper_files/paper/2022/hash/0a166a3d98720697d9028bbe592fa177-Abstract-Conference.html)
215 | - [A Reduction to Binary Approach for Debiasing Multiclass Datasets.](http://papers.nips.cc/paper_files/paper/2022/hash/10eaa0aae94b34308e9b3fa7b677cbe1-Abstract-Conference.html)
216 | - [Combinatorial Bandits with Linear Constraints: Beyond Knapsacks and Fairness.](http://papers.nips.cc/paper_files/paper/2022/hash/13f17f74ec061f1e3e231aca9a43ff23-Abstract-Conference.html)
217 | - [Debiased Machine Learning without Sample-Splitting for Stable Estimators.](http://papers.nips.cc/paper_files/paper/2022/hash/1498a03a04f9bcd3a7d44058fc5dc639-Abstract-Conference.html)
218 | - [Is Sortition Both Representative and Fair?](http://papers.nips.cc/paper_files/paper/2022/hash/165bbd0a0a1b9470ec34d5afec582d2e-Abstract-Conference.html)
219 | - [Fairness in Federated Learning via Core-Stability.](http://papers.nips.cc/paper_files/paper/2022/hash/25e92e33ac8c35fd49f394c37f21b6da-Abstract-Conference.html)
220 | - [Spectral Bias in Practice: The Role of Function Frequency in Generalization.](http://papers.nips.cc/paper_files/paper/2022/hash/306264db5698839230be3642aafc849c-Abstract-Conference.html)
221 | - [FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning.](http://papers.nips.cc/paper_files/paper/2022/hash/333a7697dbb67f09249337f81c27d749-Abstract-Conference.html)
222 | - [Policy Optimization with Advantage Regularization for Long-Term Fairness in Decision Systems.](http://papers.nips.cc/paper_files/paper/2022/hash/36b76e1f69bbba80d3463f7d6c02bc3d-Abstract-Conference.html)
223 | - [Fairness Transferability Subject to Bounded Distribution Shift.](http://papers.nips.cc/paper_files/paper/2022/hash/4937610670be26d651ecdb4f2206d95f-Abstract-Conference.html)
224 | - [Conformalized Fairness via Quantile Regression.](http://papers.nips.cc/paper_files/paper/2022/hash/4b52b3c50110fc10f6a1a86055682ea2-Abstract-Conference.html)
225 | - [SelecMix: Debiased Learning by Contradicting-pair Sampling.](http://papers.nips.cc/paper_files/paper/2022/hash/5c6f928e3fc5f32ee29a1d916b68e6f5-Abstract-Conference.html)
226 | - [Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks.](http://papers.nips.cc/paper_files/paper/2022/hash/698c05933e5f7fde98e567a669d2c752-Abstract-Conference.html)
227 | - [Bounding and Approximating Intersectional Fairness through Marginal Fairness.](http://papers.nips.cc/paper_files/paper/2022/hash/6ae7df1f40f5faeda474b36b61197822-Abstract-Conference.html)
228 | - [All Politics is Local: Redistricting via Local Fairness.](http://papers.nips.cc/paper_files/paper/2022/hash/6f7fa4df2c8a79c164d3697898a32bd9-Abstract-Conference.html)
229 | - [The price of unfairness in linear bandits with biased feedback.](http://papers.nips.cc/paper_files/paper/2022/hash/74bb24dca8334adce292883b4b651eda-Abstract-Conference.html)
230 | - [Learning Debiased Classifier with Biased Committee.](http://papers.nips.cc/paper_files/paper/2022/hash/750046157471c56235a781f2eff6e226-Abstract-Conference.html)
231 | - [Fairness without Demographics through Knowledge Distillation.](http://papers.nips.cc/paper_files/paper/2022/hash/79dc391a2c1067e9ac2b764e31a60377-Abstract-Conference.html)
232 | - [Diagnosing failures of fairness transfer across distribution shift in real-world medical settings.](http://papers.nips.cc/paper_files/paper/2022/hash/7a969c30dc7e74d4e891c8ffb217cf79-Abstract-Conference.html)
233 | - [Fair Wrapping for Black-box Predictions.](http://papers.nips.cc/paper_files/paper/2022/hash/876b45367d9069f0e91e359c57155ab1-Abstract-Conference.html)
234 | - [Fair Rank Aggregation.](http://papers.nips.cc/paper_files/paper/2022/hash/974309ef51ebd89034adc64a57e304f2-Abstract-Conference.html)
235 | - [Group Meritocratic Fairness in Linear Contextual Bandits.](http://papers.nips.cc/paper_files/paper/2022/hash/9a1dab894ce96cb8339c2fadd85a100b-Abstract-Conference.html)
236 | - [Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure.](http://papers.nips.cc/paper_files/paper/2022/hash/9e47a0bc530cc88b09b7670d2c130a29-Abstract-Conference.html)
237 | - [On the Tradeoff Between Robustness and Fairness.](http://papers.nips.cc/paper_files/paper/2022/hash/a80ebbb4ec9e9b39789318a0a61e2e43-Abstract-Conference.html)
238 | - [Self-Supervised Fair Representation Learning without Demographics.](http://papers.nips.cc/paper_files/paper/2022/hash/ad991bbc381626a8e44dc5414aa136a8-Abstract-Conference.html)
239 | - [Fair Bayes-Optimal Classifiers Under Predictive Parity.](http://papers.nips.cc/paper_files/paper/2022/hash/b1d9c7e7bd265d81aae8d74a7a6bd7f1-Abstract-Conference.html)
240 | - [Debiased, Longitudinal and Coordinated Drug Recommendation through Multi-Visit Clinic Records.](http://papers.nips.cc/paper_files/paper/2022/hash/b295b3a940706f431076c86b78907757-Abstract-Conference.html)
241 | - [DeepMed: Semiparametric Causal Mediation Analysis with Debiased Deep Learning.](http://papers.nips.cc/paper_files/paper/2022/hash/b57939005a3cbe40f49b66a0efd6fc8c-Abstract-Conference.html)
242 | - [Domain Adaptation meets Individual Fairness. And they get along.](http://papers.nips.cc/paper_files/paper/2022/hash/b9e0ceee9751ae8b5c6603c029e4ca42-Abstract-Conference.html)
243 | - [Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness.](http://papers.nips.cc/paper_files/paper/2022/hash/c47e6286162ec5442e06fe2b7cb7145f-Abstract-Conference.html)
244 | - [Certifying Some Distributional Fairness with Subpopulation Decomposition.](http://papers.nips.cc/paper_files/paper/2022/hash/c8e9a2beb84ab1a616edb89581c4b32a-Abstract-Conference.html)
245 | - [Fair Ranking with Noisy Protected Attributes.](http://papers.nips.cc/paper_files/paper/2022/hash/cdd0640218a27e9e2c0e52e324e25db0-Abstract-Conference.html)
246 | - [Debiased Self-Training for Semi-Supervised Learning.](http://papers.nips.cc/paper_files/paper/2022/hash/d10d6b28d74c4f0fcab588feeb6fe7d6-Abstract-Conference.html)
247 | - [Uncovering the Structural Fairness in Graph Contrastive Learning.](http://papers.nips.cc/paper_files/paper/2022/hash/d13565c82d1e44eda2da3bd00b35ca11-Abstract-Conference.html)
248 | - [Transferring Fairness under Distribution Shifts via Fair Consistency Regularization.](http://papers.nips.cc/paper_files/paper/2022/hash/d1dbaabf454a479ca86309e66592c7f6-Abstract-Conference.html)
249 | - [Pushing the limits of fairness impossibility: Who's the fairest of them all?](http://papers.nips.cc/paper_files/paper/2022/hash/d3222559698f41247261b7a6c2bbaedc-Abstract-Conference.html)
250 | - [Turning the Tables: Biased, Imbalanced, Dynamic Tabular Datasets for ML Evaluation.](http://papers.nips.cc/paper_files/paper/2022/hash/d9696563856bd350e4e7ac5e5812f23c-Abstract-Datasets_and_Benchmarks.html)
251 | - [Optimal Transport of Classifiers to Fairness.](http://papers.nips.cc/paper_files/paper/2022/hash/da75d2bbf862b86f10241d0887613b41-Abstract-Conference.html)
252 | - [On Learning Fairness and Accuracy on Multiple Subgroups.](http://papers.nips.cc/paper_files/paper/2022/hash/dc96134e169de5aea1ba1fc34dfb8419-Abstract-Conference.html)
253 | - [Fairness Reprogramming.](http://papers.nips.cc/paper_files/paper/2022/hash/de08b3ee7c0043a76ee4a44fe68e90bc-Abstract-Conference.html)
254 | - [Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent.](http://papers.nips.cc/paper_files/paper/2022/hash/dfa1106ea7065899b13f2be9da04efb4-Abstract-Conference.html)
255 | - [Beyond Adult and COMPAS: Fair Multi-Class Prediction via Information Projection.](http://papers.nips.cc/paper_files/paper/2022/hash/fd5013ea0c3f96931dec77174eaf9d80-Abstract-Conference.html)
256 | - [Fair and Optimal Decision Trees: A Dynamic Programming Approach.](http://papers.nips.cc/paper_files/paper/2022/hash/fe248e22b241ae5a9adf11493c8c12bc-Abstract-Conference.html)
257 |
258 | ### [SDM 2022](https://dblp.uni-trier.de/db/conf/sdm/sdm2022.html)
259 |
260 |
261 | ### [UAI 2022](https://dblp.uni-trier.de/db/conf/uai/uai2022.html)
262 |
263 | - [Active approximately metric-fair learning.](https://proceedings.mlr.press/v180/cao22a.html)
264 | - [Quadratic metric elicitation for fairness and beyond.](https://proceedings.mlr.press/v180/hiranandani22a.html)
265 | - [Efficient resource allocation with fairness constraints in restless multi-armed bandits.](https://proceedings.mlr.press/v180/li22e.html)
266 | - [How unfair is private learning?](https://proceedings.mlr.press/v180/sanyal22a.html)
267 |
268 | ### [WWW 2022](https://dblp.uni-trier.de/db/conf/www/www2022.html)
269 |
270 | - [FairGAN: GANs-based Fairness-aware Learning for Recommendations with Implicit Feedback.](https://doi.org/10.1145/3485447.3511958)
271 | - [EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks.](https://doi.org/10.1145/3485447.3512173)
272 | - [Fair k-Center Clustering in MapReduce and Streaming Settings.](https://doi.org/10.1145/3485447.3512188)
273 | - [Unbiased Graph Embedding with Biased Graph Observations.](https://doi.org/10.1145/3485447.3512189)
274 | - [Rating Distribution Calibration for Selection Bias Mitigation in Recommendations.](https://doi.org/10.1145/3485447.3512078)
275 | - [UKD: Debiasing Conversion Rate Estimation via Uncertainty-regularized Knowledge Distillation.](https://doi.org/10.1145/3485447.3512081)
276 | - [Unbiased Sequential Recommendation with Latent Confounders.](https://doi.org/10.1145/3485447.3512092)
277 | - [CBR: Context Bias aware Recommendation for Debiasing User Modeling and Click Prediction.](https://doi.org/10.1145/3485447.3512099)
278 | - [Cross Pairwise Ranking for Unbiased Item Recommendation.](https://doi.org/10.1145/3485447.3512010)
279 | - [Left or Right: A Peek into the Political Biases in Email Spam Filtering Algorithms During US Election 2020.](https://doi.org/10.1145/3485447.3512121)
280 | - [Controlled Analyses of Social Biases in Wikipedia Bios.](https://doi.org/10.1145/3485447.3512134)
281 | - [Scheduling Virtual Conferences Fairly: Achieving Equitable Participant and Speaker Satisfaction.](https://doi.org/10.1145/3485447.3512136)
282 | - [What Does Perception Bias on Social Networks Tell Us About Friend Count Satisfaction?](https://doi.org/10.1145/3485447.3511931)
283 | - [Fairness Audit of Machine Learning Models with Confidential Computing.](https://doi.org/10.1145/3485447.3512244)
284 | - [End-to-End Learning for Fair Ranking Systems.](https://doi.org/10.1145/3485447.3512247)
285 | - [Link Recommendations for PageRank Fairness.](https://doi.org/10.1145/3485447.3512249)
286 | - [Privacy-Preserving Fair Learning of Support Vector Machine with Homomorphic Encryption.](https://doi.org/10.1145/3485447.3512252)
287 | - [Alexa, in you, I trust! Fairness and Interpretability Issues in E-commerce Search through Smart Speakers.](https://doi.org/10.1145/3485447.3512265)
288 | - [Regulatory Instruments for Fair Personalized Pricing.](https://doi.org/10.1145/3485447.3512046)
289 |
290 | ### Others 2022
291 |
292 | #### [WSDM 2022](https://dblp.uni-trier.de/db/conf/wsdm/wsdm2022.html)
293 |
294 | - [k-Clustering with Fair Outliers.](https://doi.org/10.1145/3488560.3498485)
295 | - [Toward Pareto Efficient Fairness-Utility Trade-off in Recommendation through Reinforcement Learning.](https://doi.org/10.1145/3488560.3498487)
296 | - [It Is Different When Items Are Older: Debiasing Recommendations When Selection Bias and User Preferences Are Dynamic.](https://doi.org/10.1145/3488560.3498375)
297 | - [Introducing the Expohedron for Efficient Pareto-optimal Fairness-Utility Amortizations in Repeated Rankings.](https://doi.org/10.1145/3488560.3498490)
298 | - [Diversified Subgraph Query Generation with Group Fairness.](https://doi.org/10.1145/3488560.3498525)
299 | - [Learning Fair Node Representations with Graph Counterfactual Fairness.](https://doi.org/10.1145/3488560.3498391)
300 | - [Understanding and Mitigating the Effect of Outliers in Fair Ranking.](https://doi.org/10.1145/3488560.3498441)
301 | - [Enumerating Fair Packages for Group Recommendations.](https://doi.org/10.1145/3488560.3498432)
302 | - [Towards Unbiased and Robust Causal Ranking for Recommender Systems.](https://doi.org/10.1145/3488560.3498521)
303 | - [Assessing Algorithmic Biases for Musical Version Identification.](https://doi.org/10.1145/3488560.3498397)
304 | - [Towards Fair Classifiers Without Sensitive Attributes: Exploring Biases in Related Features.](https://doi.org/10.1145/3488560.3498493)
305 | - [Fighting Mainstream Bias in Recommender Systems via Local Fine Tuning.](https://doi.org/10.1145/3488560.3498427)
306 |
307 | ## 2021
308 |
309 | ### [AAAI 2021](https://dblp.uni-trier.de/db/conf/aaai/aaai2021.html)
310 |
311 | - [Learning Disentangled Representation for Fair Facial Attribute Classification via Fairness-aware Information Alignment.](https://ojs.aaai.org/index.php/AAAI/article/view/16341)
312 | - [Fairness-aware News Recommendation with Decomposed Adversarial Learning.](https://ojs.aaai.org/index.php/AAAI/article/view/16573)
313 | - [Fair and Truthful Mechanisms for Dichotomous Valuations.](https://ojs.aaai.org/index.php/AAAI/article/view/16647)
314 | - [Maximin Fairness with Mixed Divisible and Indivisible Goods.](https://ojs.aaai.org/index.php/AAAI/article/view/16653)
315 | - [Protecting the Protected Group: Circumventing Harmful Fairness.](https://ojs.aaai.org/index.php/AAAI/article/view/16654)
316 | - [Fairness, Semi-Supervised Learning, and More: A General Framework for Clustering with Stochastic Pairwise Constraints.](https://ojs.aaai.org/index.php/AAAI/article/view/16842)
317 | - [The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective.](https://ojs.aaai.org/index.php/AAAI/article/view/16926)
318 | - [Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation.](https://ojs.aaai.org/index.php/AAAI/article/view/16931)
319 | - [Constructing a Fair Classifier with Generated Fair Data.](https://ojs.aaai.org/index.php/AAAI/article/view/16965)
320 | - [Improving Fairness and Privacy in Selection Problems.](https://ojs.aaai.org/index.php/AAAI/article/view/16986)
321 | - [Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder.](https://ojs.aaai.org/index.php/AAAI/article/view/16990)
322 | - [Exacerbating Algorithmic Bias through Fairness Attacks.](https://ojs.aaai.org/index.php/AAAI/article/view/17080)
323 | - [Minimum Robust Multi-Submodular Cover for Fairness.](https://ojs.aaai.org/index.php/AAAI/article/view/17100)
324 | - [Robust Fairness Under Covariate Shift.](https://ojs.aaai.org/index.php/AAAI/article/view/17135)
325 | - [Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach.](https://ojs.aaai.org/index.php/AAAI/article/view/17193)
326 | - [Fairness in Forecasting and Learning Linear Dynamical Systems.](https://ojs.aaai.org/index.php/AAAI/article/view/17328)
327 | - [Variational Fair Clustering.](https://ojs.aaai.org/index.php/AAAI/article/view/17336)
328 | - [Individual Fairness in Kidney Exchange Programs.](https://ojs.aaai.org/index.php/AAAI/article/view/17369)
329 | - [Fair Representations by Compression.](https://ojs.aaai.org/index.php/AAAI/article/view/17370)
330 | - [Fair Influence Maximization: a Welfare Optimization Approach.](https://ojs.aaai.org/index.php/AAAI/article/view/17383)
331 | - [Group Fairness by Probabilistic Modeling with Latent Fair Decisions.](https://ojs.aaai.org/index.php/AAAI/article/view/17431)
332 | - [How Linguistically Fair Are Multilingual Pre-Trained Language Models?](https://ojs.aaai.org/index.php/AAAI/article/view/17505)
333 | - [Fairness in Influence Maximization through Randomization.](https://ojs.aaai.org/index.php/AAAI/article/view/17725)
334 | - [Fair and Interpretable Algorithmic Hiring using Evolutionary Many Objective Optimization.](https://ojs.aaai.org/index.php/AAAI/article/view/17737)
335 |
336 | ### [AISTATS 2021](https://dblp.uni-trier.de/db/conf/aistats/aistats2021.html)
337 |
338 | - [Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint.](http://proceedings.mlr.press/v130/chikahara21a.html)
339 | - [Learning Smooth and Fair Representations.](http://proceedings.mlr.press/v130/gitiaux21a.html)
340 | - [Learning Fair Scoring Functions: Bipartite Ranking under ROC-based Fairness Constraints.](http://proceedings.mlr.press/v130/vogel21a.html)
341 | - [Algorithms for Fairness in Sequential Decision Making.](http://proceedings.mlr.press/v130/wen21a.html)
342 | - [All of the Fairness for Edge Prediction with Optimal Transport.](http://proceedings.mlr.press/v130/laclau21a.html)
343 | - [Differentiable Causal Discovery Under Unmeasured Confounding.](http://proceedings.mlr.press/v130/bhattacharya21a.html)
344 | - [Causal Modeling with Stochastic Confounders.](http://proceedings.mlr.press/v130/vinh-vo21a.html)
345 | - [Fair for All: Best-effort Fairness Guarantees for Classification.](http://proceedings.mlr.press/v130/krishnaswamy21a.html)
346 |
347 | ### [BIGDATA 2021](https://dblp.uni-trier.de/db/conf/bigdataconf/bigdataconf2021.html)
348 |
349 | - [An Effective, Robust and Fairness-aware Hate Speech Detection Framework.](https://doi.org/10.1109/BigData52589.2021.9672022)
350 | - [Fairness-aware Bandit-based Recommendation.](https://doi.org/10.1109/BigData52589.2021.9671959)
351 | - [ExgFair: A Crowdsourcing Data Exchange Approach To Fair Human Face Datasets Augmentation.](https://doi.org/10.1109/BigData52589.2021.9671973)
352 | - [Bayesian model for Fairness in sampling from clustered data and FP-FN error rates.](https://doi.org/10.1109/BigData52589.2021.9671353)
353 |
354 | ### [CIKM 2021](https://dblp.uni-trier.de/db/conf/cikm/cikm2021.html)
355 |
356 | > TBD
357 |
358 | ### [FAT\* 2021](https://dblp.uni-trier.de/db/conf/fat/fat2021.html)
359 |
360 | - [Black Feminist Musings on Algorithmic Oppression.](https://doi.org/10.1145/3442188.3445929)
361 | - [Price Discrimination with Fairness Constraints.](https://doi.org/10.1145/3442188.3445864)
362 | - [Fairness Violations and Mitigation under Covariate Shift.](https://doi.org/10.1145/3442188.3445865)
363 | - [Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence.](https://doi.org/10.1145/3442188.3445866)
364 | - [Allocating Opportunities in a Dynamic Model of Intergenerational Mobility.](https://doi.org/10.1145/3442188.3445867)
365 | - [Corporate Social Responsibility via Multi-Armed Bandits.](https://doi.org/10.1145/3442188.3445868)
366 | - [Biases in Generative Art: A Causal Look from the Lens of Art History.](https://doi.org/10.1145/3442188.3445869)
367 | - [Designing an Online Infrastructure for Collecting AI Data From People With Disabilities.](https://doi.org/10.1145/3442188.3445870)
368 | - [Fifty Shades of Grey: In Praise of a Nuanced Approach Towards Trustworthy Design.](https://doi.org/10.1145/3442188.3445871)
369 | - [Representativeness in Statistics, Politics, and Machine Learning.](https://doi.org/10.1145/3442188.3445872)
370 | - [The Distributive Effects of Risk Prediction in Environmental Compliance: Algorithmic Design, Environmental Justice, and Public Policy.](https://doi.org/10.1145/3442188.3445873)
371 | - [Computer Science Communities: Who is Speaking, and Who is Listening to the Women? Using an Ethics of Care to Promote Diverse Voices.](https://doi.org/10.1145/3442188.3445874)
372 | - [Differential Tweetment: Mitigating Racial Dialect Bias in Harmful Tweet Detection.](https://doi.org/10.1145/3442188.3445875)
373 | - [Group Fairness: Independence Revisited.](https://doi.org/10.1145/3442188.3445876)
374 | - [Towards Fair Deep Anomaly Detection.](https://doi.org/10.1145/3442188.3445878)
375 | - [Can You Fake It Until You Make It?: Impacts of Differentially Private Synthetic Data on Downstream Classification Fairness.](https://doi.org/10.1145/3442188.3445879)
376 | - [Documenting Computer Vision Datasets: An Invitation to Reflexive Data Practices.](https://doi.org/10.1145/3442188.3445880)
377 | - [Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy.](https://doi.org/10.1145/3442188.3445881)
378 | - [Better Together?: How Externalities of Size Complicate Notions of Solidarity and Actuarial Fairness.](https://doi.org/10.1145/3442188.3445882)
379 | - [Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately.](https://doi.org/10.1145/3442188.3445883)
380 | - [Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information.](https://doi.org/10.1145/3442188.3445884)
381 | - [Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies.](https://doi.org/10.1145/3442188.3445885)
382 | - [The Use and Misuse of Counterfactuals in Ethical Machine Learning.](https://doi.org/10.1145/3442188.3445886)
383 | - [Mitigating Bias in Set Selection with Noisy Protected Attributes.](https://doi.org/10.1145/3442188.3445887)
384 | - [What We Can't Measure, We Can't Understand: Challenges to Demographic Data Procurement in the Pursuit of Fairness.](https://doi.org/10.1145/3442188.3445888)
385 | - [Standardized Tests and Affirmative Action: The Role of Bias and Variance.](https://doi.org/10.1145/3442188.3445889)
386 | - [The Sanction of Authority: Promoting Public Trust in AI.](https://doi.org/10.1145/3442188.3445890)
387 | - [Algorithmic Fairness in Predicting Opioid Use Disorder using Machine Learning.](https://doi.org/10.1145/3442188.3445891)
388 | - [Avoiding Disparity Amplification under Different Worldviews.](https://doi.org/10.1145/3442188.3445892)
389 | - [Spoken Corpora Data, Automatic Speech Recognition, and Bias Against African American Language: The case of Habitual 'Be'.](https://doi.org/10.1145/3442188.3445893)
390 | - [Leave-one-out Unfairness.](https://doi.org/10.1145/3442188.3445894)
391 | - [Fairness, Welfare, and Equity in Personalized Pricing.](https://doi.org/10.1145/3442188.3445895)
392 | - [Re-imagining Algorithmic Fairness in India and Beyond.](https://doi.org/10.1145/3442188.3445896)
393 | - [Narratives and Counternarratives on Data Sharing in Africa.](https://doi.org/10.1145/3442188.3445897)
394 | - [This Whole Thing Smacks of Gender: Algorithmic Exclusion in Bioimpedance-based Body Composition Analysis.](https://doi.org/10.1145/3442188.3445898)
395 | - [Algorithmic Recourse: from Counterfactual Explanations to Interventions.](https://doi.org/10.1145/3442188.3445899)
396 | - [A Semiotics-based epistemic tool to reason about ethical issues in digital technology design and development.](https://doi.org/10.1145/3442188.3445900)
397 | - [Measurement and Fairness.](https://doi.org/10.1145/3442188.3445901)
398 | - [Fairness in Risk Assessment Instruments: Post-Processing to Achieve Counterfactual Equalized Odds.](https://doi.org/10.1145/3442188.3445902)
399 | - [High Dimensional Model Explanations: An Axiomatic Approach.](https://doi.org/10.1145/3442188.3445903)
400 | - [An Agent-based Model to Evaluate Interventions on Online Dating Platforms to Decrease Racial Homogamy.](https://doi.org/10.1145/3442188.3445904)
401 | - [Designing Accountable Systems.](https://doi.org/10.1145/3442188.3445905)
402 | - [Socially Fair k-Means Clustering.](https://doi.org/10.1145/3442188.3445906)
403 | - [Towards Cross-Lingual Generalization of Translation Gender Bias.](https://doi.org/10.1145/3442188.3445907)
404 | - [A Pilot Study in Surveying Clinical Judgments to Evaluate Radiology Report Generation.](https://doi.org/10.1145/3442188.3445909)
405 | - [Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning.](https://doi.org/10.1145/3442188.3445910)
406 | - [Operationalizing Framing to Support Multiperspective Recommendations of Opinion Pieces.](https://doi.org/10.1145/3442188.3445911)
407 | - [Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness.](https://doi.org/10.1145/3442188.3445912)
408 | - [Fair Clustering via Equitable Group Representations.](https://doi.org/10.1145/3442188.3445913)
409 | - [You Can't Sit With Us: Exclusionary Pedagogy in AI Ethics Education.](https://doi.org/10.1145/3442188.3445914)
410 | - [Fair Classification with Group-Dependent Label Noise.](https://doi.org/10.1145/3442188.3445915)
411 | - [Censorship of Online Encyclopedias: Implications for NLP Models.](https://doi.org/10.1145/3442188.3445916)
412 | - [Impossible Explanations?: Beyond explainable AI in the GDPR from a COVID-19 use case scenario.](https://doi.org/10.1145/3442188.3445917)
413 | - [Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure.](https://doi.org/10.1145/3442188.3445918)
414 | - [Fairness, Equality, and Power in Algorithmic Decision-Making.](https://doi.org/10.1145/3442188.3445919)
415 | - [One Label, One Billion Faces: Usage and Consistency of Racial Categories in Computer Vision.](https://doi.org/10.1145/3442188.3445920)
416 | - [Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems.](https://doi.org/10.1145/3442188.3445921)
417 | - [On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://doi.org/10.1145/3442188.3445922)
418 | - [Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI.](https://doi.org/10.1145/3442188.3445923)
419 | - [TILT: A GDPR-Aligned Transparency Information Language and Toolkit for Practical Privacy Engineering.](https://doi.org/10.1145/3442188.3445925)
420 | - [From Papers to Programs: Courts, Corporations, Clinics and the Battle over Computerized Psychological Testing.](https://doi.org/10.1145/3442188.3445926)
421 | - [A Statistical Test for Probabilistic Fairness.](https://doi.org/10.1145/3442188.3445927)
422 | - [Building and Auditing Fair Algorithms: A Case Study in Candidate Screening.](https://doi.org/10.1145/3442188.3445928)
423 | - [The Effect of the Rooney Rule on Implicit Bias in the Long Term.](https://doi.org/10.1145/3442188.3445930)
424 | - [I agree with the decision, but they didn't deserve this: Future Developers' Perception of Fairness in Algorithmic Decisions.](https://doi.org/10.1145/3442188.3445931)
425 | - [Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases.](https://doi.org/10.1145/3442188.3445932)
426 | - [From Optimizing Engagement to Measuring Value.](https://doi.org/10.1145/3442188.3445933)
427 | - [Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings.](https://doi.org/10.1145/3442188.3445934)
428 | - [Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts.](https://doi.org/10.1145/3442188.3445935)
429 | - [On the Moral Justification of Statistical Parity.](https://doi.org/10.1145/3442188.3445936)
430 | - [Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems.](https://doi.org/10.1145/3442188.3445937)
431 | - [An Action-Oriented AI Policy Toolkit for Technology Audits by Community Advocates and Activists.](https://doi.org/10.1145/3442188.3445938)
432 | - [The Ethics of Emotion in Artificial Intelligence Systems.](https://doi.org/10.1145/3442188.3445939)
433 | - [Detecting discriminatory risk through data annotation based on Bayesian inferences.](https://doi.org/10.1145/3442188.3445940)
434 | - [How can I choose an explainer?: An Application-grounded Evaluation of Post-hoc Explanations.](https://doi.org/10.1145/3442188.3445941)
435 | - [The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making Systems.](https://doi.org/10.1145/3442188.3445942)
436 | - [Epistemic values in feature importance methods: Lessons from feminist epistemology.](https://doi.org/10.1145/3442188.3445943)
437 | - [A Bayesian Model of Cash Bail Decisions.](https://doi.org/10.1145/3442188.3445908)
438 | - [The effect of differential victim crime reporting on predictive policing systems.](https://doi.org/10.1145/3442188.3445877)
439 | - [Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation.](https://doi.org/10.1145/3442188.3445971)
440 | - [BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation.](https://doi.org/10.1145/3442188.3445924)
441 | - [When the Umpire is also a Player: Bias in Private Label Product Recommendations on E-commerce Marketplaces.](https://doi.org/10.1145/3442188.3445944)
442 |
443 | ### [ICDM 2021](https://dblp.uni-trier.de/db/conf/icdm/icdm2021.html)
444 |
445 | - [Fair Decision-making Under Uncertainty.](https://doi.org/10.1109/ICDM51629.2021.00100)
446 | - [Promoting Fairness through Hyperparameter Optimization.](https://doi.org/10.1109/ICDM51629.2021.00119)
447 | - [Fair Graph Auto-Encoder for Unbiased Graph Representations with Wasserstein Distance.](https://doi.org/10.1109/ICDM51629.2021.00122)
448 | - [A Multi-view Confidence-calibrated Framework for Fair and Stable Graph Representation Learning.](https://doi.org/10.1109/ICDM51629.2021.00194)
449 | - [Unified Fairness from Data to Learning Algorithm.](https://doi.org/10.1109/ICDM51629.2021.00195)
450 |
451 | ### [ICLR 2021](https://dblp.uni-trier.de/db/conf/iclr/iclr2021.html)
452 |
453 | - [SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness.](https://openreview.net/forum?id=DktZb97_Fx)
454 | - [Individually Fair Gradient Boosting.](https://openreview.net/forum?id=JBAa9we1AL)
455 | - [On Statistical Bias In Active Learning: How and When to Fix It.](https://openreview.net/forum?id=JiYq3eqTKY)
456 | - [FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders.](https://openreview.net/forum?id=N6JECD-PI5w)
457 | - [Fair Mixup: Fairness via Interpolation.](https://openreview.net/forum?id=DNl5s5BXeBn)
458 | - [Individually Fair Rankings.](https://openreview.net/forum?id=71zCSP_HuBN)
459 | - [FairBatch: Batch Selection for Model Fairness.](https://openreview.net/forum?id=YNnpaAKeCfx)
460 | - [INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving.](https://openreview.net/forum?id=O6LPudowNQm)
461 | - [Debiasing Concept-based Explanations with Causal Analysis.](https://openreview.net/forum?id=6puUoArESGp)
462 | - [Unbiased Teacher for Semi-Supervised Object Detection.](https://openreview.net/forum?id=MJIve1zgR_)
463 | - [Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective.](https://openreview.net/forum?id=gIHd-5X324)
464 | - [Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate.](https://openreview.net/forum?id=3X64RLgzY6O)
465 | - [A unifying view on implicit bias in training linear neural networks.](https://openreview.net/forum?id=ZsZM-4iMQkH)
466 | - [What Makes Instance Discrimination Good for Transfer Learning?](https://openreview.net/forum?id=tC6iW2UUbJf)
467 | - [Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation.](https://openreview.net/forum?id=e12NDM7wkEY)
468 | - [Shape-Texture Debiased Neural Network Training.](https://openreview.net/forum?id=Db4yerZTYkz)
469 | - [The inductive bias of ReLU networks on orthogonally separable data.](https://openreview.net/forum?id=krz7T0xU9Z_)
470 | - [Statistical inference for individual fairness.](https://openreview.net/forum?id=z9k8BWL-_2u)
471 | - [What they do when in doubt: a study of inductive biases in seq2seq learners.](https://openreview.net/forum?id=YmA86Zo-P_t)
472 | - [Learning from others' mistakes: Avoiding dataset biases without modeling them.](https://openreview.net/forum?id=Hf3qXoiNkR)
473 | - [Predicting Inductive Biases of Pre-Trained Models.](https://openreview.net/forum?id=mNtmhaDkAr)
474 | - [Does enhanced shape bias improve neural network robustness to common corruptions?](https://openreview.net/forum?id=yUxUNaj2Sl)
475 | - [On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections.](https://openreview.net/forum?id=xgGS6PmzNq6)
476 | - [Why resampling outperforms reweighting for correcting sampling bias with stochastic gradients.](https://openreview.net/forum?id=iQQK02mxVIT)
477 | - [Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning.](https://openreview.net/forum?id=AHOs7Sm5H7R)
478 |
479 | ### [ICML 2021](https://dblp.uni-trier.de/db/conf/icml/icml2021.html)
480 |
481 | - [Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees.](http://proceedings.mlr.press/v139/celis21a.html)
482 | - [Fairness and Bias in Online Selection.](http://proceedings.mlr.press/v139/correa21a.html)
483 | - [Characterizing Fairness Over the Set of Good Models Under Selective Labels.](http://proceedings.mlr.press/v139/coston21a.html)
484 | - [On the Problem of Underranking in Group-Fair Ranking.](http://proceedings.mlr.press/v139/gorantla21a.html)
485 | - [Fairness for Image Generation with Uncertain Sensitive Attributes.](http://proceedings.mlr.press/v139/jalal21b.html)
486 | - [Fair Selective Classification Via Sufficiency.](http://proceedings.mlr.press/v139/lee21b.html)
487 | - [Ditto: Fair and Robust Federated Learning Through Personalization.](http://proceedings.mlr.press/v139/li21h.html)
488 | - [Approximate Group Fairness for Clustering.](http://proceedings.mlr.press/v139/li21j.html)
489 | - [Blind Pareto Fairness and Subgroup Robustness.](http://proceedings.mlr.press/v139/martinez21a.html)
490 | - [Testing Group Fairness via Optimal Transport Projections.](http://proceedings.mlr.press/v139/si21a.html)
491 | - [Collaborative Bayesian Optimization with Fair Regret.](http://proceedings.mlr.press/v139/sim21b.html)
492 | - [Fairness of Exposure in Stochastic Bandits.](http://proceedings.mlr.press/v139/wang21b.html)
493 | - [To be Robust or to be Fair: Towards Fairness in Adversarial Training.](http://proceedings.mlr.press/v139/xu21b.html)
494 |
495 | ### [IJCAI 2021](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2021.html)
496 |
497 | - [Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models.](https://doi.org/10.24963/ijcai.2021/77)
498 | - [Decision Making with Differential Privacy under a Fairness Lens.](https://doi.org/10.24963/ijcai.2021/78)
499 | - [An Examination of Fairness of AI Models for Deepfake Detection.](https://doi.org/10.24963/ijcai.2021/79)
500 | - [Towards Reducing Biases in Combining Multiple Experts Online.](https://doi.org/10.24963/ijcai.2021/416)
501 | - [Understanding the Effect of Bias in Deep Anomaly Detection.](https://doi.org/10.24963/ijcai.2021/456)
502 | - [Graph Debiased Contrastive Learning with Joint Representation Clustering.](https://doi.org/10.24963/ijcai.2021/473)
503 | - [Controlling Fairness and Bias in Dynamic Learning-to-Rank (Extended Abstract).](https://doi.org/10.24963/ijcai.2021/655)
504 |
505 | ### [KDD 2021](https://dblp.uni-trier.de/db/conf/kdd/kdd2021.html)
506 |
507 | - [Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility.](https://doi.org/10.1145/3447548.3467251)
508 | - [Individual Fairness for Graph Neural Networks: A Ranking based Approach.](https://doi.org/10.1145/3447548.3467266)
509 | - [Maxmin-Fair Ranking: Individual Fairness under Group-Fairness Constraints.](https://doi.org/10.1145/3447548.3467349)
510 | - [Federated Adversarial Debiasing for Fair and Transferable Representations.](https://doi.org/10.1145/3447548.3467281)
511 | - [Explaining Algorithmic Fairness Through Fairness-Aware Causal Path Decomposition.](https://doi.org/10.1145/3447548.3467258)
512 | - [Deep Clustering based Fair Outlier Detection.](https://doi.org/10.1145/3447548.3467225)
513 | - [Deconfounded Recommendation for Alleviating Bias Amplification.](https://doi.org/10.1145/3447548.3467249)
514 | - [Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning.](https://doi.org/10.1145/3447548.3467326)
515 | - [Fairness-Aware Online Meta-learning.](https://doi.org/10.1145/3447548.3467389)
516 |
517 | ### [NIPS 2021](https://dblp.uni-trier.de/db/conf/nips/neurips2021.html)
518 |
519 | > TBD
520 |
521 | ### [SDM 2021](https://dblp.uni-trier.de/db/conf/sdm/sdm2021.html)
522 |
523 | - [Fairness-aware Agnostic Federated Learning.](https://doi.org/10.1137/1.9781611976700.21)
524 | - [Equitable Allocation of Healthcare Resources with Fair Survival Models.](https://doi.org/10.1137/1.9781611976700.22)
525 | - [Fair Classification Under Strict Unawareness.](https://doi.org/10.1137/1.9781611976700.23)
526 |
527 | ### [UAI 2021](https://dblp.uni-trier.de/db/conf/uai/uai2021.html)
528 |
529 | > TBD
530 |
531 | ### [WWW 2021](https://dblp.uni-trier.de/db/conf/www/www2021.html)
532 |
533 | > TBD
534 |
535 | ### Others 2021
536 |
537 | #### [WSDM 2021](https://dblp.uni-trier.de/db/conf/wsdm/wsdm2021.html)
538 |
539 | - [Popularity-Opportunity Bias in Collaborative Filtering.](https://doi.org/10.1145/3437963.3441820)
540 | - [Deconfounding with Networked Observational Data in a Dynamic Environment.](https://doi.org/10.1145/3437963.3441818)
541 | - [Causal Transfer Random Forest: Combining Logged Data and Randomized Experiments for Robust Prediction.](https://doi.org/10.1145/3437963.3441722)
542 | - [Split-Treatment Analysis to Rank Heterogeneous Causal Effects for Prospective Interventions.](https://doi.org/10.1145/3437963.3441821)
543 | - [Explain and Predict, and then Predict Again.](https://doi.org/10.1145/3437963.3441758)
544 | - [Combating Selection Biases in Recommender Systems with a Few Unbiased Ratings.](https://doi.org/10.1145/3437963.3441799)
545 | - [Practical Compositional Fairness: Understanding Fairness in Multi-Component Recommender Systems.](https://doi.org/10.1145/3437963.3441732)
546 | - [Towards Long-term Fairness in Recommendation.](https://doi.org/10.1145/3437963.3441824)
547 | - [Unifying Online and Counterfactual Learning to Rank: A Novel Counterfactual Estimator that Effectively Utilizes Online Interventions.](https://doi.org/10.1145/3437963.3441794)
548 | - [Interpretable Ranking with Generalized Additive Models.](https://doi.org/10.1145/3437963.3441796)
549 |
550 | #### [COLT 2021](https://learningtheory.org/colt2021/)
551 |
552 | - [Approximation Algorithms for Socially Fair Clustering.](http://proceedings.mlr.press/v134/makarychev21a.html)
553 |
554 | ## 2020
555 |
556 | ### [AAAI 2020](https://dblp.uni-trier.de/db/conf/aaai/aaai2020.html)
557 |
558 | - [Faking Fairness via Stealthily Biased Sampling.](https://aaai.org/ojs/index.php/AAAI/article/view/5377)
559 | - [Differentially Private and Fair Classification via Calibrated Functional Mechanism.](https://aaai.org/ojs/index.php/AAAI/article/view/5402)
560 | - [Bursting the Filter Bubble: Fairness-Aware Network Link Prediction.](https://aaai.org/ojs/index.php/AAAI/article/view/5429)
561 | - [Making Existing Clusterings Fairer: Algorithms, Complexity Results and Insights.](https://aaai.org/ojs/index.php/AAAI/article/view/5783)
562 | - [Fairness in Network Representation by Latent Structural Heterogeneity in Observational Data.](https://aaai.org/ojs/index.php/AAAI/article/view/5792)
563 | - [Pairwise Fairness for Ranking and Regression.](https://aaai.org/ojs/index.php/AAAI/article/view/5970)
564 | - [Achieving Fairness in the Stochastic Multi-Armed Bandit Problem.](https://aaai.org/ojs/index.php/AAAI/article/view/5986)
565 | - [Fairness for Robust Log Loss Classification.](https://aaai.org/ojs/index.php/AAAI/article/view/6002)
566 | - [Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns.](https://aaai.org/ojs/index.php/AAAI/article/view/6565)
567 |
568 | ### [AISTATS 2020](https://dblp.uni-trier.de/db/conf/aistats/aistats2020.html)
569 |
570 | - [Stretching the Effectiveness of MLE from Accuracy to Bias for Pairwise Comparisons.](http://proceedings.mlr.press/v108/wang20a.html)
571 | - [Learning Fair Representations for Kernel Models.](http://proceedings.mlr.press/v108/tan20a.html)
572 | - [Fair Decisions Despite Imperfect Predictions.](http://proceedings.mlr.press/v108/kilbertus20a.html)
573 | - [Identifying and Correcting Label Bias in Machine Learning.](http://proceedings.mlr.press/v108/jiang20a.html)
574 | - [Optimized Score Transformation for Fair Classification.](http://proceedings.mlr.press/v108/wei20a.html)
575 | - [Equalized odds postprocessing under imperfect group information.](http://proceedings.mlr.press/v108/awasthi20a.html)
576 | - [Fairness Evaluation in Presence of Biased Noisy Labels.](http://proceedings.mlr.press/v108/fogliato20a.html)
577 | - [Fair Correlation Clustering.](http://proceedings.mlr.press/v108/ahmadian20a.html)
578 | - [Auditing ML Models for Individual Bias and Unfairness.](http://proceedings.mlr.press/v108/xue20a.html)
579 |
580 | ### [BIGDATA 2020](https://dblp.uni-trier.de/db/conf/bigdataconf/bigdataconf2020.html)
581 |
582 | > TBD
583 |
584 | ### [CIKM 2020](https://dblp.uni-trier.de/db/conf/cikm/cikm2020.html)
585 |
586 | - [Spectral Relaxations and Fair Densest Subgraphs.](https://doi.org/10.1145/3340531.3412036)
587 | - [Fair Class Balancing: Enhancing Model Fairness without Observing Sensitive Attributes.](https://doi.org/10.1145/3340531.3411980)
588 | - [Active Query of Private Demographic Data for Learning Fair Models.](https://doi.org/10.1145/3340531.3412074)
589 | - [Fairness-Aware Learning with Prejudice Free Representations.](https://doi.org/10.1145/3340531.3412150)
590 | - [Denoising Individual Bias for Fairer Binary Submatrix Detection.](https://doi.org/10.1145/3340531.3412156)
591 | - [LiFT: A Scalable Framework for Measuring Fairness in ML Applications.](https://doi.org/10.1145/3340531.3412705)
592 |
593 | ### [FAT\* 2020](https://dblp.uni-trier.de/db/conf/fat/fat2020.html)
594 |
595 | - [What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability.](https://doi.org/10.1145/3351095.3372833)
596 | - [Algorithmic realism: expanding the boundaries of algorithmic thought.](https://doi.org/10.1145/3351095.3372840)
597 | - [Algorithmic accountability in public administration: the GDPR paradox.](https://doi.org/10.1145/3351095.3373153)
598 | - [Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing.](https://doi.org/10.1145/3351095.3372873)
599 | - [Toward situated interventions for algorithmic equity: lessons from the field.](https://doi.org/10.1145/3351095.3372874)
600 | - [Explainability fact sheets: a framework for systematic assessment of explainable approaches.](https://doi.org/10.1145/3351095.3372870)
601 | - [Multi-layered explanations from algorithmic impact assessments in the GDPR.](https://doi.org/10.1145/3351095.3372875)
602 | - [The hidden assumptions behind counterfactual explanations and principal reasons.](https://doi.org/10.1145/3351095.3372830)
603 | - [Why does my model fail?: contrastive local explanations for retail forecasting.](https://doi.org/10.1145/3351095.3372824)
604 | - ["The human body is a black box": supporting clinical decision-making with deep learning.](https://doi.org/10.1145/3351095.3372827)
605 | - [Assessing algorithmic fairness with unobserved protected class using data combination.](https://doi.org/10.1145/3351095.3373154)
606 | - [FlipTest: fairness testing via optimal transport.](https://doi.org/10.1145/3351095.3372845)
607 | - [Implications of AI (un-)fairness in higher education admissions: the effects of perceived AI (un-)fairness on exit, voice and organizational reputation.](https://doi.org/10.1145/3351095.3372867)
608 | - [Auditing radicalization pathways on YouTube.](https://doi.org/10.1145/3351095.3372879)
609 | - [Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions.](https://doi.org/10.1145/3351095.3372863)
610 | - [The concept of fairness in the GDPR: a linguistic and contextual interpretation.](https://doi.org/10.1145/3351095.3372868)
611 | - [Studying up: reorienting the study of algorithmic fairness around issues of power.](https://doi.org/10.1145/3351095.3372859)
612 | - [POTs: protective optimization technologies.](https://doi.org/10.1145/3351095.3372853)
613 | - [Fair decision making using privacy-protected data.](https://doi.org/10.1145/3351095.3372872)
614 | - [Fairness warnings and fair-MAML: learning fairly with minimal data.](https://doi.org/10.1145/3351095.3372839)
615 | - [From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy.](https://doi.org/10.1145/3351095.3372860)
- [Onward for the freedom of others: marching beyond the AI ethics.](https://doi.org/10.1145/3351095.3373152)
- [Whose side are ethics codes on?: power, responsibility and the social good.](https://doi.org/10.1145/3351095.3372844)
- [Algorithmic targeting of social policies: fairness, accuracy, and distributed governance.](https://doi.org/10.1145/3351095.3375784)
- [Roles for computing in social change.](https://doi.org/10.1145/3351095.3372871)
- [Regulating transparency?: Facebook, Twitter and the german network enforcement act.](https://doi.org/10.1145/3351095.3372856)
- [The relationship between trust in AI and trustworthy machine learning technologies.](https://doi.org/10.1145/3351095.3372834)
- [The philosophical basis of algorithmic recourse.](https://doi.org/10.1145/3351095.3372876)
- [Value-laden disciplinary shifts in machine learning.](https://doi.org/10.1145/3351095.3373157)
- [Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making.](https://doi.org/10.1145/3351095.3372852)
- [Lessons from archives: strategies for collecting sociocultural data in machine learning.](https://doi.org/10.1145/3351095.3372829)
- [Data in New Delhi's predictive policing system.](https://doi.org/10.1145/3351095.3372865)
- [Garbage in, garbage out?: do machine learning application papers in social computing report where human-labeled training data comes from?](https://doi.org/10.1145/3351095.3372862)
- [Bidding strategies with gender nondiscrimination constraints for online ad auctions.](https://doi.org/10.1145/3351095.3375783)
- [Multi-category fairness in sponsored search auctions.](https://doi.org/10.1145/3351095.3372848)
- [Reducing sentiment polarity for demographic attributes in word embeddings using adversarial learning.](https://doi.org/10.1145/3351095.3372837)
- [Interventions for ranking in the presence of implicit bias.](https://doi.org/10.1145/3351095.3372858)
- [The disparate equilibria of algorithmic decision making when individuals invest rationally.](https://doi.org/10.1145/3351095.3372861)
- [An empirical study on the perceived fairness of realistic, imperfect machine learning models.](https://doi.org/10.1145/3351095.3372831)
- [Artificial mental phenomena: psychophysics as a framework to detect perception biases in AI models.](https://doi.org/10.1145/3351095.3375623)
- [The social lives of generative adversarial networks.](https://doi.org/10.1145/3351095.3373156)
- [Towards a more representative politics in the ethics of computer science.](https://doi.org/10.1145/3351095.3372854)
- [Integrating FATE/critical data studies into data science curricula: where are we going and how do we get there?](https://doi.org/10.1145/3351095.3372832)
- [Recommendations and user agency: the reachability of collaboratively-filtered information.](https://doi.org/10.1145/3351095.3372866)
- [Bias in word embeddings.](https://doi.org/10.1145/3351095.3372843)
- [What does it mean to 'solve' the problem of discrimination in hiring?: social, technical and legal perspectives from the UK on automated hiring systems.](https://doi.org/10.1145/3351095.3372849)
- [Mitigating bias in algorithmic hiring: evaluating claims and practices.](https://doi.org/10.1145/3351095.3372828)
- [The impact of overbooking on a pre-trial risk assessment tool.](https://doi.org/10.1145/3351095.3372846)
- [Awareness in practice: tensions in access to sensitive attribute data for antidiscrimination.](https://doi.org/10.1145/3351095.3372877)
- [Towards a critical race methodology in algorithmic fairness.](https://doi.org/10.1145/3351095.3372826)
- [What's sex got to do with machine learning?](https://doi.org/10.1145/3351095.3375674)
- [On the apparent conflict between individual and group fairness.](https://doi.org/10.1145/3351095.3372864)
- [Fairness is not static: deeper understanding of long term fairness via simulation studies.](https://doi.org/10.1145/3351095.3372878)
- [Fair classification and social welfare.](https://doi.org/10.1145/3351095.3372857)
- [Preference-informed fairness.](https://doi.org/10.1145/3351095.3373155)
- [Towards fairer datasets: filtering and balancing the distribution of the people subtree in the ImageNet hierarchy.](https://doi.org/10.1145/3351095.3375709)
- [The case for voter-centered audits of search engines during political elections.](https://doi.org/10.1145/3351095.3372835)
- [Whose tweets are surveilled for the police: an audit of a social-media monitoring tool via log files.](https://doi.org/10.1145/3351095.3372841)
- [Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability.](https://doi.org/10.1145/3351095.3372825)
- [Counterfactual risk assessments, evaluation, and fairness.](https://doi.org/10.1145/3351095.3372851)
- [The false promise of risk assessments: epistemic reform and the limits of fairness.](https://doi.org/10.1145/3351095.3372869)
- [Explaining machine learning classifiers through diverse counterfactual explanations.](https://doi.org/10.1145/3351095.3372850)
- [Model agnostic interpretability of rankers via intent modelling.](https://doi.org/10.1145/3351095.3375234)
- [Doctor XAI: an ontology-based approach to black-box sequential data classification explanations.](https://doi.org/10.1145/3351095.3372855)
- [Robustness in machine learning explanations: does it matter?](https://doi.org/10.1145/3351095.3372836)
- [Explainable machine learning in deployment.](https://doi.org/10.1145/3351095.3375624)
- [Fairness and utilization in allocating resources with uncertain demand.](https://doi.org/10.1145/3351095.3372847)
- [The effects of competition and regulation on error inequality in data-driven markets.](https://doi.org/10.1145/3351095.3372842)

### [ICDM 2020](https://dblp.uni-trier.de/db/conf/icdm/icdm2020.html)

> TBD

### [ICML 2020](https://dblp.uni-trier.de/db/conf/icml/icml2020.html)

- [A Pairwise Fair and Community-preserving Approach to k-Center Clustering.](http://proceedings.mlr.press/v119/brubach20a.html)
- [How to Solve Fair k-Center in Massive Data Models.](http://proceedings.mlr.press/v119/chiplunkar20a.html)
- [Fair Generative Modeling via Weak Supervision.](http://proceedings.mlr.press/v119/choi20a.html)
- [Causal Modeling for Fairness In Dynamical Systems.](http://proceedings.mlr.press/v119/creager20a.html)
- [Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing.](http://proceedings.mlr.press/v119/dutta20a.html)
- [Fair k-Centers via Maximum Matching.](http://proceedings.mlr.press/v119/jones20a.html)
- [FACT: A Diagnostic for Group Fairness Trade-offs.](http://proceedings.mlr.press/v119/kim20a.html)
- [Too Relaxed to Be Fair.](http://proceedings.mlr.press/v119/lohaus20a.html)
- [Individual Fairness for k-Clustering.](http://proceedings.mlr.press/v119/mahabadi20a.html)
- [Minimax Pareto Fairness: A Multi Objective Perspective.](http://proceedings.mlr.press/v119/martinez20a.html)
- [Fair Learning with Private Demographic Data.](http://proceedings.mlr.press/v119/mozannar20a.html)
- [Two Simple Ways to Learn Individual Fairness Metrics from Data.](http://proceedings.mlr.press/v119/mukherjee20a.html)
- [FR-Train: A Mutual Information-Based Approach to Fair and Robust Training.](http://proceedings.mlr.press/v119/roh20a.html)
- [Bounding the fairness and accuracy of classifiers from population statistics.](http://proceedings.mlr.press/v119/sabato20a.html)
- [Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics.](http://proceedings.mlr.press/v119/saha20c.html)
- [Learning Fair Policies in Multi-Objective (Deep) Reinforcement Learning with Average and Discounted Rewards.](http://proceedings.mlr.press/v119/siddique20a.html)
- [Learning De-biased Representations with Biased Representations.](http://proceedings.mlr.press/v119/bahng20a.html)
- [DeBayes: a Bayesian Method for Debiasing Network Embeddings.](http://proceedings.mlr.press/v119/buyl20a.html)
- [Data preprocessing to mitigate bias: A maximum entropy based approach.](http://proceedings.mlr.press/v119/celis20a.html)

### [IJCAI 2020](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2020.html)

- [WEFE: The Word Embeddings Fairness Evaluation Framework.](https://doi.org/10.24963/ijcai.2020/60)
- [Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness.](https://doi.org/10.24963/ijcai.2020/61)
- [Achieving Outcome Fairness in Machine Learning Models for Social Decision Problems.](https://doi.org/10.24963/ijcai.2020/62)
- [Relation-Based Counterfactual Explanations for Bayesian Network Classifiers.](https://doi.org/10.24963/ijcai.2020/63)
- [Metamorphic Testing and Certified Mitigation of Fairness Violations in NLP Models.](https://doi.org/10.24963/ijcai.2020/64)
- [Fairness-Aware Neural Rényi Minimization for Continuous Features.](https://doi.org/10.24963/ijcai.2020/313)
- [FNNC: Achieving Fairness through Neural Networks.](https://doi.org/10.24963/ijcai.2020/315)
- [Adversarial Graph Embeddings for Fair Influence Maximization over Social Networks.](https://doi.org/10.24963/ijcai.2020/594)

### [KDD 2020](https://dblp.uni-trier.de/db/conf/kdd/kdd2020.html)

- [InFoRM: Individual Fairness on Graph Mining.](https://dl.acm.org/doi/10.1145/3394486.3403080)
- [Towards Fair Truth Discovery from Biased Crowdsourced Answers.](https://dl.acm.org/doi/10.1145/3394486.3403102)
- [Evaluating Fairness Using Permutation Tests.](https://dl.acm.org/doi/10.1145/3394486.3403199)
- [A Causal Look at Statistical Definitions of Discrimination.](https://dl.acm.org/doi/10.1145/3394486.3403130)
- [List-wise Fairness Criterion for Point Processes.](https://dl.acm.org/doi/10.1145/3394486.3403246)
- [Algorithmic Decision Making with Conditional Fairness.](https://dl.acm.org/doi/10.1145/3394486.3403263)

### [NIPS 2020](https://dblp.uni-trier.de/db/conf/nips/neurips2020.html)

- [Achieving Equalized Odds by Resampling Sensitive Attributes.](https://proceedings.neurips.cc/paper/2020/hash/03593ce517feac573fdaafa6dcedef61-Abstract.html)
- [Fairness without Demographics through Adversarially Reweighted Learning.](https://proceedings.neurips.cc/paper/2020/hash/07fc15c9d169ee48573edd749d25945d-Abstract.html)
- [Fairness with Overlapping Groups; a Probabilistic Perspective.](https://proceedings.neurips.cc/paper/2020/hash/29c0605a3bab4229e46723f89cf59d83-Abstract.html)
- [Robust Optimization for Fairness with Noisy Protected Groups.](https://proceedings.neurips.cc/paper/2020/hash/37d097caf1299d9aa79c2c2b843d2d78-Abstract.html)
- [Fair regression with Wasserstein barycenters.](https://proceedings.neurips.cc/paper/2020/hash/51cdbd2611e844ece5d80878eb770436-Abstract.html)
- [Learning Certified Individually Fair Representations.](https://proceedings.neurips.cc/paper/2020/hash/55d491cf951b1b920900684d71419282-Abstract.html)
- [Metric-Free Individual Fairness in Online Learning.](https://proceedings.neurips.cc/paper/2020/hash/80b618ebcac7aa97a6dac2ba65cb7e36-Abstract.html)
- [Fairness constraints can help exact inference in structured prediction.](https://proceedings.neurips.cc/paper/2020/hash/8248a99e81e752cb9b41da3fc43fbe7f-Abstract.html)
- [Investigating Gender Bias in Language Models Using Causal Mediation Analysis.](https://proceedings.neurips.cc/paper/2020/hash/92650b2e92217715fe312e6fa7b90d82-Abstract.html)
- [Probabilistic Fair Clustering.](https://proceedings.neurips.cc/paper/2020/hash/95f2b84de5660ddf45c8a34933a2e66f-Abstract.html)
- [KFC: A Scalable Approximation Algorithm for $k$-center Fair Clustering.](https://proceedings.neurips.cc/paper/2020/hash/a6d259bfbfa2062843ef543e21d7ec8e-Abstract.html)
- [A Fair Classifier Using Kernel Density Estimation.](https://proceedings.neurips.cc/paper/2020/hash/ac3870fcad1cfc367825cda0101eee62-Abstract.html)
- [Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning.](https://proceedings.neurips.cc/paper/2020/hash/af9c0e0c1dee63e5acad8b7ed1a5be96-Abstract.html)
- [Fair Multiple Decision Making Through Soft Interventions.](https://proceedings.neurips.cc/paper/2020/hash/d0921d442ee91b896ad95059d13df618-Abstract.html)
- [Ensuring Fairness Beyond the Training Data.](https://proceedings.neurips.cc/paper/2020/hash/d6539d3b57159babf6a72e106beb45bd-Abstract.html)
- [How do fair decisions fare in long-term qualification?](https://proceedings.neurips.cc/paper/2020/hash/d6d231705f96d5a35aeb3a76402e49a3-Abstract.html)
- [Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference.](https://proceedings.neurips.cc/paper/2020/hash/d83de59e10227072a9c034ce10029c39-Abstract.html)
- [Fair regression via plug-in estimator and recalibration with statistical guarantees.](https://proceedings.neurips.cc/paper/2020/hash/ddd808772c035aed516d42ad3559be5f-Abstract.html)
- [Learning from Failure: De-biasing Classifier from Biased Classifier.](https://proceedings.neurips.cc/paper/2020/hash/eddc3427c5d77843c2253f1e799fe933-Abstract.html)
- [Fair Hierarchical Clustering.](https://proceedings.neurips.cc/paper/2020/hash/f10f2da9a238b746d2bac55759915f0d-Abstract.html)

### [SDM 2020](https://dblp.uni-trier.de/db/conf/sdm/sdm2020.html)

- [Bayesian Modeling of Intersectional Fairness: The Variance of Bias.](https://doi.org/10.1137/1.9781611976236.48)
- [On the Information Unfairness of Social Networks.](https://doi.org/10.1137/1.9781611976236.69)

### [UAI 2020](https://dblp.uni-trier.de/db/conf/uai/uai2020.html)

- [Fair Contextual Multi-Armed Bandits: Theory and Experiments.](http://www.auai.org/uai2020/proceedings/99_main_paper.pdf)
- [Towards Threshold Invariant Fair Classification.](http://www.auai.org/uai2020/proceedings/237_main_paper.pdf)
- [Verifying Individual Fairness in Machine Learning Models.](http://www.auai.org/uai2020/proceedings/327_main_paper.pdf)

### [WWW 2020](https://dblp.uni-trier.de/db/conf/www/www2020.html)

- [FairRec: Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms.](https://doi.org/10.1145/3366423.3380196)
- [Designing Fairly Fair Classifiers Via Economic Fairness Notions.](https://doi.org/10.1145/3366423.3380228)
- [Learning Model-Agnostic Counterfactual Explanations for Tabular Data.](https://doi.org/10.1145/3366423.3380087)

### Others 2020

#### [ASONAM 2020](https://dblp.uni-trier.de/db/conf/asunam/asonam2020.html)

- [Bias in Knowledge Graph Embeddings.](https://doi.org/10.1109/ASONAM49781.2020.9381459)
- [Debiasing Graph Representations via Metadata-Orthogonal Training.](https://doi.org/10.1109/ASONAM49781.2020.9381348)

## 2019

### [AAAI 2019](https://dblp.uni-trier.de/db/conf/aaai/aaai2019.html)

- [Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making](https://aaai.org/ojs/index.php/AAAI/article/view/3943)
- [Learning to Address Health Inequality in the United States with a Bayesian Decision Network](https://aaai.org/ojs/index.php/AAAI/article/view/3849)
- [Convex Formulations for Fair Principal Component Analysis](https://aaai.org/ojs/index.php/AAAI/article/view/3843)
- [Bayesian Fairness](https://aaai.org/ojs/index.php/AAAI/article/view/3824)
- [One-Network Adversarial Fairness](https://aaai.org/ojs/index.php/AAAI/article/view/4085)
- [Eliminating Latent Discrimination: Train Then Mask](https://aaai.org/ojs/index.php/AAAI/article/view/4251)
- [Path-Specific Counterfactual Fairness](https://aaai.org/ojs/index.php/AAAI/article/view/4777)

### [AISTATS 2019](https://dblp.uni-trier.de/db/conf/aistats/aistats2019.html)

- [Learning Controllable Fair Representations](http://proceedings.mlr.press/v89/song19a)

### [BIGDATA 2019](https://dblp.uni-trier.de/db/conf/bigdataconf/bigdataconf2019.html)

- [FAE: A Fairness-Aware Ensemble Framework](https://ieeexplore.ieee.org/document/9006487/)
- [Privacy Bargaining with Fairness: Privacy–Price Negotiation System for Applying Differential Privacy in Data Market Environments](https://ieeexplore.ieee.org/document/9006101/)
- [FairGAN+: Achieving Fair Data Generation and Classification through Generative Adversarial Nets](https://ieeexplore.ieee.org/document/9006322/)

### [CIKM 2019](https://dblp.uni-trier.de/db/conf/cikm/cikm2019.html)

- [AdaFair: Cumulative Fairness Adaptive Boosting](https://doi.org/10.1145/3357384.3357974)

### [FAT\* 2019](https://dblp.uni-trier.de/db/conf/fat/fat2019.html)

- [Explaining Explanations in AI](https://dl.acm.org/authorize?N675479)
- [Deep Weighted Averaging Classifiers](https://dl.acm.org/authorize?N675488)
- [Fairness and Abstraction in Sociotechnical Systems](https://dl.acm.org/authorize?N675344)
- [50 Years of Test (Un)fairness: Lessons for Machine Learning](https://dl.acm.org/authorize?N675343)
- [A comparative study of fairness-enhancing interventions in machine learning](https://dl.acm.org/authorize?N675474)
- [Beyond Open vs. Closed: Balancing Individual Privacy and Public Accountability in Data Sharing](https://dl.acm.org/authorize?N675460)
- [Analyzing Biases in Perception of Truth in News Stories and their Implications for Fact Checking](https://dl.acm.org/authorize?N675453)
- [Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments](https://dl.acm.org/authorize?N675458)
- [Problem Formulation and Fairness](https://dl.acm.org/authorize?N675342)
- [Fairness under unawareness: assessing disparity when protected class is unobserved](https://dl.acm.org/authorize?N675485)
- [On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection](https://dl.acm.org/authorize?N675341)
- [Actionable Recourse in Linear Classification](https://dl.acm.org/authorize?N675349)
- [A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media](https://dl.acm.org/authorize?N675456)
- [The Disparate Effects of Strategic Manipulation](https://dl.acm.org/authorize?N675477)
- [An Algorithmic Framework to Control Polarization in Personalization](https://dl.acm.org/authorize?N675466)
- [Racial categories in machine learning](https://dl.acm.org/authorize?N675470)
- [Downstream Effects of Affirmative Action](https://dl.acm.org/authorize?N675475)
- [Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data](https://dl.acm.org/authorize?N675486)
- [Model Reconstruction from Model Explanations](https://dl.acm.org/authorize?N675348)
- [Fair Allocation through Competitive Equilibrium from Generic Incomes](https://dl.acm.org/authorize?N675468)
- [An Empirical Study of Rich Subgroup Fairness for Machine Learning](https://dl.acm.org/authorize?N675459)
- [From Soft Classifiers to Hard Decisions: How fair can we be?](https://dl.acm.org/authorize?N675472)
- [Efficient Search for Diverse Coherent Explanations](https://dl.acm.org/authorize?N675340)
- [Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism](https://dl.acm.org/authorize?N675471)
- [A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity](https://dl.acm.org/authorize?N675469)
- [Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees](https://dl.acm.org/authorize?N675473)
- [Access to Population-Level Signaling as a Source of Inequality](https://dl.acm.org/authorize?N675476)
- [Measuring the Biases that Matter: The Ethical and Casual Foundations for Measures of Fairness in Algorithms](https://dl.acm.org/authorize?N675478)
- [Fairness-Aware Programming](https://dl.acm.org/authorize?N675462)
- [The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism](https://dl.acm.org/authorize?N675450)
- [Clear Sanctions, Vague Rewards: How China's Social Credit System Defines "Good" and "Bad" Behavior](https://dl.acm.org/authorize?N675455)
- [Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting](https://dl.acm.org/authorize?N675451)
- [Who's the Guinea Pig? Investigating Online A/B/n Tests In-The-Wild](https://dl.acm.org/authorize?N675461)
- [Fair Algorithms for Learning in Allocation Problems](https://dl.acm.org/authorize?N675467)
- [On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook](https://dl.acm.org/authorize?N675454)
- [Model Cards for Model Reporting](https://dl.acm.org/authorize?N675463)
- [Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 million people](https://dl.acm.org/authorize?N675457)
- [The Social Cost of Strategic Classification](https://dl.acm.org/authorize?N675464)
- [SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments](https://dl.acm.org/authorize?N675465)
- [Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations](https://dl.acm.org/authorize?N675452)
- [From Fair Decision Making To Social Equality](https://dl.acm.org/authorize?N675487)

### [ICDM 2019](https://dblp.uni-trier.de/db/conf/icdm/icdm2019.html)

- [Fair Adversarial Gradient Tree Boosting](https://ieeexplore.ieee.org/document/8970941/)
- [Rank-Based Multi-task Learning For Fair Regression](https://ieeexplore.ieee.org/document/8970984/)
- [A Distributed Fair Machine Learning Framework with Private Demographic Data Protection](https://ieeexplore.ieee.org/document/8970908/)

### [ICML 2019](https://dblp.uni-trier.de/db/conf/icml/icml2019.html)

- [Fair Regression: Quantitative Definitions and Reduction-Based Algorithms](http://proceedings.mlr.press/v97/agarwal19d.html)
- [Fairwashing: the risk of rationalization](http://proceedings.mlr.press/v97/aivodji19a.html)
- [Scalable Fair Clustering](http://proceedings.mlr.press/v97/backurs19a.html)
- [Compositional Fairness Constraints for Graph Embeddings](http://proceedings.mlr.press/v97/bose19a.html)
- [Understanding the Origins of Bias in Word Embeddings](http://proceedings.mlr.press/v97/brunet19a.html)
- [Proportionally Fair Clustering](http://proceedings.mlr.press/v97/chen19d.html)
- [Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints](http://proceedings.mlr.press/v97/cotter19b.html)
- [Flexibly Fair Representation Learning by Disentanglement](http://proceedings.mlr.press/v97/creager19a.html)
- [Obtaining Fairness using Optimal Transport Theory](http://proceedings.mlr.press/v97/gordaliza19a.html)
- [On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning](http://proceedings.mlr.press/v97/heidari19a.html)
- [Stable and Fair Classification](http://proceedings.mlr.press/v97/huang19e.html)
- [Differentially Private Fair Learning](http://proceedings.mlr.press/v97/jagielski19a.html)
- [Fair k-Center Clustering for Data Summarization](http://proceedings.mlr.press/v97/kleindessner19a.html)
- [Guarantees for Spectral Clustering with Fairness Constraints](http://proceedings.mlr.press/v97/kleindessner19b.html)
- [Making Decisions that Reduce Discriminatory Impacts](http://proceedings.mlr.press/v97/kusner19a.html)
- [The Implicit Fairness Criterion of Unconstrained Learning](http://proceedings.mlr.press/v97/liu19f.html)
- [Fairness-Aware Learning for Continuous Attributes and Treatments](http://proceedings.mlr.press/v97/mary19a.html)
- [Toward Controlling Discrimination in Online Ad Auctions](http://proceedings.mlr.press/v97/mehrotra19a.html)
- [Learning Optimal Fair Policies](http://proceedings.mlr.press/v97/nabi19a.html)
- [Fairness without Harm: Decoupled Classifiers with Preference Guarantees](http://proceedings.mlr.press/v97/ustun19a.html)
- [Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions](http://proceedings.mlr.press/v97/wang19l.html)
- [Fairness risk measures](http://proceedings.mlr.press/v97/williamson19a.html)

### [IJCAI 2019](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2019.html)

- [Counterfactual Fairness: Unidentification, Bound and Algorithm](https://doi.org/10.24963/ijcai.2019/199)
- [Achieving Causal Fairness through Generative Adversarial Networks](https://doi.org/10.24963/ijcai.2019/201)
- [FAHT: An Adaptive Fairness-aware Decision Tree Classifier](https://doi.org/10.24963/ijcai.2019/205)
- [Delayed Impact of Fair Machine Learning](https://doi.org/10.24963/ijcai.2019/862)
- [The Price of Local Fairness in Multistage Selection](https://www.ijcai.org/proceedings/2019/809)

### [KDD 2019](https://dblp.uni-trier.de/db/conf/kdd/kdd2019.html)

- [Fairness in Recommendation Ranking through Pairwise Comparisons](https://doi.org/10.1145/3292500.3330745)
- [Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search](https://doi.org/10.1145/3292500.3330691)
- [Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning](https://doi.org/10.1145/3292500.3330664)

### [NIPS 2019](https://dblp.uni-trier.de/db/conf/nips/nips2019.html)

- [Noise-tolerant fair classification](http://papers.nips.cc/paper/8322-noise-tolerant-fair-classification)
- [Envy-Free Classification](http://papers.nips.cc/paper/8407-envy-free-classification)
- [Discrimination in Online Markets: Effects of Social Bias on Learning from Reviews and Policy Design](http://papers.nips.cc/paper/8487-discrimination-in-online-markets-effects-of-social-bias-on-learning-from-reviews-and-policy-design)
- [PC-Fairness: A Unified Framework for Measuring Causality-based Fairness](http://papers.nips.cc/paper/8601-pc-fairness-a-unified-framework-for-measuring-causality-based-fairness)
- [Assessing Disparate Impact of Personalized Interventions: Identifiability and Bounds](http://papers.nips.cc/paper/8603-assessing-disparate-impact-of-personalized-interventions-identifiability-and-bounds)
- [The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric](http://papers.nips.cc/paper/8604-the-fairness-of-risk-scores-beyond-classification-bipartite-ranking-and-the-xauc-metric)
- [Fair Algorithms for Clustering](http://papers.nips.cc/paper/8741-fair-algorithms-for-clustering)
- [Characterizing Bias in Classifiers using Generative Models](http://papers.nips.cc/paper/8780-characterizing-bias-in-classifiers-using-generative-models)
- [Policy Learning for Fairness in Ranking](http://papers.nips.cc/paper/8782-policy-learning-for-fairness-in-ranking)
- [Average Individual Fairness: Algorithms, Generalization and Experiments](http://papers.nips.cc/paper/9034-average-individual-fairness-algorithms-generalization-and-experiments)
- [Paradoxes in Fair Machine Learning](http://papers.nips.cc/paper/9043-paradoxes-in-fair-machine-learning)
- [Unlocking Fairness: a Trade-off Revisited](http://papers.nips.cc/paper/9082-unlocking-fairness-a-trade-off-revisited)
- [Equal Opportunity in Online Classification with Partial Feedback](http://papers.nips.cc/paper/9099-equal-opportunity-in-online-classification-with-partial-feedback)
- [Learning Fairness in Multi-Agent Systems](http://papers.nips.cc/paper/9537-learning-fairness-in-multi-agent-systems)
- [On the Fairness of Disentangled Representations](http://papers.nips.cc/paper/9603-on-the-fairness-of-disentangled-representations)
- [Differential Privacy Has Disparate Impact on Model Accuracy](http://papers.nips.cc/paper/9681-differential-privacy-has-disparate-impact-on-model-accuracy)
- [Inherent Tradeoffs in Learning Fair Representations](http://papers.nips.cc/paper/9698-inherent-tradeoffs-in-learning-fair-representations)
- [Exploring Algorithmic Fairness in Robust Graph Covering Problems](http://papers.nips.cc/paper/9707-exploring-algorithmic-fairness-in-robust-graph-covering-problems)
- [Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification](https://papers.nips.cc/paper/9437-leveraging-labeled-and-unlabeled-data-for-consistent-fair-binary-classification)
- [Assessing Social and Intersectional Biases in Contextualized Word Representations](https://papers.nips.cc/paper/9479-assessing-social-and-intersectional-biases-in-contextualized-word-representations)
- [Offline Contextual Bandits with High Probability Fairness Guarantees](https://papers.nips.cc/paper/9630-offline-contextual-bandits-with-high-probability-fairness-guarantees)
- [Multi-Criteria Dimensionality Reduction with Applications to Fairness](https://papers.nips.cc/paper/9652-multi-criteria-dimensionality-reduction-with-applications-to-fairness)
- [Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness](https://papers.nips.cc/paper/9662-group-retention-when-using-machine-learning-in-sequential-decision-making-the-interplay-between-user-dynamics-and-fairness)

### [SDM 2019](https://dblp.uni-trier.de/db/conf/sdm/sdm2019.html)

- [Fairness in representation: quantifying stereotyping as a representational harm](https://doi.org/10.1137/1.9781611975673.90)

### [UAI 2019](https://dblp.uni-trier.de/db/conf/uai/uai2019.html)

- [The Sensitivity of Counterfactual Fairness to Unmeasured Confounding](http://auai.org/uai2019/proceedings/papers/213.pdf)
- [Wasserstein Fair Classification](http://auai.org/uai2019/proceedings/papers/315.pdf)

### [WWW 2019](https://dblp.uni-trier.de/db/conf/www/www2019.html)

- [Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality](https://doi.org/10.1145/3308558.3313559)
- [FARE: Diagnostics for Fair Ranking using Pairwise Error Metrics](https://doi.org/10.1145/3308558.3313443)
- [On Convexity and Bounds of Fairness-aware Classification](https://doi.org/10.1145/3308558.3313723)

### Others 2019

- [Fighting Fire with Fire: Using Antidote Data to Improve Polarization and Fairness of Recommender Systems](https://doi.org/10.1145/3289600.3291002), [WSDM 2019](https://dblp.uni-trier.de/db/conf/wsdm/wsdm2019.html)
- [Interventional Fairness: Causal Database Repair for Algorithmic Fairness.](https://doi.org/10.1145/3299869.3319901), [SIGMOD 2019](https://dblp.uni-trier.de/db/conf/sigmod/index.html)
- [Designing Fair Ranking Schemes.](https://doi.org/10.1145/3299869.3300079), [SIGMOD 2019](https://dblp.uni-trier.de/db/conf/sigmod/index.html)

## 2018

### [AAAI 2018](https://dblp.uni-trier.de/db/conf/aaai/aaai2018.html)

- [Non-Discriminatory Machine Learning through Convex Fairness Criteria](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16476)
- [Knowledge, Fairness, and Social Constraints](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17230)
- [Fairness in Decision-Making -- The Causal Explanation Formula](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16949)
- [Fair Inference on Outcomes](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16683)
- [Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16523)
- [Balancing Lexicographic Fairness and a Utilitarian Objective with Application to Kidney Exchange](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16196)

### [AISTATS 2018](https://dblp.uni-trier.de/db/conf/aistats/aistats2018.html)

- [Fast Threshold Tests for Detecting Discrimination](http://proceedings.mlr.press/v84/pierson18a)
- [Spectral Algorithms for Computing Fair Support Vector Machines](http://proceedings.mlr.press/v84/olfat18a)

### [BIGDATA 2018](https://dblp.uni-trier.de/db/conf/bigdataconf/bigdataconf2018.html)

- [FairGAN: Fairness-aware Generative Adversarial Networks](https://ieeexplore.ieee.org/document/8622525)

### [CIKM 2018](https://dblp.uni-trier.de/db/conf/cikm/cikm2018.html)

- [Fairness-Aware Tensor-Based Recommendation](https://dl.acm.org/citation.cfm?doid=3269206.3271795)
- [Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommendation Systems](https://dl.acm.org/citation.cfm?doid=3269206.3272027)

### [FAT\* 2018](https://dblp.uni-trier.de/db/conf/fat/fat2018.html)

- [Potential for Discrimination in Online Targeted Advertising](http://proceedings.mlr.press/v81/speicher18a.html)
- [Discrimination in Online Personalization: A Multidisciplinary Inquiry](http://proceedings.mlr.press/v81/datta18a.html)
- [Privacy for All: Ensuring Fair and Equitable Privacy Protections](http://proceedings.mlr.press/v81/ekstrand18a.html)
- ["Meaningful Information" and the Right to Explanation](http://proceedings.mlr.press/v81/selbst18a.html)
- [Interpretable Active Learning](http://proceedings.mlr.press/v81/phillips18a.html)
- [Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment](http://proceedings.mlr.press/v81/barabas18a.html)
- [Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification](http://proceedings.mlr.press/v81/buolamwini18a.html)
- [Analyze, Detect and Remove Gender Stereotyping from Bollywood Movies](http://proceedings.mlr.press/v81/madaan18a.html)
- [Mixed Messages? The Limits of Automated Social Media Content Analysis](http://proceedings.mlr.press/v81/duarte18a.html)
- [The cost of fairness in binary classification](http://proceedings.mlr.press/v81/menon18a.html)
- [Decoupled Classifiers for Group-Fair and Efficient Machine Learning](http://proceedings.mlr.press/v81/dwork18a.html)
- [A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions](http://proceedings.mlr.press/v81/chouldechova18a.html)
- [Fairness in Machine Learning: Lessons from Political Philosophy](http://proceedings.mlr.press/v81/binns18a.html)
- [Runaway Feedback Loops in Predictive Policing](http://proceedings.mlr.press/v81/ensign18a.html)
- [All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness](http://proceedings.mlr.press/v81/ekstrand18b.html)
- [Recommendation Independence](http://proceedings.mlr.press/v81/kamishima18a.html)
- [Balanced Neighborhoods for Multi-sided Fairness in Recommendation](http://proceedings.mlr.press/v81/burke18a.html)

### [ICDM 2018](https://dblp.uni-trier.de/db/conf/icdm/icdm2018.html)

- [Using Balancing Terms to Avoid Discrimination in Classification](https://ieeexplore.ieee.org/document/8594925)
967 |
968 | ### [ICML 2018](https://dblp.uni-trier.de/db/conf/icml/icml2018.html)
969 |
970 | - [Blind Justice: Fairness with Encrypted Sensitive Attributes](http://proceedings.mlr.press/v80/kilbertus18a.html)
971 | - [Scalable Deletion-Robust Submodular Maximization: Data Summarization with Privacy and Fairness Constraints](http://proceedings.mlr.press/v80/kazemi18a.html)
972 | - [Nonconvex Optimization for Fair Regression](http://proceedings.mlr.press/v80/komiyama18a.html)
973 | - [Fair and Diverse DPP-based Data Summarization](http://proceedings.mlr.press/v80/celis18a.html)
974 | - [Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness](http://proceedings.mlr.press/v80/kearns18a.html)
975 | - [Residual Unfairness in Fair Machine Learning from Prejudiced Data](http://proceedings.mlr.press/v80/kallus18a.html)
976 | - [A Reductions Approach to Fair Classification](http://proceedings.mlr.press/v80/agarwal18a.html)
977 | - [Probably Approximately Metric-Fair Learning](http://proceedings.mlr.press/v80/yona18a.html)
978 | - [Learning Adversarially Fair and Transferable Representations](http://proceedings.mlr.press/v80/madras18a.html)
979 | - [Delayed Impact of Fair Machine Learning](http://proceedings.mlr.press/v80/liu18c.html), *Best Paper Award*
980 | - [Fairness Without Demographics in Repeated Loss Minimization](http://proceedings.mlr.press/v80/hashimoto18a.html), *Best Paper Runner-Up Award*
981 |
982 | ### [IJCAI 2018](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2018.html)
983 |
984 | - [Achieving Non-Discrimination in Prediction](https://www.ijcai.org/proceedings/2018/430)
985 | - [Preventing Disparate Treatment in Sequential Decision Making](https://www.ijcai.org/proceedings/2018/311)
986 |
987 | ### [KDD 2018](https://dblp.uni-trier.de/db/conf/kdd/kdd2018.html)
988 |
989 | - [Fairness of Exposure in Rankings](https://dl.acm.org/citation.cfm?doid=3219819.3220088)
990 | - [On Discrimination Discovery and Removal in Ranked Data using Causal Graph](https://dl.acm.org/citation.cfm?doid=3219819.3220087)
991 | - [A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices](https://dl.acm.org/citation.cfm?doid=3219819.3220046)
992 |
993 | ### [NIPS 2018](https://dblp.uni-trier.de/db/conf/nips/nips2018.html)
994 |
995 | - [Fairness Behind a Veil of Ignorance: a Welfare Analysis for Automated Decision Making](http://papers.nips.cc/paper/7402-fairness-behind-a-veil-of-ignorance-a-welfare-analysis-for-automated-decision-making)
996 | - [Enhancing the Accuracy and Fairness of Human Decision Making](http://papers.nips.cc/paper/7448-enhancing-the-accuracy-and-fairness-of-human-decision-making)
997 | - [Online Learning with an Unknown Fairness Metric](http://papers.nips.cc/paper/7526-online-learning-with-an-unknown-fairness-metric)
998 | - [Empirical Risk Minimization under Fairness Constraints](http://papers.nips.cc/paper/7544-empirical-risk-minimization-under-fairness-constraints)
999 | - [Why Is My Classifier Discriminatory?](http://papers.nips.cc/paper/7613-why-is-my-classifier-discriminatory)
1000 | - [Hunting for Discriminatory Proxies in Linear Regression Models](http://papers.nips.cc/paper/7708-hunting-for-discriminatory-proxies-in-linear-regression-models)
1001 | - [Fairness Through Computationally Bounded Awareness](http://papers.nips.cc/paper/7733-fairness-through-computationally-bounded-awareness)
1002 | - [Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer](http://papers.nips.cc/paper/7853-predict-responsibly-improving-fairness-and-accuracy-by-learning-to-defer)
1003 | - [On Preserving Non-Discrimination When Combining Expert Advice](http://papers.nips.cc/paper/8058-on-preserving-non-discrimination-when-combining-expert-advice)
1004 | - [The Price of Fair PCA: One Extra Dimension](http://papers.nips.cc/paper/8294-the-price-of-fair-pca-one-extra-dimension)
1005 | - [Equality of Opportunity in Classification: A Causal Approach](http://papers.nips.cc/paper/7625-equality-of-opportunity-in-classification-a-causal-approach)
1006 | - [Invariant Representations without Adversarial Training](https://papers.nips.cc/paper/8122-invariant-representations-without-adversarial-training)
1007 | - [Learning to Pivot with Adversarial Networks](http://papers.nips.cc/paper/6699-learning-to-pivot-with-adversarial-networks)
1008 |
1009 | ### [SDM 2018](https://dblp.uni-trier.de/db/conf/sdm/sdm2018.html)
1010 |
1011 | > *null*
1012 |
1013 | ### [UAI 2018](https://dblp.uni-trier.de/db/conf/uai/uai2018.html)
1014 |
1015 | > *null*
1016 |
1017 | ### [WWW 2018](https://dblp.uni-trier.de/db/conf/www/www2018.html)
1018 |
1019 | - [Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification](https://dl.acm.org/citation.cfm?id=3186133)
1020 | - [Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction](https://dl.acm.org/citation.cfm?id=3186138)
1021 |
1022 | ### Others 2018
1023 |
1024 | - [Biases in the Facebook News Feed: a Case Study on the Italian Elections](https://ieeexplore.ieee.org/document/8508659), [ASONAM 2018](https://dblp.uni-trier.de/db/conf/asunam/asonam2018.html)
1025 | - [Unleashing Linear Optimizers for Group-Fair Learning and Optimization](http://proceedings.mlr.press/v75/alabi18a.html), [COLT 2018](https://dblp.uni-trier.de/db/conf/colt/colt2018.html)
1026 |
1027 | ## 2017
1028 |
1029 | ### [AAAI 2017](https://dblp.uni-trier.de/db/conf/aaai/aaai2017.html)
1030 |
1031 | > *null*
1032 |
1033 | ### [AISTATS 2017](https://dblp.uni-trier.de/db/conf/aistats/aistats2017.html)
1034 |
1035 | - [Fairness Constraints: Mechanisms for Fair Classification](http://proceedings.mlr.press/v54/zafar17a/zafar17a.pdf), [supplement](http://proceedings.mlr.press/v54/zafar17a/zafar17a-supp.pdf)
1036 |
1037 | ### [BIGDATA 2017](https://dblp.uni-trier.de/db/conf/bigdataconf/bigdataconf2017.html)
1038 |
1039 | - [Discrimination detection by causal effect estimation](https://ieeexplore.ieee.org/document/8258033)
1040 |
1041 | ### [CIKM 2017](https://dblp.uni-trier.de/db/conf/cikm/cikm2017.html)
1042 |
1043 | - [FA\*IR: A Fair Top-k Ranking Algorithm](http://doi.acm.org/10.1145/3132847.3132938)
1044 | - [Algorithmic Bias: Do Good Systems Make Relevant Documents More Retrievable?](http://doi.acm.org/10.1145/3132847.3133135)
1045 |
1046 | ### [FAT\* 2017](https://dblp.uni-trier.de/db/conf/fat/fat2017.html)
1047 |
1048 | > *null*
1049 |
1050 | ### [ICDM 2017](https://dblp.uni-trier.de/db/conf/icdm/icdm2017.html)
1051 |
1052 | > *null*
1053 |
1054 | ### [ICML 2017](https://dblp.uni-trier.de/db/conf/icml/icml2017.html)
1055 |
1056 | - [Fairness in Reinforcement Learning](http://proceedings.mlr.press/v70/jabbari17a.html)
1057 | - [Meritocratic Fairness for Cross-Population Selection](http://proceedings.mlr.press/v70/kearns17a.html)
1058 |
1059 | ### [IJCAI 2017](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2017.html)
1060 |
1061 | - [A Causal Framework for Discovering and Removing Direct and Indirect Discrimination](https://www.ijcai.org/proceedings/2017/549)
1062 |
1063 | ### [KDD 2017](https://dblp.uni-trier.de/db/conf/kdd/kdd2017.html)
1064 |
1065 | - [Algorithmic decision making and the cost of fairness](http://www.kdd.org/kdd2017/papers/view/algorithmic-decision-making-and-the-cost-of-fairness)
1066 | - [Achieving Non-Discrimination in Data Release](http://www.kdd.org/kdd2017/papers/view/achieving-non-discrimination-in-data-release)
1067 |
1068 | ### [NIPS 2017](https://dblp.uni-trier.de/db/conf/nips/nips2017.html)
1069 |
1070 | - [From Parity to Preference-based Notions of Fairness in Classification](https://papers.nips.cc/paper/6627-from-parity-to-preference-based-notions-of-fairness-in-classification)
1071 | - [Controllable Invariance through Adversarial Feature Learning](https://papers.nips.cc/paper/6661-controllable-invariance-through-adversarial-feature-learning)
1072 | - [Avoiding Discrimination through Causal Reasoning](https://papers.nips.cc/paper/6668-avoiding-discrimination-through-causal-reasoning)
1073 | - [Recycling Privileged Learning and Distribution Matching for Fairness](https://papers.nips.cc/paper/6670-recycling-privileged-learning-and-distribution-matching-for-fairness)
1074 | - [Beyond Parity: Fairness Objectives for Collaborative Filtering](http://papers.nips.cc/paper/6885-beyond-parity-fairness-objectives-for-collaborative-filtering)
1075 | - [Optimized Pre-Processing for Discrimination Prevention](https://papers.nips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention)
1076 | - [Counterfactual Fairness](https://papers.nips.cc/paper/6995-counterfactual-fairness)
1077 | - [Fair Clustering Through Fairlets](http://papers.nips.cc/paper/7088-fair-clustering-through-fairlets)
1078 | - [On Fairness and Calibration](https://papers.nips.cc/paper/7151-on-fairness-and-calibration)
1079 | - [When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness](https://papers.nips.cc/paper/7220-when-worlds-collide-integrating-different-counterfactual-assumptions-in-fairness)
1080 |
1081 | ### [SDM 2017](https://dblp.uni-trier.de/db/conf/sdm/sdm2017.html)
1082 |
1083 | > *null*
1084 |
1085 | ### [UAI 2017](https://dblp.uni-trier.de/db/conf/uai/uai2017.html)
1086 |
1087 | - [Fair Optimal Stopping Policy for Matching with Mediator](http://auai.org/uai2017/proceedings/papers/207.pdf), [supplement](http://auai.org/uai2017/proceedings/supplements/207.pdf)
1088 | - [Importance Sampling for Fair Policy Selection](http://auai.org/uai2017/proceedings/papers/225.pdf), [supplement](http://auai.org/uai2017/proceedings/supplements/225.pdf)
1089 |
1090 | ### [WWW 2017](https://dblp.uni-trier.de/db/conf/www/www2017.html)
1091 |
1092 | - [Fairness in Package-to-Group Recommendations](https://dl.acm.org/citation.cfm?doid=3038912.3052612)
1093 | - [Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment](https://dl.acm.org/citation.cfm?doid=3038912.3052660)
1094 |
1095 | ### Others 2017
1096 |
1097 | - [Learning Non-Discriminatory Predictors](http://proceedings.mlr.press/v65/woodworth17a/woodworth17a.pdf), [COLT 2017](https://dblp.uni-trier.de/db/conf/colt/colt2017.html)
1098 | - [Inherent Trade-Offs in the Fair Determination of Risk Scores](http://drops.dagstuhl.de/opus/volltexte/2017/8156/), [ITCS 2017](https://dblp.uni-trier.de/db/conf/innovations/innovations2017.html)
1099 |
1100 | ## 2016
1101 |
1102 | ### [AAAI 2016](https://dblp.uni-trier.de/db/conf/aaai/aaai2016.html)
1103 |
1104 | > *null*
1105 |
1106 | ### [AISTATS 2016](https://dblp.uni-trier.de/db/conf/aistats/aistats2016.html)
1107 |
1108 | > *null*
1109 |
1110 | ### [BIGDATA 2016](https://dblp.uni-trier.de/db/conf/bigdataconf/bigdataconf2016.html)
1111 |
1112 | > *null*
1113 |
1114 | ### [CIKM 2016](https://dblp.uni-trier.de/db/conf/cikm/cikm2016.html)
1115 |
1116 | > *null*
1117 |
1118 | ### [FAT\* 2016](https://dblp.uni-trier.de/db/conf/fat/fat2016.html)
1119 |
1120 | > *null*
1121 |
1122 | ### [ICDM 2016](https://dblp.uni-trier.de/db/conf/icdm/icdm2016.html)
1123 |
1124 | - [Auditing Black-Box Models for Indirect Influence](https://ieeexplore.ieee.org/document/7837824)
1125 |
1126 | ### [ICML 2016](https://dblp.uni-trier.de/db/conf/icml/icml2016.html)
1127 |
1128 | > *null*
1129 |
1130 | ### [IJCAI 2016](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2016.html)
1131 |
1132 | - [Situation Testing-Based Discrimination Discovery: A Causal Inference Approach](https://www.ijcai.org/Abstract/16/386)
1133 |
1134 | ### [KDD 2016](https://dblp.uni-trier.de/db/conf/kdd/kdd2016.html)
1135 |
1136 | > *null*
1137 |
1138 | ### [NIPS 2016](https://dblp.uni-trier.de/db/conf/nips/nips2016.html)
1139 |
1140 | - [Fairness in Learning: Classic and Contextual Bandits](https://papers.nips.cc/paper/6355-fairness-in-learning-classic-and-contextual-bandits)
1141 | - [Equality of Opportunity in Supervised Learning](https://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning)
1142 | - [Satisfying Real-world Goals with Dataset Constraints](http://papers.nips.cc/paper/6316-satisfying-real-world-goals-with-dataset-constraints)
1143 | - [Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings)
1144 |
1145 | ### [SDM 2016](https://dblp.uni-trier.de/db/conf/sdm/sdm2016.html)
1146 |
1147 | - [A Confidence-Based Approach for Balancing Fairness and Accuracy](http://epubs.siam.org/doi/abs/10.1137/1.9781611974348.17)
1148 |
1149 | ### [UAI 2016](https://dblp.uni-trier.de/db/conf/uai/uai2016.html)
1150 |
1151 | > *null*
1152 |
1153 | ### [WWW 2016](https://dblp.uni-trier.de/db/conf/www/www2016.html)
1154 |
1155 | > *null*
1156 |
1157 | ### Others 2016
1158 |
1159 | - [A KDD Process for Discrimination Discovery](https://link.springer.com/chapter/10.1007%2F978-3-319-46131-1_28), ECML/PKDD 2016
1160 |
1161 | ## 2015
1162 |
1163 | ### [AAAI 2015](https://dblp.uni-trier.de/db/conf/aaai/aaai2015.html)
1164 |
1165 | > *null*
1166 |
1167 | ### [AISTATS 2015](https://dblp.uni-trier.de/db/conf/aistats/aistats2015.html)
1168 |
1169 | > *null*
1170 |
1171 | ### [BIGDATA 2015](https://dblp.uni-trier.de/db/conf/bigdataconf/bigdataconf2015.html)
1172 |
1173 | > *null*
1174 |
1175 | ### [CIKM 2015](https://dblp.uni-trier.de/db/conf/cikm/cikm2015.html)
1176 |
1177 | > *null*
1178 |
1179 | ### [FAT\* 2015](https://dblp.uni-trier.de/db/conf/fat/fat2015.html)
1180 |
1181 | > *null*
1182 |
1183 | ### [ICDM 2015](https://dblp.uni-trier.de/db/conf/icdm/icdm2015.html)
1184 |
1185 | > *null*
1186 |
1187 | ### [ICML 2015](https://dblp.uni-trier.de/db/conf/icml/icml2015.html)
1188 |
1189 | > *null*
1190 |
1191 | ### [IJCAI 2015](https://dblp.uni-trier.de/db/conf/ijcai/ijcai2015.html)
1192 |
1193 | > *null*
1194 |
1195 | ### [KDD 2015](https://dblp.uni-trier.de/db/conf/kdd/kdd2015.html)
1196 |
1197 | - [Certifying and Removing Disparate Impact](https://dl.acm.org/citation.cfm?doid=2783258.2783311)
1198 |
1199 | ### [NIPS 2015](https://dblp.uni-trier.de/db/conf/nips/nips2015.html)
1200 |
1201 | > *null*
1202 |
1203 | ### [SDM 2015](https://dblp.uni-trier.de/db/conf/sdm/sdm2015.html)
1204 |
1205 | > *null*
1206 |
1207 | ### [UAI 2015](https://dblp.uni-trier.de/db/conf/uai/uai2015.html)
1208 |
1209 | > *null*
1210 |
1211 | ### [WWW 2015](https://dblp.uni-trier.de/db/conf/www/www2015.html)
1212 |
1213 | > *null*
1214 |
1215 | ## 2014
1216 |
1217 | - [Fair pattern discovery](https://dl.acm.org/citation.cfm?doid=2554850.2555043), SAC 2014
1218 | - [Anti-discrimination Analysis Using Privacy Attack Strategies](https://link.springer.com/chapter/10.1007%2F978-3-662-44851-9_44), ECML/PKDD 2014
1219 |
1220 | ## 2013
1221 |
1222 | - [Learning Fair Representations](http://proceedings.mlr.press/v28/zemel13.html), ICML 2013
1223 | - [Discrimination aware classification for imbalanced datasets](https://dl.acm.org/citation.cfm?doid=2505515.2507836), CIKM 2013
1224 |
1225 | ## 2012
1226 |
1227 | - [Fairness-Aware Classifier with Prejudice Remover Regularizer](https://link.springer.com/chapter/10.1007%2F978-3-642-33486-3_3), ECML/PKDD 2012
1228 | - [Fairness through awareness](https://dl.acm.org/citation.cfm?doid=2090236.2090255), ITCS 2012
1229 | - [Decision theory for discrimination-aware classification](https://ieeexplore.ieee.org/document/6413831), ICDM 2012
1230 | - [A study of top-k measures for discrimination discovery](https://dl.acm.org/citation.cfm?doid=2245276.2245303), SAC 2012
1231 |
1232 | ## 2011
1233 |
1234 | - [k-NN as an implementation of situation testing for discrimination discovery and prevention](https://dl.acm.org/citation.cfm?doid=2020408.2020488), KDD 2011
1235 | - [Handling Conditional Discrimination](https://ieeexplore.ieee.org/document/6137304), ICDM 2011
1236 | - [Discrimination prevention in data mining for intrusion and crime detection](https://ieeexplore.ieee.org/document/5949405), CICS 2011
1237 |
1238 | ## 2010
1239 |
1240 | - [Discrimination Aware Decision Tree Learning](https://ieeexplore.ieee.org/document/5694053), ICDM 2010
1241 | - [Classification with no discrimination by preferential sampling](https://dtai.cs.kuleuven.be/events/Benelearn2010/submissions/benelearn2010_submission_18.pdf), Benelearn 2010 (19th Machine Learning Conference of Belgium and The Netherlands)
1242 |
1243 | ## 2009
1244 |
1245 | - [Measuring Discrimination in Socially-Sensitive Decision Records](https://doi.org/10.1137/1.9781611972795.50), SDM 2009
1246 | - [Classifying without discriminating](https://ieeexplore.ieee.org/document/4909197), IC4 2009
1247 |
1248 | ## 2008
1249 |
1250 | - [Discrimination-aware data mining](https://doi.org/10.1145/1401890.1401959), KDD 2008
1251 |
--------------------------------------------------------------------------------
/journal.md:
--------------------------------------------------------------------------------
1 | # Journal Papers
2 |
3 | ## 2019
4 |
5 | - [Fairness in online social network timelines: Measurements, models and mechanism design](https://www.sciencedirect.com/science/article/pii/S0166531618302724), in *Performance Evaluation*, 2019
6 |
7 | ## 2018
8 |
9 | - [Model-based and actual independence for fairness-aware classification](http://link.springer.com/10.1007/s10618-017-0534-x), in *Data Mining and Knowledge Discovery*, 2018
10 | - [Search bias quantification: investigating political bias in social media and web search](http://link.springer.com/10.1007/s10791-018-9341-2), in *Information Retrieval Journal*, 2018
11 | - [Fairness in Criminal Justice Risk Assessments: The State of the Art](http://journals.sagepub.com/doi/10.1177/0049124118782533), in *Sociological Methods & Research*, 2018
12 | - [Data Pre-Processing for Discrimination Prevention: Information-Theoretic Optimization and Analysis](https://doi.org/10.1109/JSTSP.2018.2865887), in *IEEE Journal of Selected Topics in Signal Processing*, 2018
13 | - [Causal Modeling-Based Discrimination Discovery and Removal: Criteria, Bounds, and Algorithms](https://ieeexplore.ieee.org/abstract/document/8477109), in *IEEE Transactions on Knowledge and Data Engineering*, 2018
14 |
15 | ## 2017
16 |
17 | - [Conscientious Classification: A Data Scientist's Guide to Discrimination-Aware Classification](http://www.liebertpub.com/doi/10.1089/big.2016.0048), in *Big Data*, 2017
18 | - [Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments](https://www.liebertpub.com/doi/10.1089/big.2016.0047), in *Big Data*, 2017
19 | - [Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data](http://journals.sagepub.com/doi/10.1177/2053951717743530), in *Big Data & Society*, 2017
20 | - [Fair, Transparent, and Accountable Algorithmic Decision-making Processes](http://link.springer.com/10.1007/s13347-017-0279-x), in *Philosophy & Technology*, 2017
21 | - [Measuring discrimination in algorithmic decision making](http://link.springer.com/10.1007/s10618-017-0506-1), in *Data Mining and Knowledge Discovery*, 2017
22 | - [Exposing the probabilistic causal structure of discrimination](http://link.springer.com/10.1007/s41060-016-0040-z), in *International Journal of Data Science and Analytics*, 2017
23 | - [Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?](http://link.springer.com/10.1007/s13347-017-0293-z), in *Philosophy & Technology*, 2017
24 |
25 | ## 2016
26 |
27 | - [Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models](http://link.springer.com/10.1007/s10506-016-9182-5), in *Artificial Intelligence and Law*, 2016
28 |
29 | ## 2015
30 |
31 | - [Balancing Fairness and Efficiency: The Impact of Identity-Blind and Identity-Conscious Accountability on Applicant Screening](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0145208), in *PLOS ONE*, 2015
32 | - [Discrimination- and privacy-aware patterns](https://doi.org/10.1007/s10618-014-0393-7), in *Data Mining and Knowledge Discovery*, 2015
33 |
34 | ## 2014
35 |
36 | - [A multidisciplinary survey on discrimination analysis](http://www.journals.cambridge.org/abstract_S0269888913000039), in *The Knowledge Engineering Review*, 2014
37 | - [Generalization-based privacy preservation and discrimination prevention in data publishing and mining](http://link.springer.com/10.1007/s10618-014-0346-1), in *Data Mining and Knowledge Discovery*, 2014
38 | - [Combating discrimination using Bayesian networks](http://link.springer.com/10.1007/s10506-014-9156-4), in *Artificial Intelligence and Law*, 2014
39 | - [Better decision support through exploratory discrimination-aware data mining: foundations and empirical evidence](http://link.springer.com/10.1007/s10506-013-9152-0), in *Artificial Intelligence and Law*, 2014
40 | - [Using t-closeness anonymity to control for non-discrimination](http://www.tdp.cat/issues11/abs.a196a14.php), in *Transactions on Data Privacy*, 2014
41 | - [Introduction to special issue on computational methods for enforcing privacy and fairness in the knowledge society](https://link.springer.com/article/10.1007%2Fs10506-014-9153-7), in *Artificial Intelligence and Law*, 2014
42 | - [Responsibly Innovating Data Mining and Profiling Tools: A New Approach to Discrimination Sensitive and Privacy Sensitive Attributes](https://link.springer.com/chapter/10.1007%2F978-94-017-8956-1_19), in *Responsible Innovation 1*, 2014
43 |
44 | ## 2013
45 |
46 | - [A Methodology for Direct and Indirect Discrimination Prevention in Data Mining](http://ieeexplore.ieee.org/document/6175897/), in *IEEE Transactions on Knowledge and Data Engineering*, 2013
47 | - [Quantifying explainable discrimination and removing illegal discrimination in automated decision making](http://link.springer.com/10.1007/s10115-012-0584-8), in *Knowledge and Information Systems*, 2013
48 | - [Discrimination discovery in scientific project evaluation: A case study](https://www.sciencedirect.com/science/article/pii/S0957417413003023?via%3Dihub), in *Expert Systems with Applications*, 2013
49 | - [Discrimination and Privacy in the Information Society](https://link.springer.com/book/10.1007%2F978-3-642-30487-3#editorsandaffiliations), 2013
50 |
51 | ## 2012
52 |
53 | - [Data preprocessing techniques for classification without discrimination](http://link.springer.com/10.1007/s10115-011-0463-8), in *Knowledge and Information Systems*, 2012
54 |
55 | ## 2011
56 |
57 | - [Implementing Anti-discrimination Policies in Statistical Profiling Models](https://www.aeaweb.org/articles?id=10.1257/pol.3.3.206), in *American Economic Journal: Economic Policy*, 2011
58 |
59 | ## 2010
60 |
61 | - [Data Mining for Discrimination Discovery](http://doi.acm.org/10.1145/1754428.1754432), in *ACM Transactions on Knowledge Discovery from Data*, 2010
62 | - [Three naive Bayes approaches for discrimination-free classification](http://link.springer.com/10.1007/s10618-010-0190-x), in *Data Mining and Knowledge Discovery*, 2010
63 | - [Integrating induction and deduction for finding evidence of discrimination](https://doi.org/10.1007/s10506-010-9089-5), in *Artificial Intelligence and Law*, 2010
64 |
--------------------------------------------------------------------------------
/other.md:
--------------------------------------------------------------------------------
1 | # Other Resources
2 |
3 | ## Courses
4 |
5 | - [A Course on Fairness, Accountability and Transparency in Machine Learning](https://geomblog.github.io/fairness/), Utah, Fall 2016
6 | - [CS 294: Fairness in Machine Learning](https://fairmlclass.github.io), UC Berkeley, Fall 2017
7 |
8 | ## Tutorials
9 |
10 | - [Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining](http://www.francescobonchi.com/algorithmic_bias_tutorial.html), KDD 2016
11 | - [Anti-discrimination Learning: From Association to Causation](https://cci.drexel.edu/bigdata/bigdata2017/files/Tutorial8.pdf), IEEE Big Data 2017
12 | - [Fairness in Machine Learning](http://mrtz.org/nips17), NIPS 2017
13 | - [Defining and Designing Fair Algorithms](https://policylab.stanford.edu/projects/defining-and-designing-fair-algorithms.html), ICML 2018
14 | - [Anti-discrimination Learning: From Association to Causation](http://csce.uark.edu/~xintaowu/kdd18-tutorial/), KDD 2018
15 | - [Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned](https://sites.google.com/view/wsdm19-fairness-tutorial), WSDM 2019
16 | - [Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned](https://sites.google.com/view/www19-fairness-tutorial), WWW 2019
17 | - [Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned](https://sites.google.com/view/kdd19-fairness-tutorial), KDD 2019
18 |
19 | ## Workshops and Conferences
20 |
21 | - [Discrimination and Privacy-Aware Data Mining (DPADM)](https://sites.google.com/site/dpadm2012/), ICDM 2012
22 | - [FAT/ML](http://www.fatml.org/), since 2014
23 | - [Privacy and Discrimination in Data Mining](https://pddm16.eurecat.cat/), ICDM 2016
24 | - [Machine Learning and the Law](http://www.mlandthelaw.org/), NIPS 2016
25 | - [FATREC](https://piret.gitlab.io/fatrec/), since 2017
26 | - [Data & Algorithm Bias (DAB 2017)](http://dab.udd.cl/2017/), 2017
27 | - [AIES](http://www.aies-conference.com/), since 2018
28 | - [FAT* Conference](https://fatconference.org/), since 2018
29 | - [FATES](http://fates19.isti.cnr.it/), since 2019
30 |
31 | ## Datasets
32 |
33 | - [Adult](https://archive.ics.uci.edu/ml/datasets/adult)
34 | - [Bank marketing](https://archive.ics.uci.edu/ml/datasets/bank+marketing)
35 | - [Dutch Census](https://sites.google.com/site/conditionaldiscrimination/)
36 | - [German Credit](https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data))
37 | - [ProPublica COMPAS](https://github.com/propublica/compas-analysis)
38 |
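Papers on these benchmark datasets often report the disparate impact ratio against the four-fifths (80%) rule (see *Certifying and Removing Disparate Impact* above). A minimal sketch in plain Python, using hypothetical labels rather than an actual dataset:

```python
# Toy sketch of the disparate impact ratio (the "four-fifths rule") often
# reported on benchmarks such as Adult or COMPAS. All data here is hypothetical.

def disparate_impact(y_pred, groups, protected=1):
    """Ratio of the protected group's positive-outcome rate to the other group's."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rate[g] = sum(preds) / len(preds)
    return rate[protected] / rate[1 - protected]

y_pred = [1, 1, 1, 0, 1, 0, 0, 0]   # hypothetical favorable-outcome predictions
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # 1 marks the protected group
ratio = disparate_impact(y_pred, groups)
print(ratio < 0.8)  # a ratio below 0.8 suggests adverse impact under the rule
```

Here the protected group's rate is 1/4 against 3/4 for the other group, giving a ratio of about 0.33.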
39 | ## Surveys
40 |
41 | - [A comparative study of fairness-enhancing interventions in machine learning](https://dl.acm.org/doi/abs/10.1145/3287560.3287589)
42 | - [A multidisciplinary survey on discrimination analysis](http://www.journals.cambridge.org/abstract_S0269888913000039)
43 | - [A survey on bias and fairness in machine learning](https://dl.acm.org/doi/abs/10.1145/3457607)
44 | - [A survey on measuring indirect discrimination in machine learning](https://arxiv.org/abs/1511.00148)
45 | - [An Overview of Fairness in Clustering](https://ieeexplore.ieee.org/abstract/document/9541160/)
46 | - [Fairness Definitions Explained](http://fairware.cs.umass.edu/papers/Verma.pdf)
47 | - [Fairness in Learning-Based Sequential Decision Algorithms: A Survey](https://arxiv.org/abs/2001.04861)
48 | - [Fairness in learning-based sequential decision algorithms: A survey](https://link.springer.com/chapter/10.1007/978-3-030-60990-0_18)
49 | - [Fairness in machine learning: A survey](https://arxiv.org/abs/2010.04053)
50 | - [Fairness-aware machine learning](https://www.phil-fak.uni-duesseldorf.de/fileadmin/Redaktion/Institute/Sozialwissenschaften/Kommunikations-_und_Medienwissenschaft/KMW_I/Working_Paper/Dunkelau___Leuschel__2019__Fairness-Aware_Machine_Learning.pdf)
51 | - [Machine learning fairness notions: Bridging the gap with real-world applications](https://www.sciencedirect.com/science/article/pii/S0306457321001321)
52 | - [Machine learning testing: Survey, landscapes and horizons](https://ieeexplore.ieee.org/abstract/document/9000651/)
53 | - [Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning](https://dl.acm.org/doi/abs/10.1145/3292500.3330664)
54 | - [On formalizing fairness in prediction with machine learning](https://arxiv.org/abs/1710.03184)
55 | - [On the applicability of machine learning fairness notions](https://dl.acm.org/doi/abs/10.1145/3468507.3468511)
56 | - [On the applicability of ML fairness notions](https://arxiv.org/abs/2006.16745)
57 | - [Survey on Causal-based Machine Learning Fairness Notions](https://arxiv.org/abs/2010.09553)
58 | - [The measure and mismeasure of fairness: A critical review of fair machine learning](https://arxiv.org/abs/1808.00023)
59 |
--------------------------------------------------------------------------------
/packages.md:
--------------------------------------------------------------------------------
1 | # Packages
2 |
3 | - [AIF 360](https://github.com/Trusted-AI/AIF360): A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
4 | - [Fairlearn](https://github.com/fairlearn/fairlearn): A Python package to assess and improve fairness of machine learning models.
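As a toy illustration of the kind of group fairness metric both packages report, the sketch below computes a demographic parity difference in plain Python. The function name and data are hypothetical and do not reflect the API of either package:

```python
# Minimal sketch of demographic parity difference: the gap in
# positive-prediction rates between two groups. Hypothetical data only.

def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rates = {}
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates[1] - rates[0])

y_pred = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical model predictions
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # hypothetical group membership
print(demographic_parity_difference(y_pred, groups))  # 0.5 (3/4 vs 1/4)
```

A value of 0 would indicate equal positive-prediction rates across the two groups; the packages above offer many such metrics plus mitigation algorithms.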
5 |
--------------------------------------------------------------------------------
/sail.md:
--------------------------------------------------------------------------------
1 | # Papers from [SAIL](https://sail.uark.edu/)
2 |
3 | - Yongkai Wu and Xintao Wu, *Using Loglinear Model for Discrimination Discovery and Prevention*, in DSAA 2016, [link](http://ieeexplore.ieee.org/abstract/document/7796896/)
4 | - Lu Zhang, Yongkai Wu, and Xintao Wu, *Situation Testing-Based Discrimination Discovery: A Causal Inference Approach*, in IJCAI 2016, [link](https://dl.acm.org/citation.cfm?id=3061001)
5 | - Lu Zhang, Yongkai Wu, and Xintao Wu, *On Discrimination Discovery Using Causal Networks*, in SBP-BRiMS 2016, [link](https://link.springer.com/chapter/10.1007/978-3-319-39931-7_9)
6 | - Lu Zhang, Yongkai Wu, and Xintao Wu, *A Causal Framework for Discovering and Removing Direct and Indirect Discrimination*, in IJCAI 2017, [link](https://dl.acm.org/citation.cfm?id=3172438)
7 | - Lu Zhang, Yongkai Wu, and Xintao Wu, *Achieving Non-Discrimination in Data Release*, in KDD 2017, [link](https://dl.acm.org/citation.cfm?id=3098167)
8 | - Lu Zhang and Xintao Wu, *Anti-discrimination learning: a causal modeling-based framework*, in JDSA, [link](https://link.springer.com/article/10.1007/s41060-017-0058-x)
9 | - Yongkai Wu, Lu Zhang, and Xintao Wu, *On Discrimination Discovery and Removal in Ranked Data using Causal Graph*, in KDD 2018, [link](https://dl.acm.org/citation.cfm?id=3220087)
10 | - Lu Zhang, Yongkai Wu, and Xintao Wu, *Achieving Non-Discrimination in Prediction*, in IJCAI 2018, [link](http://www.ijcai.org/proceedings/2018/430)
11 | - Lu Zhang, Yongkai Wu, and Xintao Wu, *Causal Modeling-Based Discrimination Discovery and Removal: Criteria, Bounds, and Algorithms*, in TKDE, [link](https://ieeexplore.ieee.org/abstract/document/8477109)
12 | - Depeng Xu, Shuhan Yuan, Lu Zhang, and Xintao Wu, *FairGAN: Fairness-aware Generative Adversarial Networks*, in IEEE Big Data 2018, [link](https://ieeexplore.ieee.org/document/8622525)
13 | - Yongkai Wu, Lu Zhang, and Xintao Wu, *On Convexity and Bounds of Fairness-aware Classification*, in WWW 2019, [link](https://dl.acm.org/citation.cfm?id=3313723)
14 | - Depeng Xu, Shuhan Yuan and Xintao Wu, *Achieving Differential Privacy and Fairness in Logistic Regression*, in WWW 2019 workshop on FATES, [link](https://dl.acm.org/citation.cfm?id=3317584)
15 | - Depeng Xu, Shuhan Yuan, Lu Zhang, and Xintao Wu, *FairGAN+: Achieving Fair Data Generation and Fair Classification through Generative Adversarial Networks*, in KDD 2019 workshop on XAI
16 | - Depeng Xu, Yongkai Wu, Shuhan Yuan, Lu Zhang, and Xintao Wu, *Achieving Causal Fairness through Generative Adversarial Networks*, in IJCAI 2019, [link](https://www.ijcai.org/proceedings/2019/201)
17 | - Yongkai Wu, Lu Zhang, and Xintao Wu, *Counterfactual Fairness: Unidentification, Bound and Algorithm*, in IJCAI 2019, [link](https://www.ijcai.org/proceedings/2019/199)
18 |
19 | ## Tutorials by [SAIL](https://sail.uark.edu/)
20 |
21 | - [Anti-Discrimination Learning: from Association to Causation](http://csce.uark.edu/~xintaowu/publ/sbp17.pdf), SBP-BRiMS 2017, July 5, 2017. Washington DC, USA
22 | - [Anti-Discrimination Learning: from Association to Causation](https://cci.drexel.edu/bigdata/bigdata2017/files/Tutorial8.pdf), IEEE BigData 2017, Dec 13, 2017. Boston, MA, USA
23 | - [Anti-discrimination Learning: From Association to Causation](http://csce.uark.edu/~xintaowu/kdd18-tutorial/), KDD 2018, Aug 19, 2018. London, UK
24 |
--------------------------------------------------------------------------------
/workshop.md:
--------------------------------------------------------------------------------
1 | # Workshop Papers
2 |
3 | ## 2021
4 |
5 | ### [AIES 2021](https://www.aies-conference.com/2021/)
6 |
7 | #### Paper Presentations
8 |
9 | - [Artificial Intelligence and the Purpose of Social Systems.](https://doi.org/10.1145/3461702.3462526)
10 | - [A Multi-Agent Approach to Combine Reasoning and Learning for an Ethical Behavior.](https://doi.org/10.1145/3461702.3462515)
11 | - [Gender Bias and Under-Representation in Natural Language Processing Across Human Languages.](https://doi.org/10.1145/3461702.3462530)
12 | - [Blind Justice: Algorithmically Masking Race in Charging Decisions.](https://doi.org/10.1145/3461702.3462524)
13 | - [Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research.](https://doi.org/10.1145/3461702.3462519)
14 | - [Fair Machine Learning Under Partial Compliance.](https://doi.org/10.1145/3461702.3462521)
15 | - [Minimax Group Fairness: Algorithms and Experiments.](https://doi.org/10.1145/3461702.3462523)
16 | - [Co-design and Ethical Artificial Intelligence for Health: Myths and Misconceptions.](https://doi.org/10.1145/3461702.3462537)
17 | - [Blacklists and Redlists in the Chinese Social Credit System: Diversity, Flexibility, and Comprehensiveness.](https://doi.org/10.1145/3461702.3462535)
18 | - [Reflexive Design for Fairness and Other Human Values in Formal Models.](https://doi.org/10.1145/3461702.3462518)
19 | - [On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes.](https://doi.org/10.1145/3461702.3462538)
20 | - [Hard Choices and Hard Limits in Artificial Intelligence.](https://doi.org/10.1145/3461702.3462539)
21 | - [Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases.](https://doi.org/10.1145/3461702.3462536)
22 | - [Machine Learning Practices Outside Big Tech: How Resource Constraints Challenge Responsible Development.](https://doi.org/10.1145/3461702.3462527)
23 | - [Fairness and Data Protection Impact Assessments.](https://doi.org/10.1145/3461702.3462528)
24 | - [Towards Unbiased and Accurate Deferral to Multiple Experts.](https://doi.org/10.1145/3461702.3462516)
25 | - [Algorithmic Hiring in Practice: Recruiter and HR Professional's Perspectives on AI Use in Hiring.](https://doi.org/10.1145/3461702.3462531)
26 | - [Scaling Guarantees for Nearest Counterfactual Explanations.](https://doi.org/10.1145/3461702.3462514)
27 | - [Ethically Compliant Planning within Moral Communities.](https://doi.org/10.1145/3461702.3462522)
28 | - [Precarity: Modeling the Long Term Effects of Compounded Decisions on Individual Instability.](https://doi.org/10.1145/3461702.3462529)
29 | - [Moral Disagreement and Artificial Intelligence.](https://doi.org/10.1145/3461702.3462534)
30 | - [FairOD: Fairness-aware Outlier Detection.](https://doi.org/10.1145/3461702.3462517)
31 | - [Surveilling Surveillance: Estimating the Prevalence of Surveillance Cameras with Street View Data.](https://doi.org/10.1145/3461702.3462525)
32 | - [On the Privacy Risks of Model Explanations.](https://doi.org/10.1145/3461702.3462533)
33 | - [Measuring Automated Influence: Between Empirical Evidence and Ethical Values.](https://doi.org/10.1145/3461702.3462532)
34 | - [Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities.](https://doi.org/10.1145/3461702.3462540)
35 | - [Alienation in the AI-Driven Workplace.](https://doi.org/10.1145/3461702.3462520)
36 |
37 | #### Poster Presentations
38 |
39 | - [The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity.](https://doi.org/10.1145/3461702.3462563)
40 | - [Persistent Anti-Muslim Bias in Large Language Models.](https://doi.org/10.1145/3461702.3462624)
41 | - [Are AI Ethics Conferences Different and More Diverse Compared to Traditional Computer Science Conferences?](https://doi.org/10.1145/3461702.3462616)
42 | - [Ethical Implementation of Artificial Intelligence to Select Embryos in In Vitro Fertilization.](https://doi.org/10.1145/3461702.3462589)
43 | - [Measuring Model Biases in the Absence of Ground Truth.](https://doi.org/10.1145/3461702.3462557)
44 | - [Accounting for Model Uncertainty in Algorithmic Discrimination.](https://doi.org/10.1145/3461702.3462630)
45 | - [Beyond Reasonable Doubt: Improving Fairness in Budget-Constrained Decision Making using Confidence Thresholds.](https://doi.org/10.1145/3461702.3462575)
46 | - [Person, Human, Neither: The Dehumanization Potential of Automated Image Tagging.](https://doi.org/10.1145/3461702.3462567)
47 | - [Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs.](https://doi.org/10.1145/3461702.3462610)
48 | - [Automating Procedurally Fair Feature Selection in Machine Learning.](https://doi.org/10.1145/3461702.3462585)
49 | - [Explainable AI and Adoption of Financial Algorithmic Advisors: An Experimental Study.](https://doi.org/10.1145/3461702.3462565)
50 | - [Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty.](https://doi.org/10.1145/3461702.3462571)
51 | - [Ensuring Fairness under Prior Probability Shifts.](https://doi.org/10.1145/3461702.3462596)
52 | - [Envisioning Communities: A Participatory Approach Towards AI for Social Good.](https://doi.org/10.1145/3461702.3462612)
53 | - [AI Alignment and Human Reward.](https://doi.org/10.1145/3461702.3462570)
54 | - [Fairness and Machine Fairness.](https://doi.org/10.1145/3461702.3462577)
55 | - [Reconfiguring Diversity and Inclusion for AI Ethics.](https://doi.org/10.1145/3461702.3462622)
56 | - [Algorithmic Audit of Italian Car Insurance: Evidence of Unfairness in Access and Pricing.](https://doi.org/10.1145/3461702.3462569)
57 | - [Modeling and Guiding the Creation of Ethical Human-AI Teams.](https://doi.org/10.1145/3461702.3462573)
58 | - [What's Fair about Individual Fairness?](https://doi.org/10.1145/3461702.3462621)
59 | - [Learning to Generate Fair Clusters from Demonstrations.](https://doi.org/10.1145/3461702.3462558)
60 | - [Ethical Obligations to Provide Novelty.](https://doi.org/10.1145/3461702.3462555)
61 | - [Computing Plans that Signal Normative Compliance.](https://doi.org/10.1145/3461702.3462607)
62 | - [An AI Ethics Course Highlighting Explicit Ethical Agents.](https://doi.org/10.1145/3461702.3462552)
63 | - [The Dangers of Drowsiness Detection: Differential Performance, Downstream Impact, and Misuses.](https://doi.org/10.1145/3461702.3462593)
64 | - [Designing Shapelets for Interpretable Data-Agnostic Classification.](https://doi.org/10.1145/3461702.3462553)
65 | - [Computer Vision and Conflicting Values: Describing People with Automated Alt Text.](https://doi.org/10.1145/3461702.3462620)
66 | - [Who Gets What, According to Whom? An Analysis of Fairness Perceptions in Service Allocation.](https://doi.org/10.1145/3461702.3462568)
67 | - [The Earth Is Flat and the Sun Is Not a Star: The Susceptibility of GPT-2 to Universal Adversarial Triggers.](https://doi.org/10.1145/3461702.3462578)
68 | - [Situated Accountability: Ethical Principles, Certification Standards, and Explanation Methods in Applied AI.](https://doi.org/10.1145/3461702.3462564)
69 | - [Can We Obtain Fairness For Free?](https://doi.org/10.1145/3461702.3462614)
70 | - [Monitoring AI Services for Misuse.](https://doi.org/10.1145/3461702.3462566)
71 | - [Towards Equity and Algorithmic Fairness in Student Grade Prediction.](https://doi.org/10.1145/3461702.3462623)
72 | - [The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.](https://doi.org/10.1145/3461702.3462606)
73 | - [Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries.](https://doi.org/10.1145/3461702.3462605)
74 | - [Age Bias in Emotion Detection: An Analysis of Facial Emotion Recognition Performance on Young, Middle-Aged, and Older Adults.](https://doi.org/10.1145/3461702.3462609)
75 | - [AI and Shared Prosperity.](https://doi.org/10.1145/3461702.3462619)
76 | - [Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End.](https://doi.org/10.1145/3461702.3462597)
77 | - [Becoming Good at AI for Good.](https://doi.org/10.1145/3461702.3462599)
78 | - [Measuring Group Advantage: A Comparative Study of Fair Ranking Metrics.](https://doi.org/10.1145/3461702.3462588)
79 | - [A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized.](https://doi.org/10.1145/3461702.3462591)
80 | - [Ethical Data Curation for AI: An Approach based on Feminist Epistemology and Critical Theories of Race.](https://doi.org/10.1145/3461702.3462598)
81 | - [Risk Identification Questionnaire for Detecting Unintended Bias in the Machine Learning Development Lifecycle.](https://doi.org/10.1145/3461702.3462572)
82 | - [Participatory Algorithmic Management: Elicitation Methods for Worker Well-Being Models.](https://doi.org/10.1145/3461702.3462628)
83 | - [Feeding the Beast: Superintelligence, Corporate Capitalism and the End of Humanity.](https://doi.org/10.1145/3461702.3462581)
84 | - [The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media.](https://doi.org/10.1145/3461702.3462584)
85 | - [RAWLSNET: Altering Bayesian Networks to Encode Rawlsian Fair Equality of Opportunity.](https://doi.org/10.1145/3461702.3462618)
86 | - [Fair Equality of Chances for Prediction-based Decisions.](https://doi.org/10.1145/3461702.3462613)
87 | - [Towards Accountability in the Use of Artificial Intelligence for Public Administrations.](https://doi.org/10.1145/3461702.3462631)
88 | - [How Do the Score Distributions of Subpopulations Influence Fairness Notions?](https://doi.org/10.1145/3461702.3462601)
89 | - [More Similar Values, More Trust? - the Effect of Value Similarity on Trust in Human-Agent Interaction.](https://doi.org/10.1145/3461702.3462576)
90 | - [Causal Multi-level Fairness.](https://doi.org/10.1145/3461702.3462587)
91 | - [Unpacking the Expressed Consequences of AI Research in Broader Impact Statements.](https://doi.org/10.1145/3461702.3462608)
92 | - [Measuring Lay Reactions to Personal Data Markets.](https://doi.org/10.1145/3461702.3462582)
93 | - [Epistemic Reasoning for Machine Ethics with Situation Calculus.](https://doi.org/10.1145/3461702.3462586)
94 | - [Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms.](https://doi.org/10.1145/3461702.3462561)
95 | - [Understanding the Representation and Representativeness of Age in AI Data Sets.](https://doi.org/10.1145/3461702.3462590)
96 | - [Quantum Fair Machine Learning.](https://doi.org/10.1145/3461702.3462611)
97 | - [Fair Bayesian Optimization.](https://doi.org/10.1145/3461702.3462629)
98 | - [We Haven't Gone Paperless Yet: Why the Printing Press Can Help Us Understand Data and AI.](https://doi.org/10.1145/3461702.3462604)
99 | - [Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective.](https://doi.org/10.1145/3461702.3462603)
100 | - [GAEA: Graph Augmentation for Equitable Access via Reinforcement Learning.](https://doi.org/10.1145/3461702.3462615)
101 | - [Face Mis-ID: An Interactive Pedagogical Tool Demonstrating Disparate Accuracy Rates in Facial Recognition.](https://doi.org/10.1145/3461702.3462627)
102 | - [The Theory, Practice, and Ethical Challenges of Designing a Diversity-Aware Platform for Social Relations.](https://doi.org/10.1145/3461702.3462595)
103 | - [A Step Toward More Inclusive People Annotations for Fairness.](https://doi.org/10.1145/3461702.3462594)
104 | - [Fairness in the Eyes of the Data: Certifying Machine-Learning Models.](https://doi.org/10.1145/3461702.3462554)
105 | - [Rawlsian Fair Adaptation of Deep Learning Classifiers.](https://doi.org/10.1145/3461702.3462592)
106 | - [FaiR-N: Fair and Robust Neural Networks for Structured Data.](https://doi.org/10.1145/3461702.3462559)
107 | - [Machine Learning and the Meaning of Equal Treatment.](https://doi.org/10.1145/3461702.3462556)
108 | - [Digital Voodoo Dolls.](https://doi.org/10.1145/3461702.3462626)
109 | - [Comparing Equity and Effectiveness of Different Algorithms in an Application for the Room Rental Market.](https://doi.org/10.1145/3461702.3462600)
110 | - [Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring.](https://doi.org/10.1145/3461702.3462602)
111 | - [Differentially Private Normalizing Flows for Privacy-Preserving Density Estimation.](https://doi.org/10.1145/3461702.3462625)
112 | - [Governing Algorithmic Systems with Impact Assessments: Six Observations.](https://doi.org/10.1145/3461702.3462580)
113 | - [A Human-in-the-loop Framework to Construct Context-aware Mathematical Notions of Outcome Fairness.](https://doi.org/10.1145/3461702.3462583)
114 | - [Who's Responsible? Jointly Quantifying the Contribution of the Learning Algorithm and Data.](https://doi.org/10.1145/3461702.3462574)
115 | - [RelEx: A Model-Agnostic Relational Model Explainer.](https://doi.org/10.1145/3461702.3462562)
116 | - [Skilled and Mobile: Survey Evidence of AI Researchers' Immigration Preferences.](https://doi.org/10.1145/3461702.3462617)
117 |
118 | ### [FATES 2021](http://fates.isti.cnr.it/index.php/call-for-papers-2021/)
119 |
120 | - [Automating Fairness Configurations for Machine Learning.](https://doi.org/10.1145/3442442.3452301)
121 | - [Fairness beyond "equal": The Diversity Searcher as a Tool to Detect and Enhance the Representation of Socio-political Actors in News Media.](https://doi.org/10.1145/3442442.3452303)
122 | - [Characterizing and Comparing COVID-19 Misinformation Across Languages, Countries and Platforms.](https://doi.org/10.1145/3442442.3452304)
123 | - [Political Polarization and Platform Migration: A Study of Parler and Twitter Usage by United States of America Congress Members.](https://doi.org/10.1145/3442442.3452305)
124 | - [Auditing Source Diversity Bias in Video Search Results Using Virtual Agents.](https://doi.org/10.1145/3442442.3452306)
125 | - [AI Principles in Identifying Toxicity in Online Conversation: Keynote at the Third Workshop on Fairness, Accountability, Transparency, Ethics and Society on the Web.](https://doi.org/10.1145/3442442.3452307)
126 |
127 | ### Others
128 |
129 | ## 2020
130 |
131 | ### [AIES 2020](https://www.aies-conference.com/2020/)
132 |
133 | #### Paper Presentations
134 |
135 | - [Exploring AI Futures Through Role Play.](https://doi.org/10.1145/3375627.3375817)
136 | - [Activism by the AI Community: Analysing Recent Achievements and Future Prospects.](https://doi.org/10.1145/3375627.3375814)
137 | - [Fair Allocation through Selective Information Acquisition.](https://doi.org/10.1145/3375627.3375823)
138 | - [The Problem with Intelligence: Its Value-Laden History and the Future of AI.](https://doi.org/10.1145/3375627.3375813)
139 | - [Learning Occupational Task-Shares Dynamics for the Future of Work.](https://doi.org/10.1145/3375627.3375826)
140 | - [Social Contracts for Non-Cooperative Games.](https://doi.org/10.1145/3375627.3375829)
141 | - [The AI Liability Puzzle and a Fund-Based Work-Around.](https://doi.org/10.1145/3375627.3375806)
142 | - [Algorithmic Fairness from a Non-ideal Perspective.](https://doi.org/10.1145/3375627.3375828)
143 | - [Bayesian Sensitivity Analysis for Offline Policy Evaluation.](https://doi.org/10.1145/3375627.3375822)
144 | - [Biased Priorities, Biased Outcomes: Three Recommendations for Ethics-oriented Data Annotation Practices.](https://doi.org/10.1145/3375627.3375809)
145 | - [Defining AI in Policy versus Practice.](https://doi.org/10.1145/3375627.3375835)
146 | - ["How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations.](https://doi.org/10.1145/3375627.3375833)
147 | - [Normative Principles for Evaluating Fairness in Machine Learning.](https://doi.org/10.1145/3375627.3375808)
148 | - [Good Explanation for Algorithmic Transparency.](https://doi.org/10.1145/3375627.3375821)
149 | - [Does AI Qualify for the Job?: A Bidirectional Model Mapping Labour and AI Intensities.](https://doi.org/10.1145/3375627.3375831)
150 | - [An Empirical Approach to Capture Moral Uncertainty in AI.](https://doi.org/10.1145/3375627.3375805)
151 | - [When Trusted Black Boxes Don't Agree: Incentivizing Iterative Improvement and Accountability in Critical Software Systems.](https://doi.org/10.1145/3375627.3375807)
152 | - [When Your Only Tool Is A Hammer: Ethical Limitations of Algorithmic Fairness Solutions in Healthcare Machine Learning.](https://doi.org/10.1145/3375627.3375824)
153 | - [Ethics for AI Writing: The Importance of Rhetorical Context.](https://doi.org/10.1145/3375627.3375811)
154 | - [Diversity and Inclusion Metrics in Subset Selection.](https://doi.org/10.1145/3375627.3375832)
155 | - [Learning Norms from Stories: A Prior for Value Aligned Agents.](https://doi.org/10.1145/3375627.3375825)
156 | - [Balancing the Tradeoff between Profit and Fairness in Rideshare Platforms during High-Demand Hours.](https://doi.org/10.1145/3375627.3375818)
157 | - [Technocultural Pluralism: A "Clash of Civilizations" in Technology?](https://doi.org/10.1145/3375627.3375834)
158 | - [Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society.](https://doi.org/10.1145/3375627.3375803)
159 | - [Algorithmized but not Atomized? How Digital Platforms Engender New Forms of Worker Solidarity in Jakarta.](https://doi.org/10.1145/3375627.3375816)
160 | - [Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.](https://doi.org/10.1145/3375627.3375820)
161 | - [Human Comprehension of Fairness in Machine Learning.](https://doi.org/10.1145/3375627.3375819)
162 | - [What's Next for AI Ethics, Policy, and Governance? A Global Overview.](https://doi.org/10.1145/3375627.3375804)
163 | - [Trade-offs in Fair Redistricting.](https://doi.org/10.1145/3375627.3375802)
164 | - [CERTIFAI: A Common Framework to Provide Explanations and Analyse the Fairness and Robustness of Black-box Models.](https://doi.org/10.1145/3375627.3375812)
165 | - [The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?](https://doi.org/10.1145/3375627.3375815)
166 | - [Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods.](https://doi.org/10.1145/3375627.3375830)
167 | - [U.S. Public Opinion on the Governance of Artificial Intelligence.](https://doi.org/10.1145/3375627.3375827)
168 | - [Different "Intelligibility" for Different Folks.](https://doi.org/10.1145/3375627.3375810)
169 |
170 | #### Poster Presentations
171 |
172 | - [AI and Holistic Review: Informing Human Reading in College Admissions.](https://doi.org/10.1145/3375627.3375871)
173 | - [Robot Rights?: Let's Talk about Human Welfare Instead.](https://doi.org/10.1145/3375627.3375855)
174 | - [Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making.](https://doi.org/10.1145/3375627.3375870)
175 | - [A Just Approach Balancing Rawlsian Leximax Fairness and Utilitarianism.](https://doi.org/10.1145/3375627.3375844)
176 | - [Should Artificial Intelligence Governance be Centralised?: Design Lessons from History.](https://doi.org/10.1145/3375627.3375857)
177 | - [An Invitation to System-wide Algorithmic Fairness.](https://doi.org/10.1145/3375627.3375860)
178 | - [Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments.](https://doi.org/10.1145/3375627.3375861)
179 | - [Toward Implementing the Agent-Deed-Consequence Model of Moral Judgment in Autonomous Vehicles.](https://doi.org/10.1145/3375627.3375853)
180 | - [Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification.](https://doi.org/10.1145/3375627.3375875)
181 | - [Proposal for Type Classification for Building Trust in Medical Artificial Intelligence Systems.](https://doi.org/10.1145/3375627.3375846)
182 | - [Adoption Dynamics and Societal Impact of AI Systems in Complex Networks.](https://doi.org/10.1145/3375627.3375847)
183 | - [Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization.](https://doi.org/10.1145/3375627.3375852)
184 | - [More Than "If Time Allows": The Role of Ethics in AI Education.](https://doi.org/10.1145/3375627.3375868)
185 | - [A Geometric Solution to Fair Representations.](https://doi.org/10.1145/3375627.3375864)
186 | - [Measuring Fairness in an Unfair World.](https://doi.org/10.1145/3375627.3375854)
187 | - [Towards Just, Fair and Interpretable Methods for Judicial Subset Selection.](https://doi.org/10.1145/3375627.3375848)
188 | - [Monitoring Misuse for Accountable 'Artificial Intelligence as a Service'.](https://doi.org/10.1145/3375627.3375873)
189 | - ["The Global South is everywhere, but also always somewhere": National Policy Narratives and AI Justice.](https://doi.org/10.1145/3375627.3375859)
190 | - [Ethics of Food Recommender Applications.](https://doi.org/10.1145/3375627.3375874)
191 | - [Artificial Intelligence and Indigenous Perspectives: Protecting and Empowering Intelligent Human Beings.](https://doi.org/10.1145/3375627.3375845)
192 | - [The Windfall Clause: Distributing the Benefits of AI for the Common Good.](https://doi.org/10.1145/3375627.3375842)
193 | - [Steps Towards Value-Aligned Systems.](https://doi.org/10.1145/3375627.3375872)
194 | - [Contextual Analysis of Social Media: The Promise and Challenge of Eliciting Context in Social Media Posts with Natural Language Processing.](https://doi.org/10.1145/3375627.3375841)
195 | - [The Perils of Objectivity: Towards a Normative Framework for Fair Judicial Decision-Making.](https://doi.org/10.1145/3375627.3375869)
196 | - [FACE: Feasible and Actionable Counterfactual Explanations.](https://doi.org/10.1145/3375627.3375850)
197 | - [Balancing the Tradeoff Between Clustering Value and Interpretability.](https://doi.org/10.1145/3375627.3375843)
198 | - [Data Augmentation for Discrimination Prevention and Bias Disambiguation.](https://doi.org/10.1145/3375627.3375865)
199 | - [Meta Decision Trees for Explainable Recommendation Systems.](https://doi.org/10.1145/3375627.3375876)
200 | - [Why Reliabilism Is not Enough: Epistemic and Moral Justification in Machine Learning.](https://doi.org/10.1145/3375627.3375866)
201 | - [Social and Governance Implications of Improved Data Efficiency.](https://doi.org/10.1145/3375627.3375863)
202 | - [Conservative Agency via Attainable Utility Preservation.](https://doi.org/10.1145/3375627.3375851)
203 | - [A Deontic Logic for Programming Rightful Machines.](https://doi.org/10.1145/3375627.3375867)
204 | - [A Fairness-aware Incentive Scheme for Federated Learning.](https://doi.org/10.1145/3375627.3375840)
205 | - [Joint Optimization of AI Fairness and Utility: A Human-Centered Approach.](https://doi.org/10.1145/3375627.3375862)
206 | - [Assessing Post-hoc Explainability of the BKT Algorithm.](https://doi.org/10.1145/3375627.3375856)
207 | - [Deepfakes for Medical Video De-Identification: Privacy Protection and Diagnostic Information Preservation.](https://doi.org/10.1145/3375627.3375849)
208 | - [Arbiter: A Domain-Specific Language for Ethical Machine Learning.](https://doi.org/10.1145/3375627.3375858)
209 |
210 | ### [FATES 2020](http://fates.isti.cnr.it/index.php/fates-2020/)
211 |
212 | - [A Unifying Framework for Fairness-Aware Influence Maximization.](https://doi.org/10.1145/3366424.3383555)
213 | - [Convex Fairness Constrained Model Using Causal Effect Estimators.](https://doi.org/10.1145/3366424.3383556)
214 | - [Fairness of Classification Using Users' Social Relationships in Online Peer-To-Peer Lending.](https://doi.org/10.1145/3366424.3383557)
215 | - [Fairness through Equality of Effort.](https://doi.org/10.1145/3366424.3383558)
216 | - [Quantifying Gender Bias in Different Corpora.](https://doi.org/10.1145/3366424.3383559)
217 | - [Studying Political Bias via Word Embeddings.](https://doi.org/10.1145/3366424.3383560)
218 | - [Representativeness of Abortion Legislation Debate on Twitter: A Case Study in Argentina and Chile.](https://doi.org/10.1145/3366424.3383561)
219 | - [Mitigating Cognitive Biases in Machine Learning Algorithms for Decision Making.](https://doi.org/10.1145/3366424.3383562)
220 | - [Biases on Social Media Data: (Keynote Extended Abstract).](https://doi.org/10.1145/3366424.3383564)
221 |
222 | ### Others
223 |
224 | #### [SafeAI 2020](https://safeai.webs.upv.es/)
225 |
226 | - [Fair Enough: Improving Fairness in Budget-Constrained Decision Making Using Confidence Thresholds.](http://ceur-ws.org/Vol-2560/paper24.pdf)
227 | - [A Study on Multimodal and Interactive Explanations for Visual Question Answering.](http://ceur-ws.org/Vol-2560/paper44.pdf)
228 | - [You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods.](http://ceur-ws.org/Vol-2560/paper8.pdf)
229 | - [Algorithmic Discrimination: Formulation and Exploration in Deep Learning-based Face Biometrics.](http://ceur-ws.org/Vol-2560/paper10.pdf)
230 |
231 | ## 2019
232 |
233 | ### [AIES 2019](http://www.aies-conference.com/2019/)
234 |
235 | #### Accepted oral papers of AIES 2019
236 |
237 | - [Killer Robots and Human Dignity](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_6.pdf)
238 | - [Legible Normativity for AI Alignment: The Value of Silly Rules](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_21.pdf)
239 | - [Reinforcement learning and inverse reinforcement learning with system 1 and system 2](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_25.pdf)
240 | - [Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_27.pdf)
241 | - [Guiding Prosecutorial Decisions with an Interpretable Statistical Model](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_45.pdf)
242 | - [Human Trust Measurement Using an Immersive Virtual Reality Autonomous Vehicle Simulator](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_49.pdf)
243 | - [Shared Moral Foundations of Embodied Artificial Intelligence](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_52.pdf)
244 | - [Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_54.pdf)
245 | - [Active Fairness in Algorithmic Decision Making](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_70.pdf)
246 | - [Speaking on Behalf of: Representation, Delegation, and Authority in Computational Text Analysis](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_77.pdf)
247 | - [AI + Art = Human](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_80.pdf)
248 | - [Fair Transfer Learning with Missing Protected Attributes](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_82.pdf)
249 | - [AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_83.pdf)
- [A framework for benchmarking discrimination-aware models in machine learning](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_96.pdf)
- [On Influencing Individual Behavior for Reducing Transportation Energy Expenditure in a Large Population](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_98.pdf)
- [Robots Can Be More Than Black And White: Examining Racial Bias Towards Robots](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_99.pdf)
- [TED: Teaching AI to Explain its Decisions](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_128.pdf)
- [How Technological Advances Can Reveal Rights](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_129.pdf)
- [Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_130.pdf)
- [Using deceased-donor kidneys to initiate chains of living donor kidney paired donations: algorithm and experimentation](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_131.pdf)
- [Regulating Lethal and Harmful Autonomy: Drafting a Protocol VI of the Convention on Certain Conventional Weapons](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_142.pdf)
- [Understanding Black Box Model Behavior through Subspace Explanations](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_143.pdf)
- [Theories of parenting and their application to artificial intelligence](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_147.pdf)
- [Learning Existing Social Conventions via Observationally Augmented Self-Play](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_158.pdf)
- [Inferring Work Task Automatability from AI Expert Evidence](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_166.pdf)
- [The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_188.pdf)
- [Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_205.pdf)
- [Balancing the Benefits of Autonomous Vehicles](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_207.pdf)
- [Tact in Noncompliance: The Need for Pragmatically Apt Responses to Unethical Commands](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_215.pdf)
- [Paradoxes in Fair Computer-Aided Decision Making](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_216.pdf)
- [Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products](http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf)
- [How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_229.pdf)
- [Compensation at the Crossroads: Autonomous Vehicles and Alternative Victim Compensation Schemes](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_230.pdf)
- [Incomplete Contracting and AI Alignment](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_231.pdf)

#### Accepted poster papers of AIES 2019

- [(When) Can AI Bots Lie?](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_1.pdf)
- [Crowdsourcing with Fairness, Diversity and Budget Constraints](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_12.pdf)
- [Modelling and Influencing the AI Bidding War: A Research Agenda](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_28.pdf)
- [IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_29.pdf)
- [The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_32.pdf)
- [Counterfactual Fairness in Text Classification through Robustness](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_35.pdf)
- [Taking Advantage of Multitask Learning for Fair Classification](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_37.pdf)
- [Explanatory Interactive Machine Learning](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_41.pdf)
- [Multiaccuracy: Black-Box Post-Processing for Fairness in Classification](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_44.pdf)
- [Mapping Informal Settlements in Developing Countries using Machine Learning and Low Resolution Multi-spectral Data](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_48.pdf)
- [Equalized Odds Implies Partially Equalized Outcomes Under Realistic Assumptions](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_55.pdf)
- [Costs and Benefits of Fair Representation Learning](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_56.pdf)
- [Ethically Aligned Opportunistic Scheduling for Productive Laziness](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_65.pdf)
- [Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_68.pdf)
- [The Right To Confront Your Accuser: Opening the Black Box of Forensic DNA Software](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_72.pdf)
- [Algorithmic greenlining: An approach to increase diversity](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_87.pdf)
- [Requirements for an Artificial Agent with Norm Competence](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_116.pdf)
- [Loss-Aversively Fair Classification](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_121.pdf)
- [Epistemic Therapy for Bias in Automated Decision-Making](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_141.pdf)
- [Mapping Missing Population in Rural India: A Deep Learning Approach with Satellite Imagery](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_157.pdf)
- [Creating Fair Models of Atherosclerotic Cardiovascular Disease Risk](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_159.pdf)
- [A Comparative Analysis of Emotion-Detecting AI Systems with Respect to Algorithm Performance and Dataset Diversity](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_161.pdf)
- [Framing Artificial Intelligence in American Newspapers](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_162.pdf)
- [Degenerate Feedback Loops in Recommender Systems](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_187.pdf)
- [Global Explanations of Neural Networks: Mapping the Landscape of Predictions](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_198.pdf)
- ["Scary Robots": Examining Public Responses to AI](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_200.pdf)
- [TrolleyMod v1.0: An Open-Source Simulation and Data-Collection Platform for Ethical Decision Making in Autonomous Vehicles](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_210.pdf)
- [Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_220.pdf)
- [Human-AI Learning Performance in Multi-Armed Bandits](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_225.pdf)
- [Perceptions of Domestic Robots’ Normative Behavior Across Cultures](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_232.pdf)
- [Toward the Engineering of Virtuous Machines](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_240.pdf)
- [A Formal Approach to Explainability](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_244.pdf)
- [Rightful Machines and Dilemmas](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_245.pdf)
- [The Seductive Allure of Artificial Intelligence-Powered Neurotechnology](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_252.pdf)
- [What are the biases in my word embedding?](http://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_253.pdf)

### [FATES 2019](http://fates19.isti.cnr.it/)

- Achieving Differential Privacy and Fairness in Logistic Regression
- Unsupervised Topic Extraction from Privacy Policies
- Collaborative Explanation of Deep Models with Limited Interaction for Trade Secret and Privacy Preservation
- Algorithms for Fair Team Formation in Online Labour Marketplaces
- Fairness in the social influence maximization problem
- Hegemony in Social Media and the effect of recommendations
- Managing Bias in AI
- Privacy-aware Linked Widgets
- Privacy and Transparency within the 4IR: Two faces of the same coin
- Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification
- On Preserving Sensitive Information of Multiple Aspect Trajectories In-House
- Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists
- Trust and trustworthiness in social recommender systems
- Empirical analysis of bias in voice based personal assistants
- Black Hat Trolling, White Hat Trolling, and Hacking the Attention Landscape
- Uncovering Social Media Bots: a Transparency-focused Approach
- Can Location-Based Searches Create Exposure Bias?, *discussion paper*
- What’s in a Name? The Need for Scalable External Audit Infrastructure, *discussion paper*
- In Defense of Synthetic Data, *discussion paper*

## 2018

### [AIES 2018](http://www.aies-conference.com/2018/)

#### Accepted oral papers of AIES 2018

- [Measuring and Mitigating Unintended Bias in Text Classification](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_9.pdf)
- [Value Alignment, Fair Play, and the Rights of Service Robots](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_10.pdf)
- [Regulating Artificial Intelligence: Proposal for a Global Solution](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_13.pdf)
- [Using Education as a Model to Capture Good-Faith Effort for Autonomous Systems](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_21.pdf)
- [Exploiting moral values to choose the right norms](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_32.pdf)
- [Software Malpractice in the Age of AI: A Guide for the Wary Tech Company](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_43.pdf)
- [Non-Discriminatory Machine Learning through Convex Fairness Criteria](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_59.pdf)
- [Fair Forests: Regularized Tree Induction to Minimize Model Bias](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_61.pdf)
- [A framework for grounding the moral status of intelligent machines](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_64.pdf)
- [Rethinking AI Strategy and Policy as Entangled Super Wicked Problems](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_70.pdf)
- [An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_77.pdf)
- [Incorrigibility in the CIRL Framework](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_84.pdf)
- [When Do People Want AI to Make Decisions?](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_86.pdf)
- [Toward Non-Intuition-Based Machine Ethics](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_88.pdf)
- [Embodiment, Anthropomorphism, and Intellectual Property Rights for AI Creations](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_89.pdf)
- [Partially Generative Neural Networks for Gang Crime Classification with Partial Information](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_93.pdf)
- [Detecting Bias in Black-Box Models Using Transparent Model Distillation](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_96.pdf)
- [Designing Non-greedy Reinforcement Learning Agents with Diminishing Reward Shaping](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_97.pdf)
- [The Dark side of Ethical Robots](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_98.pdf)
- [Jill Watson Doesn't Care if You're Pregnant: Grounding AI Ethics in Empirical Studies](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_104.pdf)
- [Purple Feed: Identifying High Consensus News Posts on Social Media](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_105.pdf)
- [A Formalization of Kant's Second Formulation of the Categorical Imperative](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_110.pdf)
- [Regulating Autonomous Vehicles: A Policy Proposal](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_111.pdf)
- [Towards an "Ethics by Design" methodology for AI research projects](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_115.pdf)
- [Regulating for 'normal AI accidents': operational lessons for the responsible governance of AI deployment](http://www.aies-conference.com/wp-content/papers/mai/AIES_2018_paper_118.pdf)
- [On the distinction between implicit and explicit ethical agency](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_129.pdf)
- [A Computational Model of Commonsense Moral Decision Making](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_130.pdf)
- [Socially-Aware Navigation Using Topological Maps and Social Norm Learning](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_136.pdf)
- [Transparency and Explanation in Deep Reinforcement Learning Neural Networks](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_157.pdf)
- [Ethical Challenges in Data-Driven Dialogue Systems](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_159.pdf)
- [An AI Race: Rhetoric and Risks](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_163.pdf)
- [Preferences and Ethical Principles in Decision Making](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_74.pdf)

#### Accepted poster papers of AIES 2018

- [An Autonomous Architecture that Protects the Right to Privacy](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_125.pdf)
- [Utilizing Housing Resources for Homeless Youth Through the Lens of Multiple Multi-Dimensional Knapsacks](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_92.pdf)
- [Real-Time Inference of User Types to Assist with more Inclusive and Diverse Social Media Activism Campaigns](http://www.aies-conference.com/wp-content/papers/mai/AIES_2018_paper_164.pdf)
- [Understanding Convolutional Networks with APPLE : Automatic Patch Pattern Labeling for Explanation](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_91.pdf)
- [Companion Robots: the Hallucinatory Danger of Human-Robot Interactions](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_60.pdf)
- [From Algorithmic Black Boxes to Adaptive White Boxes: Declarative Decision-Theoretic Ethical Programs as Codes of Ethics](http://www.aies-conference.com/wp-content/papers/mai/AIES_2018_paper_73.pdf)
- [Privacy-preserving Machine Learning Based Data Analytics on Edge Devices](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_161.pdf)
- [Inverse norm conflict resolution](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_152.pdf)
- [Fairness in Relational Domains](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_27.pdf)
- [Sociotechnical Systems and Ethics in the Large](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_57.pdf)
- [Margins and opportunity](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_80.pdf)
- [Opportunities and Challenges for Artificial Intelligence in India](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_52.pdf)
- [Mitigating Unwanted Biases with Adversarial Learning](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_162.pdf)
- [Fairness in Deceased Organ Matching](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_82.pdf)
- [What's up with Privacy? : User Preferences and Privacy Concerns in Intelligent Personal Assistants](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_146.pdf)
- [Data Driven Platform for Organizing Scientific Articles Relevant to Biomimicry](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_90.pdf)
- [Towards Provably Moral AI Agents in Bottom-up Learning Frameworks](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_8.pdf)
- [Meritocratic Fairness for Infinite and Contextual Bandits](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_76.pdf)
- [Socialbots supporting human rights](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_30.pdf)
- [Ethics by Design: Necessity or Curse?](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_68.pdf)
- [Always Lurking: Understanding and Mitigating Bias in Online Human Trafficking Detection](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_174.pdf)
- [Modeling Epistemological Principles for Bias Mitigation in AI Systems: An Illustration in Hiring Decisions](http://www.aies-conference.com/wp-content/papers/mai/AIES_2018_paper_85.pdf)
- [Impacts on Trust of Healthcare AI](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_132.pdf)
- [Sub-committee Approval Voting and Generalized Justified Representation Axioms](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_54.pdf)
- [Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_35.pdf)
- [Norms, Rewards, and the Intentional Stance: Comparing Machine Learning Approaches to Ethical Training](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_148.pdf)
- [Cake, death, and trolleys: dilemmas as benchmarks of ethical decision-making](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_120.pdf)
- [Adapting a Kidney Exchange Algorithm to Align with Human Values](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_116.pdf)
- [Towards Composable Bias Rating of AI Systems](http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_65.pdf)

### [FAT/ML 2018](https://www.fatml.org/schedule/2018)

- [Achieving Fairness through Adversarial Learning: an Application to Recidivism Prediction](https://www.fatml.org//media/documents/achieving_fairness_through_adversearial_learning.pdf)
- [Actionable Recourse in Linear Classification](http://www.berkustun.com/docs/actionable_recourse_fatml_2018.pdf)
- [Axiomatic Characterization of Data-Driven Influence Measures for Classification](https://www.fatml.org//media/documents/axiomatic_characterization_of_data_driven_influence_measures.pdf)
- [Blind Justice: Fairness with Encrypted Sensitive Attributes](https://www.fatml.org//media/documents/blind_justice_fairness_with_encypted_sensitive_attributes.pdf)
- [Darling or Babygirl? Investigating Stylistic Bias in Sentiment Analysis](https://www.fatml.org//media/documents/darling_or_babygirl_stylistic_bias.pdf)
- [Datasheets for Datasets](https://www.fatml.org//media/documents/datasheets_for_datasets.pdf)
- [Debiasing Representations by Removing Unwanted Variation Due to Protected Attributes](https://www.fatml.org//media/documents/debiasing_representations.pdf)
- [Does Removing Stereotype Priming Remove Bias? A Pilot Human-Robot Interaction Study](https://www.fatml.org//media/documents/does_removing_stereotype_priming_remove_bias.pdf)
- [Enhancing Human Decision Making via Assignment Optimization](https://www.fatml.org//media/documents/enhancing_human_decision_making_via_assignment_optimization.pdf)
- [Equal Protection Under the Algorithm: A Legal-Inspired Framework for Identifying Discrimination in Machine Learning](https://www.fatml.org//media/documents/equal_protection_under_the_algorithm.pdf)
- ["Fair" Risk Assessments: A Precarious Approach for Criminal Justice Reform](https://www.fatml.org//media/documents/fair_risk_assessments_criminal_justice.pdf)
- [Fairness Through Computationally-Bounded Awareness](https://www.fatml.org//media/documents/fairness_through_computationally_bounded_awareness.pdf)
- [Game-theoretic Interpretability for Temporal Modeling](https://arxiv.org/pdf/1807.00130.pdf)
- [Gradient Reversal Against Discrimination](https://arxiv.org/abs/1807.00392)
- [Group Fairness Under Composition](https://www.fatml.org//media/documents/group_fairness_under_composition.pdf)
- [Individual Fairness Under Composition](https://www.fatml.org//media/documents/individual_fairness_under_composition.pdf)
- [InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity](https://www.fatml.org//media/documents/inclusive_facenet.pdf)
- [Learning under selective labels in the presence of expert consistency](https://arxiv.org/pdf/1807.00905.pdf)
- [Modelling Mistrust in End-of-Life Care](http://arxiv.org/abs/1807.00124)
- [On Formalizing Fairness in Prediction with ML](https://www.fatml.org//media/documents/formalizing_fairness_in_prediction_with_ml.pdf)
- [Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness](https://www.fatml.org//media/documents/preventing_fairness_gerrymandering.pdf)
- [Probably Approximately Metric-Fair Learning](https://www.fatml.org//media/documents/probably_approximately_metric_fair_learning.pdf)
- [A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics](http://arxiv.org/abs/1807.00553)
- [Training Fairness-Constrained Classifiers To Generalize](https://www.fatml.org//media/documents/training_fairness_constrained_classifiers_to_generalize.pdf)
- [Turing Box: An Experimental Platform for the Evaluation of AI Systems](https://www.fatml.org//media/documents/turing_box.pdf)
- [Using image fairness representations in diversity-based re-ranking for recommendations](https://www.fatml.org//media/documents/using_image_fairness_representations.pdf)
- [Welfare and Distributional Impacts of Fair Classification](https://www.fatml.org//media/documents/welfare_and_distributional_effects_of_fair_classification.pdf)
- [Women also Snowboard: Overcoming Bias in Captioning Models](http://arxiv.org/abs/1807.00517)

## 2017

### [FAT/ML 2017](https://www.fatml.org/schedule/2017)

- [From Parity to Preference-based Notions of Fairness in Classification](http://www.fatml.org/media/documents/from_parity_to_preference_notions_of_fairness.pdf)
- [The Authority of "Fair" in Machine Learning](https://arxiv.org/pdf/1706.09976.pdf)
- [Logics and practices of transparency and opacity in real-world applications of public sector machine learning](https://arxiv.org/pdf/1706.09249.pdf)
- [Learning Fair Classifiers: A Regularization Approach](https://arxiv.org/pdf/1707.00044.pdf)
- [A Convex Framework for Fair Regression](http://www.fatml.org/media/documents/convex_framework_for_fair_regression.pdf)
- [Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations](https://arxiv.org/pdf/1707.00075.pdf)
- [Multisided Fairness for Recommendation](https://arxiv.org/pdf/1707.00093.pdf)
- [Fairer and more accurate but for whom?](https://arxiv.org/pdf/1707.00046.pdf)
- [Decoupled classifiers for fair and efficient ML](http://www.fatml.org/media/documents/decoupled_classifiers_for_fair_and_efficient_machine_learning.pdf)
- [Decision making with limited feedback: Error bounds for recidivism prediction and predictive policing](http://www.fatml.org/media/documents/recidivism_prediction_and_predictive_policing.pdf)
- [On Fairness Diversity and Randomness in Algorithmic Decision Making](https://arxiv.org/pdf/1706.10208.pdf)
- [Better Fair Algorithms for Contextual Bandits](http://www.fatml.org/media/documents/better_fair_algorithms_for_contextual_bandits.pdf)
- [Fair Algorithms for Infinite Contextual Bandits](http://www.fatml.org/media/documents/better_fair_algorithms_for_infinite_contextual_bandits.pdf)
- [The causal impact of bail on case outcomes for indigent defendants](https://arxiv.org/pdf/1707.04666.pdf)
- [New Fairness Metrics for Recommendation that Embrace Differences](https://arxiv.org/pdf/1706.09838.pdf)

## 2016

### [FAT/ML 2016](https://www.fatml.org/schedule/2016)

- [Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models](https://arxiv.org/abs/1611.04967)
- [Price of Transparency in Strategic Machine Learning](https://arxiv.org/abs/1610.08210)
- [Fairness as a Program Property](https://arxiv.org/abs/1610.06067)
- [Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2846909)
- [How to Be Fair and Diverse](https://arxiv.org/abs/1610.07183)
- [Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments](https://arxiv.org/abs/1610.07524)
- [The Case for Temporal Transparency: Detecting Policy Change Events in Black-Box Decision Making Systems](https://arxiv.org/abs/1610.10064)
- [Fair Learning in Markovian Environments](https://arxiv.org/abs/1611.03071)
- [Rawlsian Fairness for Machine Learning](https://arxiv.org/abs/1610.09559)
- [Inherent Trade-Offs in the Fair Determination of Risk Scores](https://arxiv.org/abs/1609.05807)
- [A Statistical Framework for Fair Predictive Algorithms](https://arxiv.org/abs/1610.08077)
- [Measuring Fairness in Ranked Outputs](https://arxiv.org/abs/1610.08559)
- [Fairness Beyond Disparate Treatment and Disparate Impact: Learning Classification without Disparate Mistreatment](https://arxiv.org/abs/1610.08452)

## 2015

### [FAT/ML 2015](https://www.fatml.org/schedule/2015)

- [Fairness Constraints: A Mechanism for Fair Classification](https://arxiv.org/abs/1507.05259)
- [A Confidence-Based Approach for Balancing Fairness and Accuracy](https://arxiv.org/abs/1601.05764)
- Towards Diagnosing Accuracy Loss in Discrimination-Aware Classification: An Application to Predictive Policing
- [On the Relation between Accuracy and Fairness in Binary Classification](https://arxiv.org/abs/1505.05723)

## 2014

### [FAT/ML 2014](https://www.fatml.org/schedule/2014)

> *Paper list not available.*

## 2013

### [Privacy Aspects of Data Mining (PADM 2013)](http://www.zurich.ibm.com/padm2011/index.html) with ICDM 2013

- [The Independence of Fairness-Aware Classifiers](https://ieeexplore.ieee.org/document/6754009)
- [Data Anonymity Meets Non-discrimination](https://ieeexplore.ieee.org/document/6754013)

## 2012

### [Discrimination and Privacy-Aware Data Mining (DPADM 2012)](https://sites.google.com/site/dpadm2012/) with ICDM 2012

- [Exploring Discrimination: A User-centric Evaluation of Discrimination-Aware Data Mining](https://ieeexplore.ieee.org/document/6406461)
- [A Study on the Impact of Data Anonymization on Anti-discrimination](https://ieeexplore.ieee.org/document/6406462)
- [Injecting Discrimination and Privacy Awareness Into Pattern Discovery](https://ieeexplore.ieee.org/document/6406463)
- [Classifying Socially Sensitive Data Without Discrimination: An Analysis of a Crime Suspect Dataset](https://ieeexplore.ieee.org/document/6406464)
- [Considerations on Fairness-Aware Data Mining](https://ieeexplore.ieee.org/document/6406465)
- [Discriminatory Decision Policy Aware Classification](https://ieeexplore.ieee.org/document/6406466)
- [Discovering Gender Discrimination in Project Funding](https://ieeexplore.ieee.org/document/6406467)

## 2011

### [Privacy Aspects of Data Mining (PADM 2011)](http://www.zurich.ibm.com/padm2011/index.html) with ICDM 2011

- [Fairness-aware Learning through Regularization Approach](https://ieeexplore.ieee.org/document/6137441)

## 2009

### [Domain Driven Data Mining (D3M 2009)](https://dblp.org/db/conf/icdm/icdmw2009) with ICDM 2009

- [Building Classifiers with Independency Constraints](https://ieeexplore.ieee.org/document/5360534)