├── LICENSE
└── README.md

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 J. Nathan Matias

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# COMM 4940: Governing Human-Algorithm Behavior
[J.
Nathan Matias](https://natematias.com) <nathan.matias AT cornell.edu>, Department of Communication, Field Member, Information Science

* Spring 2024 ([see course listing](https://classes.cornell.edu/browse/roster/SP24/class/COMM/4940))
* Time: Tuesday & Thursday 11:40am - 12:55pm
* Location: Malott Hall 224

![Paradise Wildfires, YouTube algorithms, Wilmer Catalan-Ramirez](https://imgur.com/R5pWrV1.png)

Images: Google Waze blocks wildfire escape route, the international affiliation network of YouTube Trends, and an immigrant released from detention after being wrongly listed as a gang member.

Algorithms that monitor and influence human behavior are everywhere—directing the behavior of law enforcement, managing the world's financial systems, shaping our cultures, and flipping a coin on the success or failure of movements for change. Since human-algorithm feedback is already a basic pattern in society, we urgently need ways to assess the impact of attempts to steer that feedback toward justice.

In 2023, the challenge became even clearer when Attorney General Letitia James and 31 other state Attorneys General filed a federal lawsuit against Meta for allegedly "[harming young people's mental health and contributing to the youth mental health crisis](https://ag.ny.gov/press-release/2023/attorney-general-james-and-multistate-coalition-sue-meta-harming-youth)." One of the lawsuit's claims is that Meta, through Facebook and Instagram, encouraged compulsive use of the platform through its algorithms and design features—contributing to a widespread mental health crisis.

In this course for (15) upper-level undergraduates and (5) PhD students, you will learn about the design of adaptive algorithms and the feedback patterns they create with human behavior.
You will learn about the challenge they represent for social policy, about ways to research their behavior, and about emerging policy ideas for governing these complex patterns. You will get first-hand experience at diagnosing and attempting to change a feedback system. Along the way, you will hear from pioneers in policy, advocacy, and scholarship.

This course is an excellent stepping stone for anyone interested in a career in policy, advocacy, academia, or industry research.
**How to Register for this class**

Upper Level Undergraduates: to join this class, register for COMM 4940 like any class.

PhD Students: to join this class, you have two options:

1. Enroll in an independent study (COMM 7970) with Professor Matias and then attend the class (preferred). Before enrolling, please write a brief note to Professor Matias with "Enrolling in COMM 4940" in the subject. The email should:
    * Introduce yourself, your program, and where you are in it (1-2 sentences)
    * Explain why you want to take the course. For example, is it related to your research? An interest in policy? A project you want to develop in the course? (2-3 sentences)
    * Describe the perspectives and skills you bring to the class (a few bullet points) (for example, an understanding of certain theory, or capabilities in qualitative research, data analysis, policy writing, advocacy, media-making, or software development)
2. Enroll in COMM 4940 as a piece of elective coursework
## What You Learn
In this seminar class, you will engage with the science and policy challenges of regulating human-algorithm behavior. Along the way, you will work with scientific issues of methods, theory, and ethics, policy questions about how to govern these situations, and how to bridge between science and policy in a democracy. For a final project, student teams (3-4) will produce a novel project that makes a contribution at the intersection of science and policy.

By the end of the semester, students will be able to:
- Identify, analyze, and evaluate claims about how human and algorithm behavior interact
- Summarize and critique major approaches to governing human and algorithm behavior
- Describe and evaluate major governance approaches
- Design and analyze research methodologies for creating policy-relevant evidence for transparency, accountability, and change
- Understand the uses of social science and computer science research in the policy process
- Design an intervention into scholarly and policy conversations on human-algorithm behavior

PhD students are encouraged to connect the course to an existing research question that they wish to connect with policy, or to develop a project that could become part of their wider research.

## Activities

*Weekly Activities*: Throughout the semester, students will read a selection of articles and discuss that reading in class and on Canvas. Once teams have been formed, students will also submit regular progress reports on their final project.

*Algorithm Incident Report*: The midterm is an analysis of an algorithm-involved event in the news, based on publicly-available information.

*Project Proposal*: Your project proposal will include a description of the project, a bibliography, a list of the roles that team members will play, and a timeline.
*Final Project*: An intervention into academic and/or policy conversations on human-algorithm research. In 2024, the class will support projects in three areas:
* An in-depth case study and/or forensic analysis of some failure of an adaptive algorithm ([see the AI Incident Database for ideas](https://incidentdatabase.ai/about/))
* A proposal for a "black box" or data schema of what to record to inform future analysis of incidents
* A digital simulation or card game that can be used to engage lived experience experts in critical thinking about human-algorithm behavior.

*Grading*: Participation in class & online: 30%. Midterm: 30%. Final project: 40%.

## About the Instructor

Dr. J. Nathan Matias (@natematias) organizes community behavioral science for a safer, fairer, more understanding internet. A Guatemalan-American, Nathan is an assistant professor in the Cornell University Department of Communication. He is also a field member in Information Science.

Nathan is the founder of the Citizens and Technology Lab, a public-interest project at Cornell that supports community-led behavioral science—conducting independent, public-interest audits and evaluations of social technologies. CAT Lab achieves this through software systems that coordinate communities to conduct their own research on social issues. Nathan has worked with communities of millions of people to test ideas to prevent online harassment, broaden gender diversity on social media, manage human/algorithmic misinformation, and audit algorithms.

## Schedule

#### Tues 01-23 Why Should We Care About Human-Algorithm Behavior? (Intro class)
This class session will introduce the course and pose initial questions.

---

#### Thurs 01-25 Tech and Methods: How Do Adaptive Algorithms Work?
In this class, we will discuss the code and mathematics behind adaptive algorithms.

* Sophie Mellor (2022) [After a 14-year-old Instagram user's suicide, Meta apologizes for (some of) the self-harm and suicide content she saw](https://finance.yahoo.com/news/14-old-instagram-users-suicide-151102166.html). Yahoo! News
* Paresh Dave (2023) [The 5 Instagram Features That US States Say Ruin Teens’ Mental Health](https://www.wired.com/story/the-5-instagram-features-that-us-states-say-ruin-teens-mental-health/). WIRED
* Shardanand, U., & Maes, P. (1995, May). [Social information filtering: Algorithms for automating “word of mouth”](https://dl.acm.org/doi/10.1145/223904.223931). In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 210-217).

Questions to consider:
* What are the specific claims being made about Meta and mental health?
* What are social recommendations? How do they differ from personalized recommendations?
* When would you want a personalized recommendation, and when would you want a recommendation that is not personalized?
* How might a recommender system contribute to mental health harms? How might it reduce mental health harms?

---

#### Tues 01-30 Feedback Loops: Simple Models to Explain Complex Behaviors
* Clegg, N. (2021, March 31). [You and the Algorithm: It Takes Two to Tango](https://nickclegg.medium.com/you-and-the-algorithm-it-takes-two-to-tango-7722b19aa1c2). Medium.
* Kimmerer, R. (2013) "Windigo Footprints" from Braiding Sweetgrass.
Milkweed Editions

Questions to think about from Windigo Footprints:
* Think about an event related to algorithms and AI systems that matters to you. It might be an incident related to mental health, or discrimination, or something else
* Imagine the forces akin to "hunger" (the drive for more) and "gratitude" (a drive for less of something) that direct individuals and institutions toward **more** and which ones direct them toward **less**
* Reflect on how those forces might have contributed to that event.

---

#### Thurs 02-01 Incidents and Disasters
In this class, we're going to ask the questions "What is an incident?" and "What is an accident/disaster/catastrophe?" while also discussing how understanding them might help society.

Readings:
* Marsh, Allison (2021) [The Inventor of the Black Box Was Told to Drop the Idea and “Get On With Blowing Up Fuel Tanks”](https://spectrum.ieee.org/the-inventor-of-the-black-box-was-told-to-drop-the-idea-and-get-on-with-blowing-up-fuel-tanks). IEEE Spectrum.
* Pinch, T. J. (1991). [How do we treat technical uncertainty in systems failure? The case of the space shuttle Challenger](https://link.springer.com/content/pdf/10.1007/978-94-011-3400-2.pdf). In Social responses to large technical systems: Control or anticipation (pp. 143-158). Dordrecht: Springer Netherlands.

Questions to think about — choose a specific risk, harm, or incident involving AI, and consider:
- Based on the article by Pinch, list at least six of the "causes" of the Challenger disaster, and label them (were they technical, political, social, etc.?). Could you name the most important one?
- List things that might possibly go wrong
- What might you want to record in order to go back and figure out what went wrong?

A cool article I sadly wasn't able to include:
* Knowles, S. (2014).
Engineering risk and disaster: Disaster-STS and the American history of technology. Engineering Studies, 6(3), 227-248.

---

#### Tues 02-06 Case Study: Algorithms and Discrimination
To build our intuitions about algorithms and what to do about them, we're going to consider a series of classes about discrimination online. This class introduces the problem of discrimination in online labor markets.

* Mullainathan, S. (2019). [Biased algorithms are easier to fix than biased people](https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html). The New York Times.
* McGhee, H. (2021). "Racism Drained the Pool" from The sum of us: What racism costs everyone and how we can prosper together. One World.
  * Read pages 17 - 28 (Cornell Library has e-book copies)
* Carville, O. (2019) [Meet Murray Cox, The Man Trying to Take Down Airbnb](https://www.bnnbloomberg.ca/meet-murray-cox-the-man-trying-to-take-down-airbnb-1.1263088). Bloomberg

Questions to think about for class and discussion:
* How does Mullainathan think about discrimination and bias?
* In Mullainathan's view, what does it mean to "fix" bias?
* How does McGhee think about discrimination and bias?
* How might McGhee think about "fixing" discrimination and bias?
* What do we gain and lose by narrowing our understanding of social issues to only the things that algorithms can meaningfully address?

---

#### Thurs 02-08 Case Study on Discrimination: Laws, Individuals, Organizations, and Code
Discrimination has a history and is reinforced through feedback from multiple sources beyond software: laws, individuals, organizations, and code. In this class, we will discuss how they interact.

* Rothstein, Richard.
(2017) "[If San Francisco, then everywhere?](https://erenow.net/modern/color-of-law-forgotten-history/2.php)" from The Color of Law. Economic Policy Institute.
* Noble, S. (2018). "Searching for Black Girls" from [Algorithms of Oppression](https://newcatalog.library.cornell.edu/catalog/10294895). NYU Press. (Pages 64-84)

Questions to consider from The Color of Law:
* How was discrimination built into:
  * Government housing allocation
  * Access to education
  * The geography of the Bay Area
  * Government services
  * Policing
  * Jobs & employment
  * Access to loans
  * Public advertising

Questions to consider from Algorithms of Oppression:
* What is "racialization"?
* How might the patterns described in The Color of Law influence the patterns observed in Algorithms of Oppression?
  * Who gets to create algorithms
  * Who gets to teach the people who create algorithms
  * What information the algorithms learn from
  * What questions people type into search boxes
  * What ads appear next to search terms

Overall questions:
* How might we describe these patterns as feedback loops?
* What makes algorithms similar to other sources of discrimination, and what makes them different?

---

#### Thurs 02-15 Could Things Be Different?
In this class, we will discuss the challenge of determining responsibility for past harms and the difficulty of identifying interventions for change.

* Matias, J.N. (2023) [Humans and algorithms work together — so study them together](https://www.nature.com/articles/d41586-023-01521-z). Nature.
* Mengersen, K., Moynihan, S. A., & Tweedie, R. L. (2007). [Causality and association: The statistical and legal approaches](https://www.jstor.org/stable/27645822). Statistical Science, 227-254.
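The feedback-loop questions above can be made concrete with a toy simulation. This sketch is purely illustrative, not drawn from the readings; the model, function name, and parameter values are all invented for this example. It shows how a ranking that always promotes the currently most-clicked item lets an early, arbitrary lead compound into a dominant winner:

```python
import random

def simulate_feedback(n_items=5, n_users=2000, boost=0.9, seed=1):
    """Toy human-algorithm feedback loop.

    Each simulated user either clicks the item the ranking promotes
    (with probability `boost`) or clicks a uniformly random item.
    The promoted item is always the currently most-clicked one, so
    early popularity feeds back into future exposure.
    """
    rng = random.Random(seed)
    clicks = [1] * n_items  # every item starts with one click
    for _ in range(n_users):
        # The "algorithm": promote whichever item leads right now.
        top = max(range(n_items), key=lambda i: clicks[i])
        # The "human": follow the promotion, or browse at random.
        choice = top if rng.random() < boost else rng.randrange(n_items)
        clicks[choice] += 1
    return clicks
```

Running it with `boost=0.0` (no promotion) yields a roughly even split across items, which is one way to see that the feedback between ranking and clicking, rather than the items themselves, produces the inequality.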
---

#### Tues 02-20 What is an AI Incident?
In order for algorithms to be governed, we need to be able to describe what happens when one goes wrong. So what is an incident anyway, and how would we establish that harm had occurred?

* Allen, J. (2024) [Why Is Instagram Search More Harmful Than Google Search?](https://integrityinstitute.org/blog/why-is-instagram-search-more-harmful-than-google-search). Integrity Institute
* McGregor, S. (2021, May). [Preventing repeated real world AI failures by cataloging incidents: The AI incident database](https://ojs.aaai.org/index.php/AAAI/article/view/17817). In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 17, pp. 15458-15463).

To prepare for class, think about the story by Sophie Mellor that we read at the beginning of the course. Imagine that you are an employee at Instagram who is asked to investigate a mental-health-related incident experienced by a young person who was an Instagram user.
* Create a definition for the incident type — what is it and what isn't it? How would you tell what counts as an incident or not?
* Make a list of 6 things that you would want to record from what might be known by Instagram, friends/family, and institutions

In class, we will collaborate on an incident report template for several kinds of mental health related incidents.

---

#### Thurs 02-22 Discussing final project ideas
In this class, we will discuss possible final project ideas for the course.

In preparation for the class, meet with a possible project team to develop ideas for collective feedback and discussion.
As a reminder, final project ideas include one of three options:
* An in-depth case study and/or forensic analysis of some failure of an adaptive algorithm ([see the AI Incident Database for ideas](https://incidentdatabase.ai/about/))
* A proposal for a "black box" or data schema of what to record to inform future analysis of incidents
* A creative project that can be used to involve lived experience experts in critical thinking about human-algorithm behavior and incident reporting.

---
FEBRUARY BREAK
---

#### Thurs 02-29 Feedback Pattern: Reinforcement
In a series of sessions, we will discuss common patterns of human-algorithm feedback. The first is reinforcement, which happens with individuals in the case of personalization and with groups in the case of herding behavior.

* Epps-Darling, A. (2020, October 24). [Racist Algorithms Are Especially Dangerous for Teens](https://www.theatlantic.com/family/archive/2020/10/algorithmic-bias-especially-dangerous-teens/616793/). The Atlantic.
* Arnold, K. C., Chauncey, K., & Gajos, K. Z. (2020). [Predictive text encourages predictable writing](https://www.eecs.harvard.edu/~kgajos/papers/2020/arnold20predictcive.shtml). Proceedings of the 25th International Conference on Intelligent User Interfaces, 128–138.
* (grad students) Li, L., Chu, W., Langford, J., & Schapire, R. E. (2010). [A contextual-bandit approach to personalized news article recommendation](https://doi.org/10.1145/1772690.1772758). Proceedings of the 19th International Conference on World Wide Web, 661–670.

---

#### Tues 03-05 Feedback Pattern: Herding
Algorithms can amplify herding behavior by encouraging people to do something that's already popular. Ranking software has been credited with inspiring large-scale acts of charity, spreading prejudice, and encouraging harassment mobs.
When enough people engage in collective discrimination, adaptive systems can entrench injustice further. This class will investigate herding as a phenomenon of human + algorithm behavior.

* Salganik, M. J., Dodds, P. S., & Watts, D. J. (2006). [Experimental study of inequality and unpredictability in an artificial cultural market](https://www.science.org/doi/abs/10.1126/science.1121066). Science, 311(5762), 854–856.
* Bravo, D. Y., Jefferies, J., Epps, A., & Hill, N. E. (2019). [When Things Go Viral: Youth’s Discrimination Exposure in the World of Social Media](https://link.springer.com/chapter/10.1007/978-3-030-12228-7_15). In Handbook of Children and Prejudice (pp. 269–287). Springer.
* (grad students) Brayne, S. (2020). "Directed Surveillance: Predictive Policing and Quantified Risk," from Predict and surveil: Data, discretion, and the future of policing. Oxford University Press, USA.

Questions to discuss:
* How does herding happen even without algorithms? What ideas from your other classes or research might help explain herding? (COMM 2450 students - remember threshold models?)
* How might herding create social change you want to see in the world, as well as harms to avoid?
* How might we interrupt herding, and how might you test it?
* Who should have the right to interrupt herding?

---

#### Thurs 03-07 Feedback Pattern: Thresholds
Patterns of human and algorithm behavior can dramatically change when their behavior reaches a certain threshold. For example, a popular conversation within a marginalized group might attract harassment if an algorithm amplifies it to a wider population where hatred prevails.
* Biggins, David. [Goodbye, Thunderclap: How social media storms closed the popular crowdspeaking platform](https://luminouspr.com/goodbye-thunderclap-how-social-media-storms-closed-the-popular-crowdspeaking-platform/). Luminous PR.
* Lemon, A.
(2014) [Case studies in Thunderclap](https://digital.gov/2014/06/25/case-studies-in-thunderclap/). Digital.gov
* Granovetter, M. (1978). [Threshold models of collective behavior](https://www.jstor.org/stable/2778111). American Journal of Sociology, 83(6), 1420–1443.

Questions for discussion:
* How might adaptive algorithms change how threshold models work, by influencing certain thresholds or influencing who is visible to whom?
* If people have thresholds, how might algorithms also have thresholds? How can we influence algorithm thresholds?
* How might thresholds be useful tools for governing human-algorithm behavior?

---

#### Tues 03-12 Methods Area: Simulations

People claim that algorithms amplify and spread hatred, misogyny, and extremism. How could we tell, and what can we do about it? In this class, we will use the example of the spread of misogyny on Reddit as the basis for imagining simulations.
* Messeri, L., Crockett, M. J. (2024) [Artificial intelligence and illusions of understanding in scientific research](https://www.nature.com/articles/s41586-024-07146-0). Nature.
* Editorial Board. [Why scientists trust AI too much — and what to do about it](https://www.nature.com/articles/d41586-024-00639-y). Nature.

Choose one:
* Hosseinmardi, H., Ghasemian, A., Rivera-Lanas, M., Horta Ribeiro, M., West, R., & Watts, D. J. (2024). [Causally estimating the effect of YouTube’s recommender system using counterfactual bots](https://www.pnas.org/doi/abs/10.1073/pnas.2313377121). Proceedings of the National Academy of Sciences, 121(8), e2313377121.
* Yin, L., Alba, D., Nicoletti, L. (2024) [OpenAI’s GPT Is a Recruiter’s Dream Tool.
Tests Show There’s Racial Bias](https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcwOTg1NjE4NCwiZXhwIjoxNzEwNDYwOTg0LCJhcnRpY2xlSWQiOiJTQTA1Q1FUMEFGQjQwMCIsImJjb25uZWN0SWQiOiI4QkY3REVFODZERTk0QjdEOEVDRDA1OEQ4RUJDQzAzMyJ9.q4dHdWWVcJO9PMKhwQ-IF5BfvVNVmPAX8hWNyrtwSYY). Bloomberg.

---

#### Thurs 03-14 How Much Complexity Should Incident Reporting Embrace?
So far this semester, we have looked at simple incident reports, such as the systems used by the U.S. National Transportation Safety Board to report a crash, introduced in the 1960s. Since then, scholars have developed more comprehensive ideas about how to map and describe failures. How far should we go toward offering more complex models? And how much should we understand and engage with the politics at play?

In this class, we're going to learn more about those systems, in the case of the Richmond, CA refinery fire of 2012. We're also going to consider how incident reporting fits into wider political engagements over error reporting.

* Sadisivam, N. (2021) [A California law gave the people power to cut pollution. Why isn’t it working?](https://grist.org/equity/ab617-richmond-california-chevron-refinery-air-monitoring/). Grist.
* Yousefi, A., Rodriguez Hernandez, M., & Lopez Peña, V. (2019). [Systemic accident analysis models: A comparison study between AcciMap, FRAM, and STAMP](https://aiche.onlinelibrary.wiley.com/doi/abs/10.1002/prs.12002). Process Safety Progress, 38(2), e12002. (in Canvas)

---

#### Tues 03-19 Finalizing Student Projects
In your final-project teams, you will give a 3-minute presentation to summarize and analyze an issue in human-algorithm feedback that you would like to work on for your final.
Fellow students will submit feedback and suggestions.

#### Thurs 03-21 How Can Evidence Inform Governance?
What is evidence-based governance, and how might research contribute (or not) to policy? In this session, we'll discuss a model where researchers are advisors, versus a model where researchers produce evidence that is used in courts and other contestational settings.

* Spruijt, P., Knol, A. B., Vasileiadou, E., Devilee, J., Lebret, E., & Petersen, A. C. (2014). [Roles of scientists as policy advisers on complex issues: A literature review](https://www.sciencedirect.com/science/article/pii/S1462901114000598). Environmental Science & Policy, 40, 16-25.
* Systemic Justice (2023) [Strategic Litigation: A Guide for Legal Action](https://systemicjustice.ngo/community-toolkit/). Systemic Justice.
* (grad students) Weiss, C. H. (1979). [The many meanings of research utilization](https://www.jstor.org/stable/3109916). Public Administration Review, 39(5), 426-431.

Questions for discussion in Canvas and in class:
* For your team's final project area:
  * What kind of evidence might be needed for policymaking?
  * Is that kind of evidence possible yet?
  * Does the evidence already exist?
  * How does evidence already influence policy, if at all?
  * What forces are preventing evidence from being available?

---
#### Tues 03-26 Policy Topic: What Is Policy Anyway?
We've been talking about policy and governance in the abstract, but what is it really, and how is that changing in a world where algorithms are also being given governance power?

* Cairney, P. (2019). "[What is Policy and policymaking](https://paulcairney.files.wordpress.com/2019/03/chapter-2-upp-2nd-ed-8.3.19.pdf)" from Understanding public policy: Theories and issues. Red Globe Press.
* Gillespie, T. (2017).
[Governance of and by platforms](https://culturedigitally.org/2016/06/governance-of-and-by-platforms/). SAGE handbook of social media, 254-278.
* (grad students) Lessig, L. (2009). "Code is Law" from [Code: And other laws of cyberspace](https://lessig.org/product/code). Basic Books.

Questions for discussion:
* What is a tech policy, really?
* What is a platform policy, really?
* What kind of policy outcomes can be achieved (or not) with code?

#### Thurs 03-28 Project Check-in
In this class, we will:
* Support group collaboration time for student projects
* Discuss common questions about projects and final proposals

---

#### SPRING BREAK (Syllabus Adjustment Check-In)

---

#### Thurs 04-09 Policy Topic: How Does Evidence Actually Inform Policy?
In previous classes, we discussed theories for how evidence could inform policy. How does it really happen?

* **Each Team**: share on Canvas an example of a piece of research, journalism, or activism that genuinely impacted policy (ideally related to your project), and comment on how it did so. An especially high-quality submission will cite an example where someone made a claim about the relationship between the corporate/government policy change and the project. Comment on other teams' examples, come prepared to discuss your team's example, and be prepared to ask questions and discuss other teams' examples.
* Examples:
  * Edelman, B. G., & Luca, M. (2014). [Digital discrimination: The case of Airbnb.com](https://www.hbs.edu/ris/Publication%20Files/Airbnb_92dd6086-6e46-4eaf-9cea-60fe5ba3c596.pdf). Harvard Business School NOM Unit Working Paper, (14-054).
  * Buolamwini, J., & Gebru, T. (2018, January). [Gender shades: Intersectional accuracy disparities in commercial gender classification](https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf).
In Conference on fairness, accountability and transparency (pp. 77-91). PMLR.
  * Penney, J. W. (2016). [Chilling effects: Online surveillance and Wikipedia use](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2769645). Berkeley Tech. LJ, 31, 117.
  * Matias, J. N. (2019). Preventing harassment and increasing group participation through social norms in 2,190 online science discussions. Proceedings of the National Academy of Sciences, 116(20), 9785-9789.
  * Bossetta, M. (2020). Scandalous design: How social media platforms’ responses to scandal impacts campaigns and elections. Social Media + Society, 6(2), 2056305120924777.
  * Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143-1157.
  * Contandriopoulos, D., Lemire, M., Denis, J. L., & Tremblay, É. (2010). Knowledge exchange processes in organizations and policy arenas: A narrative systematic review of the literature. The Milbank Quarterly, 88(4), 444-483.

---

#### Thurs 04-11 Policy Topic: Algorithmic Literacy
In order for democratic societies to govern a problem, people need to know about it and care enough to do something. How does the public understand and care about human-algorithm feedback?

* Druga, S., Christoph, F., & Ko, A. J. (2022). [Family as a Third Space for AI Literacies: How do children and parents learn about AI together?](https://faculty.washington.edu/ajko/papers/Druga2022FamilyAILiteracy.pdf)
* Rainie, L., Anderson, J. (2017) "The need grows for algorithmic literacy, transparency and oversight" from [Code-Dependent: Pros and Cons of the Algorithm Age](https://www.pewresearch.org/internet/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age/).
Pew Research Center

Discussion questions:
* If the design of algorithms is controlled by engineers and most people aren't engineers, how would algorithm literacy actually make a difference in people's lives, if at all?
* What kinds of knowledge about algorithms does a democratic public need?
* How could your project area be transformed by algorithmic literacy — and how not?

---

#### Tues 04-16 Policy Topic: Public Engagement with Incident Reporting

If incident reporting is to work, the public needs to understand incidents. In this class, in conversation with Dr. Jennifer King of Stanford University, we will look at the history of the Dark Patterns Tip Line.
* Nguyen, Stephanie (2021) [Key learnings from the Dark Patterns Tip Line](https://ritaallen.org/stories/key-learnings-from-the-dark-patterns-tip-line/). Rita Allen Foundation
* King, J., & Stephan, A. (2021). Regulating Privacy Dark Patterns in Practice: Drawing Inspiration from the California Privacy Rights Act. Georgetown Law Technology Review, 5(2), 250-276.

Questions for discussion on Canvas and in class:
* For your team's topic, how might it be possible to ask the public to report incidents?
* Would the public be able to notice and distinguish possible incidents?
* What kind of public awareness will be necessary?

---

#### Thurs 04-18 Policy Topic: Creating Change With Courts
What influence can courts have on the behavior of companies, populations, and algorithms?

* Charles, S. (2020, January 27). [CPD decommissions ‘Strategic Subject List’](https://chicago.suntimes.com/city-hall/2020/1/27/21084030/chicago-police-strategic-subject-list-party-to-violence-inspector-general-joe-ferguson). Chicago Sun-Times.
* Kaplan, J. (2017).
[Predictive Policing and the Long Road to Transparency](https://southsideweekly.com/predictive-policing-long-road-transparency/) 478 | * (Grad students) Jillson, Elisa. (2021) [Aiming for truth, fairness, and equity in your company's use of AI](https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai). Federal Trade Commission. 479 | 480 | Questions to consider: 481 | * What role did courts play in the policy debates about the Strategic Subjects List? 482 | * How can laws and agencies influence what courts are able to do? 483 | * How might courts be important to the project you're working on? 484 | 485 | --- 486 | #### Tues 04-23 Final Project Check-in 487 | In this class session, teams will meet in person, workshop their ideas with peers, and get feedback on their project progress. 488 | 489 | --- 490 | 491 | #### Thurs 04-25 Case Study: Myanmar and Meta 492 | Guests: 493 | * Susan Benesch 494 | * Erin Kissane 495 | 496 | 502 | 503 | #### Tues 04-30 Final Presentations Part I 504 | In this session, teams will give a final presentation of their project, with an opportunity to receive feedback from peers. 505 | 506 | In the discussion, please post in advance a question you would like your peers to consider during your session. 507 | 508 | #### Thurs 05-02 Final Presentations Part II 509 | In this session, teams will give a final presentation of their project, with an opportunity to receive feedback from peers. 510 | 511 | In the discussion, please post in advance a question you would like your peers to consider during your session. 512 | 513 | #### Tues 05-07 Reflections on Learning & the Future of Algorithms and Society 514 | In this session, we will reflect on the topics in the class.
Please submit a discussion post, which will inform our group conversation: 515 | 516 | * Name something you believed at the beginning of the course 517 | * Reflect on how your thinking about that belief has changed 518 | * Name one way this topic might continue to be relevant to your future life, whether as a citizen or in your work 519 | 520 | #### Final Project Deadline: May 16 521 | 522 | ## Course Practices and Policies 523 | 524 | ### Weekly Workload 525 | 526 | For each class, I will assign **two readings** that I expect you to read in advance. As part of class participation, you will submit a reaction comment on one of the readings to the relevant discussion on Canvas and respond to at least one other student's comment by 9pm Eastern the night before class. Please come to class with: 527 | * The ability to summarize: 528 | * the goal/question of the paper 529 | * the field or ecosystem that the author is in 530 | * what constitutes an advancement in that field 531 | * how the paper advances the conversation 532 | * a question or observation that links the paper to the theme of the day or the theme of the course 533 | 534 | Each week, students are expected to post at least one news article to Canvas and add at least one comment in response to one other student's posted link. 535 | 536 | Graduate students will have additional readings. Graduate students will also rotate responsibility for summarizing and presenting to the class the readings assigned to everyone.
Summarizing the reading will involve: 537 | * Creating 2-3 slides for the reading 538 | * Slide one: introduce the core question of the paper and its authors 539 | * Slide two: summarize the methods and findings of the paper 540 | * Slide three: pose one to two discussion questions that link the paper to the theme of the class 541 | * Alternatively, if the author of the paper is a guest speaker, the graduate student will interview the guest 542 | 543 | Since this is a discussion course, attendance is expected. 544 | 545 | ### Participation in Discussions 546 | Please post discussions about readings to Canvas. 547 | 548 | Whenever the course has assigned reading for a session, you are expected to post at least one top-level comment and one reply to someone else's comment. Participation will be 30% of your course grade. 549 | 550 | ### Team Progress Reports 551 | During the project period of the class, teams will submit on Canvas a weekly progress report no more than one page long, as group homework. This progress report will be graded 0/1 based on whether it was submitted.
Reports should include the following details (a sample template is available here): 552 | 553 | * What your team made progress on 554 | * What your team is doing next 555 | * Who contributed what 556 | * Where your team is stuck 557 | * Any updates to your timeline 558 | 559 | ### Formats 560 | Written assignments should be uploaded to Canvas in one of the following formats: 561 | * PDF 562 | * Text file 563 | * Word-compatible document 564 | 565 | Slide decks should be submitted in one of the following formats: 566 | * PowerPoint 567 | * Keynote 568 | * Google Slides 569 | * A Markdown slide presentation system (such as Marp) 570 | * PDF 571 | 572 | In cases where students choose to submit code or analysis as part of a project, I can accept assignments in the following languages: 573 | * R (including R Markdown or Sweave) 574 | * Python 575 | * Ruby 576 | * PHP 577 | * C / C++ 578 | 579 | ### Group Work and Academic Integrity 580 | I expect students to follow the [Cornell University Code of Academic Integrity](https://www.library.cornell.edu/research/citation/code). You should submit your work as your own, cite sources and outside assistance, and credit people for their contributions. 581 | 582 | This class includes group work and individual assignments. On group projects, you are encouraged to work together on the activities for the class, but you are only able to put your name on projects to which you made an intellectual contribution. If you have any doubts about what is appropriate, ask me.
590 | 591 | Letter grades correspond to the following numeric ranges: 592 | * A: 93 - 100 593 | * A-: 90 - 92.99 594 | * B+: 87 - 89.99 595 | * B: 83 - 86.99 596 | * B-: 80 - 82.99 597 | * C+: 77 - 79.99 598 | * C: 73 - 76.99 599 | * C-: 70 - 72.99 600 | * D: 60 - 66.99 601 | * F: 0 - 59.99 602 | 603 | ### Grace Period 604 | Late Submissions: All assignments have an automatic one-day grace period. On-time and early papers are always encouraged and will be graded the same week. Students can turn in the assignment up to a day late, no questions asked, with the expectation that it could take substantially longer to receive a grade and feedback. After that, the assignment automatically drops one letter grade (A to B, B to C, etc.). 605 | 606 | If you turned in an assignment on time and haven't received a grade within the expected period, please contact the instructor in case of a technical glitch. 607 | 608 | 611 | ### Accommodations 612 | I am committed to working with students with recognized SDS accommodations to ensure you have the best possible class experience. Please [follow these steps to get started](https://sds.cornell.edu/get-started). 613 | 614 | ### Health Accommodation for Professor Matias 615 | As part of a [medical condition of my own](https://natematias.com/portfolio/2021-06-06-ithaca-allergies/), I may on rare occasion need to teach remotely. Since I may not be alerted to this need until the morning before class, I will ask the designated presenting graduate student to also set up a video chat feed for me to join. 616 | 617 | ### Counseling Resources 618 | It is common for students to experience stressful events at some point during their studies. Students sometimes experience depression, anxiety, family stress, the loss of loved ones, financial strain, and other stressors. It is perfectly normal for students to seek the services of mental health professionals for support and skills to cope with these experiences.
Below I have provided contact information for some of the mental health services available to Cornell University students, so that you know where to go if you or a friend would like to use these resources. 619 | 620 | Cornell Health: 110 Ho Plaza, Ithaca NY. Phone: 607-255-5155. 621 | 622 | If you require a short-term accommodation, please let me know as soon as possible. 623 | 624 | ### Acknowledgments 625 | 626 | I am grateful to my co-author Lucas Wright for his contributions to a review article that this course is based on, and to the Human-Algorithm-Behavior Research Collective for input. Several sections of these policies have been inspired by syllabi from Matthew Salganik, Adrienne Keene, and Neil Lewis Jr. 627 | --------------------------------------------------------------------------------