it’s been like ~5 years since my last Chinese class so this is where the english starts—alas.
I’ve been spending a fair amount of time reading Ben Green’s work in preparation for an interview with him (n.b. not yet confirmed but hopefully happening)—in the process, I’ve been thinking through a lot of my takeaways from trying to understand the state of technology/AI ethics.
this isn’t going to be very articulate at all, but i haven’t written in this (journal? really a collection of notes to self) in a while, so i’m just going to write until my brain turns off and hit publish.
doing good thanks
computer scientists love to feel like we’re doing good for the world. variations on the theme run the gamut from healthily skeptical to cultish. some people are luddites; others read the manifestos and words of well-known technologists as though they were scripture.
I can’t fault people for wanting to do good in the world. it’s nice to imagine ways in which we could improve the conditions of people and the environment around us. but what does “good” actually mean? what’s “social good”?
HireVue is “committed to benefitting society,” but I have serious concerns about the implications of algorithmic assessments for job seekers, whatever bias mitigation strategies they claim to be using (also worth noting that they dropped facial monitoring, and Actual™ researchers are not fans of the implicit claims their technology makes).
what’s going on when we claim to be doing “good” and attempt to find ways of doing so through technology, especially when it’s AI? for one, we have to subscribe to a particular notion of what “good” is. Ben’s wonderful paper “Good” isn’t good enough interrogates just this.
as Ben notes, despite the computer science community’s strong enthusiasm for contributing to social good (and idealism about its technology’s ability to do so), the community has not done much to develop a notion of what good actually entails (this paper was written in 2019; I can’t say the picture has changed much since). Ben further argues that the incrementalist good that often ends up being pursued, such as algorithmic reforms in criminal justice, can cause long-term harm. indeed, aiming to make a system better lacks a certain sort of imagination: the recognition that the system need not exist as it does.
in short, it’s a problem that we don’t know exactly what “good” we’re trying to achieve. chasing after lofty but vague notions of something isn’t a great way to get anywhere—at that point, we don’t even know where it is we want to go. we’re just pretty sure that it’s somehow better than here.
principles
but surely PRINCIPLES can help us figure out how to do w/e “good” is?
skeptical. a number of “AI Ethics Principles” have been rolled out over the years—these come from the likes of Google, DeepMind (aka still Google but cooler and also British except when they’re not in London), the UK House of Lords, Microsoft, etc., but we’re still trying to figure out how to make all this work. operationalizing high-level principles is hard.
the last piece I edited for The Gradient was Ravit Dotan’s “Focus on the Process.” as I worked through it with her, I came to understand the piece’s primary argument: any attempt at formulating a set of ethical principles for AI will not be objective, and we should be skeptical of any claims that one is.
the piece also claims that the burden of operationalizing ethics principles should fall on the individual organizations that try to follow them (a fair point—principles may manifest differently depending on the particular application/product an organization is working on).
what I take from the piece that’s relevant here are the key issues of non-universality and revision. “good” is non-universal—not just w/r/t location, socio-historical context, and so on, but also w/r/t time. good now isn’t necessarily good in five centuries, or even five days. the world is messy.
the tunnel view
it’s not a problem, at least theoretically, that people want to do good for the world via technical means. setting aside the obvious issues that have been beaten to death (bias, etc. etc.), I worry that an over-focus on algorithmic interventions and their ilk might detract from the brainpower and imagination needed to achieve the goods we decide on through more ambitious changes. that is not to say tech won’t play a role—it may well play a large one.
but reform and progress go far beyond the realm of the technological. they are political, human, every other clichéd word you’ve ever heard about things like this. as a dude living in Silicon Valley, I’m sitting firmly in a bubble of other technology people, and as a result I probably overestimate the importance and impact of the technologies I find interesting, precisely because many other people here (probably) think the same way. so when I hear about something that could be better in the world, what sorts of solutions am I bound to think about first?
but (at least once we’ve figured out what the hell “good” means) I think the question should never be “how can I do good with technology?” it should be “how can I do good (and if technology fits into the picture, then where)?”
and maybe the answer does lie squarely within the capabilities of technology, but it’s a far more complicated answer than we thought at the outset.
the algorithmic is political
wow doesn’t that sound #deep (also this section is really bad)
I haven’t spent any time here making a positive case for what to actually do about tech ethics, and probably won’t say much. but a few things are pretty obvious. anyone who wants to do “good in the world,” including technologists, needs to figure out what that good actually is. talk and ideals are cheap. operationalizing things is hard.
many of our decisions, priorities, principles, and actions implicitly make a number of normative commitments. it’s worth exposing and interrogating those before we proceed to work towards ends that reflect them, especially when we’re using our principles/priorities to effect “good” for people who aren’t us.
those implicit commitments, biases, etc. make their way into the technologies we work on, how we make decisions about them, and how those technologies then act on / process information. I won’t keep beating this long-dead horse except to repeat that algorithmic formalisms / the ways AI systems process the world occupy a restricted space of possible representations, and are therefore fundamentally limited in their ability to characterize sociotechnical systems (when algorithms are applied to social contexts), etc. I think Ben Green is right that the remedy lies beyond introducing things like fairness metrics into AI systems and in “introducing new epistemic and methodological tools that expand the bounds of what it means to ‘do’ algorithms.”
in something semi-related, I like how Delphi talks about how it approaches ethics, using a “bottom-up approach to capture moral implications of everyday actions in their immediate context, appropriate to our current social and ethical climate.” exposing meta-ethical commitments is a vital part of thinking through what normative commitments are appropriate in the design of methods that aim to achieve some sort of good.
current word curry
(I skipped a few of these)
(I don’t know why it’s curry this time, Taylor’s voice is a lot more like soufflé)
is it insensitive for me to say
get your shit together
so i can love you