At a party a few nights ago, I had a fun conversation about ethics with someone whose moral beliefs are extremely similar to the ones I had two or three years ago. My ethical beliefs have shifted quite a bit over the last few years, so we had some vigorous disagreements. I thought it might be useful to write up the ways my beliefs have changed and why.

The conversation started when someone said that after the Singularity, if there was a magic button that would turn all of humanity into homogeneous goop optimized for happiness (aka hedonium), they’d press it. I told them that I think people shouldn’t press buttons like that. A few years ago, I advocated pressing buttons like that. They asked me what had changed.

I think that I’ve changed my mind in two main ways. Firstly, I now feel more sold on the idea of cooperativeness. This plays out in a few different ways. I now think there is more value in obeying deontological rules like “don’t non-consensually kill people”. I now see myself more as a participant in a community of humans, where we should generally be trying to make positive-sum trades wherever we can. I also now have the concept of the unilateralist’s curse, which inclines me against taking drastic unilateral actions like forcibly turning everyone into hedonium.

Secondly, my views on ethics have changed. I was once a bullet-biting hedonic utilitarian. Nowadays my values are more complex. I now feel pretty terminally opposed to death, and I care about there being beauty in the world, and stuff like that. Also, reducing suffering feels different to me now. I now feel that the concept of suffering is more complex and value-laden than I once thought. It no longer seems to me that we’ll eventually write down a simple equation for what suffering and pleasure are, and put that equation into some AGI so that it can optimize it.

What changed my mind? A few things.

Firstly: Once upon a time, my approach to ethics was to figure out the best theory of ethics and act according to that, where you measure “best” as the product of a theory’s simplicity and how well it fits your intuitions. Nowadays, I think of it more as “take all the theories of ethics that resonate with you and mix them together”. I think this is basically a more reasonable approach to metaethics. The difference feels analogous to the difference between fully Bayesian inference and maximum a posteriori inference; the fully Bayesian approach feels obviously more “correct” there. Note that I didn’t express what I was doing very clearly at the time; if I’d thought about it more carefully I might have changed my perspective more quickly.
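
To spell out the analogy a little (this is my gloss, and a simplification): maximum a posteriori inference commits to the single most probable hypothesis, whereas the fully Bayesian approach keeps every hypothesis around, weighted by its posterior probability:

$$ \hat{\theta}_{\text{MAP}} = \arg\max_{\theta} p(\theta \mid D) \qquad \text{vs.} \qquad \mathbb{E}[f \mid D] = \int f(\theta)\, p(\theta \mid D)\, d\theta $$

“Pick the single best ethical theory and act on it” is the MAP move; “mix together all the theories that resonate with you, weighted by how much they resonate” is the posterior-average move.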

(When I was a child, probably 12 or so, I heard that jellyfish are a colony of different creatures, and I wondered if they should be considered to have increased moral value as a result of having multiple different genomes. And I wondered whether humans should be considered to have double the moral value as a result of mitochondrial DNA being separate from our main genome. I think these are both pretty dumb; this seems like evidence that I’m biased towards the perspective on ethics that I’ve updated away from.)

Secondly, my views on consciousness changed. I spent a hundred hours or so working with Luke Muehlhauser on a project related to philosophy of mind. In particular, I spent time reading and learning about illusionist and reductionist accounts of consciousness, which claim that we should try to understand consciousness by thinking about what kinds of systems would report the kinds of phenomena that we experience. For example, consider the question “how do we know whether two different people have the same experiences of seeing red and seeing green, rather than having inverted experiences?”. The fact that this question feels natural suggests that humans usually have an implicit model on which it’s coherent to imagine two people looking at the same color and having different sense experiences, as opposed to, for example, thinking that they logically must have the same sense experience, or thinking that the question is incoherent. Illusionists often try to give reductionist accounts of this kind of thing.

As a result of thinking about consciousness like this, it felt to me that consciousness is much less of a real, objective concept than I’d once thought. In his report, Luke makes an analogy to the concept of life, which is very fuzzy. There are lots of things that are obviously alive, like trees and dogs, but there are also a whole bunch of more ambiguous cases, like viruses, giant viruses, mitochondria, and prions. There’s no clear boundary you can draw here. He says he suspects consciousness will end up seeming as fuzzy as life once we understand it better.

So while hedonic utilitarianism once felt like a very natural, simple concept, it came to feel like a high-complexity concept that required making arbitrary judgement calls based on fuzzy intuition. This removed a large part of its advantage over more complex values, and also made the whole project of looking for simple values seem dumber and more confused to me.

(Note that I am still very confused about consciousness, and I sometimes struggle to say anything sensible in response to even quite basic questions about it. So don’t take my opinions here too seriously.)

Thirdly, my initial enthusiasm for hedonic utilitarianism was partly a result of my liking and trusting hedonic utilitarians and distrusting other people who thought about the ethics of the long-term future. Hedonic utilitarians were people like Brian Tomasik, who seemed extremely reasonable and extremely good at spotting what looked like incredibly important considerations, like wild animal suffering, the suffering of computations, and the suffering of insects. My understanding of moral progress made it seem like society’s views usually shift in the direction Brian was already in. So I felt strongly inclined to trust the simple hedonic utilitarian perspective that he embodied. (Note that Brian isn’t actually a hedonic utilitarian, though his posts still pushed in a suffering-focused, hedonic utilitarian direction compared to other people thinking about the long-term future.)

And I felt very suspicious of the people who thought about the long-term future but didn’t seem very worried about astronomical suffering. A lot of them seemed to have bad arguments for not worrying about animal suffering. These people seemed terrified of dying, and I didn’t trust them to accurately evaluate whether the future would be better without humanity in it.

I now feel less suspicious of people who think about the far future and have complex values; I’ve updated towards believing in their selflessness and altruism. Several different anti-death people have told me that they’d happily die to stop other people from dying; this made me feel a bunch better about them.

I think that my hedonic utilitarian intuitions weren’t absurd, and they did a better job of pushing me towards important problems than most people’s ethical views. So I don’t regret my wild and reckless youth. I think it was dumb of me to not realize the extent to which I was probably still confused about ethics; I regret this.

I’m probably not doing justice to the best arguments for hedonic utilitarianism here; I just tried to argue against the reasons that I believed what I believed.


See comments on Facebook here.