Over the last year or two, I’ve gone down some intellectual rabbit holes that led me to some pretty unusual beliefs. Here are some of my favorites of these beliefs. They mostly fall into two categories: weird things about my overall perspective on the multiverse and how to relate to it, and bullishness on the feasibility of using science to do extremely difficult things. I’m not trying to provide complete arguments for any of them here, more just trying to point at why I end up believing them.
I kind of want to include an epistemic status for these, but it’s hard because a lot of them feel very tied up with my whole worldview, in such a way that it’s hard to imagine them being wrong unless I’m desperately wrong about everything. I’m 80% confident that in five years I’ll endorse more than half of these ideas as being useful and worthy of inclusion on a list of weird good ideas.
For some reason, many of these beliefs require understanding the Solomonoff prior. I don’t know a great resource on Solomonoff induction; you could try Wikipedia or this.
I am responsible for some minor parts of the arguments below, but other people deserve almost all the credit for these.
UDASSA
This claims that the moral weight of an experience is proportional to the Solomonoff prior of locating it inside the universe. This is the best explanation of it. In my opinion, UDASSA solves a bunch of really thorny philosophical problems, including infinite ethics, Boltzmann brains, and the origin of the Born probabilities in quantum mechanics. This is closely related to what people call K-complexity-weighted utilitarianism. Paul says “it is the only [decision-making framework] I know which produces reasonable answers to reasonable questions; at the moment it is the only framework which I would feel comfortable using to make a real decision”.
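As a rough formalization (my own notation, not something taken from the explanation linked above), the weight of an experience \(e\) is proportional to the total prior mass of the programs that locate it:

\[
w(e) \;\propto\; \sum_{p \,:\, U(p) \text{ locates } e} 2^{-\lvert p \rvert},
\]

where \(U\) is a fixed universal Turing machine, the sum ranges over programs \(p\) that output our universe together with a pointer to the experience \(e\), and \(\lvert p \rvert\) is the length of \(p\) in bits. Experiences that are easier to locate (i.e. findable by shorter programs) get more weight.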
The mathematical universe hypothesis
Our universe is a mathematical object: ignoring general relativity, you can think of it as an initial state and a transition function from states to states. In the same way that I am in the US and not China, but China exists anyway, I think it’s reasonable to say that I’m in our universe and I’m not in a universe described by the laws of Conway’s Game of Life, but universes based on Conway’s Game of Life exist too. It’s kind of up to us to decide what we want the word “exist” to mean; I think that it makes a lot of sense to say that mathematically describable universes that aren’t ours “exist”.
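To make the “initial state plus transition function” picture concrete, here’s a minimal sketch (illustrative Python of my own, nothing canonical) of Conway’s Game of Life as exactly that kind of object:

```python
# A universe as (initial state, transition function): Conway's Game of Life,
# with the state represented as the set of live-cell coordinates.
from collections import Counter

def gol_step(live_cells):
    """One tick of the Game of Life transition function."""
    # Count live neighbours for every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick iff it has exactly 3 live neighbours,
    # or it has exactly 2 and is already alive.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# "Running the universe" is just iterating the transition function
# from the initial state.
state = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}  # a glider
for _ in range(4):
    state = gol_step(state)
```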
Just as I have moral preferences about the welfare of beings in our universe, I have preferences over the welfare of creatures that exist in Conway’s Game of Life. Of course, it’s not obvious how these preferences matter, given that you’d think that I can’t affect GOL-world because I’m not in it. But I think that inasmuch as I could affect GOL-world, I’d want to. And similarly to how I say that I have moral preferences over Bolivians even though I don’t particularly try to do anything specifically for Bolivians, I think it’s healthier to say I care about GOL-beings.
One obvious question here is how I should weight the preferences of GOL-beings against Earth-beings. I think the answer is something like “value universes according to their Solomonoff priors”. Note that this restricts my values to computable universes, which I’m somewhat uncomfortable with.
You might like Tegmark’s explanation of this here. Tegmark calls the set of all mathematically possible universes the level 4 multiverse; many people call it Tegmark 4.
Acausal trade
I think that superintelligent beings can trade with other superintelligent beings, even if the superintelligences aren’t causally connected to each other. This leads to much weirdness.
The universal prior is malign
This one is possibly the weirdest and hardest to explain: I think that if you try to use Solomonoff induction to make predictions in practice, your predictions will be hijacked by beings in other universes who want to trick you into letting them break into your universe. See this blog post for a somewhat better explanation.
I think that a similar argument might let us hijack hypercomputers in other universes (preferably hypercomputers which other beings aren’t using), which means that I think it’s plausible that humanity will be able to escape the heat death of our universe. (Note that because of UDASSA, I don’t think that this is particularly morally important.)
Technological bullishness
I think that it’s probably possible to do most things that don’t violate the laws of physics. For this reason, I think that if all goes well we’ll probably eventually be able to build Dyson spheres, engage in intergalactic colonization, move stars around in the sky, design proteins much more effectively than evolution does, and build molecular nanotechnology.
Cheap simulations of physics
On a similar theme to the previous section, I suspect that it’s fundamentally possible to approximate a full simulation of all important physical systems, with computational cost that grows much more slowly than quadratically in the number of particles in the system. For example, I suspect that scientists will eventually be able to run large-scale approximations of chemical systems (eg, simulating a whole human body) that start out with an atom-by-atom description of the system and calculate its dynamics using time and memory that grow only slightly faster than linearly in the size of the system. I predict that these simulations will take less than a million floating-point operations per atom per femtosecond (that’s \(10^{-15}\) seconds) to simulate something human-sized accurately enough that the simulated brain works. (This is still an enormous amount of computation.)
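To put a rough number on “enormous” (my own back-of-the-envelope, assuming something like \(7 \times 10^{27}\) atoms in a human body), that estimate comes out to

\[
7 \times 10^{27} \ \text{atoms} \;\times\; 10^{6} \ \tfrac{\text{FLOP}}{\text{atom}\cdot\text{fs}} \;\times\; 10^{15} \ \tfrac{\text{fs}}{\text{s}} \;\approx\; 7 \times 10^{48} \ \text{FLOP per simulated second.}
\]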
I further suspect that you can approximate the high level properties of such systems with substantially sublinear computational cost, eg by building multilevel world models that figure out that you can approximate arms by saying “we have ten thousand muscle cells in this fibre, each of which has approximately this efficiency at the moment” as opposed to keeping track of every molecule of ATP. (This breaks down as soon as your systems are purposefully computing things. For example, you obviously can’t arbitrarily speed up your computers by having them approximately simulate themselves.)
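Here’s a toy sketch of what I mean by a multilevel summary (made-up Python, purely illustrative): the fibre-level model tracks only aggregate quantities, not per-molecule state.

```python
from dataclasses import dataclass

@dataclass
class FibreSummary:
    """Summary-level state for one muscle fibre: aggregates instead of
    every molecule of ATP in every cell."""
    n_cells: int            # e.g. ~10,000 cells in this fibre
    mean_efficiency: float  # current average efficiency of the cells, in [0, 1]
    force_per_cell: float   # force one fully efficient cell produces at full activation

    def force(self, activation: float) -> float:
        # High-level dynamics computed directly from the summary.
        return self.n_cells * self.mean_efficiency * self.force_per_cell * activation

# fibre = FibreSummary(n_cells=10_000, mean_efficiency=0.8, force_per_cell=1e-4)
# fibre.force(activation=0.5)
```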
I suspect that you can do all of the above without using quantum computers. So I’m claiming that large scale systems usually don’t rely on quantum entanglement in complicated ways.
I basically believe this because humans are pretty consistently able to make predictions about the world, and almost every domain seems to allow decomposition into smaller parts, each of which can be simulated with just a summary of the rest of the parts. For example, a naive simulation of the \(n\)-body problem takes \(O(n^2)\) time per step, but approximation algorithms get it down to roughly \(O(n \log n)\). My intuition is related to the fact that SAT-solving is usually easy in practice. One counterexample is ab initio nuclear physics, which is extremely computationally difficult because it involves large, highly entangled quantum systems. But I feel like nuclear physics is about the worst case.
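Concretely, here’s the naive \(O(n^2)\) pairwise force computation, plus the summarization trick that tree codes like Barnes-Hut apply recursively to get down to roughly \(O(n \log n)\) (sketch code of my own, not taken from any particular library):

```python
import numpy as np

G = 1.0  # gravitational constant in toy units

def accel_naive(pos, mass):
    """O(n^2): sum the exact contribution of every other body."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[j] - pos[i]
            acc[i] += G * mass[j] * d / np.linalg.norm(d) ** 3
    return acc

def accel_with_cluster_summary(pos, mass, i, near, far):
    """Force on body i, treating the far-away bodies as one pseudo-particle
    at their centre of mass. Applying this recursively over a spatial tree
    is the Barnes-Hut idea, which brings the total cost to ~n log n."""
    acc = np.zeros(pos.shape[1])
    for j in near:                      # exact terms for nearby bodies
        if j != i:
            d = pos[j] - pos[i]
            acc += G * mass[j] * d / np.linalg.norm(d) ** 3
    far = np.asarray(far)
    m_far = mass[far].sum()             # one summarized term for the whole cluster
    com = (mass[far, None] * pos[far]).sum(axis=0) / m_far
    d = com - pos[i]
    return acc + G * m_far * d / np.linalg.norm(d) ** 3
```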
This is important because it suggests that we can cheaply run simulations of our world, which plays into the simulation hypothesis, and probably acausal trade. (There’s a fun rabbit hole here about whether you can build experiments that classical computers couldn’t cheaply predict the outcomes of. If it’s possible to hide information from your simulators, you can probably force them to spend exponential time on computing the results of some of your experiments. But I expect that simulators can probably read our minds, and also they can probably do something where they guess the result of the experiment and then if they guessed wrong they roll the solution back and try a different guess. Thanks to Peter Schmitt-Nielsen for discussions on this topic.)
These beliefs are really weird, but I think that several of them play into my worldview and decisions in somewhat important ways.
For example, there are many proposals for AI safety that involve putting AI systems in some kind of competitive game: eg, you have one system that proposes plans and you have a second system that evaluates them. Acausal trade suggests that if the systems are above some threshold of intelligence, they’ll be able to collude against their operators. It’s not obvious that such systems will be intelligent enough to do this in practice, but knowing that this safety mechanism fails in the high-intelligence limit makes me a lot more concerned about it.
Similarly, my bullishness on science makes it seem more plausible to me that AGI might be able to rapidly do really wild things. This makes me more worried about AI safety.
Others, like the mathematical universe hypothesis, aren’t that decision-relevant, but they do leave me feeling like I have a much more consistent understanding of the universe than I used to have.