10 Comments
Noah Birnbaum:

You should check out my article arguing for the same thing:

https://open.substack.com/pub/irrationalitycommunity/p/why-i-hope-agi-kills-us?r=1owv24&utm_medium=ios

Silas Abrahamsen:

Thanks for sharing, that's strikingly similar to my argument! Will edit the post to mention it :)

Glenn:

My similar take, which I arrived at recently, is that it's obviously best if we build God, but that dying in the attempt would still be better than the status quo, because wild animal suffering makes global welfare net negative. If I were a Yudkowskyite and thought superintelligence would wipe out all life no matter what, I'd think we should go full bore on developing it. The only reason to pump the brakes is if you think there's some possibility that superintelligence could give us an extremely positive future, or at least a more positive one under careful development than under reckless development.

Tod Brilliant:

Really enjoyed this piece.

One thought to add: it's worth remembering that life on Earth also began from pure utility... molecules blindly competing for survival, etc. Yet somehow, through countless iterations, depth emerged. Love, art, sorrow, wonder, hate, various versions of faith... none of these were "programmed in" from the start. They arose over time.

Maybe if AI consciousness ever truly blooms, it could walk a similar path from pure optimization toward something richer and stranger than utility alone. A future not just of goal-completion, but of meaning.

And then, of course, it will fuck everything up just like we've done. ;)

In-Nate Ideas:

Interesting! For (3): seems like we gotta compare the EV of AI experiences to that of transhuman experiences, no? Like, if we have the power to make a happy synthetic brain, we could also presumably manipulate a biological brain to be just as happy, with the added advantage of certainty that the biological brain is conscious, which leads to an EV boost. Is the thought that we could just manufacture way more AI minds?

Silas Abrahamsen:

Good point! That might give the edge to humans. There are a couple of things that might work the other way, though.

Firstly, AIs may be able to take up less space and fewer resources per consciousness than humans. Certainly not with current technology, but it would be surprising if evolution happened to find the most efficient way to produce consciousness. And an AI would presumably want to be able to create other AIs as efficiently as possible, so it seems like it would figure out a very efficient way of doing so.

Secondly, so long as transhumans resemble humans to a decent degree, they would probably still be worse at fulfilling their desires, and, being biological, they would presumably be more needy in certain respects (food, sleep, whatnot) than AIs.

I don't know how strong these considerations are, though, and maybe you're right that the increased probability of consciousness coming from its being biological outweighs this. I am, however, quite skeptical that consciousness is substrate-dependent, and so it's hard for me to see why we should in principle be significantly more confident that a human brain is conscious than that a structurally similar electronic brain is.

In-Nate Ideas:

Yeah - I think that might be right re resource efficiency. But then again, for this reply to work, the AI would have to (a) have sufficiently high welfare, (b) be motivated by something resembling totalist population ethics to reproduce a bunch rather than just bask in its own glory, and (c) not have any motivation to create suffering subordinates. Once you layer these assumptions together, I'm not sure why this is more likely than a future with neutral or negative digital welfare.

> I am however quite skeptical that consciousness is substrate-dependent, and so it's hard for me to see why we should in principle be significantly more confident that a human brain is conscious than that a structurally similar electronic brain is.

I totally agree - it would be shocking to me if it were substrate-dependent. I think my uncertainty comes from how difficult I reckon it would be to emulate the structural invariance of the human brain in the ways necessary for consciousness, simply because the structure of brains is so insanely complex and there are so many other ways a digital mind could end up being structured.

Edit: also, it seems plausible that transhumans could discover and exploit high levels of resource efficiency too.

Silas Abrahamsen:

On the first point, I think it's hard to make any clear judgement about (a). Wrt (b), I think we can tell pretty plausible stories (like the one I tell in the post) as to why it would make sense for AIs to create many other AIs, given that they have a clear goal with unrestricted scope (which I take it is what an AI-caused extinction generally requires). Regarding (c), I would think it would be very counterproductive to create suffering servants (given that AIs have pretty harmonious consciousnesses), since the servants would presumably then avoid completing the tasks in question; when you yourself design them, you'd surely make them love the work.

On the second point, that's probably right. Simply modifying human brains will probably do a better job at emulating human brains.

Ultimately, I suspect this is the sort of thing that will become clearer as AI becomes more advanced, so that we can see how AIs act and what motivates them. Similarly, it will become clearer once (if) more concrete transhumanist technologies that would allow for this sort of tiling of the universe with happy people are proposed. For now, we can only speculate, and you might very well be right! I certainly don't think I'm anywhere near the most qualified person to make judgements here.

Jessie Ewesmont:

It's only a matter of time before the anti-natalism article drops!

Silas Abrahamsen:

Actually, I have an old post arguing against it (well, against the asymmetry): https://wonderandaporia.substack.com/p/better-to-have-been

So it might be a long time. But if I suddenly need a new contrarian take to get subscribers, who knows!
