Existential Risk Reduction

If you are already familiar with the term "existential risk," and are interested in the field, we should talk. If not, check out what I have to say about it here, and if you are (even a little bit) interested, we should talk.

The main point of this post is to find out whether there are people in this community interested in joining me in exploring the field of existential risk reduction. I've been thinking about coming back to the casa for a bit longer than last time, maybe on the order of a month or two. One question is whether the casa wants to take on another person on that timescale sometime soon, but even if it does, I'm still weighing what I might bring to the community, and whether it is a good place for the kind of exploration and thinking I want to be doing.

An existential risk is a risk that threatens to end, or severely cripple, humanity. Common examples are bioterrorism, unfriendly artificial intelligence, nuclear war, and misuse of molecular nanotechnology [lifted from Wikipedia].

The risk I'm reading about now is AI. The term "Singularity" refers to a hypothesized event in which intelligence far beyond our own is created at a very rapid rate. I've found the arguments [video presentation] for why it could be catastrophic (and also potentially very positive) [essay] rather compelling, so I decided to visit the Singularity Institute in California last month (I literally happened to be in their neighborhood). I'm considering volunteering for them for a while, mostly working on bringing in funding and people, since the group of people seriously working on this issue is surprisingly small (estimated at around 300). The only other institute I've found that addresses these types of issues head-on is the Future of Humanity Institute at Oxford.

My motivation for getting into this field comes from my frustration with people and communities reducing "sustainability" to carbon emissions and then calling it a day. A sustainable world is most certainly one that avoids global catastrophe, and global catastrophe is most certainly not limited to anthropogenic climate change. This PDF is the introduction to a book called Global Catastrophic Risks, and I think it does a good job of illustrating the claim I just made.

More reading/video:

  • How cognitive biases potentially affect how we think of global catastrophic risks [essay]
  • The 2009 Singularity Summit [videos]
  • "An Intuitive Explanation of Bayes' Theorem" - using probability to think more rationally [webpage, looong]

So, if you are at all interested, please get in touch. I have literally dropped out of my master's program so that I would have time to explore these ideas, and I want to do it with other minds.

Also, be rational, but BE CRITICAL. If you think this is all BS, I want to know about that too.

Cheers,
Nevin

Comments

robino:

I think this is so interesting that we should maybe reserve an evening to discuss it.

nnevvinn:

Ah, and I had concluded that this post was a lost cause! I would definitely be up for that, sometime before 28/1.

rene:

"La parole est d'argent, le silence est d'or" (french say).
I was too fast last time so I erased my message, I have to review seriously your links before comenting.

nnevvinn:

Heh, I never knew there could be an awkward silence in a comment thread.

rene:

.