I was recently asked this question, and this is what I wrote:

I am always interested in the worldview and thinking process behind any question. A reasonable assumption here is that it is written from a human perspective and hence concerns itself with existential risk for humans. As a species, we have rarely concerned ourselves with existential risks to other species except where they somehow affect our food supply or some other aspect of our lived experience.

For me, this is a reflection of the true existential risk for humans: our psychology, and in particular our failure to step back and understand that there is always a bigger picture. It really struck me at a recent technology summit how the tech world simply creates technology because it can. Add in psychological motivations like power and status (i.e., money, fame) and we create a dangerous cocktail.

Human psychology means that we are often either blind to risks or fail to act on them because of some nearer-term consideration. The clearest examples (the risk of nuclear war, environmental collapse, runaway AI, bioengineering gone wrong) are all human-generated risks, and to truly address them we need to engineer a paradigm shift in how we think and act as a species.

There are also scenarios outside our control that cannot be discounted, from asteroid strikes to alien invasion. It makes sense to dedicate a portion of our resources to the more realistic and likely of these scenarios and to build global coalitions to protect the planet and all species from extinction. Having a plan B like Mars would mitigate some of these concerns, and yet we would bring the same human psychology with us.

How would we achieve this? My suggestion: let's start asking more questions and see where they take us.