Transcension as a Solution to the Fermi Paradox? [Smart’s 2011 Transcension Hypothesis]

The Transcension Hypothesis, as presented by John Smart, proposes a model of the universe in which an all-encompassing process of evolutionary development guides any sufficiently advanced civilization into what may be called "inner space": a computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of Space, Time, Energy, and Matter (STEM).

This process, which Smart estimates would complete roughly 600 years after a technological singularity, extends the idea that virtual minds operating at the nano and femto scales will keep compressing their STEM until they approximate a black-hole-like optimal computational environment.
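To get a feel for why a black hole marks a natural endpoint of STEM compression, one can evaluate the Bekenstein bound, the standard physics limit on how much information fits in a region of given size and energy; a black hole saturates it. This is a general illustration, not a calculation from Smart's paper, and the mass and radius below are arbitrary choices:

```python
import math

# Bekenstein bound: the maximum information (in bits) that can fit inside a
# sphere of radius R containing total energy E. A black hole saturates this
# bound, which is why it is sometimes treated as the densest possible
# information store. Constants are standard; the example values are arbitrary.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def bekenstein_bits(radius_m: float, energy_j: float) -> float:
    """Upper bound on the information content of a sphere, in bits."""
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# Example: 1 kg of mass-energy confined to a 10 cm sphere.
energy = 1.0 * C**2   # E = mc^2, in joules
print(f"{bekenstein_bits(0.1, energy):.2e} bits")  # ~2.58e42 bits
```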

Based upon a set of varied assumptions, Smart argues that once a civilization enters such an environment, the costs (both evolutionary and developmental) of reaching out to non-transcendent species will outweigh the benefits, hence our lack of contact with any highly advanced civilization. His assumptions are rooted in evolutionary biology, and imply a natural goal of fostering a maximally diverse set of transcendent species.

Smart’s assumptions are broad; however, he proposes a way to test the hypothesis against trends in the prevalence of civilizations’ electromagnetic emissions. See additional thoughts from Smart via his blog post.

Are We Living in a Simulation?

Are we living in a simulation? Nick Bostrom, founder of the Future of Humanity Institute at Oxford, argues that this scenario is not merely possible but in fact likely (provided humanity develops the necessary technology before going extinct).

His line of reasoning is as follows: a civilization capable of running ancestor simulations would likely run a very great number of them. If such simulations are run, the number of simulated minds would vastly exceed the number of non-simulated minds, so the probability that any given observer, ourselves included, is living in a simulation would be close to 1. One of two things must therefore be the case: either the probability that such simulations are ever run is vanishingly small (practically null), or it is almost certain that we ourselves are living in a simulation.
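The arithmetic behind that "almost 1" is simple to sketch. The toy Python snippet below uses made-up counts (none of these numbers appear in Bostrom's paper) just to show how the simulated fraction swamps everything else:

```python
# Toy illustration of the simulation argument's arithmetic.
# All quantities are hypothetical placeholders, not estimates from Bostrom.

real_minds = 10**10       # observers living in the one "base" reality
sims_run = 10**6          # ancestor simulations a mature civilization might run
minds_per_sim = 10**10    # simulated observers per simulation

simulated_minds = sims_run * minds_per_sim
total_minds = real_minds + simulated_minds

# If you can't tell which kind of observer you are, your credence of being
# simulated is just the simulated fraction of all observers.
p_simulated = simulated_minds / total_minds
print(f"P(simulated) = {p_simulated:.10f}")  # ~0.9999990000
```

Even granting base reality a generous population, the ratio is dominated entirely by how many simulations get run.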

See his original paper, along with a filmed discussion!

Coherent Extrapolated Volition (CEV)

Curious as to what makes an AI “friendly”? Or how humans might define a goal for some relatively omnipotent future optimization process that leads neither to “tiling the world with paper clips” nor to destroying humanity as we know it?

Eliezer Yudkowsky seeks to answer these questions, and to lay out a theoretical framework for defining friendly machine intelligence, through his idea of ‘Coherent Extrapolated Volition’ (CEV). CEV derives an abstract notion of humanity’s long-term intent for the world, and introduces terminology for discussing such ideas in the context of AI engineering.

Yudkowsky is also the founder of the rationality-focused discussion board LessWrong.

See his 2004 theory here!

Rationality’s Volition

Rationality’s Volition was founded to spur discussion on the impact of accelerating technologies. It exists primarily as a collection and summation of theory, with practical and hypothetical implications derived accordingly.

The blog’s title is a stylistic reference to the rational process in abstract cognitive beings that propagates intent. Most of the blog’s content focuses on this process and its ensuing philosophical implications. As we will see, this volition is very hard to predict.

Always looking for feedback/responses! Feel free to reach out with anything that rationally comes to mind!

Woohoooo!