Nicholas Carr is the former Executive Editor of Harvard Business Review, as well as a Pulitzer Prize finalist, and has recently published a book entitled “The Glass Cage”. The book explores the hidden costs of granting software dominion over our work and leisure, and is highly critical of the internet’s effect on individual cognition. Carr also penned the prominent 2008 article “Is Google Making Us Stupid?”
After listening to Carr speak, I think he raises an interesting point with “Automation Complacency”: his term for individuals’ tendency to disengage from (i.e. become ‘complacent’ in) their surroundings when using automated technology, because they believe such technology will solve any issue that arises.
His example: a ship’s captain notices that the ship appears to have missed a key landmark, but remains complacent because the GPS still says it is on the right track. The ship crashes as a result.
I do wonder, however, whether he is applying a somewhat Neo-Luddite framework to the more fundamental problem of human disregard for long-tail risk. Generally, individuals rely upon an internal statistical representation of a situation to develop a best response to stimuli. In this scenario, the captain may actually be acting as a rational agent: he chooses to ignore what appears to be tail risk (that the GPS is wrong) in order to follow what statistically occurs a very high percentage of the time (that the GPS is correct).
Hence, the captain isn’t ‘complacent’ because technology is easy to rely on; rather, he is making a rational decision given his internal statistical representation of the scenario.
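The trade-off can be made concrete with a toy expected-cost calculation. All numbers below are hypothetical, chosen only for illustration; nothing in Carr's example specifies them:

```python
# A minimal sketch of the captain's decision under tail risk.
# All figures are hypothetical, purely for illustration.

P_GPS_WRONG = 0.001      # hypothetical tail-risk probability that the GPS errs
COST_CRASH = 10_000_000  # hypothetical cost of running aground
COST_VERIFY = 1_000      # hypothetical cost of a delay to double-check

# Expected cost of trusting the GPS: pay the crash cost only in the tail case.
ev_trust = P_GPS_WRONG * COST_CRASH

# Expected cost of verifying: always pay the delay cost.
ev_verify = COST_VERIFY

print(ev_trust, ev_verify)  # prints: 10000.0 1000
```

With these numbers, even a fully rational captain should stop and verify, because the tail outcome is catastrophic relative to the cost of checking; shrink `COST_CRASH` or `P_GPS_WRONG` enough and trusting the GPS becomes the better bet. The disagreement between "complacency" and "rationality" may thus reduce to how accurately the captain's internal statistics weigh that tail.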
The Transcension Hypothesis, as presented here by John Smart, proposes a model of the universe in which an all-encompassing process of evolutionary development guides any sufficiently advanced civilization into what may be called “inner-space”: a computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of Space, Time, Energy, and Matter (STEM).
This process, which Smart estimates would complete roughly 600 years after a technological singularity, extends the idea that virtual minds existing at the nano and femto scales will continue compressing their STEM until approximating a black-hole-like, computationally optimal environment.
Based upon a set of varied assumptions, Smart estimates that, after entering such an environment, the evolutionary and developmental costs of reaching out to non-transcendent species will outweigh the benefits; hence our lack of contact with any other highly advanced civilization. His assumptions are rooted in evolutionary biology, and imply that there exists a natural goal of fostering a maximally diverse set of transcendent species.
Smart’s assumptions are broad; however, he proposes a method of testing his hypothesis based on trends in the prevalence of electromagnetic emissions. See additional thoughts by Smart here via blog post.
See a brief story illustrating the point of infinitely precise simulations running simulations, etc., here!
(It doesn’t take itself too seriously).
Are we living in a simulation? Nick Bostrom, founder of The Future of Humanity Institute at Oxford, argues that this scenario is not only possible but in fact likely (given that we are able to develop the necessary technology before humanity’s extinction).
His line of reasoning is as follows: a civilization capable of running simulations would likely run a very great number of them. If such simulations are run, the number of simulated people would be much greater than the number of non-simulated people, which would make the probability that we are living in a simulated universe almost 1. It becomes clear, then, that one of two things must be the case: either the probability that simulations are ever run is very small (practically null), or it is almost certain that we ourselves are living in a simulation.
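The counting step of the argument can be sketched in a few lines. The population and simulation counts below are hypothetical placeholders, not figures from Bostrom's paper; the conclusion depends only on simulated observers vastly outnumbering non-simulated ones:

```python
# A toy version of the simulation argument's counting step.
# All figures are hypothetical placeholders.

REAL_PEOPLE = 10**11            # hypothetical: observers in one "base" civilization
SIMULATIONS_RUN = 10**6         # hypothetical: simulations a capable civilization runs
PEOPLE_PER_SIMULATION = 10**11  # hypothetical: observers per simulation

simulated = SIMULATIONS_RUN * PEOPLE_PER_SIMULATION
total = simulated + REAL_PEOPLE

# Probability that a randomly chosen observer is simulated.
p_simulated = simulated / total
print(p_simulated)  # effectively 1 (here, about 0.999999)
```

Whatever placeholder values one picks, as long as simulations are numerous the ratio is driven toward 1, which is why the argument forces the disjunction: either simulations are (almost) never run, or we are almost certainly in one.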
Curious as to what makes AI “friendly”? Or how humans might define a goal for some relatively omnipotent future optimization process that leads neither to “tiling the world with paper clips” nor to destroying humanity as we know it?
Eliezer Yudkowsky seeks to answer these questions, and to lay out a theoretical framework for defining friendly machine intelligence, through his idea of ‘Coherent Extrapolated Volition’ (CEV). CEV derives an abstract notion of humanity’s long-term intent for the world, and introduces terminology for discussing such ideas in the context of AI engineering.
Yudkowsky is also the founder of the rationality-focused discussion board LessWrong.
See his 2004 theory here!
Rationality’s Volition was founded to spur discussion on the impact of accelerating technologies. It exists primarily as a collection and summation of theory, with practical and hypothetical implications derived accordingly.
The blog’s title is a stylistic reference to the rational process in abstract cognitive beings that propagates intent. Most of the blog’s content focuses on this process and its ensuing philosophical implications. As we will see, this volition is very hard to predict.
Always looking for feedback and responses! Feel free to reach out with anything that rationally comes to mind!