Coherent Extrapolated Volition (CEV)

Curious as to what makes AI “friendly”? Or how humans might define a goal for some relatively omnipotent, future optimization process that doesn’t end in either “tiling the world with paper clips” or the destruction of humanity as we know it?

Eliezer Yudkowsky seeks to answer these questions, and to lay out a theoretical framework for defining friendly machine intelligence, through his idea of ‘Coherent Extrapolated Volition’ (CEV). CEV derives an abstract notion of humanity’s long-term intent for the world and introduces terminology for discussing such ideas in the context of AI engineering.

Yudkowsky is also the founder of the rationality-focused discussion board LessWrong.

See his 2004 theory here!

Rationality’s Volition

Rationality’s Volition was founded to spur discussion on the impact of accelerating technologies. It primarily exists as a collection and summation of theory, with practical and hypothetical implications derived accordingly.

The blog’s title is a stylistic reference to the rational process in abstract cognitive beings that propagates intent. Most of the blog’s content focuses on this process, as well as the ensuing philosophical implications. As we will see, this volition is very hard to predict.

Always looking for feedback and responses! Feel free to reach out with anything that rationally comes to mind!

Woohoooo!