The Subtlety of Boredom in Artificially Intelligent Systems

Complex Novelty, the ability to recognize when an activity is still teaching you something (and is therefore not ‘boring’), poses a challenging theoretical question for those seeking to create an artificially intelligent system. The topic, which ties closely to the notions of both ‘friendly’ AI and finite optimization, provides a theoretical method for avoiding a ‘tiling the world with paperclips’ scenario. Identifying and understanding complex novelty offers a pathway for an AI to self-limit a given optimization process, to identify new goals for itself, and to generally avoid extreme optimization toward goals completely alien to those of humans (see: the orthogonality thesis).
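
To make the idea of self-limiting optimization slightly more concrete, here is a minimal toy sketch (not from Yudkowsky's essay, purely illustrative): a hypothetical agent keeps pursuing a goal only while each step still yields some marginal novelty, and stops once the activity stops teaching it anything new. The names (`marginal_novelty`, `bounded_optimizer`), the novelty proxy, and the thresholds are all assumptions made for the example.

```python
import random

def marginal_novelty(history, observation, bins=20):
    """Toy proxy for 'insight': how rarely has a similar observation been seen?
    Returns a value near 1.0 for unfamiliar observations and near 0.0 for
    observations the agent has already encountered many times."""
    bucket = int(observation * bins)
    seen = history.count(bucket)
    history.append(bucket)
    return 1.0 / (1.0 + seen)

def bounded_optimizer(steps=1000, boredom_threshold=0.15, patience=25):
    """Pursue a goal, but stop once the activity becomes 'boring':
    once marginal novelty stays below the threshold for `patience` steps."""
    history, dull_streak = [], 0
    for step in range(steps):
        observation = random.random()  # stand-in for the result of one optimization step
        novelty = marginal_novelty(history, observation)
        dull_streak = dull_streak + 1 if novelty < boredom_threshold else 0
        if dull_streak >= patience:
            return step  # self-limit instead of optimizing forever
    return steps

if __name__ == "__main__":
    print("stopped after", bounded_optimizer(), "steps")
```

In this sketch the agent never runs its loop to exhaustion; it halts when additional effort stops producing anything new, which is the intuition behind using boredom as a brake on unbounded optimization.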

Eliezer Yudkowsky, founder of the rationality-focused community LessWrong, discusses the complexity of the issue and its powerful implications for intelligent beings.
See his full discussion here!

Coherent Extrapolated Volition (CEV)

Curious about what makes an AI “friendly”? Or about how humans might define a goal for some relatively omnipotent future optimization process that leads neither to “tiling the world with paperclips” nor to destroying humanity as we know it?

Eliezer Yudkowsky seeks to answer these questions, and to lay out a theoretical framework for defining friendly machine intelligence, through his idea of ‘Coherent Extrapolated Volition’ (CEV). CEV derives an abstract notion of humanity’s long-term intent for the world and introduces terminology for discussing such ideas in the context of AI engineering.

Yudkowsky is also the founder of the rationality-focused discussion board LessWrong.

See his 2004 theory here!