Complex novelty, the ability to recognize when an activity is still teaching you new insight (and is therefore not 'boring'), poses a challenging theoretical question for those seeking to create an artificially intelligent system. The topic ties closely to the notions of both 'friendly' AI and finite optimization, and it suggests a theoretical route around the classic 'tile the world with paperclips' scenario. Identifying and understanding complex novelty offers a pathway for an AI to self-limit a given optimization process, to identify new goals for itself, and more generally to avoid extreme optimization toward goals completely alien to human ones (see: the orthogonality thesis).
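To make the idea a bit more concrete, here is a purely illustrative toy sketch (my own, not drawn from Yudkowsky's discussion) of an agent that keeps working on a goal only while each step still yields enough new insight, and otherwise moves on rather than optimizing that goal without bound. The `insight_gained` function, the `boredom_threshold` parameter, and the sample goals are all hypothetical stand-ins.

```python
# Toy sketch: an agent that pursues a goal only while it still produces
# "complex novelty", and self-limits (moves on) once the goal gets boring.
import random

def insight_gained(goal: str, step: int) -> float:
    """Hypothetical stand-in for 'how much did this step teach us?'.
    Here it simply decays with repetition, modeling diminishing novelty."""
    return random.random() / (1 + step)

def pursue_goals(goals, boredom_threshold=0.05, max_steps=100):
    """Work on each goal only while it keeps yielding enough new insight."""
    for goal in goals:
        for step in range(max_steps):
            if insight_gained(goal, step) < boredom_threshold:
                print(f"{goal}: boring after {step} steps, moving on")
                break  # self-limit: stop optimizing this goal, pick another
        else:
            print(f"{goal}: still interesting after {max_steps} steps")

if __name__ == "__main__":
    pursue_goals(["make paperclips", "learn number theory"])
```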
Eliezer Yudkowsky, founder of the rationality-focused community LessWrong, explores the complexity of the issue and its powerful implications for intelligent beings.
See his full discussion here!