Modeling Nash Equilibria in Artificial Intelligence Development

In his discussion of a theoretical artificial intelligence “arms race”, Nick Bostrom, Director of the Future of Humanity Institute at Oxford, presents a model of future AI research in which development teams compete to create the first general AI. Under the assumption that the first AI will be very powerful and transformative (a debatable premise, per the soft vs. hard takeoff debate), each team is highly incentivised to finish first. Bostrom argues that the level of safety precautions each development team will undertake reflects broader policy parameters, specifically the permitted level of market concentration (i.e. consolidation of research teams) and the degree of information accessibility (i.e. intellectual property protection and algorithm secrecy).

In his work, Bostrom does not reach one specific conclusion regarding AI safety levels, but instead characterises a set of Nash equilibria across various numbers of development teams and levels of information accessibility. Specifically, he notes that having additional development teams (and therefore reduced market concentration) may increase the likelihood of an AI disaster, especially if risk-taking matters more than skill in developing the AI. Increased information accessibility also increases risk: the more teams know of each others’ capabilities and methodologies, the greater the velocity of, and enmity in, development, and a greater equilibrium danger level follows accordingly.
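
To make the shape of this argument concrete, the sketch below is a minimal Monte Carlo toy, not Bostrom’s actual model: each team’s performance is a random skill draw plus a bonus for skimping on safety (risk_weight crudely standing in for how much risk-taking matters relative to skill), the highest performer deploys its AI, and the winner’s payoff is zero unless its AI turns out safe. The scoring rule, the payoffs, and the iterated best-response search over a coarse grid are all simplifying assumptions of my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_payoff(s_own, s_rivals, risk_weight, n_trials=10_000):
    """Monte Carlo estimate of a team's expected payoff when it picks safety
    level s_own (0 = reckless, 1 = maximally careful) and its rivals use the
    levels in s_rivals.  Performance = random skill + risk_weight * (1 - safety);
    the winner deploys, and its AI is safe with probability equal to its safety."""
    s_rivals = np.asarray(s_rivals, dtype=float)
    skill_own = rng.random(n_trials)
    skill_rivals = rng.random((n_trials, s_rivals.size))
    perf_own = skill_own + risk_weight * (1.0 - s_own)
    perf_rivals = skill_rivals + risk_weight * (1.0 - s_rivals)
    wins = perf_own > perf_rivals.max(axis=1)
    return float(np.mean(wins * s_own))   # payoff 1 only for a safe win

def symmetric_equilibrium(n_teams, risk_weight, grid=np.linspace(0, 1, 21)):
    """Heuristic search for a symmetric equilibrium safety level via
    iterated best response on a discretized grid."""
    s = 1.0
    for _ in range(30):
        rivals = np.full(n_teams - 1, s)
        best = max(grid, key=lambda c: expected_payoff(c, rivals, risk_weight))
        if abs(best - s) < 1e-9:
            break
        s = best
    return s

# Sweep team counts to see how the equilibrium safety level responds to
# reduced market concentration under these toy assumptions.
for n in (2, 5, 10):
    print(n, symmetric_equilibrium(n, risk_weight=1.5))
```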

Bostrom’s derivation is intended to spur discussions on AI governance design. See his original paper here!

Far Out: Using Individual GPS Data to Predict Long-Term Human Mobility

Where will you be exactly 285 days from now at 2PM? Adam Sadilek (University of Rochester) and John Krumm (Microsoft Research) seek to answer this question through their work Far Out: Predicting Long-Term Human Mobility. Their model, a nonparametric method that extracts significant and robust patterns in location data through the framework of an eigendecomposition problem, is noted as the first to predict an individual’s future location with high accuracy, even years into the future.

Sadilek & Krumm evaluated a massive dataset, more than 32,000 days’ worth of GPS data across 703 diverse subjects, by creating a 56-element vector for each day a subject used their GPS device: 24 elements representing the median GPS latitude for each hour of the day, 24 for the median GPS longitude, 7 as a binary encoding of the day of the week, and a final element as a binary indicator of a national holiday. By performing their analysis on these ‘eigendays’, Sadilek & Krumm were able to capture long-term correlations in the data, as well as joint correlations between the additional attributes (day of week, holiday) and GPS locations.
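
As a rough illustration of this setup (not the authors’ code), the sketch below assembles one 56-element vector per day from a pandas DataFrame with hypothetical ‘timestamp’, ‘lat’, ‘lon’, and ‘holiday’ columns, then extracts eigendays via an SVD of the mean-centred day matrix; the paper’s actual pipeline, including its treatment of hours with no GPS fixes, is more sophisticated.

```python
import numpy as np
import pandas as pd

def day_vectors(df):
    """One 56-element vector per calendar day: 24 hourly median latitudes,
    24 hourly median longitudes, a 7-element day-of-week indicator, and a
    holiday flag.  Column names here are assumptions, not from the paper."""
    df = df.copy()
    df["date"] = df["timestamp"].dt.date
    df["hour"] = df["timestamp"].dt.hour
    rows = []
    for date, day in df.groupby("date"):
        hourly = day.groupby("hour")[["lat", "lon"]].median()
        hourly = hourly.reindex(range(24)).ffill().bfill()  # hours without fixes
        dow = np.zeros(7)
        dow[pd.Timestamp(date).dayofweek] = 1.0
        holiday = np.array([float(day["holiday"].iloc[0])])
        rows.append(np.concatenate([hourly["lat"].to_numpy(),
                                    hourly["lon"].to_numpy(), dow, holiday]))
    return np.vstack(rows)

def eigendays(day_matrix, k=10):
    """Top-k principal directions ('eigendays') of the mean-centred day
    vectors, computed here via SVD (equivalent to an eigendecomposition
    of the day-vector covariance matrix)."""
    centred = day_matrix - day_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[:k]   # each row is one 56-element eigenday
```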

The data employed by Sadilek & Krumm would not be difficult to reproduce; the GPS devices used to track individual location were near replicas of those most people already carry around in their phones. As such, the implications of their model are numerous. Focused on an individual, ‘Far Out’ may enable better reminders, search results, and advertisements (e.g. “need a haircut? In 4 days, you will be within 100 meters of a salon that will have a $5 special at that time”). Focused on a societal scale, ‘Far Out’ may allow for the first comprehensive scientific approach to urban planning (traffic patterns, the spread of disease, demand for electricity, etc.), and facilitate previously unseen precision in both public and private investment decisions (where to build a fire station, a new pizza shop, etc.).

Additional implications may be drawn when long-term mobility modeling is combined with broader personal information, such as real-time location data or demographic trends. For the former, one may compare recent location information with predicted long-term coordinates to detect unusual individual behavior; for the latter, one could combine long-term location predictions with age, gender, or ethnicity information to predict economic fluctuations, crime trends, or hyper-local political movements.
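
As a toy version of that anomaly-detection idea (not something proposed in the paper), one could flag behavior as unusual whenever an observed fix lies farther from the predicted coordinates than some tolerance; the 25 km threshold below is purely hypothetical.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def is_unusual(observed, predicted, threshold_km=25.0):
    """Flag a GPS fix as unusual if it falls outside a (hypothetical) tolerance
    around the long-term prediction for that hour."""
    return haversine_km(*observed, *predicted) > threshold_km

print(is_unusual(observed=(47.61, -122.33), predicted=(47.65, -122.35)))
```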

See their full methodology & results here!

The Relevance of a Singleton in Managing Existential Risk

The idea of a ‘Singleton’, a universal decision-making agency that maintains world order at the highest level, offers a functional means for discussing the implications of global coordination, especially as they relate to existential risk. In his 2005 essay, Nick Bostrom introduces the term and elaborates on the possible forms a Singleton could take, the ways one could arise, and its ability to manage global catastrophes.

Bostrom notes that a Singleton may come into being in various forms, including, but not limited to, a worldwide democratic republic, a worldwide dictatorship, or an omnipotent superintelligent machine; the last of these is the least intuitive (and certainly the most closely tied to science fiction), but does, in certain forms, meet Bostrom’s definitional requirements for a Singleton.

One may note characteristics common to all forms of a Singleton. Its necessary powers include (1) the ability to prevent any threats (internal or external) to its own supremacy, and (2) the ability to exert control over the major features of its domain. A Singleton in ‘traditional government’ form may emerge if it is seen as necessary to curtail potentially catastrophic events. Historically, the two most ambitious efforts to create a world government have grown directly out of crisis (the League of Nations, the United Nations); the increasing power and ubiquity of military capabilities (e.g. nuclear, nanobot, AI) may rapidly build support for a globally coordinated government in the future. A Singleton in superintelligent-machine form may arise if a machine becomes powerful enough that no other entity could threaten its existence (possible through an uploaded consciousness or the ability to easily self-replicate), and if it holds universal monitoring, security, and cryptography technologies (plausible given the rapidly increasing volume of internet-connected devices).

Although not without disadvantages (touched on further in the paper), the creation of a Singleton would offer a method for management of existential risk. See Bostrom’s full discussion on the merits of a Singleton here!

Orthogonality & Instrumental Convergence in Advanced Artificial Agents (Bostrom)

In his review of the theoretical superintelligent will, Nick Bostrom, Director of the Future of Humanity Institute at Oxford, applies a framework for analyzing the relationship between intelligence and motivation in artificial agents, and posits a set of intermediate goals that nearly any artificially intelligent system would pursue.

Specifically, Bostrom notes the orthogonality of intelligence (here described as the capacity for instrumental reasoning) and motivation, and hence reasons that any level of intelligence could be combined with any motivation or final goal; in this way, the two may be thought of as axes along which possible agents can freely vary. This idea, often obscured by the human bias towards anthropomorphizing non-sentient systems, implies that superintelligent systems may be motivated to strive towards simple goals (such as counting grains of sand), impossibly complex ones (such as simulating the entire universe), or anything in between. They would not, however, inherently be motivated by characteristically human final goals, such as reproduction or the protection of offspring. High intelligence does not necessitate human motivations.
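
Read as a type signature, the thesis amounts to the claim that capability and final goal are independent parameters of an agent; the tiny sketch below is only my illustration of that independence, and none of it comes from Bostrom’s paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    capability: float                    # "intelligence" axis: instrumental reasoning power
    final_goal: Callable[[dict], float]  # "motivation" axis: utility over world states

# Nothing constrains how the two axes combine: a maximally capable agent may
# value nothing but counted sand grains, or a perfect universe simulation.
sand_counter = Agent(capability=1.0,
                     final_goal=lambda world: float(world.get("grains_counted", 0)))
universe_simulator = Agent(capability=1.0,
                           final_goal=lambda world: float(world.get("sim_fidelity", 0.0)))
```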

Bostrom ties this notion of orthogonality to the concept of instrumental convergence, noting that while artificially intelligent agents may have an infinite range of possible final goals, there are some instrumental (intermediate) goals that nearly any artificial agent will be motivated to pursue, because they are useful for reaching almost any possible final goal. Examples of instrumental goals include cognitive enhancement and goal-content integrity. Regarding the former, nearly all agents would seek improvement in rationality and intelligence, as this improves an agent’s decision-making and makes it more likely to achieve its final goal. Regarding the latter, an agent has a present instrumental reason to prevent alteration of its final goal, because it is more likely to realize that goal if it still values it in the future.

Bostrom synthesizes the two theses by warning that a superintelligent agent will not necessarily value human welfare, or acting morally, if doing so interferes with the instrumental goals needed to achieve its final goal.

See his full discussion here!

The Subtlety of Boredom in Artificially Intelligent Systems

Complex Novelty, the ability to identify when an activity is teaching you insight (and is therefore not ‘boring’), poses a challenging theoretical question for those seeking to create an artificially intelligent system. The topic, which ties closely to the notions of both ‘friendly’ AI and finite optimization, provides a theoretical method for avoiding a ‘tiling the world with paperclips’-type scenario. The identification and understanding of complex novelty offers a pathway for an AI to self-limit a given optimization process, to self-identify new goals, and to generally avoid extreme optimization towards goals completely alien to those of humans (see: orthogonality thesis).
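
One crude way to picture such self-limiting behaviour is an optimizer that halts once further steps stop teaching it anything. In the sketch below, marginal improvement stands in for genuine novelty, a simplification Yudkowsky treats far more carefully; nothing here reflects his actual proposal.

```python
import numpy as np

rng = np.random.default_rng(1)

def optimise_until_bored(score, propose, x0, min_gain=1e-3, patience=5, max_iters=1000):
    """Hill-climb only while proposals still yield appreciable improvement;
    halt once the gain stays below min_gain for `patience` consecutive steps,
    rather than optimising without bound."""
    x, best, stale = x0, score(x0), 0
    for _ in range(max_iters):
        candidate = propose(x)
        gain = score(candidate) - best
        if gain > min_gain:
            x, best, stale = candidate, best + gain, 0
        else:
            stale += 1
            if stale >= patience:
                break          # nothing new is being learned: stop optimising
    return x, best

# Example: maximise a simple concave objective with small random steps.
objective = lambda v: -(v - 3.0) ** 2
proposal = lambda v: v + rng.normal(scale=0.1)
print(optimise_until_bored(objective, proposal, x0=0.0))
```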

Eliezer Yudkowsky, founder of the rationality-focused community LessWrong, discusses the complexity of the issue and its powerful implications for intelligent beings.
See his full discussion here!

Nicholas Carr: Brief Response to ‘The Glass Cage’

Nicholas Carr, a former Executive Editor of Harvard Business Review and a Pulitzer Prize finalist, recently published a book entitled “The Glass Cage”. The book explores the hidden costs of granting software dominion over our work and leisure, and is highly critical of the internet’s effect on individual cognition. Carr also penned the prominent 2008 article “Is Google Making Us Stupid?”

After listening to Carr speak, I think he brings up an interesting point regarding “Automation Complacency”: the term he uses to describe individuals’ choice to become disengaged from (i.e. ‘complacent about’) their surroundings when using automated technology, because they believe such technology will solve any issue that arises.

His example:  A ship captain is complacent when his ship appears to have missed a key landmark, because the ship’s GPS still says it is on the right track. The ship crashes as a result.

I do wonder, however, if perhaps he is applying a bit of a Neo-Luddite framework to what is the more fundamental problem of human disregard for long-tail risk. Generally, individuals rely upon their internal statistical representation of a situation to develop a best response to stimuli. In this scenario, the captain may actually be acting as a rational agent. He chooses to ignore what appears to be tail risk (that the GPS is wrong) in order to follow what statistically occurs a very high percentage of the time (that the GPS is correct).

Hence, the captain isn’t ‘complacent’ because technology is easy to rely on, but instead is making a rational decision given his internal statistical representation of the scenario.
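
To make the ‘rational agent’ reading concrete, here is a toy expected-value comparison; every number is hypothetical, and the point is only that the choice hinges on how the perceived tail probability and crash cost trade off against the cost of verifying.

```python
# Toy expected-value comparison with made-up numbers: trusting the GPS is the
# lower-expected-loss choice only while the captain's perceived probability of
# a GPS error, times the cost of a crash, stays below the cost of verifying.
p_gps_wrong = 1e-4        # captain's internal estimate that the GPS is wrong
cost_crash = 5_000_000    # loss if the ship runs aground
cost_verify = 2_000       # delay and fuel cost of double-checking the landmark

expected_loss_trust = p_gps_wrong * cost_crash    # 500.0
expected_loss_verify = cost_verify                # 2000

print("trust the GPS" if expected_loss_trust < expected_loss_verify else "verify")
```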

Transcension as a Solution to the Fermi Paradox? [Smart’s 2011 Transcension Hypothesis]

The Transcension Hypothesis, as presented here by John Smart, proposes a model of the universe in which an all-encompassing process of evolutionary development guides any sufficiently advanced civilization into what may be called “inner-space”: a computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of Space, Time, Energy, and Matter (STEM).

This process, estimated by Smart to complete roughly 600 years after the technological singularity, extends the idea that virtual minds existing at the nano and femto scales will keep compressing their STEM until they approximate a black hole-like, computationally optimal environment.

Based upon a set of varied assumptions, Smart estimates that the costs (both evolutionary and developmental) of reaching out to other, non-transcendent species after entering such an environment will outweigh the benefits, hence our lack of contact with other highly advanced civilizations. His assumptions are rooted in evolutionary biology, and imply a natural goal of fostering a maximally diverse set of transcendent species.

Smart’s assumptions are broad; however, he proposes a method of testing his hypothesis based on trends in the prevalence of civilizations’ electromagnetic emissions. See additional thoughts by Smart here via blog post.