Can Artificial Intelligence Be Conscious?

Video Recording: Here

On November 23rd, I had the opportunity to host a discussion with John Searle at Google’s Headquarters in Mountain View (see the video recording here). The discussion focused on the philosophy of mind and the potential for consciousness in artificial intelligence.

As a brief introduction, John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. He is widely noted for his contributions to the philosophy of language, philosophy of mind and social philosophy. Searle has received the Jean Nicod Prize, the National Humanities Medal, and the Mind & Brain Prize for his work. Among his notable concepts is the “Chinese room” argument against “strong” artificial intelligence.

Of special note, there is a question from Ray Kurzweil to Searle at 38:51.

This talk was presented for Google’s Singularity Network.

Modeling Nash Equilibria in Artificial Intelligence Development

In his discussion of a theoretical artificial intelligence “arms race”, Nick Bostrom, Director of the Future of Humanity Institute at Oxford, presents a model of future AI research in which development teams compete to create the first General AI. Under the assumption that the first AI will be very powerful and transformative (a notably arguable one, as per the soft vs. hard takeoff debate), each team is highly incentivised to finish first. Bostrom argues that the level of safety precaution each development team undertakes reflects broader policy parameters, specifically those governing the permitted level of market concentration (i.e. how far research teams may consolidate) and the degree of information accessibility (i.e. the extent of intellectual property protection and algorithm secrecy).
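To make the setup concrete, here is a minimal sketch of one such race in Python. The functional forms are illustrative assumptions rather than Bostrom’s exact specification: each team draws a private skill level, skimping on safety adds to its effective capability, the most capable team builds the AI first, and a disaster then occurs with probability equal to the winning team’s safety shortfall.

```python
import random

def run_race(safety_levels, risk_weight=1.0, rng=random):
    """Simulate one development race.

    safety_levels : one safety choice in [0, 1] per team
    risk_weight   : how much risk-taking matters relative to skill
    Returns (winner_index, disaster) for one random draw of skills.
    """
    skills = [rng.random() for _ in safety_levels]            # private skill draws
    capabilities = [skill + risk_weight * (1.0 - s)           # skimping on safety speeds you up
                    for skill, s in zip(skills, safety_levels)]
    winner = max(range(len(capabilities)), key=capabilities.__getitem__)
    disaster = rng.random() < (1.0 - safety_levels[winner])   # an unsafe winner courts disaster
    return winner, disaster

if __name__ == "__main__":
    # Three teams: cautious, moderate, reckless.
    choices = [0.9, 0.5, 0.1]
    trials = 100_000
    wins, disasters = [0, 0, 0], 0
    for _ in range(trials):
        w, d = run_race(choices)
        wins[w] += 1
        disasters += d
    print("win shares:", [round(w / trials, 3) for w in wins])
    print("disaster frequency:", round(disasters / trials, 3))
```

Even in this toy version the core tension is visible: the reckless team wins most of the races, and the overall disaster risk is driven almost entirely by the winner’s safety choice.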

In his work, Bostrom does not reach one specific conclusion regarding AI safety levels, but instead defines a set of Nash equilibria across varying numbers of development teams and levels of information accessibility. Specifically, he notes that having additional development teams (and therefore reduced market concentration) may increase the likelihood of an AI disaster, especially if risk-taking is more important than skill in developing the AI. Increased information accessibility also increases risk: the more teams know of one another’s capabilities and methodologies, the greater the velocity of, and enmity in, development; a greater equilibrium danger level follows accordingly.
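To illustrate the competition effect numerically, the sketch below reuses the same toy race mechanics and searches, for different numbers of teams, for an approximate symmetric equilibrium safety level on a coarse grid by iterating best responses. The payoff structure and parameter values are assumptions for illustration only (a team scores 1 if it wins safely, 1 − ENMITY if a rival wins safely, and 0 if there is a disaster); the point is the qualitative trend that equilibrium safety falls as the number of competing teams grows when risk-taking outweighs skill.

```python
import random

RISK_WEIGHT = 2.0                     # risk-taking matters more than skill
ENMITY = 0.5                          # how much a team dislikes a rival winning
GRID = [i / 10 for i in range(11)]    # candidate safety levels 0.0 .. 1.0
RACES = 10_000                        # Monte Carlo races per payoff estimate

def payoff(own_s, rival_s, skill_draws, luck_draws):
    """Average payoff to team 0 when every rival plays rival_s."""
    total = 0.0
    for skills, luck in zip(skill_draws, luck_draws):
        safeties = [own_s] + [rival_s] * (len(skills) - 1)
        caps = [x + RISK_WEIGHT * (1.0 - s) for x, s in zip(skills, safeties)]
        winner = max(range(len(caps)), key=caps.__getitem__)
        if luck < 1.0 - safeties[winner]:      # disaster: everyone gets 0
            continue
        total += 1.0 if winner == 0 else 1.0 - ENMITY
    return total / len(skill_draws)

def symmetric_equilibrium(n_teams, rng, max_iters=15):
    """Iterate grid best responses from full safety to a fixed point (or the cap)."""
    s = 1.0
    for _ in range(max_iters):
        # Pre-draw the same races for every candidate safety level
        # (common random numbers), so comparisons are not swamped by noise.
        skills = [[rng.random() for _ in range(n_teams)] for _ in range(RACES)]
        luck = [rng.random() for _ in range(RACES)]
        best = max(GRID, key=lambda cand: payoff(cand, s, skills, luck))
        if best == s:
            return s
        s = best
    return s

if __name__ == "__main__":
    rng = random.Random(0)
    for n in (2, 4, 8):
        print(f"{n} teams -> approximate equilibrium safety level: "
              f"{symmetric_equilibrium(n, rng):.1f}")
```

With these (assumed) parameters the equilibrium safety level drops as teams are added, mirroring the market-concentration point above. The sketch does not model information accessibility; capturing that effect would mean letting teams condition their safety choices on what they observe about rivals’ capabilities.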

Bostrom’s derivation is intended to spur discussions on AI governance design. See his original paper here!