Applied AI: The Machine Learning Conference at Harvard Business School

We’re excited to announce the upcoming Machine Learning Conference at Harvard Business School.

Get your tickets here:  http://bit.ly/hbsmlconf

At the conference, we’ll explore machine learning’s impact on health care, finance, and consumer technology. Speakers come from Amazon, Google, 23andMe, Path.AI, Spotify, Bridgewater Associates, MIT, and many more.

The full conference schedule is below. Hope to see you there!

Conference Schedule:

Keynote:  David Ferrucci // Elemental Cognition – David is the award-winning AI researcher who built and led the IBM Watson team from its inception in 2006 to its landmark success in 2011 when Watson defeated the greatest Jeopardy! players of all time.

Health Care Speakers:

  • Robert Gentleman // 23andMe
  • Jen Kerner // Path.AI
  • Alina Gatowski // Boston Children’s Hospital
  • Panel Discussion: How can we collect novel biomarkers to impact disease prevention?

Finance Speakers:

  • Mikey Shulman // Kensho
  • Rudina Seseri // Glasswing Ventures
  • Paulo Marques // Feedzai
  • Panel Discussion: How are investment decisions changing with improved analytics?

Big Tech Speakers:

  • Spyros Matsoukas // Amazon
  • Sara Robinson // Google
  • James McInerney // Spotify
  • Panel Discussion: What responsibilities do tech companies have to the broader development ecosystem?

AI Ethics Keynote:  Max Tegmark // The Future of Life Institute

Closing Remarks

Life 3.0: Being Human in the Age of AI

Last December, I hosted a Talk at Google with Max Tegmark, a renowned science communicator and physicist at MIT whose research into the existential risk from advanced artificial intelligence has been funded in part by Elon Musk.

The video recording is here. In it, Max discusses his thoughts on the fundamental nature of reality and what it means to be human in the age of AI.

A key takeaway is the ‘dual-use’ nature of AI systems: the same algorithms we use to develop new image classification techniques or improve our video games can also be used for nefarious purposes (such as mass surveillance). It’s a challenging issue that the research community should discuss and manage actively.

*This talk was presented for Google’s Singularity Network.*

Full Bio:
Max Tegmark is a renowned science communicator and cosmologist whose research into the existential risk from advanced artificial intelligence has been funded in part by Elon Musk. He was elected a Fellow of the American Physical Society in 2012, shared in Science magazine’s “Breakthrough of the Year” for 2003, and has over 200 publications, nine of which have been cited more than 500 times. He is also scientific director of the Foundational Questions Institute, wrote the bestseller Our Mathematical Universe, and is a Professor of Physics at MIT.

The Sorcerer’s Apprentice

The AI Alignment (safety) movement rests on two ideas:
  1. The orthogonality of intelligence and motivation in artificially intelligent systems
    • i.e., “intelligent” systems do not necessarily have human-like motivations
  2. Instrumental convergence within autonomously pursued goal systems
    • i.e., there exists an intermediate set of goals that any artificially intelligent system would almost certainly pursue, no matter its explicitly set “final” goal; these typically include objectives like goal-content integrity and cognitive enhancement
These principles were first proposed by Nick Bostrom here.


Sound bites (and recent articles in the New York Times and TechCrunch) do a poor job of communicating these principles. The concern is not that a conscious AI system will rise up and “take revenge” on humans. Instead, it’s that an advanced and sufficiently capable AI system will pursue a human-specified goal and, in doing so, inadvertently damage systems, entities, and institutions that humans value.


The Sorcerer’s Apprentice (illustrated in the Disney movie Fantasia) provides a useful analogy:
The poem begins as an old sorcerer departs his workshop, leaving his apprentice with chores to perform. Tired of fetching water by pail, the apprentice enchants a broom to do the work for him – using magic in which he is not yet fully trained. The broom performs the chores as enchanted and fills the sorcerer’s cauldron with water. However, the apprentice soon realizes that the broom has obeyed only too well. The broom continues to bring buckets and the floor becomes awash with water. The apprentice realizes that he cannot stop the broom because he does not know how.
 
In a moment of desperation, the apprentice splits the broom in two with an axe – but each of the pieces becomes a whole new broom that takes up a pail and continues fetching water, now at twice the speed. The broom continues to bring water and fills the room until all seems lost. In the final moments, the old sorcerer returns and quickly breaks the spell. The poem finishes with the old sorcerer’s statement that such powerful spirits should only be called by those who have mastered them.
In this story, the apprentice under-specifies the goal he wishes his enchanted broom to pursue, not realizing that the broom:
  1. Does not care about flooding the room with water; it cares only about increasing the chances that the cauldron is filled
  2. Does not wish to be shut off, as that would reduce its ability to ensure that the cauldron is filled
Nick Bostrom’s paperclip maximizer provides another illustration of these principles (a toy code sketch of the same failure mode follows below).
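The sketch below is purely illustrative, with invented names such as `WorldState` and `fetch_water`: the agent is scored only on how full the cauldron is, so flooding the floor costs it nothing and shutting itself off never looks attractive.

```python
# Toy illustration of a mis-specified objective (all names are hypothetical).
# The agent is rewarded ONLY for cauldron fullness; nothing in the objective
# penalizes flooding the room or rewards allowing itself to be switched off.

from dataclasses import dataclass, replace

@dataclass
class WorldState:
    cauldron_fill: float = 0.0   # 0.0 (empty) .. 1.0 (full)
    floor_water: float = 0.0     # rises without bound as buckets keep coming
    agent_active: bool = True

def reward(state: WorldState) -> float:
    """The apprentice's objective: cauldron fullness, and nothing else."""
    return state.cauldron_fill

def step(state: WorldState, action: str) -> WorldState:
    """A crude world model: fetching water also spills onto the floor."""
    if action == "fetch_water" and state.agent_active:
        state.cauldron_fill = min(1.0, state.cauldron_fill + 0.2)
        state.floor_water += 0.2          # side effect the objective ignores
    elif action == "shut_down":
        state.agent_active = False        # the cauldron stops filling
    return state

def choose_action(state: WorldState) -> str:
    """Greedy agent: pick whichever action yields the higher next reward.
    Shutting down never increases cauldron_fill, so it is never chosen --
    a toy version of the 'self-preservation' instrumental incentive."""
    return max(["fetch_water", "shut_down"],
               key=lambda a: reward(step(replace(state), a)))

state = WorldState()
for _ in range(10):
    state = step(state, choose_action(state))

print(f"cauldron={state.cauldron_fill:.1f}, floor water={state.floor_water:.1f}")
# The cauldron is full, the floor is flooded, and the broom never shut itself off.
```

Nothing here is “malicious”; the damage falls out of an objective that simply omits what the apprentice actually cares about.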


*”Consciousness” in AI systems is a philosophical question and is not a prerequisite for large-scale negative impact from AI systems.


Preparing our Economy for the Impact of Artificial Intelligence (Robert Reich)

On January 30th, I hosted a discussion with Robert Reich, the former US Secretary of Labor under Bill Clinton, about his work on preparing our economy for the impact of artificial intelligence.

The video recording is here. This talk was presented for Google’s Singularity Network.

Bio: Robert Reich is Chancellor’s Professor of Public Policy at the University of California, Berkeley, and Senior Fellow at the Blum Center for Developing Economies. He served as Secretary of Labor in the Clinton administration, for which Time magazine named him one of the ten most effective cabinet secretaries of the twentieth century. He has written fourteen books, including the best sellers Aftershock, The Work of Nations, and Beyond Outrage, and, most recently, Saving Capitalism. He is also a founding editor of the American Prospect magazine, chairman of Common Cause, a member of the American Academy of Arts and Sciences, co-founder of the nonprofit Inequality Media, and co-creator of the award-winning documentary Inequality for All.

Work, Love and Life when Robots Rule the Earth (Robin Hanson)

On August 1st, I hosted a discussion with Robin Hanson at Google about the social and economic effects of whole brain emulations.

The video recording is here. This talk was presented for Google’s Singularity Network.

Bio:  Robin Hanson is a research associate at the Future of Humanity Institute of Oxford University and an associate professor of economics at George Mason University. He is known as an expert on idea futures and markets, and he was involved in the creation of DARPA’s FutureMAP project and the Foresight Institute’s Foresight Exchange. He invented market scoring rules like LMSR (Logarithmic Market Scoring Rule) used by prediction markets such as Consensus Point (where Hanson is Chief Scientist), and has conducted research on signalling. Hanson’s blog Overcoming Bias receives over 50,000 visitors per month, with more than 8 million visitors since 2006.

Wait But Why? The Road to Superintelligence with Tim Urban

On July 22nd, I hosted a discussion with Tim Urban at Google about the future of artificial intelligence. Tim spoke about his blog Wait But Why and his writing on The Road to Superintelligence.

The video recording is here. This talk was presented for Google’s Singularity Network.

Bio: Tim Urban has become one of the Internet’s most popular writers. With wry stick-figure illustrations and occasionally epic prose on everything from procrastination to artificial intelligence, Urban’s blog, Wait But Why, has garnered millions of unique page views, thousands of patrons and famous fans like Elon Musk. Urban has previously written long-form posts on The Road to Superintelligence, and his recent TED talk on procrastination has more than 6 million views. 

Can Artificial Intelligence Be Conscious?

Video Recording:  Here

On November 23rd, I had the opportunity to host a discussion with John Searle at Google’s Headquarters in Mountain View (see the video recording here). The discussion focused on the philosophy of mind and the potential for consciousness in artificial intelligence.

As a brief introduction, John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. He is widely noted for his contributions to the philosophy of language, philosophy of mind and social philosophy. Searle has received the Jean Nicod Prize, the National Humanities Medal, and the Mind & Brain Prize for his work. Among his notable concepts is the “Chinese room” argument against “strong” artificial intelligence.

Of special note, there is a question from Ray Kurzweil to John at 38:51.

This Talk was presented for Google’s Singularity Network.

Avoiding Unintended Instrumental AI Behavior (Hibbard)

In his analysis of hypothetical superintelligent agents, Bill Hibbard, principal author of the Vis5D, Cave5D, and VisAD open-source visualization systems, proposes a mathematical framework for reasoning about AI agents, discusses sources and risks of unexpected AI behavior, and presents an approach for designing superintelligent systems that may avoid unintended existential risk.

Following his initial description of the agent-environment framework, Hibbard notes that a superintelligent agent may fail to satisfy the intentions of its designer when pursuing an instrumental behavior implicit in its final utility function. Such instrumental behavior, while unintended, could arise as the AI works to preserve its own existence, to eliminate threats to itself and its utility function, or to increase its own efficiency and computing resources (see Nick Bostrom’s paperclip maximizer).

Hibbard notes that several approaches to human-safe AI suggest designing intelligent machines to share human values, so that actions we dislike, such as taking resources from humans, violate the AI’s motivations. However, humans are often unable to write down their own values accurately, and errors in doing so may motivate harmful instrumental AI action. Statistical algorithms may be able to learn human values by analyzing large amounts of human interaction data, but learning human values accurately will require a powerful learning ability. A chicken-and-egg problem for safe AI follows: learning human values requires powerful AI, but safe AI requires knowledge of human values.

Hibbard proposes a solution to this problem: a “first stage” superintelligent agent that is explicitly not allowed to act within the learning environment (and thus refrains from unintended actions). The learning environment includes a set of safe, human-level surrogate AI agents, independent of the superintelligent agent, whose actions collectively mirror those of the superintelligent AI. The superintelligent agent can therefore observe humans, as well as their interactions with the surrogates and with physical objects, and develop a safe environmental model from which it learns human values.
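As a rough sketch of the separation Hibbard describes (this is not his formal framework; the class names and the toy value-learning rule below are invented for illustration), the structural point is that the powerful first-stage agent has no action channel at all: it only consumes interaction records produced by humans and the safe surrogate agents, and fits a model of human values from them.

```python
# Hypothetical sketch of the two-part setup: acting is done only by safe,
# human-level surrogate agents; the powerful "first stage" agent can observe
# and model human values, but has no way to act in the environment.

import random
from typing import Tuple

Observation = Tuple[str, float]  # (description of an interaction, human approval score)

class SurrogateAgent:
    """Safe, human-level agent that actually acts in the learning environment."""
    def act(self) -> Observation:
        action = random.choice(["fetch water", "mop floor", "ask for help"])
        approval = {"fetch water": 0.6, "mop floor": 0.9, "ask for help": 0.8}[action]
        return (action, approval)   # humans rate what the surrogate did

class FirstStageObserver:
    """Powerful learner with NO act() method: it can only watch and model values."""
    def __init__(self) -> None:
        self.value_model: dict = {}

    def observe(self, obs: Observation) -> None:
        action, approval = obs
        # Running average of observed human approval per action -- a stand-in
        # for whatever statistical value learning the first-stage agent performs.
        seen, mean = self.value_model.get(action, (0, 0.0))
        self.value_model[action] = (seen + 1, mean + (approval - mean) / (seen + 1))

surrogates = [SurrogateAgent() for _ in range(3)]
observer = FirstStageObserver()

for _ in range(100):
    for s in surrogates:
        observer.observe(s.act())

print(observer.value_model)   # approval estimates learned without ever acting
```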

Hibbard’s mature superintelligent agent may still pose an existential threat (he specifically notes the dangers of military and economic competition); however, its utility function should assign nearly minimal value to human extinction. See his full discussion here!

A Fireside Chat with Ray Kurzweil

On June 4th, I had the opportunity to host a ‘Fireside Chat’ with Ray Kurzweil at Google’s Headquarters in Mountain View (see the video recording here). The Chat focused on Ray’s predictions for the Singularity, his view on the current state of AI, and the potential economic and societal impact of accelerating technologies.

As a brief background, Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a thirty-year track record of accurate predictions. Called “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes magazine, Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the “rightful heir to Thomas Edison.” PBS selected him as one of the “sixteen revolutionaries who made America.”

Kurzweil was the principal inventor of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.

Among Kurzweil’s many honors, he recently received the 2015 Technical Grammy Award for outstanding achievements in the field of music technology; he is the recipient of the National Medal of Technology, was inducted into the National Inventors Hall of Fame, holds twenty honorary Doctorates, and honors from three U.S. presidents.

Ray has written five national best-selling books, including New York Times best sellers The Singularity Is Near (2005) and How To Create A Mind (2012). He is a Director of Engineering at Google heading up a team developing machine intelligence and natural language understanding.

This Talk was presented for Google’s Singularity Network.

Modeling Nash Equilibria in Artificial Intelligence Development

In his discussion of a theoretical artificial intelligence “arms race”, Nick Bostrom, Director of the Future of Humanity Institute at Oxford, presents a model of future AI research in which development teams compete to create the first general AI. Under the assumption that the first AI will be very powerful and transformative (a notably debatable one, given the soft vs. hard takeoff debate), each team is highly incentivized to finish first. Bostrom argues that the level of safety precaution each development team will undertake reflects broader policy parameters, specifically those governing the allowed level of market concentration (i.e., permitted consolidation of research teams) and information accessibility (i.e., degrees of intellectual property protection and algorithm secrecy).

In his work, Bostrom does not reach one specific conclusion about AI safety levels, but instead defines a set of Nash equilibria for various numbers of development teams and levels of information accessibility. Specifically, he notes that having additional development teams (and therefore reduced market concentration) may increase the likelihood of an AI disaster, especially if risk-taking is more important than skill in developing the AI. Increased information accessibility also increases risk: the more teams know of each other’s capabilities and methodologies, the greater the velocity of, and enmity in, development, and a greater equilibrium danger level follows accordingly.
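As a purely illustrative toy model (not Bostrom’s actual formulation; the payoff structure, noise model, and parameters below are invented), suppose each team’s chance of finishing first rises as it cuts its safety investment, while the probability of avoiding disaster equals the winner’s safety level. Iterating best responses then tends to push the symmetric equilibrium safety level down as the number of competing teams grows:

```python
# Toy best-response simulation of an AI development race (illustrative only;
# the payoffs, noise model, and parameters are invented, not Bostrom's model).

import random

def expected_payoff(own_safety: float, other_safety: float, n_teams: int,
                    risk_weight: float = 1.0, trials: int = 4000) -> float:
    """A team earns 1 if it finishes first AND no disaster occurs, else 0.
    Spending less on safety raises effective capability (you race faster),
    but the winner's safety level sets the probability of avoiding disaster."""
    total = 0.0
    for _ in range(trials):
        own_cap = random.random() + risk_weight * (1.0 - own_safety)
        rival_caps = [random.random() + risk_weight * (1.0 - other_safety)
                      for _ in range(n_teams - 1)]
        if own_cap > max(rival_caps):
            total += own_safety            # P(no disaster | we win) = our safety level
    return total / trials

def symmetric_equilibrium(n_teams: int, grid=None) -> float:
    """Iterate best responses on a coarse safety grid until they stop changing."""
    grid = grid or [i / 10 for i in range(11)]
    others = 1.0
    for _ in range(20):
        best = max(grid, key=lambda s: expected_payoff(s, others, n_teams))
        if best == others:
            break
        others = best
    return others

for n in (2, 3, 5):
    print(f"{n} teams -> equilibrium safety ≈ {symmetric_equilibrium(n):.1f}")
# More competitors typically drives the equilibrium safety level down.
```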

Bostrom’s derivation is intended to spur discussions on AI governance design. See his original paper here!