Nicholas Carr is a former Executive Editor of the Harvard Business Review, as well as a Pulitzer Prize finalist, and has recently published a book entitled “The Glass Cage”. The book explores the hidden costs of granting software dominion over our work and leisure, and it is highly critical of the internet’s effect on individual cognition. Carr also penned the prominent 2008 article “Is Google Making Us Stupid?”
—
After listening to Carr speak, I think he brings up an interesting point regarding “automation complacency”: the term he uses for the tendency of individuals to disengage from (i.e., become ‘complacent about’) their surroundings when using automated technology, because they trust that the technology will handle any issue that arises.
His example: a ship captain remains complacent when his ship appears to have missed a key landmark, because the ship’s GPS still says the vessel is on the right track. The ship crashes as a result.
I do wonder, however, whether he is applying a somewhat Neo-Luddite framework to what is really the more fundamental problem of human disregard for long-tail risk. Generally, individuals rely upon their internal statistical representation of a situation to choose a best response to stimuli. In this scenario, the captain may actually be acting as a rational agent: he chooses to ignore what appears to be tail risk (that the GPS is wrong) in order to follow what statistically holds a very high percentage of the time (that the GPS is correct).
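To make that intuition concrete, here is a toy expected-cost comparison from the captain’s point of view. The probabilities and costs are entirely made up for illustration; only the structure of the calculation matters.

```python
# Toy expected-cost comparison under the captain's internal statistical model.
# All numbers below are illustrative assumptions, not real figures.

p_gps_wrong = 0.001          # captain's internal estimate that the GPS is in error
cost_of_crash = 10_000_000   # perceived cost if the GPS is wrong and he stays the course
cost_of_verifying = 50_000   # perceived cost of stopping to double-check (delay, manual navigation)

# Expected cost of each action, as the captain sees it
expected_cost_trust_gps = p_gps_wrong * cost_of_crash  # 0.001 * 10,000,000 = 10,000
expected_cost_verify = cost_of_verifying               # 50,000

best_action = (
    "trust the GPS"
    if expected_cost_trust_gps < expected_cost_verify
    else "stop and verify"
)
print(best_action)  # -> "trust the GPS"
```

Under those assumed numbers, trusting the GPS is the lower-expected-cost action, even though the realized outcome in Carr’s example is a crash.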
Hence, the captain isn’t ‘complacent’ simply because the technology is easy to rely on; he is making a rational decision given his internal statistical representation of the scenario.