“Do we get better or worse at adapting to change?” asks Roots of Progress founder Jason Crawford in a new essay. That question has fascinated me throughout my career as a technology policy analyst and, over the years, I’ve written many things on the broader topic of how humans assimilate new technologies into their lives, economies, and cultures. After endless debates with scholars in the field of Science & Technology Studies (STS) as well as other technology critics, I came to believe that what separated our worldviews could be summarized in two concepts: fragility versus resilience.
Tech critics lack a theory of resiliency that can explain how humans have consistently muddled through and prospered, even in the midst of faster and faster technological change. The critics instead rely on theories of fragility (of individuals, institutions, systems, etc.) and only view technological change through the prism of victimization, alienation, sacrifice, etc.
The conflict of visions over the role that technology plays in society has always divided people, organizations, and even entire cultures and nations. The late Calestous Juma explored this tension brilliantly in his 2016 book, Innovation and Its Enemies: Why People Resist New Technologies, which offered richly detailed historical case studies. Virginia Postrel’s The Future and Its Enemies also examined how those tensions are playing out in the modern world.
Crawford argues that, despite all that opposition to progress, “we have actually been getting better at adapting, even relative to the pace of change.” He identifies at least two reasons why: our detection of technological risks and our response to them have both improved, and he argues that “this creates enormous resilience” when it comes to coping with and adjusting to change.
His conclusion pinpoints what lies at the heart of how humans have repeatedly overcome adversity in the face of far-reaching technological change. In a 2014 essay (“Muddling Through: How We Learn to Cope with Technological Change”) and a 2016 book chapter (“Failing Better: What We Learn by Confronting Risk and Uncertainty”), I sketched out a theory of human adaptability and resiliency as it pertains to technological progress. In my “Failing Better” book chapter, I argued:
When it comes to human health, wealth, and happiness — and to social progress and prosperity more generally — there is no static equilibrium, no final destination. There is only a dynamic and never-ending learning process. Learning from experience provides individuals and organizations with valuable informational inputs regarding which methods work better than others. Even more importantly, learning by doing facilitates social and economic resiliency that helps individuals and organizations develop better coping strategies for when things go wrong.
Of course, learning-by-doing and the resiliency that comes from trial-and-error have been discussed by many other progress scholars, and I summarize some of the best work on that theme in my papers if you care to read more.
What remains so perplexing to me, however, is why so many tech critics — especially STS academics — dismiss or ignore all that history and the many powerful examples of resiliency in action. I’ve gotten into some raging debates with various STS scholars through the years over this point and found that much of their hostility toward technological change can be traced back to a deeper aversion to capitalism and markets more generally.
There’s just no denying, however, the gains that technological innovation has brought to lives and living standards. But when challenged with that evidence, critics often seek to change the baseline of what we mean by prosperity and human flourishing, or suggest that things really haven’t gone all that well for humanity. More cynically, they argue that economic progress or material gains are either unimportant or an illusion. I find these arguments outlandish, as did Pulitzer Prize-winning historian Richard Rhodes, who noted in his 1999 book, Visions of Technology: A Century of Vital Debate About Machines, Systems, and the Human World, “it’s surprising that [many intellectuals] don’t value technology; by any fair assessment, it has reduced suffering and improved welfare across the past hundred years. Why doesn’t this net balance of benevolence inspire at least grudging enthusiasm for technology among intellectuals?”
Sadly, it has instead only seemed to inspire contempt from the critics. In fact, as I argued in an essay for the Progress Forum last year, when it comes to STS scholarship in particular, “the field might today better be labeled Anti-Science & Technology Studies.” The radicalism of their claims and the extremism of their advocacy continue to reach new heights, as I documented in a 2019 essay, “The Radicalization of Modern Tech Criticism.” Insane concepts like “degrowth” and “methodological Luddism” now get serious attention in academic discussions, and variations of the Precautionary Principle are treated as the only legitimate baseline for policy discussions. In the current academic anti-progress echo chamber, it really is only a question of exactly how far one wants to go in locking down the future or turning back the clock. Should we hit some imaginary “pause button” on future AI development, as many want, or should we just move straight to carpet-bombing datacenters and using nuclear weapons to counter powerful computational systems? Such proposals are now receiving serious attention, and the costs and trade-offs of such approaches are dismissed as irrelevant.
It is the underlying psychology of the anti-progress movement that has fueled this increased radicalization and these reactionary calls for sweeping controls on technological innovation. Relentless narratives of victimization, alienation, and fragility seem to drive everything in this community, as well as in the broader field of technological criticism and journalistic writing today. I’ve been surveying many of the leading books on AI policy being assigned in STS programs today. They include titles like Weapons of Math Destruction, Automating Inequality, Technically Wrong, The New Jim Code, and Algorithms of Oppression. The underlying theme of these books, and of the countless other hostile journal papers and media articles being written about algorithmic systems today, is that everything in the AI space is irreparably broken and can never be fixed. Humans cannot cope; they can only be crushed by it all. It’s straight doom porn all the way down, and no alternative theory is ever considered about how we might assimilate those new algorithmic tools into our lives and make them work to our advantage — just as we have countless times before.
Thus, when pro-progress, resiliency-minded people like Crawford and me admit that “adaptation is always a challenge,” but that progress is “helping us meet that challenge, as it helps us meet all challenges,” we are greeted with scoffs, laughter, and derision. I’ve been in classrooms and at conferences where I have been openly mocked for saying such things.
But the fact of the matter remains: theories of fragility cannot explain how we have prospered as a species by going through this process again and again and again. As I concluded in my last book:
We humans are a remarkably resilient species and we regularly find creative ways to deal with major changes through constant trial-and-error experimentation and the learning that results from it. In that process, we find a new baseline or equilibrium and incorporate new ideas, institutions, and values into our lives. We will continue to do so, but it will not always be according to the sort of script that many critics desire. […] Real wisdom is born of real-world human experiences, endless trial-and-error, and the resiliency that goes along with muddling through adversity.
This is how we get better, not worse, at adapting to change over time, just as Crawford suggests. It is how we become safer, more prosperous, and, yes — more human. The tech critics who argue the opposite should try having a little more faith in humanity.