Why technology needs speed limits: confessions of an ex-libertarian

I used to think of myself as a techno-libertarian. When you’re 17 or 18, and a few years closer to the flat part of the exponential curve, it’s hard to imagine what could go wrong, or to care about collateral damage; and to this day, it’s hard to have faith in government handling any of it constructively.

I’m still excited about the superintelligent, super-networked future, and I still believe that barring systemic crisis (this is a non-trivial caveat), it’s inevitable in one form or another. I’m not bitter about that.

But as I’ve grown out of ca. 1991-2008 “end of history” liberal triumphalism, I’ve come to realize that there’s *no guaranteeing* the valence that information super-technologies will have, and *no predicting* what values, if any, a strong AI will tend to express.

Even back in my Ray Kurzweil-boosting days, I could tell that the pure optimists were giving short shrift to questions of ethics and ideology. Nick Bostrom does a good job of mapping out the range of possibilities on this front in *Superintelligence*, all the while admitting that there’s only so much we can know.

I left *Superintelligence* a few years back convinced that the best we could hope for, as far as endowing the technologies of the future with benign values, was to have a frank, culture-wide conversation.

How is information technology already changing us today, and how will it change us ever more steeply in the decades to come? Note: the key question, over which we might be able to exert some control, is not *if* it will change us but *how*.

Fast-forward a few years. The conversation has clearly broken out of Silicon Valley, but it remains an elite concern.

And even if it weren’t for the Trump circus, it would’ve continued (for natural, unsurprising reasons) to lurk in the background, churning along, while idiot lawyers and preachers stoked primal identity-based resentments and peddled caricatures of 18th- and 19th-century economics.

Meanwhile, the largest handful of corporations in the world—mostly brand new money, unencumbered by the push-and-pull of regulatory politics—are leapfrogging ahead (as is natural), making the decisions for all of us, without our consent.

They are deploying the best engineering minds and the best algorithms in existence to build the most addictive, insidious products possible—taking advantage of the same neurochemical pathways as drugs, junk food, and sexual compulsion.

At worst, they are precipitating a mental health crisis; at best, they are radically reshaping society without coordinating with any of its other stakeholders.

Like many people, I’ve struggled to figure out a healthy information technology regimen. One of my major personal goals right now is to develop a practice of mindfulness around my use of the internet, social media, and smartphones—but it’s a bitch.

As an instinctive libertarian, I think that developing our own personal ways of coping is all we can really count on, at the end of the day. But I’m not optimistic. Today’s information technology is just too well-engineered, attention spans have been shot to shit, and designing systems and regimens for yourself is something few people are good at.

That’s why I think the obvious answer, speaking as a naive non-lawyer, is strict regulation (or, I’ll emphasize, *protection*) for the public good. We’ve come to a consensus that when it comes to anything with sufficient power to harm (cars, cigarettes, alcohol, factory machinery, food production pipelines, toxic waste), market mechanisms alone are not enough to stave off abuse and disaster.

Neurochemically and emotionally, this algorithmic crack is on another level. Sadly, I see just as little evidence of us dealing with this challenge frankly and maturely on a collective scale as I do on the individual scale. And remember, smartphones and Snapchat are just the tip of the iceberg.

Even if you disagree with them on *everything else*, I think this is a strong case for supporting economic progressives like Elizabeth Warren and Bernie Sanders. Because with anyone else—the alt-right, the corporate right, the Clinton-Zuckerberg technocrat-naifs—you can be sure that the conversation won’t happen. How we handle our exponential journey past computation’s inflection point is a decision we have to make democratically—lest we cede it to a few soon-to-be-trillion-dollar companies.

Nobody serious—not Warren, not Sanders, and certainly not me—is talking about smashing the market. That would bring unspeakable tragedy, and if you wish for it, you must feel very safe in your social standing.

But the market and culture (here’s where cultural conservatives, of the Rod Dreher variety, have a point too) are accelerating so fast, ripping apart communities and soldering together networks of capital so astonishingly, that taking a breath, having a democratic conversation, and doing what we can to slow things down until we can get a grip is the least we can do.

Because if a superintelligent computer with dubious motives were to manifest right now in front of all of us, it’d be all too obvious we don’t have our shit together.

https://www.wired.com/…/our-minds-have-been-hijacked-by-ou…/
https://www.theatlantic.com/…/has-the-smartphone-de…/534198/