AI and the big hurry to make ourselves less useful
I don’t understand much about Artificial Intelligence, which is inconvenient because it seems important, but which I suspect puts me squarely within the vast majority of the population. It also makes me an excellent candidate to be skeptical.
It’s even more mysterious and harder to define than the “World Wide Web” was 25 or so years ago, but everything with the Internet worked out great, so why should we worry about AI?
It is the future. It is robots?
It will do a lot of the mundane tasks and processes we don’t want to do, freeing us up for bigger and better things? Or will it do a lot of the complicated tasks and processes we once did, leaving us with little to do and rendering a lot of human thought (and jobs) obsolete?
It is all anyone can talk about, and nothing anyone can get a handle on.
But in a strange way, I think knowing very little about AI in this moment is a blessing. It allows me (and probably you) to be skeptical not about specific promises but about the general direction life seems to be moving.
Two things about AI stand out to me:
First, I have heard how it will make (and already is making) our lives easier, and in these conversations “easier” is often used as a synonym for “better.” This, however, is a poor proxy. I am hardly convinced that the ease of modern life is making us happier. People adapt very quickly to the new normal, rendering any minor inconvenience completely unbearable. Time that used to be spent making or doing things has been replaced by hacks, apps and shortcuts, but precious few of us are consistently in the right headspace to benefit from the time we save, which gives rise to mindless scrolling and other empty calories. We have all the convenience we need, and perhaps more than that.
Second, I fear that, like many other technological innovations, AI will ultimately serve to widen the gap between the haves and have-nots and accelerate an already runaway concentration of wealth. It would be wonderful to be proved wrong, and to believe that it will do such things as solve world hunger. But the level of investment in it, the sophistication required to understand it and the brief history of the Internet all lead me to believe that, on balance, it will make a relative handful of people insanely wealthy and leave the rest of us a little more numb and a little less well off.
You’ll notice that neither of my specific points is of the doomsday, rise-of-the-machines sort, in which AI becomes self-aware and wipes out humanity, though I should note that I am not especially comforted by this paragraph I read recently:
In a survey of 2,700 AI researchers who had published at top AI conferences, a majority said there was at least a 5% chance that superintelligent AI will destroy humanity. Yet opinions on this topic were divided.
Extinction? We need to hear both sides!
Maybe I should be more worried about that, but at this point it seems like a distraction from the more subtle conversation.
Why are we so interested, as a species, in making ourselves less useful? Have we not learned anything yet from a decade or two of prologue?
In trying to put words to this idea and the feeling I get at several turns during a typical day, I was struck by a wonderful piece sent my way this week (thanks, Mom!) that decries the modern trend of “overoptimization.”
Freddie deBoer argues that several facets of life have been choked off or made less interesting in the name of ritualized efficiency.
Perhaps the examples that struck me the most, and to which I had already devoted the most thought, were about sports: how baseball in particular has become an aesthetic mess in an era of walks, strikeouts and launch angles, even if the data saying these methods lead to wins is good and true.
It’s no fun to watch a game when everyone has the efficiency cheat code and it involves the least amount of action possible. But it’s even less fun when we replace “baseball” with “life.”
deBoer concludes his lengthy piece with this damning but true sentiment:
“So much of the internet era has been defined by unintended consequences. But we’ve opened a Pandora’s box by providing the world with immense amounts of easily-accessed information, and so of course we have many ambitious people doing everything they can to exploit it — often to the detriment of the rest of us.”
As computers and machines become more capable of learning and acting like humans, and as AI inevitably becomes more mainstream and better understood, it’s far too easy to imagine a world in which those ambitious people have an easier time exploiting it, to the even greater detriment of the rest of us.
Will there be space for our own discovery, happy accidents and counterintuitive decisions that haven’t already been made for us? If you (like me) feel like you’ve already ceded too much ground to what you are supposed to think, how do we reverse course in the coming decades against an even stronger incoming tide?
At least I have something in common with Warren Buffett, even if only one of us is among the 10 richest people in the world. Buffett recently said this of AI:
“I do think, as someone who doesn’t understand a damn thing about it, it has enormous potential for good and enormous potential for harm — and I just don’t know how that plays out.”
I don’t know, either. But I have a guess that how it is playing out and how we are told it is playing out will be quite different.
Hard agree that automating large quantities of work and inconvenience out of our lives is not going to make us happier. The result will likely just be problem creep: many of us will make bigger and bigger deals out of smaller and smaller things, and it will likely ring hollow. Life isn’t all that fulfilling if there aren’t any problems to solve or challenges to overcome. It’s hard to be resilient in the face of a big challenge when you don’t have any experience overcoming small challenges on a regular basis.
Defense planners are increasingly worried that an adversary willing to turn over complete control of a weapon to AI will gain a significant advantage by shortening the target-identification-to-fire cycle. Skynet seems more and more like a when-not-if proposition.