Speculative presence – 11

by Jehu

Our exploration has taken us to the edge of what is likely the minimum requirement of a fully communist society: a working day of three hours. Keynes predicted this three-hour working day based on a two percent growth rate and then-existing technological trends, assuming it would likely emerge by 2030. The Soviet Union, basing its projections on a much higher ten percent rate of growth, projected that a three-hour working day would be achievable fifty years earlier, in 1980.

As we know, neither projection has come to pass thus far.

Nevertheless, I am engaged in creating an alternative world: a speculative, fictional future communist society that, at least so far, has never actually existed. I do this in order to describe how such a society might operate, a question constantly posed by people who are skeptical that such a society could ever exist.

***

So, let’s jump ahead for this post and assume we have now arrived at Khrushchev’s professed goal of a three-hour working day in 1980.

Have we solved all the problems facing mankind? Are we now in my communist utopia? Has history ended? Perhaps not.

Why not?

Well, remember what I said back in post seven:

Essentially, communism itself would be knowledge objectified, an extension of the human mind.

I used those terms for a reason. Marx used both of them to describe machines.

Another way to say the above is that, in contrast to capitalism, which is essentially a mode of production for squeezing surplus labor out of wage workers, communism can be conceptualized, economically, as a massive machine, an intelligent machine. Communism is the creation of an artificial (machine) intelligence.

Initially, we would create that machine, maintain and supervise it. But, as time goes on, the machine would maintain and supervise itself, design its own improvements and mostly function without significant human intervention. It is even possible that this machine might one day (perhaps sooner than we expect) eclipse human beings in intelligence.

Cool, right?

Well, maybe not. In 1993, in an essay titled The Coming Technological Singularity, Vernor Vinge gave some thought to this idea and concluded that it could lead to our extinction as a species.

It turns out that what I call “the material foundation of communism”, Vinge calls a “technological singularity”. The term carries an echo of Keynes’s own neologism, “technological unemployment”, which Vinge actually refers to in his 1993 essay. In that essay, Vinge defines what he means by the term and explains why he thinks it may be a threat to mankind’s future.

According to Vinge, accelerating technological progress has been the central feature of this century. It has not only eclipsed the employment of human labor in production; it has produced a change comparable to the rise of human life on Earth — the imminent creation by technological means of a consciousness with greater-than-human intelligence. We can expect that, in one form or another, a superhuman intelligence will emerge. Vinge thinks this is a certainty by 2030 — the date by which Keynes predicted the emergence of a three-hour working day.

Once this superhuman intelligence finally emerges, technological progress will become even more breathtakingly rapid. That progress will involve the creation of still more intelligent entities, on still-shorter time scales. While the evolution of intelligent life through natural selection took billions of years on Earth, human beings stand to accomplish the equivalent in a matter of centuries. Now we stand on the precipice of a new stage that is as radically different from our own as we are from the lower animals.

Vinge states his conclusion:

This change will be a throwing-away of all the human rules, perhaps in the blink of an eye — an exponential runaway beyond any hope of control. Developments that were thought might only happen in “a million years” (if ever) will likely happen in the next century. It’s fair to call this event a singularity (“the Singularity” for the purposes of this piece). It is a point where our old models must be discarded and a new reality rules, a point that will loom vaster and vaster over human affairs until the notion becomes a commonplace. Yet when it finally happens, it may still be a great surprise and a greater unknown.

In Vinge’s opinion, if a technological singularity cannot be prevented or confined, the physical extinction of the human race is possible. But, he warns, physical extinction may not be the scariest possibility: mankind could be reduced to mere livestock, employed for specific useful functions in a larger AI environment:

Think of the different ways we relate to animals. A Posthuman world would still have plenty of niches where human-equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. … Some of these human equivalents might be used for nothing more than digital signal processing. Others might be very humanlike, yet with a onesidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now.

This is pretty much the concept behind the movie The Matrix: mankind has been reduced to a power source for an AI and is digitally fed a simulation to keep it sane. What Vinge has done here is conceptualize the post-apocalypse in such a way as to make it appear to be the inevitable result of technological innovation.

Or has he?

Read this passage carefully:

I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of humans’ natural competitiveness and the possibilities inherent in technology.

Vinge would have us believe that whatever threat of physical extinction hangs over the head of humanity today results from technological innovation. This technological innovation will in the very near future produce an intellectual runaway, an exponential explosion of machine intelligence beyond any hope of human control.

But if we examine his argument closely, it is obvious that there is no control over technology at present. Technological innovation is driven solely by competition.

According to Vinge:

  • “I think that any rules strict enough to be effective would also produce a device whose ability was clearly inferior to the unfettered versions (so human competition would favor the development of the more dangerous models).”
  • “We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light.”
  • “The competitive advantage – economic, military, even artistic – of every advance in automation is so compelling that forbidding such things merely assures that someone else will get them first.”
  • “[Intelligence Amplification] for individual humans creates a rather sinister elite.”

Vinge suggests, perhaps without realizing it, that his chief symptom of a technological singularity — technological runaway — is not a future concern, but a constant reality under the existing mode of production. And it has been a threat since capitalist competition-driven technological innovation triggered the first depression — perhaps as early as 1819 in the United States.

First, technology displaced human labor in production, creating the Great Depression; now it threatens to make human beings superfluous even to the design and supervision of the machines they have created to replace human labor.

If Vinge’s argument about competition and technological innovation sounds vaguely familiar to you, it should. The same discussion has been raging among communists for decades now, under the rather awkward question: “Where is the revolutionary subject?”