On 'Some Moral and Technical Consequences of Automation'

In 1960, Norbert Wiener - widely considered the originator of cybernetics - published a short essay entitled "Some Moral and Technical Consequences of Automation". Here's the article that led me to it; it is mostly about social media and an abstracted reapplication of these ideas, but it ties Wiener's essay in a bit.

I find myself facing a public which has formed its attitude toward the machine on the basis of an imperfect understanding of the structure and mode of operation of modern machines.

While contemporary society is certainly far more technologically integrated than any society of the 1960s, I'm unconvinced that people now have a good understanding of these structures. Rather, I think the fields of user experience and design have come on in leaps and bounds. The world that Wiener inhabited was one where interacting with a computer required some intimate knowledge of its inner workings. But one does not need to understand the slightest thing about a processor, kernel, or operating system in order to use a modern smartphone. I raise this point to suggest that this increased interaction has not actually given the general public a well-grounded knowledge of how the sausage is made - rather, we've just obfuscated it all behind something nice and shiny, and these issues remain just as relevant.

Wiener's first point is that many people - the average "man on the street" - strongly believe that a machine cannot generate creative, original output from its input: "nothing can come out of the machine which has not been put into it". This is a trope about artificial intelligence that has gained much ground in popular culture, though often with an implied question as to its truth. Wiener emphatically states that we should reject this viewpoint, because it is precisely this creativity that presents the substantial danger of autonomous machines.

It is my thesis that machines can and do transcend the limitations of their designers, and that in doing so they may be both effective and dangerous … By the time we are able to react to information conveyed by our senses and stop the car we are driving, it may already have run head on into the wall.

And indeed, recent neural networks such as DALL-E and GAN-based image generators can produce genuinely novel art. It is said that plagiarists borrow and true artists steal outright; that if we see far, it is by standing on the shoulders of giants; yet in this particular case, even without conscious artistic direction, building these "stochastic giants" produces some rather interesting results when combined with the right latent spaces.
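To make the "latent space" idea concrete, here is a toy sketch - purely illustrative, not DALL-E or any real production model - of how a generative network maps random latent vectors to outputs. The tiny generator below is untrained and hypothetical; real systems learn this mapping from enormous datasets.

```python
# A toy, untrained generator: latent vector in, flattened "image" out.
# Illustrative only - not any real model's architecture.
import torch
import torch.nn as nn

latent_dim = 64          # size of the latent space we sample from
image_pixels = 28 * 28   # a tiny 28x28 greyscale "image", for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, image_pixels),
    nn.Tanh(),  # pixel values in [-1, 1]
)

# Every draw from the latent space yields a different output, none of which
# was explicitly "put into" the machine - only the mapping was.
for i in range(3):
    z = torch.randn(1, latent_dim)        # the stochastic input
    image = generator(z).view(28, 28)     # the learned (here, random) mapping
    print(f"sample {i}: mean pixel value = {image.mean().item():.3f}")
```

The only stochastic ingredient is the draw of z; everything we are tempted to call "creative" lives in a mapping that no human wrote by hand.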

But Wiener's main purpose in raising this point is to drive home that creativity and unpredictability are intrinsically related, and that this relationship is a central issue in AI safety. This is an important point, because it directs attention towards an anthropomorphic bias in our heuristics - we generally think of "creative machines" as normatively good, but "unpredictable machines" as normatively bad. I think Wiener's point here is that "creativity" is effectively unpredictability that humans find subjectively beautiful. That word "subjectively" should set off alarm bells in anyone's head when it comes to constructing an objective, empirical heuristic for this property. Even within the human species, what counts as beautiful varies enormously from person to person. It is entirely possible that an AGI will have some generalized conception of beauty and therefore some genuinely artistic creativity, but it is not at all guaranteed that said conception will align with modern human values.

Another of Wiener's points concerns the ability of a less intelligent entity to control a more intelligent one - in applied terms, human intelligence versus a superintelligence. He raises the issue of AI explainability in terms of our ability to design, predict, and influence an intelligence. As an AI grows in complexity, the capacity of any one individual to understand it remains effectively static, even as our collective ability to analyse it grows - so the gap between the system and its overseers widens.

A compelling quote sums up Wiener's essay, and is perhaps my favourite snippet from this piece:

Disastrous results are to be expected… wherever two agencies essentially foreign to each other are coupled in the attempt to achieve a common purpose.

Have we experienced some paradigm in the past that may prove analogous? That is, one in which two agencies that were previously unaligned and unequal in intelligence later came to be aligned. I think first of dogs. When humans first encountered the animals that would one day become dogs, we were in the infancy of our collective conscious experience - and yet conscious and sentient. At first, our incentives probably did not align particularly well; it is very doubtful that the first wolves ever to encounter a human perceived them as friends. We were undoubtedly foreign agencies to each other. And yet today we find our incentives generally aligned. We certainly wouldn't describe our relationship with dogs as disastrous, but rather as immensely valuable - but would they, if they could comprehend what they lost? Would we feel particularly good about being the dogs of some digital overlord? I'm personally not thrilled, however much I adore doges of all kinds.

Overall, an interesting read!