Social Media Is Breaking Your Brain

They say that power corrupts. Usually, the addendum is that absolute power corrupts absolutely. But what about little morsels of power, meted out by an algorithmically driven automated system to maximize dopamine production in brains increasingly trained to act socially within Skinner-box facades of connection?

I am increasingly of the opinion that social media is breaking the minds of everyone it touches. I have not been an exception to that. If I think back to the early days of social media, I remember a generally positive - if mildly banal - user experience. It seemed primarily a way to share memories of meatspace social experiences: a convenient common forum for posting photos of that party with everyone who attended, or for showing your relatives the highlights of your latest holiday. If you had told me back then the sort of monster we had just begun to build, I'm not sure I would have believed you.

How did we get here?

My recent experiences with social media have not been nearly as positive as those early days. At first, I was an avid user of Reddit, Twitter, Facebook, and all the regular haunts. I tried to use them (as many do) as a testing ground for my views, where I could submit an opinion and have it dissected by a hivemind within a relatively safe context, exposing its weaknesses. While there were times when that did feel like what was happening, especially in the much smaller communities, it quickly became the minority of my experience. For the most part, I would become combative, irritable, anxious and generally a bit of a prick in these discussions. I was frustrated at what I saw as my inability to communicate my ideas, with discussions being lost down rabbit holes of semantics. Now, I'm not claiming that this isn't partially my fault. I'm terrible at conveying my ideas, as these posts exemplify. But I think it's significant that - from what I have seen of discussions on Twitter, Reddit, and other discussion-centered sites - I am not alone in being terrible at having constructive conversations online. Reflecting on this has led me to some disturbing conclusions about social media that now have me trying to wean myself off as much as possible. It's been a surprisingly difficult process.

The invention and rapid maturation of machine-learning-driven algorithms for managing information streams marked a significant turning point in our engagement with social media. No longer were we merely presented with a random or chronological stream of content. Instead, automated systems calculated the probability of engagement for each user and adjusted the information stream accordingly. We started being shown whatever would keep us engaged, and those who made that decision either did not sufficiently investigate what that would mean or, in the more likely scenario, knew exactly what would happen and simply did not care. The giants of social networking have always struggled to create profitable models, instead focusing on monopolizing as much of your time and social network as possible and figuring out the monetization later. For almost all of them, that monetization has emerged as advertising.
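To make that shift concrete, here is a minimal sketch - my own illustration, not any platform's actual system - of the difference between a chronological feed and one ranked by a predicted engagement score. The Post fields and the predicted_engagement() function are invented placeholders standing in for a learned model over real behavioral signals.

```python
# A toy contrast between the old chronological feed and an engagement-ranked one.
# Everything here is a hypothetical stand-in, not any real platform's code.
from dataclasses import dataclass
from datetime import datetime
import math


@dataclass
class Post:
    author: str
    posted_at: datetime
    # Hypothetical per-user signals a platform might log.
    past_clicks_on_author: int = 0
    topic_affinity: float = 0.0  # 0..1: how much this user dwells on the topic


def chronological_feed(posts):
    """The old model: newest first, the same stream for everyone."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)


def predicted_engagement(post: Post) -> float:
    """Toy stand-in for a learned model: squashes a couple of behavioral
    signals into a probability-like score between 0 and 1."""
    signal = 0.8 * post.topic_affinity + 0.2 * math.log1p(post.past_clicks_on_author)
    return 1.0 / (1.0 + math.exp(-4.0 * (signal - 0.5)))  # logistic squash


def engagement_ranked_feed(posts):
    """The new model: whatever the system predicts you will engage with
    floats to the top, regardless of when it was posted."""
    return sorted(posts, key=predicted_engagement, reverse=True)
```

The important design choice is the ranking objective: the feed is sorted by what the model thinks you will engage with, not by what is recent, representative, or good for you.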

There is much to be said about how advertising broke the first ground when it came to the manufacturing of memes. By this word, I don't mean amusing images shared on the internet. Rather, memes are a kind of social or cultural virus that spreads from mind to mind. Instead of through bodily fluids or fomites, memes are spread by language, art, media, propaganda, and any medium capable of storing concepts. Some memes are, in my opinion, very good for society. "Murder is bad" is a meme that has persisted for a very long time, and we might presume that is because societies that did not consider murder to be bad did not stick around for very long. We can therefore see that (just like biological viruses) memes are subject to a process of natural selection that removes variants that do not self-propagate, as the people who hold them die. Within advertising, memes became a method for promoting the ever-important brand. For many companies, a jingle that got stuck in your head was as valuable to the brand as the product itself. But a company generally could only create a single advertisement and throw it out into the world in a scattershot approach. The idea that you could design the perfect jingle for every single person on the planet, maximally optimized to get stuck in their head specifically, would have seemed utterly farcical. Does such a notion seem as farcical today, in our world of ML-driven information streams?
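As a toy illustration of that selection argument - entirely my own, with invented memes and made-up transmission rates - consider a population of people each carrying one meme, where catchier memes are more likely to be passed on in any given interaction:

```python
# A toy simulation of memetic selection: no one designs the outcome, yet the
# memes that spread most readily come to dominate a finite pool of attention.
# The meme names and transmission probabilities are invented for the example.
import random

random.seed(1)

# Each meme is a name plus the probability that an interaction passes it on.
memes = {"catchy_jingle": 0.30, "dry_policy_point": 0.05, "outrage_take": 0.45}

# Start with a population in which each meme is equally represented.
population = [m for m in memes for _ in range(100)]

for generation in range(10):
    next_gen = []
    for carrier in population:
        next_gen.append(carrier)  # the carrier keeps their meme
        if random.random() < memes[carrier]:
            next_gen.append(carrier)  # ...and sometimes passes it on
    # Attention is finite, so the population size stays fixed.
    population = random.sample(next_gen, 300)

for meme in memes:
    share = population.count(meme) / len(population)
    print(f"{meme}: {share:.0%}")
```

Nobody in this toy model chooses the winner; the memes that propagate best simply crowd out the rest, which is the whole of the selection argument above.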

Skinner Boxes & Mobile Gaming

The artificial manipulation of the mammalian brain through dopamine triggers has a long and lurid history. We've touched on it a little bit with the advertising industry, but to see the trajectory of its evolution, we must look at where it became grounded in science - behavioral psychology.

In the 1950s, a behavioral psychologist named B.F. Skinner undertook a series of influential experiments using operant conditioning chambers. Building on the work of Pavlov and Konorsky, Skinner built a simple experiment wherein an animal was placed in a box with access to a lever or button (known as an operandum) and some unconditioned feedback mechanism, usually a food dispenser. The animal could provide input via the operandum, and the scientists could alter how the feedback responded to that input. One of the primary discoveries here, and the phenomenon to which people generally refer when they bring up Skinner Boxes, is the way that the motivations of the animal can be manipulated by altering the configuration of the feedback. Skinner found that when the simplest mapping was set, where using the operandum always triggered the feedback (i.e. every time the rat pressed the lever, a food pellet was given), the rat would use it a few times, grow full and bored, and then stop. If the lever was adjusted so that, say, every fourth press gave a pellet, then the rat would take a little longer to figure things out but would again eventually get bored. However, when randomness was introduced - for instance, every time the lever was pressed there was a 20% chance of a pellet being given - the rats displayed addictive and compulsive behavior, engaging with the operandum for far longer than when there was some obvious pattern to its rewards.
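For concreteness, here is a minimal sketch of those three payout rules - continuous, fixed-ratio, and variable-ratio reinforcement. It models only the schedules themselves, not the animal's behavior; the 20-press window and the random seed are arbitrary choices of mine.

```python
# The three reward schedules described above, applied to 20 lever presses.
# 'X' marks a press that pays out a pellet, '.' marks one that does not.
import random

random.seed(0)

continuous = lambda press: True                        # every press rewarded
fixed_ratio = lambda press: press % 4 == 0             # every 4th press rewarded
variable_ratio = lambda press: random.random() < 0.20  # 20% chance on each press

for name, schedule in [("continuous", continuous),
                       ("fixed ratio (every 4th)", fixed_ratio),
                       ("variable ratio (20%)", variable_ratio)]:
    payouts = "".join("X" if schedule(press) else "." for press in range(1, 21))
    print(f"{name:25} {payouts}")
```

The thing to notice is the last row: under the variable-ratio schedule there is no pattern to learn, so there is never a point at which the next press is obviously not worth making - the property that, in Skinner's experiments, kept the animals pressing compulsively.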

The impact of Skinner's work cannot be overstated. The insight that certain patterns of feedback could produce addictive behavior would go on to shape the algorithms that we are discussing here. Modern social media now provides the feedback mechanisms that shape our use of its operanda. But his work first found its real footing in the world of human behavioral conditioning within the mobile games industry. If you go back to the early days of mobile development, before the more unscrupulous studios realized that this dark science did not exactly make them look good when they did public presentations on it, you'll see direct references to Skinner Boxes and operant conditioning. The mission of mobile games, and of all games containing in-app purchases, was to trigger dopamine responses in such a way that by the time a choice to pay money was presented to you, your brain would already be hooked by the conditioning. At first, it didn't work that well for most people, but that didn't matter, because it worked incredibly well for some people. This led to the profit model that still dominates mobile gaming, that of "whales": a tiny minority of players who spend the vast majority of the money within the application.

These patterns of operant conditioning are memes - or, rather, "meta-memes" (memes about the creation of memes). The market of app stores and billions of smartphone users applied intense selective pressure to these meta-memes, favoring those that most effectively engaged and altered human behavior. We have gotten a lot better at designing them, but again, it was not until we could channel them through machine learning to create tailor-made memes on an individual basis that their true raw fury could be unleashed upon the human psyche.

Performative Discussion

Many social media sites involve having discussions within some form of public space, mimicking the act of conversation yet combining it with a distinctly exhibitionist flair. That public space allows others to both observe and engage in your discussion with one or more other users. Often, there is some form of user voting within the discussion that indicates the opinion of those who have engaged. On Reddit, this is upvoting and downvoting; on Twitter and Facebook, it is "Likes". Some version of this system exists within almost every social media platform.

I think anyone who has engaged in lots of these kinds of discussions would agree that they are not quite like regular face-to-face communication with a single other person. They are seemingly far more likely to exhibit some negative properties such as:

- combativeness and irritability that would be rare in person
- arguments that collapse into rabbit holes of semantics
- little genuine effort to understand the opposing viewpoint

I believe that the performative nature of these discussions, combined with the social feedback mechanisms of user voting, is mostly to blame for this phenomenon. Generally, when I am talking to a single person, I am using language to figure out what is going on in their head. The words are throwaway, transient, heard only by those within earshot and then lost to space and time. I am not trying to build social capital, and the person or people I am trying to build a better model of within my head are right there in front of me.

It's taken me a long time to realize that when we engage in these performative discussions, we end up - myself included - preaching to some imaginary choir of our like-minded community. We do not seek to build an accurate model of the internal thoughts of another person, but instead stand at some virtual pulpit and lecture to our friends or followers. The social feedback mechanisms embedded in these voyeuristic discussions make this almost inevitable. Who in the age of social media has not evaluated their opinions as much by the purely numerical social response as by their fundamental merits?

The act of building an accurate model of another mind is a fundamental part of discussion. Language is merely a symbolic form of representing meaning. It is ambiguous, fuzzy, and altogether flawed as a way of talking about the universe, yet it remains our best option for building that essential bridge between minds. Approaching it instead as a performance for the benefit of those who already agree with us partially explains why these discussions so often devolve into semantic arguments - truly, where any debate goes to die - or why people are just very bad at understanding opposing viewpoints online.

Your brain is better at lying to you than you are at knowing when it lies

The operant conditioning of mobile gaming, the behavioral manipulation of advertising, and the probabilistic power of machine learning have all combined in a terrifying nexus of control. The builders of these systems have been exposed to significant and long-term selective pressure, and the systems' power to shape behavior has grown accordingly. This has, amongst other things, led to the realization that it is a lot easier to change people's minds than we first thought. While control of the media and news has always been a source of immense political power, the legacy of mass manipulation projects like Cambridge Analytica's is that political power now seems forever linked to the exploitation of the algorithms that drive that media.

Many people believe themselves immune to manipulation. They have a conception of their brain as a thing they are in control of, and of that control as somehow the mark of intelligence. If they are intelligent and well-educated, they reason, their brain is sufficiently inoculated against any undue influence. I would bet that these same people believe themselves immune to many other kinds of addiction. I cannot stress enough that this is simply not how your brain works. Intelligence, education, or any other trained ability of the mind cannot alter the firmware upon which it runs. Inoculation is not possible against highly evolved memetic viruses tailored to exploit the weaknesses of your inherently animalistic brain. Addicts of all kinds understand this well, I think. An addict knows too well how good the human brain is at lying to itself - far, far better than it is at understanding itself. This is why social interaction, and in general the measuring of oneself through the perceptions of others, is such an important part of building and maintaining our egos. We are inescapably flawed and blinded in our perception of ourselves.

When that social interaction is distorted, however - when that act of introspection by socialization is bent towards some purpose - we see the powerful impact it can have in shaping us. Most people construct their internal values to have as little dissonance with their social interactions as possible. Someone raised within a homophobic social environment is far more likely to be homophobic themselves, simply because not being homophobic would require an uncomfortable divergence from their environment. When our social contexts are highly controlled, then, the values we end up displaying also become highly controlled.

So what do we do?

I've tried to do a few things. It's a work in progress. These systems have done a real number on my brain, and I'm still only just figuring it all out. Mix and match to your heart's content.