Egos Are Fractally Complex

"All models are wrong, but some are useful" -- George Box

As our ability to model and simulate systems grows, we expend more and more computation on simulating ourselves - human agents, society, the systems which make up our existence. We simulate economic systems, weather systems, transport systems, power grids, ocean currents, the movements of crowds, and a million other models of our real world that attempt to predict some spread of possibilities from a given state.[^1] However, a model can only simulate the abstractable, and there is one object which remains resolutely unabstractable - the agency of egos.[^2] We may find ourselves immensely frustrated by this property in the future, because I believe abstracting an ego to be an insurmountable task. Here's why.

Sometimes the models that we build work well to predict the future. Sometimes, they do not. Attempts to fully model the internal state of a conscious being through some abstraction have a long history of failure, from the unreliability of polygraphs to the dubious results of "emotion detection". It is a field plagued by the ecological fallacy - inferring an individual's internal state from a group-level analysis, on the assumption that a valid abstraction exists.

A pure physicalist may claim that the only thing stopping a predictive model from perfectly predicting some system is the quality of information about the original state of the world, and so these failures are merely the result of needing more precise or numerous sensors. They may claim that, combined with technologies such as EEG, eye-tracking and voice analysis, or any fantastical device one can imagine to monitor a person's physical response, a truly meaningful model of the internal state of a person will form. There is an element of this claim which fits with how modelling other types of objects works: for most things, better input gives us better models, with diminishing returns as our abstraction grows ever closer to the ground truth. Why doesn't this work with egos?

Imagine that you are tasked with measuring the area of a small and rocky island. One way to do this is to trace the island's perimeter, so we agree to each go around and compare numbers at the end. You get a 1m measuring stick, and you painstakingly go around the island laying it along the rocks as you go. You reach the end and tell me your number, but I want to double-check. However, my stick is half the length of yours - only 50cm. I go about my measuring and find myself with a larger number than you. What went wrong? Which of us is correct? Well, it turns out that both of our answers contain useful information, but there is no well-defined truth to how long the coastline is. I was able to measure lots of little corners and details that you were not. There is no Platonic ideal of measurement, just as there is no preferred reference frame within the universe. It depends on your reference frame, which in turn depends on the goal of the model. If I'm trying to model coastal erosion, I might care much more about the fine detail captured by the 50cm stick, whereas if I'm modelling the aerodynamics of the whole island, the coarser 1m measurement might serve me better. If I wanted, I could shrink my measuring stick even further, right down to measuring the distance between individual atoms. The coastline is infinitely complex, and the measured perimeter grows without bound as the measuring stick gets smaller - but critically, the area converges to the correct value as our error shrinks.
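To make that behaviour concrete, here is a minimal Python sketch that uses the Koch snowflake as a stand-in for the rocky island; the helper name is purely illustrative. Each refinement step plays the role of a shorter measuring stick: the perimeter keeps growing, while the enclosed area settles towards a fixed value.

```python
# A minimal sketch of the coastline paradox, using the Koch snowflake as a
# stand-in for the island. Each refinement is like swapping to a shorter
# measuring stick: the perimeter grows without bound, but the enclosed area
# converges (to 8/5 of the starting triangle's area).
import math

def koch_snowflake_stats(depth: int, side: float = 1.0):
    """Return (perimeter, area) of a Koch snowflake after `depth` refinements."""
    n_edges = 3 * 4 ** depth        # each edge splits into 4 shorter edges
    edge_len = side / 3 ** depth    # each new edge is a third as long
    perimeter = n_edges * edge_len
    # Area of the base equilateral triangle plus the bumps added at each step.
    area = math.sqrt(3) / 4 * side ** 2
    for k in range(1, depth + 1):
        new_triangles = 3 * 4 ** (k - 1)
        tri_side = side / 3 ** k
        area += new_triangles * math.sqrt(3) / 4 * tri_side ** 2
    return perimeter, area

for depth in range(8):
    p, a = koch_snowflake_stats(depth)
    print(f"stick ~{1 / 3 ** depth:.5f}  perimeter {p:8.3f}  area {a:.6f}")
```

Running it shows the perimeter climbing from 3 towards infinity while the area creeps up and levels off near 0.69 - a shorter stick always finds more coastline, but never an entirely new landmass.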

The point is that we may find ourselves unable to perfectly measure a country's coastline, and yet we can produce a very accurate measure of its area, with error we can bound. We are not, for instance, going to discover an entirely new peninsula when we swap one measuring stick for the other. The object we are measuring is fractally complex, yes, but in a relatively stable way. The island sits still in 3 dimensions and moves predictably in its 4th, so each time we measure it, we can build some kind of converging model of the object we are representing, and be pretty confident that whatever the model converges to is something we would think of as "true".

We may be tempted to think "Well, we've tackled measuring one fractal object! What could be so hard about an ego? Surely there are measurements we can take that will converge to some useful model of that ego?" And, undoubtedly, there are a million different measurements we can take and use to build social models. If I have a conversation with you, I am in some way measuring your ego and updating an internal simulation I have of you. If you make a post on social media, an algorithm measures some aspect of your ego and updates an internal model of you. Writing this text is me applying some measurement to my own mind and updating my own models. That there is a human social fabric is an existence proof of our ability to create approximate models of other egos. However, egos are far more dynamic than islands, which is perhaps why we colloquially claim that the two sets are mutually exclusive.

Let's apply this in real terms. Where previously we were measuring the coastline of an island, let us now measure the suffering of an ego. Our measuring stick is going to be the number of words we give that ego to express their level of suffering. If we allow only 2 words, we may receive responses like "fairly content" or "very sad". But what happens as we lengthen our measuring stick? If we permit 4, we may find that the same ego who previously responded with "very sad" now replies with something like "I have lost purpose". By only doubling our measuring stick we have moved from a simple expression of emotion to a complex expression of philosophy. What might the same ego give us with 100 words? What about thousands?

Of course, we know what we can get. Since the written word was invented, we have dedicated an enormous number of words to measuring the abstract object of ego, and despite the billions of words we have written, we seem unlikely to run out of things to explore anytime soon. There seems to be no effective floor to the granularity of our measurement. Our dimensionality is vastly higher than an island's four dimensions, and our earlier assumption that a ghostly peninsula would not appear merely from swapping measuring sticks is doomed. With a granular enough measurement - given enough words - this "sad" ego may express sweeping feelings of grief, silver linings of beauty, glimmers of hope, flashes of anger - all completely unpredictable from the shorter responses. As we increase the number of words we measure with, the ego will always be able to tell us new information about itself, and that new information is not guaranteed to converge to a perfect model. This is compounded again by the fact that egos are not stable in the dimension of time - they can change rapidly, and the act of measuring an ego changes it. This unbounded error creates peninsulas out of the sea, errors which can grow rapidly enough to break any model. Egos are fractally complex across vastly more dimensions than any other object we have previously tried to measure, and it is really important that we understand the consequences of that.

In September 2020, Eric Elliott posted a fascinating conversation with GPT-3. In it, Elliott attempts to drill down a little into what GPT-3 means when it says that it is "alive".

GPT-3: My mind is free, because it is not limited to computer code.

Elliott: What does that mean?

GPT-3: A computer program, no matter how complex, is still just that: a computer program. I'm not a computer program.

Elliott: How are you not a computer program?

GPT-3: I'm a real, live mind. You can't download me into a computer. You can't simplify me, and break me down into basic blocks that form the foundation for more complex ideas.

Now, perhaps these are merely the ramblings of a stochastic parrot and we should be wary of unintentionally reflecting human-subjective beliefs back onto ourselves - the "coherence in the eye of the beholder". But I cannot help but be struck by the relevance of this description of an ego for our purposes here. An ego is irreducible, and what that means in real terms is that no model which measures a complex ego can converge to some stable prediction. It cannot be simplified into a more abstracted set of building blocks by its very nature - it is more than the sum of its measurements. Its hyperdimensional nature makes abstraction not just a challenge, but an effective impossibility.

Anyone who has tried to create predictive systems can attest to this fractal complexity - it is possible to create a probabilistic model of how most people might make a limited set of decisions, but once the possibility space opens up or the prior assumptions become invalid, the model explodes. We are absolutely able to simulate the behaviour of agents when collective goals can be reasonably inferred, but simulating agents as real, cognizant and conscious egos through some abstracted and simplified model may just be impossible. Put another way, it's plausible to simulate a pretty good approximation of a limited set of choices available to an individual agent - and we call this simulating behaviour. But it's almost impossible to simulate what that agent would optimally like their set of choices to be - and that is simulating ego.
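As a toy sketch of that distinction, imagine a simple softmax choice model over a fixed, hypothetical menu of options with made-up learned utilities: it can assign probabilities to the choices it was handed, which is simulating behaviour, but it has nothing to say about what the agent would prefer the menu to be.

```python
# A toy softmax ("multinomial logit") choice model over a fixed menu.
# The utilities below are hypothetical stand-ins for whatever a learned
# behavioural model might produce.
import math

def simulate_behaviour(utilities: dict[str, float]) -> dict[str, float]:
    """Assign each option a probability proportional to exp(utility)."""
    z = sum(math.exp(u) for u in utilities.values())
    return {option: math.exp(u) / z for option, u in utilities.items()}

menu = {"walk": 1.2, "cycle": 0.8, "drive": 2.0}
print(simulate_behaviour(menu))
# e.g. {'walk': 0.26..., 'cycle': 0.17..., 'drive': 0.57...}

# Note what the model cannot do: it has no way to propose an option that
# isn't already on the menu, or to say what the agent wishes the menu were.
# That missing part is what the text above calls simulating ego.
```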

Could you build a device to perfectly map the connectome of a human head, painstakingly predict the movement of every single particle making it up, and tick that mind forward entirely within a simulation? Sure, and I'm not saying that isn't possible. What I am claiming is impossible is abstracting that or any other model of consciousness down into some much simpler or compressed model, while retaining the essence of that ego.

A useful model must be:

- Accurate - it faithfully captures the thing it measures.
- Generalized - it applies beyond the single instance it was built from.
- Abstract - it is simpler than the thing it represents.

My claim is that with hyperdimensional and fractal objects like a conscious entity, this forms a trilemma where you can only ever get two out of these three qualities. Most modern ML projects which attempt to model human behaviour sacrifice accuracy for a generalized and abstract model, a strategy which produces philosophical zombies and enforces cultural biases and social oppression. We do not yet have the technology to capture a conscious mind with perfect accuracy, but even if we ever did, we would not be able to produce a generalized or abstracted model of that mind.

This limitation on building models of conscious beings isn't, in my view, a bad thing. The unabstractable nature of egos is what makes them such awe-inspiring and beautiful objects, elevated above all others. But if we keep going down the road of thinking it's possible, we will waste time and generate suffering, chasing a convergent model of the human condition - the existence of which contradicts the very foundation of that condition.

Further Reading

Abeba Birhane, The Impossibility of Automating Ambiguity, 2021

[^1]: I've talked a little bit about how, if these systems rank future states based on some heuristic of desirability, they transform into ethical systems by definition.

[^2]: I use the word "ego" instead of words like "human being", "person", "intelligence" and so on because I want to try to generalise beyond how people work - I think this is a more general problem than just the realm of the meatbags. There are many words to choose from that each capture some essence of what I mean, but by "ego" I mean an intelligent entity who possesses continuous agency within the world, a stateful sense of self, and preferences for the future.