[Revelry illustration: a purple computer monitor with a pink human brain floating in a tank of water behind the screen, circuitry lines running along the sides of the tank.]

Stuart Page
Former Assistant Psychologist, current Revelry Software Engineer

“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

— Emerson M. Pugh [1]

It seems to me that a dangerous belief has been swirling around the technology sector since forever – that a human brain is simply a wet, messy computer [2]. As someone who writes code that runs on silicon chips – but which, at some level, generally has to be understood by someone with a brain – I find that this belief leads to fantastical, unfulfilled hype at best, and harmful outcomes at worst. If a brain is just a wet, messy computer and we work with dry, clean computers, then getting them to talk to each other should be pretty easy once you figure out the correct connections. Right? Despite the persistence of this belief, however, there is no scientific consensus among psychologists that a computational framework is the correct way of describing how brains work, human or otherwise. It’s important to keep this in mind when designing any brain/computer interface – including when writing software destined for consumption by messy brains rather than clean chips.

Interactions between brains and computers can be very exciting. The applications and promise of virtual reality are astounding – for example, using VR in exposure therapy to address patients’ phobias [3]. Crucially, though, this involves a computer tricking a brain, rather than communicating with it as a technological peer. While the patient knows on some rational level that they are wearing a headset, there are parts of the brain that simply can’t grasp that the images aren’t real (which is why it works). But this also illustrates my point – we can’t just treat brains as rational thinking engines on par with a MacBook or Colossus. Computers can’t get phobias, but if they could, you probably couldn’t treat them by slapping a headset on the webcam.

So the next time you come across news that the Matrix is just over the horizon, or claims that we’ll soon be able to download programs to our brains, here are five things I want you to keep in mind.

1. It’s just another metaphor in a long line of metaphors

“We have […] had telephone theories, electrical field theories, and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behavior, than by indulging in far-fetched physical analogies.”

— Karl Lashley [4], 1951

Pneumatics? Vibrations? Electrical fields? Using metaphors is a great way to break down complex phenomena that would otherwise be incomprehensible, especially intricate systems. And what’s more complex than our brains – and what metaphor could be more appropriate than whatever new-fangled technological zeitgeist is gripping our collective cultural consciousness? I wonder if anyone has compared brains to blockchains? Spoiler alert: they have [5][6][7].

Fundamentally, metaphors are not explanations but starting points for understanding. The machines we create are complex systems – and so are brains. But brains haven’t been designed, and that (amongst a host of other differences) should underscore the point that while similar, they can never be the same. As scientist Matthew Cobb argues in his book The Idea of the Brain [8]: “Brains, unlike any machine, have not been designed. They are organs that have evolved for over five hundred million years, so there is little or no reason to expect they truly function like the machines we create.”

2. We don’t know how a human brain works

We know how computers work, and how software runs (most of the time). We can build exact models of computers – even inside other computers, as virtual machines. In comparison, creating an exact model of a brain is an enormous ask. In purely physical terms, the sheer complexity is overwhelming. There are roughly 86 billion neurons [9] as a starting point, communicating with each other in varied patterns and configurations. The complete mapping of these neurons and their associated neural pathways is called the connectome – to lean on an electrical metaphor, a wiring diagram. Once you’ve assessed and painstakingly mapped all those connections, you need to take into account the glial cells [10], which are vital in supporting the neurons but also important in their own right. There are multiple types of them, and about 86 billion of them as well. And that’s not to mention all the various hormones and messy chemical signals that would wreak havoc with any clean model of a brain.

Of course, once the physical components are in place, there’s still the question of how consciousness arises from the ‘machinery’. Does our model assume a materialist or a non-materialist standpoint? Or rather: could you understand the brain and consciousness if you had a complete model of a physical brain – enough to reconstruct one in its entirety? Or is there a ghost in the machine? Some people talk about a soul, but I’m talking about emergent properties [11] – phenomena that arise from a system of components in ways that are unpredictable and indescribable, driven by more interactions and influences than we can catalogue. Maybe a bit like, I don’t know, deep learning neural networks? [12]

Perhaps one area where computers are indeed like brains is in their resilience to standard neuroscientific techniques – although this perhaps reveals defects in those methods rather than similarities between brains and computers. In one fascinating example, scientists used these methods to try to understand the structure of a computer (a microprocessor from the 1970s) [13]. The conclusion: while “the approaches reveal interesting structure in the data”, they do not “meaningfully describe the hierarchy of information processing in the microprocessor.”

I think the most relevant conclusion to be drawn here is that a deeper understanding of the brain will only come through approaches different from the ones described. It could be argued that a shared resistance to these methods shows a similarity between the two; this is true, but only insofar as a unit test written for one simple function will fail when applied to two functions that are each complex in their own way – the failure tells you about the test, not about the functions.
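To make that analogy concrete, here is a toy sketch in Python (every function name here is hypothetical, invented purely for illustration):

```python
# A toy version of the analogy: a 'test' built for one simple function
# tells you almost nothing when pointed at two functions that are
# complex in entirely different ways.

def simple_double(x):
    return x * 2

def sort_words(text):  # complex in one way: operates on strings
    return " ".join(sorted(text.split()))

def moving_average(xs):  # complex in another way: operates on sequences
    return [sum(xs[i:i + 3]) / 3 for i in range(len(xs) - 2)]

def naive_test(fn):
    """The 'standard technique': assumes every function doubles a number."""
    try:
        return fn(2) == 4
    except Exception:
        return False

print(naive_test(simple_double))   # True
print(naive_test(sort_words))      # False - wrong kind of input entirely
print(naive_test(moving_average))  # False - wrong kind of input entirely
# Both complex functions 'fail' identically, but that shared failure
# describes the limits of the test, not a similarity between the functions.
```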

3. We know how BCIs work, but BCIs don’t know how brains work

Here is a formal definition of a Brain-Computer Interface (BCI):

A BCI is a computer-based system that acquires brain signals, analyzes them, and translates them into commands that are relayed to an output device to carry out a desired action.[14]

In other words, a BCI allows a user to act in the world simply by using their brain (cutting out the middle man of actually using our muscles). In spite of our limited understanding of the brain, researchers are able to build BCIs that allow us to use pure thought to control a robotic arm [15], play Final Fantasy XIV [16], and more. This is incredible, and I don’t wish to minimise the scientific achievement it represents. However, here is my hot take: BCIs work not because we understand the brain, but despite the fact that we don’t.

Each device functions by reading the binary on/off electrical impulses that neuronal activity creates. However, that’s about as deep as the understanding goes: these devices know nothing about the ‘why’ behind those impulses. As Dr. Andrew Schwartz, one of the leading researchers on brain-computer interfaces, explains: “There is very little nuance here. Just because we get a very nice signal, it doesn’t mean that’s what the motor cortex is doing [17].” BCIs are essentially just ‘hacking’ the neural response – looking for peaks and troughs of activity to control an output device. This is why using a BCI currently requires a period of training, as the user figures out which thoughts to think to get the desired outcome.
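As a very loose illustration of that ‘peaks and troughs’ idea – not any real BCI’s pipeline, just a minimal sketch assuming a pre-recorded array of voltage samples, with a made-up threshold and window size – consider:

```python
import numpy as np

def detect_command(signal: np.ndarray, threshold: float = 2.0,
                   window: int = 50) -> list[str]:
    """Emit 'MOVE' whenever mean activity in a window crosses the threshold."""
    commands = []
    for start in range(0, len(signal) - window, window):
        chunk = signal[start:start + window]
        # At this level the device sees only amplitude, never meaning:
        # a peak is a peak, whatever the brain 'intended' by it.
        commands.append("MOVE" if np.abs(chunk).mean() > threshold else "IDLE")
    return commands

# Fake data: mostly noise, with one burst of 'activity' in the middle -
# the kind of burst a trained user learns to produce on demand.
rng = np.random.default_rng(42)
signal = rng.normal(0.0, 1.0, 1000)
signal[400:450] += 5.0
print(detect_command(signal))
```

The point of the sketch: nothing in it knows or cares what the burst means. It only knows that amplitude crossed a threshold – which is why it is the user, not the device, who does the adapting during training.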

Reading the neural response alone is not enough to gain a deep understanding of how the brain works. The electroencephalogram (EEG) has been around for almost 150 years [18], and despite the age of these techniques, there are still large gaps in our understanding. Once again, I don’t wish to undermine the scientific significance of the EEG – neuroscientists have gained enormous insight over the years by reading and analysing these signals. However, this information alone is limited in what it can show us.

4. More complex than copy and paste

Recently in the news, Samsung have described a way to ‘copy’ and ‘paste’ the brain’s neuronal connection map [19]: the map is copied by a nanoelectrode array recording from a network of neurons (developed using rat neurons), then pasted onto a high-density three-dimensional network of solid-state memories. The goal is a memory chip that approximates the unique computing traits of the brain – low power, facile learning, adaptation to the environment, even autonomy and cognition – traits that have so far been beyond the reach of current technology.

This is still exploratory, since the ultimate neuromorphic chip would require 100 trillion or so memories [20]. To even begin to imagine how that might work in practice, we would need an understanding of neuronal function far beyond anything we can currently envisage, unimaginably vast computational power, and a simulation that precisely mimicked the structure of the brain in question.
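Just to get a feel for the scale, here is some back-of-envelope Python using the figures cited above (the bytes-per-connection value is purely an assumption for illustration):

```python
# Rough scale of the problem, using ~86 billion neurons [9]
# and ~100 trillion connections [20]. The storage cost per
# connection is an assumed, illustrative value.
NEURONS = 86e9
CONNECTIONS = 100e12
BYTES_PER_CONNECTION = 8  # assume two 4-byte IDs: source and target neuron

print(f"Connections per neuron: {CONNECTIONS / NEURONS:,.0f}")   # ~1,163
print(f"Wiring diagram alone: {CONNECTIONS * BYTES_PER_CONNECTION / 1e12:,.0f} TB")
# ~800 TB just to record who connects to whom - before modelling
# synaptic strength, timing, glia, or chemistry at all.
```

Even at these hand-wavy numbers, the static wiring alone is a data-centre-scale object – and the dynamics, not the wiring, are where the real unknowns live.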

Imagining that we can repurpose our nervous system to run different programmes, or upload our mind to a server, might sound scientific, but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind. Speaking of Descartes…

5. Not just a brain in a jar

“I shall think that the sky, the air, the earth, colours, shapes, sounds and all external things are merely the delusions of dreams which [a deceiver] has devised to ensnare my judgement…”

— René Descartes [21]

A current trend in Silicon Valley is the idea of “uploading a mind” so it can live forever [25]. The thing is, we’re not just brains in jars. Your brain is intimately connected to your body, and you can’t pull the two apart. Thoughts don’t come from the brain alone; they come from the whole organism, and from that organism’s interaction with its environment. As Alan Jasanoff, Ph.D., explains: “Although the brain is required for almost everything we do, it never works alone. Instead, its function is inextricably linked to the body and to the environment around it.” [22]

At a pretty basic level, brain function is tied to the oxygenation of the blood flowing through it. But it goes much deeper than that. Your brain developed in tandem with your body, and this shows up in the link between language (how an adult brain understands the world) and movement. The activity that takes place in your brain when you describe kicking is essentially a weaker version of the activity that occurs when you actually kick [23]. And there’s plenty of emerging research on the deep connection between gut bacteria and mood – two things which at first blush seem completely separate, not just in the physical distance between the gut and the brain, but in the primary functions of those body parts [24].

So how does this relate to software?

Ultimately, the job of a software engineer is to design and create software used by and for humans. And while it’s tempting to assume that humans are easily controlled, understood, and driven by algorithms, the reality is that understanding people isn’t easy – at all. At the end of the day, someone with a messy wet brain will be using whatever it is we’re making. They probably won’t care how elegant an algorithm is, or what tech stack I used, or which bugs I created and fixed along the way – they just want it to ‘work’. And unless I try to figure out what the product ‘working’ means for them, I’ve lost before I’ve even begun. With computers, there are some hard and fast assumptions I can make; with people, it’s all out the window.

The good news is that engineers don’t work alone. There are people who have the answers I need – generally the product owner or the project manager – who allow me to live in a sane world where I’m not paralysed trying to work out what the user wants. Instead of making assumptions, we need to remember to listen. While I don’t get to interact with users every day the way I used to interact with my patients, I can still learn from the expertise of neuroscientists, psychologists, and sociologists, and from the complex, lived experiences of the people who use our products every single day.

References

[1] https://en.wikipedia.org/wiki/Emerson_Pugh

[2] https://www.nytimes.com/2015/06/28/opinion/sunday/face-it-your-brain-is-a-computer.html

[3] https://pubmed.ncbi.nlm.nih.gov/10350911

[4] https://en.wikipedia.org/wiki/Karl_Lashley

[5] https://arxiv.org/pdf/1811.02881.pdf

[6] https://www.semanticscholar.org/paper/Blockchain-Thinking-%3A-The-Brain-as-a-DAC-(-)-Swan/ac862c394d5233d7fea85cdf45848b354b0a12b4

[7] https://technologyandsociety.org/blockchain-thinking-the-brain-as-a-decentralized-autonomous-corporation/

[8] https://www.nature.com/articles/d41586-020-00913-9

[9] https://www.nature.com/scitable/blog/brain-metrics/are_there_really_as_many/

[10] https://www.simplypsychology.org/glial-cells.html

[11] https://en.wikipedia.org/wiki/Emergence

[12] https://guava.physics.uiuc.edu/~nigel/courses/569/Essays_Spring2018/Files/khan.pdf

[13] https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

[14] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3497935/

[15] https://www.washingtonpost.com/video-games/2020/12/16/brain-computer-gaming/

[16] https://www.youtube.com/watch?v=WjNHkRH0Dus

[17] https://www.newyorker.com/magazine/2018/11/26/how-to-control-a-machine-with-your-brain

[18] https://brainclinics.com/pioneers-of-the-eeg/

[19] https://www.nature.com/articles/s41928-021-00646-1

[20] https://news.samsung.com/global/samsung-electronics-puts-forward-a-vision-to-copy-and-paste-the-brain-on-neuromorphic-chips

[21] https://genius.com/Rene-descartes-first-meditation-annotated

[22] https://www.sciencenews.org/article/the-biological-mind-Alan-Jasanoff

[25] https://ed.ted.com/lessons/how-close-are-we-to-uploading-our-minds-michael-s-a-graziano#digdeeper

[23] https://pubmed.ncbi.nlm.nih.gov/14741110/

[24] https://www.apa.org/monitor/2012/09/gut-feeling
