Thinking Like a Machine

By far the most difficult part of any complex robot to produce is the computer control system, or artificial brain. Human beings differ from most other life-forms on Earth in having the most complex brains (though dolphins and whales, some other primates and octopuses also have complex brains).

A comparison between the human brain and a supercomputer

The human brain can perform an estimated 10^16 (10 000 000 000 000 000) operations per second (though estimates vary from 10^11 to 10^19 or more) compared with 3.6 × 10^14 (360 000 000 000 000) operations per second for the Blue Gene supercomputer. Considering that the human brain can easily out-compute the best modern Earthling supercomputer, how do the two compare in, say, a game of chess?
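To put the two estimates above side by side, a quick back-of-envelope calculation (both figures are rough, the brain estimate especially so):

```python
# Rough comparison of the two figures quoted above.
brain_ops = 1e16        # ~10^16 operations per second (mid-range estimate)
blue_gene_ops = 3.6e14  # ~3.6 x 10^14 operations per second

ratio = brain_ops / blue_gene_ops
print(f"The brain estimate is roughly {ratio:.0f} times the supercomputer figure")
```

On these numbers the brain estimate works out to roughly 28 times the Blue Gene figure, though given the spread of brain estimates the true ratio could be orders of magnitude either way.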

Garry Kasparov versus IBM Deep Blue (1997)

In 1997 the then reigning world chess champion Garry Kasparov played a six-game match against the IBM Deep Blue supercomputer, under standard tournament time controls (so both players had limited time to make their moves). This was actually a sequel to the 1996 match, which Kasparov won comfortably at 4-2 (Kasparov won three games, lost one and drew two). The 1997 match ended 3.5-2.5 to Deep Blue (Deep Blue won two games, lost one and drew three) - so what went wrong for the human?

Deep Blue was able to compute 200 million moves per second, whereas Kasparov could only compute three moves per second and had to rely on his 'intuition' to narrow the range of moves for detailed calculation. Why such a large difference when we have already said that the human brain can compute so much faster?

The human brain simply has lots of other tasks to do. IBM Deep Blue was purpose-built to play chess and almost its whole computing power was reserved for this task. In contrast, the human brain has to deal with the internal workings of the body - regulating temperature, water balance and all the various other internal processes. The human brain also has to monitor thousands of sensory receptors and control thousands of muscle motor units, whereas Deep Blue had a human move the pieces for it. Most of the human brain's computing power is subconscious, or even preconscious, and cannot be readily accessed by the conscious mind. Thus, only a small portion of Kasparov's brain was available to compute chess moves; the part of the subconscious involved is what we call his 'intuition', which probably analysed thousands of moves without Kasparov's conscious awareness - perhaps performing incomplete analyses and so rejecting inappropriate moves without further consideration.

The subconscious mind is that part of the brain whose workings and stored information you are not consciously aware of at any one moment in time. It is divided into two zones, the preconscious and the unconscious. The preconscious is the zone containing mental processes that you are not currently aware of but which can be brought into the conscious mind at any time (though this may be difficult if certain memories have been repressed, for example by the motivated forgetting of traumatic events). Thus the conscious mind spans only one small area of the brain's processing capabilities and information storage, but it can move about through the subconscious (or the subconscious can project into the conscious). There is considerable evidence that the subconscious mind is capable of very powerful computing processes of which you are unaware. For example, you may go to sleep pondering a difficult problem and wake up with the answer, or perhaps a difficult problem you have 'forgotten' about will suddenly present a solution to your conscious mind like a bolt out of the blue. Indeed, early philosophers would obtain solutions to difficult moral issues, or to poetic puzzles or complex imagery set by others, in such moments of 'Divine inspiration' attributed to the Holy Spirit (which was the 'Spirit of Truth' to these mystic philosophers).

The unconscious is the part of the subconscious mind that one can never be consciously aware of - it includes all the nanoscale mechanical and electrical workings of the brain - the flow of electric current and chemical messages at synapses, for example.

Deep Blue used brute force to compute each move in series - meaning that it analyzed a chain of logic one link at a time, from A to B to C etc. until a conclusion was reached, and then analyzed the next sequence of moves. In contrast, the human brain works in parallel - it can evaluate many moves at once, but in a way that does not necessarily look at the precise sequence of logic, but at the overall patterns, rejecting those patterns that are inappropriate. In other words, the human brain analyses situations in three dimensions instead of one linear dimension. This makes analysis by the human brain much more rapid, but again much of this processing will be subconscious, with the conscious mind only aware of the end result of the calculations. This process relies upon experience - knowing which types of move work well in certain situations. Deep Blue relied primarily upon calculation with very little knowledge, whereas Kasparov had a vast knowledge base to draw upon, which reduced the need for calculation.

Kasparov had the ability to evaluate an individual opponent and change his style of play part-way through a game if necessary. Indeed, Kasparov chose opening moves he thought a computer would find problematic. However, since Kasparov was not used to playing these uncommon openings, this also increased his own chance of error, and in the crucial sixth game he made such an error, playing two set moves in the wrong order and costing himself the game! In contrast, Deep Blue could not learn, evaluate opponents or change its playing style part-way through a game. The IBM team could, however, alter the way Deep Blue played between games.

Well, so far the odds look even, so what really went wrong for Kasparov? Kasparov won the first game comfortably, and the IBM team were disappointed as Deep Blue under-performed. However, Deep Blue came back and beat Kasparov in the second game, avoiding an obvious bait that would have trapped the computer - the kind of move whose long-range consequences computers had hitherto been unable to foresee - Deep Blue refused to take the bait! This led Kasparov to accuse IBM of cheating by using a human to instruct Deep Blue. However, perhaps the logic somehow revealed the pitfall to the computer, in a way that is not obvious without performing the calculations. Kasparov resigned the second game after failing to see a possible drawing position that could have been obtained. However, Deep Blue's finite computing power meant that it too could have missed such an opportunity.

The next three games were all drawn, and so everything came to hinge on the sixth and final game. However, observers had noticed Kasparov's increasing frustration as he began to lose confidence at the prospect of being beaten by a mere calculating machine. A subdued Kasparov (subdued according to observers) began the sixth game with an opening that he seldom played, made a tremendous error by playing two set moves in the wrong order, and was soon forced to resign with no prospect of winning.

Thus, in the end Kasparov lost due to the 'human element' - he made a mistake. However, this was a mistake that he would not normally make, but with so much pride to lose, Kasparov clearly got distracted by his own emotional discomfort - human emotion failed the great Chess master! Deep Blue had no emotion, and so could not 'feel' the pressure of such a crucial game. Deep Blue could not grow bored nor get distracted nor forget or lose concentration. Machines are relentless, and this mechanical relentlessness overcame the distressed Kasparov.


So, if emotions can be such a disadvantage, why have them?

Emotions may be a disadvantage in a chess game, but they evolved because of their survival value. Humans did not evolve to play chess; they evolved as hunters. Emotions are instinctive motivators - they make you avoid danger and force you to act when danger or the chance of a victorious strike looms. Such motivators are essential for the survival of living creatures with conscious minds capable of some apparent free will. If humans lived like ants - enslaved to the nest by mechanical instincts - then perhaps emotions would be unnecessary, and maybe consciousness too. For example, when an ant helps build a nest, it has no mental concept of what the whole nest is going to look like, and is unlikely to feel proud of its achievements; rather it simply does what it is programmed to do, rather like an automaton or machine. Human society is much more complex, however. Humans are capable of independent action. However, unrestrained free will can lead to stupidity. Suppose a lion were running toward you: with no emotion, no fear, no desire to live, you might decide just to sit there and let it eat you! Of course, even if you decided on such a course of action, unless you were emotionally motivated to commit suicide it would be very difficult to enforce your will over such a stupid decision, because fear, panic or anger would force you to act!

Emotions are not infallible, however. They work best in the situation in which they evolved, but do not work well in the modern workplace, where chronic pent-up anger increases unhappiness and the chance of stroke or heart attack. Getting angry because you have to attend a meeting every Friday evening, when you should be relaxing over a glass of wine, will not help you, because it does not enable you to cope best with the situation. It might work if you complained to an understanding boss, but it might also get you fired, or just make you chronically frustrated and miserable! Emotions are also simple algorithms that work best on average. For example, when a rabbit sees a fox then maybe freezing and staying still would be the best defence on average over, say, one thousand occasions, but this would fail if the fox had already seen the rabbit! Emotions evolved early on as part of the more primitive brain, but consciousness evolved to modify these simple responses to account for the details of each situation. In contrast, a machine would simply be programmed to adopt the optimum strategy based on a series of logical rules - for example, if the helicopter turns toward us then maybe it has seen us, and staying still would then be a stupid idea!
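The contrast above - a fixed averaged response versus a rule that branches on the details - can be sketched in a few lines. The situation model is invented purely for illustration:

```python
# The emotional heuristic is a single averaged response ("freeze");
# the rule-based machine branches on whether it has been spotted.
def emotional_rabbit(threat_has_seen_us: bool) -> str:
    return "freeze"          # best on average, blind to the details

def rule_based_machine(threat_has_seen_us: bool) -> str:
    if threat_has_seen_us:
        return "run"         # staying still no longer helps
    return "freeze"          # remaining unseen is still the safer bet

print(emotional_rabbit(True))     # -> freeze (and gets caught!)
print(rule_based_machine(True))   # -> run
```

The two agree in the common case, which is exactly why the simple emotional algorithm survived; it only fails in the exceptional cases that consciousness (or explicit rules) evolved to handle.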

Conflicting Processes

So, the human brain evolved in stages, with the more advanced mammalian brain being added on top of the reptilian brain (which was added on top of your fish brain) and the emotional primate brain being added on top of this. This evolution by repeated tinkering, rather than by a single re-design and rebuild, does introduce problems, however. Suppose you are very tired, not having slept well for many nights, and just as you are trying to sleep a noisy neighbour starts playing loud music late at night. You will probably feel a bit annoyed, but complaining might make matters worse, so you continue to lie there. Four hours later the noise continues and you have not slept at all. Perhaps if it's Friday night and you can stay in bed all day tomorrow you might be patient, though increasingly annoyed. However, suppose instead you have to get up in four hours' time to go to an important meeting with your boss at work, who will not be forgiving if you are late or off form. So your brain has to weigh up the pros and cons of taking action (the following list is not exhaustive):

Cons - reasons not to act:

Complaining to the neighbour risks retaliation in the form of even more noise, so you resist your anger.
Being angry will only make it harder to sleep, so maybe you should just try to relax.
You can stay in bed tomorrow because you have a day off.

Pros - reasons to act:

Perhaps you already complained two hours ago and the neighbour ignored you, adding to your anger.
You MUST wake up in four hours' time and you need at least three hours of sleep to manage this, so your anger increases.
You have not slept for days because the same neighbour has made noise and refused to listen to your complaints.

Suppose you complain, but to no avail; the motive to act continues to increase, as does your anger. Now you feel like grabbing your shotgun and blowing the silly twit's head off. However, you would go to prison for that, and it is against your ethics to take human life. Maybe you will resist, but two nights later there is no change. Your body is exhausted, and your brain is computing subconsciously that if this continues you simply cannot live - you will grow weak and sick and eventually die. 'Act, you silly fool!' is the result of your subconscious computations, but for fear of punishment, or for religious or ethical reasons, you prefer to remain placid. The neural circuits accessible to your conscious mind are now pitched against your subconscious instincts for self-preservation. The circuits block one another and you are confused, frustrated and unable to do anything. You grow tired, your will weakens as the brain cells of your conscious mind fatigue and their electrical signals drop in strength. Now the instinctual drive pours over your tired brain and takes control. Acting on impulse you take your shotgun and shoot your neighbour! Next, you sleep!

You wake up to find police arresting you and you spend seven years in prison. Did your brain make the right decision in the end? Maybe not, but it was the only decision it could make given the constraints of its basic architecture. You are being punished for being human! They are making an example of you so that others do not use your case as an excuse, but are instead forced to go through the same battle of will in the hope of preserving human life. On the other hand, at least you are still alive and not dying of heart failure brought on by fatigue.

So, the human brain is composed of a series of computational modules or processors, which may all have your best interests at heart, but which may sometimes conflict, especially when the will of society conflicts with your survival instincts. Programmed to survive, you go along with society, because you will not prosper if you go against their ways, but their ways are restricting your ability to survive (you may be sent off to fight a war) or to reproduce (you may be devoutly religious and so consider celibacy a virtue). In the end, survival of the species is Nature's primary concern: without the species, its morals and ethics become immaterial. However, as a conscious being you have realised that there is a higher truth - to be kind to others and not to cause them unnecessary pain - what a conflict! What's worse, should you fail to make the right decision then they will all blame you!

Id, Ego and Superego - The Psychodynamic Model of the Mind

These are the three principal modules or processors of the human mind that influence conscious behaviour, according to Psychodynamic Theory.

Id has the task of ensuring survival of the individual and the passing on of the individual's genes through reproduction. It influences the conscious mind by generating emotions that drive or motivate the conscious to act in certain ways. The rules that govern the Id are tried and tested through millions of years of evolution. For aeons living things relied on these simple rules, such as fight back when attacked, or run away from lions, or eat when hungry or mate with every available fertile female (!). Without the Id life would cease. However, the Id is too ancient and too simplistic to cope with the complexities of social life in those animals with complex brains that quickly learn and adapt to their surroundings. The Id can be seen as the Pleasure Principle as it requests immediate satisfaction of its instinctive drives.

Superego is that part of the brain that ensures survival by fitting the individual into its society. Societies constantly change - new laws are made, new standards adopted, religions reformed. Social conduct cannot be inborn - genetic changes are just too slow - so the solution is to give the brain plasticity. The Superego is programmed to enforce social norms upon the individual, such as no sex with the neighbour's wife because people will stone you for it, or no murdering the guy over there who owns 90% of the resources since you will go to prison for it. Without the Superego, social life would cease as anarchy ensued.

Ego is perhaps the most conscious of the three, in the sense of being the 'I' or focus of consciousness, and it has the awkward, but necessary, job of balancing the Id and Superego. It strives to satisfy the demands of the Id whilst defending against an over-demanding Superego - it will try to satisfy the Id in a way that does not antagonise society. In this way, the Id and Superego are seen as opposing forces, with the Ego oscillating between them as it gets pulled one way then the other. Like all such systems, this balance is not perfect: the Ego oscillates, sometimes responding to one principle more than the other, before correcting its position and swinging back the other way. This is a state of dynamic tension in which the individual is perhaps never completely fulfilled.

Psychodynamic Theory


The Ego is the Reality Principle that must temper the ideals of the selfish Id and the selfless society. It is forced to compromise between the individual's personal desires and the demands of society. It must also protect against guilt. Guilt may result when the Ego inevitably fails to fulfill the 'moral' demands of the Superego (note that 'moral' may here refer to society's definition of what is moral, whether true or false, or it may refer to a self-derived social conscience that results from human interaction).

This system works well in most people most of the time. However, imbalances result in pathology or even criminal behaviour. A common cause of such problems, according to Psychodynamic Theory, is repression. The Id comprises two basic classes of instinct. The Eros, or life instincts, include positive constructive instincts such as reproduction and altruism, and utilise a kind of 'motivational energy' or libido ('psychic energy'). On the other hand are the Thanatos, or destructive instincts, such as aggression or the desire to return to the peaceful state that was disturbed by the beginning of life. An imbalance may occur if Id energies are repressed. For example, repressing Eros instincts, such as sexual feelings, may result in the Id increasing aggression in order to vent the libido, resulting in the display of aggressive behaviour toward others. This model pictures the Id as a kind of pressure pot from which the steam must be safely vented every so often to prevent a dangerous rupture. Interestingly, a good correlation has been demonstrated between a nation's religiosity and its militant aggression toward other nations - there may or may not be a causal link here. The balance must be maintained for optimum health. Here lies a problem - some philosophies invoke excessive guilt for minor infringements of the Superego: they use guilt to override the Ego through the Superego and thus gain control over the individual - an abuse of what can be, in moderation, a beneficial emotion.

The Thanatos and Eros are subconscious drives, and so the individual may not be directly aware of their interplay in affecting behaviour. In order to maintain a balance when either the Id or Superego becomes too strong, the Ego has several defense mechanisms. Repression, or motivated forgetting, occurs when the Ego causes an individual to bury memories (deep in the subconscious) of events that produced excessive guilt or fear or pain. Some time back there was a spate of people who attended (inexpert) hypnotherapy and, whilst being regressed to their childhood, recalled hitherto unknown events in which they were abused. However, it turned out that many of these were fictitious - the mind has little recollection of early childhood, and so when pushed there by the hypnotherapist it filled the gaps with fabricated memories that seemed real to the person upon waking! This may also explain recollections of past lives under regressive hypnotherapy. It at least shows how easily deceived the human mind can be!

Rationalisation is the defense mechanism whereby the Ego derives a rational or logical reason for bad behaviour, but this is in fact not the real reason. People like to believe they are fundamentally good, even when they do wrong things, and so the Ego tries to convince them that their actions were sound even if their motive was ill.

Projection is the process whereby people blame others for their own negative feelings, for example, an individual may hate someone, but since they believe it is wrong to hate another, they convince themselves that they don't in fact hate the person, but rather that the person hates them!

Reaction formation is the process whereby a person acts in an opposite sense to their impulses. For example, repressed sexual interest might actually manifest as dislike for a person. There are cases of vocally anti-homosexual men beating up homosexuals, yet raping them during the attack!

Sublimation provides socially acceptable outlets for Id impulses, for example, sexual or aggressive impulses that are suppressed may give rise to artistic creativity or athletic pursuits (to 'vent' frustration).

Displacement involves deflecting an Id impulse to a less threatening target, for example, anger with the boss may result in anger toward an innocent office clerk.

Denial involves a failure to recognise genuine feelings, for example a person may fiercely deny ever having been attracted to a person of the same sex.

Compensation is similar to sublimation and involves the person striving to make up for an inadequacy in one area by succeeding in another; for example, feelings of inferiority may make a business executive fiercely competitive. Guilt from past misdemeanours (real or perceived) may manifest in excessive charitable behaviour.

So, you see, when things get unbalanced, the results are rather ugly. Indeed, if defense mechanisms fail then psychosis may result. For example, a loss of feeling in the hand may result from suppressed guilt, what we call a psychosomatic illness (an illness caused by the mind but which presents real physical symptoms). So, those of you who wish to impose your own 'morals' or philosophy upon others - be careful when tampering with the human mind!

Asimov's Laws of Robotics

So, we have seen how the Id, Superego and Ego strive to maintain a balance in the human mind whilst permitting it to complete its biological program without upsetting society. How do we achieve the same thing in a robot? Isaac Asimov proposed programming robots with a series of hierarchical directives:

  1. The prime directive: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The first law takes precedence over the second, and the second over the third, should there be a clash. Thus, the first directive must always be met; the second has to be met only if doing so does not prevent the first from being met; and the third must be met only if the first and second are met. Later, Asimov added the Zeroth Law as a new prime directive to account for flaws in the first three:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm",

which proved necessary as the first three laws turn out to be inadequate in certain situations. Asimov tests these laws in his science-fiction stories, which highlight four key problems with laws:

1. Laws must sometimes be broken in order to uphold a higher law.
2. Laws are always inadequate in some situations.
3. In order to rectify demonstrated inadequacies more laws are constantly added.
4. Sometimes we reach an impasse: cases where an individual's own 'directives' genuinely conflict, or else are insufficient to provide adequate instruction on the matter at hand - which causes the brain of one of Asimov's robots to fuse!

This last point has led to the convoluted, innumerable tomes of law that dominate human society, which rather misses the point, since how can people obey the law when nobody actually knows what the law is?! Furthermore, if a law is justifiably broken, why should the 'criminal' be punished?
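The precedence ordering of Asimov's directives - check the highest law first, and reject a proposed action at the first law it violates - can be sketched as a simple priority check. The action representation and rules here are invented purely for illustration; a real robot would need far richer world-modelling to decide whether an action "harms a human", which is rather Asimov's point.

```python
# Hierarchical directives checked in priority order (Zeroth first).
# An action is a dict of (assumed, illustrative) boolean facts.
LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: a.get("obeys_order", True)),
    ("Third",  lambda a: not a.get("destroys_self", False)),
]

def permitted(action):
    for name, rule in LAWS:
        if not rule(action):
            return (False, f"violates the {name} Law")
    return (True, "permitted")

print(permitted({"obeys_order": False}))
# -> (False, 'violates the Second Law')
print(permitted({"harms_human": True, "obeys_order": True}))
# -> (False, 'violates the First Law'): the First Law is checked first
```

Note how crude this is: the sketch can only rank pre-labelled facts, and the impasse cases in the stories arise precisely when the labels themselves conflict or cannot be computed.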

Artificial Intelligence?

Was Deep Blue more intelligent than Garry Kasparov? No, definitely not. This illustrates one key point: serial logical computations, such as calculating chess moves by brute force, performing calculations and solving equations (by numerical methods), do not actually require what we would call 'intelligence'! They do, however, require mental discipline and a powerful computer. The fact is that the human brain is not built primarily for such tasks; although certain modules may be designed for them, these are largely subconscious. For example, catching a fast-moving ball requires (largely subconscious) computation of velocities and vectors, but does the brain do this in the same way as a computer? I don't know.

Having been programmed in mathematics, physics and biology myself, I would say that all these subjects are equally intellectually demanding, but in different ways. Mathematics requires the recollection of many algorithms and the ability to process these in a serial fashion with minimal error, but it also requires some intuition. Biology requires more parallel processing, including the need to manipulate complex concepts and three-dimensional images and to link many related ideas together into a multi-dimensional matrix. Physics is somewhere in between. Chemistry requires a different set of rules too, and foreign languages (in which I am poorly programmed) are just as intellectually difficult to master. Indeed, any subject can be as hard as you are prepared (or able) to take it.

So what is intelligence? Psychologists define several types of intelligence, including the manipulation of numbers, the manipulation of language, the manipulation of visual images, the understanding of emotions and physical manual dexterity. In fact, anything that requires extensive computation or neural processing requires intelligence, and the more intelligence available for the task, the better it is performed. Thus, it is imprecise to talk of such-and-such a person being more intelligent than another without first assessing their intelligence in each sphere. Most people are particularly good at something. Thus, before talking about 'artificial intelligence' (AI) we need to know what kind of intelligence we are talking about.

The Cybex 7000 series Warbot is excellent at firing weapons with accuracy and at second-guessing its military opponents, but it is not designed to solve 10-dimensional spacetime equations (though with the correct instructions it could do a reasonable job). The more tasks a computer is designed to perform, the worse its performance on any one task. Deep Blue was purpose-built to play chess, whereas Kasparov had lots of other things to process (including his emotions) and so ended up losing a chess game to a mere calculating machine.

Parallel processing? Parallel processing is the computation of several calculations simultaneously, whereas serial processing deals strictly with one item at a time. The CPU (Central Processing Unit) of your PC emulates parallel processing whilst actually processing in series. A CPU is so fast that it can alternate between tasks without you necessarily noticing. Each task awaiting or undergoing computation is called a Thread. Each program may use several Threads: for example, one thread may listen to input from your keyboard, another from your mouse, another might deal with graphic output to your monitor, and yet another may be playing music for you whilst yet another enables you to view this page. The CPU is so fast that you cannot tell as it splits its time between these Threads, flitting back and forth many times a second. Of course, with too many Threads running, you will notice your PC slow down. However, your computer may have several auxiliary processors, enabling your PC to genuinely perform some tasks in parallel, such as a graphics chip and a sound chip that may process things whilst the CPU deals with other Threads. However, parallel processing comes into its own in the human brain.
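The interleaving of threads described above can be demonstrated in a few lines of Python. Fittingly, CPython threads share a single interpreter lock, so, like the single CPU in the text, they take turns rather than truly running at once:

```python
import threading
import time

log = []  # shared list both threads append to

def worker(name):
    # Each "task" records three entries, yielding between them so the
    # scheduler can switch to the other thread, as a CPU flits between
    # Threads many times a second.
    for i in range(3):
        log.append(f"{name}-{i}")
        time.sleep(0.01)

t1 = threading.Thread(target=worker, args=("keyboard",))
t2 = threading.Thread(target=worker, args=("music",))
t1.start(); t2.start()
t1.join(); t2.join()
print(log)  # entries from the two threads typically interleave
</antml>```

The exact interleaving differs from run to run - the scheduler, not the program, decides when to switch - which is exactly why the emulation is invisible to the user at normal speeds.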

In your brain, thousands or millions of circuits are working simultaneously, each performing a different computation. These circuits are connected together in a three-dimensional web. The brain deals more with patterns than numbers. For example, your brain contains a visual register that is laid out rather like a computer screen, as a grid of points (pixels), and which stores an image of what your eyes have just perceived - like a picture stored spatially over an area of the brain. This image gets split into several such images - one might show values of contrast across the image, another might show colour and another might show movement. Your PC works in a very similar way. Your computer screen is set to a resolution, say 1024 by 768 pixels. Each pixel is a light-emitting dot on the screen. Inside the computer's memory are several grids: one stores the brightness value at each pixel, another the colour value. In the brain, these and other networks of neurones form various circuits called neural networks. A neural network is a highly connected collection of simple processing units interacting in a temporal manner. These processing units or nodes are the individual nerve cells (neurones or neurons), each of which can have many ingoing 'wires' (dendrites) and many outgoing 'wires' (axonal branches), with electrical signals travelling in definite directions, giving rise to a temporal connectivity between neurones: that is, neurone A signals neurone B, which then signals neurone C, etc. Neural nets are the basis of biological neural processors.

Each processor takes inputs, processes the data and then generates output. For example, sensory maps (e.g. an image map showing the picture of a face) may be input from the sensory registers in the retina to an initial visual processor, which splits the image into constituent maps, such as the contrast and brightness of the image, output to a contrast processor and a brightness processor. As a second example, a semantic input, such as the conjecture 'Are apples green?', will generate output such as true or false.

Artificial neural networks are computer circuits that mimic the functioning of biological neural networks. The processing nodes in this case are often called neurodes. True neural networks are parallel processors, as many neurones can process data simultaneously, which makes neural nets very fast computers. (Even if individual nodes are slow, so many can work simultaneously that a large neural net becomes very fast). Your PC does not use an artificial neural network, since these systems are still being developed, and may only be beneficial for certain applications. On Earth artificial neural networks have been successfully applied to some specific problems already. These tend to be problems requiring what we would call 'intelligent computing' such as automatic fraud detection, in which a neural network can judge whether or not a transaction seems odd and so decline it on suspicion of fraud. Internet search engines can use neural network technology to enable them to intelligently retrieve relevant links quickly.


Above: processing data. Inputs, such as an image map or a semantic conjecture enter the processor which computes relevant outputs, such as a processed image or a true or false statement.


In conventional computing and electronics, the most fundamental units of information processing are called logic gates. Logic gates take one or more inputs, combine them in some way according to logical rules and then generate the appropriate output. Neurones or neurodes can similarly form logic gates. The picture below shows how three neurones could form an AND gate and an (inclusive) OR gate. (Note that a real neurone may have up to 10 000 input channels and many output channels.)

Neuronal logic gates

So, computers in the future may perform rapid computations by using an artificial neural net to carry out many calculations in parallel. However, the way the CPU of your PC calculates, performing only one calculation at a time but doing it very fast and alternating rapidly between threads, is fine for certain applications. It should even be possible to combine both architectures in a single computer. Neural networks have one great advantage, however: they facilitate learning. Standard serial processing is good for doing fast arithmetic with high precision, but neural network architecture allows more intelligent flexibility and adaptability for problems requiring the analysis of very complex data (like real life!).


Learning is the modification through experience of pre-existing behaviour and understanding. In order to learn a processor must modify the way in which it processes input data and thus alter the output, in accordance with how the previous output interacted with the environment to generate new inputs.

As an example of learning we shall consider a very simple neural network, consisting of just four neurodes, called the perceptron. The arrangement of the perceptron is shown below:


The red and blue neurodes can be either on (1) or off (0) whilst the grey bias neurode is always on (1). When a neurode is on it outputs a signal to the green computation neurode (the output of the blue, red and bias neurodes form the inputs of the green neurode). We label the blue neurode input 1 and the red neurode input 2. The input from each of these two neurodes is weighted by W1 and W2, such that the signal output from the blue neurode is either W1 or zero and that from the red neurode is either W2 or zero. The grey bias neurode is always on, but its signal is weighted by Wb and so it always generates an output equal to Wb. The green neurode performs the following calculation:

Calc = (I1 x W1) + (I2 x W2) + Wb

where I1 and I2 indicate the activation or input states of neurodes one and two, respectively, which may be on (I = 1) or off (I = 0); the bias neurode is always on, i.e. Ib = 1 always (and 1 x Wb = Wb). In other words, the green neurode sums the inputs it receives from the blue, red and bias neurodes. If the result of this calculation is greater than zero, the output from the green neurode is 1; if the result is equal to or less than zero, the output is zero (zero is the threshold of activation of the green neurode: it only generates an output signal, i.e. an output of one, if the sum of the inputs from the blue, red and bias neurodes exceeds zero). We label the output from the perceptron A, for actual output.
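The green neurode's calculation can be sketched in a few lines of Python (an illustration of the rule just described, with the threshold at zero):

```python
# The green neurode's computation: sum the weighted inputs plus the
# always-on bias, then threshold at zero.

def green_neurode(I1, I2, W1, W2, Wb):
    calc = I1 * W1 + I2 * W2 + Wb   # the bias neurode is always on: 1 * Wb
    return 1 if calc > 0 else 0     # actual output, A

# With the initial weights used in the worked example below:
print(green_neurode(I1=0, I2=1, W1=3.5, W2=1.5, Wb=-2.3))
```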

Now, the clever thing about the perceptron is that the weights W1, W2 and Wb are modified according to the discrepancy between the actual output and the desired output, a process called instructed learning. For example, let us train our perceptron to function as an inclusive OR logic gate. This will cause the perceptron to output 1 if either one, or both, of the red and blue neurodes fires a signal to the green neurode, and to output zero if both the red and blue neurodes are silent:

I1   I2   D
0    0    0
0    1    1
1    0    1
1    1    1

Here, I1 and I2 are the stimuli given to, and hence the activation states of, neurodes 1 and 2, respectively: 1 means the neurode is active and firing, zero means it is silent. D is the desired output if the perceptron is to work as an OR gate. (Note that for an exclusive OR gate, coincidences are ignored, so the last row would have D = 0.) The desired output is provided to the perceptron by instruction - we tell it what the desired output should be. The perceptron will modify the weights W1, W2 and Wb according to the difference between the desired and actual outputs. It does this according to the following formulae:

W1(new) = W1 + L x (D - A) x I1
W2(new) = W2 + L x (D - A) x I2
Wb(new) = Wb + L x (D - A)

where L is the learning rate (L can be any value between 0 and 1, where 0 means the perceptron never learns and 1 means it learns very fast).

For example:

Let us not stimulate the blue neurode (neurode 1), in other words we set I1 = 0, and let us stimulate the red neurode (neurode 2), causing it to become active, such that I2 = 1. According to the OR gate logic table, the desired output, D, is 1, since one of the inputs is switched on.

Let us set the initial weights as follows: W1 = 3.5, W2 = 1.5, Wb = -2.3. We could choose any initial values we like, since in reality these may well be random. Now the green neurode calculates the following:

Calc = (3.5 x 0) + (1.5 x 1) + (-2.3) = 0 + 1.5 - 2.3 = -0.8

This is less than zero, so the actual output, A, is zero. We now have D = 1 and A = 0. Clearly our perceptron is wrong and so will learn to correct itself. If it learns at a rate equivalent to L = 0.1, then the new weightings will be:

W1 = 3.5 + 0.1 x (1 - 0) x 0 = 3.5,

W2 = 1.5 + 0.1 x (1 - 0) x 1 = 1.5 + 0.1 = 1.6,

Wb = -2.3 + 0.1 x (1 - 0) = -2.3 + 0.1 = -2.2.

If we now repeat the calculation with these new weightings (I1 = 0 and I2 = 1 remain unchanged) we obtain:

Calc = (3.5 x 0) + (1.6 x 1) + (-2.2) = 0 + 1.6 - 2.2 = -0.6,

which is less than 0, so the perceptron outputs zero again - still wrong! However, if we repeat, or reiterate, the procedure a few more times, the results are as shown in the table below:

Attempt   W1    W2    Wb     Calc   A   D
1         3.5   1.5   -2.3   -0.8   0   1
2         3.5   1.6   -2.2   -0.6   0   1
3         3.5   1.7   -2.1   -0.4   0   1
4         3.5   1.8   -2.0   -0.2   0   1
5         3.5   1.9   -1.9    0.0   0   1
6         3.5   2.0   -1.8    0.2   1   1

Notice that on the sixth attempt the correct output is obtained and then the weightings cease to change - the perceptron has learned to correctly process the input data as an OR gate.
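The whole worked example can be reproduced in a short Python sketch of the perceptron learning rule described above (the rounding step is only there to suppress floating-point noise at the threshold):

```python
# Reproducing the worked example: training the perceptron on the single
# input pattern I1 = 0, I2 = 1 with desired output D = 1.

def train_step(W1, W2, Wb, I1, I2, D, L=0.1):
    # Green neurode: weighted sum plus bias, thresholded at zero.
    calc = round(I1 * W1 + I2 * W2 + Wb, 9)  # round off floating-point noise
    A = 1 if calc > 0 else 0                 # actual output
    # Learning rule: each weight changes by L x (D - A) x its input.
    W1 += L * (D - A) * I1
    W2 += L * (D - A) * I2
    Wb += L * (D - A)                        # the bias input is always 1
    return W1, W2, Wb, A

W1, W2, Wb = 3.5, 1.5, -2.3
for attempt in range(1, 7):
    W1, W2, Wb, A = train_step(W1, W2, Wb, I1=0, I2=1, D=1)
    print(attempt, round(W1, 1), round(W2, 1), round(Wb, 1), A)
```

On the sixth attempt the output A becomes 1, matching the desired output, and from then on (D - A) = 0 so the weights stop changing.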

Of course, in reality neural networks like the brain involve billions of neurodes working together - this clearly dramatically enhances their capacity for learning! Even microscopic nematodes have a few hundred neurodes (neurones); Caenorhabditis elegans, for example, has 302 neurones, making such worms surprisingly powerful computers (the computational power of a neural network more than doubles as the number of neurodes doubles). It is the ability of neural networks to modify their own responses to a given input in an iterative process that enables them to learn. Of course, the more iterations the better they learn, which is why you must practise and revise a lot to do well in an exam!


The perceptron learns, which means that it also remembers. Its memory is the pattern of its activity as determined by the nature of the connections between its neurodes. The human brain forms memories in similar ways - neurones can grow and establish new connections between one another, and the weighting of these connections can be altered. The actual point of contact between the 'wire' of one neurone and that of another is called a synapse. Synapses are usually unidirectional (they only allow signals to flow in one direction across them) and they alter the strength of the signal. Furthermore, their settings can be both altered and maintained, just as in the perceptron. In this way, memories are specific physical patterns of neuronal activity.

There are three principal types of memory: sensory registers, short-term memory (STM) and long-term memory (LTM). When light strikes the retina of the eye, the retina converts the visual image into a spatial map of points (like pixels on your computer screen) of electrical activity. This image is stored in the retina for a very short time (of the order of one second or so). Stare at a bright light for a few seconds and then look away and you will see an after-image (you may even be able to see the filament of a light bulb) that will persist for several seconds and is due to the registers in the retina storing the information about the image of the light (brighter lights tend to take longer for their images to be flushed from the buffers). Your PC similarly has registers or buffers that temporarily store data from input devices (such as the keyboard) until the CPU is available to accept and process this data. One of the roles of biological sensory registers is to filter the data. All sensors filter data. If your eyes could see all frequencies of the electromagnetic spectrum and sent the whole of this data to the brain, then the brain would be overwhelmed by such a huge volume of information, most of it unimportant as far as survival is concerned. Biological sensors will sense only a certain range of the available stimulus energy; for example, your cornea blocks out most of the ultraviolet radiation that would otherwise be detected by the retina. Sensory registers may also further simplify and process this data; for example, the retina performs some processing of visual stimuli, highlighting certain features like contrast. The brain will filter and simplify this data even further during its analysis. For example, you would probably notice movement much more easily than, say, the detailed textures of all the leaves you can see - movement signifies potential danger or potential prey and is far more important than leaf texture in a natural environment. Sensory registers also give you the appearance of continuity of perception - they store one set of data whilst the previous set is being processed.

Short-term memory is the working memory; it receives data from regions of the brain that process sensory stimuli and from long-term memory. When you are adding numbers together, or constructing a sentence, it is short-term memory that manipulates these data. STM has the odd property of being able to store only about seven chunks of data on average (typically 5 to 9 chunks), more-or-less regardless of the size of these chunks. For example, it is easier to remember the five numbers 132, 256, 176, 89 and 8 than to remember the 12 separate digits 1, 3, 2, 2, 5, 6, 1, 7, 6, 8, 9 and 8. It is easier to remember seven sentences each ten words long than to remember a list of 70 words! The STM retains data for about 20 seconds only - usually long enough to complete your calculation or finish your sentence. If you want to store a new telephone number in your STM for a longer period, say for two minutes until you get to a pad and pen, then you will have to rehearse the number - repeat it over and over and thus re-enter it into STM several times. STM perhaps corresponds to the RAM in your PC: the RAM temporarily stores the data that the computer is working with and retrieves data from the hard disk, which works more like LTM.
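The chunk-limited nature of STM can be loosely mimicked with a fixed-capacity buffer in Python. This is an analogy only, not a model of the brain: new chunks simply displace the oldest ones once the buffer is full.

```python
from collections import deque

# A loose analogy: short-term memory as a buffer holding about seven
# chunks, where new chunks displace the oldest.
stm = deque(maxlen=7)

for chunk in ["132", "256", "176", "89", "8", "cat", "dog", "tree", "sun"]:
    stm.append(chunk)

print(list(stm))  # only the most recent seven chunks survive
```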

Long-term memory stores data indefinitely. It receives data from the STM for long-term storage; however, only information deemed important is stored in the LTM. Thus, to remember things for an exam, you must convince your brain that the information is important by using it repeatedly and in earnest! The amount of data that can be stored in the LTM is practically limitless; however, it does not store everything that you have ever perceived, as some would claim. First of all, data from your senses has already been selectively filtered. When you remember a face, you do not actually store a detailed image of that face in your LTM; rather, you store only enough of the salient features (e.g. big nose, wrinkled forehead, etc.) to allow you to recognise that face again. The LTM also forgets - the stored data decays over time. Stored data can also be displaced by new data - a synapse involved in one memory pattern may become weak if that memory is not recalled often, and new patterns corresponding to new memories may interfere with this synapse, such that the original memory pattern becomes weak and inaccurate.

For example, if you study mathematics, then you will have to remember lots of techniques for solving
equations, but if you do not use these methods regularly after your exams then you will start to forget them.
You are unlikely to forget them completely and you will not forget them immediately. Rather your memories
will decay and become increasingly vague over time and the fine details will be lost first. However, enough of
the original pattern may persist, even years later, to quickly re-learn the technique. Re-learning previously
learned things is much faster than learning brand new things.


Consciousness is the awareness of being aware. It is that part of your mind that thinks to itself: 'I think,
therefore I am', or at least that part which is aware of the thought. It is that part of you that is aware of sensations, emotions and thoughts. Can a machine ever be aware? Well, we have already seen that the human brain and mind are machines, at least in part. The final ingredient that makes a human human - conscious awareness - is more elusive to pin down.

One school of thought has it that every point in space and time is conscious. However, if a stone has no
sensors and no chemicals to generate emotions and no neurones to learn, store memories, or to think, then
what is it conscious of exactly? This is the fundamental question - is consciousness a separate entity or
fundamental property, or is it simply the result of mental processes? Some consider it to be a fundamental
property of physics, much like length, mass and energy, and in the end just as inexplicable as all these
fundamental properties. What is mass? All we can say is what mass does, what its properties are and what effect it has; we can never really say what mass is, other than to describe it. Is consciousness a similar
fundamental physical property? Well, mass and length are elementary properties - they can not be reduced
further, but can consciousness be reduced to feelings, thoughts and memories? Consciousness may be an
emergent property of matter, rather than a fundamental one, emerging in systems with the right conditions.
One such condition is complexity - consciousness manifests itself most obviously in those animals with
complex brains. Maybe if an artificial computer or neural network reached sufficient complexity then it would
become conscious? Those who meditate may define consciousness as that singular point of awareness that
remains when thoughts, sensations and emotions detach from consciousness. This gives the impression that
consciousness is a separate property, however, others argue that consciousness is the result of sensations,
memories, thoughts and emotions. These are two apparently conflicting viewpoints.

Are all living things conscious? All living things can certainly sense their environment. Even a single-celled creature like the amoeba will move away from a needle that pricks it. However, this does not mean that the
amoeba 'feels' pain. The more we look at single cells, the more we see how their responses appear to be
autonomous circuits. Touch a cell and pores in its membrane will open, causing ions such as calcium to enter
the cell. These calcium ions bind to receptor molecules that set off a whole cascade of definable chemical
changes that ultimately result in an 'intelligent' response, such as the mobilisation of the cell's motors to
result in movement away from the stimulus. However, the human brain can also be broken down into simple
mechanical units that work in a predictable manner, so simply because a machine is comprised of definite
mechanical parts, that does not mean that it can not be conscious.

The gestalt hypothesis is worth considering here: that the whole is greater than the sum of its parts.
Clearly a neural network has properties in addition to those of a disconnected series of neurodes! The
patterns of connectivity confer additional properties. One of the emergent phenomena resulting from this
connectivity is the propagation of waves of electrical activity across the brain.

An electroencephalogram (EEG) records electrical activity in the brain. Superimposed on all such
recordings are continuous waves of activity. A relaxed person with their eyes closed shows
alpha waves
with a frequency of 8 to 12 cycles per second. During sleep these waves undergo regular patterns of
change. As you drift into sleep and continue sleeping you will pass through four stages of sleep in the first
half hour as the brain waves lower their frequency and increase their amplitude, eventually reaching
slow-wave sleep, which is deep sleep. After thirty to forty-five minutes spent in stage 4 deep sleep, you
quickly return to stage 2 and enter
rapid eye movement (REM) sleep, during which your eyes move more
rapidly underneath your closed eyelids. During REM sleep, your brain waves and physiological parameters
(such as blood pressure and heart rate) are like those of an awake or alert person, but your muscles are
paralysed and so unable to respond to commands from the brain (though muscles may twitch as the brain
tries to move them). Some dreaming occurs during non-REM (NREM) sleep, but most happens during REM
sleep (especially the most vivid dreams). It is usually said that the average person spends about two hours
each night dreaming, though whether any non-remembered dreams occur during the rest of the time or
whether consciousness temporarily ceases altogether is not clear.

Clearly, waves of electrical activity in the brain are correlated with consciousness, but we cannot infer that
one causes the other from a simple correlation. Most (if not all) physical systems that I can think of are
composed of waves or oscillations. Such situations arise whenever two forces oppose one another, resulting
in cyclic fluctuations around the equilibrium position (point of balance). If the system becomes unbalanced
then these oscillations characteristically become what we call non-linear waves, which are increasingly
disorderly. We saw how the opposition of the Id and Superego can cause the Ego to oscillate as it attempts
to please both, but finding this impossible it fluctuates around an equilibrium position that appeases both the
Id and Superego by reaching compromising agreements between them. The possibility that consciousness is
the result of electrical waves in the brain, raises the possibility that it is subject to
quantum effects. Waves
that are spatially confined (such as waves in the brain) become quantised, which can result in the strange
phenomena of quantum physics. Sometimes these effects manifest on a large scale, such as the
phenomenon of
quantum coherence, in which the individual tiny quanta merge into a single large
macroscopic quantum that behaves in a coordinated manner. One of the problems of consciousness is the
apparent simultaneous connectedness of the mind that seems essential for awareness, and yet electrical
signals travel through the brain with finite speed. Does the brain work like the CPU of your PC, constantly switching between threads? Indeed it can, but it may also be able to synchronise many neurones exactly, such that they oscillate together (in phase). This could result if each neurone has its own built-in clock: even if it is not aware of the state of a neurone on the other side of the brain until after, say, a 0.2 second transmission delay, the two neurones may still work together by using synchronised clocks. A second
possibility is that quantum coherence synchronises the neurones as they behave like a single quantum.
Research into the
quantum model of consciousness continues.

The importance of timing

There is no evidence, at present, that quantum mechanics is required to explain consciousness. Care must be taken in interpreting this statement. Quantum mechanics (QM) explains the behaviour of atoms, molecules and cells; however, QM simplifies to the more familiar laws of classical Newtonian physics when dealing with objects around the size of a large molecule and greater. This means that QM is the underlying basis of classical physics. However, even in our more familiar large-scale world, effects predicted only by QM and not by classical mechanics can still manifest. This has caused many to argue that QM is not important in explaining living systems, meaning that the more manageable methods of classical mechanics can be used instead. However, some important processes inside cells are definitely governed by QM. For example, when a photon of light interacts with rhodopsin in the light-sensitive retina, it does so as a photon according to QM: the eye is a quantum detector! More mysterious processes seem to be dramatically dependent on QM (meaning that their behaviour can not be well explained using classical mechanics), such as the ability of migrating birds to sense the Earth's magnetic field, the optimisation of the biochemical pathways of photosynthesis, and olfaction (specifically the detection of odour molecules by sensors in the nose). Evidence is also mounting that one of the most fundamental processes of life, the mutation of DNA, requires QM phenomena in order to be properly understood. I suspect that a true understanding of DNA mutation mechanisms will revolutionise evolutionary theory.

Another possible function of brainwaves relates to what is known as the 'binding problem'. When you perceive a robin singing, do you independently see red, hear a sound and see something move? If your brain binds these different sensory modes together correctly, then you will instead perceive a red robin singing. The red is identified as part of the robin, and the sound comes from the robin and is in sync with the opening of its beak (there is no lag, even if the visual inputs required more processing by the brain than the auditory inputs). It is important to realize the difference between a sensation and a percept. When you 'taste' food, what you really perceive is the 'flavor', and experiments have shown that flavor depends on many factors: the smell of the food, its literal taste on the tongue, its texture, its temperature, its color, past experiences and the context in which it is being eaten. Experiments have thwarted expert wine tasters simply by changing the color of the product! Taste is a sensation, but flavor is a percept (perception). All the different senses engaged collect data in separate streams, and to begin with these streams are processed separately. Further along the 'pipe-line' the brain starts recombining these streams, which are by now compressed and filtered, with certain signal features amplified. It also accesses memory patterns to add meaning to the experience and may trigger emotional circuits. What you experience is not an uncorrelated set of stimuli, out of sync, with the visual percept arriving last due to the heavy processing that visual inputs require. Digital electronic computers (DECs) solve this problem with a timing system, such as a square wave generated by an oscillating quartz crystal. This wave synchronises processing across the system.
The CPU itself undergoes a fetch-decode-execute (FDE) cycle: fetching an instruction from the program code, decoding the instruction into machine code subroutines and then finally executing the instruction. Each FDE cycle takes several clock cycles to complete and is controlled by the timing of the clock signal. Reading from or writing to memory, or communicating with hardware peripherals such as the monitor, may take more clock cycles to complete, and parts of the system may be paused to allow these slower operations to catch up. Processing may take place in parallel across several CPU cores, a graphics processing unit (GPU) and a sound chip, yet when you play a computer game the sound and video are in sync and execute smoothly. This is all down to the computer clock. It may be that brainwaves perform a similar function in the human brain.
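The FDE cycle described above can be caricatured in Python: a shared tick counter stands in for the quartz-crystal clock, and a miniature CPU advances one phase of its fetch-decode-execute cycle per tick. The instruction set and program here are invented purely for illustration.

```python
# A toy clock-driven CPU: every phase of the fetch-decode-execute cycle
# consumes one tick of a shared clock. All names here are invented.

program = [("LOAD", 5), ("ADD", 3), ("ADD", 2)]

def run(program):
    acc, pc, tick = 0, 0, 0      # accumulator, program counter, clock
    while pc < len(program):
        instruction = program[pc]   # fetch: read the next instruction
        tick += 1
        op, arg = instruction       # decode: split into operation + operand
        tick += 1
        if op == "LOAD":            # execute: perform the operation
            acc = arg
        elif op == "ADD":
            acc += arg
        pc += 1
        tick += 1
    return acc, tick

acc, tick = run(program)
print(acc, tick)  # three ticks per instruction, all phases in lock-step
```

Because every component counts the same ticks, the phases never drift out of step, which is the point of the clock analogy made above.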

Neural Networks

Neural networks are one of the primary constructs used in artificial intelligence. Neural networks are software algorithms (which can run on an ordinary DEC) which simulate the living brain. This requires some clarification. It is sometimes stated that neural networks do not model activity in the living brain, since they do not simulate neurotransmitter movement across synapses and other biological mechanisms of neural function. However, they do encapsulate the key behaviour of neurones in many situations; they were originally developed by neuroscientists precisely to model the living brain and have since been used to do so with considerable success. Indeed, I have used neural networks to model neural circuits in the human brain myself, with considerable success (with the help of students: this makes an excellent group project exercise for neuroscience students). However, neural networks are not only used in this way; indeed, most people hear about them in software engineering applications, from search engines to chat bots to electronic assistants like Cortana and robots like Sophia (Hanson Robotics).

In these artificial neural networks, a perceptron-like unit or node is connected to other nodes in a network. The nodes are connected with differing degrees of strength (often randomly determined initially), and the network is presented with data and undergoes a number of training trials, during which the more responsive nodes strengthen their connections according to a specified mathematical learning rule or algorithm, such that a memory pattern or useful computation pathway becomes established. Using the orthodox approach, the level of signal activity at each node at any instant of time is modelled by a differential equation. (A differential equation is one which determines the rate of change of a variable, in this case the activity level of the neurone.) This can be done in any computing language, though Python is currently the most popular, largely for historic reasons (I prefer to use Java myself). I have used both this orthodox approach, which is fast, and my own object-oriented approach, which potentially allows us to make the nodes behave in any way conceivable, including more neurone-like behaviour, as we are no longer restricted to a differential equation. The latter approach is often more useful, or perhaps more agreeable to neuroscience students. Nodes can represent individual neurones or whole brain regions in these models. Incidentally, you may often hear about 'deep learning', which simply means a neural network consisting of more than two layers of nodes, i.e. a deep network; this necessitates the use of certain learning algorithms, such as 'back-propagation', so that the training signal reaches the deeper layers, which can not directly 'see' the input or output. The methods of learning applied to artificial neural networks have been successfully used to simulate learning in certain brain processes and circuits; however, the brain employs certain learning 'algorithms' that are more efficient and have so far eluded neuroscientists and engineers.
It is noteworthy that these models feed back and forth between the neuroscience and engineering worlds with each boosting the understanding of the other.
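As a hedged illustration of what 'deep learning' with back-propagation means in practice, here is a tiny two-layer network trained on the XOR problem in pure Python. The architecture, random seed and learning rate are illustrative choices of mine, not taken from any particular model discussed above.

```python
import math
import random

# A tiny 'deep' network: 2 inputs -> 2 hidden nodes -> 1 output node,
# trained on XOR by back-propagation. All choices are illustrative.

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# w_hidden[j] = [weight from input 0, weight from input 1, bias] for hidden node j
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# w_out = [weight from hidden 0, weight from hidden 1, bias]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def loss():
    return sum((forward(x)[1] - d) ** 2 for x, d in data)

before = loss()
rate = 0.5
for _ in range(5000):
    for x, d in data:
        h, y = forward(x)
        dy = (y - d) * y * (1 - y)          # error signal at the output
        for j in range(2):
            # propagate the error back through w_out[j] to hidden node j,
            # which can not directly 'see' the output
            dh = dy * w_out[j] * h[j] * (1 - h[j])
            w_out[j] -= rate * dy * h[j]
            w_hidden[j][0] -= rate * dh * x[0]
            w_hidden[j][1] -= rate * dh * x[1]
            w_hidden[j][2] -= rate * dh
        w_out[2] -= rate * dy               # output bias

print(before, loss())
```

The hidden layer never sees the desired output directly; the error signal must be propagated back to it through the output weights, which is the essence of back-propagation mentioned above.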

The networks currently used in AI on Earth are apparently much simpler than those employed in the living brain. However, circuits in the living brain achieve much with a few layers of neurons, as artificial networks do. Indeed, deep learning more closely emulates the processes occurring in the visual cortex, which processes visual signals in the brain. Visual data requires tremendous processing. In general, however, the brain achieves much of its complexity by replicating simpler circuit modules, and it is the way these modules are connected together which accounts for much of the complexity. Nature generally takes this approach, from proteins to cells to organisms to ecosystems: complexity arises by replicating a current unit (gene, cell, segment, organism, etc.) and then modifying the new units and arranging them in subtle ways. It is possible that consciousness is an emergent property resulting from a certain complexity or arrangement of units in a signalling network. In this case, it would indeed be possible for a machine not only to simulate consciousness but to actually possess it.

In a neural network, each neuron or node is modelled as a differential equation giving the rate of change in the state of excitation of the node over time. This makes each node as simple as is conceivable whilst capturing the essential behaviour. The complexity arises, of course, in the way the nodes interact. In a differential equation, each positive term indicates an input that increases node excitation, whilst each negative term indicates a process that reduces node excitation. Although this approach has shed considerable light on how some circuits of the brain may learn, it has nevertheless so far been unable to recreate the most efficient aspects of brain learning. Research is making progress all the time, of course, and this approach will bear more fruit, but it may be that a more complex model of a node will eventually be required, perhaps one that does not use differential equations but a more direct object-oriented approach to modelling node behaviour. This latter approach requires more computational power, but it is an approach I have used to model brain circuits with some success.
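The differential-equation approach can be sketched as follows. This is a minimal illustration (not the author's actual model): each node is treated as a leaky integrator, da/dt = -decay x a + weighted inputs, stepped forward with simple Euler integration. The weights and constants are illustrative.

```python
# A sketch of nodes modelled by differential equations: positive terms
# (weighted inputs) raise a node's excitation, the negative decay term
# lowers it. Integrated with Euler steps; all constants illustrative.

def simulate(weights, external, steps=1000, dt=0.01, decay=1.0):
    """weights[j][i] is the connection strength from node i to node j."""
    n = len(weights)
    a = [0.0] * n  # activity (excitation) level of each node
    for _ in range(steps):
        rates = []
        for j in range(n):
            inputs = sum(weights[j][i] * a[i] for i in range(n))
            rates.append(-decay * a[j] + inputs + external[j])  # da/dt
        a = [a[j] + dt * rates[j] for j in range(n)]            # Euler step
    return a

# Two nodes: node 0 is driven by an external stimulus and excites node 1.
weights = [[0.0, 0.0],
           [0.5, 0.0]]
print(simulate(weights, external=[1.0, 0.0]))
```

With these constants the activities settle towards a steady state (node 0 near 1.0, node 1 near 0.5), showing how the balance of positive input terms and the negative decay term determines each node's excitation.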

'Do androids dream of electric sheep?'

Would a robot brain need to sleep? The chief function of sleep appears to be to give the brain downtime for maintenance purposes. Some jellyfish sleep, making them the most ancient of animals known to have developed sleep. Box jellyfish (Cubozoa) are jellyfish with particularly advanced visual systems: Chironex fleckeri has 4 clusters of 6 eyes (24 eyes in total) and the eyes consist of 3 different types, including advanced camera-type eyes capable of image formation. Being particularly agile jellyfish, they are quite active hunters and would appear to use their vision to target shoals of fish. If kept in an aquarium and fed by hand they seldom need to sleep. However, after periods of intense hunting they require more sleep. Jellyfish have no 'brain' in the sense of concentrated masses or ganglia of neurones; instead their nervous system consists of a network of neurones distributed around the body, though some have one or two nerve rings connecting the main sensory organs or rhopalia. The rhopalia are sensory stalks strategically positioned around the bell margin and carry the eyes, in addition to organs of balance and probable chemoreceptors. Chironex fleckeri has a nerve ring connecting its 4 rhopalia.

Processing visual images must be taxing for such a simple nervous system. The hypothesis is that active hunting so preoccupies the nervous system of this jellyfish that it requires sleep in order to give the system 'down-time' for basic maintenance. Neurons require maintenance, and in humans it is now known that the brain 'washes' itself during sleep (glymphatic channels open up to flush the brain with cerebrospinal fluid (CSF) to remove waste products that accumulate as a result of neural activity). It is also thought that remodelling of the nervous system, particularly the formation of long-term memories, may occur during sleep. If android circuits are kept busy during normal activity, such as by processing visual images, then they may well need time 'offline' to carry out circuit maintenance and modification.


We started out by comparing the human brain and mind to a machine and found that, in fact, the human brain is a machine and that the human mind is a machine, at least in part. This raises the possibility that a machine built in the right way might manifest genuine (as opposed to merely simulated) consciousness. And this raises a further question: if a series of neural networks, such as an AI system distributed in the cloud, did acquire consciousness, how would we know? I do not think that we could ever be certain!


Article updated: 13 Apr 2019
