The
RUSSIAN DOLL MODEL
of a Conscious Brain



Hugh Noble

Tartan Hen Publications
Copyright © Hugh Noble July 2019

Abstract
    This paper describes a cognitive model of an intelligent, conscious brain, which I call the "Russian Doll Model" (or RDM). The model is a Theoretical Construct. While it may not have the same physical structure as any real, conscious, biological brain, it is claimed that if it were constructed (using only conventional electronic units) it would be able to perform in many ways that are similar to how a real brain operates - including appearing to have "feelings" - such as desires, dislikes, enthusiasms, urges ... etc.
    The RDM brain, moreover, would identify these feelings as being the motivations for its own behaviour. It is also claimed that this brain mechanism would not only be conscious, but that it would consider itself to be conscious.
    There has long been a respectable and useful place for such models in the history of scientific discourse. Two famous examples will be discussed. It will also be suggested that, when examined in detail, all scientific theories can be described in those terms.
    The RDM proposal is offered here as an adjunct to other current explanations for emotions and other related properties, which have been developed by other workers in this field. Like many others, I regard the intelligent and conscious human brain as being the product of evolution. The extra ingredient which is put forward here, however, is that being conscious is not a mysterious condition but provides an intelligent organism with additional survival advantages, and as such is yet another product of evolution. It is argued here that the development of consciousness relates to the established element of evolutionary history known as the "Cambrian Explosion". That period occurred some 540 million years ago. The fossil record of that period appears to show that the diversity of living animal species accelerated, bringing into existence almost all of the phyla which are living at present (Feinberg and Mallatt 2016).
    In this text, I will put forward an argument that that event split the history of evolution into two main phases -
(1) a long, slow period of development (lasting some three billion years) during which, although they did evolve further, creatures remained small - mostly microscopic, with a few larger arthropods and little else. They never became capable of more than a simple pattern of behaviour.
----- The Cambrian Explosion (sets up phase 2) -----
(2) The second phase was much shorter (but still lasted more than half a billion years). During this period creatures acquired additional abilities. It is on this period that my conjectures are concentrated. Creatures evolved further and developed an enhanced ability to construct internal mental representations (of what is commonly assumed to be "reality"). From the information provided by that representational structure, and by making use of the concept of "causation", there also emerged an improved repertoire of behavioural forms, which included:-
    ... being able to predict future events; being conscious; being able to make conscious decisions about what to do in the event of predicted events;
    ... and many more intellectual abilities which those developments made possible.
    Thus, to be conscious is to be in a condition of remembered and remembering awareness - awareness of what we are doing, and also of everything else of which we can be aware, which can affect us, and which is likely to do so in the future. There is a limit, of course. But refusing to accept that limit, trying to expand it, and trying in that way to be prepared for what is coming, is what intelligence is all about.

End-Abstract


Author's NOTE
(About myself and how I got involved in this topic)




QUOTATION:
    "What seems to be singularly human is not consciousness or free will but inner conflict - the contending impulses that divide us from ourselves. No other animal seeks the satisfaction of its desires and at the same time curses them as evil; spends its life terrified of death while being ready to die in order to preserve an image of itself; kills its own species for the sake of dreams. Not self-awareness but the split in the self is what makes us human.
    How this split came about is unclear. There is no convincing scientific theory on the matter."

[John Gray, The Soul of the Marionette, 2015]




1.00 Introduction

1.01 The Divided SELF

    I think that John Gray is partly right in that quotation above. The human brain is indeed characterised by internal tension and by contradictory views.
    But I also think he is wrong to say that this is a phenomenon which is peculiar to the human species. Similar inner contradictions can be observed (in a more primitive form) in the behaviour of other species which live in tribal groups and have, within their group, a strong code of socially approved behaviour. I am thinking here of a team of sledge-dogs and the brutal way I have seen them deal collectively with a member of their community which they seemed (to me) to regard as not behaving as it should. It is that "should" (which I think is present in their minds) that I find reminiscent of the brutal way some human religious groups deal (and have dealt) with supposed transgressors, witches or apostates.
    I also think that Gray is wrong to assert that there is no scientific explanation for these questions. Allow me, please, to offer here, in this text, just such an explanation - or what I call "a working explanation". What that means is explained later.



1.02 The RDM Idea
    My RDM idea proposes that a conscious intelligent brain is not (just) a single unit. I suggest it can also be usefully regarded as a collective entity - a group of semi-independent sub-brains, which - sometimes individually and sometimes collectively - have access to several different sources of information and can, when it is appropriate to do so, make contributions (or critical interventions) to collective decisions.
    The first, and most obvious, of these data are sourced from perceptions - devices which can detect items of information present in what we usually regard as the external environment. Those perceptions process that data and deliver to the rest of the collective brain a pattern of "brain stuff" - that is, an arrangement of neurones, axons, dendrites and synapses, which are interconnected in complex ways, and which can send messages to one another.
    The brain also has what might be called "internal perceptions". These receive and process information, which is sourced internally - that is, from within our own bodies. These bits of information tell us about our own current internal condition - our state of nutrition, of tiredness, of thirst, of boredom, of anger, of happiness, pain, and so on.
    The collective brain receives all that information, responds to particular conditions that it has been able to recognise, and then triggers fixed (pre-programmed) and specific types of response to each particular and specified perception. It does that in ways that are likely to improve its own prevailing circumstances. The word "improve" needs clarification (see below).
    That is the way I envisage life forms operating during the first phase of their evolutionary history, that is, for some three billion years. During that time there was little evidence of what we may informally describe as "intelligence" in the way these organisms behaved. Most of them were microscopic. There were a few larger creatures - in the later stages there were some arthropods and also evidence of a few creatures which resembled centipedes. But on the whole, most were very small and could not therefore move about with any speed.
    As a consequence, they had little experience of more than one type of habitat, and had no need to be able to modify their pattern of behaviour in order to deal with the different conditions they would have experienced if they had transported themselves between several habitats. That was not the case, however, for those which occupied habitats in shallow water close to hitherto uninhabited land spaces.



1.03 The Cambrian Explosion
    Around 540 million years ago, the fossil record indicates that the diversity of animal species increased suddenly. Various explanations for that have been suggested - the evolution of the eye structure, an increase in atmospheric oxygen, competition between predators and prey. Several, and perhaps all, of these may have some validity. What seems to be the case, however, is that several additional mechanisms of behaviour may also have evolved at that time. From then on these new mechanisms co-existed with the previously more-or-less automatic behaviour, and the fossil record suggests that among the range of phyla which emerged were examples corresponding to nearly all of the phyla which exist today (Feinberg and Mallatt 2016). My RDM proposal concentrates on the period which lasted from then until the present - we could call it the "second phase of evolutionary development".
    The acquisition of extra forms of behaviour was fortuitous. About one hundred million years later, animals began to explore and to establish themselves upon the hitherto uninhabited land space of the Earth. There they encountered multiple habitats, which were different from one another and over which they could roam with relative ease. In these circumstances a fixed range of behaviour was inadequate. A more subtle ability to modify behaviour, and to do so quickly using enhanced mental abilities, was required. The variable circumstances of these different habitats were many - open spaces; closed spaces; changeable weather conditions; light and shade; mountainous and flat terrain; alternating salt and fresh water as rivers (did or did not) debouch into a shallow-water habitat; tides ebbing and flowing; different ways in which sight, sound and scent could operate. The ability to modify the pattern of behaviour used in response to these changeable conditions could deliver significant benefits for creatures which lived in these habitats and moved easily from one to the other.
    The older method of response was generally very fast and that ability was retained. But it was also characterised by very slow changes in the prevailing reaction mechanism. With the new regime, however, there was a relatively slow mechanism of reaction, but also a relatively fast method of changing the type of reaction. We could characterise that by saying that creatures were then able to develop a repertoire of reactions, to put that repertoire in store, and then, when required, select the most appropriate response from that repertoire to suit the conditions of each environment.
    The development of that strategy, in addition to the older regime, produced selection pressure which forced an increase in intellectual abilities - as I shall argue later in more detail. That produced the following sequence of developments -

(1) Over an extended period of time the mechanism develops a repertoire of possible responses to the recognition of particular circumstances.
(2) That range of responses is stored for use on a future occasion.
(3) The mechanism then evolves and as it does so, it introduces a method for choosing what seems to be (in the given circumstances) "the best" available form of response.
(4)
    (4a) Initially this method of choice requires each option in the repertoire to conduct a test of its own efficacy in the prevailing conditions, and then (provided it passes that test) to place its own version of the response in an ordered list. The version with the greatest merit is the first in that ordered list.
    (4b) The mechanism of choice progresses over time to reach a stage where there is a conscious deliberation to discover the "best" option. This requires that the mechanism calculates the likely outcome of each option and also identifies feasible potential unfortunate outcomes which the system is programmed to avoid. This method of choice requires that the procedure which performs that choice is a conscious procedure, and that the reasoning is eventually stored as a memory within the concept structure which represents SELF.
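The selection method described in stage (4a) - each option tests its own efficacy, and the options that pass are placed in an ordered list with the greatest merit first - can be sketched in a few lines of code. The sketch below is a minimal illustration only: all the names (ResponseOption, efficacy_test, the example behaviours and scores) are invented for the purpose, and Python is used purely as a convenient notation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ResponseOption:
    name: str
    # Returns a merit score for the prevailing conditions, or a value
    # at or below the threshold if the option is inapplicable.
    efficacy_test: Callable[[Dict], float]

def rank_repertoire(repertoire: List[ResponseOption],
                    conditions: Dict,
                    threshold: float = 0.0) -> List[ResponseOption]:
    """Each option tests itself; passing options are ordered by merit."""
    scored = [(opt.efficacy_test(conditions), opt) for opt in repertoire]
    passed = [(s, o) for s, o in scored if s > threshold]
    passed.sort(key=lambda pair: pair[0], reverse=True)
    return [o for _, o in passed]

# Illustrative repertoire for a shoreline creature (invented examples):
repertoire = [
    ResponseOption("flee", lambda c: 0.9 if c.get("predator") else -1.0),
    ResponseOption("feed", lambda c: 0.6 if c.get("food") else -1.0),
    ResponseOption("rest", lambda c: 0.1),
]

ranked = rank_repertoire(repertoire, {"predator": True, "food": True})
# The first entry in the ordered list is the option of greatest merit.
print([o.name for o in ranked])
```

The point of the sketch is only that stage (4a) requires no central deliberation: each option verifies itself, and the ordering falls out of the scores.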




2.00 Evolution Before the Cambrian Explosion


2.01 How did life start?
    As indicated in the previous section, the first period lasted for about three billion years, which is a prodigiously long time - and indeed accounts for most of the time that life forms have existed on our planet. How life first appeared is not known, but we do not lack plausible ideas. Some have suggested that microbes first emerged around volcanic vents in the deep oceans, obtaining energy from the hot soup of volcanic gases escaping from those vents in the ocean floor. Another suggestion is that acidic gradients around the edges of the vents were the main energy source. It has also been suggested that living organisms arrived from outer space on watery asteroids which had travelled through environments very different from those that could be experienced here. There is vigorous research into these possibilities, and an agreed version of events will probably emerge in due course. It may transpire that the agreed version will be a mixture of these current theories. The ingredients of life are many. Some may have been provided by asteroids and some may have had more local origins.
    What is agreed, however, is that the story of life here began about one billion years after the Earth was established as a planet within the solar system. It then continued for those three billion years. What is also generally agreed is that the first habitats those organisms occupied were in very deep water, and that the environment above the surface must have been a toxic mixture of gases (with very little oxygen), exposed to strong UV radiation from the Sun.
    It is also thought that, some time after it formed, the Earth was struck by a wandering planetoid. Both the Earth and this body contributed material to form the Moon, which then remained in close orbit around the Earth. That, of course, created the tidal flow of our oceans. These ideas were born from a study of the Moon's geology (samples of which were brought back to Earth by the first human visitors to the Moon).
    Life forms, having arrived or emerged on Earth, began, slowly, to modify the prevailing conditions. Plants - mostly microscopic - were consuming carbon dioxide and releasing oxygen, so that the proportion of the Earth's atmosphere which was oxygen was gradually increasing. That also means that the proportion of the Earth's environment where organisms could live was gradually expanding. As they established themselves nearer to the surface, they must have changed from being reliant upon the energy of the ocean-floor fumaroles to energy sources found in sunlight, within the oceans themselves and within the atmosphere above the oceans.

Note: All of that information is available from standard reference sources. However, I also think it is important to establish what the prevailing conditions were at that time, and what mechanisms were in operation affecting life forms before the Cambrian Explosion occurred. It is important because what was the case at that time became the launching pad for my RDM proposal, and it suggests that the changes which the Cambrian Explosion introduced did not replace those older circumstances.

    The new mechanisms of evolutionary change were added to the older mechanisms. And what was then, in effect, two parts of a dual system went on to operate in tandem.
    Moreover, that tandem system evolved further - that is, both parts (new and old) of that dual system continued to evolve.
    The older part was characterised by very fast responses (to a limited range of circumstances), but it was also very slow to upgrade its own mechanism, since a new regime replaced a previous one only when a new (and improved) generation emerged. I call the basic mechanism which drives that organism an "SRA" or "Stimulus-Response Automaton". It continued to operate, but was reserved for circumstances that required a very fast response (but which did not need much consideration).
    Meanwhile, the newer part of the mechanism, which emerged during and after the Cambrian Explosion period, enabled the brain to construct and then to store a repertoire of alternative responses. When a recognisable set of more complex circumstances was presented to it, it was able to "choose" a response from its stored list of options. I put the word "choose" in inverted commas because initially it was a very automatic kind of choice - in fact, few people would use that term. But later it evolved further, so that it became (unequivocally) a choice. At that stage it made use of an intelligent appraisal of the options. Further progress required the development of a conscious understanding of the consequences which each of those options could reasonably be expected to produce. That appraisal could also judge whether or not those consequences might include unfortunate outcomes (with a high or low probability). That means the brain was able to avoid choices which involved possible threats.

    The evolutionary development of all these special abilities which made it possible for a brain to produce a conscious understanding of future events, is an integral and significant part of my RDM proposal.




3.00 The Cambrian Explosion

3.01 A Change in the Mechanism of Change

A Diagram of the relationship between the SRA 
and the collection of DOLLS
The diagram illustrates the RDM proposal. I have shown seven DOLL structures in the diagram and I discuss that same number below. If there is some validity in my proposal, then I suspect a greater number of DOLL structures would be required to facilitate the mechanism's operation. There could be more than seven DOLLS in the evolutionary story, and there may also be more subordinate DOLLS, effectively inside the ones I have described here. Moreover, the number and distribution of DOLLS could vary from one individual to another. Even so, the seven shown here, and discussed below, will serve well enough to let me elaborate my RDM idea.

ANECDOTES
(Personal experiences that prompted me to
think about what changes were required)




3.02 The Burgess Shales
    The first indication that something unusual had happened was found in the Burgess Shales in the Canadian Rockies. This discovery was made by a palaeontologist called Charles Walcott in 1909. What Walcott found was a huge diversity of fossil species, with their soft parts remarkably well preserved. At first this apparent increase in diversity was thought to be confined to that locality, but similar fossil beds, of a similar age, have since been found elsewhere in the world. There has been a recent find in China with a date somewhat earlier than the Burgess Shales. The increase in diversity is now thought to have been a global phenomenon, dated to approximately the same time. It has been suggested that all of the phyla which we can observe at the present time were also to be found in those fossil records (Feinberg and Mallatt 2016).



3.03 Diverse Habitats
    Before animals could venture on to the land, they would need to establish themselves near the fringes of that land space. They would also need to wait until there was something available on that land that they could eat. We should also note that the shallow-water fringes of the land would have presented animals with a diverse collection of habitats, unlike anything they had encountered before. Here was a place where tides ebbed and flowed; where the difference between night and day was obvious; and where the weather affected the temperature of the sea. The tides were probably the most dramatic effect. A creature could be in relatively deep water, and then a few hours later be stranded in a shrinking pool. Some would be able to flop about and get themselves back into deep water again. There must have been rain and rivers flowing into the sea, so some habitats would have salty water while others, nearby, would have much fresher water. Some creatures would eat others, so those others became potential prey - which made it advisable to stay out of sight.
    And so on ....



3.04 Implications

Note: If what I am suggesting is true - that the apparent increase in diversity was caused by environmental conditions, and that this led eventually to the ability to choose a suitable response to recognised conditions - then that could not have happened without the acquisition of several other new abilities -

(1) The acquisition of concepts which characterise events and conditions in general terms (in addition to the memory of actual experienced events).

(2) The acquisition of a concept of SELF, as a physical component of those events and circumstances.

(3) The acquisition of the concept of causal connections, which enable the prediction of future events with better than random chance accuracy.

(4) An understanding of events, past, present and future which is associated with a conscious awareness of these events.

(5) The ability to predict future events requires the perception of signals which presage those events. This could be the case even if those signals are not really the physical cause of those events, provided they are readily perceived and can be made use of for that purpose. These signals may well have been introduced in evolutionary history not deliberately, but because making them available to act as alerting signals gave the group as a whole a significant survival benefit. The perceived existence of emotional states may correspond to these alerting signals, as may the formation of internal plans designed to bring about some intended GOAL state.



3.05 Virtual Machines - A Practical Method of Implementing the DOLLS Mechanism
    I have a further suggestion about the nature of the brain mechanism which could create and introduce the DOLLS structure. Aaron Sloman and his colleagues have, over many years, developed the idea that the mechanistic structures we call "Virtual Machines" have had a role to play in the development of consciousness (Sloman & Chrisley 2015). I have not been able to accept his thesis completely, but I do accept that the technical device of virtual machines could readily be applied to the construction of the DOLLS structure which I envisage in this text.
    That is the approach I would try first if I planned to implement my RDM proposal. To enable the various DOLLS to communicate information among themselves, I would, in the first instance, introduce a so-called "Blackboard System". Each DOLL could "post" or "write" information on the blackboard, and all the other DOLLS could then access and use that information freely. In computer science that is a well-established method of information sharing, which also permits simultaneous or parallel processing.
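As an illustration of the blackboard pattern (and of nothing more - it is not a model of any real brain structure), the sharing mechanism can be sketched in a few lines. All the names here (Blackboard, post, read, and the example DOLL postings) are invented for the purpose:

```python
from collections import defaultdict

class Blackboard:
    """A shared store: any DOLL may write, and every DOLL may read."""

    def __init__(self):
        # topic -> list of (author, data) postings, in arrival order
        self._entries = defaultdict(list)

    def post(self, author: str, topic: str, data):
        """A DOLL 'writes' an item of information on the blackboard."""
        self._entries[topic].append((author, data))

    def read(self, topic: str):
        """Any DOLL can access everything posted under a topic."""
        return list(self._entries[topic])

board = Blackboard()
board.post("SRA", "perception", "movement at water's edge")
board.post("DOLL_1", "trace", "was feeding a moment ago")

# Every other DOLL now has free access to both postings:
for author, data in board.read("perception"):
    print(author, data)
```

Because each DOLL only reads and writes the shared store, the DOLLS never call one another directly - which is what makes the pattern a natural fit for the simultaneous, parallel processing mentioned above.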


Note: This suggestion - about a possible modification to the mode of evolutionary change - is the most speculative (and problematic) aspect of this text. I do not know of any observable evidence which directly supports the idea (apart from what may be the coincidental occurrence of the Cambrian Explosion). Nevertheless, I do believe that my argument on this is sound. Such a change would be beneficial to creatures which had expanded their territory to include shallow water, and were then confronted by a multitude of different types of habitat, where different alternative mechanisms of perception and recognition could be deployed to advantage. It has been known and accepted for some considerable time that the brain - and the human brain in particular - is able to process information using parallel processing. The DOLLS proposal fits neatly with that idea, and it offers us a way of thinking about those inner contradictions.


    So, now that we have established a suitable platform which could allow an intellectual aspect to become involved with what was previously a gene-controlled, non-intellectual mechanism, we can explore how such a system might have evolved and become the human brain we can observe today.






4.00 Evolution of the DOLLS and their Specialist Abilities



4.01 The SRA (in stand-alone mode)
            (i.e. before the Cambrian Explosion)



I offer a schematic representation of the mechanism.

The SRA mechanism, its input and its output data



And an alternative version which takes up a lot less space on the page or screen -

An alternative and more compact presentation of the SRA.


    In the beginning there is only the SRA - or "Stimulus-Response Automaton". It provides the only operational animal brain (or simple nervous system), and it does that over an immense period of time (developing very slowly for all of that time - and beyond).
    In its primitive state the mode of operation of this mechanism is as follows -

    (1) It identifies or recognises certain simple conditions, and then ...

    (2) It triggers an action-response* which has been established by simple evolution, and shown to be beneficial within its normal environment by its survival and reproduction.

* Initially the response triggered is a simple action. To emphasise that I have called it an "action-response". Later, after the mechanism has acquired a bit more sophistication such that the response can have some influence on subsequent behaviour, we could then call it an "operant-response". The evolutionary pathway I have suggested in this text charts a steady progression to even more sophisticated behaviour.

    So, over a long period of time, the mechanism gradually develops more complex algorithms, and its basic ability to recognise simple features is expanded -

(Appendix A: Algorithms)


    Note please that at the start, these algorithms operate very quickly, and they tend to operate in parallel. That is not because of something that the mechanism does. It is because the incoming signals do that themselves. Incoming beams of light can impinge on all parts of a hard-template simultaneously.
    Sound signals impinge on both ears, not always simultaneously, but often within a very short time interval, which can provide information about the direction of the source. A scent, too, can be directional if it is combined with behaviour which consists of a search pattern of some kind.
    But in my list of algorithms, while the earliest ones use parallel processing of that kind, these gradually give way to procedures which cannot be performed in any way other than in linear sequence. And the pace of execution slows.
    For example, if an algorithm is trying to follow and analyse the characteristics of a perimeter, it will need to know where a short line segment ends, before it can begin to examine the next short perimeter segment to discover something - like a change of direction.

Note: There also has to be some investigation about the absence (or otherwise) of additional boundary segments outside and inside that perimeter, and it cannot know what is inside or outside until that boundary is determined.
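The serial constraint described above can be made concrete with a small sketch: each change of direction along a perimeter can only be detected after the preceding segment has been completed, so the steps cannot all be done at once (unlike template matching, where incoming light strikes every part of a hard template simultaneously). The representation below - a perimeter as an ordered list of points - is invented purely for illustration:

```python
def direction_changes(perimeter):
    """Walk the perimeter in order, reporting each point where the
    direction changes. Each step depends on the previous segment
    having been examined first - an inherently serial procedure."""
    changes = []
    prev_step = None
    for a, b in zip(perimeter, perimeter[1:]):
        step = (b[0] - a[0], b[1] - a[1])   # the segment just completed
        if prev_step is not None and step != prev_step:
            changes.append(a)               # a corner: direction changed here
        prev_step = step
    return changes

# A simple rectangular outline traversed clockwise:
outline = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
           (1, 2), (0, 2), (0, 1), (0, 0)]
print(direction_changes(outline))  # the three corners after the start
```

The loop cannot be parallelised as written, because each comparison needs the result of the previous one - which is exactly the slow, linear mode of execution described above.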



4.02 SRA + DOLL_0
            (The first serial procedures operating in semi-autonomous mode)


Basic SRA mechanism with the first DOLL.

The diagram shows the SRA and DOLL_0 operating together. At this stage DOLL_0 will probably contain and control one or two isolated procedures, which will operate in semi-autonomous mode. That is, those procedures will need to be triggered by the SRA, but each will then continue to operate without further control by the SRA until it concludes, at which point it returns its computed end-result (a proposed action-response). An alternative way to present that illustration is like this ...




Note: The DOLLS idea is a theoretical construct. I have also discounted the importance of it not corresponding precisely to the structure of a real brain. It is only a convenient way to think about a brain and the way it operates. Nevertheless, I do suggest that it has a relationship to reality. I suggest that a real brain is able to accommodate the evolution of (perhaps isolated) procedures which can be triggered into action, each of which will then continue to operate autonomously until it is concluded. If that is in fact what is happening, then I find it rather confusing to think about a large number of separate procedures doing "their own thing" simultaneously - and yet with some kind of coordination which achieves a common end result. So I prefer, and think it does not violate any important principles, to regard those separate procedures as being grouped together into isolated DOLLS, each of which is focused upon a single common context.
    The mechanism will also need extra controls to ensure that the SRA will be able to allow time for all the parts of the mechanism to reach their results at approximately the same time.


    I describe the operation of the DOLLS as "semi-independent" because it is more complicated than the elementary action-responses that were triggered before this point (by the SRA in its stand-alone mode), and because these responses can keep going regardless of what the SRA may then be programmed to do.
    We might reasonably suppose that some of these DOLLS have smaller DOLLS inside them to deal with component sub-systems. In this account I have restricted myself to a minimum number of DOLLS, but in reality I imagine there may be a considerable community of DOLLS.


4.03 DOLL_1
            (The TRACE Memory)




These two DOLLS operate together, in parallel. DOLL_0 operates as before, using information which comes directly from perceptions. It tests various hard templates and, if it discovers a match, it is programmed to trigger an associated action-response - or, at least, that is what it used to do. Now, however, it hesitates, to allow another new bit of the mechanism to operate alongside its previous activity. As it is progressing through its routine operations, it also sends a signal to trigger DOLL_1 into action. DOLL_1 controls the TRACE memory. The TRACE memory keeps a record of a sequence of STATES (i.e. conditions) which have occurred within the SRA (or within some selected portion of the SRA).
    To maintain the TRACE, DOLL_1 drops the oldest record off one end and adds the current STATE of the SRA to the recent end of the TRACE memory. So the decision that the mechanism (as a whole) must now take is -

"Should I keep on doing what I was doing (as I am now being told by this memory record - i.e. by DOLL_1), or should I stop that and start doing what I am being told to do by these new perceptions - i.e. by the SRA?"

    Of course, if nothing new has happened then it will just keep on doing what it was doing a moment ago. But if something new has happened, a decision needs to be taken.
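The TRACE mechanism just described - drop the oldest record off one end, add the current STATE to the other - is exactly the behaviour of a fixed-length queue, and can be sketched in a few lines. All the example states below are invented for illustration:

```python
from collections import deque

class TraceMemory:
    """A minimal sketch of DOLL_1's TRACE: a fixed-length record of
    recent SRA states, oldest dropped as the newest is added."""

    def __init__(self, length: int):
        # maxlen makes the deque discard the oldest record automatically
        self.trace = deque(maxlen=length)

    def record(self, sra_state):
        """Add the current STATE of the SRA to the recent end."""
        self.trace.append(sra_state)

    def what_was_i_doing(self):
        """The most recent remembered state, if any."""
        return self.trace[-1] if self.trace else None

doll_1 = TraceMemory(length=3)
for state in ["resting", "feeding", "feeding", "fleeing"]:
    doll_1.record(state)

# The oldest record ("resting") has dropped off the end:
print(list(doll_1.trace))
print(doll_1.what_was_i_doing())
```

With such a record in place, the mechanism can compare "what I was doing a moment ago" against "what the new perceptions now suggest" - which is precisely the decision described above.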

    Note that each DOLL, as it produces a recommendation of its own, brings with it two things -

(1) The source of some new type of information which enhances the representation which the mechanism is jointly constructing, and

(2) Some new or alternative action-response. This, provided it can establish itself as having priority over all alternatives, is triggered directly.

    What the mechanism must then decide is which of the two action-responses to perform - the one that has now been presented to it by the SRA (i.e. by current perceptions of its exterior - and interior - environment), or the one which DOLL_1 is now presenting to it (i.e. the one the memory tells it that it WAS performing a few seconds or minutes previously).
    An obvious way to interpret those words, is to imagine there is another separate mechanism which considers these two alternatives and then selects one of them.
    "Considers these two alternatives". It is easy to write these words. It is rather more difficult to write down how that "considering" must be done. Would that be like writing a computer programme? And also providing the data on which the consideration is based? But how could a programme be written without prior knowledge of the information which is at that point written in the form of brain-stuff coming directly from the mechanism of perceptions?
    That, however, comes later. It is not the interpretation that I prefer. At this point, I prefer instead that the two alternatives each present their own decision and accompany that decision with another factor - a degree of confidence (written in the form of brain-stuff) which is constructed after an examination of the strength of the evidence available (in brain-stuff form).
    To be explicit, consider how a computer program is written. In Java, or C++, or Lisp, or Prolog or some such computer language. But before it can be performed by a computer it must be compiled (or interpreted, or in some other way turned into machine code). And in the brain, machine code is what I am calling "brain-stuff". There is little to be gained by trying to find some format which makes sense to ourselves (e.g. English, or French, or German, etc.). That would be appropriate for person to person communications, but not for the mechanism talking to itself. It has got to be in some form which the brain can understand and turn into actions. That task has already been achieved by the perceptions - and the reader may recall that the development of these has taken some considerable time. To maintain speed of response and avoid repeating that slow development period, it is better to use the readily available resources.
    Why do I prefer the idea that the DOLLS should self-verify? Because I reckon that it is more likely to be compatible with evolution and with a mechanism consisting of brain-stuff. I have illustrated this idea with a couple of anecdotes drawn from personal experience. To read that short piece of text or an illustrative diagram - click below.

(ANECDOTES)


    Each new DOLL merely presents its own option, accompanied by its merit, which is measured in the same way as are the merits of all existing rival options. The judgement about which is "best" is made in exactly the same way as all previous decisions of that kind. In other words - "Minimum change". The mechanism which performs that choice, just keeps going as before. The only change necessary is the addition of a new option to the chain of options in a list structure. That can be put into effect by the addition of a new synapse (or the identification and activation of an existing one). That is the only change required.
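To make the "minimum change" selection concrete, it can be sketched as nothing more than a list of options, each carrying its own self-assessed merit, with the winner chosen by simple comparison. The sketch below is a minimal illustration in Python; all the names and numbers are my own inventions, not part of the model's anatomy.

```python
# A sketch of self-verifying DOLLS: each DOLL proposes an action-response
# together with its own confidence (merit), and the selection mechanism
# simply picks the option with the highest merit. Adding a new DOLL is
# just the addition of one more option to the list - "minimum change".

from dataclasses import dataclass

@dataclass
class Option:
    source: str      # which DOLL proposed it, e.g. "SRA" or "DOLL_1"
    action: str      # the proposed action-response
    merit: float     # self-assessed degree of confidence

def choose(options):
    """Select the option with the greatest merit."""
    return max(options, key=lambda o: o.merit)

options = [
    Option("SRA", "react-to-new-perception", merit=0.8),
    Option("DOLL_1", "continue-previous-action", merit=0.3),
]
print(choose(options).action)   # here the SRA's fresh perception wins
```

The point of the sketch is that no separate "considering" mechanism is needed: each contributor verifies itself, and the choice reduces to a single comparison over a list.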



4.04 How the DOLL_1 Memory Operates
        (and how does the Brain-Stuff support these operations?)

    So where does the remembered information come from?
    The most plausible answer to that question is - from the SRA itself. It is the SRA that is in contact with perceptions. It is the SRA which triggers associated action-responses. It does not need to think about these things. It just does them as directed by a mutational change, which is then confirmed as a "good" thing to have done, by its own continued survival.


synapse types (after Schneider)


    It is the case that visual images, sound experiences, olfactory information, etc. tend to be stored in particular anatomical brain locations, or lobes. However, under the influence of hallucinatory drugs, like LSD, visual images, smells etc. can occur within what appear to be locations which are not normally associated with these types of information which have their source in the perceptions (Carhart-Harris 2016).
    What such observational facts suggest to me, is that a psychoactive drug like LSD is able to modify the connections between an item of stored information, its original source and the normal destination of its output signal, and does so by acting on the synapse transmitter substances which control those connections. That effect could change what was originally interpreted as the stored record of a sound, and make it appear to be a visual record instead - or some other switch of modality. That explanation fits quite well with my suggestion that a given experience acquires its apparent characteristics by virtue of its connections to a given node. I cannot at this moment, however, see what kind of explanation could be offered, for these experimental observations, by the proponents of any other theories of brain processing, which dissociate conscious experience from physical mechanism.
    I am impressed and encouraged by the range of synapse arrangements which have been revealed by the histological investigations of Schneider and his co-workers - simple switches; switches which can switch on, and off, other switches; bi-directional switches; pretty well every arrangement that one can think of.

synapse gap (after Schneider)

This diagram and the one above were copied and adapted from "Brain Structure and its Origins" by Gerald E. Schneider, (2014)


    The diagram above shows an enlarged image of a synaptic gap. When an action-potential reaches the gap, molecules of the transmitter substances are released. These diffuse across the gap and make contact with the receptor sites on the other side of the gap. This contact stimulates the generation of another action-potential on the other side (belonging to a different neurone). Neuroactive substances can interfere with this process either by stimulating it, or by inhibiting it - say by attaching their own molecules to the receptor sites.



4.05 The Content of the DOLL_1 (TRACE Memory)
    I return now to the issue of what could be stored in the DOLL_1 memory record. If the data is indeed the content of the SRA, those memory records can be quite simple. We can envisage them as having the structure of a written document - several pages drawn straight from the SRA, like several sheets of paper. Like this ...

a schematic diagram of the TRACE memory

Each "page", or "STATE" of the SRA at a single moment in time, carries a unique identifier ("Sn") and a Time-Stamp ("Tn"). The time of a state is defined by the clock. One moment in time is the time interval between two adjacent clock-ticks. Any group of signals which arrive from the perceptions (within a single time-interval), are deemed to be simultaneous.

    The TRACE memory thus created may begin as a very short sequence of STATES and, as it evolves, gradually grows longer. Allowing the TRACE to grow to an indeterminate length, however, could create problems with retrieval of memories. So there is a size limit. A new STATE is added to the TRACE every time the clock ticks. At the same moment the oldest STATE in the sequence is dropped from the memory. This represents a loss of potentially valuable information.
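The behaviour just described - a fixed-length sequence where each clock-tick adds the newest STATE and discards the oldest - can be sketched as a first-in-first-out buffer. The sketch below is illustrative only; the size limit, the names, and the shape of a STATE record are my own assumptions.

```python
# A sketch of the TRACE memory as a fixed-length FIFO buffer.
# collections.deque with maxlen drops the oldest STATE automatically
# whenever a new one is appended.

from collections import deque

TRACE_LIMIT = 5                      # hypothetical size limit
trace = deque(maxlen=TRACE_LIMIT)

def clock_tick(time_stamp, state_id, sra_content):
    """On every clock tick, append the current STATE of the SRA;
    once the buffer is full, the oldest STATE is silently discarded."""
    trace.append((time_stamp, state_id, sra_content))

for t in range(8):                   # 8 ticks, but only the last 5 survive
    clock_tick(t, f"S{t}", {"perception": f"p{t}"})

print([s[1] for s in trace])         # ['S3', 'S4', 'S5', 'S6', 'S7']
```

Each record carries its identifier and time-stamp, so the relative order of STATES - which is what DOLL_1 reports on - is preserved without any further machinery.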

Note-1: I know very well that the structure I have described is unlikely to be replicated in detail within an actual biological brain, but recall please my earlier comments on the issue of a theoretical construct and its role in our scientific understanding of events and our ability to predict those events. It is the accuracy of those predictions which is important. If a model achieves that, inaccuracies of detail are of little significance.

Note-2: If the information which is the content of the TRACE, were to be moved back into the SRA (without disturbing the relative positions of the data content) the various bits of information would be back in the original positions where they were connected to the perceptions (the source of the information) and to the destination of the output (the action-units which trigger the action-responses). There is therefore no mystery about what the data in the TRACE represents. That is detailed by those connections. Furthermore, when the data is transferred from the SRA to the TRACE memory, those relative positions are preserved and therefore the nature (and interpretation) of the data is also preserved.

    My proposal is that the type of data (vision, hearing, tactile, scent, taste) is indicated by the sensory lobe from which the data is sourced, and the action-response which it triggers (run-away-from, try-to-repeat, eat-it, make-love-to, ...) indicates the significance of that data (be-afraid-of, desire, food, sex, ...).

Note-3: If the time sequence seems wrong (because the motivation for an action, should precede that action being performed) then bear this in mind - in the later stages of the evolution of this mechanism the system becomes able to predict actions before they are performed. That has an effect on the apparent sequence of events, with performance coming after the prediction of what the response action will be. These issues will be discussed later (see section 5.04).

    In addition to all that, within the sequence of STATES in the TRACE will be GOAL-STATES (which the SRA is programmed to try to achieve and repeat) and also anti-GOAL-STATES (which it is programmed to avoid). From the nature of those action-responses, the GOALs and anti-GOALS can be identified (retrospectively). These things are determined by the evolutionary history of the species (and not the experience of any single individual).



4.06 DOLL_2
            (The LTSM Memory)


The mechanism with DOLLS 0,1 and 2



    DOLL_2 comes next, and brings with it a second and new type of memory. Prior to the advent of DOLL_2, the mechanism has been throwing away a great deal of information, most of which is effectively junk - i.e. repetitions of experiences which have not much value and certainly not enough value to merit retention. But among that stuff there is a small percentage of previous experience which may be worthy of retention.
    The problem for DOLL_2, however, since it has not yet developed anything we could describe as an "intellect", is this - How can it recognise these more useful bits of previous experience? What should it select for retention?
    The answer to that question is readily available - priority levels are associated with each stimulus-response sub-system. This priority level will be stored within each STATE of the SRA. So whenever the current STATE of the SRA contains a high-value priority condition, that STATE, and the entire contents of the TRACE memory, will be copied, in its entirety, to a new memory store. This will be designated the "LTSM" (or "Longer-Term Selective-Memory"). The priority level which triggers this response will vary from one individual to another, and the length of memory storage will also vary.
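The DOLL_2 rule - copy the whole TRACE into the LTSM whenever a high-priority condition occurs - can be sketched in a few lines. The threshold value and all names below are illustrative assumptions, not part of the model proper.

```python
# A sketch of the DOLL_2 retention rule: every STATE carries a priority
# level; when the current STATE's priority exceeds a threshold, the entire
# TRACE (the sequence leading up to that condition) is copied to the LTSM.

from collections import deque

PRIORITY_THRESHOLD = 0.7             # varies from one individual to another
trace = deque(maxlen=4)
ltsm = []                            # Longer-Term Selective-Memory

def clock_tick(state_id, priority):
    trace.append((state_id, priority))
    if priority >= PRIORITY_THRESHOLD:
        # retain the whole sequence that led up to the priority condition
        ltsm.append(list(trace))

for state_id, priority in [("S1", 0.1), ("S2", 0.2), ("S3", 0.9), ("S4", 0.1)]:
    clock_tick(state_id, priority)

print(len(ltsm))                     # 1 - only S3 triggered retention
print([s for s, _ in ltsm[0]])       # ['S1', 'S2', 'S3']
```

Notice that no judgement is exercised: retention is triggered mechanically by the priority level, which is itself fixed by the evolutionary history of the species.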

schematic of the LTSM memory

    Notice what has been retained. It is a sequence of recognised conditions, and the action-responses activated by those recognitions - events and actions lasting a few seconds (later to become minutes) that on at least one previous occasion (and probably more often) achieved an important condition. That condition might be described as "good" or as "bad". A "good" condition is one which evolution has designated as one that should be sought, and a "bad" condition can be described as an "anti-GOAL". There is no form of intelligent judgement being exercised here. These designations are simply the consequence of survival. GOAL states are states which are sought (which promotes survival). Anti-GOALS are avoided (which also promotes survival).
    Each of these sequences of events and responses end with some priority condition. If that condition is a GOAL, then the reasonable choice is to adopt the pattern of behaviour itemised in the sequence. There is then a good chance that the GOAL state might be re-experienced.
    However, if the end condition is an anti-GOAL then the obvious advice would be to avoid doing all that stuff again. But of course, what is stored in the LTSM does not tell the mechanism what it should do - only what it should not do. The best thing that the mechanism can then perform might be called a "startle reaction". That could be a jump, a yell, a dead stop. Something which is different from the recorded sequence of actions, and which might (repeat "might") change the course of events. What would be a real stroke of luck would be a change which might alter things so that it finds itself at the start of a "good" sequence of actions. (Not a very likely event but better than no chance at all).

Note-1: At this point there arises one of the most difficult issues for my RDM proposal. How can the system be programmed to recognise repetitions in long sequences of actions, while also allowing some degree of toleration? I think the answer is that it does not do that. It does the job gradually, on short sequences at first, then slightly longer ones. Then with sequences that are a shade longer ... and so on.
    There is another question - How much toleration does it permit? The answer, I think, is very little initially. But gradually increasing later. This I think is not an issue that evolution could deal with. What evolution can do, however, is to construct the mechanism that would be used to put these sequences together.
    Once the mechanism is in place, the problem is more likely to be handled by an individual as he/she matures after birth. But it is also an issue which most non-human creatures may never deal with in a satisfactory way. What is likely to be required are the mechanisms introduced (in this account) by DOLL_3 (see below). I refer to concept formation and to REM-Dream sleep. Also relevant is the discovery that REM-Dreams are essential for the formation of transitive logic chains (Walker 2017).
    Another question (which has some relevance) - How would you fit a jigsaw puzzle together? Would you try to fit all the pieces at the same time? I do not think anyone would do that. One at a time would be the way - slowly and with patience. It would also be helpful if you could put a partially completed puzzle away for safe-keeping, without any disturbance to the pieces that are already in place. One first then another and another, perhaps a day, or a night later.
    And yet another question - What criterion would you use for the "sameness" of pieces? That would depend would it not, upon the type of those pieces themselves. The criterion of "sameness" in these sequences of actions, is likely to be one of "achieves the same condition" rather than simply having identical appearance.




    And so, with the introduction of DOLL-2, the mechanism has provided itself with a choice among three separate options -

(1) to continue what was being done before,
(2) to react as normal to any new condition which has occurred,
(3) If the new condition is one that happened before and resulted in a priority condition, then either -
    (3a) repeat what was done before (to reach that GOAL state), or
    (3b) do something else (a startle reaction) to avoid an anti-GOAL state.

    The action-response (3b - i.e. the startle reaction) is used when, at an early stage, there is no conscious procedure in operation. It is still used at a later stage but can switch over to a conscious procedure which will search for a preferable form of behaviour. This indicates that the auto-pilot mode of behaviour is not completely without a conscious element. A mechanism which would replicate these characteristics could be as follows -

(1) Be engaged in the construction of some other topic. This is a conscious procedure.
(2) At the same time follow the prepared script assiduously. This is an unconscious process because that script has already been constructed.
(3) Either in parallel with (2) or periodically, for very short intervals, check for discrepancies between the current environment and what was expected according to the script. This is what could be described as "low-level", or "background" consciousness.
(4) If a discrepancy is discovered, generate an interrupt, produce a startle reaction and trigger a re-establishment of a conscious construction of the environment depiction. And initiate a search for an appropriate action.
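Steps (1) to (4) above can be sketched as a toy loop: the script is followed without thought, each step is checked against the environment in the background, and any discrepancy triggers the interrupt and startle. All the names and the shoe-lace scenario below are hypothetical illustrations.

```python
# A toy sketch of auto-pilot behaviour with background consciousness:
# follow a prepared script step by step, but compare the environment
# against the script's expectation at each step; on a mismatch, interrupt
# with a startle reaction and hand control back to conscious search.

def run_script(script, environment):
    """script: list of (expected_observation, action) pairs.
    environment: what is actually observed at each step.
    Returns the actions actually performed."""
    performed = []
    for step, (expected, action) in enumerate(script):
        if environment[step] != expected:   # background check - step (3)
            performed.append("STARTLE")     # interrupt + startle - step (4)
            break                           # conscious search would resume here
        performed.append(action)            # unconscious following - step (2)
    return performed

script = [("lace-left", "pull"), ("lace-right", "pull"), ("loop", "tie")]
smooth = run_script(script, ["lace-left", "lace-right", "loop"])
broken = run_script(script, ["lace-left", "knot-jammed", "loop"])
print(smooth)    # ['pull', 'pull', 'tie'] - pure auto-pilot
print(broken)    # ['pull', 'STARTLE'] - discrepancy interrupts the script
```

The "low-level" consciousness of step (3) is just the comparison inside the loop: cheap, continuous, and invisible until something unexpected happens.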

    We do have evidence of creatures behaving in that way. Konrad Lorenz was a famous zoologist who made a study of animal behaviour back in the 1950s. He built an observation chamber - something like an aquarium I imagine, but larger. Inside it he put soil, rocks, hillocks and vegetation. He then installed a family of water-shrews in a den within the chamber. And then he watched them very carefully ...

QUOTE_LORENZ


    Do humans ever behave in this way - following a remembered script, precisely and without deviation?
    Yes we do. The next time you tie your shoe laces think about what you are doing. And think too about how you are doing it. To what extent are your hands operating in auto-pilot mode? I suggest that you will discover that your mind is not engaged in the task very closely. You know what your hands are doing and if you were interrupted, you could pause and then resume the procedure, but the details of what your hands are doing will seem to be known to your hands and not to your conscious mind. If you had to take conscious control of your hands (in detail) you would need to slow down and think very carefully how to proceed.
    Which is probably why sports men and women spend so much time rehearsing various manoeuvres, so that an action which has been acquired by a process of conscious learning is transferred into what is sometimes called "muscle memory", and can then be performed automatically without being slowed down by the process we call "thinking consciously".
    There is another bit of human behaviour which is an illustration of this kind of following of a script without a great deal of thought. It is sometimes referred to as driving on auto-pilot. It usually occurs when we are driving a car on a road with which we are very familiar. It is a practice that should not be encouraged. Nevertheless it happens and we usually "snap-out" of that mode of behaviour when something unexpected happens. Usually also it occurs when we are engaged in thought about something else.

Note-2: It seems now that that may have been (one of) the evolutionary precursors of full consciousness. It also appears to me that we have now discovered an explanation for the phenomenon of "focus of attention". The current focus is defined by the nature of the depiction structure which is currently being built.




4.07 DOLL_3
            (Compression: Chunks becoming Concepts)


The mechanism with DOLLS, 0,1,2,3


    DOLL_3 ushers in a change in the pattern of events. We still observe an improvement in the process of thinking. But DOLL_3 offers the mechanism information in a more compact form. It takes up less space. It also offers the data packaged in a new way.
    This makes possible a new and more efficient method of constructing representations of experience. The change is subtle however. What DOLL_3 does is to make it possible for the mechanism to think using concepts.
    The chunks of data which the mechanism used prior to the creation of concepts, had components which were present in the recorded memory of a given event. A concept however, goes further. At a later stage, when a brain compresses the data pertaining to several similar events, it includes data which were associated with some of these events, but not necessarily all of them. So we could say that these are "hypothetical" components which might be present, but which also might not be present. And that introduces the idea that our representation of events could be partly or perhaps (at a later stage) completely imaginary (yet still based on actual experience). So when we construct a representation of an event, using concepts, that representation can include what could be termed "near misses". And with that we usher in the concept of "dangerous events" - events which might have had bad outcomes, or "adventures", or the opposite - potentially good ones.



4.08 An Example: Compression creating "Chunks"

    To consolidate these ideas, (i.e. about the inclusion of hypothetical elements in memory recall) consider this example. A memory is constructed and when the mechanism tries to reduce the volume of the stored material, it identifies several points at which the same set of events or circumstances is repeated. I shall represent these using alphabetic symbols, like this -



    ABC .... ABC .... ABC .... and so on.

    Where the dots (i.e. "....") represent arbitrary unimportant material.

The compression mechanism extracts the chunks of material which are repeated exactly giving the corresponding compressed form which then becomes -

    (ABC) .... <=! .... <=! .... <=! ....

    Where the symbol "<=!" is an address pointer which "points" at the repeated item (ABC). When it is necessary to reconstruct these memory data, EITHER the data represented by (ABC) is restored to its original location (within the memory record being restored) OR these memory data are copied elsewhere and the pointers are followed to find and reveal the original data content.

Note: Later, when we reach the discussion of DOLL_6 (sub-section 4.15) we shall encounter the possibility of an individual learning about dangerous events from others (by linguistic communication) and the possibility of acquiring information about dangerous anti-GOAL experiences, without needing to have experienced them personally.

    The data represented by the characters (ABC) are what I have been calling "chunks" and it is these which can be slotted back into the appropriate spot when it is called upon to do that. Note that the content represented by (ABC) is material which has actually been experienced on at least one specific occasion.
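The extraction of chunks, and their restoration by following pointers, can be sketched in a few lines. This is a minimal illustration only; the pointer symbol is borrowed from the notation above, and everything else is my own assumption.

```python
# A sketch of "chunk" compression: an exactly repeated sub-sequence is
# stored once, each occurrence is replaced by the pointer "<=!", and
# decompression follows the pointers back to the stored chunk.

CHUNK = ("A", "B", "C")

def compress(memory):
    """Replace each exact occurrence of CHUNK with a pointer."""
    out, i = [], 0
    while i < len(memory):
        if tuple(memory[i:i + len(CHUNK)]) == CHUNK:
            out.append("<=!")
            i += len(CHUNK)
        else:
            out.append(memory[i])
            i += 1
    return out

def decompress(compressed):
    """Follow each pointer and restore the chunk's original content."""
    out = []
    for item in compressed:
        if item == "<=!":
            out.extend(CHUNK)
        else:
            out.append(item)
    return out

memory = ["A", "B", "C", ".", ".", "A", "B", "C", ".", "A", "B", "C"]
packed = compress(memory)
print(packed)                        # ['<=!', '.', '.', '<=!', '.', '<=!']
print(decompress(packed) == memory)  # True - nothing significant is lost
```

Note that compression here is lossless: the round trip restores the record exactly, which is what distinguishes a chunk from the concepts introduced in the next sub-section.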


4.09 An Example: Compression creating "Concepts"
    Now let us look at how "concepts" are formed. Let us revise the original memory store by adding extra symbols which occur along with the repeated material in certain circumstances. This results in the memory record shown below.

    ABCx .... ABCy .... ABCz .... ABCx .... ABCy .... ABCz .... ABCx .... and so on.

    The procedure which extracted the chunk (ABC) would have ignored these extra symbols, x, y, z, because these symbols represent events or circumstances which, within the experience of the creature concerned, did not occur on EVERY occasion along with ABC. But the extra symbols are part of the system's recorded memory and so might now be regarded as vital information. So how could these extra symbols be recorded by the compression procedure? ... Like this perhaps -

    (ABC)-[x,y,z] .... <=! .... <=! .... <=! .... etc.

    Once again the backward-arrow-exclamation-mark is an address pointer, but now it points, not only at the stored symbols representing those characters which were ALWAYS present within the identified repeated items, but also at the extra characters [x,y,z], which can be interpreted as meaning ...

    ALWAYS (ABC) and SOMETIMES also [x and/or y and/or z]. To increase the stored information we could also add a number to each item in the and/or list to indicate the PROBABILITY of its appearance. That information could be obtained by counting the proportion of the examples of occurrence in which that symbol was present. That additional information could be useful but, since experience of events is continuous, would almost certainly be quickly made obsolete and need to be updated. With the addition of that extra information, those stored "chunks" become (in my terminology) "concepts". I shall return to that issue when we deal later with the role of dreams in concept formation.
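The formation of a concept from such a record - the ALWAYS-present core plus the SOMETIMES-present extras, each with a counted probability - can be sketched as follows. The data structure and all names are my own illustrative assumptions.

```python
# A sketch of concept formation: from repeated occurrences, keep the
# symbols present on EVERY occasion as the core, and record the rest
# as SOMETIMES-present extras with their observed probabilities.

from collections import Counter

occurrences = [("A","B","C","x"), ("A","B","C","y"), ("A","B","C","z"),
               ("A","B","C","x"), ("A","B","C","y"), ("A","B","C","z"),
               ("A","B","C","x")]

def form_concept(occurrences):
    core = set(occurrences[0])
    for occ in occurrences[1:]:
        core &= set(occ)                 # keep only ALWAYS-present symbols
    extras = Counter()
    for occ in occurrences:
        extras.update(set(occ) - core)   # count SOMETIMES-present symbols
    n = len(occurrences)
    return core, {sym: count / n for sym, count in extras.items()}

core, sometimes = form_concept(occurrences)
print(sorted(core))                      # ['A', 'B', 'C'] - the chunk
print(sometimes["x"])                    # 3/7 - x was present 3 times in 7
```

As the text notes, these probabilities would need continual updating as new occurrences arrive; in this sketch that simply means re-running the count over the enlarged record.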




4.10 What Kept Concept Formation Going?
    There is a problem with this idea that evolution might have produced the construction of concepts. Concepts are of use only when you have a fairly large collection of them. But concepts are formed only one at a time. So how does the mechanism of forming them progress beyond the formation of the first one? There has to be some immediate advantage gained (even something very small) by the formation of that very first one, which will keep the process going. Otherwise the formation of that very first one would produce a species that was prone to failure, and would be liable to become extinct.
    That first very small advantage, I think, is the more compact space that concepts occupy. That could be the immediate marginal advantage. So the mechanism forms chunks of information, and then compresses them into an even smaller space. And it keeps going.
    Consider now, what happens when memories are reconstructed. If memories are reconstructed using concepts instead of chunks, those reconstructions include material that (in some instances) did NOT ACTUALLY occur, but MIGHT HAVE occurred.
    Consider what that means. It means that the reconstruction of memories includes information about what might have happened. Therefore, a creature which has access to that information can acquire a concept of danger, of near-miss, of adventure, of thrill, of good luck and bad luck. I think that that adds up to the acquisition of a sophisticated understanding of events and circumstances. Human understanding is the most sophisticated of course. But I suspect we share that type of understanding, to some degree at least, with many other mammals - such as apes, wolves, pigs, horses, whales and dolphins.
    Given that information, when a plan is being formed, with the intention of some GOAL being achieved, it might be advisable also to consider whether some adverse event MIGHT occur, and to consider that before any response is finally selected.


4.11 De-compression
    The ability to compress data to make it take up less space, is useful only if the mechanism also has the ability to de-compress it. That is, to recover all the significant information of the original version. One way to do that is to physically restore the data, that was removed from its original location, to its original location. That, however, is not always a procedure which is practical, especially if the space vacated has already been re-occupied by some other type of data.
    What is usually a more practical method of restoration is to leave the removed data where it is, and to then re-connect it with its original location using pointers. A pointer is a small snippet of information which is left behind in the original location (when the data is removed). That snippet carries information about the new location where the data can be found. The idea is very similar to the address pointers often used in computer science.
    The change from the re-construction of the original data (prior to compression) to the construction of an understanding about what can occur in the world in general, is important. It marks a change from information about an actual experience, to more general knowledge of what I have called here an "understanding" about potential events. And by making that change, we can understand more about possible future events and carry that further to potential longer-term consequences.
    Once the mechanism gets started along this pathway it can gradually expand the amount of material that is included in a comparison as it looks for commonality between chunks of material. I imagine that the process could start with a comparison of physical attributes like shape or material components. But it can also gradually expand the comparison to include more and more stuff - like the use to which things are put.
    Consider, for example, the concept "chair". Chairs do not all look alike. Some of them have four legs. Some have three. A shooting stick has only one leg and a flat-topped boulder has none at all. But each of these can, in the right circumstances, be regarded as being "a chair" nevertheless.
    The context within which we deal with objects is also important. To the owner of a shop all the goods he or she sells correspond to the concept "stock". Everything from a bag of sugar to a motor car or even a large piece of real-estate, can be regarded as "stock". The owner buys them and then, hopefully, sells them. That is quite a broad category but the process of concept identification and construction, if continued far enough, will encompass all of these concepts - eventually. Within the system we are putting together, the concepts "bag of sugar", "motor car" and "stock" will all coexist simultaneously. They will have different roles to play and they will also overlap. The structure of the store of concepts will be very complicated and will reflect the way and the sequence with which an individual may have identified them, and incorporated them into that structure. No two people will have them organised in their minds in the same way.
    The point is this - unless we have a mechanism that can do all of that, we are not going to be able to construct a mechanism with human-like patterns of behaviour. This DOLL_3 may not fit easily with the other DOLLS but it is an essential part of the evolution of the mechanism that we seek to construct.

    It occurs to me that searching the data store for the examples of various superficially different things, which can be used as tools for a common purpose, could be one of the types of expertise for which we could develop special DOLL characteristics. I have not included a description of any such DOLL in my inventory of DOLL specialisms. But it does seem to me, at this juncture, to be a distinct and possibly fruitful possibility. At certain times, which could occur while the brain is asleep, a DOLL could search out the memory of tools being put to a particular use. It could then compile a list of all the memory content when that happened. There will be occasions when different objects were used to achieve similar ends. Think, for example, on the different objects which can (in extremis) be used as a hammer, or a tin-opener. I can recall an ingenious and very hungry friend opening a sardine can (after the opening key had broken off) and doing so with an ice-axe and a karabiner. The sardines, I may add, emerged totally macerated, but my friend achieved the state of nutrition that he desired (somewhat desperately, as I recall).



4.12 DOLL_4
            (Representing "REALITY" using "CONCEPTS")


The mechanism with DOLLS 0,1,2,3,4



    DOLL_4 consolidates the acquisition of concepts. Concepts differ from the compression chunks, as we saw earlier, by incorporating much more information than could be included as a result of simple compression. An important role is also reserved for what we could now call "causal connections".
    The philosopher David Hume was the first to draw our attention to the fact that there was something very strange about the concept of "causation".
    We observe two events - X and Y, with X apparently happening before Y in such a way that when we observe X, we are able to anticipate the observation of Y. Why? Why are we able to predict the observation of Y with confidence? We see a bright flash in the sky and immediately anticipate that we will hear a loud roll of thunder shortly.
    We see a stone falling to the ground and anticipate hearing a thud as it strikes the ground.
    Hume explained our expectations by suggesting that we think there is what he called "a necessary connection" between the two events. But what he also correctly pointed out, is that no matter how hard we try to observe that connection we do not actually observe it, or in any other way experience any perception of it.
    Statisticians correctly warn us to avoid jumping to the conclusion that there is a causal connection between two events, merely because we have observed the repeated juxtaposition of these events in time. We should require some additional confirmation before we form that conclusion. For example, if we can prevent the occurrence of X (the cause) and then note that Y (the supposed effect) fails to occur - we are, to some extent justified in seeing that as confirmation of a causal linkage. But there is no perfect confirmation possible.
    A causal connection is just one of many things that we may think we observe, but which, in fact, we do not.
    Consider gravitation. Before Einstein offered us an alternative explanation of gravitation, Newton's version of events held sway. It seemed certain then that stars and planets etc. "attracted" one another. Confirmation of that seemed to be provided in the form of predictions. We could calculate orbits, eclipses etc. with some precision. A few exceptions did little to shake our confidence in how dependable those calculations were ... until the Michelson-Morley experiment (in 1887), which gave us a result which could not be explained using Newton's Laws. Specifically - light appears to travel at a single fixed speed no matter in which direction we may be travelling when the measurement is made.
    Einstein started from that point and offered an alternative explanation based on the idea of the distortion of space - a counter-intuitive idea which (as it turned out) gave much more accurate predictions and explained a few things that Newton's version of events, could not.
    Even so, we do not actually observe the distortion of space. What we do detect are experimental results that we use to explain what seems to be the distortion of space - gravitational waves - the apparently periodic variation of distances in a carefully controlled space. And that will just need to suffice until some new Einstein-like person offers a better alternative - which might never happen but which might - we can never eliminate that possibility.
    And then again, when we are able to make a direct perception of some happening, with our eyes or our ears etc., what is really happening (we should recognise) is that some aspect of our external environment is having a direct effect on some molecule or some other physical aspect of our eyes, ears or some such. Any one of us could have a hallucination. We can never be absolutely certain of the validity of any perception.
    Radiation from the Sun affects the cells in our retinae. Physical surfaces affect the nerve endings in our finger-tips, and so on. These apparently observable effects operate in such a way that brain signals are generated and pass into our brains, where they are processed and ultimately trigger some response action. Or so we suppose. If medical science improves we will perhaps acquire a more detailed and accurate description of these events. But our perceptions are mechanisms, like instruments. We think we understand how they are able to do what they do, but even that is subject to possible change. So what seems to be direct perception of some event does not guarantee perfect knowledge of that event.

    Note, however, this important difference between what DOLL_4 is doing and what previous DOLLs were able to do. Until this stage, all the mechanism could do was reconstruct memory acquired in the past. That was done by restoring remembered material (which had been compressed) to its appropriate locations. But DOLL_3 and DOLL_4 have converted these compressed chunks into concepts. Concepts contain material drawn from ALL the occurrences which shared some aspects of those remembered events. When these bits of information are included within the structure of the concepts, they become adjuncts to the remembered events - things which MIGHT have occurred, but have not necessarily done so. So what the mechanism has constructed is not the memory of any single past event, but what I call here an "UNDERSTANDING" of such events (in general). Once this change has been put into effect, it is possible that those other DOLLs, which were previously operating without the assistance of concepts, could now upgrade themselves and begin to use concepts as well. Potentially they all understand reality somewhat better than before.



4.13 DOLL_5
            (Representing "REALITY" using "SELF")


The mechanism with DOLLS 0,1,2,3,4,5 and SELF



    There is one very important component of these observed events, which is missing from these reconstructed representations of past experiences ... I refer to the concept SELF. If these events are part of the personal experience of an individual person, then it must be the case that that individual was present when the event occurred at the location where it occurred. That is not necessarily the case, but we have not yet introduced the idea that a person can acquire experience by being told about these experiences by other people. That comes later with another DOLL.
    So, putting that to one side, a representation of that person must be a component of the experience. There is no reason why the introduction of that SELF should be reserved for a separate doll, but neither is there any reason why it should not. Since doing so does not introduce any obvious difficulty or defect in the system, I shall reserve it for DOLL_5.
    What is SELF? SELF is a physical object. Presumably it is a human being. The only occasions when it will not be occur in (imaginative) stories about animals or toys. I shall ignore these for the present.
    But SELF is more. SELF not only has the usual complement of arms, legs and other body parts, but it has a brain; it has beliefs; it has memories and also appears, to itself, to have emotions ... like hopes, fears, dislikes, skills, aversions, intentions, etc., which together, and in various complicated ways, are the sum total of that inner person. These aspects and/or properties define a person.
    A person is several things -

    (1) a physical object within the environment;
    (2) a mental structure located within a brain
        (a mental structure is also a physical structure - made of brain-stuff);
    (3) a repository for the representations of all its components and properties;
    (4) the place where the memory record of all those "conceptual understandings" of remembered events will be stored - yet another type of memory, which we can call an "episodic" memory.

    This one is constructed using representations of experienced events, which I have called here "understandings" of events. Storing these, and later recovering them from that storage, necessitates reconstruction, and that, according to my suggested explanation of consciousness, is the active, on-going performance of a conscious experience.

    The very long record of all these events includes - not everything, but most of the things, and particularly the most significant of events, that have happened since those memories began to be recorded. That represents the memory that that person has of his or her own life-span. Unlike the other memory records we discussed earlier, this one can be used to predict future events and in particular it can be used to predict (and with reasonable accuracy) how this particular person is likely to react to these future events. So a person can predict how SELF (that person) will behave in the future. We have words which describe how we expect a person to react. "Courageously", "cowardly", "reasonably", "insightfully" and so on. These are some of the descriptive terms - which are also emotional terms. I shall return to that topic in the next section.



4.14 Consciousness and the Concept of SELF
    We have reached a point in my account of the evolution of the DOLLs, where I can define what I mean by "being conscious".
    My thesis suggests that "being conscious" means having several types of awareness.

(1) Awareness of perception and corresponding reaction.
(2) Awareness of past experience. (TRACE memory)
(3) Awareness of past behaviour. (LTSM memory)
(4) Conceptual understanding of events and circumstances (Concept formation)
(5) Awareness of consequences. (concept of causal connection)
(6) Awareness of SELF as a component of reality. (SELF + episodic memory)
(7) Awareness of consequences of exercising choice options.
(8) Awareness of other people's opinions. (communication using language)

    Some of the difficulties we have with the concept of consciousness can be attributed (I think) to the way we have given the phenomenon a single term and failed to recognise that it has several components, all of which must be present before we can describe a person or a creature as having "full-blown consciousness".
    So "being conscious" means having all of the abilities listed above. An individual can be "partially" conscious or "momentarily" conscious when that individual has any or all of the components (1), (2) and (3) above. Items (5) and (6) introduce the abilities needed to remember events and circumstances and to have full-blown consciousness.
    I will expand on these ideas later - in the discussion section (section 5.00ff). In that section I will present an important and very convoluted argument. I will argue that consciousness is not some kind of separate property of a brain. I suggest that the development of the physical mechanism which I describe here is, in fact, the mechanism of being conscious. The items of information which that mechanism puts together are what the individual (who owns that brain) then automatically knows (consciously). When that mechanism is described, what is being described is what that person knows (and how he/she is able to know it).
    For the present, however, I will restrict myself to suggesting that before the emergence of the SELF concept, the mechanism cannot be any more than momentarily aware of its own actions. That is, a brief kind of awareness of what it is doing, as it is doing it, but not aware of any related issues, like WHY it is doing these things.

Note: While I do not want to devote many words, in this text, to the criticism of alternative views, it has to be admitted that there are those who take a different view of consciousness. The main thrust of their arguments seems to consist of little more than forceful assertions, which are put forward without supporting evidence or logical arguments. Perhaps the most assertive of these claims is the claim that the conscious mind and the physical mechanism of the brain are two separate entities and that there is (as they claim) "no evidence" to the contrary. This is the so-called "Mind-Body Duality", which dates back, at least (and famously), to René Descartes, and possibly to long before that. The most extreme of these claims is that it would be possible to produce a robotic entity, referred to as a "zombie", which would be able to behave in a way that is indistinguishable from human behaviour, but which is totally without any form of consciousness. The main arguments advanced for this, by Chalmers, are -

(1) The "Explanatory Argument", and
(2) The "Conceivability Argument".

These are presented by Chalmers as follows -

The Explanatory Argument:
    1. Physical accounts explain at most structure and function.
    2. Explaining structure and function does not suffice to explain consciousness.
    3. No physical account can explain consciousness.


The Conceivability Argument:
    ... it is conceivable that there be a system that is physically identical to a conscious being but lacks at least some of its conscious states. Such a system might be a zombie: a system that is physically identical to a conscious being but that lacks consciousness entirely. .... .... There is little reason to believe that zombies exist in the actual world, but many hold that they are at least conceivable. We can coherently imagine zombies, and there is no contradiction in the idea that reveals itself even on reflection. As an extension to the idea, many hold that the same goes for a zombie world: a universe which is identical to ours but in which there is no consciousness.

[Chalmers 2010 p106-107]



    In support of these arguments Chalmers claims that this position is supported by several famous persons, including Descartes. I concede that Descartes was an extremely intelligent man and that he was responsible for many wonderful ideas. But I would also claim that when he espoused the idea of the Mind-Body Duality, he made a significant mistake. He never gave an effective response to the objections raised by Elizabeth of Bohemia.

    In response to the arguments offered by Chalmers, however, I would say that much depends upon how we do our conceiving and our reflection. For example, I can conceive of a circular triangle (drawn on a two-dimensional sheet of paper) but I cannot imagine (even after prolonged reflection) how such a thing could be drawn in reality - in this world or in any other. I also point out that, according to my version of things, consciousness is a function, and that therefore, if, as Chalmers concedes in item 1 of his Explanatory Argument, a physical account can indeed explain structure and function, then we may legitimately draw the conclusion that physical explanations can, after all, explain consciousness.



4.15 DOLL_6
            (Other People and the ability to use language for communication)


with DOLLS 0,1,2,3,4,5,6, and language



    DOLL_6 is the part of our brain mechanism which recognises that it is part of a community of people. It is also where it acquires the ability to communicate with that community. That distribution of expertise seems somewhat burdensome for just a single DOLL to handle, and so it may well be the case that we should enhance DOLL_6 by regarding it as a sub-community of subordinate DOLLs - 6(a), 6(b), 6(c), ... and so on.

    Our ability to use language did not come suddenly from nowhere. A more plausible account suggests that language came gradually, with time, using, initially, body-language, facial expressions and gestures. Many other mammals (dogs, whales, apes, etc.) got that far but not much further. When a dog stands in front of you, staring at its dog-bowl, and then at you, and then again at the bowl - it is asking you a question. It is saying "Does that bowl not remind you of something?"
    But what I refer to here is an ability to communicate complex ideas. To do that we use natural language, and other means such as drawing pictures and enhanced body-language signals.
    The acquisition of language, according to my thesis (Noble 1988, 2012), comes after the ability to form concepts. So concepts come first. Then comes an association between the sound-labels of words and those concepts. A person (the speaker) utters the sound-label for a concept. Another person (the listener) identifies those words (by their sound) and associates each with a concept. That other person brings a copy of (or a reference to) the relevant concept structure from his or her store of concepts, and assembles these concepts, with all their associations, into the growing structure that is a depiction of the meaning of the utterance (by the speaker). The listener then identifies any associations which are peculiar to him/herself and labels them as such.
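The lookup-and-assemble process just described can be sketched as follows. The lexicon entries and field names are hypothetical, chosen only to illustrate the mechanism of associating sound-labels with stored concepts and building a depiction:

```python
# Hypothetical concept store: each word's sound-label maps to a concept record.
LEXICON = {
    "dog":   {"type": "object", "features": ["animal", "four-legged"]},
    "barks": {"type": "action", "features": ["sound", "loud"]},
}

def interpret(utterance):
    """The listener's side: look up each word's concept and assemble a depiction."""
    depiction = []
    for word in utterance.split():
        # Bring a copy of the relevant concept structure from the store.
        concept = LEXICON.get(word, {"type": "unknown", "features": []})
        # Append it to the growing structure depicting the utterance's meaning.
        depiction.append({"word": word, **concept})
    return depiction

meaning = interpret("dog barks")
```

A real listener would, of course, also merge associations, discount personal nuances, and so on; this shows only the skeleton of the assembly.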

NOTE: A Digression (on these Objections)
    That theory about the nature of a conversation and the communication of meaning had also been suggested by several others over many years, and, equally, had been denounced, several times, by others. Janet Fodor called it the "Ideational Theory of Meaning" and she described the way it operates with these words -

    "I have a thought or an idea, I formulate a sentence, I utter it to you, and when you hear it you come to have the same thought as me. This may be a crude picture of the way language is used but it is not an obviously false one."

[Fodor J. 1977 p17]



    The objections to this theory, which she had thus raised, were also widely held. These objections were as follows -

(1) Concepts (or "ideas" as she called them) in the minds of different people are not identical.
(2) While the theory may work successfully for declarative sentences, it is not clear how it could be a successful mechanism for processing questions or commands.
(3) It is not clear what type of concept (or idea) could be associated with a word like "How".

    Bertrand Russell offered an objection which was identical to Fodor's. He asked what meaning should be associated with the word "How". I can, however, offer an effective response to those objections.

(1) The concepts of people are not identical.

Answer 1: People are generally aware of those differences between individuals, and so when they formulate an utterance, they discount those personal and individual nuances of meaning from what we may call "the public meaning" of a statement. This is true of both the speaker and the listener. This mechanism does not always operate perfectly, but failures can be recognised and clarified by the use of additional statements.

(2) What about (a) questions and (b) commands?

Answer 2a: In the context of a declarative sentence the method of communication and understanding operates as described above. In the case of a question, however, the speaker is requesting clarification or confirmation of a suggested circumstance. "Is that door open?" clearly indicates two alternative configurations of a particular door.

Answer 2b: A command makes a statement about some circumstance, and adds to that the additional information predicting that the speaker will be displeased (to some degree) if the person addressed does not take action to ensure that a different condition pertains - e.g. "Shut that door!". A command also indicates the state of mind of the speaker - or state of mind he or she will be in if the door is not shut.

In both cases (2a) and (2b) the meaning of the utterance, is an action. In the case of (2a) it is an action concerned with perception - to check the prevailing circumstances and then to report on the findings. In the case of (2b) it is a physical action - the closing of the door - or it requires a report denying that the door is open.
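The claim that the meaning of a question or a command is an action can be given a minimal sketch, using the door example above. The function names and the toy "world" are entirely illustrative:

```python
# A toy world containing one door, initially open.
world = {"door": "open"}

def perceive(obj):
    """The action behind a question: check the prevailing circumstance and report."""
    return world[obj]

def act(obj, state):
    """The action behind a command: change the world to the requested condition."""
    world[obj] = state
    return world[obj]

def interpret(utterance):
    # Each utterance maps to an action, not merely to a static description.
    if utterance == "Is that door open?":
        return ("report", perceive("door"))
    if utterance == "Shut that door!":
        return ("done", act("door", "shut"))

answer = interpret("Is that door open?")   # a perception-and-report action
result = interpret("Shut that door!")      # a physical (world-changing) action
```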

(3) What is the meaning of HOW?

Answer 3: The concepts with which individual words are associated range over several different categories. Consider an analogous situation - the construction of a model aeroplane or battleship. The kit which we purchase for this purpose contains several component parts of the finished article. But it will also contain several other items. One of these will be a set of instructions which tells us how to put those component parts together. That kit will also contain a very important item - a tube of glue which will be used to join those parts. An utterance must also contain words which serve those same functions. So a word like "How" will introduce a phrase or clause which describes the mechanism of some action taken. In these circumstances the meaning of "How" is once again an action, but this time it is a linguistic action - to interpret the meaning of a verbal phrase, followed by another appropriate action - perhaps making an addition to the meaning structure of an utterance.

    An example: -
    "How he pulled out those nails was with a claw hammer."

    Other words like "on" and "in" and "in front of", will provide similar information about the way a representational structure should be assembled.
    I think that my responses to those objections listed above and raised by Janet Fodor and others are effective. Accordingly I defend the validity of what Janet Fodor calls the "ideational theory", but I do add that small extra element concerning the use of embedded action specifications. In that new form I call it the "Construction Kit Theory of Language Understanding", or briefly - the "Kit Theory".

Note: The difference between my own explanation of the meaning of words and phrases and those of Fodor, Russell and others, appears to be that I am prepared to consider the possibility that the meaning of a statement might, in some circumstances, (or even in most circumstances), be an action of some kind, whereas I suspect that they have not considered that possibility.
    I attach some importance, and significance, to having had personal experience of using a computer programming language which is able to treat data and a program specification as interchangeable items: being able to embed a program specification inside a data structure, like any other numerical or textual data item; then, at some later time, being able to activate that program and watch its behaviour unfolding before my eyes (optionally in slow motion), rearranging the other data items within that same data structure, and on occasion even altering its own location and structure.
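The kind of experience described here can be sketched in Python, where a program specification (a function) is an ordinary data item. The field names are illustrative, and this is only a minimal analogue of what the author describes, not the language he used:

```python
# A data structure holding ordinary data AND an embedded program specification.
record = {
    "items": [3, 1, 2],
    # The "program" is just another field, stored like any other data item.
    "program": lambda data: data["items"].sort(),
}

# At some later time, activate the embedded program; it rearranges the
# sibling data items within the same data structure.
record["program"](record)
# record["items"] is now sorted in place.
```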

End Digression.


To continue .... Having associated each word with its own meaning concept, and, in some instances having triggered the requisite actions, and built the structures defined in that way - the mechanism will then append these new structures to the growing depiction structure which it is assembling.
    It is the re-doing of that assembly process, that is, the construction of the depiction of all that information of which that person is consciously aware ... that is the physical mechanism which is the procedure we call "being conscious".
    That is why I say that being conscious is a function. It is a process which produces a result - a structured set of data which can be used to re-assemble the whole structure over and over again.
    And the doing of that re-assembly will inform the person who owns that brain - enabling that person to re-see the images, re-hear the sounds, re-feel the textures, and so on.
    That person will also be able to re-understand the significance of those experiences, because that person will be able to re-access all the appropriate forms of response. And so that person will be able to predict (with reasonable accuracy) the new experiences which will flow from these actions.

    So, if this structure is built during a conversation, the structure (or rather the data needed to reconstruct it) will be associated with the representation of the mind of the speaker. If the listener decides, for one reason or another, to believe the speaker, the new structure may be added to the representational structure of so-called "reality". That connection will quite possibly be maintained. Versions of this reality will be associated with the listener's own belief, with the speaker's belief, and, perhaps, with a supposed external reality. This leaves open the possibility of these structures being somewhat different. If we did not keep open that possibility we would shut off the ability to represent the belief that "this person is lying to me" or "this person is mistaken".
    Of course, if SELF corresponds to the LISTENER rather than the SPEAKER, these positions will be reversed. What that implies, however, is that both parties to this conversation must, in addition to a SELF entity, have an equivalent representation of the other person's brain - either a SPEAKER or a LISTENER.
    I appreciate that by including these details along with the interpretation of an utterance, we are making the process very convoluted and problematic. Nevertheless, because human relationships are indeed complicated, I do not see how we can avoid producing a representation which also has those complications.

Note: When we are discussing the evolutionary emergence of language, it is relevant to note that, while most people associate the emergence of language with Homo Sapiens only, and place it well within the last half million years, Prof Everett (Everett 2012) has argued that Homo Erectus could also speak. He has found archaeological evidence that these people (if I may use that term) visited offshore islands, and that doing so involved coordinated paddling in order to deal with unhelpful currents. He argues (I think persuasively) that this also required the ability to exchange linguistic instructions. This would imply the advent of language some 1.8 million years earlier than commonly supposed, under the influence of selection pressure to coordinate behaviour.




4.16 Types of Word and Types of Concept
- and a Complex Multi-dimensional Structure which "understands" circumstances


    A major problem which arises when we deal with the interpretation of language is that the form which language is constrained to take, when it is presented to us, is essentially linear. In contrast, the form in which our understanding of a given circumstance occurs in the external world is multi-dimensional. The difficulty is how we can use a linear presentation to describe or represent a multi-dimensional circumstance - and then build a structure which will represent that multi-dimensional circumstance adequately. It is obvious that many words can be associated with the physical components of reality. The representation of these is easy enough at an elementary level.
    But listing the multiplicity of dimensions which are presented to us, and which sometimes require to be represented, is difficult. I will try nevertheless.
    We have the three orthogonal spatial dimensions, which may often be required; but we also have intrinsic coordinates, when an object under consideration has its own in-built coordinate system. Consider, for example, a verbal description which refers to some other object being "in front of a car": does that expression refer to a position which is in danger of being struck by that car if it moves forwards, or does it refer to a position which is between the car and some observer?
    Next we have a single time dimension. But complications may arise even with that. Most of those complexities are created by our ability to predict events which have not yet happened. We often refer to some event coming "before" another; when we do that, should we regard it as being "before" the predicted event, or "before" that prediction is made? The difference can be crucial, because if causal connections are involved, it may not be clear whether the causal precursor really precedes its consequence - and does so in real-time or in predicted-time.
    Further complications arise when two or more events take place in parallel - "I was reading a book when John fell out of the window." A consideration which further complicates the depiction of events of that kind, is that a verbal description is not always precise about the timing of coincidences.
    Complications can also arise when we try to depict a single physical object.
    To illustrate that point, let us take a physical object - a "drum" - as an example. This object, typically, has a particular shape. It is cylindrical. It has components - usually a cylinder made of some rigid substance like wood or metal. The two open ends of that cylinder are then covered in skin, or cloth, or plastic, or any substance which, when it is held taut, will vibrate and emit a loud noise when it is struck with a solid object (a drum stick). There are also various ways of supporting a drum so that it is held in a position where it can be struck by a person repeatedly and conveniently. We could go into more detail about those forms of support.
    But we cannot finish there. To complete this description, we must also add information about how (typically) a drum is used. We could add more information about other types of musical instrument, the sounds they make, about singing, about rhythm, about the pleasure experienced by (some) people as they listen to the sounds produced by all these musical instruments (including that drum of course), and, if we want a complete description, the lack of pleasure experienced by others.

    That is a rather long-winded account - much longer than is usually provided by linguists when they discuss the meaning of words, but I think all of that stuff MUST be included in a complete account of the meaning of such a word.
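One illustrative way to record a concept of that kind as a data structure, including the usage information argued for above, might look like this. All the field names and values are, of course, hypothetical:

```python
# A hypothetical concept record for "drum": not just shape and components,
# but also how the object is typically used and what it is associated with.
DRUM = {
    "category": "musical instrument",
    "shape": "cylindrical",
    "components": {
        "shell": ["wood", "metal"],              # a rigid cylinder
        "head": ["skin", "cloth", "plastic"],    # held taut over each open end
    },
    "typical_use": "struck repeatedly with a drum stick to produce rhythm",
    "associations": ["music", "rhythm", "singing",
                     "pleasure (for some listeners)",
                     "displeasure (for others)"],
}
```

The point of the sketch is that the "typical_use" and "associations" fields, which a dictionary definition omits, are part of the concept's meaning on the account given here.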
    When we try to implement a computer system which will handle language, we can, of course, start with a simple statement of meaning and gradually expand the formal definition of meaning. And that is what happens if we follow the growing awareness of a child as he/she acquires the ability to use language as a means of expression.
    And it is at that point - as you will be aware if you have read my AUTHOR'S NOTE about how I began my exploration of this problem - that I found, to my discomfort, that all this stuff needs to be included if you want to build a computer system which will converse easily, intelligently (and intelligibly) with humans.
    And that is why we have words (and phrases) like - "before that", "afterwards", "formerly", "at the same time as", "while", "then", "next", "sometimes", and a great many more. We also have a prefix to some words, like "ex-" as in "an ex-husband", which re-defines the time context of the information being provided.
    There is also the issue of location - so we have words like - "there", "alongside", "in front of", "miles away", "behind", "inside", "outside", "in", "out", "beyond", "near", "nearer than", "towards", "away from", .... and also many more.
    What I am describing, of course, are the meanings of prepositions and prepositional phrases. And there are a host of other kinds of words, which we all understand, but which we seem to think can be understood simply by attaching them to a syntax label. That is not the case. We need, as I have indicated here, a full description of the implications of the use in sentences of each of those words.
    How can we express a meaning for the word "but"? That word means that something contradicts our normal expectations. That cannot be expressed properly without explaining what expectations are. So to get at the meaning of a very small word like "but" we need to describe a person (or an animal) and explain how and why it has expectations. Now I know that that is a very onerous requirement, BUT I do not understand how and why we can dodge that requirement and still understand quite ordinary sentences.

    Note - An Important Rule: Do not store the data. Store the means to re-construct the data. That is, withhold some, or perhaps most, of that information, and then trigger the actions which will make that data available - but only as the need arises, when the clues which can trigger that action appear.
    And that prompts me to state a general rule which should govern the attempt to find a good method to depict these complex circumstances -


RULE: All complex circumstances should be omitted, unless and until they have been explicitly included.
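The rule of storing the means of reconstruction rather than the data itself can be sketched as follows - keep only a clue and a reconstruction procedure, and run the procedure only when recall is triggered. The names and the example content are illustrative:

```python
def make_memory(clue, reconstruct):
    """Store only a clue and a reconstruction procedure, not the data itself."""
    return {"clue": clue, "reconstruct": reconstruct, "cache": None}

def recall(memory):
    """Trigger the reconstruction only when the need arises."""
    if memory["cache"] is None:
        memory["cache"] = memory["reconstruct"]()   # reconstruct on demand
    return memory["cache"]

# At storage time, nothing but the clue and the procedure is kept.
m = make_memory("sardine tin",
                lambda: "opened with an ice-axe and karabiner")

detail = recall(m)   # reconstruction happens here, not at storage time
```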

    What role in all this does syntax play? When a sentence is spoken or written, there is some kind of implied grouping of the words. Syntax is a system of place assignments which are given to words. These determine the inter-relationships between the words. They also assign the positions within a sentence (or utterance) across which there is no implied relationship or no direct relationship. Syntax is therefore a convention by means of which we can group words to combine meanings.
    There are some snags however. One of my favourite snags is illustrated by the phrase - "advanced passenger train". Those of us who speak English within the UK have learned, by hearing that phrase, that it means a train (of advanced design) for transporting unspecified passengers. It is definitely NOT an unspecified type of train for transporting passengers of an advanced age - although, since the words "advanced" and "passenger" are placed next to one another, we might have been forgiven for thinking otherwise. That is an example which illustrates how syntax is defined by common usage. Each natural language has its own idiosyncratic syntax. There are no general rules which apply to all languages.
    In developing his linguistic theory, which was heavily dependent upon syntax, Noam Chomsky tried to escape that fact by claiming that there was something he called "Deep" or "Universal" Grammar, which each of us knows innately, and which requires us to acquire, by learning, a set of "transformations". These convert the "Deep Grammar" syntactical relationships into the syntactical relationships of any given "real" natural language. These rules were often contrived and (in my view) implausible. He did that in an attempt to show that his analysis was applicable to all natural languages. I think his attempt failed.

Note: Professor Daniel Everett, who studied, and learned to speak the language of the Piraha people who live in the Amazonian jungle, also takes a disapproving view of Chomsky's theories. Everett published a grammar of the Piraha language, and came to the conclusion that Chomsky was mistaken because of (apparently) a disconformity he identified between the Piraha grammar and Chomsky's theories. (Everett 2005), (Chomsky 1957).

    A good example of (what I think is) the implausibility of those syntactical transformational rules is provided by what is called "Negative Transportation". According to Prince (Prince 1976) the sentence -

(1) "Max doesn't believe that Anne will leave."

is ambiguous and could reasonably be interpreted as meaning the same as either -

(2a) "It is not the case that Max believes Anne will leave."
or
(2b) "Max believes that Anne will not leave."

    Complicated rules have been devised which provide for these two options. I think it is more plausible that the seeming availability of (2b) is the result, not of any formal rule which requires us to adopt (2b) as a correct interpretation of (1), but of our becoming accustomed to speakers who use the language carelessly. These careless speakers often use a sentence of form (1) incorrectly to mean (2b) rather than (2a). It is an idiosyncratic usage which is so common that we may well find it more convenient to ignore the misuse and to anticipate the mistake. That is how we deal with even more common and blatantly erroneous usage.
        "We done it."
    We do not need complicated rules to help us understand that (2a) is a correct interpretation of (1) or that (2b) is not a correct interpretation. But we also know that (2b) is a near-miss and close enough not to matter greatly.
    The trouble is compounded, however, by the fact that a statement involving a belief may be true, while at the same time, the circumstance which is the substance of that belief, may not be true. In other words, a person can believe something which is not true. If (2b) does happen to be true, then that will imply that (1) is also true, but the reverse relationship does not apply. The transformational rule which substitutes (2b) for (1) is therefore not a reliable device for discovering the precise meaning of a statement, but is a handy rule of thumb which may help us discover what the speaker intended us to understand.
    If the speaker is present, this is a circumstance in which we could interrogate the speaker to discover the true meaning of the utterance. Otherwise, however, we may need to rely upon those rules of thumb to assist us to get closer to the true meaning. The overall result is a sufficiently close approximation to enable us not to be too concerned.

    The rules of any given syntax keep changing. Currently those of us who are at the age when we become creatures of habit, are resisting pressure to use the expression -
    "We were sat on a bench when ..."
That is an American-ism, of course. I would prefer to say -
    "We were sitting on a bench when ..."
(past tense continuous) ... which sounds correct to my ears. I have no doubt others will think I am being pedantic. But that, I think, reveals a truism ... syntax is determined by what sounds correct. And everyone's ears hear words differently.
    Some words with other syntactical classifications are more easily handled. Adjectives modify the properties of nouns. Adverbs (currently disappearing) modify verbs (speed of action etc). These are relatively easy to understand. It is the small, frequently used words which often give us the greatest difficulty. Like - as, on, if, the, a, I, you, him, it, ... To what does "it" refer in the expressions "It is raining" and "It is twelve o'clock"? I would say that "it" refers to the general ambiance or condition of the environment.
    Consider please, this sentence ...

    "John kicked Bill on the knee, on the spur of the moment, on a Saturday, on a bus, in anger, in December, in the middle of the night, in the middle of an argument .... "

    Could you provide a meaning for those two words "ON" and "IN" in each of those circumstances? Or would you find it easier to deal with that statement if you embedded a routine inside the meanings of both "on" and "in" which would, when required, investigate the nature of the sentence in which it was located, and could test various alternative strategies for dealing with the context?

    In the book in which I first tackled these problems, I explored, in the last few chapters, ways of dealing with the structure of sentences (Noble 1988). This involved breaking away from the traditional dependence upon various word classifications, and upon the analysis of a given sentence to find which pattern of those classifications would be the most appropriate. I explored an alternative approach. Each word was associated with a specification of its meaning. These meaning structures were general and did not include any details of - for example - the agent responsible for a given action. Syntactical information about how the identity of that agent could be discovered was transferred to a routine, and that routine was then embedded within the meaning structure of each word (as required). This produced a system in which each word, and its associated meaning, brought with it its own relevant syntactical knowledge. These embedded routines were then extracted and triggered into action, one at a time. Repeated execution of these routines was enabled if earlier attempts failed. My reasoning was that the traditional syntactical approach was too inflexible. I preferred that each word should bring with it its own version of syntactical analysis. I preferred also that syntax should be relegated to a subordinate role within the superordinate role of semantic interpretation. I freely confess that these early attempts on my part, with the indifferent equipment I had at that time, operated much too slowly and had numerous failings.
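    The scheme just described - meaning structures which carry their own embedded syntactical routines, extracted onto an action list and retried on failure - can be sketched in miniature. This is a hypothetical illustration only (the names, and the crude noun-class test, are invented for the example), not a reconstruction of the original 1988 system.

```python
# Hypothetical sketch: each word's meaning structure carries an embedded
# routine holding its own syntactical knowledge. Routines are extracted
# onto an "action list" and retried until no further progress is made.

class Meaning:
    def __init__(self, word, routine=None):
        self.word = word
        self.slots = {}          # e.g. {"agent": ...}, filled in gradually
        self.routine = routine   # this word's own syntactical knowledge

def find_agent(meaning, sentence, meanings):
    """Search leftwards from the verb for a word classed as a noun."""
    position = sentence.index(meaning.word)
    for word in reversed(sentence[:position]):
        if meanings[word].slots.get("class") == "noun":
            meaning.slots["agent"] = word
            return True
    return False                 # failed: will be retried later

def interpret(sentence, meanings):
    action_list = [meanings[w] for w in sentence if meanings[w].routine]
    progress = True
    while action_list and progress:
        progress = False
        for m in list(action_list):
            if m.routine(m, sentence, meanings):
                action_list.remove(m)   # success: routine leaves the list
                progress = True
    return meanings
```

    Run over the sentence "John kicked Bill", the single embedded routine fills the verb's agent slot with "John". The retry loop matters only when routines depend on one another's results.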
    I have explored these issues further in

Appendix D on Language






4.17 Verbs
    Ah Ha! A verb - the core of the meaning of a sentence. There is normally only one main verb in a sentence. Despite my rejection of syntax as an approach to sentence analysis, I still find it useful as a formal classification for the analysis and ordering of a textual description of words.

The structure of a verb


The diagram shows the way the mechanism which I envisage processes the part of a sentence which contains a verb. In this example the verb is not identified. The definition of the verb which is stored in the brain (after its meaning has been learned) is used to translate it into a structure (as shown). (* see note below) Each "box" shape in the diagram is a record structure. The agent and object are initially unknown. Each record represents some component sub-structure. One of these (probably several in most instances) is a causal connection between a cause (entity or condition) and an effect (entity or condition). Each component has a time-stamp which indicates its location in the time-sequence of events. The identities of the agent and object of this verb are usually found in the locations shown. Syntax is often used to identify the actual locations in any given example, but that bit of the procedure can sometimes be by-passed altogether.

Note: The embedded functions or routines are endowed with sufficient syntactical knowledge to be able to recognise (for example) when each finds itself in a passive voice sentence. Each routine will assume, initially, that it is in an active voice sentence, and will search to the left to find the "agent". But if it finds one of the keywords which indicate a passive voice structure (like "was", "were" or "had been", coupled with a past tense version of the verb), it will abandon that leftwards search and search to the right.
    The classification of individual words associated with syntax has its uses. Syntax is useful in this context to help these search routines identify the target they seek. As a sentence is input to the system, the appropriate meaning structures are appended to each word. The embedded functions are identified, extracted and placed on an "action list". These functions are then activated from the top down. If successful, they are removed from the list. If they are unsuccessful, they are replaced on the list to await their turn to be activated again. When I first tried this strategy I also experimented with routines which could search back over several previous sentences. This seemed useful for finding a previous mention of a referent for "the man", whereas no such backwards search over previous sentences was initiated in respect of "a man".
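    The voice-sensitive search described in the note above can be sketched as follows. The routine assumes an active-voice sentence and searches leftwards for the agent, but on meeting a passive marker it abandons that search and looks rightwards (after "by") instead. The marker set and the is_noun test are simplifying assumptions for the example.

```python
# Minimal sketch of a voice-sensitive agent search: leftwards first
# (active voice assumed), rightwards after "by" if a passive marker
# is found on the way.

PASSIVE_MARKERS = {"was", "were", "been"}

def locate_agent(verb_index, words, is_noun):
    # Leftwards search first (active voice assumed).
    for i in range(verb_index - 1, -1, -1):
        if words[i] in PASSIVE_MARKERS:
            break                        # passive structure: abandon leftwards
        if is_noun(words[i]):
            return words[i]
    else:
        return None                      # no marker, no noun: give up
    # Passive voice: the agent usually follows "by", to the right.
    for i in range(verb_index + 1, len(words) - 1):
        if words[i] == "by" and is_noun(words[i + 1]):
            return words[i + 1]
    return None
```

    With "John kicked Bill" the leftwards search succeeds at once; with "Bill was kicked by John" the marker "was" diverts the search rightwards to find "John".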

    The rectangles which have no words written on them, and are coloured pale blue (in the diagram above), are intended to show that there are some components which are not shown. Some of these represent additional entities (such as subordinate anatomical components). Others represent relationships which exist between the components (which may not be shown). An important component, when it is presented along with an animate agent, is one which carries the legend "Brain of ..." with the ID of the agent. That is an indication that the action described may be a deliberate action caused by the agent (and the agent's brain).
    A fuller explanation of my approach to the interpretation of language has been transferred to an appendix. If you wish to read it in full, click here ....

APPENDIX D: LANGUAGE

    Alternatively, just keep going here (below). You can read it later. All the appendices are listed and can be accessed at the end of this document.


* Note_1: The issue of how the meanings of various words can be learned is of more than passing interest. If a growing child is at an early stage in that learning process, the thesis proposed by Chomsky would seem to imply that, before that process can even begin, that child must have an innate knowledge of "Deep Grammar", and then must learn the transformations needed to convert that "Deep Grammar" structure into the syntax of one particular natural language. What puzzles me, however, about this over-elaborate story is that I cannot imagine what it is that that child is applying his or her knowledge of syntactical analysis to. There has to be some grist for a mill to grind, before that mill can function.
    These ideas seem to require a great many different and quite separate skills to be acquired, without any clearly defined time-sequence, before a language can be learned. All this is odd enough to strain my credulity.
    I prefer my own version of events - that a child will learn the meanings of words first, in isolation; will then slowly acquire some familiarity with the patterns within which words might occur together, starting with two-word phrases; only later will that child begin to become familiar with the formal syntax of longer expressions. I think that happens after the child knows a good deal about the meanings of certain phrases. That is, the meanings of words are learned first. The acceptable patterns of these words are acquired later. Syntax, in my view, is a later-acquired refinement of semantics.
    I find it altogether more plausible that a child will have already learned the nature of concepts, and some knowledge of how these can be put together to form a representation of reality. After which, all that child needs to do is learn the associations between the sounds of the spoken words and certain individual concepts (which he or she already has in his or her brain). That child already knows how to put these concepts (i.e. their meaning structures) together to form a representation of reality. The learning task is almost complete, long before the acquisition of syntax puts a lid on it.

Note_2: According to Pinker, Chomsky himself has admitted that his syntax theory is not compatible with the conventional understanding of evolution (Pinker 1994 p355).

PINKER


    The verb in a sentence carries the main thread of the narrative of the whole sentence. A verb (normally) tells a story. The structure which the mechanism constructs runs forwards in a normal time sequence. The various rectangular structures shown in the diagram correspond to entities, the relationships between entities, and actions which "cause" some other new relationships to be created. The whole structure has empty slots for the causative agent, the object, instruments and other such items. It has as many "cases" as a grammarian would routinely associate with a verb. We can also add extra cases if it seems appropriate to do so. A verb (normally) has a tense which tells us when the deed was done (past, present or future), and also whether or not the deed continued over a period of time. The rest of the sentence provides indications of the identity of these components. To construct a meaning structure for a sentence, we must identify all of these and slot them into their appropriate positions. To be able to do that, we must first build the structure which represents the meaning of the verb, and then add to it sub-structures which represent the causative agent, the object, an instrument, and all the other stuff. We must also place the time of the action in relation to a notional concept of NOW. I am simplifying of course. Many sentences do not conform to that typical layout. Syntax is often useful for disambiguating awkwardly phrased sentences. Then again, some people converse informally without a recognisable trace of syntactical precision - another reason why I am sceptical of Chomsky's enthusiasm for syntax analysis.

   
"One of the things that have seemed puzzling about language is that, in ordinary speech, sentences are true or false, but single words are neither."

[Russell 1962]



    That is because what is true or false is a (supposed) condition which refers to the interpreted meaning of a sentence. Single words, generally, do not have any well defined meaning, because usually bits of their meanings are missing. These missing bits can be discovered (again, usually) by referring to the sentence within which those words occur. The process we call "learning a language" is a gradual process during which we abandon communication based on the use of single words and single phrases, and adopt a form of multi-word communication in which the various sentence structures, and the relative positions of words, provide us with additional information.

Note: Absolute truth is a theoretical condition which, in principle, is normally unobtainable. We may approach that condition, as when in a court of law we prove some proposition "beyond reasonable doubt". In general, however, we can never quite reach it - except in mathematics. In that context, however, truth is always subject to certain pre-conditions (or axioms) which must (conditionally) be assumed to be true ... which means, of course, that the concept of truth, is once again, hedged in by caveats.



4.18 A Proposed Evolutionary Development of Language Understanding
    The confidence I have about the correctness of this approach to language processing is based on the fact that I can identify a relationship between the process outlined here and the process of understanding what we see, hear, and otherwise perceive with our eyes, ears etc. The way in which we understand language utterances is based upon, and could plausibly have evolved from, the process I have already described by means of which we understand our surrounding environmental reality, through the agency of our perceptions. If we had doubts that the method outlined here had achieved a suitable interpretation of a given sentence, we could demonstrate its adequacy (or otherwise) by using the resultant structure as the specification for a short animated cartoon.
    The basic source of information comes in the form of linguistic utterances. But the final output takes the form of the structure in the diagram. It is important to note, however, that our more general understanding of "reality" acquires its information from perceptions, and that it too (in this approach) delivers its output in that same format. We can see, therefore, that there is a clear pathway of evolution for the understanding of linguistic statements, from the mechanisms which had already been developed and used for the (conscious) understanding of events and circumstances prior to the acquisition of a facility for using language. I find that plausible.



4.19 Three Versions of Belief

Three versions of belief


    DOLL_6 (or its associated internal community of DOLLS) introduces at least three versions of belief. These belong to -

(1) its own representation of SELF,
(2) its own representation of external REALITY and
(3) a version which belongs to a person, or persons, with whom the brain is in communication.

    The mechanism then needs to make a decision about whether it will believe the version being presented to it by the speaker. Note that it can decide to believe that version of events even if it is not identical to a version of belief associated with SELF. That might be the case if the two versions refer to different aspects of reality. One of these will be under observation by SELF. The second will be reported to it by the SPEAKER. It might also occur when SELF recognises that it has been mistaken. The question then is whether that new version of events should now be incorporated into the beliefs of SELF.
    Once again, this is not a decision that is made freely by SELF using some independent decision process. It is a decision which is sometimes forced upon SELF for social reasons - i.e. by the confident presentation of the SPEAKER, or its own prediction of disapproval by society and subsequent disadvantage experienced by itself. Much will depend on the way in which the mechanism represents the social community, and that will vary from person to person.



4.20 Who or What Makes that Decision?
    In the previous section I made this statement -

            "The mechanism then needs to make a decision ..."

    I now question the validity of that comment. It is not clear what it is, exactly, that I am referring to when I use that phrase "the mechanism". It is quite clear that SELF is a physical structure which is located within the skull of a particular person - that is, within another physical entity - or, at least, that is what is being assumed. But although it is perfectly clear that certain parts of that SELF have physical counterparts - like anatomical components of the physical body of that person - it is also clear that there are other parts within that SELF structure - like the episodic memory of events and experiences - which do NOT have any external counterparts. That is, there does not exist another "real" episodic memory within which that person stores experiences. This memory record within SELF is not a reproduction of something else. It is the real thing.
    And so - until that structure which I am calling an "understanding" of events and circumstances is stored inside SELF, the mechanism of that person's brain, and therefore that person him/herself, cannot remember what has been experienced. (I am assuming that that understanding will be constructed elsewhere within the brain and then stored in SELF when it is complete. But it could be that the actual site of construction is within SELF.)
    I have heard and/or read many strange things being claimed (or simply implied) about the relationship that exists between the physical brain and the supposed metaphysical self, but I have never heard it claimed that a metaphysical self could be aware of some information of which the physical brain itself was not aware.
    So, until all that stuff is stored within SELF, that person concerned will be completely unaware of what he or she has experienced. Therefore, so far as he or she is concerned, these events have not happened at all.
    The same is true of other aspects of SELF. When any form of "thinking" takes place within that structure, that is once again "the real thing", not a representation of some action which is really happening somewhere else.
    We will bear all that in mind when we are discussing WHEN it is that that conscious understanding begins to operate as the brain travels along that suggested evolutionary pathway.
    My thesis hinges on this point. I will return to this issue in the discussion (section 5) where I will present a much more detailed analysis of the consequences that arise when we abandon the assumption (which is often a hidden assumption) that consciousness is a metaphysical phenomenon which acquires the various bits and pieces it needs to be able to do its job .... from nowhere at all it would seem, and somehow does not require any normal kind of explanation.



4.21 The Representation of the Community
    Earlier, I described the techniques of so-called "List Processing". This is particularly useful when the entity being represented has multiple contents and these change frequently. I now introduce an adaptation of that technique, termed "Dynamic Lists". It is particularly useful when representing lists of indeterminate length, or even of infinite length.
    Recall that a list structure is a chain of structures called "List-Links".

A list link


    These can be chained by ensuring that the address pointer of each list-link "points" at the numerical address location of the next list-link. This is illustrated in the next diagram. The list is terminated by arranging for the last pointer to point at some "null" value, shown here as an "Earth" symbol (borrowed from electronic diagrams).

A list structure


    Note that to delete a datum from the list we need only rearrange address-pointers as shown below. Datum_3 (blue) has effectively been deleted from the list, even though it is still present in the diagram. The list-link it occupies is now available to be overwritten by some new datum.

list deletion


    We can also note that there is no reason why we must limit ourselves to just one address-pointer per list-link. In this way, with a judicious choice of list-link structure, and even with mixed list-link structures, we can represent all manner of different things, such as family trees, complex machinery, or organic structures. Perhaps the most interesting, however, is the dynamic list which can, to an extent, take control of its own structure.
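    The pointer-rearrangement described above can be sketched with Python object references standing in for numerical address pointers, and None playing the part of the "Earth" terminator. A minimal illustration, not part of the model itself:

```python
# List-links chained by references. Deleting a datum only reroutes one
# pointer; the orphaned link is untouched and free to be overwritten.

class ListLink:
    def __init__(self, datum, next_link=None):
        self.datum = datum
        self.next = next_link    # None plays the role of the "Earth" symbol

def build(data):
    head = None
    for datum in reversed(data):     # chain the links front-to-back
        head = ListLink(datum, head)
    return head

def delete(head, datum):
    """Unlink the first link holding `datum` by pointer rearrangement."""
    if head is not None and head.datum == datum:
        return head.next
    link = head
    while link is not None and link.next is not None:
        if link.next.datum == datum:
            link.next = link.next.next   # bypass the deleted link
            break
        link = link.next
    return head

def contents(head):
    out = []
    while head is not None:
        out.append(head.datum)
        head = head.next
    return out
```

    Deleting Datum_3 from a four-link chain leaves the other three links exactly where they were; only one pointer has moved.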


4.22 Dynamic Lists
    The diagram below shows the main features of a dynamic list. The HEADER_RECORD defines the contents of the normal list-link. These can be more or less anything we can think of. One of the elements of the Header Record, however, is what is called "THE GENERATOR". This is the name of a computer program (or subroutine) which, when prodded into action, will create a new list-link and append it to the list of members of the attached list. Other elements of the Header Record can tell us, for example, the maximum number of members and the current number of members. We can even imagine this list to be infinite in size.

A dynamic list structure


    Consider the representation of a football crowd. It is not infinite, but it is usually larger than we would care to specify - and certainly larger than we would willingly attempt to represent every member individually. Consider also this excerpt from a story ...

    "A crowd of men burst into the room. One of them brandished a sledge hammer ... "

    No mention here of exactly how many men. It is an indeterminate number, but definitely not infinite. So we do not generate any members at all but we might have guessed a number greater than 5 but less than 20. (They all need to get inside the room.) Add that information to the header record. We then generate one of these men and place a sledge hammer in his hand.
    "In his hand". How can we represent that circumstance? We can express it using an active programme of events such that, if the hand moves, the hammer will move with it. The programme will need to have a commensurate degree of complexity which can deal with the type of representation required. Admittedly the result is complex. We must not be frightened by these complexities.
    Other details may emerge as the story continues and these too can be added to the growing representation. The technique is powerful and can accommodate all the problems, concerning multiple contents, which I can anticipate.
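    The crowd example above can be sketched as follows. The class and field names are my own inventions for illustration: the header record holds guessed bounds on the membership ("more than 5 but fewer than 20") and a GENERATOR which, when prodded, creates one new member on demand. No member exists until the story requires one.

```python
# Hypothetical sketch of a dynamic list: members are generated lazily,
# and the header record carries bounds rather than an exact count.

class DynamicList:
    def __init__(self, generator, min_members, max_members):
        self.generator = generator       # creates one new member on demand
        self.min_members = min_members
        self.max_members = max_members
        self.members = []                # initially, nobody is generated

    def generate_one(self, **details):
        member = self.generator(len(self.members), **details)
        self.members.append(member)
        return member

def make_man(index, **details):
    return {"id": f"man_{index}", **details}

# "A crowd of men burst into the room. One of them brandished a
# sledge hammer ..." - only the armed man is ever made explicit.
crowd = DynamicList(make_man, min_members=6, max_members=19)
armed_man = crowd.generate_one(holding="sledge hammer")
```

    Further men can be generated one at a time as the story mentions them, while the crowd as a whole remains an indeterminate number within the guessed bounds.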



4.23 The Principle of Minimal Detail
    The same technique (i.e. dynamic lists) could be used to represent liquid and gaseous entities - i.e. the stuff that things are made of. The number of atoms in a balloon for example ... or the Earth's atmosphere. We could vary the number of atoms per cubic meter at different altitudes. As I said, it is a powerful technique. With it we could build a representation of a liquid being poured from one vessel to another. We might need, in that case, to be able to specify the cohesive forces between the molecules of a liquid and thus be able to represent the viscosity of that liquid - and perhaps even run a realtime simulation. There would be no requirement to include all of these factors on every occasion.

    There is a principle involved here. What is included in a representation, initially, is a bare minimum. Details of the kind discussed in this sub-section are included only when they are explicitly demanded by circumstances. To meet that requirement, however, the data which would deliver an ability of that kind would nevertheless need to be available.
    A requirement for information about viscosity, I suspect, would need to be delegated to yet another specialist DOLL. The method of implementation that I would adopt initially would be to make those occasionally required expansions of detail be introduced by triggering embedded routines. That would enable steadily increasing detail to be delivered in phases, as the construction of an understanding advanced. It would be a slow process, however. In defence of the idea, I point out that it takes an intelligent human child several years to develop that skill.
    This mechanism, however, requires a considerable degree of flexibility. That is yet another reason why I do not accept the mechanism of conventional syntactical analysis. It is too inflexible. There is a need on some occasions for a system which allows the structure to be taken apart and new novel structures to be inserted. My proposal allows that.



4.24 The Context Dependency of Meaning
    I return now to an issue I raised earlier but for which I deferred detailed discussion. Now that I have described list structures, I can supply that detail.
    What happens when a single word has multiple meanings? How could the system I am describing choose the correct meaning in several different and distinct contexts - and do so quickly?
    The first thing we must recognise is that there is no perfect solution to this problem. It should also be noted, however, that the computer systems which produce the sub-titles on TV screens seem to choose the wrong meanings for words rather more often than a human would normally do. One approach which I would expect to operate fairly well would be to place the several meanings of a given word in a list structure. For this technique to work reasonably well we must have several such lists of meanings, with a different ordering of (the same meanings) in each list. The several lists are then associated with certain contexts. The solution then requires that we identify a given context before we start to determine the meaning of each sentence. These contexts are determined by the concepts associated with the words which have already been processed.
    Initially a person is represented as having a physical body and a brain. These two entities and only those entities. But that brain will enter the scene when there is some action involved. The question is - is that action deliberate or accidental? If it is deliberate then that person's brain will be shown to be the causal precursor of that action. If it is definitely an accident, then the brain is declared NOT to be the causal precursor. If there is no indication at all, then it could be either of these two scenarios. And that is what happens when there is no information to indicate which.
    If it is remarked that this person has an ingrowing toe-nail, the appropriate embedded routine is triggered and some anatomical knowledge is added to the representation of that person. Two feet are added. One of them (unspecified) has a toe added (unspecified) and it is represented as having that ingrowing toe-nail. And so the mechanism continues ... adding extra information as it is required.
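    The triggered elaboration just described can be sketched in a few lines. The dictionary layout is an assumption for illustration only: a person starts with just a body and a brain, and anatomical detail is added only when a triggering phrase - here "ingrowing toe-nail" - demands it.

```python
# Minimal-detail representation: nothing anatomical exists until a
# triggering phrase prods the appropriate embedded routine into action.

def new_person(name):
    return {"name": name, "body": {}, "brain": {"causal_precursor": None}}

def mention_ingrowing_toenail(person):
    # Triggered elaboration: two feet are added, then one (unspecified)
    # toe on one (unspecified) foot, carrying the ailment.
    feet = person["body"].setdefault("feet", [{"toes": []}, {"toes": []}])
    feet[0]["toes"].append({"nail": "ingrowing"})
    return person
```

    The representation grows only on demand; before the toe-nail is mentioned, the person has no feet at all in the model.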

    And now, what happens if we encounter a word which is ambiguous? At this point I think I need an illustrative example. For the purposes of this explanation, let us choose the word "paper" and, for simplicity, let us also say that that word has two possible meanings, as follows: Meaning(1) = "examination paper" while Meaning(2) = "newspaper" and, again (for simplicity), let us assume that these are the only alternatives.
    We now need to tackle the interpretation of two different passages.

Passage(1) - "John went into the examination hall and sat at a desk. He studied the paper."

Passage(2) - "John went into the station. He bought a paper at the kiosk. When he had settled into the train, he spread his paper and began to read."

    I have made this example fairly easy. The phrase "examination hall" is the trigger which establishes the list of alternative interpretations of "paper" to read -
        [Meaning(1), Meaning(2)],
    while the concepts associated with "station" and with "kiosk" would have this list of alternatives
        [Meaning(2), Meaning(1)].
    In each case the automatic choice of meaning will be the first in each list. The presence of the other alternative ensures that if there is some mismatch discovered later, it is possible for the system to make a late correction to its interpretation.
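    The whole scheme can be sketched in miniature. Each context carries its own ordering of the same meanings; the first meaning in the active context's list is the automatic choice, and the rest remain available as fallbacks for late correction. The trigger words and context names here are illustrative assumptions.

```python
# Context-ordered meaning lists for the ambiguous word "paper".

MEANINGS = {
    "paper": {
        "examination": ["examination paper", "newspaper"],
        "travel":      ["newspaper", "examination paper"],
    }
}
TRIGGERS = {"examination": "examination", "hall": "examination",
            "station": "travel", "kiosk": "travel"}

def interpret_word(word, preceding_words, default_context="travel"):
    context = default_context
    for w in preceding_words:
        if w in TRIGGERS:
            context = TRIGGERS[w]        # latest trigger sets the context
    ordered = MEANINGS[word][context]
    return ordered[0], ordered[1:]       # first choice, plus fallbacks
```

    Processing Passage(1), the trigger "examination" selects "examination paper" as the automatic choice; processing Passage(2), the triggers "station" and "kiosk" select "newspaper". In both cases the rejected alternative stays on the list for possible late correction.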



4.25 Action taken for the survival benefit of the Community
    And now we can consider the contribution that this DOLL_6 can make to the increasing number of options available to be considered as an appropriate response. For all the previous DOLLs, ALL of the information (to which the mechanism must find a suitable response) originates from its OWN perceptions. It may have originated some time previously, and it may have been heavily processed before being used for these purposes, but never previously has the mechanism been supplied with information which has been produced by other people processing their own perceptions. But now that is exactly what is happening, and the mechanism must choose between its own interests and those of the community of which it is a member.
    The decision is not entirely novel, of course. Close family members have been able to call upon the individual for altruistic responses. But up until now, these decisions have not necessarily depended upon the ability to communicate by language. A child cries and a parent can, or at least might, react instinctively. These altruistic responses could merge almost imperceptibly into the type of decision to which I refer in this section. Nevertheless, I think that this is the start and the source of social morality. That is, action taken which is in the interests of the community and not (necessarily) in the interests of SELF.
    Such action is not entirely without self-interest, however. Courageous, or what could be called "self-sacrificial", behaviour can be associated with some self-advantages. For example, gaining a higher standing (or maintaining one) within a community can lead to a beneficial share of resources, or to the community having a high regard for the offspring of a highly positioned member. The advantage gained by brave action on the battlefield can be a survival benefit for the hero's family - even if it is fatal to the individual. I hasten to add, however, that for this self-sacrificial behaviour to be performed does not require the person acting in this way to be consciously aware of the evolutionary causes. The person who acts thus just has to recognise (and predict) how bad they will feel if they do not act in that way.

Note: I felt that I had to add that clarification because that is the point which Francis Collins got wrong when he claimed that evolution could never explain human altruistic behaviour (Collins 2007).




4.26 DOLL_7

SRA with 8 DOLLS, specialisation



DOLL_7 ushers in a new era of specialisation. This time, however, instead of having a variety of specialists equipped with various specialised anatomical adaptations, as we saw in the pre-DOLL period (exemplified especially by insect colonies), we have specialists equipped with specialised intellectual abilities. Previously, these special abilities were produced partly by gene mutation or by specialised feeding regimes. In this new era, specialisation is determined by personal inclination augmented by specialist education. This is made possible by linguistic communication. I suppose we could stretch a point and describe that specialised education as a kind of specialised feeding.
    We are now given advice from a large group of people who claim to have expertise - doctors, dentists, personal trainers, financial advisers, teachers, scientists (of many different topics), religious leaders, gang-leaders, parents, friends and neighbours, architects, odd-job men, gardening experts, politicians, internet bloggers, newspaper agony aunts, lawyers, social media correspondents, .... Our problem is to decide which of these really are experts and which of them are charlatans. We also need to think about the rationale we use to make that decision.




4.27 Is DOLL_7 the End of the DOLLS?
    Perhaps it is the end of the DOLLs. And then again, perhaps not.
    DOLL_7 marks the approximate end of the sequence of DOLLs. The exact number of DOLLs that we need scarcely matters. We could easily justify fewer or more, and the allocation of functions could also be different. I am not wedded to the number suggested here.
    What does seem inescapable to me, however, is the principle of subdivision into mechanisms which can support several alternative ways of responding to perceived events, and which can do so in parallel. That way, different techniques, even those which are inherently serial in form, can be brought to a conclusion in approximately the same length of time. It also seems clear that the evolutionary development of the basic mechanism (the SRA) cannot be expected to handle the complex requirements of the whole system. The DOLLs idea is simply that - an idea, which offers us a new way to approach the problem and which may release us from the restrictions imposed by the problems of "Time-Complexity" (Sigman and Dehaene 2008).


4.28 An Exceptional Case - a genuine choice
    I argued earlier that the most plausible mechanism we could propose for the way a choice is made, when the committee of expert specialists (the DOLLs) is considering which response should be chosen, is to let that choice "emerge". That is, the choice is made by the self-advancement of the best option available. Each DOLL makes an assessment of its own reliability and submits that assessment value. The set of DOLLs then "chooses" the option with the highest reliability measure - the option which comes first in an ordered list of options. The merit of that approach to decision-making is that the decision can be made rapidly. In optimum circumstances, the ordered list will contain only one option, all the others having withdrawn themselves.
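The "emergence" scheme described above can be sketched computationally. The following is a minimal sketch only: the DOLL names, the candidate options, the reliability values, and the idea of a fixed withdrawal threshold are all my own illustrative assumptions, not part of the RDM proposal itself.

```python
# A minimal sketch of the "emergence" scheme: each DOLL submits its
# preferred option together with a self-assessed reliability score;
# options below a threshold withdraw themselves, and the highest-
# scoring survivor "emerges" as the chosen response.
# All names and numbers here are invented for illustration.

def emerge(submissions, withdrawal_threshold=0.5):
    """Return the submission with the highest self-assessed reliability.

    submissions: list of (doll_name, option, reliability) tuples.
    """
    survivors = [s for s in submissions if s[2] >= withdrawal_threshold]
    if not survivors:
        return None  # no DOLL is confident enough; the decision is deferred
    return max(survivors, key=lambda s: s[2])

submissions = [
    ("DOLL_2", "freeze", 0.40),   # withdraws itself (below threshold)
    ("DOLL_4", "flee",   0.85),
    ("DOLL_5", "hide",   0.70),
]
print(emerge(submissions))  # -> ('DOLL_4', 'flee', 0.85)
```

In the optimum case the author describes, only one submission survives the threshold, so the `max` step is trivial and the decision is immediate.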
    But there is an exception to that arrangement. As we near the end of the DOLLs' sequence, when a great deal more information becomes available, there comes a point at which an important decision is required between at least two options. What is required then is a choice between a response which will enable the SELF to achieve what will be seen as a personal GOAL state (or the avoidance of an anti-GOAL), and a response which will predictably favour the interests of the community of which the individual is a member. For most creatures which live within a single community, that is the choice which must be made. In humans, however, the choice can be much more complicated, because humans tend to live within multiple overlapping communities. Note that this situation can arise only with a species that lives in communities, and within which individuals can exchange information with other members - by language or some other similar means.
    At the point when this decision needs to be made, there can be no automatic "emergence" of the "best" choice. To take the decision, what is required is a prediction of the long-term consequences of each of the available options. On one hand we have an action which we might call a "SELF-INTEREST" response. On the other we have what, in contrast, could be called a "COMMUNITY-INTEREST" response. That, at any rate, is what happens in the archetypal case. There are other circumstances in which that type of stark choice does not apply, but usually that is what happens.
    For humans, however, the problem of choice can be very complicated indeed -

Myself? My family? My ex-family? My friends? My gang? My common interest group? My team? My ethnic group? My class? My educational group (school, university, etc.)? My company? My shareholders? My employees? My clients? My neighbours? My civic national group? My World population?

    And the problematic nature of that decision becomes daily more complicated as those groups and the related problems become intertwined, and as it becomes more and more difficult to favour one group without simultaneously and severely disadvantaging another.



4.29 Housekeeping
    That brings my description of the various DOLLs, and their particular roles, to an end. But the description of the whole RDM system cannot end there without some mention of the additional functions which must also be performed -

(1) to purge memories of redundant material;

(2) to consolidate and expand concept formation; and

(3) to develop various examples of logical chains which lead the search for behavioural pathways to GOAL conditions or the avoidance of anti-GOALS.

    This requires extensive exploration of stored information - the memory stores in the first instance. At a later stage, having processed the memory stores and formed a collection of more informative concepts (which are then stored and used to reconstruct both remembered events and hypothetical predicted events), the mechanism processes that datastore to extend these concepts again, including within them a wider range of material - which is then used in the same way, to construct even more information-laden concepts.
    In this way the store of concepts is converted from being a compressed repository of knowledge concerning remembered events and circumstances, into a compressed understanding of the way the world has been perceived to function.



4.30 Sleeping and Dreaming
    I have made passing reference to the issue of sleep and dreaming several times earlier. I think it is now time to examine these problematic issues more closely.
    When we consider the inherent dangers of the environment which prevailed during the long ages before humans developed our relatively safe urban surroundings - dangers which still exist now for many other species - it is surprising that those species (particularly mammals, such as small mammals and the larger grazing herd-creatures, which commonly act as prey for large carnivores) spend a significant proportion of their lives asleep. That is, they spend a significant proportion of their lives effectively defenceless, with their senses of perception effectively switched off, and therefore unable to be alerted by those perceptions to approaching danger. Various stratagems (such as sleeping up trees, in caves, or in company, to provide sentinels) would have mitigated the dangers, but only to an extent. Of this we can be sure, however: sleeping must, of itself, provide some considerable survival advantage to compensate for these obvious dangers. And there is an obvious component of sleeping which could supply those advantages, in addition to the physical repair of muscles and other body parts. I refer to the activity we call "dreaming". Dreaming makes significant use of the brain. Therefore, dreaming must provide some significant mental advantages.
    Research shows that this is indeed the case. The focus of attention for research in this field has been on dream sleep. There may be several types of dream sleep, but two are well known: REM ("rapid-eye-movement") dreaming and NREM ("non-rapid-eye-movement") dreaming. During REM dreaming the muscular control of the eyes is very active, and involves the rapid swivelling of the eyes from side to side.
    If NREM dreaming is denied, test subjects suffer a decreased efficiency in brain functions - particularly in memory function. If humans are denied NREM dreaming over an extended period, they become confused.
    It has also been shown that REM-dreams are associated with an enhanced ability to form logical transitive connections or chain-logic (Walker 2017).

    Walker offers the example of the logical and transitive relationship "is larger than". If the test subjects have observed when awake that (A > B) and (B > C), they are then apparently able to acquire a knowledge that A > C (provided they have been permitted to have REM-dreams thereafter).
    I assume from that, that the same is true of a relationship such as (A causes B) and (B causes C) leading to the conclusion that (A causes C). If that is indeed the case, then it is clear that REM-dreams are required to be able to formulate the causal behavioural chains which enable us to construct patterns of behaviour which would help us to achieve GOAL conditions. It is obvious too that an ability of that kind would be essential to help us survive in a complex environment where every individual could not realistically have actual experience of every variation of all the circumstances that are possible.
    Another relationship which, based on these research findings, one would expect would also be dependent upon REM-dreaming, would be a "leads-to" relationship. Hence if ("X leads-to Y") and ("Y leads-to "Z") then it must be the case that ("X leads-to Z").
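The chaining of transitive relations described above (is-larger-than, causes, leads-to) can be sketched as the computation of a transitive closure. The sketch below is purely illustrative; the particular relation instances are my own invented examples, not data from the research cited.

```python
# A sketch of "chain logic" over a transitive relation: given observed
# pairs such as (A, B) and (B, C) for a relation like "is larger than"
# or "leads-to", derive every implied pair such as (A, C).
# The relation instances are invented for illustration.

def transitive_closure(pairs):
    """Repeatedly join pairs (a, b) and (b, c) into (a, c) until stable."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

observed = {("A", "B"), ("B", "C")}     # observed while awake
derived = transitive_closure(observed)  # knowledge available after chaining
print(("A", "C") in derived)  # -> True
```

On the author's conjecture, this kind of joining of observed links into longer chains is the work that REM-dream sessions perform; denial of REM sleep would then show up as shortened chains, i.e. a closure computed over too few iterations.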
    This ability to chain relationships in that way must be one on which the ability to form a plan depends - a plan which will achieve a desirable GOAL condition, or find alternative ways of avoiding anti-GOALs. Drew McDermott, who specialised in developing and programming plans for achieving goals, and who developed the programming language "CONNIVER", has described the problems associated with complicated logical chains in programmes of these kinds as "difficult" (McDermott 2001). I use the term "Chain Logic" as a general description for the putting together of these complex plans involving transitive logical linkages.

    It is rather difficult to devise algorithms which can accomplish the creation of a store of concepts from the store of simple memories (of successive momentary conditions of the SRA). But once again the principle of segmentation comes to our aid. REM dreaming can be segmented: there is no reason why the complete process must be finished during a single nightly session. All that is required is a more or less coherent succession of sessions, during which several component algorithms can be activated on several nights in succession, and during which a given component algorithm can process the results obtained during previous sessions.
    It has been shown experimentally (Walker 2017) that denial of REM sleep is detrimental to our ability to form logic-chains. If my conjectures are correct, a greater degree of damage will be produced by long-term denial of REM dreaming sessions. To demonstrate harm it should not be necessary to deny REM dreaming completely, so we would not need to create unethical laboratory conditions for human test subjects. It would only be necessary to shorten the REM-dream sessions - and such conditions must arise, as a side-effect, when a person is on shift work, or lives in a busy urban area with the noise of vehicular traffic, noisy neighbours, fights in the street and so on. The effects will be seen as shortened logic-chains. I predict that a comparison between city and quiet rural environments should show that those from a noisy city environment have (comparatively) more difficulty in understanding that some particular kinds of events have multiple, long-term causes. Urban dwellers will also show an increased tendency to believe in stereotyped characterisations of ethnic groups - e.g. that all people of (choose your own target group to be denigrated) are focused greedily on the acquisition of money, or drugs, or are lazy, or are inclined to criminal activity, etc.
    The analysis of results is made difficult by the fact that all peoples (and all subdivisions thereof) have some proportion of their numbers prone to such beliefs.


    It occurs to me that an alternative to the formation and storage of long chains of logical relationships, to fit a large number of scenarios, would be the development of a collection of constructional units, like this ...

Constructional unit for building chain logic


    The development and use of a constructional unit of that kind would be more realistic than the laborious procedure of storing a precise chain to solve each individual problem. Admittedly, that would require a little more time for the assembly of a solution in real time, but it would reduce the amount of storage required, and it would put some limit on the search space, compared with what would be required if no preparation had been done at all.
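The trade-off just described - store single links and assemble a chain only when a GOAL is posed - can be sketched as a small search over stored units. This is only one possible reading of the constructional-unit idea; the link names and the choice of breadth-first search are my own illustrative assumptions.

```python
# A sketch of the "constructional unit" idea: instead of storing a
# complete causal chain for every scenario, store single links
# ("X causes Y") as reusable units, and assemble a chain on demand
# when a GOAL is posed. Link names are invented for illustration.

from collections import deque

def find_chain(links, start, goal):
    """Breadth-first assembly of a causal chain from stored single links."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for (a, b) in links:
            if a == path[-1] and b not in path:
                queue.append(path + [b])
    return None  # no chain of stored units reaches the GOAL

links = {("strike match", "flame"), ("flame", "lit candle"),
         ("lit candle", "light in room")}
print(find_chain(links, "strike match", "light in room"))
# -> ['strike match', 'flame', 'lit candle', 'light in room']
```

The storage cost here is one entry per link rather than one entry per complete scenario, at the price of the small search performed at assembly time - which is exactly the trade-off the paragraph above describes.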




5.00 Discussion

5.01 On Knowing, (and on NOT Knowing), about Reality (External and Internal)

    Most of us understand or will readily accept that a conscious brain is able to construct an internal representation of things - things that we normally (and quite reasonably) assume are the constituent parts of an external reality.
    In this discussion, however, I want to challenge these assumptions. Perhaps it would be more correct to say that I want to invite the reader to think about the nature of those internal structures which the brain can build. It is not that I think that assumption is wrong, or that I deny the existence of external reality. What I challenge is the idea that the internal representational structure which we create in our minds is only that. I suggest instead that it is really much more - that it is, in fact, a real and essential component of the mental operations to which we give the general term "thinking". Until that internal structure is available to depict what we understand our own thoughts to mean and to imply, we merely "act". We act without regard to the consequences of those actions; without considering what alternative actions might be possible; and without calculating what their alternative consequences could be. In short - we merely "act". We do not "think".

    Our perceptions detect events and circumstances which occur and exist in that external reality. They deliver to our brains structures consisting of neurones, axons, dendrites and synapses in various arrangements. These, for brevity and simplicity, I will call "brain-stuff". And with that brain-stuff our brains then build a structure which is, in effect, a map of our environment.
    When we are at home and there is a power cut, we are able to use that map as a rough guide and, with the additional guidance of our sense of touch, grope our way to where we keep an emergency torch or a supply of candles and matches, and then use these to cast a light on the sudden darkness. On that, at least, perhaps we can agree.
    We all know, however, that our internal representation of that external reality, can be wrong. In extreme cases, we could be experiencing a dream, or suffering from hallucinations.
    But I want to go a good deal further than that. I want to draw the reader's attention to this - there are parts of that internal representation of external reality, for which that term "representation" is inappropriate. There are parts of that structure which are not a representation of anything at all, because they are the real thing in themselves.

what is external and internal with respect to SELF

The diagram shows a schematic of the mechanism. The two panels (coloured pale blue and pink) deal with the conversion between some external format of signals and the internal format which I have called "brain-stuff". The contents of the central panel (coloured pale grey) have no external counterparts. Most of the units within this section constitute "reality". The structural units within this panel are, for the most part, the "real" thing.


    These are parts which have no "external" counterpart anywhere else. I am talking about predictions of future events, and about memories of past events (except in astronomy where, because the distances are so vast and the speed of light is finite, we find ourselves looking out at the past). I am also talking about that part of the internal structure which depicts SELF. The representation of the physical body of SELF does indeed have an external counterpart. But the part of it which depicts the memory, inside the brain of that physical body, has no external counterpart. That memory part is not a representation: it is the real thing. It is the real memory of SELF - and therefore, so is the action which commits all of that internal structure into storage within that real memory of SELF (including those representations which DO have external counterparts). And so, until that memory-storage action is performed, the SELF is unable to remember any of these events. Furthermore, what it is then able to remember is precisely the events and circumstances that are depicted by that structure - all of it, and absolutely nothing else.

    If that suggestion is correct, the implications could be very significant. It means that when we think, conscious experience of that thinking could be automatic. It could be that whenever (and whatever) we think, we will always be consciously aware of what we are thinking - but only while we are thinking.
    Once that thought has been thought - it is gone - vanished. It has no persistent existence. And that means that if we are asked afterwards about that conscious experience, and if the structure which depicts what that thinking was all about has not been stored safely in that memory of SELF, then the only possible honest response to the question will be to deny that there ever was any conscious experience of the events which took place. The implication is that honest reports on our own conscious experience are not, in certain circumstances, a completely reliable source of information.
    To express that thought briefly - the active process of building that internal depiction of the meaning of our thoughts, is precisely the brain mechanism's way of thinking consciously. That is, it may not be a process which generates or in some way creates conscious experience. It is the actual mechanism of being conscious.



5.02 Building a Structure to Understand Events and Circumstances
(which we can also understand readily enough)

    What I am trying to devise in this section, is a vocabulary and a notation which we can use to describe what this mechanism is doing, and which we also find intelligible.
    That structure which I previously called a "representation of events and circumstances" will henceforth be called a "depiction of events and circumstances". The word "representation" seems to me to hint that there are two versions - a real version and a "representation version" (i.e. a copy) which represents the real thing. It need not be interpreted that way, but I want to avoid any such suggestion.
    Anyway, this structure tells a story which we understand. The story has components which are concepts. Most of these concepts correspond to things which we can perceive, by sight, hearing, touch, smell or taste.
    These are the component parts of the structure, and they are made of brain-stuff. However, to discuss them between ourselves, and to make ourselves able to understand them, we must present them to each other in some other form. So they must be written, or drawn in diagrams, in a form or format which is easily processed by our brains into the format of brain-stuff. And for that, the easiest way to present them is in the format of language. We already have the brain systems which can deal with that.

    For that purpose, I propose a simple combined graphic and language technique. We draw these components as rectangular boxes and write inside these boxes some description of what we are able to perceive. It has to be perception by vision, I am afraid: I have not been able to devise a way of presenting scents, or sounds, or tactile sensations, or tastes, on a printed page.
    Here is a list of items to which these correspond:

(1) physical objects;
(2) the properties of these physical objects, such as the positions they occupy within that reality, relative to one another;
(3) the shapes they have, and the shapes and locations of their sub-components;
(4) how they move about in relation to one another;
(5) some clues about why they move in the way they do (so that we can predict what will happen);
(6) how we must move in order to move freely among them, and do so without banging into any of those physical objects.

    These, then, are the components of our internal representation of external reality. These, too, are the things which the rectangular shapes in my diagram represent (see 4.17).

A structural unit for building depictions


The ID number consists of an arbitrary alphabetic prefix, say "A", and a generated unique number, to produce a composite alpha-numeric code such as "A374".

Time-Stamps are of two types.
    Type-1 is a relative time-stamp such as T1 or T2. These run in sequence to produce a series such as T1, T2, T3, T4, .... and so on. Each rectangle, or other drawn shape, gets one of these time-stamps assigned to it. Elsewhere there may be an additional indication which tells us the sequence order T1 < T2 < T3 < ....
    Type-2 is a time-stamp which may depict the time of one part of a whole structure as it relates to some hypothetical point in time called "NOW". This allows us to depict the timing of an action or condition, or the tense of a verb in various ways, to indicate a continuing action, or perhaps the time of some action which has definitely been completed, or a time when some action in the future is anticipated.

The Legend is denoted "XXXXXXXXX". This is a short section of text which is a human-readable name for a physical object. It can also be the name of a component, e.g. "Leg of ID-number". Other possible legends identify some perception, such as "Movement of ID", which may carry a reference to (for example) a "Leg of ID". We can also have a rectangle which depicts some unperceived entity (but one known or assumed to be present), such as "Brain-of ID".
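The structural unit just described (ID number, time-stamp, legend) can be sketched as a simple data record. This is only one possible encoding, assumed for illustration; the field values and the helper name `make_unit` are my own inventions.

```python
# A sketch of the "structural unit" as a data record: an ID made of an
# alphabetic prefix plus a generated unique number, a relative
# time-stamp (T1, T2, ...), and a human-readable legend.
# Everything here is an illustrative assumption, not a fixed design.

import itertools

_counter = itertools.count(374)  # arbitrary starting number, per "A374"

def make_unit(legend, time_stamp, prefix="A"):
    """Build one depiction unit with a generated composite ID."""
    return {
        "id": f"{prefix}{next(_counter)}",  # e.g. "A374"
        "time": time_stamp,                 # relative stamp: "T1" < "T2" < ...
        "legend": legend,                   # e.g. "Leg of A374"
    }

door = make_unit("Door of HOUSE", "T1")
leg = make_unit(f"Leg of {door['id']}", "T1")
print(door["id"], leg["legend"])  # -> A374 Leg of A374
```

A legend can thus refer to another unit by its ID, which is what allows one rectangle to name a component ("Leg of ...") or an unperceived entity ("Brain-of ...") belonging to another.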

A Causal Connection. I have already raised the issue of David Hume's powerful analysis of causation, and his suggestion that we recognise that we cannot identify the detailed rationale which underlies this concept, while at the same time recognising its fundamental role in explaining the behaviour of entities within our environment.
    My suggested form of depiction, which we can use for this important component of the way we understand events, represents a causal connection with an elongated hexagon - like this -

A structural unit for building causal connections


    In my notation, movements are usually depicted as having been caused by the physical body of a person - not always, but often that is the case. But if the action is deliberate, we can represent that as having been caused by the brain of some person. Movement usually involves the movement of a limb of the person and that (in its turn) can cause the movement of something else. It is possible and quite often convenient to depict one causal link as being the cause of another causal link. This possibility seems to carry the implication that some very complicated circumstances, can be represented quite easily.
    By introducing a negation into the legend, we can also depict a "prevention" rather than a causal connection, or even a situation in which some circumstance makes a certain outcome less likely (or more likely) to occur.

depicting prevention and an allows relationship


    By allowing these more complex types of circumstances this method of depiction shows it has a potential to be very powerful indeed. Here is an illustration which depicts the following circumstance -

Sentence = "John forgot his key and so was unable to open the door when he returned".


    The development of a method of depiction which can deal with that sentence is the desired result. That sentence is deceptively simple. But consider what is required to depict the meaning of "forgot". To get at that, we will need to be able to depict the meaning of "remembered", then to negate it, and then to depict the consequences which follow. Being aware of these difficulties, what I seek is a method of producing a bundle of structures - of graphic symbols which depict some moderately simple set of circumstances - and then being able to give that bundle a single name which can introduce the whole thing when required. The idea is reminiscent of the way we can bundle up a group of actions, give that bundle a unique name, provide specific arguments on which it must perform some action, and then call it a "subroutine". That is a powerful technique which has enabled us to develop large software products of almost bewildering complexity. I am attracted to the idea, and the thought occurs that what this technique could produce is the ability to develop a store of information which will, in the first instance, be relatively simple, but which can gradually be expanded when detail is required.
    So here goes ...

Depicting a scenario



    Yet another reason why I prefer to abandon formal syntactical analysis is illustrated by this example. In formal syntactical analysis, the word "key" is classified as a noun, because it denotes a physical item. And yet a key also has an associated use to which it is often applied; the definition of its meaning must therefore include some reference to a procedure (i.e. the opening of doors, safes, and a variety of other physical objects, to enable selective access to their interiors). I am not aware of any way in which formal syntactical analysis can accommodate these requirements.
    The technique which I am proposing here can deal selectively with the meaning of each word in the language, individually. It is admittedly more laborious, but I am convinced that that cannot be avoided. The equivalent of "subroutines" which I am proposing here, is one possible approach which may allow us to deal in a satisfactory way with those complexities. These are early stages in what may be a long road to a usable system. I envisage a long and systematic approach via individual examples punctuated by periodic reviews and re-assessments to consolidate and streamline the system as a whole.


    So how, with this combined graphics-and-text system of presentation, should we depict the concept of "remember"? The first question that springs to my mind is: what is it that is being remembered? A key! That's what it is. But we cannot construct the appropriate representation without knowing what a key is, and therefore why, generally, it is a good thing not to forget it. It enables us to open things - in this case, the door of a house. We also need to know that there are two stages to the condition we call "opened". There is the condition of the lock being released, and also the condition of the door standing open. The first of these is a condition which enables the second.
    According to my diagram of "prevent" and "allow", to enable something, is to prevent it being prevented - a double negative which (perhaps surprisingly) does not mean that open (type 2) is achieved. It is merely prevented from being prevented (i.e. it is "allowed").
    To save space I will, from now on, abandon these diagrams and adopt a more concise textual format. I will also draw attention to the idea that the notation can indicate that a particular scenario is located in the "Brain of John". We can then negate that, which will, I suggest, be a representation of some item of information having been NOT(remembered). To indicate that it has been forgotten, we would also need to indicate that it had been remembered at some previous time. In this new notation I can write -

#UNLOCK(X) = NOT(PREVENT(X));
#OPENDOOR(John,HOUSE) = ALLOW(John,HOUSE) (i.e. allow John to enter HOUSE);
#REMEMBER(John,KEY) = (BRAIN_OF_John causes ACTION_of_John causes KEY_in_POCKET_of_John);

    Which enables us to write

NOT(#REMEMBER(John,KEY)); and therefore
NOT(#OPENDOOR(John,HOUSE)) at some later time.

Note that -
    (X therefore Y) =
        (PERCEPTION of X)
        causes
        (ME_TO_THINK_THAT : Y will_be_PERCEIVED)

    That depiction of the meaning of the sentence above has been somewhat compressed.
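The textual notation above could also be represented as data, which makes the nesting of negations and causal links explicit. The sketch below is one possible encoding under my own assumptions: the helper names mirror the notation, and the nested-tuple format is invented for illustration, not a fixed design.

```python
# A sketch of the depiction notation as nested data structures.
# Helper names mirror the notation above; the tuple encoding is an
# illustrative assumption, not a fixed design.

def NOT(x):            return ("NOT", x)
def PREVENT(x):        return ("PREVENT", x)
def CAUSES(a, b):      return ("CAUSES", a, b)

def ALLOW(actor, obj):
    # "to enable something is to prevent it being prevented"
    return NOT(PREVENT(("ENTER", actor, obj)))

def OPENDOOR(person, house):
    return ALLOW(person, house)

def REMEMBER(person, thing):
    # BRAIN_OF person causes ACTION_OF person causes thing IN_POCKET
    return CAUSES(("BRAIN_OF", person),
                  CAUSES(("ACTION_OF", person),
                         ("IN_POCKET", thing, person)))

# "John forgot his key, and so was unable to open the door."
forgot_key = NOT(REMEMBER("John", "KEY"))
cannot_open = NOT(OPENDOOR("John", "HOUSE"))
scenario = CAUSES(forgot_key, cannot_open)
print(scenario[0])  # -> CAUSES
```

Because each helper returns an ordinary nested structure, one causal link can take another as an argument, which is exactly the "one causal link causing another" possibility raised earlier; and giving a bundle a name (as `REMEMBER` does) plays the role of the "subroutine" proposed above.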



5.03 The Proposed Evolutionary Pathway associated with my RDM Proposal

    Consider the advantages of adopting my RDM explanation of conscious experience. It means that we can dispense with metaphysics. It means that we can abandon the absurdity of a kind of claimed "explanation" which is based upon an idea that is itself totally inexplicable.
    I have suggested that the story told by that structure is deliberately very similar to the way I have also suggested we can represent the meanings of words. I have done that for good reason. Not only does that save myself a lot of work, but (more significantly) it could also save evolution a lot of work.
    It is also the case that everything evolution can produce, and has produced, is a small modification of something else which was already in existence. So that is yet another reason why my proposal should be preferred: it offers a plausible pathway for evolution to travel along. And as it travels that pathway, it -

(1) develops an ability to understand its own perceptions;
(2) builds a depiction of that understanding;
(3) develops that structure further to include memories of past events;
(4) develops the idea of concepts which include items which might not have occurred in some remembered past events, but alternatively might have done so;
(5) develops the concept of causation, and uses that concept to predict future events;
(6) develops the concept of the causal link further, and then uses it to predict future events and the consequences of various alternative actions (including some possibly dangerous consequences), and to formulate plans to experience, or to avoid experiencing, those events;
(7) identifies the occurrence of these planning activities, interprets them as emotional mental conditions, which motivate (or cause) these actions to be performed, as the mechanism seeks (with different intensities and complexities) to achieve some of those end results, or to avoid others;
(8) develops the concept of SELF, which has a special location within it, to accommodate an episodic memory of these remembered events and circumstances as they are experienced;
(9) stores the complete structure (which depicts those experiences) inside the structure of SELF;
(10) develops an ability to converse with other people, and to interpret spoken language statements. To do that it associates each word in an utterance with a concept, which it has developed into an information-laden structure in its own right, and it inserts that structure into its understanding of events and circumstances. And it does all that using exactly the same format and procedures that it used previously to build the depiction of events and circumstances from information sourced from its own perceptions. To this it applies an added ability to use (learned) syntactical information to segment expressions, to convert the linear sequence of concepts to fit the non-linear sequence of events depicted, and to link all of that information into the growing structure which I have called an "understanding";
(11) develops extra special abilities through formal and informal education and then shares with others the information gained by using those special abilities.
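The layered capabilities listed above can be pictured, very loosely, as successive stages that each enrich a single growing "understanding" structure. The sketch below is purely illustrative: every name in it is my own invention, and no claim is made that a real brain is organised this way.

```python
# A minimal, hypothetical sketch of the layered capabilities listed above,
# expressed as stages that each enrich a growing "understanding".
from dataclasses import dataclass, field

@dataclass
class Understanding:
    """The growing depiction of events and circumstances."""
    depiction: list = field(default_factory=list)   # stage (2): depiction
    memories: list = field(default_factory=list)    # stage (3): past events
    concepts: dict = field(default_factory=dict)    # stage (4): concepts
    plans: list = field(default_factory=list)       # stage (6): predictions/plans

def perceive(u: Understanding, event: str) -> None:
    u.depiction.append(event)        # (2) depict the current circumstance
    u.memories.append(event)         # (3) remember the event for later reuse

def form_concept(u: Understanding, name: str, instances: list) -> None:
    # (4) a concept generalises over remembered (and merely possible) instances
    u.concepts[name] = instances

def predict_and_plan(u: Understanding, cause: str, effect: str) -> None:
    # (5)-(6) use a causal link to predict an outcome, and record a plan
    u.plans.append((cause, "expect", effect))

u = Understanding()
perceive(u, "dark clouds gathering")
form_concept(u, "weather", ["rain", "snow", "sunshine"])
predict_and_plan(u, "dark clouds", "rain")
```

The point of the sketch is only that each stage operates on, and extends, the same structure built by the stages before it.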



5.04 What is Consciousness?

    The view of how a conscious brain operates which I have developed in this text is resolutely and unapologetically reductionist.
    Consciousness, I propose, is not an inexplicable magical condition. It is an entirely non-magical procedure. When that procedure is performed by a physical mechanism (by movements and changes of condition within the brain-stuff of a particular brain), the owner of that brain will experience conscious awareness - and will do so on every occasion when that procedure is performed. That is because being conscious is precisely what that procedure does. It manufactures a conscious experience. In order to do that, it engages in the construction of an understanding of certain circumstances. Usually these will be the environmental circumstances within which the brain's owner is currently located. But sometimes it could be some other set of circumstances. It could, for example, be the circumstances described in a book (in which the brain's owner is immersed). Or it could be some academic problem - like devising the proof of some mathematical theorem. And what that individual is then consciously aware of, is what that procedure is doing. Being in a conscious state is not a difficult concept.
    However, the difficulties which the human race has encountered over centuries of philosophical discourse are, I fear, largely due to our own determination to exaggerate the importance of our being conscious, by bestowing on it an aura of eternal mystery, and of superiority over other animals. We are superior to them in certain respects, but let us not be carried away by that thought. None of us can claim any personal responsibility for that superiority and, intelligence excepted, there is no ability of ours in which we are not surpassed by at least one non-human animal species.
    In my view, that "being conscious" procedure constructs a depiction of some condition, and it is that "depiction" that is the focus of the creature's conscious attention. A conscious experience, however, has several important component factors, and only in certain circumstances will all of these factors be present. If they are not all present (as may well be the case, particularly for some non-human creatures), then what is experienced may be only a momentary kind of consciousness. If that current experience is not then immediately stored in a way that makes it available as a memory for future reference, the owner of that brain will not later be aware that it/he/she has had a conscious experience at all. There are also other factors which must be present if the owner of that brain is to be fully aware of its own conscious experiences. Thus -

(1) The procedure must be able to have access to various types of memory. It is by reprocessing the material stored there that the brain is able to form concepts.
(2) The brain must be able to form concepts. These will become the constructional units with which the procedure then builds the larger structure which will depict various circumstances.
(3) A concept is a structure which depicts some event or circumstance (real or imaginary), containing items which the owner of that brain has actually experienced in the past - or about which the owner has only been informed by others that they have sometimes occurred in those circumstances.
(4) The conscious brain must also have developed a sub-component (i.e. a concept) which describes causal-links. These are used by the procedure to predict future events.
(5) The procedure must have access to a component of that structure which I have called SELF. That SELF must have a sub-component where episodic memories of the succession of experiences are stored.
(6) The brain must be able to sleep and to dream. These dreams will provide a mechanism of offline processing which will make it possible for that brain to form chains of logical relationships.
(7) To reach the highest level of conscious understanding the human brain must be able to exchange information with its companions and, in that way, become conversant with experiences that it has not, and in some cases could not have experienced directly.



5.05 What are Emotions?
    On this topic, we humans have again shown a tendency to confuse ourselves by over-complicating the attempt to find an explanation.
    My view is that emotions are the result of a misinterpretation - not a wildly inaccurate misinterpretation, but nevertheless not precisely accurate.
    What I suggest is that the mental condition which we identify as an emotional condition of some kind, is a state of preparation for some goal (or anti-goal) condition. That is, it is a state of physical preparation - a physical procedure. The basic goal states are specified as internal conditions (of nutrition, sexual satisfaction, thirst satisfaction, companionship, physical comfort and so on). Thus, what we identify as "emotional conditions" are the physical preparations required to achieve a goal condition, or to avoid an anti-goal condition, which has already been identified and is being experienced in an imaginary form.
    Those preparations are needed to identify the external conditions which will enable the person concerned to reach the identified goal or avoid the identified anti-goal. These preparations require the anticipation of the experience of a goal (or anti-goal). When we predict a future experience of some GOAL state, or try to avoid an anti-GOAL state, we are NOT able to observe what is happening within our own brain in any detail. That, by the way, is a direct contradiction of the so-called "HOT" (or HOST) theories of human behaviour (see Rosenthal 2002, and Rolls 2014). For these, it is the ability to observe which (in some mysterious way) is responsible for a conscious experience. What we are able to be aware of, however, is the performance of some procedure. And I think that what that procedure consists of is the planning of a technique which will probably result in the experience of that GOAL, or the avoidance of that anti-GOAL.
    Furthermore, the awareness we have of that planning activity is also the evidence which helps us to predict an action (one which has not, at that time, begun to be overtly enacted). It is therefore entirely unremarkable that we should interpret that condition as being the motivation for that kind of response. Although, what I am suggesting here is that that early activity (i.e. the planning of the action to be taken) is really the first stage of the performance of the procedure in question.
    In other words we interpret that planning procedure as being an emotional state which motivates that action-response. That interpretation could be described as a "mistake". But it could also be described as a "fortunate mistake", because while it may be a technical error, it also enables us to make accurate predictions. And these predictions then enable us (and give us time) to construct appropriate reactions.
    I also suggest that evolution brought about the likelihood of that technical error, not deliberately, but, as in the case of all of evolution's other lasting and beneficial effects, precisely because it had an effect which helped us to survive and multiply.



5.06 A Comment on Pain (as a special case)
    I do not know if it is appropriate to regard PAIN as a special kind of emotion. But it seems appropriate to raise it as an issue at this point. I suspect that pain is one of the earliest kinds of "feeling" to evolve, but it seems also to have some characteristics which differ from those of any other kind of feeling.

(1) It usually stimulates an immediate response.
(2) It usually brings all other thoughts and actions to an end (or at least a pause).
(3) Having dealt with the sudden onset of pain, we are often able to resume what we were doing before that onset.

    These constitute a set of characteristics that would be classified, by a computer scientist, as an "interrupt signal" - that is, something urgent that must be dealt with as nearly instantly as possible. When an interrupt signal is received, information about what actions are currently being performed is stored (with a view to resumption later) and then the interrupt itself is serviced. Sometimes, however, it is difficult to resume what was being done before. That happens when the interrupt is so urgent and dramatic that there is insufficient time to prepare for a resumption of normal activity. In those instances the interrupt signal can wipe out all other information. So pain, I suggest, is a very primitive characteristic of animal behaviour which may have been present in creatures before the Cambrian Explosion. It can also be a characteristic of intelligent behaviour, in that creatures which evolved later may be able to predict the experience of pain, and to avoid a repetition of that experience. For example, I have never known an intelligent creature to forget the experience of touching an electric fence.
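The interrupt behaviour described above can be put into ordinary code. The sketch below is illustrative only: the function names and the severity threshold are my own inventions, and no claim is made that neural "interrupts" actually work this way.

```python
# Illustrative sketch of "pain as an interrupt": the current activity's
# state is saved, the urgent signal is serviced, and the saved activity
# is resumed afterwards - unless the interrupt is too severe to allow it.

saved_context = []          # stack of suspended activities
log = []                    # record of what was actually done

def interrupt(current_task: str, severity: int) -> str:
    """Service an urgent signal; resume the old task if possible."""
    saved_context.append(current_task)      # save state for later resumption
    log.append("servicing pain response")   # deal with the urgent event first
    if severity > 8:
        saved_context.clear()               # dramatic interrupt: context is lost
        return "recovering"                 # cannot resume what was being done
    return saved_context.pop()              # mild interrupt: resume the old task

task_after_mild = interrupt("walking to the shops", severity=3)
task_after_severe = interrupt("walking to the shops", severity=9)
```

A mild interrupt hands back the suspended task; a severe one wipes the saved context, which parallels the observation that a sufficiently dramatic onset of pain makes resumption impossible.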


5.07 The SELF is BOTH a Singularity AND a Multiplicity
    The SRA plays a crucial role in the whole mechanism. Evolution, over aeons of time, and through the mechanism of survival and reproduction, provides GOALS and anti-GOALS (or "likes" and "dislikes"), in the same way as it provides various types of automatic stimulus-response behaviour. The SRA also provides the rest of the system with a clock which divides time into intervals, and with priorities which help the mechanism as a whole to select a response for implementation when two or more are triggered in a single time interval (and with contradictory response specifications).
    The SRA also integrates the behaviour of the other DOLLS by triggering their various responses to give the collective behaviour a greater range of options.
    Finally, as the human species diversified into specialisms, the SRA has introduced the mechanism of a conscious choice of behaviour, which operates through longer term and more accurate predictions of consequences.
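The SRA's arbitration role described above amounts to a simple priority selection. The sketch below is hypothetical: the response names and priority values are invented for illustration, standing in for whatever innate rankings evolution might supply.

```python
# A hypothetical sketch of the SRA's arbitration role: when two or more
# contradictory responses are triggered within the same clock interval,
# innate priorities select a single one for implementation.
# All names and priority values here are invented for illustration.

PRIORITIES = {"flee": 3, "freeze": 2, "feed": 1}   # assumed innate rankings

def arbitrate(triggered: list) -> str:
    """Select the single triggered response with the highest priority."""
    return max(triggered, key=lambda response: PRIORITIES[response])

# Two contradictory responses fire in one time interval; one is chosen.
chosen = arbitrate(["feed", "flee"])
```

The design point is only that the clock defines the interval within which conflicts are detected, and the priorities resolve them deterministically.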


5.08 Degrees of Consciousness
    If all this is the case we can see that not only do we have several conscious entities, but also that these must correspond to several different degrees of consciousness. As evolution adds more DOLLS it also adds extra degrees of consciousness. We become aware of more complex circumstances and are able to respond to these using more sophisticated action-responses and therefore have available more efficient mechanisms of consciousness.



5.09 Reflections
    My thesis holds that to be conscious means to be engaged in the performance of a procedure which constructs a representation of something. That something may be something imagined, or something real. But usually that "something" structure is a representation of what we think is "external reality". Note, however, that, so far as this discussion is concerned, "within our own body", and even "within our own physical brain", is a locality that is considered to lie within our "external" environment. The only things considered to be "internal" are the ideas which we construct within our own thinking procedures. These will all be found within the structure which I have described as a depiction of events and circumstances.
    Throughout this text I have pointed out that we do not know with absolute certainty that our representation of reality (which we acquire by means of our perceptions) is accurate, strictly and in all respects. But it is the best that we can achieve and it does enable us to make reasonably accurate predictions. Therefore it serves our needs adequately.
    Allow me please to answer an objection before it is asked. As I suspect everyone else does, I assume that external reality exists and that my understanding of it is correct ... optical illusions, and similar, excepted. Any inaccuracy, however, scarcely matters, because ...
    .... despite the lack of certainty which surrounds this issue, the accuracy of our predictions can be checked. That is because to do that check we need only compare two sets of data -

(1) the predictions (which are representations internal to our minds), and

(2) our representations of our experience/perception of that reality when the future is revealed to us (which also take the form of an internal structure).

So we are therefore comparing like with like. If either one of those two structures is subject to some systematic distortion of reality (by our perceptions), the other will also be affected in, presumably, the same way.
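The "like with like" check described above can be expressed concretely: both the prediction and the later perception are internal structures built in the same format, so comparing them feature by feature is straightforward. The sketch below is purely illustrative, with invented feature names.

```python
# Sketch of the "like with like" comparison: a prediction and a later
# perception share one internal format, so checking accuracy is just a
# feature-by-feature match. Feature names are invented for illustration.

def accuracy(prediction: dict, perception: dict) -> float:
    """Fraction of predicted features later confirmed by perception."""
    matches = sum(1 for key, value in prediction.items()
                  if perception.get(key) == value)
    return matches / len(prediction)

predicted = {"sky": "cloudy", "ground": "wet", "wind": "strong"}
perceived = {"sky": "cloudy", "ground": "dry", "wind": "strong"}
score = accuracy(predicted, perceived)   # two of the three features confirmed
```

Note that any systematic distortion introduced by perception would apply to both dictionaries alike, which is exactly the point made in the text: the comparison measures predictive accuracy, not fidelity to an unknowable external reality.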

    Once full consciousness is operational it plays an essential part in the mechanism of the mammalian brain as it tries to construct an understanding of reality. Moreover, the information which the construction process uses to build that representation, is also the knowledge or understanding which accrues to the brain's own conscious experience. Indeed, those two processes - the construction of a representation, and the conscious understanding of events and circumstances, are two different ways of talking about a single process. Over evolutionary time the construction process will improve and conscious understanding will become concomitantly more sophisticated.

    In a recent book, Professor Rolls speculated that we humans may not specify an action-response to the recognition of some perceived condition, in terms of an explicit action - designed to achieve a GOAL state or avoid an anti-GOAL, but by describing a desired GOAL condition by itself (Rolls 2014).

    "The thesis here is that when a rewarding or punishing stimulus in the environment elicits an emotional state, we can perform any appropriate and arbitrary action to obtain the reward, or avoid the punisher. That is, the reward or punisher defines the goal for action, but does not specify the action itself. The action itself can be selected by the animal as appropriate in the current circumstances as that most appropriate for obtaining the reward or avoiding or escaping from the punisher. This is more flexible than simply learning a fixed behavioural response to a stimulus, which is what is implied by stimulus-response (S-R) or habit learning theories of the 1930s."   [Rolls 2014 p56]

    I thought that that was an interesting idea, and I spent some time trying to devise a technique for specifying a desirable GOAL state. I found that very difficult. What I was able to see, however, was that I was creating difficulties for myself, by assuming that a GOAL condition needed to be specified in terms of visual perceptions. Vision is not suitable for that task. It is much easier to specify a GOAL condition in terms of olfactory perceptions - provided these are combined with actions in what could be described as an exploratory mode of behaviour. Our sense of smell (and presumably those of most other creatures who have very superior olfactory perceptions), seem to be able to detect scent gradients with remarkable ease. Moths, for example, fly long distances to find a mate.
    How do they do that? I guess that not only do they have hard-templates which make it easy for them to detect the directionality of scents, but that they combine that with exploratory behaviour, like flying round in circles, while they sniff the air (or do something equivalent to that). So I see a solution to the problem recognised by Rolls. We (and lots of other creatures) may specify action-responses in that way - by specifying an exploratory action, punctuated by detection actions which will in turn specify another (perhaps different exploration style or even a change of perception facility), which will direct the creature towards a GOAL state.
    The conclusion I reach is that Rolls is right in that a goal must be specified, but that that must be accompanied by the specification of an exploratory style of action, which is designed to discover the goal condition (and recognise when that condition has been reached).
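The combination just described - a specified goal condition plus an exploratory style of action that discovers it - can be sketched as a toy gradient-following search. Everything below is invented for illustration: the one-dimensional "scent field", the step size, and the location of the source.

```python
# A toy sketch of the exploratory strategy suggested above: sample the
# scent at nearby points, move toward the strongest reading, and stop
# when the goal condition is recognised. The scent field is invented.

def scent(x: float) -> float:
    # assumed scent field: strongest (zero) at the source, x = 10
    return -abs(x - 10.0)

def seek_goal(x: float, step: float = 1.0, trials: int = 50) -> float:
    """Follow the scent gradient by local sampling (crude exploration)."""
    for _ in range(trials):
        candidates = [x - step, x, x + step]   # "circle round and sniff"
        x = max(candidates, key=scent)         # move toward the stronger scent
        if scent(x) == 0.0:                    # goal condition recognised
            break
    return x

found = seek_goal(0.0)
```

Note that the goal is specified only as a recognisable condition (maximum scent), never as an explicit route, which is the flexibility Rolls describes; the exploratory action supplies the route.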
    Referring back to Lorenz's shrews, it is very likely that these creatures, as they "whiskered about", were trying to follow a scent gradient to rediscover their own scent, which was probably a strong marker for a familiar pathway. Here is that quote again (to save the reader having to refer back).

LORENZ


    Years ago, I used to take my pet Jack Russell for walks in the hills near my home. I noted that on the ascent, she tended to wander about (apparently) erratically and to urinate frequently. She could also recognise the word "back" and would respond to that command quickly .... whereupon, and usefully, even in thick mist, she could guide me on the return journey, by seeking and finding the sequence of her own urination markings. That was not a very elegant or carefully planned laboratory experiment, but I find the evidence it was able to produce, very persuasive. Our animal brains seem to respond to perceptions by following specified actions (including, but not only goals in isolation). These seem to be specified conditionally and are followed sequentially, using mixed perceptions. As I suggested earlier (in section 4.06) this behaviour could be an evolutionary development of what I called a "startle reaction".



5.10 Conclusion

    To summarise all of that, I shall repeat, and enlarge upon, a comment I made in my abstract.

    To be conscious is to be performing a certain procedure. This procedure has four main components, or sub-procedures. These are ....

(1) A procedure which is aware of present circumstances - and sufficiently so to be able to recognise certain individual events and circumstances as they arise, and then be able to trigger, for each of these, an appropriate action-response.

(2) A procedure which is able to remember past experiences and to be able to integrate that information into a growing depiction of events and circumstances. Again each of these circumstances has an associated action-response, which can then be triggered.

(3) A procedure which is able to anticipate future experiences and to integrate that information into the growing depiction of events and circumstances. Yet again, these circumstances, as they are recognised (and analysed), will trigger preparations to select and to construct an appropriate action-response.

(4) The structure which results from the combination of the above three sub-procedures is then stored within SELF. More correctly, what is stored are the mechanisms which enable that structure to be re-built - a process which re-creates the conscious experience.
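The four sub-procedures above can be combined into a single "being conscious" cycle, sketched below. The class and method names are entirely my own, hypothetical labels for the four components, not a claim about how a brain implements them.

```python
# An illustrative sketch of the four sub-procedures listed above, combined
# into one "being conscious" cycle. All names are hypothetical labels.

class ConsciousCycle:
    def __init__(self):
        self.depiction = []      # the growing depiction of events/circumstances
        self.self_store = []     # SELF: episodic record of completed depictions

    def perceive_present(self, event):        # sub-procedure (1)
        self.depiction.append(("now", event))

    def recall_past(self, memory):            # sub-procedure (2)
        self.depiction.append(("past", memory))

    def anticipate_future(self, prediction):  # sub-procedure (3)
        self.depiction.append(("future", prediction))

    def store_in_self(self):                  # sub-procedure (4)
        self.self_store.append(list(self.depiction))

cycle = ConsciousCycle()
cycle.perceive_present("rain starting")
cycle.recall_past("umbrella left in the hallway")
cycle.anticipate_future("getting wet on the walk home")
cycle.store_in_self()
```

The auto-pilot argument that follows corresponds, in this sketch, to running only the first two methods: the cycle still reacts, but nothing is anticipated and nothing reaches the SELF store.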

    For those who still hold that a conscious state has no functional role to play in the behaviour of animals in general, and in our own behaviour in particular, I draw attention to the form of behaviour we often call "driving in auto-pilot mode" (see section 4.06). I then ask this question -

    Do you believe that driving in auto-pilot mode has a certain amount of danger associated with that mental state - i.e. operating without items (3) and (4) - the last two components listed above? And do you also accept that that particular danger is greatly reduced when full consciousness is restored?
    If so, then please explain why a resumption of consciousness is effective in that way - if, that is, you continue to insist that consciousness is a kind of useless appendage to the way our brains operate, or that a zombie, without the aid of any conscious experience at all, could behave in a way that is indistinguishable from the behaviour of a fully conscious human being operating normally - that is, definitely not in auto-pilot mode.




POSTSCRIPT
    And now I will leave you with this additional thought ... my RDM proposal suggests, to me, not only that we humans will each have our own unique and individual way of being intelligent, but that we might also exhibit a considerable degree of speciality with respect to our various and individual forms of stupidity. Ponder on that if you will. I must confess, however, that the internal multiplicity, that is my-SELF, finds that idea moderately alarming, oddly comforting and distinctly amusing - and all at the same time.



THE END




POST-POSTSCRIPT
    If you would like to discuss any of these issues with myself, then please email me at this address:

    rdm@tartanhen.co.uk

    I will try to respond also by email.
REFERENCES

Carhart-Harris R.L. et al (2016)
"Neural correlates of the LSD experience revealed by multimodal neuroimaging"
Proc. Nat. Acad. of Sciences, 11 April 2016

Collins, F. (2007)
"The Language of God"
Penguin 2007

Chalmers D (2016)
"The Character of Consciousness"
Oxford Univ. Press (2016)

Chomsky N (1957)
"Syntactic Structures"
Mouton 1957

Everett D (2005)
"Cultural Constraints on Grammar and Cognition in Piraha"
Current Anthropology 46 (4) 621-646.

Everett D. (2018)
"Homo Erectus may have been a sailor who could speak"
Presented at Am. Assoc for the Advancement of Science in Austin,
February 16th, 2018

Feinberg T.E. and Mallatt J.M. (2016)
"The Ancient Origins of Consciousness: How the Brain Created Experience"
MIT Press 2016

Fodor J. D. (1977)
"Semantics. Theories of Meaning in Generative Grammar"
Harvester Press 1977

Hubel D.H. & Wiesel T.N. (1959)
"Receptive Fields of single neurones in the Cat's Striate Cortex"
J. Physiol. 1959 148(3) 574-591

McDermott, D.V. (2001)
"Mind and Mechanisms"
MIT Press 2001

Noble, H. (1988)
"Natural Language Processing"
Blackwell Scientific Publications 1988

Noble, H. (2012)
"No Magic: A Natural Explanation of Consciousness"
Tartan Hen Publications 2012

Prince E.F. (1976)
"The Syntax and Semantics of Neg-raising, with Evidenmce from French"
Language 52, (2) June 1976 404-426

Rolls E.T. (2008)
"Emotion, Higher-Order Syntactic Thoughts and Consciousness"
The Frontiers of Consciousness: Chichele Lectures, ed. Weiskrantz and Davies
OUP 2008

Rolls E.T. (2014)
"Emotion and Decision-Making Explained"
OUP 2014

Rosenthal D. (2002)
"Explaining Consciousness"
In: Philosophy of Mind Ed. Chalmers
OUP 2002

Schneider G.E. (2014)
"Brain Structure and its Origins"
MIT Press 2014

Sigman M and Dehaene S. (2008)
"Brain Mechanisms of Serial and Parallel Processing during Dual-Task Performance"
J. Neuroscience, July 23, 2008, 28 (30) 7585-7598

Sloman A. and Chrisley R. (2003)
"Virtual Machines and Consciousness"
J. Consciousness Studies 10 No 4-5, 2003