Addressing Kevin Mitchell’s Questions From the Perspective of Conscious Turing Machine Robots

This article delves into how the Conscious Turing Machine Robot (CtmR) addresses Kevin Mitchell’s questions about consciousness and sentience. CtmR, with its Model-of-the-World and various processors, provides insights into what it means to be sentient, how conscious attention and awareness work, and the development of self-awareness. Key topics include the distinction between conscious and unconscious states, the integration of sensory information, and the adaptive value of conscious experience.



:::info Authors:

(1) Lenore Blum (lblum@cs.cmu.edu);

(2) Manuel Blum (mblum@cs.cmu.edu).

:::

Abstract and 1 Introduction

2 Brief Overview of CtmR, a Robot with a CTM Brain

2.1 Formal Definition of CtmR

2.2 Conscious Attention in CtmR

2.3 Conscious Awareness and the Feeling of Consciousness in CtmR

2.4 CtmR as a Framework for Artificial General Intelligence (AGI)

3 Alignment of CtmR with Other Theories of Consciousness

4 Addressing Kevin Mitchell’s questions from the perspective of CtmR

5 Summary and Conclusions

6 Acknowledgements

7 Appendix

7.1 A Brief History of the Theoretical Computer Science Approach to Computation

7.2 The Probabilistic Competition for Conscious Attention and the Influence of Disposition on it

References

4 Addressing Kevin Mitchell’s questions from the perspective of CtmR

Here we answer Kevin Mitchell’s questions (Mitchell, 2023) from the perspective of the Conscious Turing Machine Robot (CtmR). Many of Mitchell’s questions are in fact several (often intertwined) questions, which we separate into parts.

Our answers refer to and supplement what we have discussed in our Overview. They deal only with the CtmR model, meaning an entity or robot with a CTM brain. These answers say nothing about other models. They say nothing about whether a worm is conscious or not, unless the worm has a CTM brain. From here on, unless otherwise stated, everything we have to say is about a robot with a CTM brain, what we call the CtmR model.

KM1*. “Q1. What kinds of things are sentient? Q2. What kinds of things is it like something to be? Q3. What is the basis of subjective experience and what kinds of things have it?”

A1. CtmR is sentient, meaning it is able to perceive and feel things. It can also make decisions and attempt to carry them out, sometimes successfully, sometimes not. (We say CtmR has agency.) As mentioned above, we have nothing to say about entities that are not CtmRs. However, we can and will sometimes say what parts of CtmR are responsible for what parts of its sentience.

A2. The Model-of-the-World (MotW) plays an essential role in “what it is like” to be a CtmR. It contains multimodal Brainish-labeled sketches of referents in CtmR’s worlds. The sketches and labels (as well as Brainish itself) develop and evolve throughout the life of CtmR. The labels succinctly indicate what CtmR learns or “thinks” about the referents. For example, the label SELF applied to a sketch in the MotW indicates that the sketch’s referent is (a part of) CtmR itSELF.
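To make the role of labeled sketches concrete, here is a minimal Python sketch of a MotW as a collection of Brainish-labeled referents. The class names, the use of plain strings for labels, and the `is_self` test are our own illustrative assumptions; the paper leaves the underlying data structure abstract.

```python
from dataclasses import dataclass, field

@dataclass
class Sketch:
    """Stand-in for a MotW sketch: a referent plus its accumulated Brainish labels."""
    referent: str
    labels: set = field(default_factory=set)

class ModelOfTheWorld:
    """A MotW reduced to a dictionary of labeled sketches."""
    def __init__(self):
        self.sketches = {}

    def add_label(self, referent, label):
        # Labels accumulate and are refined throughout CtmR's life.
        self.sketches.setdefault(referent, Sketch(referent)).labels.add(label)

    def is_self(self, referent):
        # The SELF label marks a sketch's referent as (a part of) CtmR itself.
        sketch = self.sketches.get(referent)
        return sketch is not None and "SELF" in sketch.labels

motw = ModelOfTheWorld()
motw.add_label("left-leg", "SELF")
motw.add_label("rose", "RED")
print(motw.is_self("left-leg"), motw.is_self("rose"))  # True False
```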


KM2*. “Q1. Does being sentient necessarily involve conscious awareness? Q2. Does awareness (of anything) necessarily entail self-awareness? Q3. What is required for ‘the lights to be on’?”

BB2.

A1. In CtmR, sentience has two main components, conscious attention and conscious awareness.

Conscious attention (access consciousness) occurs when all LTM processors receive the global broadcast of CtmR’s current conscious content, namely the current winning chunk in the competition for STM. (STM is a buffer and broadcast station only; it neither is nor contains a processor.)

Conscious awareness arises when the broadcasted chunk refers to a Brainish-labeled sketch in the MotW. The labels describe what CtmR is consciously aware of.
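The division of labor between the two notions can be sketched in a few lines of toy Python. The chunk representation, the `receive` method, and the dictionary standing in for the MotW are our own simplifying assumptions, not the paper’s formal definitions (those are in Section 2.1).

```python
class TinyProcessor:
    """Toy LTM processor that just records what it receives."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, chunk):
        self.inbox.append(chunk)

def conscious_attention(winning_chunk, ltm_processors):
    """Access consciousness: the winning chunk is broadcast to ALL LTM processors."""
    for processor in ltm_processors:
        processor.receive(winning_chunk)

def conscious_awareness(winning_chunk, motw_labels):
    """Awareness arises only if the broadcast chunk refers to a labeled MotW sketch;
    the returned labels describe what CtmR is consciously aware of (None if not)."""
    return motw_labels.get(winning_chunk["referent"])

processors = [TinyProcessor("Vision"), TinyProcessor("Smell")]
motw_labels = {"rose": {"RED", "SWEET"}}

chunk = {"referent": "rose", "gist": "red rose ahead"}
conscious_attention(chunk, processors)                    # attention: everyone hears it
print(conscious_awareness(chunk, motw_labels))            # {'RED', 'SWEET'}: awareness
print(conscious_awareness({"referent": "novel-thing"}, motw_labels))  # None: attention only
```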

A2. Does awareness necessarily entail self-awareness? No. The infant CtmR initially builds a world model that does not include a labeled sketch of itself, so it has no self-awareness. In time, however, that model will include a rough labeled sketch of itself and the label SELF. The label SELF marks the beginning of self-awareness, which eventually develops into full-blown self-awareness.

A3. The lights come on gradually, as the MotW gets populated with sketches and their labels. (For more on this, see our answer to KM4.)

KM3*. “Q1. What distinguishes conscious from non-conscious entities? (That is, why do some entities have the capacity for consciousness while other kinds of things do not?) Q2. Are there entities with different degrees or kinds of consciousness or a sharp boundary?”

BB3. We rephrase this as:

Q1. What distinguishes a consciously aware CtmR from a non-consciously aware CtmR?

Q2. Are there CtmRs with different degrees or kinds of consciousness?

A1. Every CtmR pays conscious attention to every broadcast. Absent a Model-of-the-World processor (MotWp) and its Model-of-the-World (MotW) with sketches labeled in Brainish, there is no conscious awareness.

CtmR can be consciously aware when awake or dreaming. It is not consciously aware when it is in deep sleep, when its STM contains a NoOp chunk, i.e., a chunk with a NoOp gist and a high enough |weight| to keep all other chunks at bay. (See our answers to KM4 for discussions of Sleep and Dream processors.)

A2. CtmR can have a varying degree of consciousness. Its many processors are instrumental in developing rich sketches in CtmR’s world models. Involvement by those processors (the Smell, Vision, Hearing, Touch, …, processors) raises the degree of conscious awareness. Even in deep sleep, however, a CtmR can still carry out tasks (utilizing unconscious communication between processors via links), but without attention and therefore without awareness.

Faulty processors or faulty competition paths can diminish what gets into STM, hence diminish both conscious attention and conscious awareness. For example, a faulty CtmR can exhibit blindsight, meaning it can do things that are normally done with conscious sight, but without having the feeling that it is sighted (Blum & Blum, 2022). This can happen, for example, if the Vision processor fails to get its chunks into STM. Perhaps relevant branches in the Up-Tree are broken, or the Vision processor fails to give high enough |weight| to its chunks.

Different degrees of consciousness already occur in a developing CtmR. As we have noted, an infant CtmR has only a very foggy world model which does not even include a sketch of itself. Sketches with annotated labels develop and become refined gradually. They are what CtmR is consciously aware of.

KM4. “Q1. For things that have the capacity for consciousness, what distinguishes the state of consciousness from being unconscious? Q2. Is there a simple on/off switch? Q3. How is this related to arousal, attention, awareness of one’s surroundings (or general responsiveness)?”

BB4.

A1. Only chunks with non-zero weight have a chance to win the competition for STM and thus become globally broadcast. (In other words, CtmR can only pay conscious attention to chunks that have non-zero weight.) A chunk wins with probability proportional to its |weight|. If all chunks have zero weight, then chunks flit in and out of STM at random, and so fast that CtmR loses anything remotely resembling sustained attention (like Robby the Robot in Forbidden Planet).

This is a state of unconsciousness. CtmR can get out of this state only when some processor creates a nonzero-weighted chunk.
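A minimal sketch of this competition, assuming the winner can be drawn directly with probability proportional to |weight|. (In the full model the winner emerges from a probabilistic Up-Tree tournament, described in Appendix 7.2; the flat sampling below reproduces only the stated marginal probabilities, not the mechanism.)

```python
import random
from dataclasses import dataclass

@dataclass
class Chunk:
    gist: str      # Brainish content of the chunk
    weight: float  # signed valence/importance; |weight| drives the competition

def stm_winner(chunks, rng=random):
    """A chunk enters STM with probability proportional to its |weight|.
    With all weights zero, the winner is uniformly random, modeling the
    flitting, attention-less state described above."""
    weights = [abs(c.weight) for c in chunks]
    if sum(weights) == 0:
        return rng.choice(chunks)
    return rng.choices(chunks, weights=weights, k=1)[0]

chunks = [Chunk("LOWFUEL", -5.0), Chunk("birdsong", 1.0), Chunk("idle", 0.0)]
print(stm_winner(chunks).gist)  # "LOWFUEL" about five times in six
```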

Another unconscious state occurs when a Sleep processor generates a NoOp chunk (a chunk having a NoOp gist) with a “sufficiently high” |weight|. A sufficiently high |weight| is one well above the weight of any other chunk. That prevents other chunks (including those from processors that interact with the outer world[27]) from having much chance to enter STM.

When the |weight| of a Sleep processor’s chunk drops a bit (but not enough to let input-output chunks enter STM), a CtmR’s Dream processor can take over, enabling chunks that create dreams to emerge. If the |weight| of the Sleep processor’s chunks drops even further, CtmR wakes up.
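Read as a toy decision rule, the sleep/dream/wake progression might look like the following. The comparison thresholds are our own assumptions; the text says only “sufficiently high” and “drops a bit”.

```python
def ctm_mode(noop_weight, dream_max_weight, io_max_weight):
    """Toy reading of the Sleep-processor mechanism; the exact comparison rule
    is an assumption made for illustration."""
    if noop_weight > dream_max_weight:  # NoOp keeps every other chunk out of STM
        return "deep sleep (unconscious)"
    if noop_weight > io_max_weight:     # Dream chunks can win; world I/O still blocked
        return "dreaming"
    return "awake"                      # input-output chunks reach STM again

for w in (9.0, 4.0, 1.0):
    print(w, "->", ctm_mode(w, dream_max_weight=6.0, io_max_weight=2.0))
```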

A2. The above are some of the ways CtmR can go from consciousness to unconsciousness and back. There is no simple on/off switch for consciousness in CtmR.

A3. In an unconscious state, CtmR is not aware of its surroundings, though it might be aroused by pangs of intense hunger, other pains, a very loud explosion, and so on. This occurs, for example, when these pangs, pains, and sounds overwhelm the Sleep processor, meaning their |weight| is greater than that of the Sleep processor’s chunk.

KM5*. “What determines what we are conscious of at any moment?”

BB5. In CtmR, at every clock tick, t, there is exactly one chunk in STM. When a chunk is broadcast, CtmR pays conscious attention to that one chunk only (and only if chunks do not flit around from one clock tick to the next). Chunks are purposely small (in a well-defined way) to ensure that all processors focus on the same thought.

KM6*. “Why do some neural or cognitive operations go on consciously and others subconsciously? Why/how are some kinds of information permitted access to our conscious awareness while most are excluded?”

BB6. Operations within each LTM processor are done unconsciously. Communication between LTM processors via links is unconscious communication. It is much quicker than conscious communication that goes through STM.

In the infant CtmR, most communication between processors is conscious. Then, as processors form links, communication can go quickly through links, meaning unconsciously. This is what happens after the young CtmR learns to ride a bike.[28]
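A toy sketch of the two communication routes, under the assumption that a learned link simply bypasses the STM competition. The `Processor` class and its API are illustrative inventions, not the model’s formal machinery.

```python
class Processor:
    """Toy LTM processor: a learned link bypasses the STM competition (fast,
    unconscious); without one, the message must compete for STM and be
    broadcast (slow, conscious)."""
    def __init__(self, name):
        self.name = name
        self.links = set()  # names of processors to which a link has formed

    def send(self, target, gist, stm_submissions):
        if target.name in self.links:
            return f"{self.name} -> {target.name}: {gist!r} (unconscious, via link)"
        stm_submissions.append((self.name, gist))  # must win STM to be heard
        return f"{self.name} -> {target.name}: {gist!r} (conscious, via STM)"

balance, steering = Processor("Balance"), Processor("Steering")
stm_submissions = []
print(balance.send(steering, "lean left", stm_submissions))  # novice rider: through STM
balance.links.add("Steering")                                # practice forms a link
print(balance.send(steering, "lean left", stm_submissions))  # practiced rider: via link
```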

KM7*. “What distinguishes things that we are currently consciously aware of, from things that we could be consciously aware of if we turned our attention to them, from things that we could not be consciously aware of (that nevertheless play crucial roles in our cognition)?”

BB7. For CtmR to be consciously aware of a thing, call that thing abc, a chunk referring to abc must get into STM. Once it does, CtmR pays conscious attention to abc. But even conscious attention to abc does not make for conscious awareness of abc. For that, the chunk must reference a sketch in the MotW that is called or labeled abc.

What things, though important for cognition, cannot enter consciousness? Here are a couple of answers from CtmR:

1. Things that must be done so quickly that the communication necessary to do the thing cannot go through STM. For example, CtmR must quickly swerve away from an oncoming car while riding its bike.

2. Things like abc whose doing would take away from a more important thing like xyz. In that case, time permits only one of abc and xyz to be attended to. If there is barely enough time to do one (and only one) of them, then CtmR cannot be conscious of abc while moving to do xyz.

KM8. “Q1. Which systems are required to support conscious perception? Q2. Where is the relevant information represented? Q3. Is it all pushed into a common space or does a central system just point to more distributed representations where the details are held?”

BB8.

A1. In CtmR, conscious awareness is impossible without the MotWp (among others). Conscious attention is possible without the MotWp, but impossible without the broadcast station.

A2., A3. The relevant information is held in the MotW and in individual processors. For example, color in the Color processor, smell in the Smell processor, and so on. So, in that sense, information is distributed. When CtmR first sees and smells a rose, these processors alert the MotWp, which in turn attaches the labels RED and SWEET to its sketch of the rose in the MotW. At some point in time, RED and SWEET become fused as a Brainish word or gist, and in that sense, information is unified.

KM9*. “Q1. Why does consciousness feel unitary? Q2. How are our various informational streams bound together? Q3. Why do things feel like *our* experiences or *our* thoughts?”

BB9.

A1. At each clock tick, all LTM processors simultaneously receive a global broadcast of the conscious content (current chunk) in STM. That gives CtmR its sense of a unitary experience.

A2. In addition, processor links and Brainish multimodal gists further bind information together.

A3. If CtmR’s conscious content refers to a thought or experience that MotWp has labeled SELF, CtmR will be consciously aware of that thought as its own. If that thought is also labeled FEELS, CtmR will not only know that the thought is its own, it will also feel that it is its own.

KM10*. “Where does our sense of selfhood come from? How is our conscious self related to other aspects of selfhood? How is this sense of self related to actually being a self?”

BB10. Here again, world models, with their learned Brainish-labeled sketches, determine CtmR’s sense of self. The MotW’s sketches are labeled with a variety of gists. For this question, the labels SELF, FEELS, and CONSCIOUS are particularly important. If all three labels are attached to a sketch of CtmR in the MotW, then CtmR FEELS that itSELF is CONSCIOUS. Known pathologies occur when any one (or more) of these labels is missing, or when sketches are mislabeled.[29]
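The three-label condition reduces to a one-line predicate; the sketch below is our own paraphrase of the claim, with each missing label corresponding to a distinct pathology.

```python
def feels_itself_conscious(labels):
    """True only when SELF, FEELS, and CONSCIOUS all attach to CtmR's self-sketch;
    dropping any one of them models the mislabeling pathologies noted above."""
    return {"SELF", "FEELS", "CONSCIOUS"} <= set(labels)

print(feels_itself_conscious({"SELF", "FEELS", "CONSCIOUS"}))  # True
print(feels_itself_conscious({"SELF", "CONSCIOUS"}))           # False: FEELS missing
```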

KM11. “Q1. Why do some kinds of neural activity feel like something? Q2. Why do different kinds of signals feel different from each other? Q3. Why do they feel specifically like what they feel like?”

BB11.

A1. In CtmR, inputs from different sensors go to different sensory processors. Those different senses become incorporated in the MotW with different Brainish labels.

A2. In the MotW, sketches of a red rose and a red fire engine are both labeled RED. Over time, each of these sketches can gain many other labels as well. For example, the fire engine sketch likely gets the Brainish labels FIRETRUCK and LOUDSIREN, while the rose sketch does not; the rose sketch gets the labels SILKYFEEL and SWEETSMELL.

A3. The two referents are distinguished in the MotW, and with more Brainish labels, “feel specifically like what they feel like.”

KM12*. “Q1: How do we become conscious of our own internal states? Q2: How much of our subjective experience arises from homeostatic control signals that necessarily have valence? Q3: If such signals entail feelings, how do we know what those feelings are about?”

BB12.

A1. In the Overview, we indicated how the infant CtmR would know it is hungry when a high-|weight|, negatively valenced chunk from a Fuel Gauge processor reaches STM and is broadcast from it.

A2. The LOWFUEL chunk will trigger an actuator to connect CtmR’s fuel intake to a fuel source (in humans, the breast). Assuming it works, that will eventually result in a sketch of the fuel source (in the MotW) being labeled FUELSOURCE and PLEASURESOURCE. At the same time, the labels FUELINTAKE and FEELS_PLEASURE will be attached to the sketch of CtmR when it is hungry and being fueled. A high-|weight| broadcast indicates that “CtmR feels pleasure when it gets fuel if it’s hungry.” This process is an example of homeostasis in CtmR and of how CtmR becomes conscious of its own internal state.
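The cycle just described can be sketched as follows. The fuel threshold, the chunk’s weight, and the guaranteed actuator success are illustrative assumptions; the label names follow the text.

```python
from collections import defaultdict

motw_labels = defaultdict(set)  # referent -> set of Brainish labels

def fuel_gauge_cycle(fuel_level, low_threshold=0.2):
    """One homeostatic cycle following the text; threshold, weight, and
    guaranteed actuator success are assumptions made for illustration."""
    if fuel_level >= low_threshold:
        return
    low_fuel_chunk = {"gist": "LOWFUEL", "weight": -9.0}  # high |weight|, negative valence
    # ...LOWFUEL wins the competition for STM, is broadcast, the actuator fires...
    actuator_succeeded = True  # assume the fuel intake connects and fueling works
    if actuator_succeeded:
        motw_labels["fuel-source"] |= {"FUELSOURCE", "PLEASURESOURCE"}
        motw_labels["CtmR, hungry and being fueled"] |= {"FUELINTAKE", "FEELS_PLEASURE"}

fuel_gauge_cycle(fuel_level=0.05)
print(dict(motw_labels))
```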

A3. The about-ness of those feelings comes from the Brainish labels and sketches that evolve during CtmR’s lifetime.

KM13. “Q1. How does the about-ness of conscious states (or subconscious states) arise? Q2: How does the system know what such states refer to? (When the states are all the system has access to).”

BB13.

A1., A2. Conscious states in CtmR are broadcasted states.

The MotW is all that CtmR knows about its (inner and outer) worlds. This includes CtmR’s actions and their effects. When choosing which of several actions to take, the MotW processor predicts the effect of each of its possible actions. It does this by simulating the world’s response to the action in its MotW.[30] Each response from the world has a Brainish description with Brainish labels.

  • If a broadcasted chunk refers to a Brainish-labeled sketch in the MotW, the chunk’s gist is the about-ness of the current conscious state. For example, the gist could be a sketch of a rose labeled RED and SWEET_SMELL.

  • If a broadcasted chunk refers to a Brainish-labeled prediction (gotten from the simulation in the MotW), this is the about-ness of the current conscious state.

  • If a broadcasted chunk refers to a Brainish-labeled response (gotten from the world), this is the current about-ness.
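The prediction step behind the second bullet (simulate in the MotW, then act) can be sketched as a small search over candidate actions. Both `simulate_in_motw` and `utility` are hypothetical stand-ins for machinery the paper leaves abstract.

```python
def choose_action(possible_actions, simulate_in_motw, utility):
    """Predict each action's effect by simulating the world's response inside
    the MotW, then pick the action with the best predicted outcome."""
    best_action, best_score = None, float("-inf")
    for action in possible_actions:
        predicted = simulate_in_motw(action)  # a Brainish-labeled prediction
        score = utility(predicted)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy usage with stand-in simulation and utility tables:
simulation = {"smell-rose": "SWEET_SMELL", "touch-flame": "PAIN"}
valence = {"SWEET_SMELL": 1.0, "PAIN": -9.0}
print(choose_action(simulation, simulation.get, valence.get))  # smell-rose
```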

KM14. “Q1. What is the point of conscious subjective experience? Or of a high level common space for conscious deliberation? Q2. Or of reflective capacities for metacognition? Q3. What adaptive value do these capacities have?”

BB14.

A1. A conscious subjective feeling is experienced when broadcasted chunks refer to Brainish-labeled sketches in the MotW. The labels describe the subjective feelings that conscious awareness is about. Without these feelings, CtmR would not be compelled to act appropriately.[31]

A2. Reflective capacities enable CtmR to treat itself, as referred to in the MotW by the sketch of itself, with all the tools it uses to treat other sketches.

A3. Conscious subjective experience is adaptive because all processors receive each and every broadcast, so all processors can contribute to understanding the broadcast and/or to its solution. The point is that all processors focus their attention on the same thing. Suppose the broadcast is a problem like “Must fill the fuel tank”: the Navigation processor might contribute a choice of routes to local fuel stations. The Computation processor might compute how much fuel is required for each choice. The Weather processor might weigh in if one of the routes is blocked. The conscious subjective experience can take account of these estimations.

KM15*. “Q1. How does mentality arise at all? Q2. When do information processing and computation or just the flow of states through a dynamical system become elements of cognition and Q3. why are only some elements of cognition part of conscious experience?”

BB15.

A1. The question asks: “How does the capacity for intelligent thought come about?” CtmR is ideal for answering this question since, at birth, all ≳10^7 of its processors are independent. The first processors to come online (meaning they have sufficient weight to get their chunks into STM) are those having homeostatic importance, like the Nociceptor Gauge (monitors pain) and the Fuel Gauge (monitors hunger), or those having immediate access to the senses, like vision, hearing, and so on. These processors help the MotWp to make and improve its predictions and world models. The next processors to come online are those that affect the actuators, one of which cries for help. Then come processors that detect coincidences like: “This visual input and that auditory input coincide.” This is the beginning of intelligent thought.

A2. The CtmR model, unlike Baars’ GW, has no Central Executive. The competition for conscious attention, which replaces the Central Executive, gives CtmR much of its cognitive power. That competition efficiently considers all information submitted for consideration by its more than 10^7 processors. It allots each idea a winning probability, or share of consciousness (broadcast time), proportional to its estimated importance.[32] It enables processors to solve a problem even though CtmR does not know which processors have the interest, expertise, or time to consider it.

A3. Some elements of cognition can be done with a single processor. That processor doesn’t need to search through an enormous database for its information: it already knows where the necessary information is held. Processors that do need to search for information[33] must broadcast to find it. That broadcast begins the process of using consciousness to do cognition.

KM16*. “Q1. How does conscious activity influence behavior? Q2. Does a capacity for conscious cognitive control equal “free will”? Q3. How is mental causation even supposed to work? Q4. How can the meaning of mental states constrain the activities of neural circuits?”

BB16.

A1. In CtmR, conscious activity is intertwined with behavior.

In CtmR, all LTM processors receive the broadcasted conscious content. Different processors have differing amounts of time to deal with that content. Of those that have time, some have a more reasonable idea how to deal with the broadcast than others. A broadcasted message that the fuel gauge is low can prompt one processor to try to conserve fuel, another to trigger a search for a source of fuel, and so on. A broadcast of danger may prompt CtmR to choose between fight, flight, or freeze, each championed by a different processor.

Additionally, CtmR’s disposition plays an important role in the competition that selects which chunk will be globally broadcast, and hence in CtmR’s behavior. (See Appendix 7.2 for more information about CtmR’s competition and the influence of its disposition.)

A2. As for “free will”, CtmR’s ability to assess a situation, consider various possibilities, predict the consequences of each, and based on that make a decision (all under resource constraints) gives CtmR its feeling of “free will”. For example, imagine CtmR playing a game of chess. When and for as long as CtmR has to decide which of several possible moves to make, it knows it is “free” to choose whichever move has the greatest utility for it. That is free will. See (Blum & Blum, 2022).

A3. In CtmR, the MotW is fundamental to mental causation. To will an act in the world, the MotWp performs that action in the MotW, then looks to see whether the act was accomplished in the world.

For example, suppose the infant CtmR discovers that it can somehow move its left leg. It becomes aware through its sensors that “willing” the movement of that leg is successful. For comparison, it may discover that it cannot pick up a rock, Yoda-style, with the power of thought. Moving the leg or lifting the rock can be willed by performing the action in the MotW. Sensors must then verify whether the act has been successful. If it has, that is mental causation.
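A compact rendering of this will-act-verify loop, with all three callables as hypothetical stand-ins for machinery the text leaves abstract:

```python
def will_act(action, perform_in_motw, actuate, sense):
    """Perform the act in the MotW, attempt it in the world, let sensors verify.
    A match between prediction and sensation is the mental causation described."""
    expected = perform_in_motw(action)  # e.g. predicted new position of the left leg
    actuate(action)                     # attempt the act in the outer world
    return sense() == expected          # sensors confirm (or refute) the willing

# Moving a leg succeeds; lifting a rock Yoda-style does not:
print(will_act("move-left-leg", lambda a: "leg-moved", lambda a: None,
               lambda: "leg-moved"))    # True: willing caused the act
print(will_act("lift-rock-by-thought", lambda a: "rock-raised", lambda a: None,
               lambda: "rock-still"))   # False: no mental causation
```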

A4. As an example, in our answer to KM4, we discussed how the Sleep processor generates a nondreaming sleep state by raising its own |weight| so high that other chunks can’t reach STM. This shows how the sleep state constrains activity in CtmR’s Up-Tree.

Kevin Mitchell ends his blog with the words, “If we had a theory that could accommodate all those elements and provide some coherent framework[34] in which they could be related to each other – not for providing all the answers but just for asking sensible questions – well, that would be a theory of consciousness.”


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::


[27] Something like this can happen in total depression and catatonia in CtmR, and in slow-wave (non-REM) sleep in humans.

[28] Humans learn to play ping pong consciously. In a ping pong tournament, however, one must let the unconscious take over and insist that the conscious get out of the way. In swimming, repetition gives one’s unconscious an opportunity to improve one’s stroke, but it doesn’t enable a new stroke to be acquired. That requires conscious attention. For example, the dolphin kick is weird and unnatural, but since it works for dolphins, it makes sense to simulate it, and that is done consciously at first. The unconscious then optimizes the constants.

[29] Some human examples of pathologies due to mislabeling: body integrity dysphoria, phantom limb syndrome, Cotard’s syndrome, anosognosia, paranoia, ….

[30] This is similar to the kind of simulation that the MotW does in a dream sequence.

[31] A person who has pain and knows everything about it but lacks the ability to feel its agony has pain asymbolia. Such a person is not motivated to respond normally to pain. Children born with pain asymbolia have rarely lived past the age of 3. The experience of pain, whether physical or emotional, serves as a motivator for behaving appropriately to the pain.

[32] This is something that tennis and chess tournaments do not provide.

[33] Like the processor that asks, “What’s her name?”

[34] Italics ours.

