In his 1950 paper ‘Computing Machinery and Intelligence,’ Alan Turing proposed a test to determine whether ‘machines can think.’ A modern equivalent of the test can be described this way: you sit in front of your computer and, through the internet, you connect to another computer. At the other computer there are two possibilities: either you will be talking to a person or to a computer program designed to pretend to be human. Your part in the game, as one of the ‘interrogators,’ is to type a series of questions to determine whether the respondent is, in fact, a computer. If it is, and if, after a certain period of time, you either mistake the machine for a human or fail to determine that you are definitely talking to a machine, the computer program has succeeded.
Turing assumes that the best strategy for the human would be to tell the truth, and that the best strategy for the machine would be to ‘provide answers that would naturally be given by a man,’ in other words, to hide from the interrogators that it is good at problems like arithmetic, where machines excel, and bad at problems like moral quandaries, where humans excel.
Turing called this test ‘the imitation game’ after a similar game where the respondents are a man and a woman and the interrogator has to determine which is which, but the test has become more widely known as the ‘Turing Test.’
In the paper Turing makes a bold forecast. He predicts that in fifty years an ‘average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning.’ But real-life Turing test competitions have not proved so successful, even now, seventy-seven years after Turing’s prediction.
Participants in the Loebner Prize Competition—an annual event in which computer programs are submitted to the Turing Test—had come nowhere near the standard that Turing envisaged. ~ ‘The Turing Test,’ Stanford Encyclopedia of Philosophy (2003, revised 2016)
The test, especially in the form of a competition, has been criticized by many experts. They suggest that it is not really a test of the thinking abilities of the computer, but rather a test of whether a programmer can create a machine that uses language in a way that seems human. In other words, competitors, especially when financial prizes are involved, are likely to be more interested in beating the test than in exploring the real issues that face AI engineers.
One of the most famous criticisms of the Turing test—and of the idea that machines can think—is a thought experiment devised by John R. Searle that has become known as the ‘Chinese Room Argument.’ Imagine that a man who speaks only English sits in a closed room. In front of him there are cards of Chinese symbols and a book of instructions written in English. The only input he has from the outside is that he is sometimes given more symbols. But the instruction book doesn’t teach him to understand Chinese; instead it says things like ‘if you are given a symbol that looks like this, take the symbol that looks like this, put it next to the symbol that looks like this, and submit it to the outside.’ All the while the man is being asked questions in Chinese like ‘What color are your eyes?’ or ‘Do you like melon?’ And to those questions the instruction book tells him to submit the symbols that translate as ‘green’ and ‘yes, especially watermelon.’
In this analogy we can see clearly that the man does not understand Chinese; he is merely manipulating symbols.
Just manipulating the symbols is not enough to guarantee cognition, perception, understanding, thinking and so forth. And since computers are symbol-manipulating devices, merely running a computer program is not enough to guarantee cognition. ~ John R. Searle, ‘Is the Brain’s Mind a Computer Program?’
In ‘Robots,’ a chapter of Michio Kaku’s 2008 book Physics of the Impossible, Kaku gives a brief history of the attempts, up to that point, by organizations and companies to create a human-like robot, and describes how they all ended in failure or in disappointing results. In particular he notes two areas of difficulty for AI engineers: movement (the recognition of objects and the ability to navigate around them) and common sense.
In another section of the same chapter Kaku talks about the problem of consciousness: that is, that we don’t really understand what consciousness is, and therefore cannot ‘come to a consensus as to whether a machine can be conscious.’ Kaku thinks the subject has been ‘overblown’ and that philosophers and psychologists have over-mystified it. Basically he thinks that we should just try to build a (thinking) machine and see what happens. In his discussion he quotes the American cognitive scientist Marvin Minsky, who describes consciousness as ‘a society of minds.’
The thinking process in our brain is not localized but spread out with different centers competing with one another at any given time. Consciousness may then be viewed as a sequence of thoughts and images issuing from these different, smaller ‘minds,’ each one grabbing and competing for our attention. ~ Kaku’s description of Minsky’s ‘society of minds.’
The description is so close to the model of functions in Gurdjieff’s picture of man as a three-brained being that one wonders whether Minsky was directly or indirectly influenced by Gurdjieff’s system.
In reading about AI, two things have struck me. The first is that there is no objective or agreed-upon scientific definition of what intelligence is. In other words, when we try to design an intelligent machine, we use ourselves as a model or benchmark, without having precise knowledge of why and how we are intelligent. The second is that the esoteric model of attention and consciousness and different brains, expounded by Gurdjieff and Ouspensky, is more revealing and precise than present-day scientific theories.
Intelligence. Intelligence is difficult to define simply because there are many kinds of intelligence. This is hardly a mystery. A brilliant physicist may be socially inept. He may excel in understanding how the physical laws of the universe operate, but at the same time have no understanding of why the people around him act the way they do. He may have a profound intellectual intelligence and a childish emotional intelligence. The model taught by Gurdjieff defines seven separate centers of intelligence for man: the intellectual, emotional, instinctive/moving centers (collectively called the lower centers), a separate center for sex, and two higher centers, which only function regularly as a result of inner effort. Though these centers are said to be energetic and can exist in the entire body, the lower centers are easily observed to have a center of gravity in specific places in the body: the intellectual center in the head, the emotional center in the solar plexus, and the instinctive and moving centers in the lower part of the back.

There is a simple exercise to demonstrate this. Every time a man says the word ‘I’ it automatically sounds in the center that is uppermost at the time; that is, in the head, the solar plexus, or the lower back. If his statement about himself is emotionally charged, it will sound in the solar plexus; if he is thinking or considering something, it will sound in the head, and if he is desirous of something, it will sound in the lower back.
Life is full of examples that demonstrate the energetic nature of the four lower centers. Think about a bad acting performance. The actor imitates the gestures and tones of voice that normally convey emotion, but his performance is unconvincing because the energy is not there. And everybody knows it. You may not formulate it to yourself; you may just think that it’s a terrible performance. But no matter how well the actor parrots what people look like and sound like when they are emotional, you won’t be convinced. This is because your emotional center vibrates at a certain frequency, and you will not be moved to feel emotion unless the actor can make his emotions vibrate on the same frequency. This is why actors learn special techniques to manifest actual emotions. Anything else is unconvincing.
The instinctive and moving centers also have their own expression. If you are in pain, your tone of voice often conveys it even in simple conversation. You can complain by how you say something as well as by what you say. The body can also display its physical prowess and confidence. You see this in athletes. There is a look and a feel about an athlete when he is doing well and winning. The reverse is also true; when an athlete loses his confidence, it shows in his movements and gestures. The energy of the body, like emotion, has a particular frequency, which is communicated to the instinctive/moving centers in others.
When we talk about the four lower centers, one of the most important differences between them is the speed at which they operate. Ouspensky gives us some numbers, which are estimates, but they give an idea of the differences. He says that the instinctive and moving centers are about 30,000 times faster than the intellectual center, and that the emotional center is about 30,000 times faster than the instinctive/moving centers. This means that the emotional center is nine hundred million (30,000 × 30,000) times faster than the intellectual center.
Let’s say that a man goes into a restaurant and orders a soup that is made from ten ingredients. Let’s also say that he has an educated palate. He may be able to list all the ingredients after one taste. Now imagine that a scientist in a lab is given the same soup and must test it to determine all its ingredients. Even with the best equipment, it would take him substantially longer to get the same results. What’s interesting to note about this example is that the man in the restaurant is using two kinds of intelligence; he needs an instinctive intelligence to be able to distinguish the various tastes and smells, and intellectual memory to remember the names assigned to the different flavors. In the lab the scientist is using only one, the intellectual center.
Common sense is difficult to program in robots because common sense in humans is not based on intellect alone, but on experiences that are common to two or more centers. We know that fire burns, not only because we were told so as children, but because we have all been burnt. The experience of being burnt places the memory (knowledge) in the senses (the instinctive center) and often, because being burnt evokes reactions like anger and fear, in the emotional center as well. Essentially the experience of being burnt is three experiences, each affecting a different part of us, which is why it is not easily forgotten.
Closed Systems. Computers are good at certain kinds of activities. They are very good at sifting through large amounts of data and choosing the best possible answer or output for a particular problem. But they are only good at systems where the aims and laws that govern the activities are known and defined. This is why they excel at games, like chess, where the rules are fixed and the number of possibilities, though large, is known.
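To make the ‘closed system’ point concrete, here is a minimal sketch of my own (the toy game and the best_move function are my illustration, not anything from the AI literature). The game is far simpler than chess—players alternate taking one to three counters from a pile, and whoever takes the last counter wins—but the logic is the same: because the rules are fixed and every possibility is enumerable, a program can simply search them all.

# A minimal sketch (Python) of exhaustive search in a closed system.
# The rules are fixed and the possibilities, though many, are known,
# so the program can always find the best move by trying them all.
def best_move(pile):
    """Return (move, wins) for the player about to act on 'pile' counters."""
    for take in (1, 2, 3):
        if take == pile:
            return take, True          # taking the last counter wins outright
        if take < pile:
            _, opponent_wins = best_move(pile - take)
            if not opponent_wins:      # leave the opponent in a lost position
                return take, True
    return 1, False                    # every legal move loses against best play

for pile in range(1, 13):
    move, wins = best_move(pile)
    print(pile, "->", "take %d (wins)" % move if wins else "lost position")

Nothing from outside the game—no boredom, no frustration—can ever enter this loop, which is exactly the limitation the next paragraphs explore.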
Ultimately all systems are finite. Even the universe is governed by a set of laws. We may not be aware of those laws or understand how they affect us, but we can assume that there are laws and that they don’t change. The difference between living life in the universe and playing a game like chess, besides complexity, is that in the universe there are different levels. In life any activity exists in time and, as it unfolds, forces from other levels may interact with it or even disrupt it. In life we are not protected by a shell of rules that are set down and agreed upon by the different parties involved.
Chess is thought by some to be a good measure of intelligence. And at the present time the best chess-playing programs can beat the best human players. But forget that for a moment. Imagine that two machines play each other. One has a deeper and more thorough program. The other is good, but not as good. Theoretically the computer with the better program will win every time. This is so because the two programs exist in a closed system without influences from other levels. In life it is different. When I was a young man, I had a friend who was a good chess player, and even though I wasn’t nearly as talented as he was, we sometimes played. At first he always won, but then I saw that if I played an extremely conservative and defensive game, he eventually became bored and made mistakes. When that happened I could sometimes beat him. I had found an advantage, but it had nothing to do with my talent or my understanding of chess. It was a psychological or emotional advantage—an advantage that came from a level outside of the vacuum of the game.
Of course this advantage would not help me in playing against a computer program—it would not make mistakes because it doesn’t know boredom—but it is also an advantage that could not be employed by the less thorough program to beat the better program. Because both programs act in a closed system, the lesser program is doomed to always be beaten by the better program.
The main point I want to make here is that when we say that computers are more intelligent than humans, we are usually saying that computers are better or quicker at specific tasks. We are not talking about a computer or robot becoming well-rounded or skilled in different types of intelligence.
Algorithms. Despite some recent advances in code writing that allow programs to act in ways that imitate brain activity, and despite some advances in programs that rewrite their own code, the decision-making process of a program is still accomplished by the use of algorithms. Wikipedia informally defines an algorithm as ‘a set of rules that precisely define a sequence of operations.’ It also notes that usually the sequence ‘stops’: that is, comes to a conclusion or creates an output. (In a chess program an output or a conclusion would be to move a piece.)
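A classic textbook instance of such a ‘sequence of operations that stops’ (my illustration, not Wikipedia’s) is Euclid’s algorithm for the greatest common divisor:

# Euclid's algorithm: a set of rules that precisely define a
# sequence of operations, guaranteed to stop and produce an output.
def gcd(a, b):
    while b != 0:            # each pass strictly shrinks b,
        a, b = b, a % b      # so the loop must terminate
    return a                 # the 'output' -- the conclusion

print(gcd(1071, 462))        # -> 21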
Let’s imagine that we’ve built a robot and that we’re teaching it to walk. Let’s say that we’ve already managed to program it to move forward and back and to the left and to the right. Now we want to write a routine that will deal with navigating around objects. The first thing we’re going to have to do is to detect if there is actually an object blocking our path. So we may want to begin with an if statement that says something like:
If your sensors detect something blocking your path
stop
Now we have to determine whether this ‘something’ is actually a physical object, as opposed to sunlight or a shadow or a projected image or a fire, or a living organism that has its own motion, like a cat or a dog; maybe it’s even a moving object like a ball rolling across the floor. All of these, as well as many other possibilities, will have to have programmed responses. We might want a subroutine that detects and recognizes all the different types of living organisms the robot could come into contact with, another that detects and recognizes the kinds of visual phenomena that have no physical mass, another that can recognize threats of all kinds, and another to determine whether a moving object is rolling toward the robot or away from it.

You can see that the code we’re going to have to write will be involved and lengthy. We’re going to need thousands of subroutines, and we haven’t even gotten to the responses to whatever it is that is blocking our robot’s path. If it’s a cat, the robot may want to say something to the cat, like ‘Move’ or ‘Go on,’ but if it’s a dog, there will need to be a determination of whether the dog is a threat. The responses are also complicated by the fact that the robot will have to recognize everything in its environment in order to move in a direction that does not put it into more danger—or, in a benign environment, we’re going to have to make sure that it doesn’t turn around and walk into a wall. And this is just a beginning. There will be many other possibilities that need to be solved by writing algorithms.
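To give a feel for how this branching looks in code, here is a hypothetical fragment in Python. The Obstacle fields and the handle_obstacle routine are placeholders invented for this sketch—not a real robotics API—and every branch stands in for what would really be thousands of lines of detection and recognition code.

# A hypothetical fragment of the obstacle routine described above.
# Every attribute below papers over an enormous recognition problem.
from dataclasses import dataclass

@dataclass
class Obstacle:
    has_mass: bool        # physical object, or just light/shadow/projection?
    living: bool = False
    species: str = ""
    threat: bool = False
    moving: bool = False

def handle_obstacle(ob: Obstacle) -> str:
    if not ob.has_mass:                   # sunlight, shadow, projected image
        return "keep walking"
    if ob.living:
        if ob.species == "cat":
            return "say 'Move' or 'Go on'"
        if ob.species == "dog":
            return "retreat" if ob.threat else "walk around"
        return "stop and assess"          # every other organism: more code
    if ob.moving:
        return "is it rolling toward us?" # yet another subroutine
    return "navigate around"              # only after checking the new path

print(handle_obstacle(Obstacle(has_mass=False)))                          # a shadow
print(handle_obstacle(Obstacle(has_mass=True, living=True, species="cat")))
print(handle_obstacle(Obstacle(has_mass=True, moving=True)))              # a ball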
The point of this is that the majority of our responses to objects in our environment are not intellectual. They are instinctive/moving or emotional. In most cases they are the result of two or three different types of intelligence.
So the answer to the question of why object recognition and motion are so difficult to program, when they are so easy for us, is that they are easy for us because we have an instinctive center and a moving center that work together, and together with an emotional and an intellectual center. We have four types of intelligence as well as four types of memory. Algorithms, at best, imitate the way our thinking works. And remember, our moving and instinctive intelligence is, by Ouspensky’s numbers, 30,000 times faster than our intellectual abilities. This speed is not gained by having a quicker processor, but by connecting to a different mode of perception—a different world, a world that is under fewer laws than the world of the intellect.
Not long ago the Mars rovers were programmed with some of the most sophisticated detection and motion software available, and they have been described as having the motion abilities of an ‘intelligent insect.’
The exponents of AI say over and over that progress in the industry is exponential, but they assume that the different types of human intelligence they are trying to imitate do not themselves operate at exponentially different speeds. According to this model, they do. Even if we assume that an algorithmic method can operate in a way equivalent to the modes of perception of the instinctive, moving, and emotional centers, there are still some high numbers to overcome: 30,000 times faster for tasks like object recognition, navigation, and self-preservation, and nine hundred million times faster for any kind of emotional perception.
Memories. Though there is no evidence of this, most scientists think that all our memories and experiences are stored in the brain more or less in the same way that we store our photographs on the hard drive of our computer. In the esoteric model memories are stored in centers, and the brain, really the entire chemical and nervous systems in the body, is seen as a network that links the perception of the centers to the body and to the other centers.
After you learn how to drive a car, you do not need to think about it when you drive. The memory, or experience, of how to drive exists in the moving center. The memories are stimulated by the senses and then communicated to the muscles through the brain and nervous system. If you had to think about each move, you would never be able to react quickly enough to drive safely. So the question becomes: if all types of memories exist in the brain alone, what accounts for the enormous difference in speed between learned movements and a step-by-step intellectual (algorithmic) mapping out of simple movements? What mechanism in the brain allows some impulses to be communicated so much faster than others?
In strokes or head injuries where the mobility of the limbs is affected, it is not that the memories of how to move are damaged—if that were the case, then recovery would simply require that the patient relearn all the affected activities. What is damaged is the brain’s ability to communicate with the affected limbs. This, I believe, is why strokes can be so frustrating for the patient. The memories of how to walk or speak are intact, but the memories can no longer be communicated to the body because the apparatus that communicates from the moving center to the body has been damaged. This also explains why the brain has the capability to reprogram itself (plasticity). If old pathways are damaged, the centers can, to some extent, find new pathways to communicate to the body. Of course there are many factors here: the age of the patient, for instance, and the extent of the injury, and there has to be a willingness to learn and make effort.
Stimulus/Response. Gurdjieff became famous for saying that man is a stimulus/response machine and that he cannot ‘do,’ so you would think that his system would not contradict a philosophy held by some scientists that machines are capable of thinking—Searle called this school of thought ‘strong AI’—but it does.
They believe that by designing the right programs with the right inputs and outputs, they are literally designing minds. ~ John R. Searle, ‘Is the Brain’s Mind a Computer Program?’
When we talk about consciousness and AI, we must be clear: what scientists want to reproduce in machine learning is not what Gurdjieff called full consciousness. It might be better called a basic awareness that is part and parcel of being human.
Man is a very complicated machine; he is really not a machine, but a big factory consisting of many different machines all working at different speeds, with different fuels, in different conditions. ~ Ouspensky
Gurdjieff’s model of man as a chemical factory is far more complex than present-day scientific and philosophical models, which are mainly concerned with the mind-body problem. (The mind-body problem is the question of how the mind and body interact, and whether there is a mind at all or whether mind is simply a subjective bodily state.) In Gurdjieff’s model the mind and the body are considered distinct, in that they can stand apart and even oppose or observe each other. Often they work together. When you use your intellectual function to convince someone to give you something you desire, the desire is not usually rooted in the mind. The mind has very few desires; the body has many.
As I have pointed out, in the fourth-way model there are, in addition to the mind and the body, separate centers for movement, emotion, and sex, as well as two centers for higher consciousness. And there are parts of the lower centers that are distinguished by the quality of attention. Attention can be mechanical, as in the spitting out of an opinion; it can be fascinated, or held, as in watching a film; or it can be held by effort, as in studying material for school that you don’t want to study.
In most modern-day theories sensations, emotions, sex, and sometimes mind are considered features of a body. In the esoteric model we separate them, not only because they function with different energies (a different energy allows a different perception), but because they vibrate with different worlds that exist outside of us.
Man is a microcosmos. Each center has a specific energy, and that energy vibrates with a specific world. For instance, our instinctive center, when it works properly, vibrates with world twenty-four, which is represented on Earth by natural environments like forests or seashores. It is in these types of environments, as opposed to environments like cities, that the instinctive center feels most at home.
In movies that portray futuristic robots with the capacity for sex, the writers usually assume that giving their robots ‘sensors for pleasure’ is enough to mimic a sex drive. But the human experience is different. We know that the drive for sex is more powerful than the drive for other sensual pleasures. This is because the sex center, when we experience it, functions with a more powerful (and higher) energy than the energy that comes with other pleasures. The drive for sex is not so much about pleasure as about the energy we feel when we experience that pleasure. Sex can motivate many other activities—everything, really—because it is essentially energetic.
This is all to say that the man-machine that programmers want to emulate is far more complex than is generally realized.
Materialism and Dualism. It’s impossible to talk about AI and consciousness without saying something about ‘materialism’ and ‘dualism,’ philosophies which in many cases limit the way we think about AI and consciousness. Materialism is the belief that nothing exists except matter and its movements and modifications. Dualism is the belief that mind and body are in some categorical way separate from each other, and that mental phenomena are non-physical in nature.
Searle complains that these two philosophies are a barrier to the study of consciousness because materialism doesn’t admit that consciousness exists—you cannot study something that doesn’t exist—and dualism, even though it admits consciousness as a phenomenon, says that science cannot study the mind because it is non-physical and therefore cannot yield verifiable results.
In his article ‘Consciousness’ Searle writes that states (of consciousness) are subjective. By that he means that they cannot be separated from the ‘human or animal subject’ or studied from what he calls a ‘third person ontology,’ say, in the way we study minerals from a mountain. (Ontology is the branch of metaphysics that deals with the nature of being.) Basically Searle, for common-sense reasons, dismisses the materialistic view that consciousness doesn’t exist. (After all we all know the difference between someone who is alive and walking around and someone who is dead or in a coma.) At the same time his view, being essentially materialistic, dismisses the dualistic view because he is certain that ‘consciousness is entirely caused by neurobiological processes and is realized in brain structures.’ He believes that the correct and only way to study consciousness is to study how the brain works, and in his article he challenges neuroscientists to try to solve the problem of consciousness. He even suggests areas of research.
Searle compares the processes that create consciousness to the processes that produce digestion, but it is not a good analogy. The creation of awareness and consciousness is far more complex and it can evolve in ways that are not possible for our digestive system.
Even if we accept that consciousness is the result of chemical and electrical processes in the brain, it doesn’t necessarily follow that consciousness can’t become strong enough to exist without these processes. Nature offers us many examples that give us an idea that this is a possibility. The human embryo needs the processes of its mother’s systems for nine months before it can exist separately, and even after birth the child is dependent on its parents for a considerable time afterward.
And of course the analogy of a second self being ‘born’ is often found in esoteric and religious texts. The esoteric Christian name for the higher emotional center is the ‘son of man.’ Though man can be viewed as a chemical factory, we can also argue that it is a factory that was designed to produce an independent consciousness.
The Mechanics of Consciousness. Basic awareness can be thought of as the capacity of one center to observe another center. For awareness of self—we’re not talking about consciousness yet—you have to have more than one level. The mind cannot observe itself, but it can observe emotions or sensations, which are generated by the emotional or the instinctive centers. It is the connections between centers that make the difference between being awake in the normal sense (the second state) and being asleep in bed (the first state). When we can’t fall asleep at night, it is because we are unable to disconnect our centers, and when we awaken in the morning, the centers begin to connect, allowing us to perform complex tasks.
In deep sleep there are no dreams because dreams are the result of unbroken connections between centers. If you dream that you are lying in a coffin and can’t move, then perhaps a fear in your instinctive center has marginally connected to your emotional center to create the images. And the frustration you feel can sometimes be as simple as being unable to connect to the moving center so that you can climb out of bed.
The body needs to sleep, but the centers don’t.
The centers themselves do not need to stop and sleep. Sleep brings the centers neither harm nor profit. ~ Gurdjieff
Here is another way of thinking about the connections between lower centers: your dog’s sense of self is not as clearly defined as your sense of self because your dog has no intellectual center. He has three lower centers and you have four. Your sense of self is more sophisticated because of the increased number of possible connections. But your dog does have a rudimentary sense of self because, for example, his emotional center can connect and respond to smells and to sensations of warmth or cold or hunger, which originate in his instinctive center. These three levels—instinctive, moving, and emotional—allow him to form a simple unifying identity.
Thinking may make us human, but it doesn’t make us conscious.
In the fourth way consciousness is distinguished from sensations and emotions and thoughts. Joy and anger and pain and pleasure are not features of consciousness, but are distinguishable from higher centers because they originate from one or more of the four lower centers. In the same way that the mind can observe joy or pain, higher centers can observe the mind and how it observes the other centers.
The study of consciousness is problematic for scientists largely because they admit only two levels of consciousness—being asleep in bed (and dreaming) and being awake—and because they take being awake (walking around and doing things) as a static state. If consciousness were always the same, and if thoughts, emotions, sensations, moods, and intuitions were simply features of consciousness, then there would be no way to study it. Again we come back to the problem that for observation you need at least two levels: something needs to observe something else. It’s a little like the fish and the water: a fish that knows nothing but water cannot know what water is.
There is another law here that should be mentioned which says that the lower cannot observe the higher. What this means for self-observation is that, for instance, the intellectual center by itself cannot really observe the emotional center in the moment because emotions vibrate at a higher and finer energy. But it can observe the effects of the emotional center, and it can affect the emotional center through attitudes and persistence exactly because it is a separate intelligence. To observe emotions in the present, higher centers are needed.
It’s interesting to note that nature gave man two mechanical levels of consciousness: sleep and what Ouspensky called ‘waking sleep.’ If man had been born with one level of consciousness—if he somehow didn’t need to sleep at night—it would be next to impossible for him to understand that consciousness has higher and lower levels.
As long as they don’t admit that self-study is a necessary prelude to an understanding of ourselves, scientists and philosophers will always just turn in circles around what they call the ‘problem of consciousness.’ Man has many illusions and buffers and prejudices, and scientists and philosophers are not free from these subjectivities just by virtue of their profession. To fully understand ordinary consciousness, higher centers must begin to function, and for that inner effort is necessary. A capacity for abstract thought and a scientific attitude are not enough. Though a study of how consciousness is connected to biology could yield some interesting knowledge, the path to truly understanding consciousness can only begin with the control of attention and exercises like self-remembering, which means work on being. Without the second layer of attention that self-remembering brings to the study of consciousness, there is no observer. And without an observer, there can be no talk of knowing the different types of intelligence that are at our disposal.
Thank you for your thoughtful article. I administer the FB group ‘Anthroposophy and Mechanical Occultism’. Rather than launching on a long recitation of Anthro perspectives, I would simply point to the highly advanced clairvoyance, unmatched by Gurdjieff or other western seers (in my opinion), which Steiner utilizes in his perception and understanding of consciousness. This in no way discredits the views of other ‘clairvoyants’ (or scientists) but rather clarifies understanding, much as an advanced algorithm out-performs a lesser program. Steiner lived a century ago, and although his cosmology includes millennia of consciousness evolution, AI (as with all technology) has undergone exponential growth since his time. While this does not invalidate Steiner’s remarkable foresight (the coming of AI), the context of different centuries must be accounted for. Steiner pointed out that this ‘new’ scientific force (AI/technology) and its increasing ‘interweaving’ with human consciousness is inevitable. It is a ‘locked-in’ evolutionary phenomenon, and as such, must be dealt with. Thus the seminal question and challenge is not if, but rather how mankind integrates this evolving reality, this merging of science and spirit.
I have been thinking about this subject for some time. You do a good job illuminating some of the details. It’s a brave subject. Maybe it’s me, but scientists give man’s consciousness way too much credit and therefore put it on a pedestal instead of doing what Gurdjieff did, which was to get to know it. According to Gurdjieff, as a race, man doesn’t amount to much more than advanced apes.
If we stick with Gurdjieff’s model, AI will never truly flourish (an AI that we could recognize, that is) until the ‘kernel’ is subdivided, like Gurdjieff’s ape-human, into 4 components: intellect, emotion, moving/instinct, and sex center. Each one of these centers would be a tall task for any team of scientists to replicate. But it should be possible if we hold true the statement ‘man is a machine.’ Then, in order to parallel common human consciousness as we know it (level 2 of 4, waking sleep), the AI’s four centers would have to be able to observe the other centers independently in some way. This would create a fifth center not made with hands and hence be parallel to our lowly understanding of human consciousness. Consciousness literally means ‘knowing together.’
When the robot has to determine if there is something obstructing its path, you are referring to association. The work, as per Nicoll, teaches of three types of association: voluntary, involuntary, and those relating to the work, i.e., self-observation associations. The robot would have to associate a pothole with ‘go around,’ a puddle as ‘fun’ or a puddle as ‘dangerous.’ But as you point out, these types of conversations about AI architecture could go on for decades without material progress because man is asleep. If we hold Voltaire’s statement true (in all things man describes, he merely describes himself), then man will just create an advanced AI machine that has the same level of being as an advanced ape.
John and Tim, thank you both for your well-thought-out comments.
You write: “Even if we accept that consciousness is the result of chemical and electrical processes in the brain, it doesn’t necessarily follow that consciousness can’t become strong enough to exist without these processes.”
If your hypothesis is correct, does that not imply the existence of an unknown substance that either _is_ consciousness or that can be a substrate for consciousness? How would we verify the existence of such a substrate substance? Or if the substance is consciousness itself, how do we verify that consciousness is a substance, as opposed to an epiphenomenon arising from ordinary matter?
Martin: This thought, or hypothesis as you call it, was written mainly to bring a different perspective to a particular formatory conclusion that some scientists make in regard to consciousness: that is, that if consciousness is the result of chemical and electrical processes in the brain, then it cannot continue to exist when the brain (or the body) dies. I was trying to present a different way of looking at this seemingly logical conclusion. The idea is that consciousness as we know it may be dependent on brain and body processes, but that the possibility exists for it to develop and to eventually exist on its own.
From one angle, the substrate of consciousness is the machine. It creates a particular focus and inner environment for the possibility of the growth of consciousness, or rather the focusing of consciousness. And the material of consciousness is higher hydrogens.
I don’t like the word epiphenomenon in relation to consciousness because it makes it sound as if the development of consciousness is secondary, or even a byproduct of life, and though this may be true from one angle, it gives the impression that consciousness is mechanical and something we have little control over. But maybe that’s just my understanding of the word.
Of course part of the problem in talking about these ideas is that most scientific studies (and scientists for that matter) don’t distinguish between different levels of being. I suspect that the idea that consciousness dies with the body for some people and not for others has never occurred to them.
Your question of how we can verify these ideas can only be answered by you, that is, through personal verification. All your inner work should be leading to experiences that demonstrate that consciousness can exist without functions. Self-remembering is the key, particularly in troubling situations. It is the only thing that can pull together the energy created by the body into a feeling of a self, a self that has the possibility to become more and more independent from the reactions of the machine and the troubles of the world.
William,
I can verify that consciousness is not functions. This is done, as you say, by self-remembering, by realising that one is not what one observes, and that includes thoughts and feelings. One is then the observer. But that does not prove in any way that consciousness is not dependent on the machine.
Consider that it is possible to explain, at least in outline, the functioning of a micro-organism, or (say) a hydra or cellular slime mould in physical and chemical terms. ‘Life’ then is an emergent property rather than a substance in its own right. Is it not possible that consciousness is also an emergent property?
Of course we have no proof either way, which is really my point. Shakespeare wrote of ‘the undiscovered country from whose bourn no traveller returns.’
I would suggest that the idea of the immortality of the soul is not essential to the Fourth Way, because the aim is to be present, and presence is now, not any other time.
Kind regards,
I found your article thanks to a search engine based on ChatGPT: https://www.perplexity.ai/. I was trying to find relationships between AI and the teachings of the 4th Way, to see if my website appeared in the searches; instead I found your article, a discovery for me.
I have known about the fourth way for years, but I started to study it seriously in a school two years ago. My website, empatiaeia.com, is influenced by the 4th Way, and although I have not been so direct about it, there are many articles in which I have used quotes from Gurdjieff, Ouspensky… and the editorial line is highly influenced by these teachings.
At the moment I have only had time to read this article, and I don’t know if you have returned to the subject of AI, but I will continue reading you. My blog is written in Spanish, but with online translators the translation is almost perfect. I invite you to visit it. I also take this opportunity to ask your permission to link to this article in a future post, where I will include this comment. With your writing you have provided me with material that I did not count on.
Thanks.
I have already published the new post: ‘Conversing with ChatGPT: Gurdjieff and the allegory of the carriage’ https://www.empatiaeia.com/conversando-con-chatgpt-gurdjieff/