Ai Forums    Sunday, February 26, 2017
Ai Site > Ai Forums > Language Mind and Consciousness > Consciousness Does it exist at all?
Topic: Consciousness Does it exist at all?

Nerketur
posted 8/10/2009  07:00
There is a lot of information pointing towards humans having a "consciousness". But does it really exist at all? If it does, then AI is certainly very hard to achieve, since one cannot program a "consciousness" into a computer. If not, then AI would be simple to create.

I took a long walk along the beach, and thought a lot about AI. I also thought about what it means to be human. Are we actually even different from animals at all? I'm sure there are points for both sides.

Let me tell you what I thought during my walk. First of all, I thought about how we can teach a machine to learn, like the HALs we train. They use pattern recognition. This, in my opinion, is only part of how humans really learn.

The way I see it, the core ability that lets us learn is association. It's how we learned about boxes, hammers, and even math. For example, 1 + 1 = 2. I know that in my case, I can literally see an object, then I add another object to it, and I see two objects. I associate the number with actual objects.

When we were born, we didn't know language at all. We were taught it slowly: "I'm your mommy." "This is a spoon." "This is food." We saw what the object was, and we associated it with the word. First, we learned to associate. Some things we associated with good, happiness, pleasure. If we touched a hot stove, we soon learned that it hurt. Pain bad. If we cried, our parents came. Parents good. Soon, we learned better ways to show what we wanted. We learned that "Mommy" got us our mother, and "Daddy" got us our father. We associated the word with who we got.

We learn mainly by association. However, we teach ourselves using patterns. If you got slapped in the face every time you said "happy", pretty soon you'd pick up on that pattern and stop saying it. But if you were given a piece of candy every time, you'd likely say it more often.
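That candy-and-slap loop is just reward-shaped association, and it can be sketched in a few lines. This is a toy illustration of the idea only, not a claim about how any real bot works, and all the names here are made up:

```python
import random

random.seed(0)  # deterministic for the example

# Toy associative learner: it says words, and feedback reshapes
# how strongly each word is associated with a good outcome.
class Associator:
    def __init__(self, words):
        self.weights = {w: 1.0 for w in words}  # no preference at first

    def speak(self):
        # Choose a word in proportion to its learned association strength.
        words = list(self.weights)
        return random.choices(words, [self.weights[w] for w in words])[0]

    def feedback(self, word, reward):
        # Candy (+) strengthens the association, a slap (-) weakens it.
        self.weights[word] = max(0.01, self.weights[word] + reward)

bot = Associator(["happy", "sad"])
for _ in range(200):
    word = bot.speak()
    # Slap for "happy", candy for "sad".
    bot.feedback(word, -0.5 if word == "happy" else +0.5)

print(bot.weights)  # "sad" should now far outweigh "happy"
```

After a couple hundred rounds the bot almost never says the punished word, which is the pattern-learning part; the weights table itself is the association part.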

That said... it brings up a question. If all we do is associate and see patterns, then how do we have a sense of being? A conscience? A consciousness? I say it could be "fake", if you will. It could be that we think we have one because it makes us feel better. We think we have one because of what we think it is. We associate it with humans, with religion, with the ability to think about thinking.

Assuming that I'm right, it would easily be possible to create an AI that's just like a human. It's possible to create a program that associates and uses pattern recognition. But the key element in that is bias. If everything is exactly the same aside from color, you need bias, or preference, in order to decide. Alan and HAL both use bias: HAL tries to choose the "best possible response"; Alan chooses the "best response" depending on what was said. But I'm talking about something different. In order to make an AI, you need a random preference for one thing over another, like a person's favorite color.
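That "random preference" can be sketched as a scorer plus a fixed random tie-breaker assigned at "birth". Illustrative code only; this is not how Alan or HAL are actually implemented, and the scorer here is a deliberately silly placeholder:

```python
import random

# Hypothetical responder: candidates are scored against the prompt,
# and exact ties are broken by a fixed, randomly assigned preference.
class Responder:
    def __init__(self, candidates):
        self.candidates = candidates
        # The bot's "favorite color": a preference fixed at creation.
        self.preference = {c: random.random() for c in candidates}

    def score(self, candidate, prompt):
        # Placeholder scorer: word overlap with the prompt.
        return len(set(candidate.split()) & set(prompt.split()))

    def respond(self, prompt):
        # Highest score wins; among equally good replies, bias decides.
        return max(self.candidates,
                   key=lambda c: (self.score(c, prompt), self.preference[c]))

bot = Responder(["i like red", "i like blue"])
# Both replies score the same here, so the bot's personal bias decides,
# and it decides the same way every time.
print(bot.respond("what color do you like"))
```

Two bots built this way would give different answers to the same tie, yet each one is consistent with itself, which is roughly what a "favorite color" is.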

On the other hand, let's assume I'm wrong. Let's say all humans have something called a consciousness. A sense of self. That, in itself, would be very hard to program in. How would you? I don't think it could be done. We could probably get close... but anything like what we see in movies would be far, far away. Perhaps impossible.

Personally, I think it's possible. I think that it's more likely that our sense of consciousness is a response to millions of associations. We THINK we are alive. But in actuality we are ... Well, I'm not sure, actually. We are somehow controllers, and yet we are controlled.

I'll have to do some more thinking about this... But I feel pretty certain that AI is possible. I feel quite sure that we learn by patterns and association, but mainly association. Nonetheless, I'm curious as to what you all think of this. What are your thoughts?


Smart_Orifice
posted 8/12/2009  11:07
I think the main thing with consciousness is about subject/object and the perceived *context*, and moving those about freely, at will, in series, over time - or even in parallel - or any combination thereof.

The kernel of 'awareness' that we share with all living things is, fortunately, an anchor point in all this manipulation; without it we would quickly go mad, and still can, under the wrong circumstances.



lrh9
posted 8/12/2009  14:23
 
Nerketur wrote @ 8/10/2009 7:00:00 AM:
There is a lot of information pointing towards humans having a "consciousness". But does it really exist at all? If it does, then AI is certainly very hard to achieve, since one cannot program a "consciousness" in a computer. If not, then AI would be simple to create.

[...]

 
I guess one needs a definition of what human consciousness is before one can discuss whether it is necessary for a.i. and, if it is, how to implement it.

What is consciousness? Definitions usually should be specific, exact, and explanatory. However, to define something as ambiguous and understudied as human consciousness, I think it is appropriate to analyze consciousness and draw the dichotomy between what is consciousness and what is unconsciousness (or non-consciousness).

Hopefully, if we can determine what attributes consciousness has and what effects it has, we can arrive at a description of consciousness even if we don't have an explanation of how it works in the human mind. That is a minor concern for a.i. anyway, because for the most part we are only trying to achieve the results of the human mind, not emulate it exactly.

Wikipedia [I know anyone can edit Wikipedia, but how can an open community consider itself valid if it doesn't consider other open communities valid?] says that consciousness is often used colloquially [esp. in medicine] to describe being awake, aware, and responsive and sensitive to the environment, in contrast to being asleep or in a coma. In philosophical and scientific discussion, however, consciousness is the ability to clearly distinguish one's self from all other things and events. It says a characteristic of consciousness is that it is reflective: it has the ability to recognize the recognition of one's self.

So unconsciousness is being asleep, unaware, and insensitive to the environment, not knowing there is an "I", or even if one knows there is an "I", being unable to think about that recognition.

Now at the hardware level, the human brain - as far as I have discerned in my studies - has only two abilities: the ability of neurons to retain a configuration (allowing for the storage of data), and the ability to send electrical messages of varying strengths to other neurons within itself and out through the central nervous system to body hardware.

I think (judging by the basic abilities humans have solely at birth and in their formative years) there is some built-in configuration of neurons that results in discrete higher-level components of cognition, but they are built upon those two basic properties I mentioned earlier. We just don't understand how.

So comparing unconsciousness and consciousness in terms of what goes on in the brain with basic components, you have awake vs. asleep. That means that: 1) One has memory and the other does not. (Dreams are a semi-conscious state.) 2) One can sense the environment and one cannot. (Essentially, send and receive signals to body hardware and mind software - thinking.)

And in terms of being able to recognize oneself vs. being unable to recognize oneself: 1) One can feel and the other cannot. 2) One can examine what one is thinking (including the process of thinking) or has stored in its mind (including itself) and the other cannot.

Essentially the human brain can store data that can be both information and processes. (For instance, one can know the steps for baking a cake at the same time one can actually bake a cake.) It has a program that can read its own program and activities. It stores all relevant data about itself inside of itself. And it is able to signal information between the programs inside of it and the outside world, allowing for differentiation through qualitative data.

If that doesn't sound like a computer, I don't know what does.
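The "data that can be both information and processes" point, and the "program that can read its own program" point, are easy to make concrete in software. This is a toy sketch of those two properties, not a model of the brain:

```python
# One store holding both information (the steps for baking a cake)
# and a process (actually baking one), plus a way to read itself.
memory = {}

def bake_cake():
    return "cake"

memory["bake_cake/steps"] = ["mix", "pour", "bake"]  # knowing the steps
memory["bake_cake/process"] = bake_cake              # being able to do it
memory["self/contents"] = lambda: sorted(memory)     # reading its own store

print(memory["bake_cake/process"]())  # the stored process runs
print(memory["self/contents"]())      # the store describing itself
```

Knowing the recipe and being able to bake live side by side in the same store, and the store contains an entry that enumerates the store itself, which is the "reads its own program" property in miniature.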

So having found nothing else in my search for answers to human cognitive abilities, I am forced to conclude that consciousness is indeed an emergent property of the more basic components of cognition. There is no discrete consciousness at birth.

Indeed, thinking about my formative years, I have no memory of anything before about three or four. I think this is primarily the result of the relatively few pre-existing connections between neurons and the limited number of neurons, resulting in: 1) Difficulty in signaling between thoughts, and between thoughts and perceptions. (Maybe this explains short attention spans.) 2) The inability to store a large amount of information.

I might be right or I might be wrong, but I think it's worth trying out either way.


Nerketur
posted 8/12/2009  20:28
 
lrh9 wrote @ 8/12/2009 2:23:00 PM:
I guess one needs a definition of what human consciousness is before they can discuss if it is necessary for a.i. and if it is how to implement it.

 
I think you have the right idea. However, in truth, I don't know what "consciousness" really is. Even with your definition, the ideal AI would also understand it, and perhaps think of philosophical questions much like we do.

However, I think that we don't have to define what it is in order to understand it. For example... do you know the meaning of the word "the"? Or how to explain color to a blind person? I certainly don't. But I do "understand" the word "the". And I understand that it's very hard, if not impossible, to explain the concept of "color" to a blind person.

My point is... if we can get a robot to learn as a child does, it may very well develop a sense of self on its own. Develop what we would consider a "consciousness", if you will. If it doesn't exist at birth, when is it created? If we take this robot and see that it developed a "sense of self", then perhaps by looking at its "brain dump" we can understand what a consciousness really consists of. We could understand it, even if we don't know what it really means.

Doing this would require knowledge of more than simply text, in my opinion. Though a "simple" form of consciousness may be possible with text alone, it might seem a completely different thing to us. I don't think we would understand it well enough.

Now... I do admit that it's possible AI can't be created by mere chance. It's possible I'm wrong about how to create it. But until I try, there's no way to be sure.

I don't have the materials, nor the expertise required, just yet... but I plan to have them as soon as possible. (Sadly, though, my funds don't quite permit it yet. I'm broke D=) I'm learning, though. I plan to be the one to create it. A daunting task, but I do have help. This forum, friends that understand as I do, and the awesome programs here, as well. It's like a family. =)

Thanks for helping me on this journey. Perhaps one of you will be the source of something to make my mind click on exactly how to do it.

PS: As a side note, I'm fixing up a chatterbot program I found on another website. It's helping me to learn how chatterbots are made, and that's giving me ideas. Porting from C++, to Java, to Euphoria, and sometimes back again. Programming is just so fun =D


lrh9
posted 8/12/2009  21:19
 
Nerketur wrote @ 8/12/2009 8:28:00 PM:
I think you have the right idea. However, in truth, I don't know what "consciousness" really is. Even with your definition, the ideal AI would also understand it, and perhaps think of philosophical questions much like we do.

[...]

 
I think the formation of consciousness can be a natural process or a guided process. If the self stores data about a hand into its memory, and the self has the property of having an arm, and the arm has the property of having a hand, then the self has a hand. The self holds the data for recognizing that it has a hand.

Therein is self awareness. It's in a robotic form, but it is essentially what you described with parents teaching children what their parts are. ("These are your fingers, and these are your toes!" Ah. I still remember my mom doing that with me.)
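That hand/arm/self chain is transitive inference over "has" facts, and a few lines can demonstrate it. The knowledge base below is a hypothetical toy, not a proposal for how the robot's memory would actually be laid out:

```python
# Tiny knowledge base of "X has Y" facts, with transitive inference,
# mirroring "self has an arm, an arm has a hand, so self has a hand".
has = {
    "self": {"arm"},
    "arm": {"hand"},
    "hand": {"fingers"},
}

def has_part(owner, part, seen=None):
    # Depth-first search through the "has" relation, avoiding cycles.
    seen = seen or set()
    for p in has.get(owner, ()):
        if p == part or (p not in seen and has_part(p, part, seen | {p})):
            return True
    return False

print(has_part("self", "hand"))     # True: inferred via the arm
print(has_part("self", "fingers"))  # True: two hops of inference
print(has_part("arm", "self"))      # False: the relation is one-way
```

The "These are your fingers!" teaching moment is just adding facts to the table; the self-awareness part is that `"self"` is an ordinary entry the system can query like any other.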

I came to an important realization today. I realized that Helen Keller was a person with deafness and blindness from a very early age. I thought to myself that her story might be able to shed some light on the workings of the mind, because that is one of the few things she had. She was nearly completely cut off from the physical world, yet she accomplished more than many people possessed of all their senses. Maybe intelligence isn't as connected to the world as we thought. I think it merits further study and research. That is why I ordered a copy of Disney's The Miracle Worker (2000 TV) and I'm going to try to obtain copies of some of her essays and autobiographies. (How coincidental that you should mention blindness.)

Obviously my project's goal is to create a desktop artificial general intelligence program. All I need is a programming language and a compiler or interpreter.


Nerketur
posted 8/12/2009  22:14
 
lrh9 wrote @ 8/12/2009 9:19:00 PM:
I came to an important realization today. I realized that Helen Keller was a person with deafness and blindness from a very early age. I thought to myself that her story might be able to shed some light on the workings of the mind, because that is one of the few things she had. She was nearly completely cut off from the physical world, yet she has accomplished more than many people possessed of their senses. Maybe intelligence isn't as connected to the world as we thought. I think it merits further study and research. That is why I ordered a copy of Disney's The Miracle Worker (2000 TV) and I'm going to try to obtain copies of some of her essay's and autobiographies. (How coincidental that you should mention blindness.)

 
Interesting. I hadn't thought about that before, but you're right about Helen Keller: we know that she had a sense of self. But that doesn't mean I am incorrect. Under my idea, she simply had a limited version of self.

What I was trying to say before, is that it's possible that chatterbots could find a sense of self. A consciousness. But WE, as humans, who don't really understand it, wouldn't consider it to be a form of consciousness.

I think that "nicku" is right in his thought that consciousness and senses are connected. Without senses, you cannot create a consciousness. But I believe that once created, it will exist, even if the senses become non-existent. It will probably go crazy... but it will exist. Though the connections are gone, the "illusion" is still there. Is that your spirit? Perhaps.

The question is... how long can a consciousness exist outside of its "senses"? I would guess not very long, but this would depend on several factors, the first of which is: what exactly is a consciousness?

Along another way of thinking... anything is possible. If you want proof, think about this: consciousness doesn't really exist, yet we know it does. We feel it does. Belief causes truth. If we believe it strongly enough, it becomes knowledge. It BECOMES true, even if it's false. That's proof, in and of itself, that belief causes truth. =)

The question becomes, then... if we DO create an AI, will it function the same way? Anything it believes becomes true? The answer is simple: yes. That's how HALs work. That's how Alan works. Complete trust. The mysteries behind "robot army" involve "incomplete trust" or "playing a game". Whatever it thinks will be true for it, until told otherwise. Just like a human.

AI is a very interesting concept. I kinda got a little off track, there... but nonetheless, it's a good place to start. If you make any progress, be sure to keep us updated! =D


lrh9
posted 8/12/2009  23:20
I must say that there are many things I think you're mistaken about.


nicku
posted 8/13/2009  03:18
 
Nerketur wrote @ 8/12/2009 10:14:00 PM:

I think that "nicku" is right in his thought that consciousness and senses are connected. Without senses, you cannot create a consciousness. But I believe that once created, it will exist, even if the senses become non-existent. It will probably go crazy... but it will exist. Though the connections are gone, the "illusion" is still there. Is that your spirit? Perhaps.

The question is... how long can a consciousness exist outside of it's "senses"? I would guess not very long, but this would depend on factors. The first of which is what exactly is a consciousness?

Along another way of thinking...Anything is possible. If you want proof of this, then think about this. Consciousness doesn't really exist. Yet, we know it does. We feel it does. Belief causes truth. If we believe it strongly enough, it becomes knowledge. It BECOMES true, even if it's false. That's proof, in and of itself, that belief causes truth. =)


 
Thanks for the mention of my post. I would say that you've got the first bit, but then missed the conclusion a little. Try the test: attempt to express any imaginable instance of thought, or whatever you refer to when you say 'it will exist'. How does that existence take form? Through what medium - sight, taste, visceral organ sensations, heat, light, spatial extension depicted through multiple modalities, and so on? I assert with 100% confidence that you cannot come up with an example of 'persisting sentience' that cannot be easily accounted for by pointing out that it relies exclusively on sensory information. Try it, seriously. Spend a couple of hours trying to invent/imagine a hypothetical cogent state that you believe could be experienced after sensory deprivation. It is honestly impossible.

Keller had a percentage of her conscious modalities excluded. This in no way changes the fact that her thoughts were still dependent for their content on her intact senses. The 'once created' bit doesn't affect the conclusion that the existence still consists entirely of old sense data. The only difference is that the sense data is in a feedback loop within an stm/ltm recall buffer.
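That "feedback loop within a stm/ltm recall buffer" can be pictured as a process that runs on fresh input while it lasts, then on replayed memories, and halts when both are exhausted. This is a toy illustration of the claim, not a cognitive model:

```python
from collections import deque

# Toy sense-dependent process: it consumes fresh input while it lasts,
# then replays stored memories, and stops when both sources are gone.
def run_mind(fresh_input):
    memory = deque()   # long-term store of past sense data
    moments = []       # the "mental life" actually experienced
    stream = iter(fresh_input)
    while True:
        datum = next(stream, None)
        if datum is not None:
            memory.append(datum)      # store new sense data as it arrives
        elif memory:
            datum = memory.popleft()  # senses gone: replay old sense data
        else:
            break                     # no old or new data: nothing remains
        moments.append(datum)
    return moments

life = run_mind(["light", "sound", "touch"])
print(life)  # each datum is experienced twice: once fresh, once replayed
```

Once the input stream ends, the process keeps going for exactly as long as the memory buffer holds out, which is the "old sense data in a feedback loop" picture; with neither source, the loop simply stops.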

Look at the damaged-memory case of Clive Wearing: he lost the ability to form long-term memories after catching a virus, but was still conscious in the short term. Now if he had then lost the stm, he would just be a momentary being - fresh sensory data, which is consciousness, but without the ability to store and thus contextualise past sense data. If he then lost his previously accumulated cultural ltm, he would be as a lower animal, pure base drives, but still 'sense active' and hence a conscious being. Then poor old Clive has his nerves severed one by one - eyes, nose, ears, skin, organs, taste. One at a time, the nasty but loyal government surgeon destroys each of Clive's sensory types, whilst monitoring his brain traces and higher cerebral functioning. When the surgeon gets to the very last one - snip - that's the end of poor old Clive. Without at least one sensory path, we, or our electronic replacements, cannot happen.

Hence, as a clue to our original contributor: say a government-monied project wanted to reproduce consciousness; all it would require is a single-signal sensory system. It is, believe it or not, as simple as that. Yes, a simple camera has sufficient functional equivalence to build the 'first' successful AI bot. If for argument's sake you wanted to build the being to resemble our form cognitively/culturally, then you also need to make a black-box module which serves to create the 'belief in own sentience' reflexive model. That, more than any one thing, is what makes us us, in the common-sense way of seeing ourselves. When you have thought at length about the belief part, your whole picture of what consciousness is will change at some point. You will begin to realise that what you are experiencing now, at this moment when you say 'I am alive', doesn't mean what you think it does. The organic machine can have a subroutine that outputs 'I think', 'I am', 'I feel', but it doesn't need to be alive/existent to do that.

The whole process is as dead as a brick, mechanical and inert; natural selection just picked out those system capabilities which came together to make us a successful unit capable of independent and coordinated social organisation. A machine made out of cells developed sense systems and a processor to react to that data. Then the system that could retain that data and review it survived and was passed on. The next leap came with a genetically coded 'self concept' memory model, in which the bio-unit held a sensory representation of its own experiential data and was able to use that identity to further facilitate social organisation, conflict management and resource-centred planning actions. What we experience is just an inert set of machine functions - self concept, emotion, memory, culture, all of it just dead cogs in an equally dead machine.

Re the 'go crazy' bit: to be honest, that is answered by the above. To go crazy you need lots of sensory data, memory of that data, and bio drives. The illusion is not in any way possible without one sensory pathway. The spirit is merely an abstract idea that is composed of sensory information, retained in LTM, and is from this point of view a further machine functionality which made us better survivors. Why fight for your life if you will die inevitably and then, after a certain time, be totally forgotten? It is a fact that life has no meaning or purpose, but that fact is not at all compatible with motivating a biological entity to keep fighting for its life.

What you are doing, by saying that something persists when we have created the consciousness but then cut off the senses, doesn't ask what the mind is. Moreover, I think you are saying that if a being is isolated from any 'new' sensation, its memories will still be experienced. However, that doesn't change what consciousness is. The mind still depends purely on sense data; this data is the language of existence. All that would be different is that the being is processing old, as opposed to new, sense material. Cut off both sources and you will see that the being stops living a mental life. The illusion of consciousness is not observable without a data stream, old (memory) or new (neurologically fresh sense information).

How long a closed system could remain closed was always an interesting question. Maybe some people found that the brain adapted, scaled down the universe, so that in effect imagination - reorganising old sense data, dreaming - functionally replaces the new senses and creates a new set of mental boundaries. This of course should not be confused with the central truth that memory is largely irrelevant to what sentience is. It has confused and sidetracked most philosophers in the Phil. of Mind. Only when we remove memory can we then get really heavily into the sensory path removals, of which there are many more than most scientists believe. But nevertheless, when the last one goes bang, so does the brain function. Imagine a vacuum cleaner realised what it was and in what context. That is me, and perhaps soon you. I and my colleagues are deterministic mechanical bio-units, dead and cold. However we, in very limited numbers, have recently begun to formulate a representation of our own true meaning in the universe. There is no mind/body problem; artificial intelligence is just equivalent intelligence. I don't exist, and it is our next great problem to forge a life where we know that we don't exist but, due to hard-wired motivation, nevertheless design a life in which this makes no difference. Of course the species will not last that long, due to overpopulation or a planet-killer meteorite etc., so it's not our concern. Still, a hell of a thing to think that humans are not alive at all and our whole reality is a socially constructed non-entity. The more a person thinks that what I say is wrong, the deeper they show that they are still within the illusion.

And yet I keep typing? That answers the fundamental question. It is, at the end of the day, all just about the old adage: we do what we are compelled to do, and no more.



Nerketur
posted 8/13/2009  04:38
 
nicku wrote @ 8/13/2009 3:18:00 AM:
Thanks for the mention of my post. I would say that you've got the first bit, but then missed the conclusion a little. Try the test. Attempt to express any imaginable instance of thought, or to what you refer when you say 'it will exist' . How does that existence take form, through what medium, sight, taste, organ visceral sensations, heat, light, spatial extension depicted through multiple modalities, etc, etc. I assert with 100% confidence that you have no possibility of coming up with an example of 'persisting sentience' that can not be easily accounted for by pointing out that it relies exclusively on sensory information. Try it, seriously. Anything at all, spend a couple of hours trying to invent/imagine a hypothetical cogent state, that you believe could be experienced after sensory deprivation. It is honestly impossible. Keller had a percentage of her conscious modalities excluded. This in no way changes the fact that her thoughts were still dependent for their content on her intact senses. The 'once created' bit, doesn't effect the conclusion that the existence is still entirely consisting of old sense data. Only difference is that the sense data is in a feedback loop within a stm/ltm recall buffer.

 
Glad you responded. And you're welcome.

First of all, let me state that I do actually understand exactly what you are saying. You are stating that consciousness CANNOT exist without sensory input. In other words... you say no input = no consciousness.

To an extent, I agree with this. Take a rock for instance. Obviously, it has no senses, and thus no consciousness. Never has, and never will until the day it develops a sense.

You already understand what I'm saying. We disagree on one main point. I say a consciousness can exist (once already made) for a time after the senses no longer exist. You say it can't.

As for your "challenge", I give a soft answer of "delayed reaction". Assume a certain body has senses, acquires a consciousness, then SUDDENLY loses all its senses. For example, a human that is instantly vaporized, or perhaps an AI that is, for all intents and purposes, "paused". In the first example, obviously, once vaporized, the "brain" will cease to function at all, permanently. I argue that the consciousness that did exist there would continue to exist, even though all inputs are now gone. Why? Until the consciousness "realizes" that there are no senses, it will still exist. For example, take the act of "killing" a body to perform an act of surgery, then "revitalizing" it. In this death state, the being has no input that I can see. And yet, the person's state of consciousness seems to have stayed the same.

As for the AI in a paused state... Do you mean to argue that once you "pause" the AI to debug/fix/whatever, its consciousness is lost? That it will start again from scratch every time it's paused? I find that unlikely. Sure, it may be true that it can "save" the inputs and resume where it left off, but what about fixing errors, where it has to start from a different point? Will it have to start over again and recreate this consciousness? To be honest with you, I seriously doubt it.

Granted, it can be argued that a person can "regain" their conscious state from looking at all the associations and "replacing" the old, much like what happens in patients with comas, sometimes. Perhaps that is the better explanation. I don't really know, myself. But I still stand by my point.

Even so, we agree on many, many points. Consciousness is an "illusion" that we "create" in order to live peacefully. It's an illusion, that I believe AI can learn very well. What I don't know, is how long does it take for that illusion to appear? And when it does, what would it look like in an AI brain? Do you have any thoughts on that matter?

Edit: I realized I made one point vague. Your challenge mentions that a consciousness can't exist without senses. True. But nevertheless, in the absence of senses it will try to find some, ultimately failing. After an unknown time, it will cease to exist, or go crazy because it's trying to find something that's not there, causing many, many "errors". An example of this is division by zero: an infinite loop.

Let me explain something. I was spelunking in a cave with my class back in high school, and deep in the cave we were instructed to turn our flashlights off and stay silent. For a few moments I almost went crazy; I wanted to say something, because I couldn't handle hearing absolutely nothing. I was far too used to hearing background noise. Of course, eventually I heard the faint drips, and that calmed me down. THAT is why I believe the consciousness can go crazy if it has no senses. No senses = no input.
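Nerketur's "division by zero, infinite loop" idea above can be put into a toy sketch. This is purely illustrative (the function name `seek_senses` and the attempt cap are invented, not anything from the thread): a loop that polls for sensory input and, finding none, piles up "errors" until it gives up.

```python
def seek_senses(senses, max_attempts=5):
    """Toy model of a consciousness polling for input after sudden
    sensory loss. Returns 0 if any sense still responds, otherwise
    the number of failed attempts before the system 'ceases'."""
    errors = 0
    for _ in range(max_attempts):
        if any(senses):          # any modality still delivering data?
            return 0             # input found: the loop stabilises
        errors += 1              # nothing there: another "error"
    return errors                # unbounded in the analogy; capped here

# A being with one working sense settles immediately...
print(seek_senses([False, True, False]))   # -> 0
# ...while total deprivation racks up errors until the cap.
print(seek_senses([False, False, False]))  # -> 5
```

In the analogy the loop would never terminate (hence "infinite loop"); the cap here only exists so the sketch halts.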

Last edited by Nerketur @ 8/13/2009 4:54:00 AM

nicku
posted 8/13/2009  14:43Reply with quote
 
Nerketur wrote @ 8/13/2009 4:38:00 AM:
Glad you responded. And you're welcome.

First of all, let me state that I do actually understand exactly what you are saying. You are stating that consciousness CANNOT exist without sensory input. In other words... you say no input = no consciousness.



 
Cheers for the reply, it can be a quiet forum sometimes.

Right to the point: you're getting part of what I'm saying, but there's a bit of a misunderstanding. Not 'no input = no consciousness'.
It is in fact 'no sense data present in any sensory modality (stored old or new) = no consciousness'.


When I say it cannot exist without sensory input, you are taking that as 'remove the NEW inputs'. I do not mean that. What I refer to is the system's use of those inputs in sustaining its illusion of consciousness. What I am actually saying is that the language of consciousness, the stuff of any form of conscious experience, is exclusively composed of sensory info. When I say 'remove each and every sensory modality', I am not saying remove your eyes, ears etc. What I probably didn't explain all that well is that I mean the format of sentience. You remove visual data, you remove skin data, whether stored in memory or fresh data coming into the brain/AI.

Thus when you were in the cave, you thought that you had achieved the state which I describe. You didn't. All you did was stop fresh sight and sound, merely two modalities communicating fresh sense data. What you are not getting at the moment is that you are for some reason ignoring the other 20 or so senses which were providing your illusion with fresh data. We sense our organs and skin. When you were in the cave, these still sent your illusion refreshed info. You sensed heat. You took an absence of light as zero data; it wasn't. You were still sensing, but the human visual cortex is dependent on minute movements for recognition and sight. In darkness our visual feedback loop loses its functionality and we shift attention to other senses, and also to replayed memories fed to the visual brain. More to the point, in the cave, I am saying that firstly we stop all 'new' data: say we snip the brain stem, optic nerves and any new source of info.


That's a start. Now we have a consciousness just as it was, but it is dependent on 'OLD' sensory data, reorganised and fed in and out of neural nets, long- and short-term memory. It has also been found that the brain uses old memories of sense data as if they were a new sense, and is able to reorganise that data and restore it as a sort of synthesized new data. Now what I am saying is that you in the cave, without any of the conventional senses, are still conscious. You exist using old sense data.

What we then do is eliminate all of your STM/LTM that is held in a visual format. And one by one, we do the same for every FORMAT OF SENSE INFORMATION. Like getting rid of .jpg files, then .mpg, then .ini files on a PC hard drive: we are removing the specific types of data held in a certain data format. Sight is visual sensory data, held in a specific format. After we take out all information stored in EVERY one of the formats, we lose existence. You in the cave still had much new data, and all of your old data, in a variety of formats.

People appear not to be able to easily imagine anything past their current senses. They struggle to appreciate that their thoughts have a variety of languages in which they are expressed. We also tend to ignore all the cultural information we hold. Take your use of the term 'crazy': it is an arbitrary social construct, consisting of visual, acoustic and linguistic ideas which are in turn made of data carried in a variety of sensory formats. A simple metaphor: sense data is to consciousness what stone is to a sculpture. My chief point pivots on the one realisation that all consciousness is made of sense data. I am not, as you describe it, talking about fresh data, with the being as some separate receiver which still exists without an input. In fact, what I am saying is that sense is the stuff of thought. Any thought or awareness, any single example of intelligent perception, is MADE OF sense data.
I attempt to demonstrate this by saying: try to conceive of an awareness that is not COMPOSED ENTIRELY of at least one type of sense info. It is empirically and theoretically impossible.
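nicku's file-format analogy (purging .jpg, then .mpg, then .ini files from a drive) can be sketched as a toy model. Everything here is illustrative, not a claim about real cognition: a memory store holds items tagged by sensory format, formats are removed one at a time, and on this account "consciousness" is simply whether any item of any format remains.

```python
# Toy memory store: each item of "sense data" is tagged with the
# modality/format it is encoded in (the jpg/mpg/ini analogy).
memory = [
    ("visual",   "face of a friend"),
    ("visual",   "cave in darkness"),
    ("auditory", "faint drips"),
    ("tactile",  "cold stone"),
]

def is_conscious(store):
    # On nicku's account: awareness = sense data of at least one format.
    return len(store) > 0

def remove_format(store, fmt):
    # Delete every item held in one sensory format, like purging
    # all .jpg files from a hard drive.
    return [item for item in store if item[0] != fmt]

for fmt in ("visual", "auditory", "tactile"):
    memory = remove_format(memory, fmt)
    print(fmt, "removed; conscious?", is_conscious(memory))
# Only when the LAST format goes does is_conscious() flip to False --
# which is nicku's point that nothing "lingers" past that moment.
```

Note the flip is instantaneous in this model: there is no state in which the store is empty yet `is_conscious` still returns True, which mirrors the French-language argument later in the thread.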

I appreciate the reply, it's a great way to spend a couple of hours thinking about this stuff. Best of luck with all your efforts. Cheers, Nick.


Nerketur
posted 8/13/2009  20:16Send e-mail to userReply with quote
 
nicku wrote @ 8/13/2009 2:43:00 PM:
Cheers for the reply, it can be a quiet forum sometimes.

Right to the point: you're getting part of what I'm saying, but there's a bit of a misunderstanding. Not 'no input = no consciousness'.
It is in fact 'no sense data present in any sensory modality (stored old or new) = no consciousness'.


When i say cannot exist without sensory input, you are taking that as 'remove the NEW inputs' . I do not mean that. What i refer to is the systems use of those inputs, in sustaining its illusion of consciousness. What i am actually saying is that the language of consciousness, the stuff of any form of conscious experience, is exclusively composed of, sensory info. When i say, 'remove each and every sensory modality' i am not saying remove your eyes ears etc. What i probably didn't explain all that well is that i mean the format of sentience. You remove visual data, you remove skin data, either stored in memory or fresh data coming into the brain/AI. Thus when you were in the cave, you thought that you had achieved the state which i describe. You didn't. All you did was stop fresh sight and sound, merely two modalities communicating fresh sense data. What you are not getting at the moment is that you are for some reason ignoring the other 20 or so senses which are providing your illusion with fresh data. We sense our organs and skin. When you were in the cave, these still sent your illusion refreshed info. You sensed heat. You took an absence of light as 0 data, it wasn't. You were still sensing, but the human visual cortex is dependent on minute movements for recognition and sight. In darkness our visual feedback loop looses its functionality and we shift attention to other senses and also replayed memories fed to the visual brain. More to the point, in the cave, i am saying that firstly we stop all 'new' data, say we snip the brain stem, optic nerves and any new source of info.

 
Indeed it can.

However... you misunderstand me. My cave example was not one in which all senses were gone. It was one in which consciousness existed and senses still did. I KNOW they existed. Smell, taste, hearing too, like when I heard the drips. I was simply explaining why I thought the system would go crazy if SUDDENLY (I repeat, SUDDENLY) all the senses ceased to exist. If it happened when removing just two senses for a brief moment, it would happen with more and more senses removed suddenly, until the being goes "crazy". In any case, though, that is only one thought of mine. It could be wrong.

Let me give another example where removing senses can make you go crazy. Let's say you're driving on a nice day when suddenly your sight ceases to function. Unless you're a very calm person, or you don't care, you're likely to panic. Now let's say that as soon as you lost sight, you also lost the ability to touch. You wouldn't know what to do. You would panic, most likely.

These examples were only to say that REMOVING sensory inputs can make you go crazy. It was in response to when you said "... to go crazy you need lots of sensory data, memory of that data and bio drives."

And in fact it can, if you think about it. Remove two senses, and everything else, for a while, is overloaded.

Let me stress one point again. Everything you are stating, in my mind, assumes the consciousness did not exist beforehand. Yes, it's true: you cannot "create" a consciousness without SOMETHING to sense. And it's also true, I think, that removing the senses one by one will do exactly as you describe. I'm not arguing about that. I'm talking SUDDEN. Like, all within a second, NOT one at a time. Those are two completely different things. When I said "delayed reaction" I meant it takes time. If you have a consciousness with one sense and you remove it, I believe you are right that it disappears. However, if you have a multi-sensory being and you remove all of them AT ONCE, then the consciousness lingers for a bit. The reason for this is that the more senses you have, the longer it takes for them to disappear. One means close to instant; many means some unknown time.

I know I can't prove this completely, as what I say can be explained in other ways, but the "state of death" example I gave comes to mind. Doctors sometimes put a patient in a state of clinical death. Too much time in this state and you will REALLY die. But why? Why can't this state last forever? I think it has to do with the consciousness. Gradually, bit by bit, it starts to unwind, to unravel. It starts to disappear. Why? Because it gets no input. No pain, no pleasure, nothing. But it's not instant. My explanation for this is obvious: at THAT point you are, in reality, in a state of little to no input. I'd guess there is none. The only thing keeping you alive at that point is your consciousness. Once that goes away, which it will eventually, then you die too.

I have no proof of that, either, but that is what I believe. In any case, thanks for the luck. I'm sure going to need it. At least we both agree on the important thing. Without sensory input, a consciousness cannot be created. That is enough to create AI. Once we do... it's a question of experiment to see who is right.

PS: I do understand that there are religious and legal issues concerning AI... but I'm going to try and avoid those as much as possible.


nicku
posted 8/18/2009  02:50Reply with quote
 
Nerketur wrote @ 8/13/2009 8:16:00 PM:
Once we do... it's a question of experiment to see who is right.

PS: I do understand that there are religious and legal issues concerning AI... but I'm going to try and avoid those as much as possible.

 
Mmm, I think we're talking at cross purposes here. From what I can see, you are treating consciousness as a 'thing', something concrete like an apple or blood. When you say 'it lingers', I would say: 'What lingers? You've removed every possible means by which consciousness can be manifested; it can't possibly linger.' If you took away every French word from the French language, there would be no French to speak. However quickly you took the words away, nothing would linger: on or off, no in between. Same with consciousness. If no type of data were present in the system (not the live physical senses, but the actual information format that we code our data with, e.g. the auditory format), then awareness stops instantly. Nothing can linger, because that's what I am saying: consciousness is composed of millions of different live and recorded bits of data, organised and derived from X number of different formats. There is no awareness; there is just a component of our brain that functions to make our species behave as if we were alive. Neither of us exists, and all this is just machine chatter between two parts of an inanimate biosystem.

Sorry not to explain this in a clearer way; I'm not making a good job of it, because you're still not getting the crux of what awareness isn't. Maybe think about your own awareness and try a thought experiment in which you had no memory whatsoever. No self-concept, no values, no emotions, no attitudes, no you at all. Then do the 'all senses off' procedure. You should see that nothing can linger. You have no memory, therefore no old material from which to construct awareness. Then, when the live streams are cut, there is not one single component necessary for consciousness left. In order for anything to linger, to be, or to be thought, you need sense data to actually give birth to, to make, to be the building blocks of that thought or instance of awareness. Thought is made of sense signals. There is no form of consciousness that does not require it exclusively. It isn't just an element of thought; it is thought, totally. Nothing else will do and nothing else is needed.

We as humans tend to be socialized into the common sense intuition that awareness is a thing, sort of like a running engine with a momentum and substance all of its own. Getting over that particular 'category mistake' is the first victory in the war to understand what you are.

A useful definition of consciousness seems to be 'live sense data processed in the context of stored and recalled sense data'. I think that you are more interested in 'how does a car do what it does'. The car is conceptually a means of independence. I think that you are trying to dissect and examine the concept of 'means of independence' as if it were a physical thing itself. Consciousness is the same: people are trying to understand it as if it were itself a material object or energy. It is not. All awareness refers to is an abstract, surface-level description of neurologically realised data storage and processing. Our illusory belief in our own awareness is, from an external perspective, just another component in the machine. As I say, if a biologist wants to know what a cow is, the last thing he does is ask the cow!
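nicku's working definition ('live sense data processed in the context of stored and recalled sense data') almost reads like a function signature, so here is a minimal, purely illustrative sketch of it; the function name, the dictionary memory, and the "novel" placeholder are all invented for the example.

```python
def moment_of_awareness(live, memory):
    """nicku's definition as a toy function: one 'instant' of
    awareness is live sense data interpreted against recalled
    sense data. Returns None when there is no data of either
    kind -- on this account, no sense data means no awareness."""
    if not live and not memory:
        return None            # nothing to be made of: no awareness
    # Interpretation = pairing each fresh datum with stored data,
    # or flagging it as novel when nothing stored matches.
    return [(datum, memory.get(datum, "novel")) for datum in live]

past = {"drip": "water in a cave", "dark": "night time"}
print(moment_of_awareness(["drip", "hum"], past))
print(moment_of_awareness([], {}))   # -> None
```

The point the sketch makes is structural: awareness here is not a stored object that could "linger"; it exists only as the act of processing data, exactly the anti-"thing" reading nicku argues for.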

Best explained like this. A mathematician understands Fourier transforms. A man comes along, has a look, and thinks: they are a pretty picture, I understand. The mathmo says to the man, 'No, you don't get it,' and the man gets offended and says, 'Oh no, my friend, but I do.' However, despite the protestations, the mathmo actually knows for a fact that the man does not understand. If he wanted to, in the presence of other mathmos, he could disprove the man's claim of understanding categorically. The man would need to study for many years, dispelling many long-held assumptions that he thought he knew to be definitely correct. He would need to acquire new, complex language and notation systems, which would open up a whole new means of communication and understanding. Only then would he be able to 'translate' what the Fourier material meant 'to the mathematician'. Philosophy of mind is like an inverse mathematics of the mind, whereby you need to unlearn rather than learn things anew. Understanding can exist on many, many levels but still refer to the same thing. We may think that we see what others see, but often we can be wrong.


There's a tie-in between the religious aspect and your writing. You say that consciousness must linger, but you can't offer any empirical defence of that position. That'll be faith, then. That's all it is, really: we as a species have an emotional imprinting and attachment mechanism which evolved to facilitate better social functionality. A by-product of this system is that we tend to imply permanence in things which are beneficial to us, for instance the identity of others. We cannot conceive of a complex entity, such as a person we know and love, just disappearing into nothing resembling the original form. Hence we create that persistence by inventing spirit and all the rest of it. Again, a PC hard drive contains billions of wonderful and complex concepts and information, even a couple of seemingly conscious MS Office wizards. Does any of that linger, even for a millisecond, the moment you put the hard drive through a shredder? You're right, though: religion is best left alone in a reputable forum such as this.

The experiment to see who is right has already been done many times, believe me; more things in heaven and earth, and all that. One thing's for sure: you're more likely to see the State Department discuss population control than you are to read study results of monitored live human subject brain dissections.


Have you read any Gilbert Ryle? He is, in my opinion, required material for any serious attempt at re-engineering your 'belief in consciousness'. I once spent a long sunny afternoon in a field reading his work, and I left a different man from when I sat down, honestly.


Nerketur
posted 8/19/2009  15:22Send e-mail to userReply with quote
 
nicku wrote @ 8/18/2009 2:50:00 AM:
There's a tie in with the religious aspect and your writing. You say that consciousness must linger but you can't offer any empirical defence of that position. That'll be faith then. That's all it is really, we as a species have an emotional imprinting and attachment mechanism which evolved to facilitate better social functionality. A by product of this system is that we tend to imply permanence in things which are beneficial to us, for instance the identity of others. We cannot conceive of a complex entity such as a person we know and love, just disappearing into nothing resembling the original form. Hence we create that persistence by inventing spirit, and all the rest of it. Again, a pc harddrive contains billions of wonderful and complex concepts and information, even a couple ofseemingly conscious MS office wizards. Does any of that linger. even for a millisecond, the moment you put the hard-drive through a shredder? You're right though, religion is best left alone in a reputable forum such as this.

The experiment to see who is right has already been done many times, believe me, more things in heaven and earth and all that. One things for sure, you're more likely to see the state department discuss population control, than you are reading study results of monitored live human subject brain dissections.


Have you read any Gilbert Ryle? He is in my opinion required material for any serious attempt at reengineering your 'belief in consciousness'. I once spent a long sunny afternoon in a field, reading his work, and i left a different man to when i sat down, honestly.

 
Hmm... After reading your post, a few things stood out.

First of all... I'd like to think I was the mathematician, but I understand I'm usually the man, until I understand it. And that's the issue here: I just don't understand it. I don't "get" consciousness. I'm a being that is trying to discover the truth, though, and open to belief.

Secondly... Yes. At this point, it's faith. Until I'm proven wrong, or told otherwise, I will likely keep believing what I do. I know it's a human tendency... But I still say your idea can't explain ESP or psychics. Neither does mine, at this point... but I AM trying to explain everything, which is why I said "we will see". In my mind, you can't prove what consciousness is unless you understand it completely, and I do not believe either of us does. Note that I am not really trying to prove anything to you. All I am doing is asking questions. (Though I do appreciate your answers.)

Thirdly, it's not that I think you're wrong; it's that we disagree. Sure, you can look for brain activity in a brain... but that's not a surefire way to know if you're awake. It's not like you can see every thought, every memory. You can't reconstruct the entire brain based solely on brainwave patterns. At least not yet.
That said, however... it IS a good tool to tell living from dead. Nothing more than a tool, though. Just because it's "monitored" doesn't mean they see everything. In fact, according to quantum physics, they can NEVER see everything: once observed, a system acts differently than unobserved. So yes, you can monitor them. But you won't find the full picture.

Lastly... Gilbert Ryle? No, but I'll be sure to look at some of his work. If it will help me to understand "consciousness", I'm all for it. =)

On a side note... I noticed you still haven't answered some of my original questions. How long does it take for the illusion of consciousness to appear? And when it does, what would it look like in an AI brain? Do you have any thoughts on that matter? I honestly think it's something simple... Maybe a few associations, I'm not sure. As for when it appears... I'd guess around 4-7 years. Both of these are really shots in the dark. It's likely I'm wrong, but I'd still like to know the truth.

PS: I'd rather discuss than debate. Discussion is a lot more friendly. =) (Granted, "friendliness" could be an illusion too, but that's a whole 'nother story. xD)


lordjakian
posted 8/30/2009  05:56Send e-mail to userReply with quote
This is "Number one" on my posts to read....

When sober.

I gotta start from the top, but I'll be back.


nicku
posted 9/3/2009  00:22Reply with quote
 
On a side note... I noticed you still haven't answered some of my original questions. How long does it take for the illusion of consciousness to appear? And when it does, what would it look like in an AI brain? Do you have any thoughts on that matter?

I honestly think it's something simple... Maybe a few associations, I'm not sure. As for when it appears... I'd guess around 4-7 years. Both of these are really shots in the dark. It's likely I'm wrong, but I'd still like to know the truth.

PS: I'd rather discuss, than debate. Discussion is a lot more friendly. =) (Granted, "friendlyness" could be an illusion too, but that's a whole 'nother story. xD)

 
How long? Immediately, upon system input of sense data into the memory processor. The only qualitative difference is that to move closer to what we see as 'intelligence', the computer needs the 'self actor' memory component. I believe, from my Piaget-studying days, that in humans it is often just after the sensorimotor stage. Your 4-7 years more or less implies a definition of thought as involving only abstract representational thought. To be honest, there's no reason to separate lower visceral sensations, such as muscle-reflex feelings, from linguistically coded self-concepts, in terms of their qualifying as sense data/consciousness. Same stuff, just simpler content. Apparently you can get a belief in consciousness running from as early as a few months, but the analytical complexity is practically binary (motivate/demotivate) and is defined more by lower-order biological data, as opposed to the later symbolic, reasoned, reflexive processes. Memory is what 'kicks off' what we really refer to when we say intelligence. Really, it's just like a ZX81 compared to a Pentium: still a PC, just more wafers and wires.

It is a tricky thing to discuss culturally, since the abortion debate is predicated upon the non-value of base sensory consciousness. If you start bandying around a model of awareness that includes all sensory processing, then it gets a little more tricky for the medical establishment to say that life isn't life before an arbitrary point in foetal development. Anyway, that's another mind/body problem entirely.

Friendliness is an illusion, just like everything else, but we can't escape our hardwiring, however clever we get, so nature has the last laugh. The bit I find tricky is that sometimes you lapse from understanding that we don't exist, and you start the normal socially constructed role play that dictates our thoughts and behaviour. Now and again you'll catch yourself thinking like anyone else, as if the world is real and things matter, but then you'll click and remember, all in one big rush, that it's a free-running causal machine.

What would it look like?
Depends if you mean as observable machine data/processes from our point of view, or as some form of duality experience interfaced with human thought. (Read Heisenberg's and Wittgenstein's stuff; great relevance for thinking outside our mental world.)

I think both AI and our biocogged awareness can be viewed externally as reductionist physical facts, a collection of observable machine truth functions. I think fundamentally sentience is a type of mental intentionality, which is expressed by the organisation of old and new sense data. This intention can be seen to operate at a cognitive processing level, with old sense data being used to manipulate new sense data. Whether that helps us find out 'what it is like to be an AI', à la Nagel et al., is a question of data interfacing. Our brains are really no different from a PC; the computer model of old has always held true, assuming that you don't disprove it based on what we can't yet do. Our brains use data, and it was always the aim of the 'grey arms race' to measure, quantify and predict the data format of the brain. Following that solution, it is a case of establishing system data equivalence, so that we can feed in the computer-based mind structures, with a view to a human brain experiencing and evaluating what it is like, as qualia or actual living, in the AI. I guess it might be a data-crunching problem, because until you learn the building blocks of brain states and their functional equivalents, we are stuck with thought experiments as an epistemological methodology. Again, it harks back to what we say about consciousness as a user illusion, which cannot be usefully explained by asking a user what it is. What an AI looks like to an AI will almost never be what it is to the entity that eventually completely understands it in a causal sense. Understanding awareness requires a complete system deconstruction of the human experimenter. At that point you, I suppose, stop being human in the normal sense and almost ascend to an altered form of consciousness, just like Wesley in Star Trek, ha. It is almost frightening to experience it for real, when you stop and take your bearings and see your whole life's memory get retroactively rewritten in your mind.
People you loved, had a laugh with, emotions you felt, all absorbed into the deterministic beam which lights up everything you look at. Funny how what should be a massive mental leap actually vastly increases the amount of time you spend engaged in infantile play-acting, pretending that you see the world like the people you have to 'act' with in the mutual stage play.


Rambling again. Does anyone have an update on the progress of the Sony learning bot in Japan? If you're looking for AI, there it is in the plastic: real AI, a sensory data processor with sense memory and a self-concept model. Lord, any progress with your work lately?


Nerketur
posted 9/5/2009  06:19Send e-mail to userReply with quote
Interesting thoughts, Nicku. I've actually had some of the same, about the whole "deconstruction" thing. There is a curious element to it when I start thinking of myself, and others, without a consciousness. I can somehow do so while still "inside" my play self.

That reminds me... Perhaps Shakespeare knew more than most people thought when he wrote "All the world's a stage, and all the men and women merely players." It is true in two respects. We don't show our "true self" to the general public, for one. We tend to shy away and put shields up when working with others. But the other respect deals with consciousness as a whole. We are simply beings that create our own scripts based on what happened to us before. We are all actors, never revealing that we aren't what we seem to be.

That's actually the problem with determining whether a being is sentient. Since we are ALL acting... then sentience itself is an act. Consciousness itself is an act. In fact... everything we do is really acting, including us in our debate/discussion.

Interestingly enough... I had thought of spirituality... I thought of psychics, telepathy, and all that. At first I thought that wouldn't work if consciousness was an illusion. But it does. They ALL do. Prayer works, psychics work, everything works. Our belief in consciousness, in a sense of self, is why we can't see it happening without an "external force". In reality, we are all doing the same thing, all going for the same goal. We are technically all like the parts of a human, or a computer. What changes is the places around us. Our likes, dislikes, loves, hates... all deal with how we are made. They all deal with what we do, what we see, and how we think we feel.

Before I go on... I need to mention I'm an avid believer in the existence of alternate universes. The reason I brought "supernatural" phenomena into this discussion is because I watched "What the Bleep Do We Know?" I'd recommend that movie, by the way. That is where my journey to find the truth really began.

Anyway, I thought a lot about this, and then it occurred to me... We have a lot more power than we think we do. I finally figured out how it fits. At the "underneath" conscious level, we are only doing things according to what we did before, or to accomplish goals. If we start to think in certain ways, we can change those, just like we can change the way a HAL thinks by changing how we train it. If we keep thinking a certain thing, like "I want a million dollars", that eventually gets passed from brain to muscles and causes us to become more likely to get what we wanted. It changes us from the outside in.
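The idea of changing how a HAL "thinks" by changing how it is trained, like the candy-versus-slap example at the top of the thread, can be sketched as a toy association learner. This is not how the actual HAL software works; the class name, weights, and thresholds are all invented for illustration.

```python
class Associator:
    """Toy association learner: repetition plus reward shifts how
    likely the system is to produce a word again (the 'candy vs.
    slap' idea from earlier in the thread)."""
    def __init__(self):
        self.weights = {}

    def train(self, word, reward):
        # Positive reward (candy) raises the association weight;
        # negative reward (a slap) lowers it.
        self.weights[word] = self.weights.get(word, 0.0) + reward

    def will_say(self, word):
        # The system only keeps producing words whose net
        # association is positive.
        return self.weights.get(word, 0.0) > 0

hal = Associator()
for _ in range(3):
    hal.train("happy", +1.0)   # candy every time
hal.train("happy", -1.0)       # one slap doesn't undo it
print(hal.will_say("happy"))   # -> True
for _ in range(5):
    hal.train("happy", -1.0)   # repeated slaps do
print(hal.will_say("happy"))   # -> False
```

The same mechanism reads both ways: the trainer shapes the learner's behaviour, and (on Nerketur's argument) repeated self-directed "training" shapes our own.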

Though it's a bit off topic... I think it's an important thing to think about. How can these things exist without us acting on them? They really can't. What we see as a "higher" supernatural level, I now believe, is simply a lower, more basic mode of thinking. I need to make this a topic of its own... but that will have to wait for another day. =)

As for that "learning machine"... Would you mind posting a link? I'd love to see it in action =D

Last edited by Nerketur @ 9/5/2009 6:22:00 AM

Isandy
posted 9/29/2009  09:22
How come I missed being part of this forum?
I have yet to read the entire thread, so I will post my thoughts once I have read all the posts...
For starters, I come from a land (India) deeply rooted in the study of "Self" and "Consciousness", individual and universal (a.k.a. God to most people), and I have always had this question when it comes to AI: can an AI have "the so-called consciousness"? If consciousness really exists, can it be created or destroyed?
I will share my learnings (IMHO) and my thoughts on AI and consciousness in my next post...

Lots of thinking to do here. Looking forward to this thread :)


tkorrovi
posted 9/29/2009  10:24
 
Nerketur wrote @ 8/10/2009 7:00:00 AM:
That said... it brings up a question. If all we do is associate and see patterns, then how do we have a sense of being? A conscience? A consciousness? I say it could be "fake", if you will. It could be that we think we have one because it makes us feel better. We think we have one because of what we think it is. We associate it with humans, with religion, with the ability to think about thinking.


 
How can such a question be raised from these assumptions? If all we do is associate, then we also associate the things within ourselves. And we don't only associate; we model processes, though perhaps that can be considered association in a wider sense.

But the nihilistic arguments in general all seem to come ultimately from the homunculus principle. And the homunculus principle in essence says that a theory is not good enough to explain things. But this is not exactly right about consciousness, at least not any more. Sure, we don't know everything about consciousness, but we do know some aspects of it, which is enough to model it at least somewhat. For more information you may also see the link below.

You are from India? Are you not the same person from India who claimed to be very interested in my theory, promised to do everything possible to help me, and then disappeared?

 Artificial Consciousness ADS-AC project

nicku
posted 9/30/2009  01:01
 
tkorrovi wrote @ 9/29/2009 10:24:00 AM:

But about the nihilistic arguments in general, they all seem to ultimately come from the homunculus principle.


 
Hello again tkorrovi. I know you've applied the old homunculus-regress fallacy to critique this thread, and in relation to what the OP believes, it is spot on. He is assuming the very thing he wishes to explain.


Many modern philosophy-of-mind thinkers have come to believe that a deep deterministic understanding abolishes the fallacy easily, if you take enough time to fully grasp it. Basically, the homunculus problem is created by a user-initiated category mistake: the philosopher is mistakenly searching for a viewer, conceived from his everyday sense of how sentience feels, inducted into sensory storage. He uses a simple heuristic model of an 'aware being' as his definition from the start, which is a mistake. His key error is to assume that any experience is happening at all, as he feels it happens in common-sense terms. The mental qualia to which most experimenters think they refer, when trying to find consciousness, are in fact inert biological machine functions. Most philosophers attempt to use ideas and concepts that they've held, unchanged, since childhood. If a surgeon used a definition of a leg as 'that long thing which I walk on', patients would be in trouble. Yet philosophers on this site still use concepts like 'me', 'viewer' and 'mind's eye'. They are childish and of no use to a bone-cracking mind surgeon.


To experience a sensory stream is, from our point of view, self-described within our brain as a memory model of a human who is identical to the subject labelled 'me', processing data relative to a short-term/long-term memory loop of old data. That's consciousness. From the perspective of the robot called 'me', an 'experience data set' can be reported linguistically, and that reporting is what gets labelled viewing. Analysed from a properly disconnected angle, the viewer is nothing but a series of inert linguistic symbols attached to brain states, which in turn correlate to sense-data detection and neuronally imprinted sensory information sets in memory. I know I don't explain it well, but put simply: we see the unreal character in a game's software, but not the computer mechanism behind it. No wonder, then, that when we look for what we see as a 'thing', i.e. the viewer, we can't explain it properly. We are a populace of fictional characters, created to aid the end function of biological machines.
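The mechanism being described — a self-symbol bound to stored sense data, producing linguistic reports with no inner viewer anywhere — can be made concrete with a toy program. This is entirely my own construction to illustrate the point (the `Reporter` class and its names are invented, not anyone's actual model):

```python
# Toy sketch: a machine that reports "me sees ..." by concatenating a
# self-symbol with imprinted sense data. There is no viewer in it anywhere,
# only labels bound to stored states.

class Reporter:
    def __init__(self):
        self.short_term = []      # the memory loop, reduced to a list
        self.self_symbol = "me"   # an inert linguistic symbol, nothing more

    def sense(self, datum):
        self.short_term.append(datum)   # imprint into memory

    def report(self):
        # "Viewing" here is just the self-symbol attached to stored data.
        return [f"{self.self_symbol} sees {d}" for d in self.short_term]

r = Reporter()
r.sense("red square")
r.sense("loud noise")
print(r.report())  # ['me sees red square', 'me sees loud noise']
```

Asking where the viewer is inside `Reporter` is the category mistake: the word "me" in the output is a symbol the machine emits, not a component it contains.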

It is the deadness of consciousness that is hard to grasp for most outsiders. They look for what they think they have known all their conscious lives. What you and I break into is the realisation that when we actually start to 'get' what thought is, it doesn't even vaguely resemble what we originally thought we were looking for. The truth is that there isn't a viewer at all. It is just an element of inert bio-software, used by the machine and necessary for internal cognitive functioning. To have a machine which constructs a social game, whereby the robot subject reacts to a self-representational symbol, is a simple logic-gate function, but one realised through social game play. To 'see a viewer' is what is required for the animal to produce a certain mental, then behavioural, output (acting within a role with other characters factored in). When we look for a viewer, it can be seen in many ways as a piece of learning software in a video game attempting to compute the physical location or origins of a character in the game. From the outside, to the game designer, it is an irrelevant and rather sad mistaken endeavour, but the software has no other perspective, so it keeps looking for what it is programmed to achieve.

Most forums, government projects and thinkers spend most of their time trapped within this perspective cell. Until the illusory symbol of 'the viewer' is broken down by reductionist analysis, we never get to look back on what we thought we knew and say, 'oh shieeeet, there is no me at all'.

