Topic: Physiology of sense

body104
posted 4/23/2010 23:50
Hi! Could anybody answer one question:
what is the mechanism of pain and pleasure at the "lowest level"?
I mean, it is known that these senses arise from the stimulation of certain nerves (and/or from the diffusion of certain chemical agents). But why does one factor lead to one sense and another factor to another? There is no key difference between these factors from the senses' point of view, is there? (The senses arise at a higher level.)

Why am I asking? I strongly suspect that it is impossible to develop artificial consciousness without understanding the mechanism of the senses, because without senses an artificial creature would not have any incentive to produce a single thought.
As some users (Nick) here on the forum have noted, consciousness <=> sense. Additionally, I think it is necessary to distinguish between consciousness and self-consciousness. Some consciousness exists (I suppose) in any animal, and the senses are responsible for it (though this consciousness differs between simpler and more complex animals and is hard to compare). As for self-consciousness: animals, newborn children, and feral children without conversational speech experience do not have it. Hence, I suggest that the second signal system (speech) is responsible for it. The second signal system gives the ability to abstract and to aggregate (according to Wikipedia). The simplest model I'd propose: when a newborn child learns to label things (abstraction), he labels other humans "human". Then he looks at himself and aggregates: "human". That is self-consciousness.

Anyway. IMO, self-consciousness cannot exist without ordinary consciousness; hence it is necessary to understand the mechanism of the senses.

I would suggest that the first step in modelling consciousness is to develop an artificial creature with senses, one that would have an incentive to move in space because of pain, for instance, and not because of some algorithm. Existing artificial sensors can send signals, but how do we develop artificial pain/pleasure?

P.S.
1. As I found out, many AI scientists are not familiar with important empirical data and the relevant physiological functional models.
2. It looks like the question of what a sense is relates to the philosophical concept of qualia.
3. Maybe the question I asked is impossible to answer yet. Maybe some knowledge of quantum physics is necessary to model an artificial neural net advanced enough for senses and consciousness.

Last edited by body104 @ 4/24/2010 8:11:00 PM

nicku
posted 4/24/2010 04:51
Hi, and thanks for sending a mail.
It's a bit late, so I will try to give you a short preliminary response.

Chiefly, your post contains the answer within itself. Interestingly, you mention the relevance of the lack of what you call 'self-consciousness', or self concept in psychology. This is central to what can show us the nature of our category mistake when we try to explain and theorize about pain or consciousness. The self is a behavioural database of past sense input data values, with specific content of observed behavioural history. The self is also a human's reflexive mental description, which simply holds likely behavioural intentions in a future context. You could say it is what some call 'intentionality', based on empirically recorded behavioural probabilities. The self is simply a filing cabinet for behaviourally generated past sense information. This filing cabinet is an information node with which the social system of human interaction can filter and direct group action, by the assignment of behaviour functions to each individual.

To explain: the self concept is just a two-dimensional way of referring to behavioural and informational sense data held in STM/LTM storage. In the way that you say a newborn isn't conscious, you can take this further and say why. Simply put, what we define as awareness is just the physical presence of certain symbolic references to specific sense memories. When you are able to classify a self concept, you effectively bring that state into being by meeting your own definition of what it is to be aware. Monkey declares definition of self, monkey has self.

To the average person, a personality is a rich conglomeration of character traits and emotional flavours associated through experience with that person's identity. To a 'no holds barred' determinist, the self is a collection of various behavioural tendencies/intentionalities which are used by humans to regulate and inform the social interactional system. Seeing that the neonate has no self concept gives us the answer as to why that is. Namely, before the labelling of human mental operations is possible, through the storage of symbolic sense data, we have not created the consciousness.

The self concept is a biological sense-information tautology, or self-fulfilling definition. If I define awareness as that which I do, then I obviously meet the definition. I am conscious because my sensory processing meets the definition which my brain holds of what constitutes thought. We quite simply define ourselves into the illusion of existence. When the human animal does this, he can operate in a social role and thus raise our evolutionary chances through cooperation and communication.

This may seem bizarre to some who still think of sentience as a concrete property, an actual thing. However, it is vital to realise that pain and awareness are just two-dimensional organisations of behaviourally situated sense information. Pain must be disassociated from the viewer. It is nothing more than 'a script of behavioural intention or roleplaying instructions'. Sense data is received into a neurological mechanism which assigns a logic function to the labelled input: this is pain, this is what procedure is necessary when all conditions for a pain process are fulfilled, i.e. nerve stimulation, socially constructed interpretations of what reaction is appropriate, and, importantly, the hardwired physiological link to reflexes in the body.

We tend to make no distinction between feeling pain and our reaction to it, but nevertheless there is a definite evolved decoding of pain signals into a psychosocial mental process and a physical response. These secondary physical responses to pain, such as hormone release, cardiovascular changes, etc., are interwoven with the common-sense ideas that we bring to the table when we discuss this topic as philosophers. But it is vital to realise that whether pain sense data is reacted to physiologically or mentally, these reactions are just neutral machine functions in a biological robot. Try something: imagine getting a nail in your foot, and then imagine that all the physical reflexes have been disabled. It is a strange exercise in introspection. I have established through self-testing that pain can be disconnected from the physical signal to the psychological reaction to the perceived signal. You can redefine the pain as someone else's in your mind and actually be aware of the extreme pain, but you've broken the link to your normal reactions such as screaming, pulling away or heart-rate increase. In that way it shows that internal labelling, or auto-defining our sense inputs, is what makes them 'mean' what they do. And this can be changed.

Put simply, pain, as experienced by the user, is merely sense data which has, through animal evolution, been hardwired to produce an instantaneous link to physical bodily reactions. We say 'oww, that hurts' when we get burnt. In an effective reductionist analysis of this, we produce a mechanistic view: sense input into an STM buffer, a deep-brain evolved reaction plus LTM-socialised mental reactions, and interpretation of the reaction.


Hence the pain process, to answer your question, is on a first-level analysis just a biological logic gate.
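
To make that concrete, here is a minimal sketch in Python. Everything in it (the threshold value, the labels, the function names) is invented for illustration rather than taken from physiology; the one real point is that the hardwired reflex branch and the learned labelling branch are separate pieces of machinery, as in the nail-in-the-foot exercise above.

# Illustrative sketch only: "pain as a biological logic gate".
# Every threshold and label here is invented for the example, not physiology.

NOCICEPTOR_THRESHOLD = 0.7  # hypothetical hardwired trigger level

def reflex(signal: float) -> str:
    """Hardwired, pre-conscious branch (the 'deep brain evolved reaction')."""
    return "withdraw limb" if signal >= NOCICEPTOR_THRESHOLD else "no action"

def appraise(signal: float, learned_labels: dict) -> str:
    """Learned LTM branch: the stored label gives the input its 'meaning'."""
    for threshold, label in sorted(learned_labels.items(), reverse=True):
        if signal >= threshold:
            return label
    return "neutral"

def process(signal: float):
    labels = {0.7: "pain", 0.3: "discomfort"}  # learned, hence re-definable
    return reflex(signal), appraise(signal, labels)

print(process(0.9))  # ('withdraw limb', 'pain')
print(process(0.4))  # ('no action', 'discomfort')

Because the two branches are independent functions, either one can be disabled or redefined without touching the other, which is all the disconnection exercise amounts to in this picture.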

In fact, consciousness, when studied deeply enough, may be sufficiently reduced to a series of mechanistic logic functions. The distance between most philosophers' idea of consciousness and what it actually is comes from the object/subject paradox. We are practically excluded from being both the thing being explained and the explainer, because we use the thing being explained to do the explaining.

As I mention, pre-deconstructed humans are riddled with mental assumptions which effectively limit their available perception of the issue. When you get that there is no one doing the experiencing, you're halfway there.

This is why the AI movement is stalled. We are trying to meet a definition that is invalid, so we never get there. When we 'get' that awareness is a 3D picture built from a 2D line drawing, we can start to see that we have already succeeded in recreating the thinking ape.

cheers, Nick.


nicku
posted 4/24/2010 05:15
Just reread your post, and I have to say that you've got it bang on. What you are saying is that humans define themselves into existence by meeting their own definition.

Labelling, language, speech, whatever we call it, people need to use this concept in their work if they are to progress. 'The mind is aware' is another way of saying: awareness = what I have = I am therefore aware. This is what I am saying when I say sentience only requires the 'belief in consciousness'. Consciousness is how 'I' behave with respect to 'Not I', and is therefore to be seen as a logic function, or binary separation of identities through language definitions. Another way of saying what you describe. All that is required for the AI or the human primate to behave as a social unit is for it to define itself as such within a descriptive database of retained sensory data. That's it in a nutshell. We are not aware at all. We are a lump of flesh which achieves social goals by using symbols to enable the animal to define into being an autonomous identity.
Progress, ha.

Last edited by nicku @ 4/24/2010 5:20:00 AM

body104
posted 4/24/2010 19:55
Thx for the answer, nicku.
So, according to you, pain/pleasure is just a logic function associated by the brain with the corresponding sensor data. But wait, a "function" (or "logic gate") is too abstract a notion for a determinist, isn't it? How is it realized physically? If I want to construct another creature with senses of pain and pleasure, how could I do that?

And I am not asking about the reaction to sensor data. Maybe the sense of pain arises after the release of hormones and so on: chemical agents stimulate special parts of the brain, and pain/pleasure arises. The question is: why does one type of stimulus lead to pain and another type to pleasure, and not the contrary? You say that it is just a logic function, so it could equally have been that the first type of stimulus led to pleasure and the second to pain, correct? IMO this answer is not enough to reconstruct a creature with pain.

“we have already succeeded in recreating the thinking ape“
What do you mean?

As for self-consciousness: I described it from the functionalist's point of view and you describe it from the determinist's point of view. There is no contradiction.
But as for pain, it is necessary to describe it from the determinist's point of view in order to recreate it, because it is a basic concept. And the answer with a logic gate is too functionalistic, IMO.

Last edited by body104 @ 4/24/2010 8:10:00 PM

tkorrovi
posted 4/25/2010 16:29
Feelings are not reactions to stimuli, but states of the entire system. Pleasure is some kind of harmony, while pain is a state of alert and lack of harmony. This is why the same stimulus can, in different circumstances, cause either of these feelings. A fully self-developing system has such states simply because of the way it works; at the least, the states of harmony and lack of harmony occur in every such system.
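
As a toy sketch of this harmony idea in Python (my own construction for illustration, not a description of the ADS-AC design): define a "harmony" measure over the state of the whole system, and the same kind of stimulus can come out as either feeling, depending on everything else that is going on.

# Toy model: the "feeling" is a property of the whole system state,
# not of the stimulus. The harmony measure here is invented for the example.

class System:
    def __init__(self):
        self.units = [0.5] * 8  # internal state of every unit (hypothetical)

    def harmony(self) -> float:
        """High when the units agree (low variance), low when they conflict."""
        mean = sum(self.units) / len(self.units)
        variance = sum((u - mean) ** 2 for u in self.units) / len(self.units)
        return 1.0 - variance

    def stimulate(self, unit: int, value: float) -> str:
        self.units[unit] = value
        return "pleasure-like (harmony)" if self.harmony() > 0.9 else "pain-like (alert)"

s = System()
print(s.stimulate(0, 0.6))  # small perturbation: the state stays harmonious
print(s.stimulate(1, 3.0))  # the same kind of stimulus, but now the whole
                            # system is thrown out of harmony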

It is not necessary to model every feeling; it is enough that the system has certain objective abilities which can be tested. Having all of these is a very extensive requirement, so a system which satisfies it very likely has the other abilities as well. Yes, we cannot test the subjective aspects. It can only be tested whether a system is Artificial Consciousness, with whatever abilities it was intended to have, not whether it is conscious, because that is what Artificial Consciousness is about.

 Artificial Consciousness ADS-AC project
Last edited by tkorrovi @ 4/25/2010 4:31:00 PM

nicku
posted 4/26/2010 23:07
In response to the 'too functionalist' part: any academic in the philosophy of mind, as in most disciplines, comes up against what you've just done there. I would call it perspective blindness.

Any analytical project must address a subject of study from a restricted focus of attention. I believe this is so because human brains are only really good at addressing one main thing at a time. This structural bias leads academics to develop a very isolated and limited range of coverage when attempting an explanation of a whole system. Take Marxism, say, versus feminism, in the sociology of labour markets. It is laughable to suggest that either one of these alone provides a sufficiently broad range of issues to explain the system under study well. In the same way, you are perhaps dismissing just one particular point of discussion we were focusing on, i.e., the functionalist black-box level of analysis applied to the physiology of pain, for the same reason. I realise that you no doubt have a much more complex handle on the subject. However, to overlook a particular explanation, namely the logic gate analogy, fails to recognise that a truly correct model of human consciousness and pain can be explained in parallel from several angles and areas of focus.

We as academics can be seen dismissing a theory based on its apparent voids or theoretical deficiencies; for example, we are very ready to write off functionalism because it fails to adequately account for causal detail and qualia. That, in my opinion, fails to grasp that our very monolithic approach means we tend to avoid looking at the mind in terms of multiple levels of analysis. Just because functional theories may at one time look exclusively at high-level structural modules doesn't mean that they are wrong in respect of a small facet of the 'grand theory'.

I am, like yourself, very involved with the reductionist project, because, as you say, there is no understanding without grasping each aspect of a machine. As an example, suppose we are all attempting to explain the workings of a steam train on a commuter line. I say that the train is able to convert one form of stored energy into motive force, transmitted to the rails via rotary motion. Another guy says no, no, the train works because the driver presses a combination of buttons and levers. Yet another says you're all wrong: you buy a ticket and the train takes you where you want to go. A silly example, but it demonstrates what academics spend millions of dollars/pounds each year doing. They argue with each other that one limited view is incorrect because it doesn't account for the known empirical data relating to another arbitrary focus. The point is, all of them are right, but we as humans are surprisingly resistant to accepting another's theory if it distracts us from the particular focus stream we've chosen to adopt.

Pain does, in one respect, act in a similar way to a logic gate, at a functional level. We do still need live electro-stimulation MRI studies, dissection-enabled disproofs to confirm neurological processes, and neurotech entity modelling to establish the functional interactions of neuronal networks. We can use this positively in understanding the mind.

However, in practice, the majority of AI theorists are born to fail, both because they are bound to one view and, more importantly, because, as you say, a good determinist can only make further progress by bringing together a vast range of complex physiological empirical material at the same time. Most people who study it haven't got a chance, because they don't have the benefit of anatomical data. Can you imagine a machinist trying to build a motorbike part if he had never seen any drawings, or even looked inside the engine at what the part does and how it works? Same with pain: have scalpel, will learn.

It is funny that some philosophers react strongly if you suggest that the philosophy of mind will develop into an almost exclusively medical undertaking in the possibly near future. However, medical explanations could, given a world where we could tell the public things, feed back on a conceptual level to software engineers, who could then relatively easily create a human mental analogue which replicates the real thing in every way. Like I said before, it isn't surprising that people can't make much progress in AI, given that 99.99% of specialists, never mind amateurs, carry with them a totally false concept of what the illusion of consciousness actually is.
That said, I can assure you that a purely theoretical level of analysis is and has been realised in some contemporary research circles. We could, using almost exclusively functionally driven modular analogues of human neuro-systems, replicate a human. Maybe it's a bit of that inbuilt non-understanding in us, which automatically uses a common-sense, 'user illusion' derived idea that consciousness is somehow unscientific and therefore special. Well, I know that it won't be enough for most, but the fact is we are not special at all, and every day we are shown to be just a biological machine. Like any machine, we can be copied, given the right lack of assumptions.
Cheers for the reply, it's all good fun. Thanks, Nick.


nicku
posted 4/26/2010 23:38
To Tkorrovi: saying that feelings are not reactions to stimuli is like saying that a DVD player's digitally encoded video signals, read from a disc, are not causal reactive processes to stimuli but actually the state of being 'ON'. States are a functional perspective on atomistic elements in action. Because you don't see the causal levels, you exclude them as non-existent. It is an empirically demonstrable fact that feelings are reactions to stimuli. Ask a neurologist to show you a couple of brain-state studies. With all due respect, I think that you are plagued with a 'user bias'. I believe you are using the 'what it is like to be Tkorrovi' part of your existence to evaluate the explanatory power of the theories you encounter.

I would definitely recommend that you do some wider work on Putnam, Nagel and Churchland et al. They might give you a new world of ideas, whereby you realise that the human isn't actually conscious in the way we understand it as users. Your big leap will come when you first accept that, in order to interact at all, anything in the universe must be physically composed and deterministically captive at a macrophysical level.
After that, if you still want to continue, you might also realise that if everything is reducible to causal antecedents, then the thing you think is your awareness is just an invariant, reducible machine which uses 'machine self/other referencing' to enable social functioning. As you have seen, it is not something that can be explained in even many posts, without the candidate's psychological deconstruction. Better to just leave it alone, because, as I always keep saying, it is a dangerous place to go if you want to live a practical human life. Humans haven't evolved to live effectively AND at the same time understand that they don't actually exist at all. cheers, Nick.


hunt
posted 4/27/2010 04:41
 
nicku wrote @ 4/26/2010 11:38:00 PM:
To Tkorrovi: saying that feelings are not reactions to stimuli is like saying that a DVD player's digitally encoded video signals, read from a disc, are not causal reactive processes to stimuli but actually the state of being 'ON'.

...

 
I think Tkorrovi was trying to point out that feelings depend on an individual's internal factors as well as external stimuli. What I mean is, how a piece of external stimulus is processed into an emotional response may vary depending on the individual's current state of mind and on any other external stimuli. I don't think s/he's trying to deny cause and effect.

If I misunderstood, please feel free to chime in, Tkorrovi.


nicku
posted 4/27/2010 18:14
Cheers, Hunter, and sorry to Tkorrovi for oversimplifying his position; that'll teach me for skim-reading his post. I just read the 'harmony' bit and thought it was one of those posts from that mystical guy on here who talks in poetical riddles.

I know that he doesn't deny causality, having been familiar with his work over the years. It was just that saying that pain is a systemic state which encapsulates many intermediate influences, and not a reaction to stimuli, relates to what I've said in my other post. People selectively point to one focus of study. When they do that, they tend to reject other points of view, even though these are essentially different levels of description of the same phenomena. Pain is both a reaction to stimuli and a state of the whole system, if one adopts a functionalist framework to explain the latter.
I'm not usually so functionally minded, but I felt it necessary with this particular treatment of pain, because the normative assumptions which people use to understand their introspective experiences tend to make them see pain and awareness as unique qualia, beyond observation by science. Hence, when attempting to describe what pain is and how it relates to consciousness, most seem to say 'it feels like this' or 'the determinist account doesn't account for how it feels for me'. I guess a lot of the problem is that, without medically based study, we don't even begin to get off the ground as far as the fine detail of reductionist accounts goes. Determinism in the current philosophical context is hopelessly inadequate in terms of its ability to derive, test and disprove any hypothesis. With the increasing microdeterministic data, we find that these theories do begin to explain and predict the key aspects of pain and our illusory experience of it.

Must remember to read posts fully in the future.
cheers, Nick.



hunt
posted 5/1/2010 22:21
 
nicku wrote @ 4/27/2010 6:14:00 PM:
Cheers, Hunter, and sorry to Tkorrovi for oversimplifying his position; that'll teach me for skim-reading his post.

...


 
I'll admit I'm not versed in the distinction between "functionalist" and "determinist" approaches to philosophy. I do agree with your main point, however: that too often people ascribe the source of their own experiences to some mystical or unphysical phenomenon rather than to the same principles of cause and effect that govern, well, *everything else in the universe*.

I blame a lack of adequate science education. It really is tragic, because properly understanding the physical mechanisms that govern our own behavior/bodily functions does not somehow diminish what it means to be human, but rather illuminates more precisely what makes us so unique and incredibly fascinating.

I think that is what interests me about artificial intelligence in general and natural language processing in particular. It's such an innately human behavior to turn the variety of our experience into a system of symbols and structures that we can trade back and forth between us. How do we do it? How do we understand it? How do we reduce this behavior to an algorithm? What would such algorithms look like?


tkorrovi
posted 5/2/2010 00:12
And what if it is not anything we can call an "algorithm"? Is the train of thought then entirely off the rails? An algorithm usually describes serial processing, but what if the system is massively parallel? What if it is entirely self-developing? An algorithm cannot describe very well what happens in a self-developing system.


body104
posted 5/3/2010 00:24
Thx for the answer, nicku. I had an examination week and had no time to respond.

You misunderstood me, Nicku. I did not mean, of course, that functionalist analysis is unnecessary. I meant that you provided a fairly reductionist model of consciousness, but in the key part, what a sense is physically, you used a functionalist model. That is because, as I am starting to realize, there is no answer yet.
To repeat the question: what is the sense of pain in physical terms (i.e., a reductionist model of pain)? How can we distinguish between pain and pleasure? In other words, what is awareness of pain/pleasure in physical terms?
You and many other philosophers say that it is just an illusion/logic gate/etc. Maybe from the evolutionary point of view this level of abstraction is enough, but we still need a more precise answer. How can an illusion make your hand move if I bring a lighter to it? There must exist some physical mechanism responsible for this awareness (the same awareness that exists in animals).
As I found out, this is the so-called "hard problem of consciousness", and it has no solution yet. http://en.wikipedia.org/wiki/Hard_problem_of_consciousness

By the way, the most interesting ideas I have found so far are David M. Rosenthal's higher-order theories of consciousness. He divides mental states into two levels, but again does not single out the level that allows us to say "I", the one which, I suppose, speech is responsible for. This is the important mental state all strong AI specialists want to recreate.

To the algorithm guys: read Penrose. Using Gödel's incompleteness theorems, he shows quite convincingly that algorithms cannot reach some conclusions that the human mind is capable of.

Last edited by body104 @ 5/5/2010 4:40:00 PM

hunt
posted 5/5/2010 02:13
 
tkorrovi wrote @ 5/2/2010 12:12:00 AM:
And what if it is not anything we can call an "algorithm"? Is the train of thought then entirely off the rails? An algorithm usually describes serial processing, but what if the system is massively parallel? What if it is entirely self-developing? An algorithm cannot describe very well what happens in a self-developing system.

 
By "algorithm", I mean only a set of instructions. This does not preclude parallel processing or "self-development". (I put this term in quotes because it is quite vague.)
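
To illustrate with a trivial Python sketch (unit_update is a made-up stand-in for whatever one "unit" of a system does): the same set of instructions can fan out across many inputs simultaneously, and the whole is still just a set of instructions.

# "Algorithm" does not imply serial execution.
from concurrent.futures import ThreadPoolExecutor

def unit_update(x: int) -> int:
    """Stand-in for whatever one 'unit' of the system computes."""
    return x * x

with ThreadPoolExecutor() as pool:
    results = list(pool.map(unit_update, range(8)))  # fan out in parallel
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]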

"To algorithm guys: read Penrose. He pretty convincingly shows using Godel's incompleteness theorems, that it is impossible using algorithms to make some conclusions, that human mind is capable to do."

Ah, geez. Listen, Penrose needs to leave neuroscience to the biologists. For those unfamiliar with his work, let me attempt to put Penrose's ideas in a nutshell...

There is a molecule in the brain called tubulin. This molecule is large and capable of existing in two configurations; let's call them A and B. Tubulin, Penrose posits, is also capable of existing in a state that is a superposition of both the A and B configurations.

The idea of a superposition of states is a uniquely quantum mechanical phenomenon. (This is generally where people insert a reference to "Schroedinger's cat." Look it up if you haven't heard of it.) If we measure the state of the tubulin molecule, we will measure either A or B, with a probability determined by the tubulin's wavefunction.

If many tubulin molecules become entangled--if they share the same state--then by measuring *one* tubulin molecule's state, you will know the state of *all* the entangled tubulin. This act of measurement is called "collapsing the wavefunction." It is called this because once the measurement is made, the tubulin molecules are no longer in a superposition of states, but have "collapsed" into either the A or B state.
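
For concreteness, here is that measurement rule for a generic two-state system, sketched in Python; nothing in it is specific to tubulin. A state a|A> + b|B> collapses to A with probability |a|^2 / (|a|^2 + |b|^2), and an entangled group shares a single outcome.

# Generic two-state measurement (the Born rule); not tubulin-specific.
import random

def measure(amp_a: complex, amp_b: complex) -> str:
    """Collapse a|A> + b|B> to 'A' with probability |a|^2 / (|a|^2 + |b|^2)."""
    p_a = abs(amp_a) ** 2 / (abs(amp_a) ** 2 + abs(amp_b) ** 2)
    return "A" if random.random() < p_a else "B"

# Equal superposition: roughly 50/50 over many trials.
outcomes = [measure(1, 1) for _ in range(10_000)]
print(outcomes.count("A") / len(outcomes))  # ~0.5

# Entanglement as described above: one draw decides the whole group,
# so measuring one molecule fixes the state of all of them.
shared_outcome = measure(1, 1)
entangled_group = [shared_outcome] * 100
print(set(entangled_group))  # {'A'} or {'B'}, never a mixture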

There are other ways to collapse a wavefunction besides making a measurement. Penrose speculates (and this is pure speculation, built on no actual theory) that gravity is capable of collapsing wavefunctions. That is, gravity is a classical phenomenon and, when objects become heavy enough, it forces them to occupy a specific place instead of a superposition of places.

Note that we are making a lot of assumptions here about what a molecule is doing when it is in a superposition of states: we are supposing that it is literally in two places at once. Quantum mechanics is a mathematical structure that does not necessarily require such a literal interpretation.

Anyway, the idea is that tubulin in a superposition of states entangles neighboring tubulin molecules until the entangled state reaches a critical mass and collapses into a region of either A or B. This collapse and emergence of a large swath of A or B tubulin can only happen in a quantum mechanical scenario (not classically) and--most importantly for the philosophically minded--which configuration all that tubulin winds up in is not deterministic.

Brains have free will! ...Right.

So some physicists have actually taken the time to call Penrose out on this crap. Max Tegmark (an excellent teacher--I had him in undergrad for 8.033) argues from basic physics principles that neither tubulin nor any other molecule in the brain exists in the quantum limit: Tegmark, PRE, Vol. 61 4194 (2000) (http://pre.aps.org/abstract/PRE/v61/i4/p4194_1). His argument can be summed up in one sentence: the brain is too warm. Any "coherent state" (a state in which multiple tubulin molecules exist in the same state) will be kicked around by thermal energy until each tubulin's state is independent of all the others.
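
A back-of-envelope version of that sentence, using the order-of-magnitude timescales reported in the cited paper's abstract (decoherence around 1e-13 to 1e-20 s, neural dynamics around 1e-3 to 1e-1 s):

# Tegmark's argument in two lines, using the most generous numbers.
decoherence_s = 1e-13     # slowest (most quantum-friendly) decoherence time
neural_dynamics_s = 1e-3  # fastest relevant neural timescale
print(neural_dynamics_s / decoherence_s)  # 1e10: coherence dies ~ten billion
                                          # times faster than neurons operate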

Penrose countered that the model Tegmark used is not the same as the one he used to derive his result. But both of them were rendered moot by a group of chemists, biologists, and physicists who came along and showed that the kind of superposition of A and B states Penrose describes can't exist in tubulin in the first place. The reason is that tubulin doesn't just exist on its own in the brain. It's found within a larger object called a microtubule that fixes each tubulin in one configuration or the other. The paper is here: McKemmish et al., PRE, Vol. 80 021912 (2009) (http://pre.aps.org/abstract/PRE/v80/i2/e021912)

So that's the end of that BS. Trust me, if someone were to discover a classically-sized quantum system that exists at room temperature, it would not be relegated to the rants of a mathematician eager for a book deal. It would be leapt on by over-eager physicists across the country and would probably be in the CPU of your laptop in five years' time. :)


hunt
posted 5/5/2010 02:16
Wow, my reply came out longer than I intended. Perhaps a warning to the reader: the above post is only worth your time if you are interested in...
a) quantum consciousness
b) macroscopic scale quantum phenomena
c) what happens when mathematicians have too much time on their hands


body104
posted 5/5/2010 06:16
That is interesting, hunt.
But still: the part where he uses Gödel's theorems has nothing to do with quantum physics. It only shows the limits of the conclusions that can be made using formal systems, such as algorithms, and that the human mind can reach conclusions about validity/falsity where it cannot be done using algorithms.
Offtopic: today I realized that all talk about qualia is talk about nothing. Methodologically, the concept of qualia is equivalent to the concept of the soul: nobody knows what it is, and it has nothing to do with physics. The philosophical zombie argument, where you take some human and delete the qualia, is the same as taking some human and deleting the soul, getting an ordinary zombie. Does that make any sense? I don't think so.

Last edited by body104 @ 5/5/2010 4:44:00 PM

body104
posted 5/5/2010 16:00
I think I realized what nicku said about the logic function.
In two words: according to some articles, signals from the sense organs make a circular motion in the brain, passing through the association cortex, hippocampus, etc. The hippocampus is engaged in emotional reactions and memory mechanisms. The signal, enriched with memory and emotional data, is then compared with the original signal, and that is where the sense arises.
Hence my best guess: somewhere deep in the hippocampus, information is stored about the response to "pleasurable" signals and to signals from the pain sensors. Physically this information has no essential difference (like two waves with different lengths) and enriches the original signal in (for instance) two ways. This mechanism was formed by evolution: individuals with two different behavioural patterns (escaping from damaging factors, reaching toward "pleasurable" factors), depending on the response signal type, dominated those who lacked this mechanism. So this hardwired information is a kind of memory of our ancestors.
How is it that pain orders someone to move his hand away from a damaging factor? I suppose it is the same mechanism as unconditioned reflexes: the response signal affects a motivation centre or something similar and automatically gives the human a strong incentive.
That means that a human is a totally automated mechanism, and the hard problem of consciousness simply makes no sense, because there is no such thing as awareness. I totally agree with nicku.
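
If I try to write this guess down as code (and I stress that the hardwired valence table and the drive rule below are my speculation for illustration, not textbook anatomy), it looks something like this:

# Speculative sketch of the loop described above. The "hardwired valence
# table" and the comparison step are guesses, not established anatomy.

HARDWIRED_VALENCE = {  # the "memory of our ancestors"
    "nociceptor": "pain",
    "sweet_taste": "pleasure",
}

def circulate(sensor: str, signal: float) -> dict:
    valence = HARDWIRED_VALENCE.get(sensor, "neutral")  # enrichment step
    # The enriched signal is compared with the original; the sign of the
    # resulting drive stands in for "escape" vs "approach" patterns.
    if valence == "pain":
        drive = -signal   # strong incentive to withdraw
    elif valence == "pleasure":
        drive = signal    # incentive to approach
    else:
        drive = 0.0
    return {"signal": signal, "valence": valence, "motivation": drive}

print(circulate("nociceptor", 0.8))   # motivation -0.8: withdraw the hand
print(circulate("sweet_taste", 0.5))  # motivation +0.5: approach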


Last edited by body104 @ 5/5/2010 4:55:00 PM

hunt
posted 5/6/2010 03:11
 
body104 wrote @ 5/5/2010 6:16:00 AM:
That is interesting, hunt.
But still: the part where he uses Gödel's theorems has nothing to do with quantum physics. It only shows the limits of the conclusions that can be made using formal systems, such as algorithms, and that the human mind can reach conclusions about validity/falsity where it cannot be done using algorithms.

 
As far as I understand it, Penrose's argument amounts to a game in which all probes of a system consist of states of the same system, which gives rise to a failure whenever a state of the system is asked to probe itself.

I recommend Paul Almond's consideration of the subject: http://www.paul-almond.com/RefutationofPenroseGodelTuring.htm

Unfortunately, as I'm not versed in the mathematical semantics in which the problem and theory were originally expressed, I'll have to defer to the consensus of those in the field. And I think the clear consensus is that Penrose's argument is flawed.


body104
posted 5/6/2010 03:58
That is a wonderful link, thank you, hunt.
Damn, it is so interesting to learn more in the AI and robotics field, but I am getting an irrelevant MA in Economics after a BS in IT, because in Russia you have a low income if you do science.

Last edited by body104 @ 5/6/2010 4:16:00 AM

hunt
posted 5/6/2010 04:30
Ha ha ha--I'm in the physics business myself. I'm pretty sure that doesn't pay well anywhere in the world. There's more to life than dollars and rubles, but they sure don't hurt, do they?
