Topic: consciousness, is it needed?

toborman
posted 5/11/2010  23:04
Why are we interested in achieving consciousness, self-awareness, sapience, and sentience in Artificial Intelligence? I can’t think of any tasks that require these, yet they seem to be of great interest on the forums.

 Human Mind Map

tkorrovi
posted 5/12/2010  09:09
Yes, it is. We are modelling a certain natural phenomenon. Restricting oneself to intelligence is often more limiting than one may think. We need awareness, an awareness of processes; that is, making models of the processes in the environment based on the information we receive, and then running these models. There are some models of the environment in many AI systems, even in quite simple ones. But intelligence doesn't necessarily require making models of the environment. And several other abilities follow from such modelling, which may also be necessary even in primitive systems.
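A minimal sketch of what such process modelling could look like (a toy illustration only, not the ADS-AC implementation; every name and event in it is made up):

```python
from collections import Counter, defaultdict

class ProcessModel:
    """Builds a model of a process in the environment from observed
    events, then runs that model forward to anticipate what comes next."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # event -> Counter of following events
        self.last = None

    def observe(self, event):
        # update the model from one piece of incoming information
        if self.last is not None:
            self.transitions[self.last][event] += 1
        self.last = event

    def run(self, state, steps=3):
        # run the learned model: follow the most frequent transitions
        path = []
        for _ in range(steps):
            if not self.transitions[state]:
                break
            state = self.transitions[state].most_common(1)[0][0]
            path.append(state)
        return path

model = ProcessModel()
for e in ["rain", "wet", "dry", "rain", "wet", "dry", "rain"]:
    model.observe(e)
print(model.run("rain"))  # -> ['wet', 'dry', 'rain']
```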

 Artificial Consciousness ADS-AC project
Last edited by tkorrovi @ 5/12/2010 9:21:00 AM

hunt
posted 5/12/2010  18:55
 
toborman wrote @ 5/11/2010 11:04:00 PM:
Why are we interested in achieving consciousness, self-awareness, sapience, and sentience in Artificial Intelligence? I can’t think of any tasks that require these, yet they seem to be of great interest on the forums.

 
Sticking to the definition that consciousness includes self-awareness, I'd say that an artificial intelligence requires the ability to analyze its own past decisions and determine whether or not to make those decisions again in the future, based on feedback (new information).

If this means the AI is 'conscious', so be it.

On a side note, I find it very frustrating when people attempt to discuss such ill-defined concepts as 'consciousness' from a standpoint that requires precise definitions (for example, from the engineering standpoint of actually building an AI). That's why scientific fields develop jargon: to give imprecise words a precise meaning.

The downfall comes when those words (such as 'consciousness') are so loaded with meaning outside the field that the baggage obscures our ability to talk about them from a practical (engineering) standpoint.


toborman
posted 5/12/2010  23:20
 
hunt wrote @ 5/12/2010 6:55:00 PM:
Sticking to the definition that consciousness includes self-awareness, I'd say that an artificial intelligence requires the ability to analyze its own past decisions and determine whether or not to make those decisions again in the future, based on feedback (new information).

If this means the AI is 'conscious', so be it.

On a side note, I find it very frustrating when people attempt to discuss such ill-defined concepts as 'consciousness' from a standpoint that requires precise definitions (for example, from the engineering standpoint of actually building an AI). That's why scientific fields develop jargon: to give imprecise words a precise meaning.

The downfall comes when those words (such as 'consciousness') are so loaded with meaning outside the field that the baggage obscures our ability to talk about them from a practical (engineering) standpoint.

 
I think you've correctly identified the problem with the term consciousness. It is vague and ambiguous. I prefer this definition of metacognition from Wikipedia:

Metacognition is classified into three components:
· Metacognitive knowledge (also called metacognitive awareness) is what individuals know about themselves and others as cognitive processors.
· Metacognitive regulation is the regulation of cognition and learning experiences through a set of activities that help people control their learning.
· Metacognitive experiences are those experiences that have something to do with the current, on-going cognitive endeavor.
Metacognition refers to a level of thinking that involves active control over the process of thinking that is used in learning situations. Planning the way to approach a learning task, monitoring comprehension, and evaluating the progress towards the completion of a task: these are skills that are metacognitive in their nature. Similarly, maintaining motivation to see a task to completion is also a metacognitive skill. The ability to become aware of distracting stimuli – both internal and external – and sustain effort over time also involves metacognitive or executive functions.
The theory that metacognition has a critical role to play in successful learning means it is important that it be demonstrated by both students and teachers. Students who demonstrate a wide range of metacognitive skills perform better on exams and complete work more efficiently. They are self-regulated learners who utilize the "right tool for the job" and modify learning strategies and skills based on their awareness of effectiveness. Individuals with a high level of metacognitive knowledge and skill identify blocks to learning as early as possible and change "tools" or strategies to ensure goal attainment.
The metacognologist is aware of their own strengths and weaknesses, the nature of the task at hand, and available "tools" or skills. A broader repertoire of "tools" also assists in goal attainment. When "tools" are general, generic, and context independent, they are more likely to be useful in different types of learning situations.
Another distinction in metacognition is executive management and strategic knowledge. Executive management processes involve planning, monitoring, evaluating and revising one's own thinking processes and products. Strategic knowledge involves knowing what (factual or declarative knowledge), knowing when and why (conditional or contextual knowledge) and knowing how (procedural or methodological knowledge). Both executive management and strategic knowledge metacognition are needed to self-regulate one's own thinking and learning (Hartman, 2001).
Finally, there is a distinction between domain general and domain-specific metacognition. Domain general refers to metacognition which transcends particular subject or content areas, such as setting goals. Domain specific refers to metacognition which is applied in particular subject or content areas, such as editing an essay or verifying one's answer to a mathematics problem.

What do you think?

 Human Mind Map

tkorrovi
posted 5/13/2010  12:49
 
hunt wrote @ 5/12/2010 6:55:00 PM:
On a side note, I find it very frustrating when people attempt to discuss such ill-defined concepts as 'consciousness' from a standpoint that requires precise definitions (for example, from the engineering standpoint of actually building an AI).

 
Well, with that you put many people who are trying to make True AI, or some other advanced AI, in a vise. Using the words "intelligence" and "consciousness" in "Artificial Intelligence", "Artificial Consciousness", etc., only determines the scope within which the solution should fall.

The aim of Artificial Intelligence projects is rarely to model the whole of human intelligence. Likewise, the aim of Artificial Consciousness is to model only certain aspects of consciousness, and those aspects can be precisely defined.

What is important is that the word be appropriate, so that it allows a large enough scope for a given implementation. When, for whatever reason, the word "consciousness" is not allowed to be used, and "intelligence" is too restrictive for a certain ability, this effectively prevents a researcher or developer from defining tasks that he or she would otherwise like to work on. So it is not just a matter of a verbal game.

Such restrictions are somewhat similar to calling the whole of AI an "applied science". One may think that it doesn't matter what it is called, but calling it an applied science effectively prevents official acceptance of any project that is purely about research, because some systems simply cannot be used for any practical purpose. There is no practical purpose for which the Large Hadron Collider can be used either, and some AI systems are no different. So that is not a game, either.

Last edited by tkorrovi @ 5/13/2010 1:34:00 PM

hunt
posted 5/14/2010  05:43
toborman,

Suppose you have a machine that computes an output based on user input. It then uses the next user input to refine its own algorithm for creating output. (For example, if a response is poorly received by the user, it may lower the confidence parameters for that response, leading to a different response being chosen for similar input in the future.)
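A minimal sketch of such a machine (the candidate responses and confidence numbers here are invented purely for illustration):

```python
class FeedbackResponder:
    """Chooses a response by confidence; negative feedback lowers the
    confidence of the response just given, so a different one can win."""

    def __init__(self):
        # confidence parameters per candidate response (numbers invented)
        self.confidence = {"greet formally": 0.6, "greet casually": 0.5}
        self.last_choice = None

    def respond(self):
        # pick the currently most confident response
        self.last_choice = max(self.confidence, key=self.confidence.get)
        return self.last_choice

    def feedback(self, well_received):
        # refine the algorithm from the next input: poor reception
        # halves the confidence parameter for that response
        if not well_received and self.last_choice is not None:
            self.confidence[self.last_choice] *= 0.5

m = FeedbackResponder()
print(m.respond())   # 'greet formally'
m.feedback(False)    # the response was poorly received
print(m.respond())   # now 'greet casually' is preferred
```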

In some sense, the machine is thinking about the problem of generating output. It decides it has thought about the problem poorly (generated bad output), and changes its method of thinking about such problems. Is this metacognition?

Or is metacognition defined as the act of thinking about the fact that it is changing its own parameters? For a machine such as the one I have described, there is no reason for it ever to think (or, perhaps more aptly, say) anything about the fact that it changes its confidence parameters.

A dog may get scolded for peeing on the carpet, and in the future, decide to go on the grass instead, but this does not mean it has meditated on the fact that it changed its mind.

So which level counts as metacognition? As far as developing artificial intelligence is concerned, I imagine only the ability to tune confidence levels is necessary. But for human-level intelligence, perhaps the AI must also be able to think about the fact that it's tuning itself.

Then again, there are many processes going on in our brains at all times that we are never aware of. Does this make us not intelligent? Clearly not. So how many meta's does it take to call something smart? ;)

I need to think about this more. Any insights would be appreciated.


tkorrovi,

I do not think that precisely defined vocabulary restricts the development of AI skill sets. (Although it may limit funding opportunities that empty phrases like "artificial consciousness" open up.) Any phrase that encompasses too large a scope becomes just that--empty.

Last edited by hunt @ 5/14/2010 5:50:00 AM

toborman
posted 5/14/2010  07:07
 
hunt wrote @ 5/14/2010 5:43:00 AM:
toborman,

So which level counts as metacognition? As far as developing artificial intelligence is concerned, I imagine only the ability to tune confidence levels is necessary. But for human-level intelligence, perhaps the AI must also be able to think about the fact that it's tuning itself.

 
I'm thinking that the Wikipedia definition of metacognition includes not only the ability to change procedures (metacognitive regulation), but also the ability to justify and evaluate the efficacy of the change (metacognitive knowledge supported by metacognitive experiences).

“The metacognologist is aware of their own strengths and weaknesses, the nature of the task at hand, and available "tools" or skills”.

Perhaps one could test this by asking the intelligent agent (human, AI, or alien) to justify its decision. For example: "Why did you change your procedure?"
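A sketch of what such a test could look like, assuming a very simple confidence-tuning agent (the design, names, and numbers here are entirely hypothetical):

```python
class JustifyingAgent:
    """Not only tunes its confidence parameters (metacognitive regulation),
    but records why each change was made (metacognitive knowledge), so it
    can answer: "Why did you change your procedure?" """

    def __init__(self):
        self.confidence = {"plan A": 0.7, "plan B": 0.6}
        self.log = []  # (choice, outcome, adjustment) records of each change

    def act(self):
        return max(self.confidence, key=self.confidence.get)

    def feedback(self, choice, outcome):
        if outcome == "bad":
            self.confidence[choice] -= 0.3
            self.log.append((choice, outcome, -0.3))  # remember the reason

    def justify(self):
        # answer the examiner's question from the recorded experiences
        return [f"I lowered my confidence in {c} by {abs(d)} "
                f"because the outcome was {o}." for c, o, d in self.log]

agent = JustifyingAgent()
choice = agent.act()           # 'plan A'
agent.feedback(choice, "bad")
print(agent.act())             # 'plan B'
print(agent.justify()[0])      # the agent explains its change
```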


 Human Mind Map

tkorrovi
posted 5/14/2010  09:00
 
hunt wrote @ 5/14/2010 5:43:00 AM:
I do not think that precisely defined vocabulary restricts the development of AI skill sets. (Although it may limit funding opportunities that empty phrases like "artificial consciousness" open up.) Any phrase that encompasses too large a scope becomes just that--empty.

 
I do not agree with you; I don't agree that "artificial consciousness" is an empty phrase, for substantiated reasons, for peer-reviewed reasons too, if you like. And I don't remember any prominent scientist ever calling "artificial consciousness" an empty phrase, if only these people matter. Some are very skeptical about whether these things can be done, yes, but that is not the same thing.


hunt
posted 5/14/2010  09:19Reply with quote
 
tkorrovi wrote @ 5/14/2010 9:00:00 AM:
I do not agree with you; I don't agree that "artificial consciousness" is an empty phrase, for substantiated reasons, for peer-reviewed reasons too, if you like. And I don't remember any prominent scientist ever calling "artificial consciousness" an empty phrase, if only these people matter. Some are very skeptical about whether these things can be done, yes, but that is not the same thing.

 
My point wasn't that "artificial consciousness" is an empty phrase, but that bandying it about in too general a way renders it empty. Besides, I'm sure "peer-reviewed" application of the phrase has more to do with public cachet than with communicating to peers in the community.


Eddy Adams
posted 6/18/2010  09:55
Assume we agree on this consciousness. Then who decides the morality by which it runs? I think we are still struggling with this in the real world. Also, I think this anthropomorphic tendency is not so good. It tends to loop back on itself; you know, the deadly embrace. Let's not try to make them smarter, just more efficient: cognitive extensions.
