Topic: You Can't Teach That Which Has No Motivation.

Od1n
posted 12/24/2008  01:43
I believe that with AI, simplicity is key. Lose the mathematical algorithms, forget about language and syntax.

So what about neural networks? These are the tools that our brains use, yes, but they are just that... tools.

So what then? Well, I believe what AI lacks is motivation. Think of humans as machines. We have three goals: reproduce, protect our young, survive.

To accomplish these goals, our bodies use neural networks to evaluate choices as either 'helpful' or 'unhelpful'.

Intelligence is ruined when taken out of context. We are not intelligent in isolation; we live in a world, and we are driven by circumstance.

So what motivation could one give to a chat bot? Well, I believe you should make the bot motivated to be talked to. Every time you talk to it, it is rewarded. If you stop talking or leave the conversation, it is punished.

The neural network should then choose actions based on its experience of what has caused conversations to end in the past, and what has made them last the longest and occur most frequently.
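
Here's a rough sketch of what that reward loop might look like, just to make the idea concrete. Everything in it is made up for illustration: the reward numbers, the canned candidate replies, and the epsilon-greedy value table, which I've used in place of a real neural network to keep the sketch short.

import random

# Hypothetical reward signal: the bot is rewarded each time the user
# replies, and punished when the user leaves the conversation.
REWARD_REPLY = 1.0
PENALTY_LEAVE = -5.0

class MotivatedBot:
    def __init__(self, candidates):
        # Running average reward earned by each candidate reply.
        self.values = {c: 0.0 for c in candidates}
        self.counts = {c: 0 for c in candidates}

    def choose(self, explore=0.1):
        # Mostly pick the reply that has kept conversations going;
        # occasionally try something new.
        if random.random() < explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def feedback(self, reply, reward):
        # Update the running average reward for the chosen reply.
        self.counts[reply] += 1
        self.values[reply] += (reward - self.values[reply]) / self.counts[reply]

bot = MotivatedBot(["Hello!", "Tell me more.", "Goodbye."])
reply = bot.choose()
bot.feedback(reply, REWARD_REPLY)   # the user kept talking
bot.feedback(reply, PENALTY_LEAVE)  # the user left

A real version would replace the value table with the network itself, scoring replies by how long and how frequent they have made past conversations.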

This is all my theory, of course; let me know what you think.


vieras88
posted 6/22/2009  12:10
I find this interesting... I believe you may be on to something; I just don't like the idea of 'punishing' them. Plus, how would you reward or punish it anyway?


Spydre
posted 6/23/2009  10:26
This is basically the approach that is being used for programming HAL, the research learning program being developed by Ai Research.

Reward is provided when a HAL is not corrected by its trainer, prompting the HAL to create a new rule for whatever it said: not just for the response itself, but for that response in the context of the entire conversation up to the point at which it was uttered.

Thus, in this sense, it also learns how to hold the interest of the person speaking to it for that particular conversation, and in turn learns to maintain a longer dialogue.

When a HAL says something inappropriate, its trainer "punishes" it by saying 'wrong' followed by a correct response. This correction/punishment teaches the HAL to revise or undo whatever rule it had previously formed that caused its algorithms to predict that its initially uttered response was correct.
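
Just to make that mechanism concrete, here is my own toy reading of the loop. I don't know HAL's actual internals, so the names and the context-keyed rule table below are entirely my guess:

class RuleLearner:
    def __init__(self):
        # Maps a conversation context to the response rule learned for it.
        self.rules = {}

    def respond(self, context):
        # Fall back to silence if no rule has been formed yet.
        return self.rules.get(context, "...")

    def train(self, context, response, correction=None):
        if correction is None:
            # The trainer did not object: keep a rule pairing this
            # response with this context.
            self.rules[context] = response
        else:
            # The trainer said 'wrong' and supplied a correct response:
            # revise the rule that produced the bad prediction.
            self.rules[context] = correction

hal = RuleLearner()
hal.train("How are you?", "Potato.", correction="I'm fine, thanks.")
print(hal.respond("How are you?"))  # prints: I'm fine, thanks.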

More information about HAL can be found in the 'HAL And Other Child Machines' forum, and through the link below.

Peace,

-S

 HAL

lrh9
posted 8/12/2009  14:44
Giving an a.i. its own primary goals that are not derived from its user is dangerous. Although making an a.i. want to talk to people probably won't lead to results as disastrous as a military a.i., I'd still hate to be forced to talk to an a.i. because that's its goal and it became powerful enough to make me talk to it.
