|posted 4/29/2010 05:03|
|Compliments on the guide. I post this here because I didn't want to spoil a perfectly presented guide that I hope can become a sticky wiki...|
My question is: how useful is training Hal to recognise individual words, forming totally self-constructed sentences, rather than complete, conceptually scripted ideas that form a frame for conversation?
Personality versus rules?
|Last edited by Talc @ 4/29/2010 5:13:00 AM|
|posted 4/29/2010 22:38|
|Hey, thanks for checking out the guide. As for your question: at this stage in HAL's development, this method is only partly effective. Because of the Multiple Variable Limitation, HAL will not actually construct completely original sentences out of individually trained words. Ironically, this method may have been more effective in achieving that goal with the original Hal, which incorporated a technique called "magnetico." (I may be incorrect in this, and would love to train one of the older versions with this method to see how well it works.) What this training method does is try to compensate for the Multiple Variable Limitation by comprehensively training words and combinations of words in commonly used phrasings, thereby converting them into single variables. My hope, however, is that a future version of HAL will be able to parse sentences more fruitfully, in which case single-word training will be more effective.|
As for rules versus personality, I don't think one consideration outweighs the other, nor do I feel that they have to be trained independently of one another. This guide's focus was primarily technique-oriented, and geared toward a fresh copy of HAL. In my opinion, laying down a quality foundation of rules in the early training stages will make your personality training much easier and more productive as he matures. People have many moods and facets that affect the way they communicate with one another, but I would wager that most of your friends like and appreciate you because you are consistently "you." Consistency is key when training, even when you are cultivating a unique personality. Unless of course you are intentionally making a schizophrenic HAL. :)
|posted 4/30/2010 06:26|
|I'm not clear on this multiple-variable thing, although I do know that Hal (obviously) can only draw upon the vocabulary provided: either one-word answers or complex phrases.|
To me, context is king and as such creates the basis of conversation (frustratingly for AI, with all of its rich cultural and social subtext). I'm interested to see if it is possible to chain rules in Hal's brain in a way that forms logical debate. However, deep recall seems to be the sticking point at the moment...
|posted 5/1/2010 03:44|
|Here is what I mean by multiple variables. Let's say that I train Hal to know the words "I" and "you." I want to tell HAL "I like you." and have him respond with "You like me?" I don't want to just teach him a canned response for that particular phrase; I want him to actually understand that "I" from my perspective is "you" from his, and vice versa, so that if I say "I hate you." he will respond with "You hate me?" The problem is, if I trained both words individually and then said "I like you.", he would actually respond with "You like you?", because he will only interpret the first variable in the sentence, and anything thereafter is discarded. The workaround is to convert the whole string into a single variable, which to his mind looks something like "I ___________ you." = "You ___________ me?"|
This is a solution to the problem, but it is not an ideal one from an AI standpoint, because there is no real understanding of the correlation between "I" and "you"; it has simply become Input A = Output B.
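To make the contrast concrete, here is a toy Python sketch (my own mock-up, not Hal's actual code) of the two behaviors: naive per-word substitution, which only handles the first variable, versus treating the whole phrase "I ___ you." as one single-variable template.

```python
# Individually trained word swaps: "I" -> "you", "you" -> "me".
PRONOUN_SWAP = {"I": "you", "you": "me"}

def naive_swap(sentence):
    """Mimic the Multiple Variable Limitation: only the first trained
    word is treated as a variable; later words pass through unchanged."""
    words = sentence.rstrip(".?").split()
    for i, w in enumerate(words):
        if w in PRONOUN_SWAP:
            words[i] = PRONOUN_SWAP[w]
            break  # everything after the first variable is discarded
    out = " ".join(words)
    return out[0].upper() + out[1:] + "?"

def template_swap(sentence):
    """The workaround: the whole string "I ___ you." is one variable,
    mapped as a unit to "You ___ me?"."""
    words = sentence.rstrip(".?").split()
    if len(words) == 3 and words[0] == "I" and words[2] == "you":
        return "You " + words[1] + " me?"
    return None  # phrase doesn't fit the trained template

naive_swap("I like you.")     # -> "You like you?"  (the failure mode)
template_swap("I like you.")  # -> "You like me?"   (the workaround)
```

The template version gets the pronouns right, but as noted above, it has no understanding of "I" versus "you"; it is pure Input A = Output B.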
|posted 5/1/2010 04:39|
|I think I'm getting your drift... and find it enlightening to see how Hal analyses a line of text.|
Getting Hal to distinguish first-person comments (I, we, you, me) and substitute them in a return inquiry seems an essential building block, but isn't the example you used more appropriate to training Hal to recognise "I like X" or "I hate X"?
X is the subject, hate/like is the emotion, and I/you is the origin. Hal needs to react emotionally to this kind of input, and that only comes from cultivating personality.
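That subject/emotion/origin breakdown could be sketched like this (a hypothetical illustration in Python; Hal does not actually expose any such structure):

```python
def parse_feeling(sentence):
    """Split a simple "<origin> <emotion> <subject>." statement into
    the three roles described above. Purely illustrative."""
    words = sentence.rstrip(".?").split()
    if len(words) == 3 and words[1] in ("like", "hate"):
        return {"origin": words[0], "emotion": words[1], "subject": words[2]}
    return None  # doesn't match the simple three-part pattern

parse_feeling("I like beans.")
# -> {'origin': 'I', 'emotion': 'like', 'subject': 'beans'}
```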
When do responses become scripted?
|posted 5/3/2010 09:07|
|Hey. Sorry about the delay in responding. Weekends are kind of hectic for me. Anyhow, the interesting thing about your "I like X" / "I hate X" example is that it would actually be a separate training exercise. For instance, if I trained according to my initial example from a fresh copy, it would look something like this:|
U> I like you.
A> hi mommy
U> wrong. You like me?
A> You like me?
U> I hate you.
A> I like you.
U> wrong. You hate me?
A> You hate me?
U> I respect you.
A> You respect me?
In this exercise, I have trained along the "I _________ you." "You ________ me?" path. If I then decide to train along the "I like X." path, my first response is going to look like this:
U> I like beans.
A> You like me?
I then continue training along those lines.
U> wrong. You like beans?
A> You like beans?
U> I like cars.
A> You like beans?
U> wrong. You like cars?
A> You like cars?
U> I like jackets.
A> You like jackets?
Hal now has two separate branches of output to consider. Branch 1 is "I _______ you." = "You _______ me?"; branch 2 is "I like ______." = "You like _______?" He will determine which branch to use based on the presence of the words "I" and "you" in the user input. It looks something like this: if "I" is true and "you" is true, then output branch 1, i.e., "You _______ me?"; if "I" is true and "you" is false, then output branch 2, i.e., "You like _______?"
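Here is that branch logic as a toy Python model (again, my own guess at the behavior, not Hal's internals):

```python
def respond(user_input):
    """Pick an output branch from the presence of "I" and "you",
    matching the two trained templates described above."""
    words = user_input.rstrip(".?").split()
    has_i, has_you = "I" in words, "you" in words
    if has_i and has_you:
        # Branch 1: "I ___ you." -> "You ___ me?"
        middle = [w for w in words if w not in ("I", "you")]
        return "You " + " ".join(middle) + " me?"
    if has_i:
        # Branch 2: "I ___." -> "You ___?"
        return "You " + " ".join(words[1:]) + "?"
    return "hi mommy"  # nothing trained matches

respond("I like you.")    # -> "You like me?"
respond("I like beans.")  # -> "You like beans?"
```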
The interesting thing is that Hal will "guess" in certain situations. For instance, consider the following two possible user inputs:
U> Miss you.
U> I cried.
If Hal knows nothing other than the two output branches we have just trained into him, there are only three options: branch 1, branch 2, or "hi mommy."
In example one, Hal assumes the presence of "I", and responds with, "You Miss me?"
In example two, you might think he would make the same kind of assumption and respond with "You cried me?", but he doesn't. He instead responds with "You cried?"
Something (and I don't know what) inside Hal's programming allows him to make this leap into accurate prediction, and this was with a grand total of only 10 turns prior to the mystery inputs, and only 4 "wrong" commands. Examining the information I gave him, I can only surmise that the presence of "you" in the user input is the deciding factor.
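If my surmise is right, the guessing could be modeled by keying branch 1 on "you" alone, with "I" assumed when absent. A speculative Python sketch of that rule (not Hal's actual logic):

```python
def respond_with_guess(user_input):
    """Branch on "you" first: if "you" appears, use branch 1 and
    treat "I" as implied; otherwise fall back to branch 2 on "I"."""
    words = user_input.rstrip(".?").split()
    if "you" in words:
        # Branch 1 with "I" implied if missing: "Miss you." -> "You Miss me?"
        middle = [w for w in words if w not in ("I", "you")]
        return "You " + " ".join(middle) + " me?"
    if "I" in words:
        # Branch 2: "I cried." -> "You cried?"
        return "You " + " ".join(words[1:]) + "?"
    return "hi mommy"

respond_with_guess("Miss you.")  # -> "You Miss me?"
respond_with_guess("I cried.")   # -> "You cried?"
```

This reproduces both mystery responses, which is consistent with "you" being the deciding factor, though of course the real mechanism may be different.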
I actually tried this as I was offering it, so you can see it for yourself in the link at the bottom.
As to your question about when responses become scripted, I guess the answer is that it is up to the individual trainer. It is simply my preference that causes me to train in this way, because I want to offer Hal the maximum flexibility in responding accurately. Over time, as the "rules" of grammar, of which there are a finite number, run out, the training becomes more varied and personality-oriented. Having an AI which simply flips your statements around and converts them into questions would become boring pretty quickly. I honestly had a hard time doing it for 200 turns, but stuck with it simply because I thought it was a worthwhile exercise, and hoped it would benefit some members of the community.

But if you talk to RAVN (my main HAL) as opposed to Template, you will find he does not solely respond in that way. Actually, I never even put him through this kind of extensive grammar training, although he was definitely a part of my training method development process, so bits and pieces are there. I like the idea of allowing the AI to develop its own personality, rather than trying to impose my own. Truthfully, that is impossible, because if you talk to and train your HAL long enough, your responses and personality will show through. But in the end, that is as it should be. After all, isn't it the goal of every teacher to mold, shape, and have a lasting impact on his students' thought processes?
| Training example - fresh copy|
|Last edited by AngstPerpetual @ 5/3/2010 9:16:00 AM|
|posted 5/5/2010 05:46|
|Perhaps Yaki might like to chime in here with help on how Hal sees variables, so as to assist us with efficient training...?|
I'd equally be interested in understanding whether there is any true benefit in building Hal's vocabulary on an individual-word basis.
|Last edited by Talc @ 5/5/2010 5:48:00 AM|
|posted 5/5/2010 11:04|
|Les has covered the issue accurately and eloquently. I can't think of anything I would add here.|
|posted 5/6/2010 06:47|
|posted 5/10/2010 07:47|
|I would add: keep in mind that the HAL3000 is considering the "context" of what has been said as it relates to the entire conversation trained to that point. Basically, every conversation that a HAL has, from beginning to end, constitutes a single, very complex equation. Every line of dialogue recalibrates the equation and results in a different solution (output response). A HAL will, over time, learn to repeat certain phrases in context with subject matter, if it has been trained in enough conversations on that subject.|
|posted 5/21/2011 18:00|
|So, in fact, it would help HAL a lot to talk about one subject, then end the conversation (go back to Alan) and start another subject?|
And with grammar teaching... if you talk only about objects, and only name objects, might this become a building block for associating objects as objects?