|posted 3/23/2017 20:29|
|New method to improve backprop|
|posted 3/23/2017 20:59|
| What is HW?|
|posted 3/23/2017 21:27|
|HW stands for hardware, typically a GPU, FPGA, or other hardware used to evaluate and train AI|
|posted 3/23/2017 22:47|
| I have my own style of genetic algorithm for NNs to use in AGIs. |
How do yours work?
|posted 3/23/2017 23:01|
|Difficult to answer here. I have spent my spare time since 2013 developing an entire ecosystem of software to form a complete solution for deep genetic algorithms. |
In short, I have found a mechanism to shorten training time and to find better solutions for sparse datasets.
|posted 3/23/2017 23:49|
| Well, I consider myself a really good NN theorist.|
The deep learning boom ran between 2013 and late 2015.
What you are saying is that you have a one-shot NN.
I should be able to understand yours, somewhat.
| One-Shot Learning - Fresh Machine Learning #1 |
|posted 3/24/2017 03:20|
| Google NN forum|
|posted 3/24/2017 15:43|
| I believe I understand most of what you are saying.|
It just happens I am working on audio waves right now in C/C++,
for my AGI audio infrastructure project. The AGI vision infrastructure will
come next.
So I can really relate to that sine wave example.
I like assembly language, but by the time I had mastered 6502 assembly
language, things had moved on. C/C++ is so much more portable, and there is
a long-lasting continuity that I like.
I had it with real mode, flat mode, and DOS extenders.
So now I prefer Linux.
And I am still getting around to an inline
assembler, if I find an easy tutorial.
So back to the sine wave example:
Right now I am not recording with an RNN or LSTM; maybe later.
I am just copying audio directly into memory and hitting it with my
genetic NN algorithm. My approach to genetic NN algorithms is just like
yours. I start with one little NN with random weights. Then I feed in the
data and record the output. All of this tiny NN simulation would happen in
a very small location in memory.
| nn function autoencoder|
|Last edited by keghn @ 3/24/2017 3:49:00 PM|
|posted 3/24/2017 16:03|
| Then a clone of it would be made in the nearby memory area. One weight would be allowed to re-randomize and the others are pinned.|
A link is made so that I have a linked list of all the other mutated
clones, and of the others that follow.
A search would look through each one and test it with gradient descent.
In a real brain this would work really well, because NNs work like a
decentralized CPU: everything is running in parallel.
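Putting these last two posts together, a minimal C++ sketch of the procedure
might look like the following. The TinyNN struct, the polynomial unit, and
the sine-wave training data are illustrative assumptions, and where keghn
tests each clone with gradient descent, this sketch simply scores each
clone's mean-squared error directly.

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Hypothetical tiny NN: one unit whose weights are coefficients of a
    // small polynomial in the input. 'next' links to the next mutated clone.
    struct TinyNN {
        std::vector<float> w;
        TinyNN* next = nullptr;

        float eval(float x) const {
            float y = 0.0f, p = 1.0f;
            for (float wi : w) { y += wi * p; p *= x; }
            return y;
        }
    };

    static float frand() { return 2.0f * rand() / RAND_MAX - 1.0f; }

    // Fitness: mean-squared error against the recorded samples.
    static float score(const TinyNN& nn, const std::vector<float>& xs,
                       const std::vector<float>& ts) {
        float e = 0.0f;
        for (size_t i = 0; i < xs.size(); ++i) {
            float d = nn.eval(xs[i]) - ts[i];
            e += d * d;
        }
        return e / xs.size();
    }

    int main() {
        // "Copy audio directly into memory": here, one period of a sine wave.
        std::vector<float> xs, ts;
        for (int i = 0; i < 64; ++i) {
            float x = 6.2831853f * i / 64.0f;
            xs.push_back(x);
            ts.push_back(std::sin(x));
        }

        // Start with one little NN with random weights.
        TinyNN* head = new TinyNN;
        for (int i = 0; i < 6; ++i) head->w.push_back(frand());

        // Grow a linked list of clones in nearby memory: each clone
        // re-randomizes exactly one weight; the others stay pinned.
        TinyNN *best = head, *tail = head;
        for (int gen = 0; gen < 20000; ++gen) {
            TinyNN* c = new TinyNN(*best);
            c->next = nullptr;
            c->w[rand() % c->w.size()] = frand();
            tail->next = c;
            tail = c;
            if (score(*c, xs, ts) < score(*best, xs, ts)) best = c;
        }

        std::printf("best MSE = %f\n", score(*best, xs, ts));
        return 0;  // the clone list is leaked here for brevity
    }

Because each clone differs from its parent by exactly one re-randomized
weight, the search walks one axis of weight space at a time, which is what
keeps the pinned-weight clones cheap and easy to test in parallel.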
| nn function autoencoder|
|Last edited by keghn @ 4/3/2017 11:27:00 PM|
|posted 3/25/2017 14:33|
My AI Minds use the AudListen mind-module to send input into the auditory memory.
keghn wrote @ 3/24/2017 4:03:00 PM:
Then a clone of it would be made in the nearby memory area.
| Perl Mind Programming Journal|
|posted 3/25/2017 17:04|
|Are you interested in my framework? I am thinking of letting it out to a few beta testers. |
|posted 3/25/2017 23:08|
| I find your framework interesting.|
|posted 3/28/2017 17:03|
|Yes, I saw that. It's actually very similar to what I have been working on since last year. I have published a lot of details about it since 2016.|
|posted 4/3/2017 07:49|
|A huge factor in DNA is the proteins it produces. Another is epigenetic effects. Try taking into account which proteins are produced at which times, rather than the DNA structure alone. Then work your way back to the DNA and whether it is switched on. |
|posted 4/3/2017 14:03|
|Off topic. That is not at all what we are talking about. We are talking about deep learning and neural networks.|
|posted 4/5/2017 18:00|
| DNA is a base-4 number system; each base fits in a 4-bit nibble.|
I work with it a lot for testing data compression.
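The actual encoding is not shown here, but a minimal C++ sketch, assuming
one base per 4-bit nibble and an arbitrary code() mapping, might look like
this:

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Map each base to a small integer; one base per 4-bit nibble,
    // so two bases pack into each byte. The values are arbitrary.
    uint8_t code(char b) {
        switch (b) {
            case 'A': return 0x1;
            case 'C': return 0x2;
            case 'G': return 0x3;
            case 'T': return 0x4;
            default:  return 0x0;  // unknown / padding
        }
    }

    std::vector<uint8_t> pack(const std::string& seq) {
        std::vector<uint8_t> out;
        for (size_t i = 0; i < seq.size(); i += 2) {
            uint8_t hi = code(seq[i]);
            uint8_t lo = (i + 1 < seq.size()) ? code(seq[i + 1]) : 0x0;
            out.push_back((hi << 4) | lo);
        }
        return out;
    }

    int main() {
        std::string seq = "GATTACA";
        std::vector<uint8_t> packed = pack(seq);
        std::printf("%zu bases -> %zu bytes\n", seq.size(), packed.size());
        return 0;
    }

Packing two bases per byte halves the raw text size before any real
compressor runs; since each base strictly needs only 2 bits, a tighter
layout is possible, but nibbles keep the indexing simple.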
@ToolTech I have seen that in the neural network world there are encoders,
perceptrons, and decoders.
In the post above there is a decoder that does a sine wave: input data is transformed into output data.
What do you think of perceptrons that detect something and then push out an activation on a dedicated channel?
The way I envision a simple detector
is a detector NN that detects the height of a sine wave.
Let's say the sine wave bounces between 0 and 1 and the NN can be trained
to detect 0.5.
This perceptron would have one input and eleven outputs, one for each level
from 0.0 to 1.0 in steps of 0.1. When 0.5 is detected it would be trained to generate a 0.99999 on output nerve ending
number five (counting from zero). All of the other outputs would be less than that.
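A single-layer perceptron with one input cannot by itself pick out a band
around 0.5, since its response is monotone in the input, so a learned
version would need at least one hidden layer. A hand-built C++ stand-in
that only shows the intended contract, eleven outputs with number five
firing near 0.99999 at an input of 0.5, might look like this; the detect()
function and the 0.00001 floor are illustrative assumptions:

    #include <cstdio>

    // Stand-in for the trained detector: one input in [0, 1], eleven
    // outputs. The output whose level is nearest the input fires high;
    // all other outputs stay low.
    void detect(float x, float out[11]) {
        int hot = (int)(x * 10.0f + 0.5f);  // nearest of 0.0, 0.1, ..., 1.0
        if (hot < 0) hot = 0;
        if (hot > 10) hot = 10;
        for (int i = 0; i < 11; ++i)
            out[i] = (i == hot) ? 0.99999f : 0.00001f;
    }

    int main() {
        float out[11];
        detect(0.5f, out);  // output number five fires
        for (int i = 0; i < 11; ++i)
            std::printf("out[%d] = %.5f\n", i, out[i]);
        return 0;
    }

Training a real NN to hit this target would amount to learning a one-hot
quantizer over the eleven levels.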
|Last edited by keghn @ 4/5/2017 6:03:00 PM|