(November 7, 2016 at 5:40 pm)ProgrammingGodJordan Wrote: Neural models SELF-ORGANIZE on the horizon of GRADIENT DESCENT-bound calculations, as some cost function minimizes error over the input signals.
These are non-trivial models that, like humans, LEARN ENVIRONMENTALLY.
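For what it's worth, the "cost minimizes error" part of the quoted claim is just gradient descent on a cost function. A minimal sketch (the quadratic cost and all names here are my own illustration, not from the post):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a cost by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Toy cost C(x) = (x - 3)^2 with gradient 2 * (x - 3); minimum at x = 3.
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Training a neural network is the same loop, with `x` replaced by the weight vector and `grad` computed by backpropagation.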
Two-dimensional games, via Atari Q-learning, are a microcosm of real life. (2D games, though of low resolution, provide non-trivial variation in the task sequences analysed.)
These models take:
(0) Pixels (as humans do)
(1) Controller access, absent explicit reward mapping (as humans fare)
(2) Reward notation/score (as humans fare)
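The "Atari Q" the post alludes to is presumably deep Q-learning (DQN), where a network maps pixels to action values. A full DQN is out of scope for a forum post, but the underlying Q-learning update (the same temporal-difference rule, minus the deep network) can be sketched on a toy chain world; the environment and parameters below are my own illustration:

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D chain: the agent starts at state 0,
    action 0 steps left, action 1 steps right, and reward 1.0 is given
    only on reaching the terminal state n_states - 1."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy in every non-terminal state is "step right", i.e. toward the reward. DQN replaces the table `q` with a convolutional network over raw pixels, which is what makes the Atari results non-trivial.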
Mathilda likely lacks accurate comprehension of said field (as observed amidst her ignorant responses).
(November 7, 2016 at 5:40 pm)ProgrammingGodJordan Wrote: My stipulations are thorough in nature, whilst Mathilda's are narrow in content.
Yeah, that confirms what I suspected. You really should have paid attention to my recommendation in the shoutbox to read Christof Koch's Biophysics of Computation. The book makes it quite clear that the kind of artificial neural networks you are referring to are computational models inspired by the brain; real neurons don't actually work like that. Given how frequently you remark that people have never written a neural network, I'm guessing that you once wrote some kind of backprop ANN (or called one of the many routines written for you in some package or library), trained it on some data, and now think you are some kind of expert. What you think you are talking about is most certainly not computational neuroscience.
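For readers following along, "some kind of backprop ANN" really is a small exercise. Here is a minimal sketch of one, a 2-input sigmoid network trained on XOR by stochastic backpropagation; the architecture and hyperparameters are my own illustrative choices:

```python
import math
import random

def train_xor(hidden=8, epochs=20000, lr=0.5, seed=1):
    """Train a tiny 2 -> hidden -> 1 sigmoid network on XOR using plain
    backpropagation with a squared-error loss."""
    rng = random.Random(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # w1[j] = [weight_x0, weight_x1, bias] for hidden unit j
    # w2 = one weight per hidden unit, plus a trailing output bias
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(x):
        h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
        y = sig(sum(wj * hj for wj, hj in zip(w2, h)) + w2[-1])
        return h, y

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = (y - t) * y * (1.0 - y)               # output-layer delta
            for j in range(hidden):
                dh = dy * w2[j] * h[j] * (1.0 - h[j])  # hidden-layer delta
                w2[j] -= lr * dy * h[j]
                for i in (0, 1):
                    w1[j][i] -= lr * dh * x[i]
                w1[j][2] -= lr * dh                    # hidden bias
            w2[-1] -= lr * dy                          # output bias
    return lambda x: forward(x)[1]
```

Writing (or calling a library version of) something like this is a fine learning exercise, but as the post above argues, it is a brain-inspired computational model, not a model of how biological neurons actually behave.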
Here's a tip. Leave the ego out of this.