i_phish_cats on Sept 28, 2018 | on: Genetic algorithms for training deep neural networ...
You are evolving the topology, but using regular gradient descent/backprop for any given network, correct?
jawarner on Sept 28, 2018
No, in NEAT both the weights and topology are evolved. It is totally gradient-free.
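A minimal sketch of what "gradient-free" means here: weights are optimized by random perturbation plus selection rather than backprop, and topology grows through structural mutations. The genome layout, function names, and mutation rates below are illustrative assumptions, not the NEAT reference implementation:

```python
import random

def mutate_weights(genome, rate=0.8, sigma=0.5):
    """Gradient-free weight mutation: perturb each connection weight
    with Gaussian noise; no gradients are ever computed."""
    for conn in genome["connections"]:
        if random.random() < rate:
            conn["weight"] += random.gauss(0, sigma)

def mutate_add_connection(genome):
    """Topology mutation: wire up two previously unconnected nodes."""
    a, b = random.sample(genome["nodes"], 2)
    existing = {(c["in"], c["out"]) for c in genome["connections"]}
    if (a, b) not in existing:
        genome["connections"].append(
            {"in": a, "out": b, "weight": random.gauss(0, 1)})

def mutate_add_node(genome, next_node_id):
    """Topology mutation: split an existing connection by inserting a
    new node (full NEAT would also disable the original connection)."""
    conn = random.choice(genome["connections"])
    genome["nodes"].append(next_node_id)
    genome["connections"].append(
        {"in": conn["in"], "out": next_node_id, "weight": 1.0})
    genome["connections"].append(
        {"in": next_node_id, "out": conn["out"], "weight": conn["weight"]})

# Hypothetical two-input, one-output genome:
genome = {"nodes": [0, 1, 2],
          "connections": [{"in": 0, "out": 2, "weight": 0.3},
                          {"in": 1, "out": 2, "weight": -0.7}]}
mutate_weights(genome)
mutate_add_node(genome, next_node_id=3)
```

Selection over a population of such genomes, scored by a fitness function, replaces the backprop step entirely.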
wholemoley on Sept 29, 2018
Yeah, topology and weights. It's highly subject to initial conditions. You almost need another NEAT network to evolve the initial conditions. I believe it's turtles all the way down.