Stopping training


Stopping training

sarwj
Dear Forum,

Please excuse a newbie's question. I am adapting and writing a script for a Simbrain backprop network. In training it, I am trying to stop the training at an intelligent point rather than just after a fixed number of iterations or a mean squared error (MSE) threshold, as the API offers. What would you recommend as a smart way to stop training? For example, is it possible to detect (in a script, of course) that the MSE is not getting any smaller, go back to the point where it levelled off, and use the trained network at that point?


Alternatively, has anyone implemented well-known training algorithms? For example, does Simbrain (or a third party) offer a script/algorithm for any of these:

Newton's method
Conjugate gradient
Quasi-Newton method
Levenberg-Marquardt

Many thanks for any help you can offer.

Re: Stopping training

jyoshimi
Administrator
Stopping training (in a script) after the error stops being reduced should be pretty easy: just track the error deltas between iterations and stop when the deltas fall below a threshold. Reverting to an earlier network state would be more difficult and could use a fair bit of memory (since, at least in a naive implementation, it would require storing all the network parameters at every time step, or every nth time step), but it is also doable.
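Something along these lines should work as a minimal, untested sketch, given a trainer already set up in your script. The method names iterate() and getError() are what I'd expect from the IterableTrainer interface, so double-check them against the source:

    // Untested sketch: stop when the improvement in MSE between iterations
    // falls below a threshold. iterate() and getError() are assumed from
    // the IterableTrainer interface; verify the exact names in the source.
    double threshold = 1e-6;    // smallest improvement worth continuing for
    int maxIterations = 10000;  // hard cap so the loop always terminates
    double previousError = Double.MAX_VALUE;

    for (int i = 0; i < maxIterations; i++) {
        trainer.iterate();
        double error = trainer.getError();
        if (previousError - error < threshold) {
            System.out.println("Error levelled off at iteration " + i
                    + " (MSE = " + error + ")");
            break;
        }
        previousError = error;
    }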

If you implement something like conjugate gradient or Levenberg-Marquardt, let us know!

- Jeff

Re: Stopping training

Wassim Jabi
Dear Jeff,

Many thanks for your reply. Is there a way to make a deep copy of the BackPropTrainer after it has been randomised, so that we capture the ideal number of iterations that produces the lowest MSE and then iterate the cloned trainer to that exact number of iterations? Unfortunately, I am not very familiar with Java syntax, so if you can provide a code snippet, that would be wonderful. Many thanks for your help.

Best Regards,

Wassim

Re: Stopping training

jyoshimi
Administrator
Hi Wassim,

The iteration number is easy to get, but a deep copy of the trainer is not. You'd have to fork the code and write your own version of the BackpropTrainer class, which wouldn't be too hard but would require knowledge of Java. Here are the relevant classes:

https://github.com/simbrain/simbrain/blob/master/src/org/simbrain/network/trainers/BackpropTrainer.java
https://github.com/simbrain/simbrain/blob/master/src/org/simbrain/network/trainers/IterableTrainer.java

I'll put it on my list to add this, but as I've mentioned before, we're rebuilding all of this from the ground up. For greater flexibility, I suggest a Python solution like TensorFlow or Keras.

Then again, maybe there is an easy way to do what you have in mind. I don't have the time to work with you to find it, but if you do make progress, let me know or submit a pull request, and we can take it from there!
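For instance, here is a rough, untested sketch of one possibility that avoids the deep copy entirely: keep a snapshot of the synapse strengths whenever the MSE hits a new minimum, then restore that snapshot when training ends. The accessors used here (getSynapseList(), getStrength(), setStrength()) and the import path are assumptions to check against the Network and Synapse classes, and neuron biases would need the same save/restore treatment:

    import java.util.ArrayList;
    import java.util.List;
    import org.simbrain.network.core.Synapse; // assumed package path

    // Untested sketch: rather than deep-copying the trainer, remember the
    // synapse strengths from the lowest-MSE iteration and restore them at
    // the end. Assumes 'trainer' and its 'network' are already set up.
    double bestError = Double.MAX_VALUE;
    int bestIteration = -1;
    List<Double> bestWeights = new ArrayList<Double>();

    for (int i = 0; i < 5000; i++) {
        trainer.iterate();
        double error = trainer.getError();
        if (error < bestError) {
            bestError = error;
            bestIteration = i;
            bestWeights.clear();
            for (Synapse s : network.getSynapseList()) {
                bestWeights.add(s.getStrength());
            }
        }
    }

    // Roll the network back to its best recorded state.
    int j = 0;
    for (Synapse s : network.getSynapseList()) {
        s.setStrength(bestWeights.get(j++));
    }
    System.out.println("Restored weights from iteration " + bestIteration
            + " (MSE = " + bestError + ")");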

Cheers

- Jeff