Bug in validation of backprop network?


Bug in validation of backprop network?

Jeroen
Hi,

I trained a backprop network on an input file with 16 rows and an output file with 16 rows.
Inputs and outputs were all binary values.
The network learned the input-output combinations perfectly; the error was zero.
When cycling through the input file again after training completed, every input produced the correct output.
Then I made a copy of the input file, saved it under a new name, and kept only the first 5 rows (deleting the rest). When I loaded this file in the Validate Input Data tab, the outputs were NOT perfect anymore.
How is this possible? Am I overlooking something, or is this a bug?

Thanks,

Jeroen

Re: Bug in validation of backprop network?

jyoshimi
Administrator
Hi Jeroen,

The Validate Input Data tab is fairly counterintuitive, and given how many people want to use Simbrain for backprop, and the headaches this is causing, I'm definitely going to be changing that... at some point.

The easiest way to do this is to train your network again and then _not_ use the Validate Input Data tab. Go to the actual input layer of the network (Layer 1) and double-click on the yellow interaction box. Click on the Set Inputs tab, load your truncated data there, and then test using the step button there. If the training error was 0, you should continue to get good results. (The problem with the Validate Input Data tab is that it just copies what's in the input data tab; this is fine for a quick check that backprop worked, but not for any kind of real validation where you test how your results generalize to new data.)
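Simbrain is written in Java, but the principle this procedure relies on can be sketched in a few lines of Python (the weights below are made-up placeholders, not values from any trained Simbrain network): a purely feedforward net computes each output from the current input row alone, so running the first five rows of a file through fixed weights must give exactly the same answers as running the full file, row for row. Only a network with internal state (for example, recurrent connections) could behave differently on a truncated file.

```python
import math

def forward(w_ih, w_ho, x):
    """One pass through a tiny feedforward net (input -> hidden -> output).
    The result depends only on the current row x, never on earlier rows."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x))) for ws in w_ih]
    return [1 / (1 + math.exp(-sum(w * h for w, h in zip(ws, hidden))))
            for ws in w_ho]

# Hypothetical fixed weights, standing in for a trained network
# (4-3-1 here, just to keep the sketch short).
W_IH = [[0.5, -0.2, 0.1, 0.7],
        [-0.3, 0.8, 0.4, -0.6],
        [0.2, 0.2, -0.5, 0.9]]
W_HO = [[1.2, -0.7, 0.5]]

full_set = [[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0],
            [1, 0, 0, 0], [0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 0, 1]]
truncated = full_set[:5]  # same rows, as if loaded from a shorter file

out_full = [forward(W_IH, W_HO, row) for row in full_set]
out_trunc = [forward(W_IH, W_HO, row) for row in truncated]
assert out_full[:5] == out_trunc  # identical: there is no hidden state to differ
```

So if the truncated file produces different answers for the same rows, either the weights changed between runs or the network is not purely feedforward.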

If that fails, post your data or email me and we can try to figure it out.

My spring break is coming up, and at the top of my list is making a video on how to do this type of thing. Others have been asking for it here, and it's been a few months. I feel confident I can post a video in the next two weeks.

As for improving the interface for backprop, that could take quite a bit longer, since we are doing a bunch of other things in Simbrain now, including a pretty cool 3D world. Perhaps this summer, or earlier if anyone's interested in working on it with me.

- Jeff

Re: Bug in validation of backprop network?

Jeroen
Hi,

Sorry, I tried it, but it did not work.
The first row of my truncated data set gave the same wrong answer.
During training, the network associated the right answer with this row.

If you have time to replicate the problem, here are the details:
Three-layer backpropagation network, 4-24-1 neurons.
Each input consists of 4 binary values.
If exactly one of the inputs is 1, the output is 1; otherwise the output is 0.

Input file:
0 0 0 0
0 0 0 1
0 0 1 0
0 1 0 0
1 0 0 0
0 0 1 1
0 1 0 1
1 0 0 1
0 1 1 0
1 0 1 0
1 1 0 0
0 1 1 1
1 0 1 1
1 1 0 1
1 1 1 0
1 1 1 1

Output file:
0
1
1
1
1
0
0
0
0
0
0
0
0
0
0
0
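For anyone who wants to sanity-check the files above, the target function is easy to express in code. A quick Python sketch (enumerating the patterns in plain binary-counting order rather than the grouping used above):

```python
from itertools import product

def target(bits):
    """Output is 1 iff exactly one of the four binary inputs is 1."""
    return 1 if sum(bits) == 1 else 0

# All 16 four-bit patterns, paired with their target outputs.
dataset = [(list(bits), target(bits)) for bits in product([0, 1], repeat=4)]

assert len(dataset) == 16               # same 16 rows as the files above
assert sum(y for _, y in dataset) == 4  # only the four one-hot rows output 1
```

Like XOR, this "exactly one" function is not linearly separable, so the hidden layer is doing real work in this problem.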

Training arrives at zero error.
Validation: use the first 5 rows of the input file (saved as a separate file).
Now the first row (with four zeroes) unexpectedly gives a wrong output: 1.

It would be great if you could tell me how to make it work as expected.

Thanks,
Jeroen

Re: Bug in validation of backprop network?

jyoshimi
Administrator
Hi again. I just did a quick test on this and could not get 0 error. I got .0625, I suspect because of that first row, which produced a 1 instead of a 0 whether or not I used the truncated input.

If you can indeed get zero error, can you save your simulation as a workspace file and email it to me? Or maybe you can post it here somehow.

- Jeff

Re: Bug in validation of backprop network?

Jeroen
Hi,

I initially got an error of .0625 too, but if you re-randomize the weights several times, eventually the network finds a state with zero error. Once the error is zero, you have to stop the training, because sometimes the error shoots up again.
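This random-restart strategy can be sketched generically. The "trainer" below is a stand-in (random search on a deliberately easy, linearly separable task: OR of four bits) rather than real backprop, but the outer loop is the same idea: re-randomize, retrain, and stop the moment the error reaches zero.

```python
import random
from itertools import product

def train_once(data, rng, tries=200):
    """Stand-in for one training run: random search for a threshold unit.
    A real run would be backprop from a fresh random initialization."""
    best_err, best = len(data), None
    for _ in range(tries):
        w = [rng.uniform(-1, 1) for _ in range(4)]
        b = rng.uniform(-1, 1)
        # Count misclassified rows for this candidate weight vector.
        err = sum(int(int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0) != y)
                  for x, y in data)
        if err < best_err:
            best_err, best = err, (w, b)
        if best_err == 0:
            break
    return best_err, best

# OR of four bits: linearly separable, so zero error is reachable.
data = [(bits, int(any(bits))) for bits in product([0, 1], repeat=4)]

rng = random.Random(42)
err, params = None, None
for restart in range(50):   # re-randomize and retrain, as described above
    err, params = train_once(data, rng)
    if err == 0:            # stop as soon as the error hits zero,
        break               # before it can shoot back up
assert err == 0
```

With a non-convex error surface like backprop's, each restart starts from a different basin, which is why repeated re-randomization can eventually land on a zero-error solution that a single run misses.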

Thanks for looking at it,

Jeroen

Re: Bug in validation of backprop network?

jyoshimi
Administrator
Hi Jeroen,

The workspace you sent seemed to have recurrent connections in the hidden layer. In the few times I've tried, I could not get 0 error with this training set, and that first row seems to be the culprit. But I've been rushing through various tasks, so maybe I just didn't try hard enough. If you can get a network with no recurrent connections to do this, please send me the workspace. Cheers,

- Jeff