Hi,
I find the range of models covered in Simbrain quite exciting, and I am sure 3.0 will be even richer. Oddly, however, I find it difficult to do what I (perhaps wrongly) would consider the more basic ANN operations. Perhaps these are not covered, but more likely I am just not finding the right means.

I want to start with a simple BP network, such as the XOR network here. I would like to train the network and observe the decrease in error. I can open or create the network just fine. I can also open two data-world sets, one for input patterns and one for output patterns. I change the 'clamped' default of the input units to 'linear' and set them up so that each receives input from one column of the input data table. I set the data table to 'iterative' mode. Now when I press Run, the network cycles through the patterns. But I am not using the output patterns, nor do I see any commands to train the network or monitor its output error.

I know that there is a BP lesson where the network has a box around it and a BackPropagation label, and if I click on that I get the option to train the network, but I don't see it for a network I create myself, or for the XOR network. Any help greatly appreciated!

Fred
Update: I have discovered that one can create a BP-network as a specific type of subnetwork, and then there is a training option that requests input and output files.
Training seems wrong to me, though. On a simple 2-2-1 network training on XOR, with a learning rate of 0.9 and some momentum, the network is taking ages (literally millions of iterations) to get anywhere, and it still isn't solving XOR. The error appears briefly on screen but doesn't seem to be logged anywhere. I have trained networks with these parameters many times, and the solution should be reached quickly.

To test the network, I open the input patterns as a data world and step through them by clicking on each pattern in turn. But this is again not logged, and also cumbersome. Is there a formal testing mechanism that will cycle through the input patterns and produce some quantitative overview of how the network is performing? All help welcome.

Fred
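(For comparison, and with the caveat that this is generic textbook backprop rather than anything Simbrain-specific: a plain 2-2-1 sigmoid network trained online with a learning rate of 0.9 and some momentum usually solves XOR within a few thousand epochs, not millions. A minimal NumPy sketch, assuming standard sigmoid units and sum-squared error:

# Minimal stochastic backprop on XOR: 2-2-1 sigmoid network,
# learning rate 0.9, momentum 0.5. Standard textbook algorithm,
# not Simbrain code; shown only to illustrate expected convergence.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random weights, zero biases, and momentum buffers
W1 = rng.uniform(-0.5, 0.5, (2, 2)); b1 = np.zeros(2)
W2 = rng.uniform(-0.5, 0.5, (2, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
lr, mom = 0.9, 0.5
sse_history = []

for epoch in range(20000):
    sse = 0.0
    for x, t in zip(X, T):
        # Forward pass
        h = sigmoid(x @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        e = t - y
        sse += float(e @ e)
        # Backward pass: deltas for output and hidden layers
        dy = e * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        # Weight updates with momentum
        vW2 = mom * vW2 + lr * np.outer(h, dy); W2 += vW2
        vb2 = mom * vb2 + lr * dy;              b2 += vb2
        vW1 = mom * vW1 + lr * np.outer(x, dh); W1 += vW1
        vb1 = mom * vb1 + lr * dh;              b1 += vb1
    sse_history.append(sse)
    if sse < 0.01:
        print("solved XOR at epoch", epoch, "SSE =", round(sse, 4))
        break
else:
    print("did not reach SSE < 0.01; final SSE =", round(sse, 4))

From a reasonable random start this typically drops below SSE 0.01 within a couple of thousand epochs, though a 2-2-1 XOR net can occasionally stall in a local minimum regardless of the simulator used.)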
Hi Fred,
Backprop is not one of Simbrain's best features at the moment. The user interface is a bit counter-intuitive (as you've discovered), and it has not received as much attention as other aspects of the program, partly because backprop is well supported in other open source packages (e.g. emergent, joone). It is one of the many things being improved for version 3. Having said that, it does work, the testing mechanism you describe does exist, and XOR can certainly be trained in less than a million iterations! The docs on this are here, but they are not as clear as they could be. Here's what you have to do:

1. Create a new backprop network (Insert > New Network > Backprop) with 2 input nodes, 1 output node, and as many hidden units as you want. It sounds like you were able to do this.
2. Right-click on the "backprop" tab and select "Train." All the action takes place inside this dialog.
3. Inside the dialog, click the randomize button.
4. Click the input file and output file buttons and attach data files to them. For input, use Simbrain > simulations > networks > bp > training > xor_in.csv, and similarly use xor_out.csv for output. I think you made it this far.
5. Click the play button. If everything is set up right, you will see a representation of the current error, and you can run it until it reaches a level you are happy with. There is also a "batch" tab where you can run for a set number of iterations; since this bypasses the GUI it's faster. (One problem here is that if you set up the input and output files wrong, you don't get any error message.) When you're done, press Ok.

You can then test the network by selecting the input nodes, pressing the up and down arrow keys on your keyboard to set various input patterns, and iterating the network. If I should clarify any of this, let me know!

Let me end by asking you (and others) a question. One thing I've thought a lot about, but have never decided on, is what the most intuitive way would be to implement backprop in a GUI. I want to do it in a way that is consistent with Simbrain's emphasis on visual intuition and ease of use, but as a user interface problem backprop is a bit challenging. There are all these steps that require different types of configuration: setting up the network, setting up training and testing data, training the network, cross-validating, pre-processing data, etc. I have a rough idea of how I want to do these things in the next version of Simbrain, but I'd be curious what others think.

Best,
- Jeff
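(Side note on the data files: the exact contents of xor_in.csv and xor_out.csv are not spelled out in the thread, but for XOR one would expect them to hold the standard truth table, one pattern per row, along the lines of:

xor_in.csv        xor_out.csv
0,0               0
0,1               1
1,0               1
1,1               0

with the two columns of xor_in.csv feeding the two input nodes and the single column of xor_out.csv serving as the target for the output node.)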
Hi Jeff,
Many thanks for the feedback. Still having a few issues, but I'll take them offline with you and return to your question about how BP might best be presented in Simbrain.

Setting up the network at present is simple, so no change needed there. Data management can be tricky, but the basics are already just about available. I would suggest that rather than loading training data and output data through the subnetwork menu, the user ought to be encouraged to open up data worlds: one each for training, testing, and validation, if that is done. It would be good to be able to keep the input patterns and the desired targets in the same csv file.

I wouldn't work too hard on facilitating automated training, error monitoring, verification and testing. Instead, the user should run the network with a single data world at a time. Manually changing the I/O dataset to perform verification by hand, and to test at the end, is not too onerous; while not what one would want in an optimized system, it might be a good idea for a teaching system. No special support for verification and testing is necessary, as long as the user remembers to clamp the weights before using them.

However, a key utility that is missing, and that would make all this work well in a teaching setting, is graphing: continuous graphing of error is essential to make sense of the training protocol. I know you are working on some graphing for 3.0, but I would suggest it would be a good thing to prioritize. Plotting the individual activation functions in input space is also useful, as for example here: http://cspeech.ucd.ie/~fred/teaching/oldcourses/ann98/bpexample.html

I'd be happy to discuss further, or help out, of course!

Fred
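(On the graphing point: a minimal matplotlib sketch of the two teaching plots described above. It assumes an error log and a function mapping a 2-D input to a single unit's activation, for example the sse_history list and weights from the backprop sketch earlier in the thread; it is not Simbrain functionality.

# Sketch of the two teaching plots mentioned above, using matplotlib.
# `errors` is a per-epoch error log; `unit` is any function mapping a
# 2-D input vector to a single unit's activation (hidden or output).
# Both are assumed to come from whatever training code is in use.
import numpy as np
import matplotlib.pyplot as plt

def plot_error_curve(errors):
    """Continuous error-vs-epoch curve."""
    plt.figure()
    plt.plot(errors)
    plt.xlabel("epoch")
    plt.ylabel("sum-squared error")
    plt.title("Training error")

def plot_unit_in_input_space(unit, steps=50):
    """A single unit's activation plotted over the 2-D input space."""
    xs = np.linspace(0, 1, steps)
    grid = np.array([[unit(np.array([a, b])) for a in xs] for b in xs])
    plt.figure()
    plt.contourf(xs, xs, grid, levels=20)
    plt.colorbar(label="activation")
    plt.xlabel("input 1")
    plt.ylabel("input 2")

# Example usage with the names from the earlier backprop sketch:
# plot_error_curve(sse_history)
# plot_unit_in_input_space(lambda x: float(sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)))
# plt.show()

For XOR, the second plot is the "activation function in input space" view: as training succeeds, the two diagonal corners of the unit square are pulled toward 1 and the other two toward 0.)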