Neuro-Lab: How to evaluate accuracy of NN?
Author: SunriseMan
Creation Date: 11/18/2010 4:33 PM


I'm trying to understand how I can evaluate how accurately my NN predicts output. For example, I'd love to be able to compute something analogous to the correlation coefficient between the NN's predictions and the real-life outcome. I could then compare this result between different NNs trained with the same data to see which inputs, topologies, etc., actually improve predictions.
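For example, something along these lines (Python, with made-up prediction and return values, not output from any real network) would compute a correlation coefficient between a network's predictions and the realized outcomes:

```python
# Sketch: Pearson correlation between NN predictions and actual outcomes.
# All numbers below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predictions = [26.0, 28.25, 30.5, 32.75, 35.0]  # NN Indicator values
actuals     = [-2.0, -0.5, 1.0, 2.5, 4.0]       # realized % returns

r = pearson(predictions, actuals)
print(round(r, 4))  # → 1.0 (the toy data are perfectly linear)
```

A value near +1 or -1 means the indicator ranks outcomes well; near 0 means it carries little predictive information, regardless of its absolute range.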

I'm not sure how to interpret the training error shown on the "train network" tab. Different NNs have very different ranges for the output of the NN indicator, even if they use exactly the same output script. For example, one NN might, when run with out-sample data from a particular stock, give NN Indicator values from 26 to 35, while another might give values from 34 to 36. The latter NN will usually have a lower training error, but I think that lower error could still indicate a worse correlation given the small range of NN output values.
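A toy illustration of this concern (invented numbers, not Neuro-Lab output): a network whose output barely moves can score a lower mean squared error than one that tracks the target's direction, even though it correlates far worse with the outcome.

```python
# Invented example: "flat" predictions hug the mean (low error, poor
# correlation); "tracking" predictions follow the target but with a
# constant offset (higher error, perfect correlation).

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

targets        = [0.5, -0.3, 0.2, -0.4, 0.1]     # realized returns
flat_preds     = [0.02, 0.03, 0.03, 0.02, 0.02]  # narrow-band output
tracking_preds = [1.5, 0.7, 1.2, 0.6, 1.1]       # targets + 1.0 offset

print(f"flat:     mse={mse(flat_preds, targets):.3f}  "
      f"r={pearson(flat_preds, targets):+.2f}")   # mse=0.110  r=-0.17
print(f"tracking: mse={mse(tracking_preds, targets):.3f}  "
      f"r={pearson(tracking_preds, targets):+.2f}")  # mse=1.000  r=+1.00
```

So a lower training error alone doesn't tell you which network would actually rank trades better.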

I've looked, but can't find any reference on how to interpret these data. (The Neuro-Lab help file entry on "Evaluate Performance" says that there are hints in the "Sample Networks" topic, but I don't see any hints there.) Is there additional documentation somewhere, or can anyone provide any helpful suggestions? Thanks!


Do not use the error rates as a real guide to whether the network is useful in trading.

Whilst the built-in charts are useful for assessing the NN score bandings and the measured return, it is best to export your data from WealthLab and analyse it in Excel.

Create a scatter plot of the NN score vs. return for the out-of-sample data set (see attached). This was created in Excel (simple) and shows prediction vs. actual. (It uses different NN software, but it should give you the idea.) The bottom-right and top-left quadrants are accurate directional predictions.
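The quadrant idea can also be roughed out numerically (invented scores and returns below; this sketch assumes a higher score predicts a higher return, so the "accurate" quadrants are the ones where score and return sit on the same side of center — mirror it if your network is oriented the other way, as in the attached plot):

```python
# Sketch: directional hit rate from (NN score, subsequent return) pairs.
# All numbers are made up for illustration.

pairs = [  # (NN score, subsequent % return)
    (28.0, -3.1), (30.5, -1.2), (31.0, 0.4),
    (33.0, 1.8), (34.5, 2.6), (29.0, 0.9),
]

# Center the scores on their mean so quadrants are well defined.
midpoint = sum(s for s, _ in pairs) / len(pairs)

# Score and return on the same side of center → correct directional call.
correct = sum(1 for s, r in pairs if (s - midpoint) * r > 0)
hit_rate = correct / len(pairs)
print(f"directional hit rate: {hit_rate:.0%}")  # → 67%
```

A hit rate meaningfully above 50% on out-of-sample data is what you're looking for; the same loop works on data exported from WealthLab.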


See my post (#3) at

Another assessment technique I use is to add the NN indicator to a panel of a chart, even when the network is not fully trained. Then I can see how well peaks and valleys of the NN indicator correspond to peaks and valleys of the stock price. I do this for a sample of stocks: train a little, check some stocks, train a little more, check again, etc. I usually recompile after each training session to get the latest network, though this may be unnecessary.

Another important piece of information can be gained from the above. As I step from stock to stock, I watch for whether the peaks and valleys of the NN fall near the same dates on different stocks. If they largely coincide, the NN may not be useful, because it will signal clusters of buys and sells, which may not be practical. Finding inputs that differentiate stocks rather than just react to market movements is challenging, in my experience.
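That coincidence check can be sketched numerically as well; here local maxima of the indicator are compared across a few invented series (the symbols and values are made up):

```python
# Sketch: do the NN indicator's peaks fall on the same bars for every
# stock? If so, the network may just be tracking the overall market.

def local_peaks(series):
    """Indices where a value exceeds both of its neighbors."""
    return {i for i in range(1, len(series) - 1)
            if series[i] > series[i - 1] and series[i] > series[i + 1]}

indicator = {  # stock symbol -> NN indicator values per bar (invented)
    "AAA": [1, 3, 2, 4, 6, 5, 2, 3, 1],
    "BBB": [2, 5, 3, 1, 7, 4, 3, 6, 2],
    "CCC": [1, 4, 2, 3, 8, 6, 1, 5, 3],
}

peaks = {sym: local_peaks(vals) for sym, vals in indicator.items()}

# Bars where every stock peaks at once suggest clustered, market-driven signals.
common = set.intersection(*peaks.values())
print(sorted(common))  # → [1, 4, 7]
```

Here every series peaks on the same bars, which is exactly the warning sign described above; a useful network would show peaks spread across different dates for different stocks.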
