Neuro-Lab: the weights in a trained network's XML file
Author: tschwei
Creation Date: 3/9/2017 11:00 PM

tschwei

#1
Hi Len,

I have been adding inputs to the Input Script of my networks, in the hopes of increasing the prediction power of the trained network. I'm now at over 30 inputs, and performance has become a limiting factor on how many more I can add. I suspect some of the inputs are not helping improve the accuracy/prediction power very much, but removing each one in turn and re-training the network is very time-consuming.

I was hoping I could look at the trained network and see what weight it applied to each input, the idea being that those with the lowest weights might be the least important inputs (and those with the highest weights the most important).

I hope that makes sense. So, I decided to find where Neuro-Lab saves the networks on my drive. I found them in the following location:
C:\Users\<User>\AppData\Roaming\Fidelity Investments\WealthLabPro\1.0.0.0\Data\NeuroLab

When I opened one up, I noticed it had more or less the following format:
<InputScript>
<OutputScript>
<MinAndMax>
<DataSelection>
<Network>
<Topology>

Underneath the <Topology> element there appears to be data on each layer in the network, each neuron in the layer, and each synapse in the neuron, and each synapse (which I was happy to see) contains a <Weight> element.

For example, the first neuron in the first layer in my network is this:

<Layers>
  <Layer>
    <Neurons>
      <Neuron>
        <Bias>
          <Weight>0</Weight>
        </Bias>
        <NeuronId>
          <LayerNumber>0</LayerNumber>
          <NeuronNumber>0</NeuronNumber>
        </NeuronId>
        <Synapses>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </TargetNeuronId>
            <Weight>0.41788479038557969</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>1</NeuronNumber>
            </TargetNeuronId>
            <Weight>-0.21194759657743661</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>2</NeuronNumber>
            </TargetNeuronId>
            <Weight>-0.94150249053897084</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>3</NeuronNumber>
            </TargetNeuronId>
            <Weight>-0.443499964401986</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>4</NeuronNumber>
            </TargetNeuronId>
            <Weight>0.955038972852294</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>5</NeuronNumber>
            </TargetNeuronId>
            <Weight>-0.4273352622469242</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>6</NeuronNumber>
            </TargetNeuronId>
            <Weight>-1.8834088895003829</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>7</NeuronNumber>
            </TargetNeuronId>
            <Weight>0.29801942425274197</Weight>
          </Synapsis>
          <Synapsis>
            <SourceNeuronId>
              <LayerNumber>0</LayerNumber>
              <NeuronNumber>0</NeuronNumber>
            </SourceNeuronId>
            <TargetNeuronId>
              <LayerNumber>1</LayerNumber>
              <NeuronNumber>8</NeuronNumber>
            </TargetNeuronId>
            <Weight>0.407942087089083</Weight>
          </Synapsis>
        </Synapses>
      </Neuron>
      <Neuron>
        etc....

Now to my question:
The values in the <Weight> elements... are these the "relative importance" of each synapse's connection to the next lower layer?

If so, would it be reasonable to assume that summing the absolute values of these weights for a neuron gives a total weight for that neuron/input? Using this technique, the first input would have the following "total weight":

0.41788479038557969 +
0.21194759657743661 +
0.94150249053897084 +
0.443499964401986 +
0.955038972852294 +
0.4273352622469242 +
1.8834088895003829 +
0.29801942425274197 +
0.407942087089083
= 5.986579478

If I did this calculation for all of the neurons in the first layer (i.e., for all of the inputs), would I have the relative weights for all of the inputs?

And finally, could I use these weights to decide which inputs are the most and least valuable in the network?

My goal is to try to figure out which of the 30 inputs I am using in my network are the most and least valuable (in the hopes of trimming down the ones that aren't helping much).

Doing all this for one of my networks did take a while, but it was faster (and I suspect more accurate) than simply removing an input, re-training, and testing the network (then repeating for each of the 30 inputs!).
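
In case it saves someone the manual work, the tally itself can be automated with a short script along these lines (a rough sketch only, using Python's standard xml.etree.ElementTree; "MyNetwork.xml" is a placeholder for your own network file, and the element names are the ones from the snippet above):

import xml.etree.ElementTree as ET

tree = ET.parse("MyNetwork.xml")   # placeholder path to a saved network file
totals = {}                        # input NeuronNumber -> sum of |Weight|

for neuron in tree.getroot().iter("Neuron"):
    neuron_id = neuron.find("NeuronId")
    if neuron_id is None or neuron_id.findtext("LayerNumber") != "0":
        continue                   # only look at input-layer (layer 0) neurons
    n = int(neuron_id.findtext("NeuronNumber"))
    totals[n] = sum(abs(float(s.findtext("Weight")))
                    for s in neuron.iter("Synapsis"))

# Largest total first; by the reasoning above, larger = more influential.
for n, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print("Input %d: %.6f" % (n, total))

For the example neuron above, this should print roughly 5.986579 for input 0, matching the hand tally.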

If there is an easier way to figure out which inputs are most/least important I would greatly appreciate learning about it!

As usual, thanks for all the help!

Tim

LenMoz

#2
QUOTE:
an easier way ...
Not that I know of. Further, to get it exactly right, you have to continue that process all the way to the output node. The good news? The first set of weights should be a good indication and may be sufficient for your purpose. Getting rid of an input having all small weights should have little effect on the network's accuracy.
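
For what it's worth, "continuing the process to the output node" could look roughly like this for a single hidden layer and a single output neuron (just a sketch of the idea, not Neuro-Lab code, and the numbers are made up): scale each input-to-hidden weight by the absolute weight of that hidden neuron's connection to the output, then total the absolute contributions per input.

def input_importance(w, v):
    # w[i][j] = weight from input i to hidden neuron j
    # v[j]    = weight from hidden neuron j to the output neuron
    # Scale each first-layer weight by its hidden neuron's pull on the
    # output, then total the absolute contributions per input.
    return [sum(abs(wij) * abs(vj) for wij, vj in zip(row, v)) for row in w]

# made-up example: two inputs, three hidden neurons
w = [[0.42, -0.21, -0.94],
     [0.10,  0.05, -0.02]]
v = [0.8, -0.3, 0.5]
print(input_importance(w, v))   # larger value = more influential input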

hdimon

#3


Please help me solve the following problem. The database I need is shown on the screen form under Tools > Neuro-Lab > Select Training Data, in the Training Set. How can I serve this database on a disk?

Eugene

#4
Not sure what "serve" means, but the location of the data is given in post #1. In your case:

C:\Users\<User>\AppData\Roaming\Fidelity Investments\WealthLabDev\1.0.0.0\Data\NeuroLab\<network name>.xml

But the Training Set is not stored with the file.

hdimon

#5
How can I store the Training Set that I see on the screen along with the file?

Eugene

#6
You can't.