Neuro-Lab compiles the strategy excessively in MSB / Optimization mode
Author: LenMoz
Creation Date: 8/16/2016 6:08 PM

LenMoz

#1
If I run Task Manager (Windows 10) while running an optimization, the "Background Process" category includes "Visual C# Command Line Compiler". This line appears for 3 seconds, then disappears for 3 seconds, then appears, etc. This oscillation continues for the duration of the optimization run.

So, two questions...
1. What is the purpose of this? I'm not modifying the strategy while the optimization runs.
2. If it is not necessary, can the strategy process recognize that it is running under optimization and gain some speed by bypassing the compile step (if that's what it is)?

Edit: The 3-second timing is while running a 10-year backtest on 145 symbols with a 500-line strategy. It may be hard to see the phenomenon on a smaller backtest.

Eugene

#2

LenMoz

#3
Eugene!!!!!! And you wonder why I've stopped posting on these boards.

I'm making a suggestion to save time on an optimization run. This is a topic of interest to several users. Now do your job and answer the questions.

Eugene

#4
QUOTE:
Eugene!!!!!! And you wonder why I've stopped posting on these boards.

I'm making a suggestion to save time on an optimization run. This is a topic of interest to several users. Now do your job and answer the questions.


You're barking up the wrong tree, Len. I'm not responsible for developing the main Wealth-Lab application and have never had access to its source code. But I trust the Fidelity VP of Product Development who designed Wealth-Lab's optimizer, so I tried to informally suggest that there must be a solid reason behind something that doesn't appear broken. When it becomes a business-critical issue rather than some "topic of interest", I'll step up and certainly do my job.

LenMoz

#5
Can you refer me to the documentation (SLA) between you and Fidelity so I know what is MS123's and what is Fidelity's?

Eugene

#6
Our third-party company, MS123 LLC, runs the wealth-lab.com website and is licensed by Fidelity to support and resell Wealth-Lab products to international customers. We are not involved in preparing the Fidelity Data Providers, nor are we responsible for developing the main Wealth-Lab application. We also do not determine what goes into the product. Among other activities, MS123 takes care of the website, support, and Extension development, and acts as an analyst and facilitator to submit problems to Fidelity.

LenMoz

#7
You italicized "problems". My problem is that optimization runs too slowly. Others in these forums have seen this as a problem as well. I included a suggestion that may improve performance. So, act as an analyst and facilitate.

Eugene

#8
Optimization may not perform as expected for a different reason (which could be determined by the developer with a debugger). But "slowly" is akin to the infamous "it doesn't work" (just as useful and descriptive), yet even worse for being subjective.

What is "too slowly" exactly? Compared to a competitor's product? To a smaller backtest? Or does it get progressively slower as it runs? Or maybe it's some particular piece of code that slows down, whereas the plain vanilla "Moving Average Crossover" is fine in this respect? Etc.

In other words, please help me help you by describing your problem clearly. Give me some facts to reproduce and submit a bug report. Thanks.

vk

#9
Hi LenMoz,
Thank you for pointing out this interesting observation. I will bring this to the attention of the right people, and hopefully they either have an explanation and good reason for it or they will put it on their list for future improvements. Right now I would not know why this is happening. Once again, thank you for pointing it out, and I will get back to you as soon as I have an answer.

VK

LenMoz

#10
I'm simply asking someone (at Fidelity) to look at the optimizer host code to see whether needless compiles are being done, as evidenced by Task Manager. If they are, change the code to compile only once at the start of the optimization. It has nothing to do with competitors, or backtest size, or any particular piece of code. You could simply refer Fidelity to this thread.

Eugene

#11
Len, before pointing the finger at csc.exe, keep in mind that there may always be other reasons, like GC or a bug like QC 55091.

Volker, in addition to knowing "why" wouldn't it be necessary to know "what" exactly is happening? ;) Running Windows 10 (like OP) I've been unable to reproduce any csc.exe popping up on the Background Processes / Resource Monitor while doing an Exhaustive optimization to start with.

LenMoz

#12
Another reason that occurs to me for a compile is that the strategy includes a neural network. Perhaps it's compiling the neural network script?

EDIT: It may not be related to optimization, but rather Neuro-Lab. The same phenomenon occurs simply doing a Multiple-Symbol Backtest.

Eugene

#13
Good catch, Len. NL may be compiling its scripts/indicators during optimizations. We'll have to look into it and determine whether it's doing its compilations excessively or whether this is required.

Eugene

#14
Seems like NL must be compiling the various scripts (input, output, indicator) as part of its script execution workload.
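The cost of recompiling per symbol can be shown in miniature with Python's built-in compile()/exec(). This is only an analogy, not the Wealth-Lab code: Neuro-Lab's scripts are C# compiled by csc.exe, and the "script" text below is invented for the demo.

```python
# A stand-in "indicator script" (hypothetical; the real Neuro-Lab
# scripts are C# source compiled by csc.exe).
SCRIPT = "result = sum(p * w for p, w in zip(prices, weights))"

prices = [float(i % 50) for i in range(500)]
weights = [0.01] * 500
SYMBOLS = 145  # one evaluation per symbol, as in a multi-symbol backtest

def run_compile_each_time():
    result = 0.0
    for _ in range(SYMBOLS):
        code = compile(SCRIPT, "<indicator>", "exec")  # recompiled per symbol
        ns = {"prices": prices, "weights": weights}
        exec(code, ns)
        result = ns["result"]
    return result

def run_compile_once():
    code = compile(SCRIPT, "<indicator>", "exec")      # compiled a single time
    result = 0.0
    for _ in range(SYMBOLS):
        ns = {"prices": prices, "weights": weights}
        exec(code, ns)
        result = ns["result"]
    return result

# Both variants produce the same value; only the compile count differs.
assert run_compile_each_time() == run_compile_once()
```

The signals (here, the computed value) are identical either way; hoisting the compile out of the per-symbol loop changes only how much redundant work is done.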

Cone

#15
In summary, does the "Visual C# Command Line Compiler" observation occur only for optimizations of strategies that employ an NN?

LenMoz

#16
Cone,

That seems to be the case. I see it a lot because I have very few strategies that don't invoke NNIndicator.Series / use a neural network.

Edit: I ran a non-NN strategy and did not see the compile.

(This thread may be mistitled)

Eugene

#17
QUOTE:
(This thread may be mistitled)

Added a mention of Neuro-Lab to reflect your findings.

LenMoz

#18
QUOTE:
Added a mention of Neuro-Lab to reflect your findings

"Optimization" in the title may not be needed. I think we'll find that it has no role in this. A multi-symbol backtest is sufficient to invoke the compiler multiple times. I assumed "optimization" before running the later tests. (Edit) Possibly "MSB seems to compile the strategy a lot when Neuro-Lab is used"?

Eugene

#19
QUOTE:
(Edit)Possibly "MSB seems to compile the strategy a lot when Neuro-Lab is used"?

But that doesn't exclude optimization as a likely scenario. Therefore the "Optimization / MSB..." in the new title.

QUOTE:
Multi-symbol backtest is sufficient to invoke the compiler multiple times.

Right. Wealth-Lab executes the strategy (including Neuro-Lab's scripts) on each symbol sequentially, and then applies the position sizing overlay.

LenMoz

#20
Hi,

There is a ***** 20-to-1 speed improvement ***** to be had here. I think a redesign and implementation effort to have NNIndicator parse the XML and compile only once is worth pursuing, to benefit all users of Neuro-Lab.

Using NNIndicator, a 10-year backtest on 145 symbols takes 64 seconds. The NN uses 11 input DataSeries and a single hidden layer having 4 nodes. The very same backtest, using my own procedure that doesn't compile at all, takes 3 seconds, a 21-to-1 improvement. It parses the Neuro-Lab XML only once; the signals produced are identical.

Here's the design I used. I created a class having a data structure for the NN topology and weights, and two major procedures. The first, ParseNetworkXml, builds the data structure. The second, NeuroCalc, calculates the NNIndicator DataSeries. The messy part of my solution is that it requires copying the Neuro-Lab Input Script into the strategy to build the NN's input DataSeries. Calls to neuroLab.Input are replaced by inputs.Add, where "inputs" is a List of DataSeries. Edit: I forgot to mention that an MSB only parses the XML once, on the first symbol, and stores the result as a Global.
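A minimal Python sketch of the parse-once pattern Len describes. The class and method names echo his post, but the XML layout, the linear activations, and the cache key are all invented for illustration; the actual C# code is not shown in the thread.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML layout -- the real Neuro-Lab file format is not shown
# in the thread, so this structure is invented for illustration.
NETWORK_XML = """
<network>
  <layer><node>0.5 -0.25</node><node>0.1 0.4</node></layer>
  <layer><node>1.0 -1.0</node></layer>
</network>
"""

_globals = {}  # plays the role of Wealth-Lab's global storage

class NeuralModel:
    """Parses the network XML once; the weight arrays are reused per symbol."""
    def __init__(self, xml_text):
        root = ET.fromstring(xml_text)
        # weights[layer][node] -> list of that node's input weights
        self.weights = [
            [[float(w) for w in node.text.split()] for node in layer]
            for layer in root.findall("layer")
        ]

    def neuro_calc(self, inputs):
        """Feed-forward pass over the parsed arrays (linear activations
        for brevity; the real activation function isn't documented here)."""
        values = inputs
        for layer in self.weights:
            values = [sum(w * v for w, v in zip(node, values)) for node in layer]
        return values[0]

def get_model(key, xml_text):
    # Parse only on the first symbol; later symbols reuse the cached model.
    if key not in _globals:
        _globals[key] = NeuralModel(xml_text)
    return _globals[key]
```

In a multi-symbol backtest, every symbol after the first gets the already-parsed model back from `get_model`, which is the whole point of the design.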

Sidebar: Did you know that the weights are in the XML twice, with identical values?

Len

LenMoz

#21
I called Fidelity today to try to raise some awareness of this performance issue. The person I spoke to didn't give me much hope, other than indicating that this forum is followed by their developers. So, Fidelity developers, any reaction to this? The time in the strategy (i.e. pre-Visualizers) running a multi-symbol backtest truly shows a 20 to 1 improvement when the XML parsing and input script compiling is done only once.

Eugene

#22
While the speed improvement you attained is really impressive, "copying the NeuroLab Input Script into the strategy to build the NN's input DataSeries" sounds like an added modification that has to be performed on a per-strategy basis. Is this true? If so, then from both a usability and a compatibility standpoint it's a tough call for the commercial product. Disclaimer: I'm not the NL developer.

LenMoz

#23
My solution, designed as a proof of concept, does indeed require strategy-by-strategy hand-tailoring. It would not be the desired solution. The desired solution would require no change to strategy code. Rather, NNIndicator.Series would have a mechanism to detect whether it had already built the NN data structures and compiled the input script in this run, so as to build only once and reuse them, rather than rebuilding on each symbol as it does now. I don't have the code, so I can't design the final solution. Does this make sense? Fidelity, are you in there? Hello???

Edit: It's not that it's so terribly difficult. The two methods that parse the network and calculate the DataSeries are together only 350 lines of code. For my purposes, I built a free-standing .dll, so the hand-tailoring in each strategy is rather simple.
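One way the "built in this run" detection could look, sketched in Python with hypothetical names (the real NNIndicator internals are not public): cache the built structures keyed by a run identifier, so the first symbol of a backtest builds and every later symbol reuses, while the next run rebuilds.

```python
class PerRunCache:
    """Caches an expensive build for the duration of one backtest run."""
    def __init__(self):
        self._run_id = None
        self._value = None
        self.builds = 0               # instrumentation for the sketch

    def get(self, run_id, build):
        if run_id != self._run_id:    # first symbol of a new run: build
            self._value = build()
            self._run_id = run_id
            self.builds += 1
        return self._value            # later symbols of the run: reuse

cache = PerRunCache()
for symbol in ["AAPL", "MSFT", "IBM"]:   # one multi-symbol backtest
    model = cache.get(run_id=1, build=lambda: {"weights": [0.5]})
assert cache.builds == 1                  # built once, reused twice
```

Keying on the run (rather than caching forever) keeps the behavior correct when the strategy or network is edited between runs.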

LenMoz

#24
Any progress on this? Any response at all?

IMHO, the current design is so unnecessarily slow as to make Neuro-Lab infeasible for any meaningful purpose. I'm doing optimizations using my solution that I could not even consider before.

Len

vk

#25
Hi Len,
Neuro-Lab was a product developed by MS123, the Wealth-Lab support team that you probably mostly communicate with. The person who developed Neuro-Lab is not working for us anymore; in fact, he stopped working just a few weeks before you discovered the "bug". Hence, getting it fixed would be a tremendous financial effort. As far as we know, you were the only one "discovering" and/or reporting it. I am not in a position to talk about Fidelity's plans to release WL7; however, if it materializes there should be a new NL, which should definitely take this into account.
Finally, I reached out to the developer to get an estimate on the fix; if it is within scope, I will get it done. Does that sound ok?

LenMoz

#26
QUOTE:
As far as we know you were the only one "discovering" it and/or reporting it.

That's because the underlying design isn't published. Who would guess at a compile for every MSB symbol? I've used NeuroLab since 2013 and always thought the slowness was because of the time required to build the input DataSeries. I found the real (compile) reason by accident. Unfortunately, NeuroLab doesn't seem to have a very big user community; no one has lent support to my request.


Carova

#27
QUOTE:
Unfortunately, NeuroLab doesn't seem to have a very big user community; no one has lent support to my request.


Maybe because it is so slow?? I tried it and decided it was way too slow for my needs.

Vince

LenMoz

#28
Any progress on this?

Cone

#29
Try version 1.0.3.0 available in extension updates now.

We were able to eliminate the unnecessary compiles, but didn't achieve the order of speed improvement that you did with your solution. Nonetheless, Neuro-Lab operates more than 200% faster now, which is definitely a big improvement!

LenMoz

#30
I ran one of my strategies using my solution and yours. Prior to 1.0.3.0, this run would have taken about 65 seconds including visualizers. Using 1.0.3.0, pre-visualizers, the run took 16 seconds. Using my solution, the run took 3 seconds (also pre-visualizers). I wish I had captured the pre-visualizer time before installing 1.0.3.0. Further, I did not see compiler executions in 1.0.3.0.

Bottom line, thanks for the update. There is still room for improvement.

How many times is the XML parsed? I parse only once, on the first symbol, and store the result in global storage. That could be a difference.

Eugene

#31
According to the developer, caching requests to the .XML file resulted in only a minimal speed improvement of ~10%. Since this could be a breaking change, he considered it not worth the trouble.

LenMoz

#32
I have an object, NeuralModel, that contains data structures and methods to replicate NNIndicator.Series functionality. It is instantiated on the first symbol. The constructor parses the XML into arrays. The object is stored as a Global. Method NeuralCalc constructs a DataSeries equivalent to NNIndicator.Series.

So, the top of my strategy looks like this...
CODE:
Please log in to see this code.



The constructor builds these data structures. My solution handles networks with up to two hidden layers only.
CODE:
Please log in to see this code.

IMHO the calls to parse XML are your (major) bottleneck.
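A toy version of the feed-forward calculation such a NeuralCalc might perform, sketched in Python. The logistic activation and the weight layout are assumptions; the real code is login-gated in this thread. As in Len's description, the structure supports one or two hidden layers plus the output layer.

```python
import math

def sigmoid(x):
    """Logistic activation (an assumption -- the actual NL activation
    function is not documented in this thread)."""
    return 1.0 / (1.0 + math.exp(-x))

def neural_calc(inputs, layers):
    """Feed-forward pass: `layers` is a list of weight matrices,
    one row of input weights per node in each layer."""
    values = inputs
    for layer in layers:
        values = [sigmoid(sum(w * v for w, v in zip(row, values)))
                  for row in layer]
    return values[0]

# 2 inputs -> 2 hidden nodes -> 1 output (weights invented for the demo)
hidden = [[1.0, 1.0], [2.0, 0.0]]
output = [[1.0, 1.0]]
score = neural_calc([1.0, -1.0], [hidden, output])
assert 0.0 < score < 1.0
```

Once the weights sit in plain arrays like this, evaluating a bar is just multiply-accumulate work, which is why the remaining cost is dominated by whatever still re-reads the XML.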

superticker

#33
QUOTE:
How many times is the XML parsed? I parse only once, on the first symbol, and store the result in global storage.
I'm not sure if the "parsing" time is what's slowing it down. My guess is the cause of the slowness may be in the creation and destruction of all the data structures that follow the parsing. In other words, Garbage Collection (GC).

Try examining the Wealth-Lab process with Process Explorer (from Sysinternals, now owned by Microsoft). Take a look at the .NET Framework counters for Generation 0, 1, and 2 heap activity. If GC activity is over 5%, you have GC problems. From a GC perspective, you're always better off allocating the data structures once and reusing them if possible; tearing them down, GCing, then recreating them again is really slow.
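The allocate-once-and-reuse pattern superticker recommends, in a minimal Python sketch. Python's GC differs from .NET's, so this mirrors the pattern rather than the measured cost; the sizes are taken from Len's 10-year, 145-symbol example.

```python
N_BARS = 2520            # roughly 10 years of daily bars
SYMBOLS = 145

def recreate_each_symbol():
    total = 0.0
    for _ in range(SYMBOLS):
        buf = [0.0] * N_BARS          # fresh allocation, becomes garbage
        for i in range(N_BARS):
            buf[i] = float(i)
        total += buf[-1]
    return total

def reuse_buffer():
    total = 0.0
    buf = [0.0] * N_BARS              # allocated once, overwritten in place
    for _ in range(SYMBOLS):
        for i in range(N_BARS):
            buf[i] = float(i)
        total += buf[-1]
    return total

# Identical results; the second variant simply generates no per-symbol garbage.
assert recreate_each_symbol() == reuse_buffer()
```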

QUOTE:
According to the developer, caching requests to the .XML file resulted only in a minimum speed improvement of ~10%.
And that may be true if you're only modeling 2 or 3 parameters and have a really large processor cache. But if you're modeling 14 parameters, such that the memory footprint will no longer fit in the on-chip processor cache, that will make a factor-of-5 difference.

When we (computer engineers) design caching systems, we allow a factor-of-5 speed difference between each tier (L1, L2, L3) of the caching architecture. It's part of the "parameter funneling model" of the system design to maximize gain while minimizing chip real estate. So if the GC gets a cache miss on the L2 cache and that memory access is deferred to the L3 cache or to off-chip RAM instead, that's a speed hit of 5× (L3) or 25× (off-chip RAM), respectively. Bottom line: as your memory footprint gets bigger, the cache misses really slow you down big time.
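The factor-of-5 tier costs can be turned into a back-of-envelope average-access-time calculation. The hit rates below are invented for illustration; only the 5× cost ratio comes from the post.

```python
# Relative cost per access, using the factor-of-5 ratio between tiers.
L1, L2, L3, RAM = 1.0, 5.0, 25.0, 125.0

def effective_access(h1, h2, h3):
    """Average cost when each tier catches the misses of the one above."""
    miss1 = 1.0 - h1
    miss2 = miss1 * (1.0 - h2)
    miss3 = miss2 * (1.0 - h3)
    return h1 * L1 + miss1 * h2 * L2 + miss2 * h3 * L3 + miss3 * RAM

small = effective_access(0.95, 0.90, 0.90)  # working set fits the caches
large = effective_access(0.50, 0.50, 0.50)  # footprint blown up by garbage

assert large > 5 * small   # bigger footprint -> several-fold slowdown
```

Even these rough numbers show how a garbage-inflated working set turns a ~1.35-cycle average access into a ~20-cycle one.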

QUOTE:
Unfortunately, NeuroLab doesn't seem to have a very big user community;...
QUOTE:
Maybe because it is so slow? I tried it and decided it was way too slow ... Vince
I agree. If it's too slow, no one will use it.

The other problem is the lack of experience users have had with neural networks. Neural nets are ideal for fitting model parameters for nonlinear, discontinuous, fuzzy systems like we have for stock trading. But how many WL users have had a graduate level course in neural networks (either in EE or computer science) to know that? That's your biggest problem.

What Fidelity should do is host a bi-annual symposium for Wealth-Lab users. The sessions at such a symposium can then cover some of these advanced topics. I would host a Wealth-Lab symposium in parallel with an established symposium about stock investing/trading so you get enough critical mass (i.e. attendance) to make it successful.... And it would be nice to meet some of the developers.

LenMoz

#34
Thanks for your insightful post, superticker. Since the compiles have been removed, I think there is a high probability that XML processing is the biggest remaining culprit. Through a ticket, I've provided MS123/Fidelity with the C# project that builds my object, plus a strategy script that compares the timing of NNIndicator.Series to my solution.

QUOTE:
What Fidelity should do is host a bi-annual symposium for Wealth-Lab users.
QUOTE:
And it would be nice to meet some of the developers.

I couldn't agree more! I wouldn't miss it.

Cone

#35
At the Tri-annual Traders' Expo (next one in Las Vegas in 3 weeks) you can meet the developer(s). We've given "Deep Dive" classes at the expo for years, but it turns out to be difficult to attract advanced users to attend and always ends up being closer to a Wealth-Lab 101 class.

LenMoz

#36
I've started a "WealthLab at Trade Shows" thread so these off-topic posts might be found.