Optimization randomly showing 0 trades for some combinations of strategy parameters
Author: Panache
Creation Date: 11/12/2018 11:45 PM

Panache

#1
I'm having an issue with optimizations randomly returning 0 trades for some combinations of parameters, even though when I run the strategy with those parameter settings, it shows that there were trades. (The strategy uses priorities, so the results should be the same every time.) While I'm sure it isn't truly random, I'm still trying to isolate the source of the problem.

A while ago, I posted something to the effect that I periodically had to shut down and restart Wealth-Lab Pro after doing many optimizations. I recall Cone chiming in and making a reference to the garbage collector. Since garbage collection is a somewhat random event, I'd like to reread that thread. Unfortunately, I can't find it by searching internally here or externally through Google or Bing. If you have a better way of searching, I would appreciate it if you could provide me with a link to that post.

Thank you.




superticker

#2
The attachment shows the Firefox setup for performing a Google search over the Wealth-Lab forum. The search definition string in the attachment is
CODE:
Please log in to see this code.

To perform the search, type "wls searchString" in the Firefox URL bar.

----
If garbage collection is affecting optimization results, then there's a serious bug somewhere. Garbage collection should always be transparent to any computed results. I would strip out any code that tries to control heap management, which includes any code attempting to control garbage collection or locking variables in memory.
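For illustration, the kind of heap-management code being warned against looks like this. This is a hedged sketch using standard .NET calls; none of it is from the strategy in question:

```csharp
using System;
using System.Runtime;

class GcTuningToRemove
{
    static void Main()
    {
        // Forcing a blocking full collection mid-strategy:
        GC.Collect();
        GC.WaitForPendingFinalizers();

        // Overriding the runtime's garbage-collection latency mode:
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

        // Pinning objects via GCHandle.Alloc(obj, GCHandleType.Pinned) is
        // another example of "locking variables in memory".
        Console.WriteLine("GC latency mode: " + GCSettings.LatencyMode);
    }
}
```

None of these calls should ever change computed results; if removing them does, that points to a genuine bug elsewhere.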

Eugene

#3
I couldn't find such a forum thread either. It looks like a side effect of your workflow, i.e. running multiple optimizations (not recommended) with concurrent garbage collection disabled. Also, what optimization method(s) are you using when you get this?

Panache

#4
It appears that I was just pushing my CPU too hard. The problem only happens when the number of simultaneous optimizations exceeds the number of logical cores in my CPU.
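In .NET, that boundary can be checked with Environment.ProcessorCount, which reports the number of logical cores. A hedged sketch (the figure of 8 planned optimizations is made up):

```csharp
using System;

class OptimizationCap
{
    static void Main()
    {
        // Logical cores, including hyperthreads.
        int logicalCores = Environment.ProcessorCount;

        // Hypothetical number of optimizations queued to run at once.
        int planned = 8;

        // Running more simultaneous optimizations than logical cores is
        // the condition under which the 0-trade results were observed.
        int toRun = Math.Min(planned, logicalCores);
        Console.WriteLine($"Running {toRun} of {planned} optimizations at once.");
    }
}
```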

I'm benchmarking various numbers of simultaneous optimizations with and without background garbage collection. I should have enough data to post in a day or two.

superticker

#5
QUOTE:
The [variable corruption] problem only happens when the number of simultaneous optimizations exceeds the number of logical cores in my CPU.
There may be a thread-safety problem in the .NET framework that shows up when switching between cores, or Wealth-Lab and its optimizers may not be using thread-safe data types. Which optimizer is involved? Does the same problem occur with the other optimizers?

If you can precisely characterize when the variable corruption occurs with the VS debugger, you can report the problem to Fidelity or Microsoft.

I've run the Particle Swarm optimizer on different strategies simultaneously without noticing a problem. But there could still be a hidden problem somewhere.

One thing to consider is that the Wealth-Lab indicator API requires member functions to be "static" and therefore stateless; otherwise, one Chart window will clobber the "shared" static indicator state variables of another. If your indicator must have unique state variables, they need to be either saved in the indicator cache or saved by reference (so each Chart instance has a unique copy) by creating a special indicator member function that's not static, unlike .Series and .Value. I've done the latter, and it works well with multiple Chart windows running the same strategy. With stateful indicators, you can also expose indicator properties to inspect those states.
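The clobbering effect can be shown with a minimal sketch. This is plain C#, not the actual WL indicator API; the class and member names are made up for illustration:

```csharp
using System;

// Why static state is clobbered when two chart windows run the same
// indicator concurrently, while per-instance state survives.
public class Indicator
{
    static double sharedState;   // one copy shared by every chart window
    double instanceState;        // one copy per indicator instance

    public void Update(double value)
    {
        sharedState = value;     // a second window overwrites the first's value
        instanceState = value;   // each instance keeps its own value
    }

    public double Shared => sharedState;
    public double Mine => instanceState;
}

class Demo
{
    static void Main()
    {
        var chart1 = new Indicator();
        var chart2 = new Indicator();
        chart1.Update(1.0);
        chart2.Update(2.0);               // clobbers the static field chart1 just wrote
        Console.WriteLine(chart1.Shared); // prints 2 -- chart1's "state" was overwritten
        Console.WriteLine(chart1.Mine);   // prints 1 -- per-instance state survives
    }
}
```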

Panache

#6
QUOTE:
Which optimizer is involved? Does the same problem occur with the other optimizers?

I get the problem with both my Abacus and the Exhaustive optimizer. I know mine does not use threading.

However, the issue is somewhat moot, because I have since found that if I have more simultaneous optimizations running than the number of logical cores, the optimizations run so much slower on my machine that it isn't worth doing. I've been doing some benchmarking on simultaneous optimizations over the last couple of days, and I'll post my results later this week.

QUOTE:
I've ran the Particle Swarm optimizer on different strategies simultaneously without noticing a problem.

As I was sorting some of the results, a NaN for Profit/Bar jumped out at me. It didn't make sense for the strategy I was optimizing, which is why I started looking deeper. The weird thing was that if the parameter went from, say, 1 to 100 step 1, 1-20 might run fine, 21 might generate the error, but 22-100 would run fine.

Panache

#7
Eugene,

I don't think there is anything in the Exhaustive to Local Maxima optimizer that is contributing to this problem, but I would appreciate it if you would take a quick look at the code again just in case. If there is anything that would make it more efficient, I'd be happy to spend some time fixing the optimizer.

Eugene

#8
Nothing striking appears in the code on the surface. Since nobody else is having this issue, I guess that's because they aren't running more parallel optimizations than they have physical CPU cores. Let's keep an eye on it.

Panache

#9
Thank you. I just felt bad that I might have released something to the community that had a bug in it.

abegy

#10
Sorry for my late reply.

For your information, I have seen the same issue before. But since I haven't pinpointed the conditions under which it appears, I haven't reported it.

So far, when this issue appears, I restart my computer and retry the optimization.

Eugene

#11
It's a good idea to start fresh after running long-running or multiple optimizations on large sets of data.

superticker

#12
QUOTE:
... [restart WL] after running long-running or multiple optimizations on large sets of data.
Part of the problem is WL doesn't release indicator cache space by default. Try adding the following line somewhere after the trading loop to reduce the problem so you don't need to restart WL.
CODE:
Please log in to see this code.

If I run the same optimization twice, I get slightly different results on some stocks even when using the LastPosition.Priority feature. Some of that is because the indicators (e.g. ATR) I'm using are unstable. But some of it may be the WL optimizer manager itself: it may occasionally pass poor seed data (initial bounds) to the WL optimizers. It's not worth my time to debug this nondeterministic problem (it would require writing customized debugger code for the optimizer). I just rerun the optimization on the stocks that optimized poorly the last time to get better results on the next pass. I do this as a routine practice. About one in twenty stocks can optimize poorly on the first try.

NOTE: I typically use the Particle Swarm optimizer, but I don't think the nondeterministic optimizing behavior is in the optimizer itself. It's something else.

---
On a numerical analysis note, the optimizer is looking for a maximum on a multidimensional surface with many local maxima. So it's not too surprising that the optimizer can slip into one maximum or another on different runs when using unstable WL indicators. Without an exhaustive search evaluating all maxima (which the optimizer isn't doing), you're "somewhat" randomly picking one local maximum over another, depending on which picking algorithm the optimizer uses.

To combat this many-local-maxima problem, minimize the number of parameters in your optimization. And try to employ indicators that produce smooth (stable) solution surfaces in noisy environments.
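The many-local-maxima point can be illustrated with a tiny one-dimensional sketch: a bumpy "profit surface" where hill-climbing from different starting parameters lands on different peaks. The surface function is made up for illustration:

```csharp
using System;

public class LocalMaximaDemo
{
    // A bumpy 1-D "profit surface": a smooth trend plus a high-frequency ripple.
    public static double Surface(double x) => Math.Sin(x) + 0.3 * Math.Sin(5 * x);

    // Naive hill-climbing: step toward whichever neighbor is higher
    // until neither neighbor improves on the current point.
    public static double HillClimb(double x, double step = 0.01)
    {
        while (Surface(x + step) > Surface(x)) x += step;
        while (Surface(x - step) > Surface(x)) x -= step;
        return x;
    }

    static void Main()
    {
        // Different starting parameters can settle on different local peaks,
        // so reruns with slightly different seeds give different "optima".
        Console.WriteLine(HillClimb(0.5));
        Console.WriteLine(HillClimb(3.0));
    }
}
```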

abegy

#13
Thank you superticker. I will do that.
Eugene, why is there nothing in the wiki about this?

Eugene

#14
Bars.Cache.Clear is documented in the QuickRef.

Panache

#15
QUOTE:
when this issue appears, I restart my computer and retry the optimization

I haven't found that to be necessary. Merely restarting Wealth-Lab solves the problem for me. That made me think there might be a memory leak, but I don't see any signs of one.

I have found that using a non-static class library (so each instance of the strategy uses its own instance of the class library) reduces the frequency of the problem, but does not eliminate it.

QUOTE:
Part of the problem is WL doesn't release indicator cache space by default.


I'm running some optimizations with your suggested code, and I'll report back on how much it helps me.


superticker

#16
QUOTE:
I have found that using a non-static class library (so each instance of the strategy uses its own instance of the class library) reduces the frequency of the problem,
As I mentioned in post #5, if there are state variables in the class (including an indicator class), then you have to employ non-static member functions when there are multiple instances of a given strategy running.

But for indicator classes, the WL API doesn't make provisions for internal (field) state variables. So you need to define a special non-static member function to use these state variables. Naturally, the .Series and .Value member functions should remain static to follow the WL API.

The non-static functions will create more garbage collection overhead, of course. So use them only when you need them.

---
A remaining issue: is indicator cache management thread-safe? Honestly, I don't know whether WL's caching implementation employs .NET thread-safe data types. Perhaps someone familiar with the internals of WL could comment on that.

Thread-safe data types tie the hands of the Windows scheduler, which I don't like. But if you're running multiple instances of an indicator or strategy, there may not be a choice.
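For what a thread-safe cache could look like, here is a sketch using a .NET concurrent collection. Whether WL's actual cache does anything like this is unknown; the class and key names are made up:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A thread-safe indicator-cache pattern: GetOrAdd makes the lookup-or-insert
// atomic, so parallel strategies can't lose or duplicate cache entries.
public static class IndicatorCache
{
    static readonly ConcurrentDictionary<string, double[]> cache =
        new ConcurrentDictionary<string, double[]>();

    public static double[] GetOrCompute(string key, Func<double[]> compute)
        => cache.GetOrAdd(key, _ => compute());
}

class Demo
{
    static void Main()
    {
        // Many "strategies" hitting the cache in parallel for the same key.
        Parallel.For(0, 100, i =>
            IndicatorCache.GetOrCompute("ATR(14)", () => new double[] { 14.0 }));

        // The cached entry computed above is returned; prints 1.
        Console.WriteLine(
            IndicatorCache.GetOrCompute("ATR(14)", () => new double[0]).Length);
    }
}
```

Note that GetOrAdd's value factory may run more than once under contention (only one result is kept), so it suits caches of pure computations like indicators.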

Panache

#17
For my strategies,
CODE:
Please log in to see this code.

doesn't make any difference.