How to tweak the system to boost Optimizer speed?
Author: kribel
Creation Date: 8/15/2013 12:29 PM

kribel

#1
Hello,

I am trying to find out how the OS and the hardware can be tweaked to boost the WealthLab Optimizer's speed without multicore support. I have a few suggestions. It would be great if WealthLab Support would comment on them.

1. Increase RAM
At the moment I am configuring a new PC for my backtesting and optimizations. I am thinking of 32 GB RAM. Can WL 64-bit actually use that much?

2. Speed-up access to the historical data
For each parameter set during an optimization, WealthLab needs to access the historical data of each symbol from the selected DataSet. Does it keep this information in RAM or does it access the files on the hard drive each time?
If it accesses the files on the hard drive separately for each parameter set, then I see the following two options to speed this up:

2a. Use a SSD
I think this is obvious. An SSD is much faster than an HDD.

2b. Create a virtual drive in your RAM
If your hardware has enough RAM, it is possible to create a virtual drive in RAM (a RAM disk), move the historical data there, and let WealthLab access it from this virtual drive.
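
A minimal sketch (plain Python, not anything WealthLab-specific) of how one could check whether the data location matters at all on a given machine: time raw reads of one data file from the regular drive and from the RAM disk. Both paths and the R: drive letter are assumptions; adjust them to your actual data folder and RAM-disk letter.
CODE:
import time

def best_read_time(path, repeats=50):
    """Return the fastest of several timed full reads of the file."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read()
        best = min(best, time.perf_counter() - start)
    return best

for label, path in [("SSD/HDD ", r"C:\WLData\MSFT.WL"),   # assumed location
                    ("RAM disk", r"R:\WLData\MSFT.WL")]:  # assumed RAM disk
    print("%s: %.4f ms" % (label, best_read_time(path) * 1000.0))

Note that after the first read Windows usually serves the file from its own cache anyway, which already blurs the difference between the two locations.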

@WealthLab Support:
What do you think about this? Do you have any other tips on how to boost the speed?

Many thanks,
Konstantin

Eugene

#2
Konstantin,

1. See the FAQ, please: Is Wealth-Lab Developer/Pro 6 able to use 4(6,8...) Gb of RAM when running on a 64-bit OS?

2. Although file access is not the bottleneck here, go for an SSD and you'll be pleased in general.

kribel

#3
Hi Eugene,

2. What is the bottleneck here?
There is a benchmark from Asrock which says that with a virtual drive in RAM, file access is 5x faster than from an SSD. Here is the link: http://www.youtube.com/watch?feature=player_embedded&v=0amR2ruwNVo

But the original question stays unanswered. Could you please answer it?
QUOTE:
For each parameter set during an optimization, WealthLab needs to access the historical data of each symbol from the selected DataSet. Does it keep this information in RAM or does it access the files on the hard drive each time?


3. Overclocking the CPU is an additional way to speed up the optimization. Don't you think?

Is there anything else possible?

Eugene

#4
QUOTE:
There is a benchmark from Asrock which says that with the virtual drive in RAM the file access is 5x faster compared to an SSD.

You already guess that parallelizing optimizations would speed things up better than anything else on multi-core CPUs. Compared to that, I doubt it makes sense to care how fast access to the small .WL file is. Whether it takes (say) 0.2 ms or 0.004 ms, I couldn't care less.
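
For a sense of scale, here is a hypothetical sketch (plain Python, not WealthLab internals) of spreading the parameter grid across all cores; each call to backtest() stands in for one optimization run of a strategy.
CODE:
from itertools import product
from multiprocessing import Pool

def backtest(params):
    period, threshold = params
    # ... run the strategy with these parameters and return a score ...
    return float(period) - threshold  # placeholder score only

if __name__ == "__main__":
    grid = list(product(range(5, 55, 5), range(1, 11)))  # 10 x 10 = 100 runs
    with Pool() as pool:              # one worker per CPU core by default
        scores = pool.map(backtest, grid)
    best_score, best_params = max(zip(scores, grid))
    print("best score %.2f at %s" % (best_score, best_params))

With N cores the wall-clock time drops roughly by a factor of N, while a faster drive only shaves milliseconds per run.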

QUOTE:
3. Overclocking the CPU is an additional way to speed up the optimization. Don't you think?

Obviously, throwing CPU cycles at it is always a way.

QUOTE:
For each parameter set during an optimization, WealthLab needs to access the historical data of each symbol from the selected DataSet. Does it keep this information in RAM or does it access the files on the hard drive each time?

I don't know for sure, but even if it occasionally hits the hard drive (as I think it does), the overhead here is minimal. Just give it a thought: depending on the strategy's complexity, a single optimization run might take seconds or even minutes, so the milliseconds taken by HDD access hardly play a big role. Don't forget that Windows caches file operations.
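
A quick back-of-envelope check of that claim, with assumed (not measured) numbers:
CODE:
runs = 1000            # parameter combinations in one optimization
backtest_s = 5.0       # assumed seconds per single optimization run
file_access_s = 0.005  # assumed 5 ms to read the .WL file from the HDD

total_s = runs * (backtest_s + file_access_s)
share = runs * file_access_s / total_s
print("total: %.0f s, file-access share: %.2f%%" % (total_s, share * 100))
# prints roughly: total: 5005 s, file-access share: 0.10%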

QUOTE:
Is there anything else possible?


How about developing parameterless or adaptive strategies that either don't rely on heavy optimization, over-optimization, and periodic reoptimization, or have a minimal number of parameters that are reoptimized rather infrequently?
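
One hypothetical illustration of "adaptive" (plain Python, not a WealthLab strategy, and only one of many possible approaches): a Kaufman-style adaptive moving average derives its smoothing speed from recent price action instead of from a lookback that has to be optimized and reoptimized.
CODE:
def adaptive_ma(prices, er_lookback=10, fast=2, slow=30):
    """KAMA-like moving average: reacts fast in trends, slow in chop."""
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    ama, out = prices[0], []
    for i, price in enumerate(prices):
        if i <= er_lookback:
            ama = price                      # warm-up: just track the price
        else:
            change = abs(price - prices[i - er_lookback])
            noise = sum(abs(prices[j] - prices[j - 1])
                        for j in range(i - er_lookback + 1, i + 1))
            er = change / noise if noise else 0.0
            sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
            ama += sc * (price - ama)
        out.append(ama)
    return out

Even this still carries a few numbers (the fast/slow bounds), so it is not truly parameterless, only much less sensitive to its settings.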

LenMoz

#5
QUOTE:
Is there anything else possible?

Perhaps I'm being repetitive, but think about doubling step sizes. Think of it this way: doubling the step size of a single parameter cuts the required calculations in half, the same effect as doubling processor and I/O speed. Each additional parameter whose step size you are able to double cuts it in half again.
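
A quick sketch of the arithmetic, with made-up parameter ranges:
CODE:
def combinations(ranges):
    """ranges: list of (start, stop, step); returns number of grid points."""
    total = 1
    for start, stop, step in ranges:
        total *= len(range(start, stop + 1, step))
    return total

fine   = [(2, 50, 2), (5, 100, 5), (1, 20, 1)]   # 25 * 20 * 20 = 10000 runs
coarse = [(2, 50, 4), (5, 100, 10), (1, 20, 2)]  # 13 * 10 * 10 =  1300 runs
print(combinations(fine), combinations(coarse))  # -> 10000 1300

Doubling all three step sizes takes 10,000 runs down to 1,300, close to the 2 x 2 x 2 = 8-fold reduction described above.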

Len

kribel

#6
Hello there,

@Eugene:
Thanks for your input! I think I will just try the different possibilities and compare them.

QUOTE:
How about developing parameterless or adaptive strategies that either don't rely on heavy optimization, over-optimization, and periodic reoptimization, or have a minimal number of parameters that are reoptimized rather infrequently?


This kind of strategy still contains parameters indirectly. But in order to apply values to parameters indirectly, I think it is necessary to experience their direct impact first.

@Len:
Thanks! Step size is a possible way, but that is another story. It only reduces the number of parameter combinations; it does not accelerate the hardware.

Cheers
Konstantin