- ago
I personally target APR, but what metric do you guys prefer to target most?
0
773
11 Replies

Glitch8 ( 10.94% )
- ago
#1
WealthLab Score, of course 😎
0
- ago
#2
I select a metric which takes performance and risk into account.

Because there are many performance measures:
APR, AvgProfit, ProfitPct, ProfitPerBar, WinRate, etc...
and there are many risk measures:
MaxDD, Stdev of Equity Curve, etc...
there are many measures which take both sides into account.

The most well-known such risk/reward metric is certainly the Sharpe Ratio, so it is a good start.
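For readers who want to experiment, here is a minimal sketch of an annualized Sharpe Ratio computation. The function name and the 252-bars-per-year default are my assumptions, not anything from WL itself:

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe Ratio from a list of per-period returns.

    Assumes daily bars (252 per year) by default; adjust for your bar scale.
    """
    rf_per_period = risk_free / periods_per_year
    excess = [r - rf_per_period for r in returns]
    mean = statistics.mean(excess)
    sd = statistics.stdev(excess)              # sample standard deviation
    return (mean / sd) * periods_per_year ** 0.5
```

Note the annualization factor: multiplying by the square root of the number of periods per year scales a per-bar Sharpe up to a yearly figure.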
0
vk8 ( 52.48% )
- ago
#3
Good question, with no “right” answer.
1. Number of trades - if you don’t have enough trades it could be just luck.
2. Average percent profit - if it is too low, then execution quality becomes very important.
3. APR
4. Everything else that has been said here before

But here comes the most important advice:

Use walk forward optimization.
Here I prefer the “expanding window”, but I also use sliding windows to get another picture of possible results.
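To make the expanding-vs-sliding distinction concrete, here is a hypothetical helper that generates the two kinds of train/test index splits (illustrative only, not WL's actual walk-forward implementation):

```python
def walk_forward_splits(n_bars, n_splits, window="expanding"):
    """Yield ((train_start, train_end), (test_start, test_end)) index pairs.

    "expanding": the training window grows to include all bars seen so far.
    "sliding":   the training window keeps a fixed length and slides forward.
    Ranges are half-open: [start, end).
    """
    fold = n_bars // (n_splits + 1)            # equal-sized out-of-sample folds
    for i in range(1, n_splits + 1):
        train_end = i * fold
        train_start = 0 if window == "expanding" else train_end - fold
        yield (train_start, train_end), (train_end, train_end + fold)
```

With 100 bars and 4 splits, the expanding scheme trains on bars 0-20, then 0-40, and so on, each time testing on the next 20-bar fold; the sliding scheme always trains on the most recent 20 bars.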
1
- ago
#4
QUOTE:
Because there are many performance measures ... and there are many risk measures ... there are many measures which take both sides into account. The most well-known such Risk/Reward metric is certainly the Sharpe Ratio ...

The point is that a metric that takes both sides into account is an area of financial research.

I think the best approach is to develop a ScoreCard metric (and code it for WL) that takes both sides into account. I've looked at this problem using the Equity Curve. But even when narrowing the problem down to characterizing the Equity Curve, questions arise. There are many ways to characterize the Equity Curve for this purpose. For example, ...

... should the ScoreCard metric only look at how well the strategy Equity Curve is positively trending without reference to the Buy-and-Hold "control" equity curve? Or without looking at the benchmark equity curve? Should the merit metric penalize the strategy on a drop in equity when the control (benchmark) equity drops as well? If so, what should be the penalty?

The point is there can be a lot wrapped into a merit metric for a WL ScoreCard characterizing the Equity Curve. There are a great many variables to consider here.

The good news is WL lets you develop your own ScoreCard metrics if you're a talented programmer and know a little about time series analysis. There's also the issue of which terms in your proposed model are most significant and should be kept or dropped. That's a statistical research question to pose to a stat package like R or MatLab. MatLab can do stepwise regression; R has packages for all-possible-subsets regression. Both can compute separate "P"s (probabilities of significance) for each term in your merit model.
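As one toy illustration of characterizing the Equity Curve (my own hypothetical metric, not a WL ScoreCard API), consider the R-squared of a straight-line fit to the log equity curve: a smooth, steadily compounding curve scores near 1, a choppy one scores lower.

```python
import math
import statistics

def trend_quality(equity):
    """R-squared of a straight-line fit to the log equity curve.

    Hypothetical merit metric for illustration: near 1 for a smooth
    exponential curve, lower for choppy or flat curves.
    """
    y = [math.log(v) for v in equity]
    x = list(range(len(y)))
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0 or syy == 0:                   # degenerate (flat) curve
        return 0.0
    return sxy * sxy / (sxx * syy)
```

This deliberately ignores the benchmark question raised above; penalizing a strategy relative to Buy-and-Hold would need a second curve as input.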

Happy computing, researching, and publishing to you.
1
- ago
#5
There is one other issue that makes all of this even more complicated. The underlying process is stochastic, and any one value of a computed Metric has to be treated as a sample from an unknown distribution.
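One practical way to see that sampling spread is to bootstrap the metric: resample the trade returns with replacement many times and look at the distribution of the recomputed metric values. A minimal sketch (function names are mine):

```python
import random
import statistics

def bootstrap_metric(returns, metric, n_boot=1000, seed=42):
    """Estimate the sampling distribution of a performance metric.

    Resamples the per-trade returns with replacement n_boot times and
    returns the mean and standard deviation of the recomputed metric.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_boot):
        resample = [rng.choice(returns) for _ in returns]
        samples.append(metric(resample))
    return statistics.mean(samples), statistics.stdev(samples)
```

The reported standard deviation gives a rough sense of how much any single computed value of the metric could move on slightly different data.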
0
- ago
#6
QUOTE:
The underlying process is stochastic, and any one value of a computed Metric has to be treated as a sample from an unknown distribution.

And all of today's commercial stat packages are going to assume the error is Normally distributed for each term in the ScoreCard model they are fitting when they compute the "P"s (significance) for each term in the model. There's no simple way to overcome this unless you write your own stat package from C++ robust-statistics library code.

I would assume Normally distributed error in the terms of your ScoreCard Equity-Curve model and declare this as a limitation in your paper when you publish it. I'm sure your readers will understand and will still want to read your paper (including me). Every research publication has limitations--we accept that. I predict you'll get a "usable" general solution to your ScoreCard Equity-Curve merit model, although the kurtosis will not be 3 (normal) and the skewness will not be zero (normal) for the error in the fit.
0
- ago
#7
superticker,

The gist of my comment was a caution to the OP that even after choosing a Metric that meets the desired need, the numerical result from that Metric will not truly be deterministic; small changes in the data values, sequences, etc. will give different results, because the underlying process is stochastic.

How to possibly address that, and a host of other modeling issues, is the subject for a whole different discussion! ;)

V
0
- ago
#8
QUOTE:
... a caution to the OP that even after choosing a Metric that meets the desired need, that the numerical result from that Metric will not truly be deterministic,...

And I totally agree. Even if there were a "robust" stepwise regression algorithm that could correctly solve for the "P"s for each of the modeling terms without assuming a Normal error distribution, there would still be many possible "close" solutions (i.e. it wouldn't be single-solution deterministic).

This lack of determinism is what makes writing an ideal parameter optimizer for WL next to impossible. It also means that on each run of the optimizer you're going to get a different parameter fit, regardless of the ScoreCard metric used.

If there's an article that compares (ScoreCard) Equity-Curve merit metrics (such as the Sharpe Ratio), please cite it here. I would love to read it. One has already been cited on this forum for Sharpe Ratio "correction". https://www.wealth-lab.com/Discussion/Metric-Probabilistic-Sharpe-Ratio-6429
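For reference, the Probabilistic Sharpe Ratio from that linked thread can be sketched as follows. This is based on the Bailey & Lopez de Prado formula as I understand it; treat the details (and the function signature, which is mine) as an assumption and check the cited discussion:

```python
import math

def probabilistic_sharpe(sr, n, skew=0.0, kurt=3.0, sr_benchmark=0.0):
    """Probabilistic Sharpe Ratio: probability that the true Sharpe Ratio
    exceeds sr_benchmark, adjusting the estimated per-period SR for sample
    size n, skewness, and kurtosis (3.0 = normal) of the returns.
    """
    denom = math.sqrt(1 - skew * sr + (kurt - 1) / 4 * sr * sr)
    z = (sr - sr_benchmark) * math.sqrt(n - 1) / denom
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

Intuitively, a small sample, negative skew, or fat tails all shrink the confidence that an observed Sharpe Ratio reflects genuine skill rather than luck.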
0
- ago
#9
I keep looking for a good article that compares metrics, but I have not found one to date. I have been given some advice by highly knowledgeable contacts in the business whose organization I feel certain has done some work in this area.

It is unfortunate that the Sharpe Ratio "correction", i.e. the Probabilistic Sharpe Ratio, has not garnered more votes. :(

V
0
- ago
#10
A less academic but more pragmatic study that I found interesting when I read it a few years ago:
http://quantfiction.com/2018/08/20/trading-metrics-that-actually-matter/
0
- ago
#11
alkimit,

Thanks for the link. Very interesting, and quite surprising! What that author did not do is assess the predictive potential of Vince's "Optimal f", which unfortunately is not too great.

V
0
