This metric is a significant improvement over the standard Sharpe Ratio, suggested by David Bailey and Marcos Lopez de Prado (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1821643) to address the non-Gaussian return distributions regularly found in trading systems. It permits a user to select among strategies with similar Sharpe Ratios and identify those with a higher probability of maintaining their performance out-of-sample (https://quantdare.com/probabilistic-sharpe-ratio/, https://stockviz.biz/2020/05/23/probabilistic-sharpe-ratio/).

Code for the calculation in Python by one of the original authors (Marcos Lopez de Prado) can be found on GitHub (https://github.com/rubenbriones/Probabilistic-Sharpe-Ratio).

Vince
Solved
15 Replies

#1
Apparently, standard Sharpe ratio estimates of the equity curve are not normally distributed (i.e., they are non-Gaussian and highly skewed; I didn't know that). As a result, standard approaches for evaluating and testing them aren't valid.

The cited paper discusses an alternative metric, the Probabilistic Sharpe Ratio, which may lend itself to more traditional evaluation. I'm not able to tell whether the Probabilistic Sharpe Ratio would be more normally distributed, but that would certainly be a goal. Clearly, though, the Probabilistic Sharpe Ratio would be a better merit metric (for testing) than the standard Sharpe ratio. Interesting work for anyone with some statistical background.
#2
superticker,

I knew that you would appreciate this work! ;)

Vince
#3
It appears that the Python code MIGHT have a couple of small errors (as noted here: https://github.com/rubenbriones/Probabilistic-Sharpe-Ratio/issues).

The R code (https://github.com/braverock/PerformanceAnalytics/blob/master/R/ProbSharpeRatio.R) does not appear to have these issues.

Vince
Glitch8
#4
Can someone describe the computation in plain English?
#5
Glitch,

It is essentially the Sharpe Ratio, modified by the Skewness and Kurtosis of the distribution of the returns.

From the R implementation (https://github.com/braverock/PerformanceAnalytics/blob/master/R/ProbSharpeRatio.R):

refSR is the reference Sharpe Ratio and should be in the same periodicity as the returns (non-annualized)
n is the record length of the returns
sr is the Sharpe Ratio of the returns
sk is the skewness of the returns
kr is the kurtosis of the returns

If x is the distribution:
sr = SharpeRatio(x, Rf, p, "StdDev")
sk = skewness(x)
kr = kurtosis(x,method='moment')

and

sr_prob = pnorm(((sr - refSR)*((n-1)^(0.5)))/(1-sr*sk+(sr^2)*(kr-1)/4)^(0.5))

"pnorm" evaluates the cumulative distribution function of the normal distribution (by default the standard normal, with mean 0 and standard deviation 1).
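For concreteness, here is a rough Python sketch of that same formula. This is my own translation, not Lopez de Prado's code, and the `prob_sharpe_ratio` function name is my invention; it uses only the standard library, with `math.erf` standing in for R's `pnorm`:

```python
import math

def prob_sharpe_ratio(returns, ref_sr=0.0):
    """Probabilistic Sharpe Ratio: estimated probability that the true
    Sharpe Ratio of `returns` exceeds `ref_sr` (non-annualized, in the
    same periodicity as the returns), per the formula quoted above."""
    n = len(returns)
    mean = sum(returns) / n
    # Sample standard deviation (ddof = 1) for the Sharpe Ratio itself
    sd1 = math.sqrt(sum((x - mean) ** 2 for x in returns) / (n - 1))
    sr = mean / sd1
    # Population moments for skewness and (moment) kurtosis
    sd0 = math.sqrt(sum((x - mean) ** 2 for x in returns) / n)
    sk = sum((x - mean) ** 3 for x in returns) / n / sd0 ** 3
    kr = sum((x - mean) ** 4 for x in returns) / n / sd0 ** 4  # normal = 3
    denom = math.sqrt(1 - sk * sr + (kr - 1) / 4 * sr ** 2)
    z = (sr - ref_sr) * math.sqrt(n - 1) / denom
    # Standard normal CDF via the error function (the R code's pnorm)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

With a positive-mean return series and refSR = 0, the result is a probability above 0.5; raising refSR drives it toward 0.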

Does that help?

Vince
#6
Glitch,

On the other hand, you can take the R script for this function and use it directly from C#:

https://vvella.blogspot.com/2010/08/integrate-c-net-and-r-taking-best-of.html

Vince
#7
QUOTE:
It is essentially the Sharpe Ratio, modified by the Skewness and Kurtosis of the distribution of the returns.
So the normal distribution has a skewness of zero and a kurtosis of 3. We want to reshape the Sharpe Ratio distribution so it looks more normal by stretching one tail a bit, compressing the other tail, and scaling.
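Those two reference values are easy to sanity-check numerically. A standalone Python sketch (my own, standard library only; the moment formulas match the definitions used in the PSR denominator):

```python
import math
import random

# Draw a large standard-normal sample and verify that its skewness is
# near 0 and its moment kurtosis is near 3.
random.seed(42)
sample = [random.gauss(0.0, 1.0) for _ in range(200_000)]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / n)
skewness = sum((x - mean) ** 3 for x in sample) / n / sd ** 3
kurtosis = sum((x - mean) ** 4 for x in sample) / n / sd ** 4

print(round(skewness, 1), round(kurtosis, 1))
```

Note that some libraries report "excess" kurtosis (kurtosis minus 3), which is 0 for the normal distribution; the R snippet above requests the moment definition explicitly with method='moment'.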

QUOTE:
On the other hand you can take the R-Script for this function and use it directly in C# ...
I wouldn't do that for WL "production use". That would require installing R and running the calculations through the R interpreter, which would be slow. Just convert the R code to C# so the solution isn't interpreted. It might be possible to do a line-by-line conversion of the Python code (GitHub) to C# code.
#8
QUOTE:
We want to reshape the Sharpe Ratio distribution so it looks more normal via a bit of stretching one tail, compressing the other tail, and scaling.

Yes. Seems rather clever to me.

QUOTE:
It might be possible to do a line-by-line conversion of the Python code

That would probably be best, I agree. However, the Python code seems to have two minor errors that would need to be corrected (https://github.com/rubenbriones/Probabilistic-Sharpe-Ratio/issues).

Vince
#9
I'm taking a step back and reevaluating this problem. Pardon me for using this as a statistics forum for the moment.

The Central Limit Theorem tells us that if there are multiple independent sources of error (which there definitely are in this case), then regardless of their individual distributions (provided they have finite variance), their averaged sum approaches a normal distribution. That's important because we want the transforming function (the Sharpe ratio in this case) to average out all sources of error for stability (reproducibility). Unfortunately, that's not happening in this case--and that's a problem.
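The averaging effect is easy to see in a simulation. A quick Python sketch (my own illustration, not tied to any Sharpe ratio code): draws from a heavily skewed exponential distribution have skewness near 2, but averaging 50 such draws per observation pulls the skewness of the averages down toward 0.

```python
import math
import random

def skewness(xs):
    """Sample skewness via population moments."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 3 for x in xs) / n / sd ** 3

random.seed(7)
# A single exponential error source is heavily skewed (skewness ~ 2)...
raw = [random.expovariate(1.0) for _ in range(20_000)]
# ...but the average of 50 such sources is nearly symmetric,
# as the Central Limit Theorem predicts.
averaged = [sum(random.expovariate(1.0) for _ in range(50)) / 50
            for _ in range(20_000)]

print(skewness(raw), skewness(averaged))
```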

Bottom line: the Sharpe ratio is a biased transform that fails to average out all error sources, so it's not usable in a stochastic environment. We need a new metric that shows more robust statistical behavior in a noisy environment. And certainly, I'm not the first one to make this reality check.
#10
QUOTE:
And certainly, I'm not the first one to make this reality check.


Yes, the Sharpe Ratio is flawed. Any metric that assumes a Gaussian distribution of returns will be flawed. Unfortunately, not many metrics based on robust statistics have been developed. A large portion of the professional community continues to ignore the problem; the most sophisticated members do not.

I see the PSR as the best crude attempt to correct the shortcoming of the SR.

Vince
#11
QUOTE:
there are not many metrics based on robust statistics that have been developed
There's no requirement to use robust statistics. Classical (moment-oriented) statistics can work if the merit metric is "naturally" normally distributed.

For those unfamiliar with robust statistics, they do not assume a normal distribution. You can employ testing of the traditional Sharpe ratio directly with robust statistics. But my point is different.

I'm saying that the Central Limit Theorem allows us to cancel out all error sources intrinsically if we can discover a new metric that naturally yields a normal distribution. This will provide us with the most stable solution regardless of the type of statistical analysis used.

QUOTE:
I see the PSR as the best crude attempt to correct the shortcoming of the SR.
I agree with that. But we still need to keep looking for a PSR replacement as well, since even it falls short.
#12
QUOTE:
I'm saying that the Central Limit Theorem allows us to cancel out all error sources intrinsically if we can discover a new metric that naturally yields a normal distribution. This will provide us with the most stable solution regardless of the type of statistical analysis used.

Agreed!

Vince
#13
The more I read about this metric, the more I understand its predictive value for model building. I sure hope that Glitch can get this into WL7.

Vince
#15
It's become possible to construct custom performance metrics with finantic's latest ScoreCard extension! Check it out, looks amazing:
https://www.wealth-lab.com/Discussion/finantic-ScoreCard-Released-6987
Best Answer