This metric is a significant improvement over the standard Sharpe Ratio, proposed by David Bailey and Marcos Lopez de Prado (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1821643) to address the non-Gaussian return distributions regularly found in trading systems. It permits a user to select among strategies with similar Sharpe Ratios and identify those with a higher probability of maintaining performance out-of-sample (https://quantdare.com/probabilistic-sharpe-ratio/, https://stockviz.biz/2020/05/23/probabilistic-sharpe-ratio/).

Code for the calculation in Python by one of the original authors (Marcos Lopez de Prado) can be found on GitHub (https://github.com/rubenbriones/Probabilistic-Sharpe-Ratio).

Vince

Apparently, standard Sharpe ratio estimates of the equity curve are not normally distributed (i.e., they are non-Gaussian and highly skewed; I didn't know that). As a result, standard approaches for evaluating and testing them aren't valid.
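That claim is easy to check with a quick Monte Carlo sketch (illustrative only, not from the cited paper): draw many samples of skewed "returns," estimate the Sharpe ratio of each sample, and inspect the skewness of the resulting estimates. All parameter choices here (lognormal returns, 60 observations, 5000 trials, the seed) are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs, n_trials = 60, 5000  # 60 periods per backtest, 5000 simulated backtests

# Lognormal-minus-one returns: positively skewed, like many trading systems
returns = rng.lognormal(mean=0.0, sigma=0.5, size=(n_trials, n_obs)) - 1.0

# Per-trial Sharpe ratio estimates (risk-free rate assumed zero)
sr_hat = returns.mean(axis=1) / returns.std(axis=1, ddof=1)

def moment_skew(x):
    # Moment-based skewness of a sample
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

print("skewness of the Sharpe-estimate distribution:", moment_skew(sr_hat))
```

Under these assumptions the printed skewness is typically visibly different from zero, which is exactly why Gaussian-based tests on raw Sharpe estimates can mislead.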

The cited paper discusses an alternative metric (the Probabilistic Sharpe Ratio), which may lend itself to more traditional evaluation. I'm not able to tell whether the Probabilistic Sharpe Ratio would be more normally distributed, but that would certainly be a goal. But clearly, the Probabilistic Sharpe Ratio would be a better merit metric (for testing) than the standard Sharpe ratio. Interesting work for anyone with some statistical background.

superticker,

I knew that you would appreciate this work! ;)

Vince

It appears that the Python code might have a couple of small errors (as noted here: https://github.com/rubenbriones/Probabilistic-Sharpe-Ratio/issues).

The R code (https://github.com/braverock/PerformanceAnalytics/blob/master/R/ProbSharpeRatio.R) appears to not have these issues.

Vince

Can someone describe the computation in plain English?

Glitch,

It is essentially the Sharpe Ratio, modified by the Skewness and Kurtosis of the distribution of the returns.

From the R implementation (https://github.com/braverock/PerformanceAnalytics/blob/master/R/ProbSharpeRatio.R):

refSR defines the reference Sharpe Ratio and should be in the same periodicity as the returns (non-annualized).

n is the record length of the returns

sr is the sharpe ratio of the returns

sk is the skewness of the returns

kr is the kurtosis of the returns

If x is the distribution:

sr = SharpeRatio(x, Rf, p, "StdDev")

sk = skewness(x)

kr = kurtosis(x,method='moment')

and

sr_prob = pnorm(((sr - refSR)*((n-1)^(0.5)))/(1-sr*sk+(sr^2)*(kr-1)/4)^(0.5))

"pnorm" calculates cumulative distribution function of normal distribution from the Mean and Standard Deviation.

Does that help?

Vince

Glitch,

On the other hand, you can take the R script for this function and use it directly from C#:

https://vvella.blogspot.com/2010/08/integrate-c-net-and-r-taking-best-of.html

Vince

QUOTE:
It is essentially the Sharpe Ratio, modified by the Skewness and Kurtosis of the distribution of the returns.

So the Normal distribution has a Skewness of zero and a Kurtosis of 3. We want to reshape the Sharpe Ratio distribution so it looks more normal via a bit of stretching one tail, compressing the other tail, and scaling.

QUOTE:
On the other hand you can take the R-Script for this function and use it directly in C# ...

I wouldn't do that for WL "production use". That would require installing R and running the calculations through the R interpreter, which would be slow. Just convert the R code to C# so the solution isn't interpreted. It might be possible to do a line-by-line conversion of the Python code (GitHub) to C# code.

QUOTE:

We want to reshape the Sharpe Ratio distribution so it looks more normal via a bit of stretching one tail, compressing the other tail, and scaling.

Yes. Seems rather clever to me.

QUOTE:

It might be possible to do a line-by-line conversion of the Python code

That would probably be the best approach, I agree. However, the Python code seems to have two minor errors that would need to be corrected (https://github.com/rubenbriones/Probabilistic-Sharpe-Ratio/issues).

Vince

I'm taking a step back and reevaluating this problem. Pardon me for using this as a statistics forum for the moment.

The Central Limit Theorem tells us that if there are multiple sources of error (which there definitely are in this case), then regardless of their distribution, the resulting error distribution must be normal. That's important because we want the transforming function (the Sharpe ratio in this case) to average out all sources of error for stability (reproducibility). Unfortunately, that's not happening in this case, and that's a problem.

Bottom line: the Sharpe ratio is a bias transform that fails to average out all error sources. It's not usable in a stochastic environment. We need a new metric which shows more robust statistical behavior in a noisy environment. And certainly, I'm not the first one to make this reality check.

QUOTE:

And certainly, I'm not the first one to make this reality check.

Yes, the Sharpe Ratio is flawed. Any metric which assumes a Gaussian distribution of returns will be flawed. Unfortunately, not many metrics based on robust statistics have been developed. A large portion of the professional community continues to ignore the problem; the most sophisticated members do not.

I see the PSR as the best crude attempt to correct the shortcoming of the SR.

Vince

QUOTE:
there are not many metrics based on robust statistics that have been developed

There's no requirement to use robust statistics. Classical (moment-oriented) statistics can work if the merit metric is "naturally" normally distributed. For those unfamiliar with robust statistics, they do not assume a normal distribution. You can employ testing of the traditional Sharpe ratio directly with robust statistics. But my point is different.

I'm saying that the Central Limit Theorem allows us to cancel out all error sources intrinsically if we can discover a new metric that naturally yields a normal distribution. This will provide us with the most stable solution regardless of the type of statistical analysis used.

QUOTE:
I see the PSR as the best crude attempt to correct the shortcoming of the SR.

I agree with that. But we still need to be looking for a PSR replacement as well, even if it falls short.

QUOTE:

I'm saying that the Central Limit Theorem allows us to cancel out all error sources intrinsically if we can discover a new metric that naturally yields a normal distribution. This will provide us with the most stable solution regardless of the type of statistical analysis used.

Agreed!

Vince

The more I read about this metric, the more I understand its predictive value for model building. I sure hope that Glitch can get this into WL7.

Vince

It's become possible to construct custom performance metrics with finantic's latest ScoreCard extension! Check it out, looks amazing:

https://www.wealth-lab.com/Discussion/finantic-ScoreCard-Released-6987
