Right now, all strategies published on WealthLab.com are ranked using performance results from a single internal dataset (“WealthLab 100”). This approach ensures a consistent benchmark across strategies, but it doesn’t always reflect how a strategy is meant to perform in its intended market.
I’m considering a change that would allow each published strategy to specify its own dataset for ranking purposes. For example, a crypto strategy could use a crypto dataset, an ETF strategy could use an ETF dataset, and so on.
The benefit would be more accurate and relevant rankings within each strategy’s domain.
The downside is that it could open the door to curve fitting if authors choose datasets that flatter their results. So, we’d likely enforce some manual moderation around dataset selection.
I’d love to get your feedback before moving forward.
- Do you prefer that all strategies continue to use a common benchmark dataset for comparability?
- Or would you rather see rankings reflect each strategy’s own target market and data universe?
- Any ideas on how we could balance fairness and flexibility?
QUOTE:
- Do you prefer that all strategies continue to use a common benchmark dataset for comparability?
- Or would you rather see rankings reflect each strategy’s own target market and data universe?
I would do both. You can't compare apples to oranges, but a cherry-picked dataset is a good thing, not a bad thing.
All my datasets are cherry-picked and reevaluated each weekend. Yes, that's cheating, but I'm here to make money, not to play fair. But playing fair for comparison purposes is still useful.
I’m afraid both isn’t an option; it would be too much maintenance. We want one or the other. Which would you prefer for the website strategy rankings?