OPEN-SOURCE SCRIPT

Quick scan for signal

🙏🏻 Hey TV, this is QSFS, following:
Quick scan for drift (QSFD)
Quick scan for cycles (QSFC)

As mentioned before, ML trading is all about spotting any kind of non-randomness, and this metric (along with the 2 previously posted) is gonna help y'all do it fast. This one shows whether your time series possibly exhibits mean-reverting / consistent / noisy behavior, which can later be confirmed or denied by more sophisticated tools. The metric is O(n) in windowed mode and O(1) when calculated incrementally on each data update, so you can scan Ks of datasets w/o worrying about melting da ice.

Snapshot
^^ windowed mode

Now, the post is divided into several sections, including a couple of things I guess you've never seen or thought about in your life:

1) About Efficiency Ratios posted out there on TV;
Some of you might say this is the Efficiency Ratio you've seen in Perry's book. Firstly, I can assure you that neither I nor Perry, just like the X amount of quants all over the world and who knows who else, would ever say smth like "I invented it," lol. This is just a thing you R&D when you need it. Secondly, I invite you (and the mods & admins as well) to take a lil glimpse at the following screenshot:

Snapshot
^^ not cool...

So basically, all the Efficiency Ratios that were copypasted to our platform suffer from the same bug: dudes don't know how indexing works in Pine Script. I mean, it's ok, I've made the same mistakes as well, but loxx, cmon bro, you... If you guys ever read this, lines 20 and 22 in da code are dedicated to you xD
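The bug itself stays on the screenshot, but here's the indexing done consistently, as a minimal sketch (my variable names, not the published code): close[n] is the close n bars ago, so close - close[n] spans exactly n one-bar changes, and the noise sum has to cover exactly those n changes, no more, no fewer.

```pine
//@version=5
indicator("ER indexing sketch")
n = input.int(10, "Window", minval=1)

// net move over n bars: spans the n one-bar changes close[0]-close[1], ..., close[n-1]-close[n]
signal = math.abs(close - close[n])

// consistent noise: the sum of exactly those n one-bar moves
noise = math.sum(math.abs(ta.change(close)), n)

// classic off-by-one: math.abs(close - close[n - 1]) spans only n-1 changes,
// while the denominator still sums n of them -> the ratio comes out biased low

er = noise > 0 ? signal / noise : na
plot(er, "Efficiency Ratio")
```

The denominator telescopes down to at least the numerator, which is exactly why the ratio can't exceed 1; break the index pairing and that guarantee quietly goes away.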

2) About the metric;
The script supports both a moving window mode when Length > 0, and an all-data expanding window mode when Length < 1, calculating incrementally from the very first data point in the series: O(n) on history, O(1) on live updates.
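Here's a minimal sketch of both modes (not the script itself, names are mine): the expanding mode just keeps two running accumulators alive, so each new bar costs O(1).

```pine
//@version=5
indicator("Windowed vs expanding sketch")
// per the post: Length > 0 -> moving window, Length < 1 -> expanding window
length  = input.int(10, "Length (0 = expanding)", minval=0)
winLen  = math.max(length, 1)            // keep the windowed calls valid when length = 0
absMove = math.abs(ta.change(close))     // one-bar absolute move (na on the first bar)

// moving window: recomputed over the last winLen bars
winSignal = math.abs(close - close[winLen])
winNoise  = math.sum(absMove, winLen)

// expanding window: O(1) per update via running accumulators
var float anchor = close                 // the very first close in the series
var float path   = 0.0                   // running sum of absolute one-bar moves
path += nz(absMove)
expSignal = math.abs(close - anchor)

signal = length > 0 ? winSignal : expSignal
noise  = length > 0 ? winNoise  : path
plot(noise > 0 ? math.sqrt(signal / noise) : na, "sqrt(ER)")
```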

Now, why do I SQRT-transform the result? It's a natural move, since the metric (being a ratio in essence) is bounded between 0 and 1, so it can be modeled with a beta distribution. The square root keeps it inside (0, 1) (think about what happens when you apply a square root to 0.01 or to 0.99), so it can still be modeled as beta, but it becomes roughly symmetric around its typical value and starts to follow a bell-shaped curve. You can easily check this with a normality test, or by applying a set of percentiles and seeing that the distances between them are almost equal.
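Quick numeric check of what the square root does at the edges of (0, 1):

$$\sqrt{0.01} = 0.1, \qquad \sqrt{0.25} = 0.5, \qquad \sqrt{0.99} \approx 0.995$$

Values near 0 get stretched upward far more than values near 1 get moved, which is exactly what pulls the skewed lower bulk of a noisy ratio toward the middle.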

Then I noticed that the typical value of the metric slides with the moving window size: larger windows lead to lower typical values across the moving windows. It turned out this can be modeled the same way confidence intervals are built. Lines 34 and 35 explain it all, I guess. You can see smth alike on an autocorrelogram. These two lines match the mean & the mean + 1 stdev applied to the metric. This way, we've just magically received the data needed to estimate the alpha and beta parameters of the beta distribution using the method of moments. Having alpha and beta, we can estimate everything further. Btw, there's an alternative parameterization for beta distributions based on data length.
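For the nerds, here's a sketch of why the typical value slides, assuming a driftless Gaussian random walk (my framing, the code itself may differ). Over an n-bar window,

$$E\,|P_t - P_{t-n}| = \sigma\sqrt{\tfrac{2n}{\pi}}, \qquad E\sum_{i=1}^{n}|\Delta P_i| = n\,\sigma\sqrt{\tfrac{2}{\pi}},$$

so to first order the ratio behaves like $1/\sqrt{n}$: the same shrinkage you see in autocorrelogram confidence bands. And once you have a mean m and a variance v of the sqrt-transformed metric, the method-of-moments estimates are

$$\hat{\alpha} = m\left(\frac{m(1-m)}{v} - 1\right), \qquad \hat{\beta} = (1-m)\left(\frac{m(1-m)}{v} - 1\right),$$

valid whenever $v < m(1-m)$.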

Now what you’ll see next is... u guys actually have no idea how deep and unrealistically minimalistic the underlying math principles are here.

I’m sure I’m not the only one in the universe who figured it out, but the thing is, it’s nowhere online or offline. By calculating higher-order moments & combining them, you can find natural adaptive thresholds that can later be used for anomaly detection/control applications for any data. No hardcoded thresholds, purely data-driven. Imma come back to this in one of the next drops, but the truest ones can already see it in this code. This way we get dem thresholds.

Your main thresholds are: basis, upper, and lower deviations. You can follow the common logic I’ve described in my previous scripts on how to use them. You just register an event when the metric goes higher/lower than a certain threshold based on what you’re looking for. Then you take the time series and confirm a certain behavior you were looking for by using an appropriate stat test. Or just run a certain strategy.
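If you wanna rebuild the lines yourself, and assuming they come straight from the fitted beta (my reading, the code knows best), the needed moments are closed-form:

$$\mu = \frac{\hat{\alpha}}{\hat{\alpha} + \hat{\beta}}, \qquad \sigma^2 = \frac{\hat{\alpha}\,\hat{\beta}}{(\hat{\alpha}+\hat{\beta})^2(\hat{\alpha}+\hat{\beta}+1)},$$

so basis ≈ μ and the deviation lines ≈ μ ± kσ: recomputed from the data itself, nothing hardcoded.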

To avoid numerous triggers when the metric jitters around a threshold, you can follow this logic: forget about one threshold if touched, until another threshold is touched.
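To make that concrete, a minimal sketch of the latch (the metric and threshold series here are stand-ins, not the script's internals):

```pine
//@version=5
indicator("Threshold latch sketch")
// stand-in series; swap in the QSFS metric and its deviation lines
metric = ta.rsi(close, 14) / 100
basis  = ta.sma(metric, 100)
dev    = ta.stdev(metric, 100)
upper  = basis + dev
lower  = basis - dev

// latch: after one threshold fires, ignore it until the opposite one is touched
var int armed = 0                        // 1 = upper spent, -1 = lower spent
upEvent = ta.crossover(metric, upper) and armed != 1
dnEvent = ta.crossunder(metric, lower) and armed != -1
armed := upEvent ? 1 : dnEvent ? -1 : armed

plot(metric)
plot(upper, color=color.red)
plot(lower, color=color.green)
plotshape(upEvent, style=shape.triangleup, location=location.bottom)
plotshape(dnEvent, style=shape.triangledown, location=location.top)
```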

In general, when the metric rises above certain thresholds, like the upper deviation, it means the signal is stronger than the noise. You confirm it with a more sophisticated tool & run momentum strategies if drift is in place, or volatility strategies if there's no drift. Otherwise, when it drops below the lower deviation, you confirm & run ~ mean-reverting strategies, regardless of whether there's drift or not. Just don't operate against the trend; hedge otherwise.

3) Flex;
Extension and limit thresholds based on distribution moments are gonna be discussed properly later, but for now you can see this:

Snapshot
^^ magic

Look at the thresholds—adaptive and dynamic. Do you see any optimizations? No ML, no DL, closed-form solution, but how? Just a formula based on a couple of variables? Maybe it’s just how the Universe works, but how can you know if you don’t understand how fundamentally numbers 3 and 15 are related to the normal distribution? Hm, why do they always say 3 sigmas but can’t say why? Maybe you can be different and say why?
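A hint for the truest ones, since it's checkable: the even moments of a standard normal follow the double factorial pattern

$$E[Z^{2k}] = (2k-1)!! \;\;\Rightarrow\;\; E[Z^2] = 1, \quad E[Z^4] = 3, \quad E[Z^6] = 15,$$

and the 3-sigma habit exists because $P(|X - \mu| > 3\sigma) \approx 0.0027$ under normality. Connect the dots.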

This is the primordial power of statistical modeling.

4) Thanks;
I really wanna dedicate this to Charlotte de Witte & Marion Di Napoli, and their new track "Sanctum." It really gets you connected to the Source—I had it in my soul when I was doing all this ∞
Tags: dimension, efficiency, generalized, noise, ratio, signal, statistics, Trend Analysis, Volatility
