ZynIQ Trend Master V2 - (Pro Pack)
Overview
ZynIQ Trend Master v2 (Pro) provides a structured, multi-layered approach to trend analysis. It combines volatility-aware trend detection, adaptive cloud colouring, and pullback signalling to help traders see trend strength, continuation phases and potential shift points with clarity.
Key Features
• Multi-profile trend modes (Scalping / Intraday / Swing)
• Adaptive trend cloud with colour transitions based on strength
• Volatility-aware pullback detection
• Optional HTF trend alignment
• Clean labels marking key transitions
• Configurable filters for smoothing and responsiveness
• Lightweight visuals for fast intraday charting
Use Cases
• Identifying conditions where trend strength is increasing or weakening
• Timing entries during pullbacks within a trend
• Aligning intraday and HTF directional bias
• Combining with breakout, volume or market structure tools for confirmation
Notes
This tool provides structured trend context and momentum flow. It is not a trading system on its own. Use with your preferred confirmation and risk management.
ZynIQ Session Master v2 - (Lite Pack)
Overview
ZynIQ Session Master v2 (Lite) highlights key market sessions and their associated ranges, helping traders understand when volatility tends to shift between Asian, London and New York sessions. It provides clean visual context for intraday trading without overwhelming the chart.
Key Features
• Automatic detection and shading of major trading sessions
• Configurable session highlighting
• Optional range markers for Asia, London and New York
• Lightweight visuals suitable for fast intraday charting
• Simple session-based structure for context around volatility shifts
• Optional labels marking session transitions
Use Cases
• Seeing where session volatility typically increases
• Identifying when price is leaving a session range
• Timing trades around session opens
• Combining session structure with breakout, trend or momentum tools
Notes
This script provides session structure and volatility context. It is not a standalone trading system. Use alongside your preferred confirmation and risk management.
Weighted KDE Mode
🙏🏻 The ‘ultimate’ typical value estimator, for the highest computational cost @ time complexity O(n^2). I am not afraid to say: this is the last resort BFG9000 you can ‘ever’ get to make dem market demons kneel before y’all
Quickguide
pls read it, you won’t find it anywhere else in open access
When to use:
If current market activity is so crazy || things on your charts are really so bad (contaminated data && (data has very heavy tails || very pronounced peak)), the only option left is to use the peak (mode) of the Kernel Density Estimate, instead of the median, not even mentioning the mean. So when WMA won’t help, when WPNR won’t help, you need this thing.
Setting it up:
Interval: choose what u need, you can use the usual moving windows, but I also added yearly and session anchors like in the old VWAP (always prefer 24h instead of Session if your plan allows). Other options like a cumulative window are also there.
Parameters: this script ain't no joke, it needs time to make calculations, so I added a setting to calculate only for the last N bars (when “starting at bar N” is set to 0). If it’s not zero, it acts as a starting point after which the calculations happen (useful for backtesting). Other parameters: keep em as they are, keep the student5 kernel, turn off the appropriate weights if u apply it to something other than chart data, on other studies etc.
But instead of listening to me, just experiment with the parameters and see what they change, it would take 5 mins max
Been always saying that VWAP is ish, not time-aware etc, volume info is incorporated in a lil bit wrong way… So I decided not just to fix VWAP (you can do it yourself in 5 mins), but instead to drop there the Ultimate xD typical value estimator that is ever possible to do. Time aware, volume / inferred volume aware, resistant to all kinds of BS. This is your shieldwall.
How it works:
You can easily do a weighted kernel density estimation, in our case including temporal and intensity information while accumulating densities. Here are some details worth mentioning about the thing:
Kernels are raw (not unit variance), that’s easier to work with later.
h_constants for each kernel were calculated ^^ given that ^^ with python mpmath module with high decimal precision.
In bandwidth calculation instead of using empirical standard deviation as a scaler, I use... ta.range(src, len) / math.sqrt(12)
...that takes the data range and converts it to a standard deviation, assuming the data is uniformly distributed. That’s exactly what we need: a scaler that is coherent with the KDE and, just like the kernels (except the gaussian ones, which we don’t even need to use), has nothing to do with stdevs. More importantly, if u take multiple windows and see over time which distro they approach in the long term, that would be the uniform one (not the normal one as many think). Sometimes windows are multimodal, sometimes Laplace like etc, so in general all together they are uniform ish.
The one and only kernel you really need is Student t with v = 5, for the use case I highlighted in the first part of the post for TV users. It’s as far as u can get until ish becomes crazy like undefined variance etc. It has the highest kurtosis (= 9) of all the kernels in the script, perfect for the real use case I mentioned. Otherwise, you don’t even need KDE 4 real, but still I included other sensible kernels for comparison or in case I am trippin there.
Btw, don’t believe in all that hype about the Epanechnikov kernel, which in essence is made from a beta distribution with alpha = beta = 2, idk why folk call it by that weird name, it’s the beta2 kernel. Yes on paper it really minimises AMISE (that’s how I calculated the h constants for all dem kernels in the script), but for really crazy data (the proper use case for us), it doesn’t even come ‘close’ compared with the student5 kernel. Not much else to add.
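If you wanna see the mechanics rather than take my word for it, here’s a rough Python/NumPy sketch of the idea (not the Pine source): a student5 kernel, a range / sqrt(12) bandwidth scaler, and time + volume weights accumulated into one density. The half_life and h_const parameters are illustrative assumptions, not the script’s actual constants.

```python
# Illustrative sketch only (Python/NumPy), not the Pine source: a weighted KDE
# whose mode (peak) is taken as the "typical value".
import numpy as np
from math import gamma, sqrt, pi

def student5_kernel(u):
    """Student-t kernel with v = 5 degrees of freedom (raw, not unit variance)."""
    v = 5.0
    c = gamma((v + 1) / 2) / (sqrt(v * pi) * gamma(v / 2))
    return c * (1.0 + u ** 2 / v) ** (-(v + 1) / 2)

def weighted_kde_mode(prices, volumes, grid_points=201, half_life=50, h_const=1.0):
    """Peak (mode) of a time- and volume-weighted KDE over the window."""
    prices = np.asarray(prices, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    n = len(prices)
    # Weights: exponential time decay (newer bars count more) times volume.
    age = np.arange(n)[::-1]                      # 0 for the newest bar
    w = 0.5 ** (age / half_life) * volumes
    w = w / w.sum()
    # Bandwidth scaler: data range converted to a "uniform" stdev, range / sqrt(12),
    # instead of the empirical standard deviation.
    scale = (prices.max() - prices.min()) / sqrt(12.0)
    h = h_const * scale * n ** (-1.0 / 5.0)       # AMISE-style n^(-1/5) rate
    # Accumulate weighted kernel densities on a price grid and take the argmax.
    grid = np.linspace(prices.min(), prices.max(), grid_points)
    dens = np.zeros_like(grid)
    for p, wi in zip(prices, w):                  # O(n * grid) accumulation
        dens += wi * student5_kernel((grid - p) / h) / h
    return grid[np.argmax(dens)]

# Example: mode estimate on synthetic heavy-tailed data.
rng = np.random.default_rng(0)
px = 100 + np.cumsum(rng.standard_t(df=3, size=300))
vol = rng.lognormal(mean=0, sigma=0.5, size=300)
print(weighted_kde_mode(px, vol))
```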
Shout out to @RicardoSantos for inspiration, I saw your KDE script a long time ago brotha, finna got my hands on it.
∞
Normal Dist Deviation Levels
This indicator shows where the current price sits within a normal-distribution “sigma” framework and projects those levels as short, local reference lines rather than full trailing bands.
It first calculates a moving average (SMA or EMA, user-selectable) over a chosen lookback length and the corresponding standard deviation of price around that mean. The mean is treated as the 0σ level, and fixed price levels are computed at ±1σ, ±2σ, and ±3σ from that mean for the most recent bar.
For each of these sigma prices, the script draws a short horizontal segment that spans only a limited number of candles into the past and into the future, giving clean local “price bars” instead of bands across the entire chart. The colors and line styles differentiate 0σ (blue), ±1σ (solid), ±2σ (dashed), and ±3σ (dotted), visually marking moderate to extreme deviations from the mean.
To make interpretation easier, the indicator also places text labels to the right of the price bars, a couple of candles ahead of the line ends. Each label shows both the statistical region and its approximate normal-distribution probability, such as “50% (0σ)”, “15.87% (+1σ / -1σ)”, “2.27% (+2σ / -2σ)”, and “0.14% (+3σ / -3σ)”, so you can quickly see how unusual the current deviation is in probabilistic terms.
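For reference, a minimal Python sketch of the level math described above (assuming an SMA mean and sample standard deviation; not the Pine source) shows how the sigma levels and their tail probabilities line up:

```python
# Minimal sketch of the sigma-level math (Python, not the Pine source).
# Assumes an SMA mean and sample stdev over `length` closes.
import numpy as np
from statistics import NormalDist

def sigma_levels(closes, length=20):
    window = np.asarray(closes[-length:], dtype=float)
    mean = window.mean()                 # 0-sigma level
    sd = window.std(ddof=1)              # sample standard deviation around the mean
    levels = {k: mean + k * sd for k in (-3, -2, -1, 0, 1, 2, 3)}
    # One-sided tail probability beyond each level, e.g. ~15.87% beyond +1 sigma.
    probs = {k: 1 - NormalDist().cdf(abs(k)) for k in levels}
    return levels, probs

closes = 100 + np.cumsum(np.random.default_rng(1).normal(0, 1, 200))
levels, probs = sigma_levels(closes)
for k in sorted(levels):
    print(f"{k:+d} sigma: {levels[k]:.2f}  (tail ~{probs[k]:.2%})")
```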
CS Institutional X-Ray (Perfect Sync)
Title: CS Institutional X-Ray
Description:
CS Institutional X-Ray is an advanced Order Flow and Market Structure suite designed to reveal what happens inside Japanese candles.
Most traders only see open and close prices. This indicator utilizes VSA (Volume Spread Analysis) algorithms and Synthetic Footprint Logic to detect institutional intervention, liquidity manipulation, and market exhaustion.
🧠 1. The Mathematical Engine: Synthetic Footprint
The core of this indicator is not based on moving average crossovers, but on market physics: Effort vs. Result.
The script scans every candle and calculates:
Buy/Sell Pressure: Analyzes the close position relative to the total candle range and weights it by volume.
Synthetic Delta: Calculates the net difference between buyer and seller aggression.
Volume Anomalies: Detects when volume is abnormally high (Institutional) or low (Retail).
The Absorption Logic: The indicator hunts for divergences between candle color and internal flow.
Example: If price drops hard (Red Candle) with massive volume, but the close moves away from the low, the algorithm detects that massive LIMIT orders absorbed the selling pressure. Result: Institutional Buy Signal.
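To illustrate the effort-vs-result idea, here is a simplified Python sketch of a synthetic buy/sell pressure and absorption check. The 2x volume threshold and the close-off-low cutoff are assumptions chosen for illustration, not the script's actual parameters.

```python
# Illustrative sketch of the "synthetic footprint" idea described above (Python).
# Thresholds and the absorption rule are assumptions, not the script's actual logic.
def synthetic_footprint(o, h, l, c, v, vol_ma):
    rng = max(h - l, 1e-12)
    buy_pressure = (c - l) / rng * v          # close near the high -> buyers in control
    sell_pressure = (h - c) / rng * v         # close near the low  -> sellers in control
    delta = buy_pressure - sell_pressure      # synthetic delta (net aggression)
    high_volume = v > 2.0 * vol_ma            # "institutional" volume anomaly (assumed 2x)
    bearish_bar = c < o
    closed_off_low = (c - l) / rng > 0.35     # assumed cutoff for "close moves away from the low"
    # Absorption: heavy selling effort, huge volume, but the close is pulled off the low.
    bullish_absorption = bearish_bar and high_volume and closed_off_low and delta > 0
    return delta, bullish_absorption

# Example: a red candle on 3x average volume that closes well off its low.
print(synthetic_footprint(o=105, h=106, l=100, c=104, v=30_000, vol_ma=10_000))
```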
📊 2. The Institutional Semaphore (Visual Guide)
The indicator automatically recolors candles to show the real state of the auction:
🔵 CYAN (Whale Buy): Bullish Absorption. Institutions buying aggressively or absorbing selling pressure at support.
🟣 MAGENTA (Whale Sell): Bearish Absorption. Institutions selling into strength or stopping a rally with sell walls.
⚪ GREY (Exhaustion/Zombie): "No-Trade" Zone. Volume is extremely low. The movement lacks institutional backing and is prone to failure.
🟢/🔴 Normal: Market in equilibrium.
🛡️ 3. Smart Zone System (Market Memory)
The indicator draws and manages Support and Resistance levels based on volume events, not just pivots.
Virgin Zones (Bright): When a "Whale" appears, a solid line is projected. If price has not touched it again, it is a high-probability bounce zone.
Automatic Mitigation: The exact moment price touches a line, the indicator detects the mitigation. The line turns Grey and Dotted, and the label dims. This keeps the chart clean, showing only what is relevant now.
☠️ 4. Manipulation Detector (Liquidity Grabs)
The system distinguishes between a normal reversal and a "Stop Hunt".
Signal: ☠️ GRAB
Logic: If price breaks a previous Low/High to sweep liquidity and closes with an absorption candle (Whale), it is marked as a "Grab." This is the system's most powerful reversal signal.
🧱 5. FVG with Liquidity Score
The indicator draws Fair Value Gaps (Imbalances) and assigns them a volume score.
"Vol: 3.0x": Indicates that the gap was created with 3 times the average volume, making it a much stronger price magnet than a standard FVG.
🚀 How to Trade with CS Institutional X-Ray
Identify the Footprint: Wait for a Cyan or Magenta candle to appear.
Validate the Trap: If the signal comes with a "☠️ GRAB" label, the probability of success increases drastically.
The Retest (Entry): Do not chase price. Place a Limit order on the generated Zone Line or at the edge of the FVG.
Management: Use opposite zones or mitigated zones (grey) as Take Profit targets.
Included Settings:
Fully configurable Alerts for Whales, Grabs, and Retests.
Total customization of colors and styles.
Bottom Up - Reverso Pro
Reverso Pro by Bottom Up - Excess is the signal. Reversion is the edge.
Reverso is a mean reverting indicator that identifies market excesses and signals reversals for highly probable retracements to an average value.
Reverso's algorithm is extremely precise because it also takes into account the historical volatility of the instrument and constantly recalibrates itself dynamically without repainting.
This tool is suitable for mean-reversion traders who want to study EMA reactions, understand market trends, and refine entry/exit strategies based on price-memory dynamics.
Why Reverso Pro is different (This isn’t just another indicator)
Zero repainting – What you see is what you get. No tricks, no redraws, ever.
Dynamically adapts to the historical volatility of the instrument — works the same on Forex, stocks, indices, or some random crypto.
Constant real-time recalibration — adjusts instantly to volatility regime changes.
Fully adjustable sensitivity — From machine-gun signals for brutal scalping to only the most extreme deviations for monster-probability swing trades.
Native multi-timeframe control — Choose the timeframe used for signal calculation (5 min, 1H, daily, or custom). Reverso bends to your style.
When a Reverso signal fires:
Price has reached a statistically extreme deviation from its historical memory.
The probability of a snapback to the mean is at its peak.
It’s time to go counter-trend with the lowest risk and the highest reward possible.
Customization Options
You can use it on any timeframe and instrument.
You can also customize the timeframe over which the signals are processed, to suit very fast scalping or to intercept slower and longer movements for swing trading.
The sensitivity of the indicator can also be customized to emit multiple signals or identify only the most extreme levels of deviation from the mean.
Add to chart. Turn on alerts. Happy trading!
Bottom Up - The Ecosystem Designed for Traders
bottomup.finance
Gaussian Hidden Markov Model
A Hidden Markov Model (HMM) is a statistical model that assumes an underlying process is a Markov process with unobservable (hidden) states. In the context of financial data analysis, an HMM can be particularly useful because it allows for the modeling of time series data where the state of the market at a given time depends on its state in the previous time period, but these states are not directly observable from the market data. When we say that a state is "unobservable" or "hidden," we mean that the true state of the process generating the observations at any time is not directly visible or measurable. Instead, what is observed is a set of data points that are influenced by these hidden states.
The HMM uses a set of observed data to infer the sequence of hidden states of the model (in our case a model with 3 states and Gaussian emissions). It comprises three main components: the initial probabilities, the state transition probabilities, and the emission probabilities. The initial probabilities describe the likelihood of starting in a particular state. The state transition probabilities describe the likelihood of moving from one state to another, while the emission probabilities (in our case emitted from Gaussian probability density functions; in the image, red, yellow and green Laplace probability density functions) describe the likelihood of the observed data given a particular state.
MODEL FIT
Posterior
By default, the indicator displays the posterior distribution as fitted by training a 3-state Gaussian HMM. The posterior refers to the probability distribution of the hidden states given the observed data. In the case of this Gaussian HMM with three states, the posterior represents the probabilities that the model assigns to each of these three states at each time point, after observing the data. The term "posterior" comes from Bayes' theorem, where it represents the updated belief about the model's states after considering the evidence (the observed data).
In the indicator, the posterior is visualized as the probability of the stock market being in a particular volatility state (high vol, medium vol, low vol) at any given time in the time series. Each day, the probabilities of the three states sum to 1, with the plot showing color-coded bands to reflect these state probabilities over time. It is important to note that the posterior distribution of the model fit tells you about the performance of the model on past data. The model calculates the probabilities of observations for all states by taking into account the relationship between observations and their past and future counterparts in the dataset. This is achieved using the forward-backward algorithm, which enables us to train the HMM.
Conditional Mean
The conditional mean is the expected value of the observed data given the current state of the model. For a Gaussian HMM, this would be the mean of the Gaussian distribution associated with the current state. It’s "conditional" because it depends on the probabilities of the different states the model is in at a given time. This connects back to the posterior probability, which assigns a probability to the model being in a particular state at a given time.
Conditional Standard Deviation Bands
The conditional standard deviation is a measure of the variability of the observed data given the current state of the model. In a Gaussian HMM, each state has its own emission probability, defined by a Gaussian distribution with a specific mean and standard deviation. The standard deviation represents how spread out the data is around the mean for each state. These bands directly relate to the emission probabilities of the HMM, as they describe the likelihood of the observed values given the current state. Narrow bands suggest a lower standard deviation, indicating the model is more confident about the data's expected range when in that state, while wider bands indicate higher uncertainty and variability.
Transition Matrix
The transition matrix in an HMM is a key component that characterizes the model. It's a square matrix representing the probabilities of transitioning from one hidden state to another. Each row of the transition matrix must sum to 1, since the probabilities of moving from a given state to all possible subsequent states (including staying in the same state) must encompass all possible outcomes.
For example, we can see the following transition probabilities in our model:
Going from state X: to X (0.98), to Y (0.02), to Z (0)
Going from state Y: to X (0.03), to Y (0.96), to Z (0.01)
Going from state Z: to X (0), to Y (0.11), to Z (0.89)
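For illustration, the example transition matrix above can be written out explicitly, and a single forward-algorithm update (the recursion behind the state probabilities discussed here and in the MODEL TEST section below) looks roughly like the Python sketch that follows. The emission means and standard deviations are made up for the example; they are not the indicator's fitted parameters.

```python
# Sketch only (Python/NumPy): the example transition matrix above, plus one
# forward-algorithm update. Emission means/stdevs are made up for illustration.
import numpy as np

A = np.array([[0.98, 0.02, 0.00],    # from state X
              [0.03, 0.96, 0.01],    # from state Y
              [0.00, 0.11, 0.89]])   # from state Z
mu    = np.array([0.000, 0.000, 0.000])   # hypothetical Gaussian emission means (returns)
sigma = np.array([0.005, 0.015, 0.035])   # hypothetical low / medium / high vol stdevs

def gaussian_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def forward_step(alpha_prev, observation):
    """One forward-algorithm step: predict with A, then weight by emission likelihood."""
    likelihood = gaussian_pdf(observation, mu, sigma)
    alpha = (alpha_prev @ A) * likelihood
    return alpha / alpha.sum()            # normalized filtered state probabilities

alpha = np.array([1/3, 1/3, 1/3])         # flat initial belief
for r in [0.001, -0.020, 0.045]:          # a few daily log returns
    alpha = forward_step(alpha, r)
    print(np.round(alpha, 3))
```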
MODEL TEST
When the "Test Out of Sample” option is enabled, the indicator plots models out-of-sample predictions. This is particularly useful for real-time identification of market regimes, ensuring that the model's predictive capability is rigorously tested on unseen data. The indicator displays the out of sample posterior probabilities which are calculated using the forward algorithm. Higher probability for a particular state indicate that the model is predicted a higher likelihood that the market is currently in that state. Evaluating the models performance on unseen data is crucial in understanding how well the model explains data that are not included in its training process.
Hurst Exponent - Detrended Fluctuation Analysis
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analyzing time series that appear to be long-memory processes and noise.
█ OVERVIEW
We have introduced the concept of the Hurst Exponent in our previous open indicator Hurst Exponent (Simple). It is an indicator that measures the market state from autocorrelation. However, here we apply a more advanced and accurate way to calculate the Hurst Exponent rather than a simple approximation. Therefore, we recommend using this version of the Hurst Exponent over our previous publication going forward. The method we use here is called detrended fluctuation analysis. (For folks that are not interested in the math behind the calculation, feel free to skip to the "features" and "how to use" sections. However, it is recommended that you read it all to gain a better understanding of the mathematical reasoning).
█ Detrended Fluctuation Analysis
Detrended Fluctuation Analysis was first introduced by Peng, C.K. (Original Paper) in order to measure the long-range power-law correlations in DNA sequences. DFA measures the scaling behavior of the second-moment fluctuations; the scaling exponent is a generalization of the Hurst exponent.
The traditional way of measuring the Hurst exponent is the rescaled range method. However, DFA provides the following benefits over the traditional rescaled range (RS) method:
• Can be applied to non-stationary time series. While asset returns are generally stationary, DFA can measure Hurst more accurately in the instances where they are non-stationary.
• According to the asymptotic distribution values of DFA and RS, the latter usually overestimates the Hurst exponent (even after the Anis-Lloyd correction), resulting in the expected value of the RS Hurst being close to 0.54, instead of the 0.5 that it should be. Therefore it's harder to determine the autocorrelation based on the expected value. With the DFA method, the expected value of the Hurst Exponent (HE) is significantly closer to 0.5, making that threshold much more useful.
• Lastly, DFA requires a lower sample size relative to the RS method. While the RS method generally requires thousands of observations to reduce the variance of HE, DFA only needs a sample size greater than a hundred to accomplish the same.
█ Calculation
DFA is a modified root-mean-squares (RMS) analysis of a random walk. In short, DFA computes the RMS error of linear fits over progressively larger bins (non-overlapped “boxes” of similar size) of an integrated time series.
Our signal time series is the log returns. First we subtract the mean from the log returns to calculate the demeaned returns. Then we calculate the cumulative sum of the demeaned returns, so the cumulative sum is mean centered and we can use the DFA method on it. The subtraction of the mean eliminates the "global trend" of the signal. The advantage of applying scaling analysis to the signal profile instead of the signal itself is that the original signal is allowed to be non-stationary when needed. (For example, this process converts an i.i.d. white noise process into a random walk.)
We slice the cumulative sum into windows of equal size and run a linear regression on each window to measure the linear trend. After each linear regression, we detrend the series by deducting the regression line from the cumulative sum in each window. The fluctuation is the difference between the cumulative sum and the regression.
We use different window sizes on the same cumulative sum series. The window size scales are log spaced, e.g. powers of 2: 2, 4, 8, 16... This is where the scale-free measurement comes in: how we measure the fractal nature and self-similarity of the time series, as well as how well the smaller scales represent the larger scales.
As the window size decreases, we use more regression lines to measure the trend. Therefore, the fit of the regression should be better, with smaller fluctuation. It allows one to zoom into the "picture" to see the details. The linear regressions are like rulers: if you use more rulers to measure the smaller-scale details, you will get a more precise measurement.
The exponent we are measuring here determines the relationship between the window size and the fit of the regression (the rate of change). The more complex the time series is, the more the fit will depend on decreasing window sizes (using more linear regression lines to measure). The less complex, or the more trend in the time series, the less it will depend on them. The fit is calculated as the average of the root mean square errors (RMS) of the regressions from each window.
The root mean square error is calculated as the square root of the mean of the squared differences between the cumulative sum and the regression. The following chart displays the average RMS for different window sizes. As the chart shows, values for smaller window sizes show more detail due to the higher complexity of the measurements.
The last step is to measure the exponent. In order to measure the power-law exponent, we measure the slope on a log-log plot. The x axis is the log of the window size, the y axis is the log of the average RMS. We run a linear regression through the plotted points; the slope of the regression is the exponent. It's easy to see the relationship between RMS and window size on the chart. A larger RMS means a worse fit of the regression. We know the RMS will increase (fit will decrease) as we increase the window size (use fewer regressions to measure), so we focus on the rate at which the RMS increases (how fast) as the window size increases.
If the slope is < 0.5, it means the rate of increase in RMS is small when the window size increases. Therefore the fit is much better when it's measured by a large number of linear regression lines, so the series is more complex (mean reversion, negative autocorrelation).
If the slope is > 0.5, it means the rate of increase in RMS is larger when the window size increases. Therefore even when the window size is large, the larger trend can be measured well by a small number of regression lines, so the series has a trend with positive autocorrelation.
If the slope = 0.5, it means the series follows a random walk.
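Putting the steps above together, a compact and purely illustrative Python sketch of the DFA exponent (not the Pine implementation, and without the dynamic log-scale handling described in the features below) could look like this:

```python
# Illustrative Python sketch of the DFA steps described above (not the Pine code).
import numpy as np

def dfa_hurst(log_returns, scales=(4, 8, 16, 32, 64)):
    x = np.asarray(log_returns, dtype=float)
    profile = np.cumsum(x - x.mean())            # mean-centered cumulative sum ("profile")
    avg_rms = []
    for s in scales:
        n_windows = len(profile) // s
        rms = []
        for w in range(n_windows):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)       # linear trend inside the window
            detrended = seg - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        avg_rms.append(np.mean(rms))
    # Slope of log(average RMS) vs log(window size) is the DFA exponent (~Hurst).
    slope, _ = np.polyfit(np.log(scales), np.log(avg_rms), 1)
    return slope

rng = np.random.default_rng(2)
print(dfa_hurst(rng.normal(0, 1, 1024)))         # white-noise returns -> roughly 0.5
```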
█ FEATURES
• Sample Size is the lookback period for the calculation. Even though DFA requires a lower sample size than RS, a sample size larger than 50 is recommended for accurate measurement.
• When a larger sample size is used (for example = 1000 lookback length), the loading speed may be slower due to a longer calculation. Date Range is used to limit numbers of historical calculation bars. When loading speed is too slow, change the data range "all" into numbers of weeks/days/hours to reduce loading time. (Credit to allanster)
• “show filter” option applies a smoothing moving average to smooth the exponent.
• Log scale is my workaround for dynamic log-space scaling. Traditionally the smallest log space for bars is a power of 2. It requires at least 10 points for an accurate regression, resulting in a minimum lookback of 1024. I made some changes to round the fractional log space into integer bars, requiring the said log space to be less than 2.
• For a more accurate calculation a larger "Base Scale" and "Max Scale" should be selected. However, when the sample size is small, a larger value would cause issues. Therefore, a general rule to be followed is: a larger "Base Scale" and "Max Scale" should be selected for a larger sample size. It is recommended for the user to try and choose a larger scale if increasing the value doesn't cause issues.
The following chart shows the change in value using various scales. As shown, sometimes increasing the value makes the output messy and prone to overshooting.
When using the lowest scale (4,2), the value seems stable. When we increase the scale to (8,2), the value is still alright. However, when we increase it to (8,4), it begins to look messy. And when we increase it to (16,4), it starts overshooting. Therefore, (8,2) seems to be optimal for our use.
█ How to Use
Similar to Hurst Exponent (Simple), 0.5 is the level for determining long-term memory.
• Under the efficient market hypothesis, the market follows a random walk and the Hurst exponent should be 0.5. When the Hurst Exponent is significantly different from 0.5, the market is inefficient.
• When the Hurst Exponent is > 0.5: positive autocorrelation. The market is trending. Positive returns tend to be followed by positive returns and vice versa.
• When the Hurst Exponent is < 0.5: negative autocorrelation. The market is mean reverting. Positive returns tend to be followed by negative returns and vice versa.
However, we can't really tell if the Hurst exponent value is generated by random chance by only looking at the 0.5 level. Even if we measure a pure random walk, the Hurst Exponent will never be exactly 0.5; it will be close, like 0.506, but not equal to 0.5. That's why we need a level to tell us if the Hurst Exponent is significant.
So we also computed the 95% confidence interval according to a Monte Carlo simulation. The confidence level adjusts itself to the sample size. When the Hurst Exponent is above the top or below the bottom confidence level, the value of the Hurst exponent has statistical significance. The efficient market hypothesis is rejected and the market has significant inefficiency.
The state of the market is painted in different colors, as the following chart shows. The user can also tell the state from the table displayed on the right.
An important point is that the Hurst value only represents the market state according to past measurements, which means it only tells you the market state now and in the past. If the Hurst Exponent on a sample size of 100 shows a significant trend, it means that according to the past 100 bars, the market is trending significantly. It doesn't mean the market will continue to trend. It's not forecasting the market state in the future.
However, this is also another way to use it. The market is not always random and it is not always inefficient; the state switches around from time to time. But there's one pattern: when the market stays inefficient for too long, the market participants see this and will try to take advantage of it. Therefore, the inefficiency will be traded away. That's why the Hurst exponent won't stay in significant trend or mean reversion for too long. When it's significant the market participants see that as well and the market adjusts itself back to normal.
The Hurst Exponent can be used as a mean-reverting oscillator itself. In a liquid market, the value tends to return back inside the confidence interval after significant moves (in smaller markets, it could stay inefficient for a long time). So when the Hurst Exponent shows significant values, the market has just entered a significant trend or mean reversion state. However, when it stays outside of the confidence interval for too long, it would suggest the market might be closer to the end of the trend or mean reversion instead.
A larger sample size makes the Hurst Exponent statistics more reliable. Therefore, if the user wants to know if long-term memory exists in general on the selected ticker, they can use a large sample size and maximize the log scale. Eg: 1024 sample size, scale (16,4).
The following chart is Bitcoin on the daily timeframe with a 1024 lookback. It suggests the market for Bitcoin tends to have long-term memory in general. It generally has a significant trend and is more inefficient at its early stage.
Fast Autocorrelation Estimator
█ Overview:
The Fast ACF and PACF Estimation indicator efficiently calculates the autocorrelation function (ACF) and partial autocorrelation function (PACF) using an online implementation. It helps traders identify patterns and relationships in financial time series data, enabling them to optimize their trading strategies and make better-informed decisions in the markets.
█ Concepts:
Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay.
This indicator displays autocorrelation based on lag number. The autocorrelation is not displayed over time on the x-axis; it's based on the lag number, which ranges from 1 to 30. The calculations can be done with "Log Returns", "Absolute Log Returns" or "Original Source" (the price of the asset displayed on the chart).
When calculating autocorrelation, the resulting value will range from +1 to -1, in line with the traditional correlation statistic. An autocorrelation of +1 represents a perfect correlation (an increase seen in one time series leads to a proportionate increase in the other time series). An autocorrelation of -1, on the other hand, represents a perfect inverse correlation (an increase seen in one time series results in a proportionate decrease in the other time series). Lag number indicates which historical data point is autocorrelated. For example, if lag 3 shows significant autocorrelation, it means current data is influenced by the data three bars ago.
The Fast Online Estimation of ACF and PACF Indicator is a powerful tool for analyzing the linear relationship between a time series and its lagged values in TradingView. The indicator implements an online estimation of the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF) up to 30 lags, providing a real-time assessment of the underlying dependencies in your time series data. The Autocorrelation Function (ACF) measures the linear relationship between a time series and its lagged values, capturing both direct and indirect dependencies. The Partial Autocorrelation Function (PACF) isolates the direct dependency between the time series and a specific lag while removing the effect of any indirect dependencies.
This distinction is crucial in understanding the underlying relationships in time series data and making more informed decisions based on those relationships. For example, let's consider a time series with three variables: A, B, and C. Suppose that A has a direct relationship with B, B has a direct relationship with C, but A and C do not have a direct relationship. The ACF between A and C will capture the indirect relationship between them through B, while the PACF will show no significant relationship between A and C, as it accounts for the indirect dependency through B. This means that when the ACF is significant at lag 5, the dependency detected could be caused by an observation that came in between, and the PACF accounts for that. This indicator leverages the Fast Moments algorithm to efficiently calculate autocorrelations, making it ideal for analyzing large datasets or real-time data streams. By using the Fast Moments algorithm, the indicator can quickly update ACF and PACF values as new data points arrive, reducing the computational load and ensuring timely analysis. The PACF is derived from the ACF using the Durbin-Levinson algorithm, which helps in isolating the direct dependency between a time series and its lagged values, excluding the influence of other intermediate lags.
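As a plain-batch reference for how the ACF and the Durbin-Levinson PACF relate (a Python sketch of the standard formulas, not the online Fast Moments implementation used by the indicator):

```python
# Plain-batch Python sketch of ACF and Durbin-Levinson PACF (not the online
# Fast Moments implementation used by the indicator).
import numpy as np

def acf(x, max_lag=30):
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x * x)
    return np.array([1.0] + [np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

def pacf_durbin_levinson(rho):
    """PACF from the ACF via the Durbin-Levinson recursion."""
    p = len(rho) - 1
    phi = np.zeros((p + 1, p + 1))
    pacf = np.zeros(p + 1)
    pacf[0] = 1.0
    for k in range(1, p + 1):
        if k == 1:
            phi[1, 1] = rho[1]
        else:
            num = rho[k] - np.dot(phi[k - 1, 1:k], rho[1:k][::-1])
            den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
            phi[k, k] = num / den
            phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, 1:k][::-1]
        pacf[k] = phi[k, k]
    return pacf

rng = np.random.default_rng(3)
returns = rng.normal(0, 1, 500)
rho = acf(returns, max_lag=10)
print(np.round(pacf_durbin_levinson(rho), 3))
# Approximate 95% confidence band for white noise: +/- 1.96 / sqrt(n)
print(1.96 / np.sqrt(len(returns)))
```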
█ How to Use the Indicator:
Interpreting autocorrelation values can provide valuable insights into the market behavior and potential trading strategies.
When applying autocorrelation to log returns, if a specific lag shows a high positive autocorrelation, it suggests that the time series tends to move in the same direction over that lag period. In this case, a trader might consider using a momentum-based strategy to capitalize on the continuation of the current trend. On the other hand, if a specific lag shows a high negative autocorrelation, it indicates that the time series tends to reverse its direction over that lag period. In this situation, a trader might consider using a mean-reversion strategy to take advantage of the expected reversal in the market.
ACF of log returns:
Absolute returns are often used as a measure of volatility. There is usually significant positive autocorrelation in absolute returns. We will often see an exponential decay of the autocorrelation in volatility. This means that current volatility is dependent on historical volatility and the effect slowly dies off as the lag increases. This effect shows the property of "volatility clustering", which means large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes.
ACF of absolute log returns:
Autocorrelation in price is always significantly positive and has an exponential decay. This predictably positive and relatively large value makes the autocorrelation of price (not returns) generally less useful.
ACF of price:
█ Significance:
The significance of a correlation metric tells us whether we should pay attention to it. In this script, we use 95% confidence interval bands that adjust to the size of the sample. If the observed correlation at a specific lag falls within the confidence interval, we consider it not significant and the data to be random or IID (identically and independently distributed). This means that we can't confidently say that the correlation reflects a real relationship, rather than just random chance. However, if the correlation is outside of the confidence interval, we can state with 95% confidence that there is an association between the lagged values. In other words, the correlation is likely to reflect a meaningful relationship between the variables, rather than a coincidence. A significant difference in either ACF or PACF can provide insights into the underlying structure of the time series data and suggest potential strategies for traders. By understanding these complex patterns, traders can better tailor their strategies to capitalize on the observed dependencies in the data, which can lead to improved decision-making in the financial markets.
Significant ACF but not significant PACF: This might indicate the presence of a moving average (MA) component in the time series. A moving average component is a pattern where the current value of the time series is influenced by a weighted average of past values. In this case, the ACF would show significant correlations over several lags, while the PACF would show significance only at the first few lags and then quickly decay.
Significant PACF but not significant ACF: This might indicate the presence of an autoregressive (AR) component in the time series. An autoregressive component is a pattern where the current value of the time series is influenced by a linear combination of past values at specific lags.
Often we find both significant ACF and PACF. In that scenario a simple AR or MA model might not be sufficient, and a more complex model such as ARMA or ARIMA can be used.
█ Features:
Source selection: User can choose either 'Log Returns', 'Absolute Returns' or 'Original Source' for the input data.
Autocorrelation Selection: User can choose either 'ACF' or 'PACF' for the plot selection.
Plot Selection: User can choose either 'Autocorrelogram' or 'Historical Autocorrelation' for plotting the historical autocorrelation at a specified lag.
Max Lag: User can select the maximum number of lags to plot.
Precision: User can set the number of decimal points to display in the plot.
Expected Move Bands
Expected move is the amount that an asset is predicted to increase or decrease from its current price, based on the current levels of volatility.
In this model, we assume asset price follows a log-normal distribution and the log return follows a normal distribution.
Note: Normal distribution is just an assumption, it's not the real distribution of return
Settings:
"Estimation Period Selection" is for selecting the period we want to construct the prediction interval.
For "Current Bar", the interval is calculated based on the data of the previous bar close. Therefore changes in the current price will have little effect on the range. What current bar means is that the estimated range is for when this bar close. E.g., If the Timeframe on 4 hours and 1 hour has passed, the interval is for how much time this bar has left, in this case, 3 hours.
For "Future Bars", the interval is calculated based on the current close. Therefore the range will be very much affected by the change in the current price. If the current price moves up, the range will also move up, vice versa. Future Bars is estimating the range for the period at least one bar ahead.
There are also other source selections based on high low.
Time setting is used when "Future Bars" is chosen for the period. The value in time means how many bars ahead of the current bar the range is estimating. When time = 1, it means the interval is constructing for 1 bar head. E.g., If the timeframe is on 4 hours, then it's estimating the next 4 hours range no matter how much time has passed in the current bar.
Note: It's probably better to use "probability cone" for visual presentation when time > 1
Volatility Models :
Sample SD: traditional sample standard deviation, most commonly used, uses (n-1) to adjust for the bias
Parkinson: Uses High/Low to estimate volatility, assumes continuous prices with no gaps and zero mean with no drift, 5 times more efficient than Close to Close
Garman Klass: Uses OHLC volatility, zero drift, no jumps, about 7 times more efficient
Yang-Zhang Garman Klass Extension: Adds a jump calculation to Garman Klass, has the same value as Garman Klass on markets with no gaps, about 8x efficient
Rogers: Uses OHLC, assumes non-zero mean volatility, handles drift, does not handle jumps, 8x efficient
EWMA: Exponentially Weighted Volatility. Weights recent volatility more; more reactive, better at taking into account volatility autocorrelation and clustering.
YangZhang: Uses OHLC, combines Rogers and Garman Klass, handles both drift and jumps, 14 times more efficient; alpha is the constant used to weight the Rogers volatility to minimize variance.
Median absolute deviation: A more direct way of measuring volatility. It measures volatility without using standard deviation. The MAD used here is adjusted to be an unbiased estimator.
Volatility Period is the sample size for variance estimation. A longer period makes the estimated range more stable and less reactive to recent price. The distribution is more significant on a larger sample size. A short period makes the range more responsive to recent price, which might be better for high volatility clusters.
Standard deviations:
Standard Deviation One shows the estimated range where the closing price will fall about 68% of the time.
Standard Deviation Two shows the estimated range where the closing price will fall about 95% of the time.
Standard Deviation Three shows the estimated range where the closing price will fall about 99.7% of the time.
Note: All these probabilities are based on the normal distribution assumption for returns. It's the estimated probability, not the actual probability.
Manually Entered Standard Deviation shows the range of any entered standard deviation. The probability of that range will be presented on the panel.
People usually assume the mean of returns to be zero. To be more accurate, we can consider the drift in price by calculating the geometric mean of returns. Drift happens in the long run, so short lookback periods are not recommended. Assuming a zero mean is recommended when time is not greater than 1.
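For intuition, here is a hedged Python sketch of the band construction under the log-normal assumption with an optional drift term. The close-to-close volatility and the source handling are simplifications for illustration, not the script's full set of volatility models.

```python
# Hedged sketch of the band math under the log-normal assumption (Python).
# Drift handling and the exact source selection are assumptions for illustration.
import numpy as np

def expected_move_bands(closes, vol_period=20, t=1, k=1.0, use_drift=False):
    closes = np.asarray(closes, dtype=float)
    r = np.diff(np.log(closes))[-vol_period:]          # log returns in the volatility window
    sigma = r.std(ddof=1)                               # close-to-close sample volatility
    mu = r.mean() if use_drift else 0.0                 # optional drift (geometric mean of returns)
    last = closes[-1]
    upper = last * np.exp(mu * t + k * sigma * np.sqrt(t))
    lower = last * np.exp(mu * t - k * sigma * np.sqrt(t))
    return lower, upper

closes = 100 * np.exp(np.cumsum(np.random.default_rng(4).normal(0, 0.01, 300)))
print(expected_move_bands(closes, t=1, k=2))            # ~95% band one bar ahead
```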
When we are estimating the future range for time > 1, we typically assume constant volatility and that the returns are independent and identically distributed. We scale the volatility in terms of time to get the future range. However, when there's autocorrelation in returns (when returns are not independent), this assumption fails to take account of that effect. Volatility scaled with autocorrelation is required when returns are not iid. We use an AR(1) model to scale the first-order autocorrelation to adjust for the effect. Returns typically don't have significant autocorrelation, so the adjustment for autocorrelation is not usually needed. A long length is recommended for the autocorrelation calculation.
Note: The significance of autocorrelation can be checked on an ACF indicator.
ACF
The multi-timeframe option enables people to use a higher period expected move on a lower time frame. People should only use a time frame higher than the current time frame for the input. An error warning will appear when the input TF is lower. The input format is multiplier * time unit. E.g.: 1D
Unit: M for months, W for Weeks, D for Days, integers with no unit for minutes (E.g. 240 = 240 minutes). S for Seconds.
The smoothing option uses a filter to smooth out the range. The filter used here is John Ehlers' supersmoother. It's an advanced smoothing technique that gets rid of aliasing noise. Its effect is similar to a simple moving average with half the lookback length, but smoother and with less lag.
Note: After smoothing, the range no longer represents the probability.
Panel positions can be adjusted in the settings.
X position adjusts the horizontal position of the panel. Higher X moves panel to the right and lower X moves panel to the left.
Y position adjusts the vertical position of the panel. Higher Y moves panel up and lower Y moves panel down.
Step line display changes the style of the bands from line to step line. Step line is recommended because it gets rid of the directional bias of slope of expected move when displaying the bands.
Warnings:
People should not blindly trust the probability. They should be aware of the risk involved in using the normal distribution assumption. Real returns have skewness and high kurtosis. While the skewness is not very significant, the high kurtosis should be noticed. Real returns have much fatter tails than the normal distribution, which also makes the peak higher. This property makes the tail ranges, such as ranges beyond 2 SD, highly underestimate the actual range, and the body, such as 1 SD, slightly overestimate the actual range. People shouldn't trust ranges of more than 2 SD; they should beware of extreme events in the tails.
Different volatility models have different properties. If people are interested in the accuracy and the fit of the expected move, they can try the expected move occurrence indicator. (The result also demonstrates the previous point about the drawback of using the normal distribution assumption.)
Expected move Occurrence Test
The prediction interval is only for the closing price, not wicks. It only estimates the probability of the price closing at this level, not what happens in between. E.g., if the 1 SD range is 100 - 200, the price can go to 80 or 230 intrabar, but if the bar closes within 100 - 200 in the end, it's still considered a 68% one standard deviation move.
Omega Ratio
The Omega Ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It is defined as the probability-weighted ratio of gains versus losses for some threshold return target. The ratio is an alternative for the widely used Sharpe ratio and is based on information the Sharpe ratio discards.
█ OVERVIEW
As we have mentioned many times, stock market returns are usually not normally distributed. Therefore the models that assume a normal distribution of returns may provide us with misleading information. The Omega Ratio improves upon the common normality assumption among other risk-return ratios by taking into account the distribution as a whole.
█ CONCEPTS
Two distributions with the same mean and variance would, according to the most commonly used Sharpe Ratio, suggest that the underlying assets of the distributions offer the same risk-return ratio. But as we have mentioned in our Moments indicator, variance and standard deviation are not a sufficient measure of risk in the stock market since other shape features of a distribution like skewness and excess kurtosis come into play. The Omega Ratio tackles this problem by employing all four Moments of the distribution and therefore taking into account the differences in the shape features of the distributions. Another important feature of the Omega Ratio is that it does not require any estimation but is rather calculated directly from the observed data. This gives it an advantage over standard statistical estimators that require the estimation of parameters and therefore introduce sampling uncertainty into their calculations.
█ WAYS TO USE THIS INDICATOR
Omega calculates a probability-adjusted ratio of gains to losses, relative to the Minimum Acceptable Return (MAR). This means that at a given MAR, using the simple rule of preferring more to less, an asset with a higher value of Omega is preferable to one with a lower value. The indicator displays the values of Omega at increasing levels of MAR, creating the so-called Omega Curve. Knowing this, one can compare Omega Curves of different assets and decide which is preferable given the MAR of your strategy. The indicator plots two Omega Curves: one for the on-chart symbol and another for the off-chart symbol that you can use for comparison.
When comparing curves of different assets, make sure their trading days are the same in order to ensure the same period for the Omega calculations. Value interpretation: Omega<1 indicates that the risk outweighs the reward and therefore there are more excess negative returns than positive. Omega>1 indicates that the reward outweighs the risk and that there are more excess positive returns than negative. Omega=1 indicates that the minimum acceptable return equals the mean return of an asset, and that the probability of gain is equal to the probability of loss.
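For reference, a hedged Python sketch of the standard discrete Omega formula and an Omega curve over increasing MAR levels (not the indicator's exact implementation):

```python
# Hedged Python sketch of the discrete Omega ratio and an Omega curve
# (standard textbook formula, not the indicator's exact implementation).
import numpy as np

def omega_ratio(returns, mar=0.0):
    r = np.asarray(returns, dtype=float)
    gains = np.clip(r - mar, 0, None).sum()      # probability-weighted gains above MAR
    losses = np.clip(mar - r, 0, None).sum()     # probability-weighted losses below MAR
    return gains / losses if losses > 0 else np.inf

def omega_curve(returns, mars):
    return [omega_ratio(returns, m) for m in mars]

rng = np.random.default_rng(5)
daily_returns = rng.standard_t(df=4, size=500) * 0.01
mars = np.linspace(-0.01, 0.01, 11)              # increasing MAR levels ("Increments")
for m, o in zip(mars, omega_curve(daily_returns, mars)):
    print(f"MAR {m:+.3%}  Omega {o:.2f}")
```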
█ FEATURES
• "Low-Risk security" lets you select the security that you want to use as a benchmark for Omega calculations.
• "Omega Period" is the size of the sample that is used for the calculations.
• “Increments” is the number of Minimal Acceptable Return levels the calculation is carried out on.
• “Other Symbol” lets you select the source of the second curve.
• “Color Settings” lets you set the color for each curve.
Linear Moments
█ OVERVIEW
The Linear Moments indicator, also known as L-moments, is a statistical tool used to estimate the properties of a probability distribution. It is an alternative to conventional moments and is more robust to outliers and extreme values.
█ CONCEPTS
█ Four moments of a distribution
We have mentioned the concept of the Moments of a distribution in one of our previous posts. The method of Linear Moments allows us to calculate more robust measures that describe the shape features of a distribution and are analogous to those of conventional moments. L-moments therefore provide estimates of the location, scale, skewness, and kurtosis of a probability distribution.
The first L-moment, λ₁, is equivalent to the sample mean and represents the location of the distribution. The second L-moment, λ₂, is a measure of the dispersion of the distribution, similar to the sample standard deviation. The third and fourth L-moments, λ₃ and λ₄, respectively, are the measures of skewness and kurtosis of the distribution. Higher order L-moments can also be calculated to provide more detailed information about the shape of the distribution.
One advantage of using L-moments over conventional moments is that they are less affected by outliers and extreme values. This is because L-moments are based on order statistics, which are more resistant to the influence of outliers. By contrast, conventional moments are based on the deviations of each data point from the sample mean, and outliers can have a disproportionate effect on these deviations, leading to skewed or biased estimates of the distribution parameters.
█ Order Statistics
L-moments are statistical measures that are based on linear combinations of order statistics, which are the sorted values in a dataset. This approach makes L-moments more resistant to the influence of outliers and extreme values. However, the computation of L-moments requires sorting the order statistics, which can lead to a higher computational complexity.
To address this issue, we have implemented an Online Sorting Algorithm that efficiently obtains the sorted dataset of order statistics, reducing the time complexity of the indicator. The Online Sorting Algorithm is an efficient method for sorting large datasets that can be updated incrementally, making it well-suited for use in trading applications where data is often streamed in real-time. By using this algorithm to compute L-moments, we can obtain robust estimates of distribution parameters while minimizing the computational resources required.
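For reference, the first four sample L-moments can be computed from the order statistics via probability-weighted moments; the batch Python sketch below uses the standard estimators rather than the online-sorting implementation described above:

```python
# Batch Python sketch of the standard sample L-moment estimators via
# probability-weighted moments (not the online-sorting implementation).
import numpy as np

def sample_l_moments(x):
    """Return (lambda1, lambda2, lambda3, lambda4) for a 1-D sample."""
    xs = np.sort(np.asarray(x, dtype=float))          # order statistics
    n = len(xs)
    j = np.arange(1, n + 1)
    # Probability-weighted moments b_0..b_3
    b0 = xs.mean()
    b1 = np.sum((j - 1) / (n - 1) * xs) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * xs) / n
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * xs) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(6)
returns = rng.standard_t(df=4, size=1000) * 0.01       # heavy-tailed sample
l1, l2, l3, l4 = sample_l_moments(returns)
print("L-scale:", l2, " L-skewness:", l3 / l2, " L-kurtosis:", l4 / l2)
```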
█ Bias and efficiency of an estimator
One of the key advantages of L-moments over conventional moments is that they approach their asymptotic normal distribution more closely than conventional moments. This means that as the sample size increases, the L-moments provide more accurate estimates of the distribution parameters.
Asymptotic normality is a statistical property that describes the behavior of an estimator as the sample size increases. As the sample size gets larger, the distribution of the estimator approaches a normal distribution, which is a bell-shaped curve. The mean and variance of the estimator are also related to the true mean and variance of the population, and these relationships become more accurate as the sample size increases.
The concept of asymptotic normality is important because it allows us to make inferences about the population based on the properties of the sample. If an estimator is asymptotically normal, we can use the properties of the normal distribution to calculate the probability of observing a particular value of the estimator, given the sample size and other relevant parameters.
In the case of L-moments, the fact that they approach their asymptotic normal distribution more closely than conventional moments means that they provide more accurate estimates of the distribution parameters as the sample size increases. This is especially useful in situations where the sample size is small, such as when working with financial data. By using L-moments to estimate the properties of a distribution, traders can make more informed decisions about their investments and manage their risk more effectively.
Below we can see the empirical distributions of the Variance and L-scale estimators. We ran 10000 simulations with a sample size of 100. Here we can clearly see how the L-moment estimator approaches the normal distribution more closely and how such an estimator can be more representative of the underlying population.
█ WAYS TO USE THIS INDICATOR
The Linear Moments indicator can be used to estimate the L-moments of a dataset and provide insights into the underlying probability distribution. By analyzing the L-moments, traders can make inferences about the shape of the distribution, such as whether it is symmetric or skewed, and the degree of its spread and peakedness. This information can be useful in predicting future market movements and developing trading strategies.
One can also compare the L-moments of the dataset at hand with the L-moments of certain commonly used probability distributions. Finance is especially known for the use of certain fat-tailed distributions such as Laplace or Student-t. We have built in the theoretical values of L-kurtosis for certain common distributions. In this way one can compare the observed L-kurtosis with that of the selected theoretical distribution.
█ FEATURES
Source Settings
Source - Select the source you wish the indicator to calculate on
Source Selection - Select whether you wish to calculate on the source value or its log return
Moments Settings
Moments Selection - Select the L-moment you wish to be displayed
Lookback - Determine the sample size you wish the L-moments to be calculated with
Theoretical Distribution - This setting is only for investigating the kurtosis of our dataset. One can compare our observed kurtosis with the kurtosis of a selected theoretical distribution.
Historical Volatility Estimators
Historical volatility is a statistical measure of the dispersion of returns for a given security or market index over a given period. This indicator provides different historical volatility model estimators with percentile gradient coloring and a volatility stats panel.
█ OVERVIEW
There are multiple ways to estimate historical volatility other than the traditional close-to-close estimator. This indicator provides different range-based volatility estimators that take the high, low and open into account for the volatility calculation, and volatility estimators that use other statistical measurements instead of standard deviation. The gradient coloring and stats panel provide an overview of how high or low the current volatility is compared to its historical values.
█ CONCEPTS
We have mentioned the concepts of historical volatility in our previous indicators, Historical Volatility, Historical Volatility Rank, and Historical Volatility Percentile. You can check the definitions in those scripts. The basic calculation is just the sample standard deviation of log returns scaled with the square root of time. The main focus of this script is the difference between volatility models.
Close-to-Close HV Estimator: Close-to-Close is the traditional historical volatility calculation. It uses the sample standard deviation. Note: the TradingView built-in historical volatility value is a bit off because it uses the population standard deviation instead of the sample standard deviation. N - 1 should be used here to get rid of the sampling bias.
Pros:
• Close-to-Close HV estimators are the most commonly used estimators in finance. The calculation is straightforward and easy to understand. When people reference historical volatility, most of the time they are talking about the close to close estimator.
Cons:
• The Close-to-Close estimator only calculates volatility based on the closing price. It does not take into account intraday volatility, such as the high and low. It also does not take into account the jump when the open and close prices are not the same.
• Close-to-Close weights past volatility equally during the lookback period, while there are other ways to weight the historical data.
• Close-to-Close is calculated based on standard deviation, so it is vulnerable to returns that are not normally distributed and have fat tails. Mean and median absolute deviation make the historical volatility more stable with extreme values.
Parkinson HV Estimator:
• Parkinson was one of the first to come up with improvements to the historical volatility calculation.
• Parkinson suggests that using the High and Low of each bar can represent volatility better, as it takes into account intraday volatility. So Parkinson HV is also known as Parkinson High Low HV.
• It is about 5.2 times more efficient than the Close-to-Close estimator. But it does not take into account jumps and drift. Therefore, it underestimates volatility. Note: By dividing the Parkinson volatility by the Close-to-Close volatility you can get a similar result to the Variance Ratio Test. It is called the Parkinson number. It can be used to test if the market follows a random walk. (It is mentioned in Nassim Taleb's Dynamic Hedging book but it seems like he made a mistake and wrote the ratio wrongly.)
Garman-Klass Estimator:
• Garman-Klass expanded on Parkinson's estimator. Where Parkinson uses only the high and low, the Garman-Klass method uses the open, high, low and close to form a minimum-variance estimator.
• It is about 7.4 times more efficient than the traditional estimator. But, like Parkinson HV, it ignores jumps and drift, and therefore underestimates volatility.
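A minimal sketch of the standard Garman-Klass formula, for reference (names are illustrative):
```python
import numpy as np

def garman_klass_hv(open_, high, low, close, periods_per_year=252):
    """Garman-Klass OHLC volatility estimator, annualized."""
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    var = 0.5 * np.log(h / l) ** 2 - (2.0 * np.log(2.0) - 1.0) * np.log(c / o) ** 2
    return np.sqrt(var.mean() * periods_per_year)
```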
Rogers-Satchell Estimator:
• Rogers and Satchell identified a drawback in the Garman-Klass estimator: it assumes price follows Brownian motion with zero drift.
• The Rogers-Satchell estimator is calculated from the open, high, low and close, and it can handle drift in the series.
• Rogers-Satchell HV is more efficient than Garman-Klass HV when there is drift in the data, but slightly less efficient when the drift is zero. The estimator does not handle jumps, so it still underestimates volatility.
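For reference, the textbook Rogers-Satchell formula as a short sketch (names are illustrative):
```python
import numpy as np

def rogers_satchell_hv(open_, high, low, close, periods_per_year=252):
    """Rogers-Satchell drift-independent volatility estimator, annualized."""
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    rs = np.log(h / c) * np.log(h / o) + np.log(l / c) * np.log(l / o)
    return np.sqrt(rs.mean() * periods_per_year)
```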
Garman-Klass Yang-Zhang extension:
• Yang-Zhang extended Garman-Klass HV so that it can handle jumps. However, unlike the Rogers-Satchell estimator, it cannot handle drift. It is about 8 times more efficient than the traditional estimator.
• The Garman-Klass Yang-Zhang extension HV takes the same value as Garman-Klass when there is no gap in the data, as with cryptocurrencies that trade around the clock.
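A sketch of the textbook Garman-Klass Yang-Zhang extension (Garman-Klass plus an overnight-jump term); names are illustrative:
```python
import numpy as np

def gkyz_hv(open_, high, low, close, periods_per_year=252):
    """Garman-Klass Yang-Zhang extension: overnight jump term plus the Garman-Klass terms."""
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    jump = np.log(o[1:] / c[:-1]) ** 2                                   # open vs previous close
    gk = 0.5 * np.log(h[1:] / l[1:]) ** 2 - (2.0 * np.log(2.0) - 1.0) * np.log(c[1:] / o[1:]) ** 2
    return np.sqrt((jump + gk).mean() * periods_per_year)
```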
Yang-Zhang Estimator:
• The Yang-Zhang estimator combines the Garman-Klass and Rogers-Satchell estimators, so it is based on the open, high, low and close and can handle non-zero drift. The calculation is further extended so that the estimator also handles overnight jumps in the data.
• This estimator is the most powerful estimator among the range-based estimators. It has the minimum variance error among them, and it is 14 times more efficient than the close-to-close estimator. When the overnight and daily volatility are correlated, it might underestimate volatility a little.
• 1.34 is the optimal value for alpha according to their paper. The alpha constant in the calculation can be adjusted in the settings. Note: There are already some volatility estimators coded on TradingView; some of them are right, some of them are wrong. But for the Yang-Zhang estimator I have not seen a correct version on TV.
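For reference, a sketch of the textbook Yang-Zhang combination (overnight variance plus k times the open-to-close variance plus (1 - k) times the Rogers-Satchell term, with alpha defaulting to 1.34 as described above); names are illustrative:
```python
import numpy as np

def yang_zhang_hv(open_, high, low, close, periods_per_year=252, alpha=1.34):
    """Textbook Yang-Zhang estimator, annualized."""
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    n = len(c) - 1
    overnight = np.log(o[1:] / c[:-1])      # close-to-open (jump) returns
    open_close = np.log(c[1:] / o[1:])      # open-to-close returns
    rs = (np.log(h[1:] / c[1:]) * np.log(h[1:] / o[1:]) +
          np.log(l[1:] / c[1:]) * np.log(l[1:] / o[1:])).mean()
    k = (alpha - 1.0) / (alpha + (n + 1.0) / (n - 1.0))   # 0.34 / (1.34 + (N+1)/(N-1)) when alpha = 1.34
    var = np.var(overnight, ddof=1) + k * np.var(open_close, ddof=1) + (1.0 - k) * rs
    return np.sqrt(var * periods_per_year)
```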
EWMA Estimator:
• EWMA stands for Exponentially Weighted Moving Average. The Close-to-Close and the other estimators above weight past observations equally.
• EWMA weights recent volatility more and older volatility less. This is useful because volatility is usually autocorrelated, with close to exponential decay (as you can see by running an autocorrelation function indicator on absolute or squared returns). That autocorrelation shows up as volatility clustering, which makes recent volatility more informative, so exponentially weighted volatility suits this property well.
• RiskMetrics uses 0.94 for lambda, which corresponds roughly to a 30-period lookback. In this indicator, lambda is coded to adjust with the lookback. EWMA also makes it easy to forecast volatility one period ahead.
• However, EWMA volatility is not used that often because there are better ways to weight volatility, such as ARCH and GARCH.
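A minimal sketch of the EWMA recursion (RiskMetrics-style, lambda = 0.94); names are illustrative, and as noted above the script adjusts lambda with the lookback:
```python
import numpy as np

def ewma_hv(close, lam=0.94, periods_per_year=252):
    """EWMA variance recursion: var_t = lam * var_{t-1} + (1 - lam) * r_t^2, annualized."""
    r = np.diff(np.log(np.asarray(close, dtype=float)))
    var = r[0] ** 2                         # seed the recursion with the first squared return
    for x in r[1:]:
        var = lam * var + (1.0 - lam) * x ** 2
    return np.sqrt(var * periods_per_year)
```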
Adjusted Mean Absolute Deviation Estimator:
• This estimator does not use the standard deviation to calculate volatility. It uses the distance of the log return from its moving average as the volatility measure.
• It is a simple and effective way to calculate volatility. The difference is that this estimator does not need to square the log returns to obtain the volatility. The paper suggests this estimator has more predictive power.
• The mean absolute deviation here is adjusted to get rid of the bias. It scales the value so that it can be comparable to the other historical volatility estimators.
• In Nassim Taleb's paper he notes that people sometimes confuse MAD with standard deviation when measuring volatility, and he suggests using the mean absolute deviation instead of the standard deviation when talking about volatility.
Adjusted Median Absolute Deviation Estimator:
• This is another estimator that does not use standard deviation to measure volatility.
• Using the median gives a more robust estimator when there are extreme values in the returns. It works better with fat-tailed distributions.
• The median absolute deviation is adjusted by maximum likelihood estimation so that its value is scaled to be comparable to other volatility estimators.
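For reference, a sketch of the usual normal-distribution adjustments: the mean absolute deviation is rescaled by sqrt(pi/2) and the median absolute deviation by 1.4826 so that both become comparable to a standard deviation. The script's exact adjustment may differ; names are illustrative:
```python
import numpy as np

def adjusted_mad_hv(close, periods_per_year=252):
    """Mean absolute deviation of log returns, rescaled to be stdev-comparable under normality."""
    r = np.diff(np.log(np.asarray(close, dtype=float)))
    return np.mean(np.abs(r - r.mean())) * np.sqrt(np.pi / 2.0) * np.sqrt(periods_per_year)

def adjusted_median_ad_hv(close, periods_per_year=252):
    """Median absolute deviation of log returns, rescaled with the usual 1.4826 consistency factor."""
    r = np.diff(np.log(np.asarray(close, dtype=float)))
    return np.median(np.abs(r - np.median(r))) * 1.4826 * np.sqrt(periods_per_year)
```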
█ FEATURES
• You can select the volatility estimator models in the Volatility Model input
• Historical Volatility is annualized. You can type in the number of trading days in a year in the Annual input, based on the asset you are trading.
• Alpha is used to adjust the Yang Zhang volatility estimator value.
• Percentile Length adjusts the lookback used for the percentile coloring.
• The gradient coloring is based on the percentile value (0–100). The higher the percentile value, the warmer the color, indicating high volatility. The lower the percentile value, the colder the color, indicating low volatility.
• When percentile coloring is off, it won’t show the gradient color.
• You can also use the invert color option to give high volatility a cold color and low volatility a warm color. Volatility has some mean-reversion properties, so when volatility is very low and the color is close to aqua, you would expect it to expand soon; when volatility is very high and close to red, you would expect it to contract and cool down.
• When the background signal is on, it gives a signal when HVP is very low, warning that there might be a volatility expansion soon.
• You can choose the plot style, such as lines, columns, or areas, in the plotstyle input.
• When the show information panel is on, a small panel will display on the right.
• The information panel displays the historical volatility model name, the 50th percentile of HV, and the HV percentile. The 50th percentile of HV is the median of HV; you can compare it with the current HV value to see how far above or below the median you are, which gives you an idea of how high or low HV is. The HV Percentile value runs from 0 to 100 and tells you the percentage of periods over the entire lookback in which historical volatility traded below the current level. The higher the HVP, the higher HV is relative to its own history. The gradient color is also based on this value.
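As a rough illustration of the HV Percentile reading described above (a hypothetical helper, not the script's code):
```python
import numpy as np

def hv_percentile(hv_values, lookback=252):
    """Share (0-100) of the lookback window in which HV traded below its current value."""
    window = np.asarray(hv_values, dtype=float)[-lookback:]
    return 100.0 * np.mean(window < window[-1])
```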
█ HOW TO USE
If you haven't used our HVP indicator, we suggest starting with it first. This indicator is essentially historical volatility with HVP coloring: it displays HVP values in the color and the panel, but it is not range-bound like HVP and it plots HV values. The gradient color gives a quick read on how high or low the current volatility is compared to its historical values, and it can also help time the market based on volatility mean reversion: high volatility suggests volatility will contract soon (the move is about to end and the market will cool down), while low volatility suggests a volatility expansion soon (the market is about to move).
█ FINAL THOUGHTS
HV vs ATR: The volatility estimator concepts above trace the history of historical volatility estimation in quantitative finance, a timeline of range-based estimators running from Parkinson volatility to Yang-Zhang volatility. We hope these descriptions show that even though ATR is the most popular volatility indicator in technical analysis, it is not the best estimator. Almost no one in quant finance uses ATR to measure volatility (otherwise these papers would be about improving ATR measurements rather than HV). As you can see, there are far more advanced volatility estimators that also take the open, high, low and close into account. HV values are based on log returns with some calculation adjustments, and they can be scaled in terms of price just like ATR. For profit-taking ranges, ATR is not based on probabilities, whereas historical volatility can be fed into a probability distribution function to calculate the probability of a range, as in the Expected Move indicator.
Other Estimators: There are also more advanced historical volatility estimators, such as high-frequency HV that uses intraday data to calculate volatility; we will publish a high-frequency volatility estimator in the future. There are also ARCH and GARCH models that take volatility clustering into account. GARCH models require maximum likelihood estimation, which needs a solver to find the best weights for each component, and this is currently not possible on TV due to the computational power required. All the other indicators claiming to be GARCH are wrong.
SYMBOL NOTES - UNCORRELATED TRADING GROUPS
Write symbol-specific notes that only appear on that chart. Organized into 6 uncorrelated groups for safe multi-pair trading.
📝 SYMBOL NOTES - UNCORRELATED TRADING GROUPS
This indicator solves two problems every serious trader faces:
1. Keeping Track of Your Analysis
Write notes for each trading pair and they'll only appear when you view that specific chart. No more forgetting your key levels, trade ideas, or analysis!
2. Avoiding Correlated Risk
The symbols are organized into 6 groups where ALL pairs within each group are completely UNCORRELATED. Trade any combination from the same group without worrying about double exposure.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 THE PROBLEM THIS SOLVES
Have you ever:
- Opened XAUUSD and EURUSD at the same time, then Fed news hit and BOTH positions went against you?
- Traded GBPUSD and GBPJPY together, then BOE announcement stopped out both trades?
- Forgotten what levels you were watching on a pair?
This indicator helps you avoid these costly mistakes!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 THE 6 UNCORRELATED GROUPS
Each group contains pairs that share NO common currency:
```
GRUP 1: XAUUSD • EURGBP • NZDJPY • AUDCHF • NATGAS
GRUP 2: EURUSD • GBPJPY • AUDNZD • CADCHF
GRUP 3: GBPUSD • EURJPY • AUDCAD • NZDCHF
GRUP 4: USDJPY • EURCHF • GBPAUD • NZDCAD
GRUP 5: USDCAD • EURAUD • GBPCHF
GRUP 6: NAS100 • DAX40 • UK100 • JPN225
```
**Example - GRUP 1:**
- XAUUSD → Uses USD + Gold
- EURGBP → Uses EUR + GBP
- NZDJPY → Uses NZD + JPY
- AUDCHF → Uses AUD + CHF
- NATGAS → Commodity (independent)
= 7 different currencies, ZERO overlap!
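For the curious, a quick illustrative check (hypothetical helper names, only the first two groups shown) that no currency leg repeats within a group:
```python
FX_CODES = {"USD", "EUR", "GBP", "JPY", "AUD", "NZD", "CAD", "CHF", "XAU"}

def legs(symbol):
    """Split 6-letter FX pairs into their two currency legs; treat everything else as one instrument."""
    if len(symbol) == 6 and symbol[:3] in FX_CODES and symbol[3:] in FX_CODES:
        return [symbol[:3], symbol[3:]]
    return [symbol]

groups = {
    "GRUP 1": ["XAUUSD", "EURGBP", "NZDJPY", "AUDCHF", "NATGAS"],
    "GRUP 2": ["EURUSD", "GBPJPY", "AUDNZD", "CADCHF"],
}
for name, pairs in groups.items():
    exposure = [leg for p in pairs for leg in legs(p)]
    assert len(exposure) == len(set(exposure)), f"{name} has a shared currency leg"
    print(name, "->", exposure)
```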
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ HOW TO USE
1. Add indicator to any chart
2. Open Settings (gear icon ⚙️)
3. Find your symbol's group and input field
4. Write your note (support levels, trade ideas, etc.)
5. Switch charts - your note appears only on that symbol!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚙️ SETTINGS
- Note Position: Choose where the note box appears (6 positions)
- Text Size: Tiny, Small, Normal, or Large
- Show Group Name: Display which correlation group
- Show Symbol Name: Display current symbol
- Colors: Customize background, text, group label, and border colors
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 TRADING STRATEGY TIPS
Safe Multi-Pair Trading:
1. Pick ONE group for the day
2. Look for setups on ANY symbol in that group
3. Open positions freely - they won't correlate!
4. Even if major news hits, only ONE position is affected
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔧 COMPATIBLE WITH
- All major forex brokers
- Prop firms (FTMO, Alpha Capital, etc.)
- Works on any timeframe
- Futures symbols supported (MGC, M6E, etc.)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Visible Range
Overview
This is a precision tool designed for quantitative traders and engineers who need exact control over their chart's visual scope. Unlike standard time calculations that fail in markets with trading breaks (such as A-shares, futures, or stocks), this indicator uses a loop-back mechanism to count the actual number of visible bars, ensuring your indicators (e.g., MA60, MA200) have sufficient sample data.
Why use this? If you use multi-timeframe layouts (e.g., Daily/Hourly/15s), it is critical to know exactly how much data is visible.
The Problem: In markets like the Chinese A-Share market (T+1, 4-hour trading day), calculating Time Range / Timeframe results in massive errors because it includes closed market hours (lunch breaks, nights, weekends).
The Solution: This script iterates through the visible range to count the true bar_index, providing 100% accurate data density metrics.
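For intuition, here is a small Python sketch of the counting idea (the real indicator loops over the visible chart bars in its own script; the function below is purely illustrative):
```python
def visible_bar_count(bar_times, left_time, right_time):
    """Count bars whose timestamps fall inside the visible range.

    Lunch breaks, nights and weekends never produce bars, so they are skipped
    automatically; there is no error from dividing a time span by the timeframe.
    """
    return sum(1 for t in bar_times if left_time <= t <= right_time)

# e.g. with bar open times in epoch seconds:
print(visible_bar_count([100, 200, 300, 900, 1000], left_time=150, right_time=950))  # -> 3
```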
Key Features
True Bar Counting: Uses a for loop to count actual candles, ignoring market breaks. Perfect for non-24/7 markets.
Integer Precision: Displays time ranges (Days, Hours, Mins, Secs) in clean integers. No messy decimals.
Compact UI: Displays information in a single line (e.g., View: 30 Days (120 Bars)), defaulting to the top-right corner to save screen space.
Fully Customizable: Adjustable position, text size, and colors to fit any dark/light theme.
Performance Optimized: Includes max_bars_back limits to prevent browser lag on deep history lookups.
Settings
Position: Default Top Right (can be moved to any corner).
Max Bar Count: Default 5000 (Safety limit for loop calculation).
Blockchain Fundamentals: PPT [CR]
Blockchain Fundamentals: PPT
A proprietary market positioning indicator that analyzes price behavior using percentile-based statistical methods. The PPT (Percentile Position Transform) provides a normalized oscillator view of market conditions, helping traders identify potential trend exhaustion and reversal zones through multi-timeframe statistical analysis.
█ FEATURES
Dual Signal Lines
The indicator plots two distinct signals:
- White Line — Primary signal representing the normalized, smoothed market position. This is the main signal used for trading decisions.
- Red Line — Raw statistical measurement before final normalization. Useful for identifying divergences and signal development.
Background Coloring
Dynamic background colors provide at-a-glance market context:
- Green Background — Indicates bullish positioning when the primary signal exceeds the buffer threshold.
- Red Background — Indicates bearish positioning when the primary signal falls below the buffer threshold.
- Gray Background — Neutral zone where no clear directional bias is present.
Flip Buffer
An adjustable threshold system designed to reduce noise and false signals:
- Enable Flip Buffer — Toggle the buffer system on or off.
- Buffer Size — Adjustable threshold level (default -0.1) that determines when background colors change. Higher values reduce sensitivity; lower values increase responsiveness.
Reference Levels
Three horizontal reference lines provide context:
- Center line at 0 — Neutral market position.
- Upper dashed line at +1 — Extreme bullish positioning threshold.
- Lower dashed line at -1 — Extreme bearish positioning threshold.
█ HOW TO USE
Signal Interpretation
The indicator operates as a mean-reversion oscillator within a normalized range:
1 — Values approaching +1 suggest extended bullish conditions where price may be overextended relative to recent history.
2 — Values approaching -1 suggest extended bearish conditions where price may be oversold relative to recent history.
3 — Crosses of the center line (0) indicate shifts in the underlying statistical trend.
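The exact PPT calculation is proprietary and not disclosed here. Purely as an illustration of what a generic percentile-position oscillator looks like (this is not the author's formula, and every name below is hypothetical), one could map a rolling percentile rank into the -1 to +1 range described above:
```python
import numpy as np

def percentile_position(values, lookback=100, smooth=5):
    """Generic percentile-position oscillator in [-1, +1]; illustrative only, not the PPT formula."""
    v = np.asarray(values, dtype=float)
    raw = []
    for i in range(lookback - 1, len(v)):
        window = v[i - lookback + 1 : i + 1]
        pct = np.mean(window <= v[i])      # percentile rank of the current value within the window
        raw.append(2.0 * pct - 1.0)        # map (0, 1] onto (-1, +1]; 0 is roughly the median position
    raw = np.asarray(raw)
    kernel = np.ones(smooth) / smooth      # simple smoothing, analogous to a smoothed primary line
    return raw, np.convolve(raw, kernel, mode="valid")
```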
Trading Applications
While specific trading strategies will vary by individual approach and market conditions:
- Consider the extremes (+1 and -1 levels) as potential areas of interest for mean-reversion setups.
- Background color changes can help identify when market positioning shifts from one regime to another.
- Divergences between the white and red lines may provide early warning of potential trend changes.
- The buffer zone (gray background) represents areas where market positioning is relatively neutral.
█ LIMITATIONS
- The indicator requires sufficient historical data to function properly. In assets with limited price history, the statistical measurements may be less reliable during early data periods.
- As a percentile-based system, the indicator is relative to recent history. Changing market regimes may require interpretation adjustments.
- Not designed for high-frequency or scalping strategies due to its daily data dependency.
- Background colors are visual aids and should not be used as standalone trading signals without additional confirmation.
█ NOTES
This indicator is part of the Blockchain Fundamentals suite and represents proprietary research into statistical market positioning analysis.
Users should experiment with the buffer settings to match their risk tolerance and trading style. More conservative traders may prefer larger buffer values to reduce signal frequency, while active traders might benefit from smaller buffers that provide earlier warnings.
"Smart Dashboard" for Institutional Price Targets.This script is designed to create a "Smart Dashboard" for Institutional Price Targets.
Think of it as a tool that asks, "What does Wall Street think this stock is worth?" and then draws specific "Buy Zones" on your chart based on those professional valuations.
Here is a breakdown of how it works in plain English for an investor:
1. The Core Concept: Wall Street Consensus
The indicator doesn't use standard technical analysis (like RSI or Moving Averages). Instead, it looks at Fundamental Data. It pulls the average Price Target set by institutional analysts (banks, hedge funds, research firms).
Example: If Goldman Sachs, Morgan Stanley, and JP Morgan all agree that NVDA is worth $150, this tool grabs that $150 number.
2. The "Data Engine" (The Smart Part)
The code includes a sophisticated "search engine" (Section 2 & 3 of the code) to ensure it finds the most accurate price target.
The Problem: Sometimes data feeds are empty, or they are in the wrong currency (e.g., a Canadian stock showing a price target in USD, which makes the chart look broken).
The Solution: This script follows a "Waterfall" priority list to find data:
Priority 1: It checks NASDAQ data first (often the most accurate for tech stocks like Apple or Tesla).
Priority 2: If the local currency data is missing, it forces a search for USD data (this is the "USD Fix" in the title).
Priority 3: It checks NYSE data.
Backup: If all else fails, it uses the generic TradingView average.
In short: It works very hard to make sure it doesn't give you a blank screen or a currency error.
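The waterfall itself is just a fallback chain. A minimal sketch of the pattern (the variable names and sample values are placeholders, not the script's actual data requests):
```python
def first_available(*candidates):
    """Return the first candidate that is not None: a simple waterfall / fallback chain."""
    for value in candidates:
        if value is not None:
            return value
    return None

# Hypothetical values standing in for the four sources described above.
nasdaq_target, usd_target, nyse_target, tv_average = None, 152.0, 149.5, 150.0
print(first_available(nasdaq_target, usd_target, nyse_target, tv_average))  # -> 152.0 (Priority 2 wins)
```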
3. The "Institutional Buy Zones" (The Strategy)
Once the tool finds the "Fair Value" (the Analyst Target), it calculates deep discount levels where an institutional investor might want to buy the dip.
It draws four colored lines below the current price:
Target (Dashed Line): This is the Fair Value. (The goal).
Level 1 (Green Line - 90%): This is 10% below fair value. A standard "buy the dip" zone.
Level 2 (Blue Line - 70%): This is 30% below fair value. This is considered a "Value Buy" or a "Deep Discount."
Level 3 (Orange Line - ~66.5%): A specific Fibonacci-style extension of the deep discount.
Level 4 (Red Line - 63%): The "Crash" buy zone. If price hits this, the stock is trading massively below what analysts think it is worth.
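A minimal sketch of those levels using the percentages listed above (the script's exact factors and rounding may differ):
```python
def buy_zones(analyst_target):
    """Discount levels expressed as percentages of the analyst fair-value target."""
    return {
        "Target (fair value)": analyst_target,
        "Level 1 (90%)": analyst_target * 0.90,    # 10% below fair value
        "Level 2 (70%)": analyst_target * 0.70,    # 30% below fair value
        "Level 3 (66.5%)": analyst_target * 0.665,
        "Level 4 (63%)": analyst_target * 0.63,
    }

print(buy_zones(150.0))   # e.g. a $150 consensus target
```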
4. The Dashboard
On the screen (top right by default), there is a clean table that summarizes everything:
Target: Tells you the exact price analysts are aiming for.
Dist %: Tells you how far away the current price is from that target (e.g., "+20%" means the stock needs to rise 20% to hit the target).
Source: Tells you where it found the data (e.g., "Nasdaq FQ"), so you know if the data is trustworthy.
How an Investor Uses This:
Validation: You want to buy a stock, but you check this tool. If the price is above the dashed Target line, the tool is telling you the stock is effectively "overpriced" compared to Wall Street's expectations.
Entry Points: You are waiting to enter a position. You set limit orders at the Green (90%) or Blue (70%) lines, knowing these are math-based discount levels relative to the company's fundamental valuation.
Summary: It automates the research process of looking up analyst price targets and draws "Sale Price" lines on your chart automatically.
🎯 SHORT BAG DETECTOR
🎯 SHORT BAG DETECTOR: The Liquidation Squeeze Signal
💡 What This Indicator Does
The SHORT BAG DETECTOR is a powerful volatility and volume-based indicator designed to identify high-probability price areas where trapped short sellers (those holding a "short bag" of losing positions) are most vulnerable to a short squeeze or liquidation event.
It automatically scans for a rare confluence of three critical market conditions, generating a single, high-conviction signal (the large orange marker) for optimal entry timing.
🔎 The 3 Confluence Conditions
The main OLD BAG DETECTED! signal only triggers when all three of the following conditions occur simultaneously:
Old Level Touch: The price returns to a significant, aged historical pivot high or low price (established over the last 150 days). This level represents the average entry price for a large number of short or long positions.
Significant Gap: The current day opens with a meaningful price gap (user-defined percentage) against the direction of the trapped traders. This creates immediate urgency and stress for the "bag holders."
Volume Spike: The signal is confirmed by a massive volume spike (user-defined multiplier over average volume). This confirms that the movement is driven by forced liquidation (short-covering) and aggressive buying/selling, not just minor market noise.
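A hedged sketch of how the three conditions could be combined into one boolean signal (the thresholds below are placeholders for the user-defined settings, not the script's defaults):
```python
def old_bag_signal(price, old_level, gap_pct, volume, avg_volume,
                   level_tolerance_pct=0.2, min_gap_pct=1.0, volume_multiplier=3.0):
    """True only when all three confluence conditions line up on the same bar."""
    touched_old_level = abs(price - old_level) / old_level * 100.0 <= level_tolerance_pct
    significant_gap = abs(gap_pct) >= min_gap_pct
    volume_spike = volume >= volume_multiplier * avg_volume
    return touched_old_level and significant_gap and volume_spike
```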
📊 Key Features
High-Conviction Orange Signal: Marks the optimal timing for a potential squeeze/reversal driven by short liquidation.
Gap Markers (Green/Red): Clearly identifies significant bullish and bearish gaps on the chart.
Toggleable Minor Levels (Blue Labels): Shows all historical pivot levels being tracked for full context (can be easily disabled in the settings to reduce chart clutter).
📈 How to Use the Signal
The indicator is best used to identify continuation trades or volatile reversals. When the OLD BAG DETECTED! signal appears:
Bullish Signal (When price gaps up to an old low): Indicates a strong potential reversal as shorts from that low level are forced to cover.
Bearish Signal (When price gaps down to an old high): Indicates a potential reversal as longs from that high level are forced to liquidate.
This tool is perfect for traders looking to capitalize on volatility events and forced liquidations.
ES-VIX Expected Daily Move
This indicator calculates the expected daily price movement for ES futures based on current volatility levels as measured by the VIX (CBOE Volatility Index).
Formula:
Expected Daily Move = (ES Price × VIX Price) / √252 / 100
The calculation converts the annualized VIX volatility into an expected daily move by dividing by the square root of 252 (the approximate number of trading days per year).
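A quick worked example of the formula (illustrative numbers, not live data):
```python
import math

def expected_daily_move(es_price, vix):
    """Expected Daily Move = (ES price x VIX) / sqrt(252) / 100."""
    return es_price * vix / math.sqrt(252) / 100.0

move = expected_daily_move(5000.0, 16.0)
print(f"{move:.1f} pts ({move / 5000.0 * 100:.2f}%)")   # about 50.4 pts, roughly 1.01%
```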
Features:
Real-time calculation using current ES futures price and VIX level
Histogram visualization in a separate pane for easy trend analysis
Information table displaying:
Current ES futures price
Current VIX level
Expected daily move in points
Expected daily move as a percentage
RSI os/ob overlay on candle - RichFintech.com
RSI os/ob overlay on candle - RichFintech.com reduces the time your eyes spend looking at two panes, making analysis easier and less tiring on the eyes.
ETH Upgrades: Exact Price + Date
This indicator places markers on the chart that show you the exact date and price where each Ethereum upgrade occurred.