Machine Learning: Lorentzian Classification
█ OVERVIEW
A Lorentzian Distance Classifier (LDC) is a Machine Learning classification algorithm capable of categorizing historical data from a multi-dimensional feature space. This indicator demonstrates how Lorentzian Classification can also be used to predict the direction of future price movements when used as the distance metric for a novel implementation of an Approximate Nearest Neighbors (ANN) algorithm.
█ BACKGROUND
In physics, Lorentzian space is perhaps best known for its role in describing the curvature of space-time in Einstein's theory of General Relativity (2). Interestingly, however, this abstract concept from theoretical physics also has tangible real-world applications in trading.
Recently, it was hypothesized that Lorentzian space was also well-suited for analyzing time-series data (4), (5). This hypothesis has been supported by several empirical studies that demonstrate that Lorentzian distance is more robust to outliers and noise than the more commonly used Euclidean distance (1), (3), (6). Furthermore, Lorentzian distance was also shown to outperform dozens of other highly regarded distance metrics, including Manhattan distance, Bhattacharyya similarity, and Cosine similarity (1), (3). Outside of Dynamic Time Warping based approaches, which are unfortunately too computationally intensive for PineScript at this time, the Lorentzian Distance metric consistently scores the highest mean accuracy over a wide variety of time series data sets (1).
Euclidean distance is commonly used as the default distance metric for NN-based search algorithms, but it may not always be the best choice when dealing with financial market data. This is because financial market data can be significantly impacted by proximity to major world events such as FOMC Meetings and Black Swan events. This event-based distortion of market data can be framed as similar to the gravitational warping caused by a massive object on the space-time continuum. For financial markets, the analogous continuum that experiences warping can be referred to as "price-time".
Below is a side-by-side comparison of how neighborhoods of similar historical points appear in three-dimensional Euclidean Space and Lorentzian Space:
This figure demonstrates how Lorentzian space can better accommodate the warping of price-time since the Lorentzian distance function compresses the Euclidean neighborhood in such a way that the new neighborhood distribution in Lorentzian space tends to cluster around each of the major feature axes in addition to the origin itself. This means that, even though some nearest neighbors will be the same regardless of the distance metric used, Lorentzian space will also allow for the consideration of historical points that would otherwise never be considered with a Euclidean distance metric.
Intuitively, the advantage inherent in the Lorentzian distance metric makes sense. For example, it is logical that the price action that occurs in the hours after Chairman Powell finishes delivering a speech would resemble at least some of the previous times when he finished delivering a speech. This may be true regardless of other factors, such as whether or not the market was overbought or oversold at the time or if the macro conditions were more bullish or bearish overall. These historical reference points are extremely valuable for predictive models, yet the Euclidean distance metric would miss these neighbors entirely, often in favor of irrelevant data points from the day before the event. By using Lorentzian distance as a metric, the ML model is instead able to consider the warping of price-time caused by the event and, ultimately, transcend the temporal bias imposed on it by the time series.
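To make the metric itself concrete, below is a minimal, hedged sketch (in Pine Script, not the indicator's actual source code) of how a Lorentzian distance between the current bar's feature vector and a historical bar's feature vector could be computed using the common log(1 + |difference|) formulation, shown next to its Euclidean counterpart for comparison:
// Hedged sketch: Lorentzian vs. Euclidean distance over generic feature arrays
// (the feature arrays and function names here are illustrative only)
lorentzian_distance(array<float> a, array<float> b) =>
    float d = 0.0
    for i = 0 to array.size(a) - 1
        d += math.log(1 + math.abs(array.get(a, i) - array.get(b, i)))
    d
euclidean_distance(array<float> a, array<float> b) =>
    float d = 0.0
    for i = 0 to array.size(a) - 1
        d += math.pow(array.get(a, i) - array.get(b, i), 2)
    math.sqrt(d)
The key point of the comparison is that the logarithm compresses large feature differences, which is what allows distant-but-still-relevant historical bars to remain inside the neighborhood.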
For more information on the implementation details of the Approximate Nearest Neighbors (ANN) algorithm used in this indicator, please refer to the detailed comments in the source code.
█ HOW TO USE
Below is an explanatory breakdown of the different parts of this indicator as it appears in the interface:
Below is an explanation of the different settings for this indicator:
General Settings:
Source - This has a default value of "hlc3" and is used to control the input data source.
Neighbors Count - This has a default value of 8, a minimum value of 1, a maximum value of 100, and a step of 1. It is used to control the number of neighbors to consider.
Max Bars Back - This has a default value of 2000.
Feature Count - This has a default value of 5, a minimum value of 2, and a maximum value of 5. It controls the number of features to use for ML predictions.
Color Compression - This has a default value of 1, a minimum value of 1, and a maximum value of 10. It is used to control the compression factor for adjusting the intensity of the color scale.
Show Exits - This has a default value of false. It controls whether to show the exit threshold on the chart.
Use Dynamic Exits - This has a default value of false. It is used to control whether to attempt to let profits ride by dynamically adjusting the exit threshold based on kernel regression.
Feature Engineering Settings:
Note: The Feature Engineering section is for fine-tuning the features used for ML predictions. The default values are optimized for the 4H to 12H timeframes for most charts, but they should also work reasonably well for other timeframes. By default, the model can support features that accept two parameters (Parameter A and Parameter B, respectively). Even though there are only 4 features provided by default, the same feature with different settings counts as two separate features. If the feature only accepts one parameter, then the second parameter will default to EMA-based smoothing with a default value of 1. These features represent the most effective combination I have encountered in my testing, but additional features may be added as additional options in the future.
Feature 1 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 2 - This has a default value of "WT" and options are: "RSI", "WT", "CCI", "ADX".
Feature 3 - This has a default value of "CCI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 4 - This has a default value of "ADX" and options are: "RSI", "WT", "CCI", "ADX".
Feature 5 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Filters Settings:
Use Volatility Filter - This has a default value of true. It is used to control whether to use the volatility filter.
Use Regime Filter - This has a default value of true. It is used to control whether to use the trend detection filter.
Use ADX Filter - This has a default value of false. It is used to control whether to use the ADX filter.
Regime Threshold - This has a default value of -0.1, a minimum value of -10, a maximum value of 10, and a step of 0.1. It is used to control the Regime Detection filter for detecting Trending/Ranging markets.
ADX Threshold - This has a default value of 20, a minimum value of 0, a maximum value of 100, and a step of 1. It is used to control the threshold for detecting Trending/Ranging markets.
Kernel Regression Settings:
Trade with Kernel - This has a default value of true. It is used to control whether to trade with the kernel.
Show Kernel Estimate - This has a default value of true. It is used to control whether to show the kernel estimate.
Lookback Window - This has a default value of 8 and a minimum value of 3. It is used to control the number of bars used for the estimation. Recommended range: 3-50
Relative Weighting - This has a default value of 8 and a step size of 0.25. It is used to control the relative weighting of time frames. Recommended range: 0.25-25
Start Regression at Bar - This has a default value of 25. It is used to control the bar index on which to start regression. Recommended range: 0-25
Display Settings:
Show Bar Colors - This has a default value of true. It is used to control whether to show the bar colors.
Show Bar Prediction Values - This has a default value of true. It controls whether to show the ML model's evaluation of each bar as an integer.
Use ATR Offset - This has a default value of false. It controls whether to use the ATR offset instead of the bar prediction offset.
Bar Prediction Offset - This has a default value of 0 and a minimum value of 0. It is used to control the offset of the bar predictions as a percentage from the bar high or close.
Backtesting Settings:
Show Backtest Results - This has a default value of true. It is used to control whether to display the win rate of the given configuration.
█ WORKS CITED
(1) R. Giusti and G. E. A. P. A. Batista, "An Empirical Comparison of Dissimilarity Measures for Time Series Classification," 2013 Brazilian Conference on Intelligent Systems, Oct. 2013, DOI: 10.1109/bracis.2013.22.
(2) Y. Kerimbekov, H. Ş. Bilge, and H. H. Uğurlu, "The use of Lorentzian distance metric in classification problems," Pattern Recognition Letters, vol. 84, 170–176, Dec. 2016, DOI: 10.1016/j.patrec.2016.09.006.
(3) A. Bagnall, A. Bostrom, J. Large, and J. Lines, "The Great Time Series Classification Bake Off: An Experimental Evaluation of Recently Proposed Algorithms." ResearchGate, Feb. 04, 2016.
(4) H. Ş. Bilge, Yerzhan Kerimbekov, and Hasan Hüseyin Uğurlu, "A new classification method by using Lorentzian distance metric," ResearchGate, Sep. 02, 2015.
(5) Y. Kerimbekov and H. Şakir Bilge, "Lorentzian Distance Classifier for Multiple Features," Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods, 2017, DOI: 10.5220/0006197004930501.
(6) V. Surya Prasath et al., "Effects of Distance Measure Choice on KNN Classifier Performance - A Review."
█ ACKNOWLEDGEMENTS
@veryfid - For many invaluable insights, discussions, and advice that helped to shape this project.
@capissimo - For open sourcing his interesting ideas regarding various KNN implementations in PineScript, several of which helped inspire my original undertaking of this project.
@RikkiTavi - For many invaluable physics-related conversations and for helping me develop a mechanism for visualizing various distance algorithms in 3D using JavaScript.
@jlaurel - For invaluable literature recommendations that helped me to understand the underlying subject matter of this project.
@annutara - For help in beta-testing this indicator and for sharing many helpful ideas and insights early on in its development.
@jasontaylor7 - For helping to beta-test this indicator and for many helpful conversations that helped to shape my backtesting workflow.
@meddymarkusvanhala - For helping to beta-test this indicator
@dlbnext - For incredibly detailed backtesting of this indicator and for sharing numerous ideas on how the user experience could be improved.
RSI_Heikinashi
📜 Title:
Heikin-Ashi RSI Candle Plot with Multi-Timeframe Analysis and EMA Overlay
📖 Full Description:
This is an original custom indicator that transforms the traditional Relative Strength Index (RSI) into a Heikin-Ashi (HA) candle representation, allowing traders to visualize RSI trends with greater clarity, less noise, and multi-timeframe perspective.
🛠️ Core Concept and Original Method:
Rather than plotting a single RSI line, this script recalculates RSI into a Heikin-Ashi candle format, using a double EMA smoothing method on the RSI data itself.
Here's how the transformation works:
RSI Calculation:
RSI is computed traditionally using Wilder's Moving Average (RMA) for smoothing gains and losses.
The RSI period and price source are fully customizable (default length = 28, source = close).
Heikin-Ashi Style Smoothing (applied to RSI):
The HA Close is calculated as the EMA of the average between the current RSI and previous HA Close.
The HA Open is calculated as the EMA of the average between the previous HA Open and the current HA Close.
The HA High and HA Low are dynamically calculated based on the maximum/minimum values of the current RSI, HA Open, and HA Close.
Smoothing is done via 5-period EMA, which adds a unique layer of trend smoothing without traditional price-based HA calculation.
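As a rough illustration, a hedged Pine Script sketch of this smoothing chain (variable names and the exact order of operations are ours, not necessarily the published script's) could look like:
// Hedged sketch of the HA-style smoothing applied to RSI (illustrative, not the published source)
r = ta.rsi(close, 28)                              // Wilder-smoothed RSI, default length 28
var float haCloseRaw = na
var float haOpenRaw  = na
haCloseRaw := math.avg(r, nz(haCloseRaw[1], r))    // average of current RSI and previous HA close
haOpenRaw  := math.avg(nz(haOpenRaw[1], r), haCloseRaw)
haClose = ta.ema(haCloseRaw, 5)                    // 5-period EMA smoothing layer
haOpen  = ta.ema(haOpenRaw, 5)
haHigh  = math.max(r, math.max(haOpen, haClose))
haLow   = math.min(r, math.min(haOpen, haClose))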
Multi-Timeframe Comparison:
In addition to plotting the chart timeframe HA RSI, the indicator retrieves the 1-hour timeframe HA RSI using request.security.
This allows traders to align trades with higher timeframe RSI trends, a powerful technique for multi-timeframe confirmation.
50 EMA Overlay:
A 50-period Exponential Moving Average (EMA) is plotted over both the chart timeframe HA RSI and the 1-hour HA RSI.
EMA acts as a trend filter or dynamic support/resistance for RSI behavior.
RSI Bands and Visual Aids:
Standard RSI bands at 70 (Overbought), 50 (Midline), and 30 (Oversold) are plotted.
A shaded background between the 30–70 levels helps highlight RSI range-bound movements versus breakout momentum.
🔥 Why this script is original and useful:
Unique Application:
This is not a simple RSI plot or standard Heikin-Ashi candle — it is a specialized smoothing method applied directly to RSI values for a clearer, noise-reduced momentum reading.
Multi-Timeframe Advantage:
Unlike typical RSI indicators, it includes a 1-hour timeframe comparison alongside the chart timeframe, improving decision-making across intraday and swing strategies.
Advanced Smoothing Logic:
Double EMA smoothing of RSI and HA-style recalculations offer a much smoother signal than traditional RSI or basic RSI/EMA crossovers.
Visualized Trend Strength:
Using colored candles instead of just a line enhances readability and gives an intuitive sense of momentum direction, strength, and possible reversals.
Fully Customizable:
Traders can adjust the RSI period and source depending on asset volatility or timeframe preferences.
📋 How to Use:
Look for HA RSI candles color changes for early momentum shifts.
Use the 50 EMA crossovers on HA RSI to confirm larger trend changes.
Compare chart timeframe vs 1H timeframe HA RSI for stronger signal alignment.
Watch for overbought/oversold breaks beyond the 70/30 bands for trade entries or exits.
⚙️ Inputs:
RSI Length (Default: 28)
RSI Source (Default: Close)
📢 Important Note:
This script is originally conceptualized and custom-built.
It is not a mashup of existing open-source indicators and introduces a new smoothing technique for RSI visualization.
🙏 Credits:
Script developed by Sri_RSI.
Any Oscillator Underlay [TTF]
We are proud to release a new indicator that has been a while in the making - the Any Oscillator Underlay (AOU)!
Note: There is a lot to discuss regarding this indicator, including its intent and some of how it operates, so please be sure to read this entire description before using this indicator to help ensure you understand both the intent and some limitations with this tool.
Our intent for building this indicator was to accomplish the following:
Combine all of the oscillators that we like to use into a single indicator
Take up a bit less screen space for the underlay indicators for strategies that utilize multiple oscillators
Provide a tool for newer traders to be able to leverage multiple oscillators in a single indicator
Features:
Includes 8 separate, fully-functional indicators combined into one
Ability to easily enable/disable and configure each included indicator independently
Clearly named plots to support user customization of color and styling, as well as manual creation of alerts
Ability to customize sub-indicator title position and color
Ability to customize sub-indicator divider lines style and color
Indicators that are included in this initial release:
TSI
2x RSIs (dubbed the Twin RSI )
Stochastic RSI
Stochastic
Ultimate Oscillator
Awesome Oscillator
MACD
Outback RSI (Color-coding only)
Quick note on OB/OS:
Before we get into covering each included indicator, we first need to cover a core concept for how we're defining OB and OS levels. To help illustrate this, we will use the TSI as an example.
The TSI by default has a mid-point of 0 and a range of -100 to 100. As a result, a common practice is to place lines on the -30 and +30 levels to represent OS and OB zones, respectively. Most people tend to view these levels as distance from the edges/outer bounds or as absolute levels, but we feel a more useful way to frame the OB/OS concept is to instead define it as distance ("offset") from the mid-line. In keeping with the -30 and +30 levels in our example, the offset in this case would be "30".
Taking this a step further, let's say we decided we wanted an offset of 25. Since the mid-point is 0, we'd then calculate the OB level as 0 + 25 (+25), and the OS level as 0 - 25 (-25).
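As a hedged sketch of that arithmetic (the input name and values below are ours for illustration, not the indicator's actual settings):
// Hedged sketch: deriving OB/OS levels from a mid-line and an offset
midline = 0.0                                   // mid-point of the oscillator (TSI example)
offset  = input.float(25.0, "OB/OS Offset")     // hypothetical offset input
obLevel = midline + offset                      // 0 + 25 = +25
osLevel = midline - offset                      // 0 - 25 = -25
plot(obLevel, "Overbought", color.red)
plot(osLevel, "Oversold", color.green)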
Now that we've covered the concept of how we approach defining OB and OS levels (based on offset/distance from the mid-line), and since we did apply some transformations, rescaling, and/or repositioning to all of the indicators noted above, we are going to discuss each component indicator to detail both how it was modified from the original to fit the stacked-indicator model, as well as the various major components that the indicator contains.
TSI:
This indicator contains the following major elements:
TSI and TSI Signal Line
Color-coded fill for the TSI/TSI Signal lines
Moving Average for the TSI
TSI Histogram
Mid-line and OB/OS lines
Default TSI fill color coding:
Green : TSI is above the signal line
Red : TSI is below the signal line
Note: The TSI traditionally has a range of -100 to +100 with a mid-point of 0 (range of 200). To fit into our stacking model, we first shrunk the range to 100 (-50 to +50 - cut it in half), then repositioned it to have a mid-point of 50. Since this is the "bottom" of our indicator-stack, no additional repositioning is necessary.
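For reference, a hedged sketch of that kind of rescaling (not the indicator's actual code) is:
// Hedged sketch: compressing a -100..+100 oscillator into a 0..100 stacked band
tsiRaw     = ta.tsi(close, 25, 13) * 100   // ta.tsi returns -1..1, so scale it to -100..+100
tsiStacked = tsiRaw / 2 + 50               // halve the range (-50..+50) and re-center it on 50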
Twin RSI:
This indicator contains the following major elements:
Fast RSI (useful if you want to leverage 2x RSIs as it makes it easier to see the overlaps and crosses - can be disabled if desired)
Slow RSI (primary RSI)
Color-coded fill for the Fast/Slow RSI lines (if Fast RSI is enabled and configured)
Moving Average for the Slow RSI
Mid-line and OB/OS lines
Default Twin RSI fill color coding:
Dark Red : Fast RSI below Slow RSI and Slow RSI below Slow RSI MA
Light Red : Fast RSI below Slow RSI and Slow RSI above Slow RSI MA
Dark Green : Fast RSI above Slow RSI and Slow RSI below Slow RSI MA
Light Green : Fast RSI above Slow RSI and Slow RSI above Slow RSI MA
Note: The RSI naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Stochastic and Stochastic RSI:
These indicators contain the following major elements:
Configurable lengths for the RSI (for the Stochastic RSI only), K, and D values
Configurable base price source
Mid-line and OB/OS lines
Note: The Stochastic and Stochastic RSI both have a normal range of 0 to 100 with a mid-point of 50, so no rescaling or transformations are done on either of these indicators. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Ultimate Oscillator (UO):
This indicator contains the following major elements:
Configurable lengths for the Fast, Middle, and Slow BP/TR components
Mid-line and OB/OS lines
Moving Average for the UO
Color-coded fill for the UO/UO MA lines (if UO MA is enabled and configured)
Default UO fill color coding:
Green : UO is above the moving average line
Red : UO is below the moving average line
Note: The UO naturally has a range of 0 to 100 with a mid-point of 50, so no rescaling or transformation is done on this indicator. The only manipulation done is to properly position it in the indicator-stack based on which other indicators are also enabled.
Awesome Oscillator (AO):
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the AO calculation
Configurable price source for the moving averages used in the AO calculation
Mid-line
Option to display the AO as a line or pseudo-histogram
Moving Average for the AO
Color-coded fill for the AO/AO MA lines (if AO MA is enabled and configured)
Default AO fill color coding (Note: Fill was disabled in the image above to improve clarity):
Green : AO is above the moving average line
Red : AO is below the moving average line
Note: The AO technically has an infinite (unbounded) range - -∞ to ∞ - and its effective range depends on the underlying security's price (e.g. BTC will have a wider range than SP500, and SP500 will have a wider range than EUR/USD). We employed some special techniques to rescale this indicator into our desired range of 100 (-50 to 50), and then repositioned it to have a midpoint of 50 (range of 0 to 100) to meet the constraints of our stacking model. We then do one final repositioning to place it in the correct position in the indicator-stack based on which other indicators are also enabled. For more details on how we accomplished this, read our section "Binding Infinity" below.
MACD:
This indicator contains the following major elements:
Configurable lengths for the Fast and Slow moving averages used in the MACD calculation
Configurable price source for the moving averages used in the MACD calculation
Configurable length and calculation method for the MACD Signal Line calculation
Mid-line
Note: Like the AO, the MACD also technically has an infinite (unbound) range. We employed the same principles here as we did with the AO to rescale and reposition this indicator as well. For more details on how we accomplished this, read our section "Binding Infinity" below.
Outback RSI (ORSI):
This is a stripped-down version of the Outback RSI indicator (linked above) that only includes the color-coding background (suffice it to say that it was not technically feasible to attempt to rescale the other components in a way that could consistently be clearly seen on-chart). As this component is a bit of a niche/special-purpose sub-indicator, it is disabled by default, and we suggest it remain disabled unless you have some pre-defined strategy that leverages the color-coding element of the Outback RSI that you wish to use.
Binding Infinity - How We Incorporated the AO and MACD (Warning - Math Talk Ahead!)
Note: This applies only to the AO and MACD at time of original publication. If any other indicators are added in the future that also fall into the category of "binding an infinite-range oscillator", we will make that clear in the release notes when that new addition is published.
To help set the stage for this discussion, it's important to note that the broader challenge of "equalizing inputs" is nothing new. In fact, it's a key element in many of the most popular fields of data science, such as AI and Machine Learning. They need to take a diverse set of inputs with a wide variety of ranges and seemingly-random inputs (referred to as "features"), and build a mathematical or computational model in order to work. But, when the raw inputs can vary significantly from one another, there is an inherent need to do some pre-processing to those inputs so that one doesn't overwhelm another simply due to the difference in raw values between them. This is where feature scaling comes into play.
With this in mind, we implemented 2 of the most common methods of Feature Scaling - Min-Max Normalization (which we call "Normalization" in our settings), and Z-Score Normalization (which we call "Standardization" in our settings). Let's take a look at each of those methods as they have been implemented in this script.
Min-Max Normalization (Normalization)
This is one of the most common - and most basic - methods of feature scaling. The basic formula is: y = (x - min)/(max - min) - where x is the current data sample, min is the lowest value in the dataset, and max is the highest value in the dataset. In this transformation, the max would evaluate to 1, and the min would evaluate to 0, and any value in between the min and the max would evaluate somewhere between 0 and 1.
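In Pine Script terms, a hedged sketch of this transformation over a rolling lookback window (the window length is our assumption, since a complete dataset min/max isn't available on a chart) might look like:
// Hedged sketch: rolling min-max normalization into the 0..1 range
normalize(src, length) =>
    lo = ta.lowest(src, length)
    hi = ta.highest(src, length)
    hi == lo ? 0.5 : (src - lo) / (hi - lo)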
The key benefits of this method are:
It can be used to transform datasets of any range into a new dataset with a consistent and known range (0 to 1).
It has no dependency on the "shape" of the raw input dataset (i.e. does not assume the input dataset can be approximated to a normal distribution).
But there are a couple of "gotchas" with this technique...
First, it assumes the input dataset is complete, or an accurate representation of the population via random sampling. While in most situations this is a valid assumption, in trading indicators we don't really have that luxury as we're often limited in what sample data we can access (i.e. number of historical bars available).
Second, this method is highly sensitive to outliers. Since the crux of this transformation is based on the max-min to define the initial range, a single significant outlier can result in skewing the post-transformation dataset (i.e. major price movement as a reaction to a significant news event).
You can potentially mitigate those 2 "gotchas" by using a mechanism or technique to find and discard outliers (e.g. calculate the mean and standard deviation of the input dataset and discard any raw values more than 5 standard deviations from the mean), but if your most recent datapoint is an "outlier" as defined by that algorithm, processing it using the "scrubbed" dataset would result in that new datapoint being outside the intended range of 0 to 1 (e.g. if the new datapoint is greater than the "scrubbed" max, its post-transformation value would be greater than 1). Even though this is a bit of an edge-case scenario, it is still sure to happen in live markets processing live data, so it's not an ideal solution in our opinion (which is why we chose not to attempt to discard outliers in this manner).
Z-Score Normalization (Standardization)
This method of rescaling is a bit more complex than the Min-Max Normalization method noted above, but it is also a widely used process. The basic formula is: y = (x – μ) / σ - where x is the current data sample, μ is the mean (average) of the input dataset, and σ is the standard deviation of the input dataset. While this transformation still results in a technically-infinite possible range, the output of this transformation has two very significant properties - the output dataset has a mean (μ) of 0 and a standard deviation (σ) of 1.
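A hedged rolling-window sketch of this transformation (again, the window length is our assumption):
// Hedged sketch: rolling z-score standardization
standardize(src, length) =>
    mu    = ta.sma(src, length)
    sigma = ta.stdev(src, length)
    sigma == 0 ? 0.0 : (src - mu) / sigma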
The key benefits of this method are:
As it's based on normalizing the mean and standard deviation of the input dataset instead of a linear range conversion, it is far less susceptible to outliers significantly affecting the result (and in fact has the effect of "squishing" outliers).
It can be used to accurately transform disparate sets of data into a similar range regardless of the original dataset's raw/actual range.
But there are a couple of "gotchas" with this technique as well...
First, it still technically does not do any form of range-binding, so it is still technically unbounded (range -∞ to ∞ with a mid-point of 0).
Second, it implicitly assumes that the raw input dataset to be transformed is normally distributed, which won't always be the case in financial markets.
The first "gotcha" is a bit of an annoyance, but isn't a huge issue as we can apply principles of normal distribution to conceptually limit the range by defining a fixed number of standard deviations from the mean. While this doesn't totally solve the "infinite range" problem (a strong enough sudden move can still break out of our "conceptual range" boundaries), the amount of movement needed to achieve that kind of impact will generally be pretty rare.
The bigger challenge is how to deal with the assumption of the input dataset being normally distributed. While most financial markets (and indicators) do tend towards a normal distribution, they are almost never going to match that distribution exactly. So let's dig a bit deeper into how distributions are defined and how things like trending markets can affect them.
Skew (skewness): This is a measure of asymmetry of the bell curve, or put another way, how and in what way the bell curve is disfigured when comparing the 2 halves. The easiest way to visualize this is to draw an imaginary vertical line through the apex of the bell curve, then fold the curve in half along that line. If both halves are exactly the same, the skew is 0 (no skew/perfectly symmetrical) - which is what a normal distribution has (skew = 0). Most financial markets tend to have short, medium, and long-term trends, and these trends will cause the distribution curve to skew in one direction or another. Bullish markets tend to skew to the right (positive), and bearish markets to the left (negative).
Kurtosis: This is a measure of the "tail size" of the bell curve. Another way to state this could be how "flat" or "steep" the bell-shape is. If the bell is steep with a strong drop from the apex (like a steep cliff), it has low kurtosis. If the bell has a shallow, more sweeping drop from the apex (like a tall hill), it has high kurtosis. Translating this to financial markets, kurtosis is generally a metric of volatility as the bell shape is largely defined by the strength and frequency of outliers. This is effectively a measure of volatility - volatile markets tend to have a high level of kurtosis (>3), and stable/consolidating markets tend to have a low level of kurtosis (<3). A normal distribution (our reference) has a kurtosis value of 3.
So to try and bring all that back together, here's a quick recap of the Standardization rescaling method:
The Standardization method has an assumption of a normal distribution of input data by using the mean (average) and standard deviation to handle the transformation
Most financial markets do NOT have a normal distribution (as discussed above), and will have varying degrees of skew and kurtosis
Q: Why are we still favoring the Standardization method over the Normalization method, and how are we accounting for the innate skew and/or kurtosis inherent in most financial markets?
A: Well, since we're only trying to rescale oscillators that by definition have a midpoint of 0, kurtosis isn't a major concern beyond the effect it has on the post-transformation scaling (specifically, the number of standard deviations from the mean we need to include in our "artificially-bound" range definition).
Q: So that answers the question about kurtosis, but what about skew?
A: So - for skew, the answer is in the formula - specifically the mean (average) element. The standard mean calculation assumes a complete dataset and therefore uses a standard (i.e. simple) average, but we're limited by the data history available to us. So we adapted the transformation formula to leverage a moving average that included a weighting element to it so that it favored recent datapoints more heavily than older ones. By making the average component more adaptive, we gained the effect of reducing the skew element by having the average itself be more responsive to recent movements, which significantly reduces the effect historical outliers have on the dataset as a whole. While this is certainly not a perfect solution, we've found that it serves the purpose of rescaling the MACD and AO to a far more well-defined range while still preserving the oscillator behavior and mid-line exceptionally well.
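A hedged sketch of that adaptation (the specific weighting scheme used in the indicator is not disclosed here - ta.wma is just one recency-weighted choice):
// Hedged sketch: z-score variant using a recency-weighted mean to dampen skew
standardize_weighted(src, length) =>
    mu    = ta.wma(src, length)     // weights recent datapoints more heavily than older ones
    sigma = ta.stdev(src, length)
    sigma == 0 ? 0.0 : (src - mu) / sigma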
The most difficult parts to compensate for are periods where markets have low volatility for an extended period of time - to the point where the oscillators are hovering around the 0/midline (in the case of the AO), or when the oscillator and signal lines converge and remain close to each other (in the case of the MACD). It's during these periods where even our best attempt at ensuring accurate mirrored-behavior when compared to the original can still occasionally lead or lag by a candle.
Note: If this is a make-or-break situation for you or your strategy, then we recommend you do not use any of the included indicators that leverage this kind of bounding technique (the AO and MACD at time of publication) and instead use the TradingView built-in versions!
We know this is a lot to read and digest, so please take your time and feel free to ask questions - we will do our best to answer! And as always, constructive feedback is always welcome!
light_log
Light Log - A Defensive Programming Library for Pine Script
Overview
The Light Log library transforms Pine Script development by introducing structured logging and defensive programming patterns typically found in enterprise languages like C#. This library addresses a fundamental challenge in Pine Script: the lack of sophisticated error handling and debugging tools that developers expect when building complex trading systems.
At its core, Light Log provides three transformative capabilities that work together to create more reliable and maintainable code. First, it wraps all native Pine Script types in error-aware containers, allowing values to carry validation state alongside their data. Second, it offers a comprehensive logging system with severity levels and conditional rendering. Third, it includes defensive programming utilities that catch errors early and make code self-documenting.
The Philosophy of Errors as Values
Traditional Pine Script error handling relies on runtime errors that halt execution, making it difficult to build resilient systems that can gracefully handle edge cases. Light Log introduces a paradigm shift by treating errors as first-class values that flow through your program alongside regular data.
When you wrap a value using Light Log's type system, you're not just storing data – you're creating a container that can carry both the value and its validation state. For example, when you call myNumber.INT(), you receive an INT object that contains both the integer value and a Log object that can describe any issues with that value. This approach, inspired by functional programming languages, allows errors to propagate through calculations without causing immediate failures.
Consider how this changes error handling in practice. Instead of a calculation failing catastrophically when it encounters invalid input, it can produce a result object that contains both the computed value (which might be na) and a detailed log explaining what went wrong. Subsequent operations can check has_error() to decide whether to proceed or handle the error condition gracefully.
The Typed Wrapper System
Light Log provides typed wrappers for every native Pine Script type: INT, FLOAT, BOOL, STRING, COLOR, LINE, LABEL, BOX, TABLE, CHART_POINT, POLYLINE, and LINEFILL. These wrappers serve multiple purposes beyond simple value storage.
Each wrapper type contains two fields: the value field v holds the actual data, while the error field e contains a Log object that tracks the value's validation state. This dual nature enables powerful programming patterns. You can perform operations on wrapped values and accumulate error information along the way, creating an audit trail of how values were processed.
The wrapper system includes convenient methods for converting between wrapped and unwrapped values. The extension methods like INT(), FLOAT(), etc., make it easy to wrap existing values, while the from_INT(), from_FLOAT() methods extract the underlying values when needed. The has_error() method provides a consistent interface for checking whether any wrapped value has encountered issues during processing.
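For example, a hedged usage sketch (the import path, alias, and label output are placeholders):
// Hedged usage sketch of the typed wrappers (import path and alias are placeholders)
import YourName/light_log/1 as ll
myNumber = 42
wrapped  = myNumber.INT()               // wrap the raw int together with a Log
if wrapped.has_error()
    label.new(bar_index, high, wrapped.e.message)
raw = wrapped.from_INT()                // unwrap when the plain int is needed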
The Log Object: Your Debugging Companion
The Log object represents the heart of Light Log's debugging capabilities. Unlike simple string concatenation for error messages, the Log object provides a structured approach to building, modifying, and rendering diagnostic information.
Each Log object carries three essential pieces of information: an error type (info, warning, error, or runtime_error), a message string that can be built incrementally, and an active flag that controls conditional rendering. This structure enables sophisticated logging patterns where you can build up detailed diagnostic information throughout your script's execution and decide later whether and how to display it.
The Log object's methods support fluent chaining, allowing you to build complex messages in a readable way. The write() and write_line() methods append text to the log, while new_line() adds formatting. The clear() method resets the log for reuse, and the rendering methods (render_now(), render_condition(), and the general render()) control when and how messages appear.
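For instance, a hedged sketch of that fluent style (written in the same simplified style as the examples later in this description):
// Hedged sketch: building a Log incrementally and rendering it on the last bar
diag = init_static_log(ErrorType.info)
diag.clear()                                  // reset for reuse on this bar
diag.write("Bar ")
diag.write_line(str.tostring(bar_index))
diag.write_line("close = " + str.tostring(close))
if barstate.islast
    diag.render_now()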
Defensive Programming Made Easy
Light Log's argument validation functions transform how you write defensive code. Instead of cluttering your functions with verbose validation logic, you can use concise, self-documenting calls that make your intentions clear.
The argument_error() function provides strict validation that halts execution when conditions aren't met – perfect for catching programming errors early. For less critical issues, argument_log_warning() and argument_log_error() record problems without stopping execution, while argument_log_info() provides debug visibility into your function's behavior.
These functions follow a consistent pattern: they take a condition to check, the function name, the argument name, and a descriptive message. This consistency makes error messages predictable and helpful, automatically formatting them to show exactly where problems occurred.
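A hedged sketch of that pattern applied to a simple helper function (the function itself is ours, for illustration):
// Hedged sketch: guarding a helper function's inputs with the argument_* helpers
safe_sma(float src, int length) =>
    argument_error(length <= 0, "safe_sma", "length", "must be a positive integer")
    argument_log_warning(length > 500, "safe_sma", "length", "is unusually large")
    ta.sma(src, length)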
Building Modular, Reusable Code
Light Log encourages a modular approach to Pine Script development by providing tools that make functions more self-contained and reliable. When functions validate their inputs and return wrapped values with error information, they become true black boxes that can be safely composed into larger systems.
The void_return() function addresses Pine Script's requirement that all code paths return a value, even in error handling branches. This utility function provides a clean way to satisfy the compiler while making it clear that a particular code path should never execute.
The static log pattern, initialized with init_static_log(), enables module-wide error tracking. You can create a persistent Log object that accumulates information across multiple function calls, building a comprehensive diagnostic report that helps you understand complex behaviors in your indicators and strategies.
Real-World Applications
In practice, Light Log shines when building sophisticated trading systems. Imagine developing a complex indicator that processes multiple data streams, performs statistical calculations, and generates trading signals. With Light Log, each processing stage can validate its inputs, perform calculations, and pass along both results and diagnostic information.
For example, a moving average calculation might check that the period is positive, that sufficient data exists, and that the input series contains valid values. Instead of failing silently or throwing runtime errors, it can return a FLOAT object that contains either the calculated average or a detailed explanation of why the calculation couldn't be performed.
Strategy developers benefit even more from Light Log's capabilities. Complex entry and exit logic often involves multiple conditions that must all be satisfied. With Light Log, each condition check can contribute to a comprehensive log that explains exactly why a trade was or wasn't taken, making strategy debugging and optimization much more straightforward.
Performance Considerations
While Light Log adds a layer of abstraction over raw Pine Script values, its design minimizes performance impact. The wrapper objects are lightweight, containing only two fields. The logging operations only consume resources when actually rendered, and the conditional rendering system ensures that production code can run with logging disabled for maximum performance.
The library follows Pine Script best practices for performance, using appropriate data structures and avoiding unnecessary operations. The var keyword in init_static_log() ensures that persistent logs don't create new objects on every bar, maintaining efficiency even in real-time calculations.
Getting Started
Adopting Light Log in your Pine Script projects is straightforward. Import the library, wrap your critical values, add validation to your functions, and use Log objects to track important events. Start small by adding logging to a single function, then expand as you see the benefits of better error visibility and code organization.
Remember that Light Log is designed to grow with your needs. You can use as much or as little of its functionality as makes sense for your project. Even simple uses, like adding argument validation to key functions, can significantly improve code reliability and debugging ease.
Transform your Pine Script development experience with Light Log – because professional trading systems deserve professional development tools.
Light Log Technical Deep Dive: Advanced Patterns and Architecture
Understanding Errors as Values
The concept of "errors as values" represents a fundamental shift in how we think about error handling in Pine Script. In traditional Pine Script development, errors are events – they happen at a specific moment in time and immediately interrupt program flow. Light Log transforms errors into data – they become information that flows through your program just like any other value.
This transformation has profound implications. When errors are values, they can be stored, passed between functions, accumulated, transformed, and inspected. They become part of your program's data flow rather than exceptions to it. This approach, popularized by languages like Rust with its Result type and Haskell with its Either monad, brings functional programming's elegance to Pine Script.
Consider a practical example. Traditional Pine Script might calculate a momentum indicator like this:
momentum = close - close[period]
If period is invalid or if there isn't enough historical data, this calculation might produce na or cause subtle bugs. With Light Log's approach:
calculate_momentum(src, period) =>
    result = src.FLOAT()
    if period <= 0
        result.e.write("Invalid period: must be positive", true, ErrorType.error)
        result.v := na
    else if bar_index < period
        result.e.write("Insufficient data: need " + str.tostring(period) + " bars", true, ErrorType.warning)
        result.v := na
    else
        result.v := src - src[period]
        result.e.write("Momentum calculated successfully", false, ErrorType.info)
    result
Now the function returns not just a value but a complete computational result that includes diagnostic information. Calling code can make intelligent decisions based on both the value and its associated metadata.
The Monad Pattern in Pine Script
While Pine Script lacks the type system features to implement true monads, Light Log brings monadic thinking to Pine Script development. The wrapped types (INT, FLOAT, etc.) act as computational contexts that carry both values and metadata through a series of transformations.
The key insight of monadic programming is that you can chain operations while automatically propagating context. In Light Log, this context is the error state. When you have a FLOAT that contains an error, operations on that FLOAT can check the error state and decide whether to proceed or propagate the error.
This pattern enables what functional programmers call "railway-oriented programming" – your code follows a success track when all is well but can switch to an error track when problems occur. Both tracks lead to the same destination (a result with error information), but they take different paths based on the validity of intermediate values.
Composable Error Handling
Light Log's design encourages composition – building complex functionality from simpler, well-tested components. Each component can validate its inputs, perform its calculation, and return a result with appropriate error information. Higher-level functions can then combine these results intelligently.
Consider building a complex trading signal from multiple indicators:
generate_signal(src, fast_period, slow_period, signal_period) =>
    log = init_static_log(ErrorType.info)
    // Calculate components with error tracking
    fast_ma = calculate_ma(src, fast_period)
    slow_ma = calculate_ma(src, slow_period)
    // Check for errors in components
    if fast_ma.has_error()
        log.write_line("Fast MA error: " + fast_ma.e.message, true)
    if slow_ma.has_error()
        log.write_line("Slow MA error: " + slow_ma.e.message, true)
    // Proceed with calculation if no errors
    signal = 0.0.FLOAT()
    if not (fast_ma.has_error() or slow_ma.has_error())
        macd_line = fast_ma.v - slow_ma.v
        signal_line = calculate_ma(macd_line, signal_period)
        if signal_line.has_error()
            log.write_line("Signal line error: " + signal_line.e.message, true)
            signal.e := log
        else
            signal.v := macd_line - signal_line.v
            log.write("Signal generated successfully")
    else
        signal.e := log
        signal.v := na
    signal
This composable approach makes complex calculations more reliable and easier to debug. Each component is responsible for its own validation and error reporting, and the composite function orchestrates these components while maintaining comprehensive error tracking.
The Static Log Pattern
The init_static_log() function introduces a powerful pattern for maintaining state across function calls. In Pine Script, the var keyword creates variables that persist across bars but are initialized only once. Light Log leverages this to create logging objects that can accumulate information throughout a script's execution.
This pattern is particularly valuable for debugging complex strategies where you need to understand behavior across multiple bars. You can create module-level logs that track important events:
// Module-level diagnostic log
diagnostics = init_static_log(ErrorType.info)
// Track strategy decisions across bars
check_entry_conditions() =>
    diagnostics.clear() // Start fresh each bar
    diagnostics.write_line("Bar " + str.tostring(bar_index) + " analysis:")
    if close > ta.sma(close, 20)
        diagnostics.write_line("Price above SMA20", false)
    else
        diagnostics.write_line("Price below SMA20 - no entry", true, ErrorType.warning)
    if volume > ta.sma(volume, 20) * 1.5
        diagnostics.write_line("Volume surge detected", false)
    else
        diagnostics.write_line("Normal volume", false)
    // Render diagnostics based on verbosity setting
    if debug_mode
        diagnostics.render_now()
Advanced Validation Patterns
Light Log's argument validation functions enable sophisticated precondition checking that goes beyond simple null checks. You can implement complex validation logic while keeping your code readable:
validate_price_data(open_val, high_val, low_val, close_val) =>
    argument_error(na(open_val) or na(high_val) or na(low_val) or na(close_val),
         "validate_price_data", "OHLC values", "contain na values")
    argument_error(high_val < low_val,
         "validate_price_data", "high/low", "high is less than low")
    argument_error(close_val > high_val or close_val < low_val,
         "validate_price_data", "close", "is outside high/low range")
    argument_log_warning(high_val == low_val,
         "validate_price_data", "high/low", "are equal (no range)")
This validation function documents its requirements clearly and fails fast with helpful error messages when assumptions are violated. The mix of errors (which halt execution) and warnings (which allow continuation) provides fine-grained control over how strict your validation should be.
Performance Optimization Strategies
While Light Log adds abstraction, careful design minimizes overhead. Understanding Pine Script's execution model helps you use Light Log efficiently.
Pine Script executes once per bar, so operations that seem expensive in traditional programming might have negligible impact. However, when building real-time systems, every optimization matters. Light Log provides several patterns for efficient use:
Lazy Evaluation: Log messages are only built when they'll be rendered. Use conditional logging to avoid string concatenation in production:
if debug_mode
    log.write_line("Calculated value: " + str.tostring(complex_calculation))
Selective Wrapping: Not every value needs error tracking. Wrap values at API boundaries and critical calculation points, but use raw values for simple operations:
// Wrap at boundaries
input_price = close.FLOAT()
validated_period = validate_period(input_period).INT()
// Use raw values internally
sum = 0.0
for i = 0 to validated_period.v - 1
    sum += close[i]
Error Propagation: When errors occur early, avoid expensive calculations:
process_data(input) =>
    validated = validate_input(input)
    if validated.has_error()
        validated // Return early with error
    else
        // Expensive processing only if valid
        perform_complex_calculation(validated)
Integration Patterns
Light Log integrates smoothly with existing Pine Script code. You can adopt it incrementally, starting with critical functions and expanding coverage as needed.
Boundary Validation: Add Light Log at the boundaries of your system – where user input enters and where final outputs are produced. This catches most errors while minimizing changes to existing code.
Progressive Enhancement: Start by adding argument validation to existing functions. Then wrap return values. Finally, add comprehensive logging. Each step improves reliability without requiring a complete rewrite.
Testing and Debugging: Use Light Log's conditional rendering to create debug modes for your scripts. Production users see clean output while developers get detailed diagnostics:
// User input for debug mode
debug = input.bool(false, "Enable debug logging")
// Conditional diagnostic output
if debug
    diagnostics.render_now()
else
    diagnostics.render_condition() // Only shows errors/warnings
Future-Proofing Your Code
Light Log's patterns prepare your code for Pine Script's evolution. As Pine Script adds more sophisticated features, code that uses structured error handling and defensive programming will adapt more easily than code that relies on implicit assumptions.
The type wrapper system, in particular, positions your code to take advantage of potential future features or more sophisticated type inference. By thinking in terms of wrapped values and error propagation today, you're building code that will remain maintainable and extensible tomorrow.
Light Log doesn't just make your Pine Script better today – it prepares it for the trading systems you'll need to build tomorrow.
Library "light_log"
A lightweight logging and defensive programming library for Pine Script.
Designed for modular and extensible scripts, this utility provides structured runtime validation,
conditional logging, and reusable `Log` objects for centralized error propagation.
It also introduces a typed wrapping system for all native Pine values (e.g., `INT`, `FLOAT`, `LABEL`),
allowing values to carry errors alongside data. This enables functional-style flows with built-in
validation tracking, error detection (`has_error()`), and fluent chaining.
Inspired by structured logging patterns found in systems like C#, it reduces boilerplate,
enforces argument safety, and encourages clean, maintainable code architecture.
method INT(self, error_type)
Wraps an `int` value into an `INT` struct with an optional log severity.
Namespace types: series int, simple int, input int, const int
Parameters:
self (int) : The raw `int` value to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: An `INT` object containing the value and a default Log instance.
method FLOAT(self, error_type)
Wraps a `float` value into a `FLOAT` struct with an optional log severity.
Namespace types: series float, simple float, input float, const float
Parameters:
self (float) : The raw `float` value to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `FLOAT` object containing the value and a default Log instance.
method BOOL(self, error_type)
Wraps a `bool` value into a `BOOL` struct with an optional log severity.
Namespace types: series bool, simple bool, input bool, const bool
Parameters:
self (bool) : The raw `bool` value to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `BOOL` object containing the value and a default Log instance.
method STRING(self, error_type)
Wraps a `string` value into a `STRING` struct with an optional log severity.
Namespace types: series string, simple string, input string, const string
Parameters:
self (string) : The raw `string` value to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `STRING` object containing the value and a default Log instance.
method COLOR(self, error_type)
Wraps a `color` value into a `COLOR` struct with an optional log severity.
Namespace types: series color, simple color, input color, const color
Parameters:
self (color) : The raw `color` value to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `COLOR` object containing the value and a default Log instance.
method LINE(self, error_type)
Wraps a `line` object into a `LINE` struct with an optional log severity.
Namespace types: series line
Parameters:
self (line) : The raw `line` object to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `LINE` object containing the value and a default Log instance.
method LABEL(self, error_type)
Wraps a `label` object into a `LABEL` struct with an optional log severity.
Namespace types: series label
Parameters:
self (label) : The raw `label` object to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `LABEL` object containing the value and a default Log instance.
method BOX(self, error_type)
Wraps a `box` object into a `BOX` struct with an optional log severity.
Namespace types: series box
Parameters:
self (box) : The raw `box` object to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `BOX` object containing the value and a default Log instance.
method TABLE(self, error_type)
Wraps a `table` object into a `TABLE` struct with an optional log severity.
Namespace types: series table
Parameters:
self (table) : The raw `table` object to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `TABLE` object containing the value and a default Log instance.
method CHART_POINT(self, error_type)
Wraps a `chart.point` value into a `CHART_POINT` struct with an optional log severity.
Namespace types: chart.point
Parameters:
self (chart.point) : The raw `chart.point` value to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `CHART_POINT` object containing the value and a default Log instance.
method POLYLINE(self, error_type)
Wraps a `polyline` object into a `POLYLINE` struct with an optional log severity.
Namespace types: series polyline
Parameters:
self (polyline) : The raw `polyline` object to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `POLYLINE` object containing the value and a default Log instance.
method LINEFILL(self, error_type)
Wraps a `linefill` object into a `LINEFILL` struct with an optional log severity.
Namespace types: series linefill
Parameters:
self (linefill) : The raw `linefill` object to wrap.
error_type (series ErrorType) : Optional severity level to associate with the log. Default is `ErrorType.error`.
Returns: A `LINEFILL` object containing the value and a default Log instance.
method from_INT(self)
Extracts the integer value from an INT wrapper.
Namespace types: INT
Parameters:
self (INT) : The wrapped INT instance.
Returns: The underlying `int` value.
method from_FLOAT(self)
Extracts the float value from a FLOAT wrapper.
Namespace types: FLOAT
Parameters:
self (FLOAT) : The wrapped FLOAT instance.
Returns: The underlying `float` value.
method from_BOOL(self)
Extracts the boolean value from a BOOL wrapper.
Namespace types: BOOL
Parameters:
self (BOOL) : The wrapped BOOL instance.
Returns: The underlying `bool` value.
method from_STRING(self)
Extracts the string value from a STRING wrapper.
Namespace types: STRING
Parameters:
self (STRING) : The wrapped STRING instance.
Returns: The underlying `string` value.
method from_COLOR(self)
Extracts the color value from a COLOR wrapper.
Namespace types: COLOR
Parameters:
self (COLOR) : The wrapped COLOR instance.
Returns: The underlying `color` value.
method from_LINE(self)
Extracts the line object from a LINE wrapper.
Namespace types: LINE
Parameters:
self (LINE) : The wrapped LINE instance.
Returns: The underlying `line` object.
method from_LABEL(self)
Extracts the label object from a LABEL wrapper.
Namespace types: LABEL
Parameters:
self (LABEL) : The wrapped LABEL instance.
Returns: The underlying `label` object.
method from_BOX(self)
Extracts the box object from a BOX wrapper.
Namespace types: BOX
Parameters:
self (BOX) : The wrapped BOX instance.
Returns: The underlying `box` object.
method from_TABLE(self)
Extracts the table object from a TABLE wrapper.
Namespace types: TABLE
Parameters:
self (TABLE) : The wrapped TABLE instance.
Returns: The underlying `table` object.
method from_CHART_POINT(self)
Extracts the chart.point from a CHART_POINT wrapper.
Namespace types: CHART_POINT
Parameters:
self (CHART_POINT) : The wrapped CHART_POINT instance.
Returns: The underlying `chart.point` value.
method from_POLYLINE(self)
Extracts the polyline object from a POLYLINE wrapper.
Namespace types: POLYLINE
Parameters:
self (POLYLINE) : The wrapped POLYLINE instance.
Returns: The underlying `polyline` object.
method from_LINEFILL(self)
Extracts the linefill object from a LINEFILL wrapper.
Namespace types: LINEFILL
Parameters:
self (LINEFILL) : The wrapped LINEFILL instance.
Returns: The underlying `linefill` object.
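A short unwrapping sketch, under the same hypothetical import as above: the `from_*` methods simply hand back the raw value so it can be passed to built-in functions again.

//@version=5
indicator("Unwrapping sketch", overlay = true)
import username/WrapperLib/1 as err  // hypothetical path, as in the earlier sketch

wrapped = err.LINE(line.new(bar_index - 1, low, bar_index, high))
line raw = wrapped.from_LINE()       // recover the underlying drawing object
line.set_color(raw, color.orange)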
method has_error(self)
Returns true if the INT wrapper has an active log entry.
Namespace types: INT
Parameters:
self (INT) : The INT instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the FLOAT wrapper has an active log entry.
Namespace types: FLOAT
Parameters:
self (FLOAT) : The FLOAT instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the BOOL wrapper has an active log entry.
Namespace types: BOOL
Parameters:
self (BOOL) : The BOOL instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the STRING wrapper has an active log entry.
Namespace types: STRING
Parameters:
self (STRING) : The STRING instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the COLOR wrapper has an active log entry.
Namespace types: COLOR
Parameters:
self (COLOR) : The COLOR instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the LINE wrapper has an active log entry.
Namespace types: LINE
Parameters:
self (LINE) : The LINE instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the LABEL wrapper has an active log entry.
Namespace types: LABEL
Parameters:
self (LABEL) : The LABEL instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the BOX wrapper has an active log entry.
Namespace types: BOX
Parameters:
self (BOX) : The BOX instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the TABLE wrapper has an active log entry.
Namespace types: TABLE
Parameters:
self (TABLE) : The TABLE instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the CHART_POINT wrapper has an active log entry.
Namespace types: CHART_POINT
Parameters:
self (CHART_POINT) : The CHART_POINT instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the POLYLINE wrapper has an active log entry.
Namespace types: POLYLINE
Parameters:
self (POLYLINE) : The POLYLINE instance to check.
Returns: True if an error or message is active in the log.
method has_error(self)
Returns true if the LINEFILL wrapper has an active log entry.
Namespace types: LINEFILL
Parameters:
self (LINEFILL) : The LINEFILL instance to check.
Returns: True if an error or message is active in the log.
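Because every overload shares the same name, `has_error()` can be called on any wrapper before its value is used. A minimal guarded-use sketch, again under the hypothetical import:

//@version=5
indicator("has_error sketch", overlay = true)
import username/WrapperLib/1 as err  // hypothetical path

wrapped = err.LINE(line.new(bar_index - 1, low, bar_index, high))
// Only touch the underlying object when no error or message is active on its log.
if not wrapped.has_error()
    line.set_width(wrapped.from_LINE(), 2)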
void_return()
Utility function used when a return is syntactically required but functionally unnecessary.
Returns: Nothing. Function never executes its body.
argument_error(condition, function, argument, message)
Throws a runtime error when a condition is met. Used for strict argument validation.
Parameters:
condition (bool) : Boolean expression that triggers the runtime error.
function (string) : Name of the calling function (for formatting).
argument (string) : Name of the problematic argument.
message (string) : Description of the error cause.
Returns: Nothing. Halts script execution with a runtime error if the condition is true; otherwise execution continues normally.
argument_log_info(condition, function, argument, message)
Logs an informational message when a condition is met. Used for optional debug visibility.
Parameters:
condition (bool) : Boolean expression that triggers the log.
function (string) : Name of the calling function.
argument (string) : Argument name being referenced.
message (string) : Informational message to log.
Returns: Nothing. Logs if the condition is true.
argument_log_warning(condition, function, argument, message)
Logs a warning when a condition is met. Non-fatal but highlights potential issues.
Parameters:
condition (bool) : Boolean expression that triggers the warning.
function (string) : Name of the calling function.
argument (string) : Argument name being referenced.
message (string) : Warning message to log.
Returns: Nothing. Logs if the condition is true.
argument_log_error(condition, function, argument, message)
Logs an error message when a condition is met. Does not halt execution.
Parameters:
condition (bool) : Boolean expression that triggers the error log.
function (string) : Name of the calling function.
argument (string) : Argument name being referenced.
message (string) : Error message to log.
Returns: Nothing. Logs if the condition is true.
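The validation helpers all share the `(condition, function, argument, message)` signature, so they can be stacked at the top of a function. A sketch, assuming the same hypothetical import:

//@version=5
indicator("Argument validation sketch")
import username/WrapperLib/1 as err  // hypothetical path

len = input.int(14, "Length")
// Hard stop on an invalid argument, soft warning on a questionable one.
err.argument_error(len <= 0, "demo", "len", "must be a positive integer")
err.argument_log_warning(len > 200, "demo", "len", "unusually large; expect heavy smoothing")
plot(ta.sma(close, len))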
init_static_log(error_type, message, active)
Initializes a persistent (var) Log object. Ideal for global logging in scripts or modules.
Parameters:
error_type (series ErrorType) : Initial severity level (required).
message (string) : Optional starting message string. Default value of ("").
active (bool) : Whether the log should be flagged active on initialization. Default value of (false).
Returns: A static Log object with the given parameters.
method new_line(self)
Appends a newline character to the Log message. Useful for separating entries during chained writes.
Namespace types: Log
Parameters:
self (Log) : The Log instance to modify.
Returns: The updated Log object with a newline appended.
method write(self, message, flag_active, error_type)
Appends a message to a Log object without a newline. Updates severity and active state if specified.
Namespace types: Log
Parameters:
self (Log) : The Log instance being modified.
message (string) : The text to append to the log.
flag_active (bool) : Whether to activate the log for conditional rendering. Default value of (false).
error_type (series ErrorType) : Optional override for the severity level. Default value of (na).
Returns: The updated Log object.
method write_line(self, message, flag_active, error_type)
Appends a message to a Log object, prefixed with a newline for clarity.
Namespace types: Log
Parameters:
self (Log) : The Log instance being modified.
message (string) : The text to append to the log.
flag_active (bool) : Whether to activate the log for conditional rendering. Default value of (false).
error_type (series ErrorType) : Optional override for the severity level. Default value of (na).
Returns: The updated Log object.
method clear(self, flag_active, error_type)
Clears a Log object’s message and optionally reactivates it. Can also update the error type.
Namespace types: Log
Parameters:
self (Log) : The Log instance being cleared.
flag_active (bool) : Whether to activate the log after clearing. Default value of (false).
error_type (series ErrorType) : Optional new error type to assign. If not provided, the previous type is retained. Default value of (na).
Returns: The cleared Log object.
method render_condition(self, flag_active, error_type)
Conditionally renders the log if it is active. Allows overriding error type and controlling active state afterward.
Namespace types: Log
Parameters:
self (Log) : The Log instance to evaluate and render.
flag_active (bool) : Whether to activate the log after rendering. Default value of (false).
error_type (series ErrorType) : Optional error type override. Useful for contextual formatting just before rendering. Default value of (na).
Returns: The updated Log object.
method render_now(self, flag_active, error_type)
Immediately renders the log regardless of `active` state. Allows overriding error type and active flag.
Namespace types: Log
Parameters:
self (Log) : The Log instance to render.
flag_active (bool) : Whether to activate the log after rendering. Default value of (false).
error_type (series ErrorType) : Optional error type override. Allows dynamic severity adjustment at render time. Default value of (na).
Returns: The updated Log object.
render(self, condition, flag_active, error_type)
Renders the log conditionally or unconditionally. Allows full control over render behavior.
Parameters:
self (Log) : The Log instance to render.
condition (bool) : If true, renders only if the log is active. If false, always renders. Default value of (false).
flag_active (bool) : Whether to activate the log after rendering. Default value of (false).
error_type (series ErrorType) : Optional error type override passed to the render methods. Default value of (na).
Returns: The updated Log object.
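Taken together, these methods support a simple write-then-render pattern. The sketch below assumes the same hypothetical import and an `info` member on `ErrorType`:

//@version=5
indicator("Log usage sketch")
import username/WrapperLib/1 as err  // hypothetical path

logObj = err.init_static_log(err.ErrorType.info)  // persistent Log, per the description above
if ta.crossover(close, ta.sma(close, 20))
    logObj.write_line("bullish cross at bar " + str.tostring(bar_index), true)
logObj.render_condition()                          // renders only while the log is flagged active
plot(close)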
Log
A structured object used to store and render logging messages.
Fields:
error_type (series ErrorType) : The severity level of the message (from the ErrorType enum).
message (series string) : The text of the log message.
active (series bool) : Whether the log should trigger rendering when conditionally evaluated.
INT
A wrapped integer type with attached logging for validation or tracing.
Fields:
v (series int) : The underlying `int` value.
e (Log) : Optional log object describing validation status or error context.
FLOAT
A wrapped float type with attached logging for validation or tracing.
Fields:
v (series float) : The underlying `float` value.
e (Log) : Optional log object describing validation status or error context.
BOOL
A wrapped boolean type with attached logging for validation or tracing.
Fields:
v (series bool) : The underlying `bool` value.
e (Log) : Optional log object describing validation status or error context.
STRING
A wrapped string type with attached logging for validation or tracing.
Fields:
v (series string) : The underlying `string` value.
e (Log) : Optional log object describing validation status or error context.
COLOR
A wrapped color type with attached logging for validation or tracing.
Fields:
v (series color) : The underlying `color` value.
e (Log) : Optional log object describing validation status or error context.
LINE
A wrapped line object with attached logging for validation or tracing.
Fields:
v (series line) : The underlying `line` value.
e (Log) : Optional log object describing validation status or error context.
LABEL
A wrapped label object with attached logging for validation or tracing.
Fields:
v (series label) : The underlying `label` value.
e (Log) : Optional log object describing validation status or error context.
BOX
A wrapped box object with attached logging for validation or tracing.
Fields:
v (series box) : The underlying `box` value.
e (Log) : Optional log object describing validation status or error context.
TABLE
A wrapped table object with attached logging for validation or tracing.
Fields:
v (series table) : The underlying `table` value.
e (Log) : Optional log object describing validation status or error context.
CHART_POINT
A wrapped chart point with attached logging for validation or tracing.
Fields:
v (chart.point) : The underlying `chart.point` value.
e (Log) : Optional log object describing validation status or error context.
POLYLINE
A wrapped polyline object with attached logging for validation or tracing.
Fields:
v (series polyline) : The underlying `polyline` value.
e (Log) : Optional log object describing validation status or error context.
LINEFILL
A wrapped linefill object with attached logging for validation or tracing.
Fields:
v (series linefill) : The underlying `linefill` value.
e (Log) : Optional log object describing validation status or error context.
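Since the types are exported, their `v` and `e` fields can also be built and read directly rather than through the helper methods. A small sketch under the same hypothetical import:

//@version=5
indicator("Direct field access sketch")
import username/WrapperLib/1 as err  // hypothetical path

lg = err.Log.new(error_type = err.ErrorType.error, message = "", active = false)
lg.message := "something went wrong"
lg.active  := true
wrapped = err.INT.new(v = bar_index, e = lg)
plot(wrapped.v)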
Stock Relative Strength Power Index
As always, this is not financial advice and use at your own risk. Trading is risky and can cost you significant sums of money if you are not careful. Make sure you always have a proper entry and exit plan that includes defining your risk before you enter a trade.
This idea recently came out of some discussions I stumbled across in a trading group I am a part of regarding Relative Strength and Relative Weakness (shortened to RS and RW from here on out). The whole mechanism behind this trading system is to filter out underperforming securities relative to the current market direction to be in only the strongest or weakest stocks when the market is currently experiencing a bullish or bearish cycle. The idea behind this is there is no point in parking your money into a stock that is treading water or even going down if the market is making strong moves upwards. At that point, you are at worst losing money, and at best trading equal to the index/ETF, in which case the argument is why are you not just trading the index/ETF instead? RS or RW will filter out these sector laggards and allow you to position yourself into strong (or the strongest) stocks at any given time to help improve portfolio performance. Further, not only does it protect your position should the market shift against you briefly, it also often sees exceptional performance in the same cycle. For example, if $SPY makes a 5% move over the course of a month, a stock with RS/RW may make a 10% move, or more, allowing you to see increased profit potential.
RS/RW is based on the idea of performance, that is, the raw percent change of a security over a given time period relative to a benchmark. This benchmark is often the S&P 500 (ES/SPX/SPY and their derivatives). I have to stress that this is not beta, which measures the volatility of a stock over a given period (e.g. if $SPY moves $1, $NVDA will often move $1.74). This is instead a measurement such as: the market (i.e. $SPY) has moved 1% over the course of a day, while $NVDA has moved 8% over that same day. This is very often used as a signal of institutional interest because, apart from some very unique moments, retail traders cannot and will not provide enough market pressure to move a stock outside of its normal trading range, nor will they outperform the sector or market as a whole consistently over time without some big money to make them move. The problem with running strict performance analysis (i.e. the % change from period T ago to the present period T + n) is that while it gives us a baseline of how much the stock has moved, it doesn't mean much on its own. For instance, if a $100 stock has moved 5% today, but has been experiencing a period of increased volatility and its Average True Range (ATR) (the amount a stock will move over X number of periods, on average) is $7, the performance seems impressive but is actually fairly weak relative to what the stock has been doing recently. Conversely, if we take a second stock, this time worth $20, that has also moved 5% in one day but has an ATR of only $0.25, that stock has made an exceptional move and we want to be part of it.
Here, I have created an indicator called the Stock Relative Strength Power Index. This takes the stock's rate of change (ROC) (the % move it has made over X number of periods) and the stock's normalized ATR (the ATR represented as a percentage instead of a raw value), and compares these to one another to get the "Power Rating": a representation of the true strength of a stock over X number of periods. The indicator does two things. First, the raw ROC is divided by the stock's normalized ATR to assess whether the stock is moving outside of its normal range of variation or not. Second, since we are interested in trading only stocks with exceptional RS/RW to the market, I have also applied this same calculation to the S&P 500 ($SPY) and the various SPDR sector indexes. These comparisons allow for a rapid and accurate assessment of the true strength of a stock at any given time on any given time frame. To cycle back to our examples above, the $100 stock has a Power Rating of only 0.71 (i.e. it is moving less than its normal range), while our $20 stock has a Power Rating of 4. If we then compare these to both the market as a whole and the sector that the stock is a part of, we get a much clearer indication of the true buying or selling pressure imposed on the stock at any given time.
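To make the arithmetic concrete, here is a rough sketch of the ratio described above (not the published source). The symbol prefixes and the choice of XLK as the sector comparison are placeholders:

//@version=5
indicator("Power Rating sketch")
len = input.int(12, "Lookback")

// Power Rating = percent rate of change divided by the percent-normalized ATR.
tickerPR = ta.roc(close, len) / (ta.atr(len) / close * 100)
marketPR = request.security("AMEX:SPY", timeframe.period, ta.roc(close, len) / (ta.atr(len) / close * 100))
sectorPR = request.security("AMEX:XLK", timeframe.period, ta.roc(close, len) / (ta.atr(len) / close * 100))

plot(tickerPR, "Ticker", color.blue)
plot(marketPR, "Market (SPY)", color.red)
plot(sectorPR, "Sector (XLK as an example)", color.white)
hline(1)
hline(-1)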
Use:
The indicator has 3 lines. The blue line is the security of interest, the red line is the market baseline (i.e. the market ETF, $SPY), and the white line is the sector index. I have given an example above on the semiconductor/tech stock $NVDA on a 30min timeframe. You can see that since the start of 2023, $NVDA has generally been strong relative to the market and its own sector, since the blue line is greater than both the red and white lines over many days. This would have provided some nice day trading opportunities, or even some nice short-term swing trades. The values themselves are generally meaningless outside of either the 1 or -1 value lines. All that matters is that the current ticker is surpassing both the market and the sector while being > 1.0 for a long trade or less than -1.0 for a short trade. However, I must stress that this indicator gives no trade signals on its own; it is purely a confirmation indicator. An example of a trade: if you had a signal given by either an indicator or by price action suggesting to buy some $NVDA for a trade to the upside, the Power Rating indicator would confirm this by showing whether $NVDA was actually showing true strength by being both greater than 1 (the cutoff for surpassing its ATR) and above both the red and white lines. Further, you can see $NVDA has been stronger than the market when using the comparison function as well, but it has fluxed in and out of strength intraday when using the actual indicator vs. the static performance ratio chart (plotted as line graphs on the chart).
I have made it possible to change the colour of the plots and the line levels. The adjustment of the line levels gives the trader the flexibility to change their target breakout level (i.e. only trading stocks that have a Power Rating > 2, for example, meaning they are trading at least 2X their normal trading range). The third security comparison is flexible and can be used to compare to the sector ETF (initial intention) or it can be used to compare to other tickers within the same sector, for example. The trader should select the appropriate ETF for the given security of interest to avoid false confirmation if they want to use an ETF as their third input. The proper sector should be readily available on most online websites and accessible in a matter of seconds meaning that the delay is minimal, at worst. If a trader wishes to add additional functionality, such as a crypto trader using BTCUSD as the benchmark instead of $SPY, I encourage them to copy and paste this script and modify as needed since I have made this open source.
This indicator works on all timeframes. The lookback period can be changed, so a day trader who may use a 5min chart (and use a period of 12 to get the hourly Power Rating) will find this equally useful as someone who may be a core trader who wants to look at the performance over the course of years and may use a 60 period on a monthly chart.
Happy trading and I hope this helps!
CFB-Adaptive CCI w/ T3 Smoothing [Loxx]
CFB-Adaptive CCI w/ T3 Smoothing is a CCI indicator with adaptive period inputs and T3 smoothing. Jurik's Composite Fractal Behavior is used to create the dynamic period input.
What is Composite Fractal Behavior ( CFB )?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes them by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is Jurik Volty used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used as both a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically result in lower profits. In other words, late comers get what's left on the table after the feast has already begun.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE /2) which runs in between the two, has phase lag of n/4 but still inherits considerable noise from EPMA. IE /2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE /2.
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA , popularized by Patrick Mulloy in TASAC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to IE /2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA (3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA (7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA (n) = EMA (n) + EMA (time series - EMA (n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA (n) + EMA (time series - EMA (n))*.7;
This is algebraically the same as:
EMA (n)*1.7-EMA( EMA (n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD (n,v) = EMA (n)*(1+v)-EMA( EMA (n))*v,
Where v ranges between 0 and 1. When v=0, GD is just an EMA , and when v=1, GD is DEMA . In between, GD is a cooler DEMA . By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD ( GD ( GD (n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA (n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
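The two formulas above translate almost directly into Pine. Below is a minimal sketch of GD and T3 exactly as Tillson defines them (smoothing only; none of the CFB-adaptive period logic of this script is reproduced):

//@version=5
indicator("T3 sketch")
len = input.int(8, "T3 length")
vf  = input.float(0.7, "Volume factor", step = 0.1)

// GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v
gd(float src, simple int n, simple float v) =>
    e1 = ta.ema(src, n)
    e2 = ta.ema(e1, n)
    e1 * (1 + v) - e2 * v

// T3(n) = GD(GD(GD(n)))
t3 = gd(gd(gd(close, len, vf), len, vf), len, vf)
plot(t3, "T3", color.yellow, 2)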
Included:
Bar coloring
Signals
Alerts
STD-Filterd, R-squared Adaptive T3 w/ Dynamic Zones [Loxx]
STD-Filterd, R-squared Adaptive T3 w/ Dynamic Zones is a standard deviation filtered R-squared Adaptive T3 moving average with dynamic zones.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE /2) which runs in between the two, has phase lag of n/4 but still inherits considerable noise from EPMA. IE /2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE /2.
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA , popularized by Patrick Mulloy in TASAC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to IE /2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA (3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA (7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA (n) = EMA (n) + EMA (time series - EMA (n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA (n) + EMA (time series - EMA (n))*.7;
This is algebraically the same as:
EMA (n)*1.7-EMA( EMA (n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD (n,v) = EMA (n)*(1+v)-EMA( EMA (n))*v,
Where v ranges between 0 and 1. When v=0, GD is just an EMA , and when v=1, GD is DEMA . In between, GD is a cooler DEMA . By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD ( GD ( GD (n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA (n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination ( R-squared ), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y - axis) and time (the X - axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average .
R-squared is used here to derive a T3 factor used to modify price before passing price through a six-pole non-linear Kalman filter.
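As a rough illustration of the measurement itself, R-squared over a lookback window can be computed as the squared correlation between price and the bar index; how the published script maps this value into its T3 factor is not reproduced here:

//@version=5
indicator("R-squared sketch")
rsqLen = input.int(32, "R-squared length")
// Coefficient of determination = squared Pearson correlation between price and time.
r2 = math.pow(ta.correlation(close, bar_index, rsqLen), 2)
plot(r2, "R-squared", color.aqua)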
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
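Because the zones are defined by probabilities over a lookback window, they can be approximated with running percentiles. Below is a simplified sketch of the idea, applied to a CCI as a stand-in oscillator (the published script applies it to its own filtered T3 series):

//@version=5
indicator("Dynamic zones sketch")
lookback = input.int(70, "Lookback t")
pBuy     = input.float(10.0, "Pbuy %", minval = 1, maxval = 50)
pSell    = input.float(10.0, "Psell %", minval = 1, maxval = 50)
osc      = ta.cci(hlc3, 20)            // stand-in oscillator

// Buy zone: the value the oscillator stays at or below with probability Pbuy.
// Sell zone: the value it stays at or above with probability Psell.
buyZone  = ta.percentile_linear_interpolation(osc, lookback, pBuy)
sellZone = ta.percentile_linear_interpolation(osc, lookback, 100 - pSell)

plot(osc, "Oscillator", color.gray)
plot(buyZone, "Dynamic buy zone", color.green)
plot(sellZone, "Dynamic sell zone", color.red)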
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
Real Woodies CCI
As always, this is not financial advice and use at your own risk. Trading is risky and can cost you significant sums of money if you are not careful. Make sure you always have a proper entry and exit plan that includes defining your risk before you enter a trade.
Ken Wood is a semi-famous trader that grew in popularity in the 1990s and early 2000s due to the establishment of one of the earliest trading forums online. This forum grew into "Woodie's CCI Club" due to Wood's love of his modified Commodity Channel Index (CCI) that he used extensively. From what I can tell, the website is still active and still follows the same core principles it did in the early days, the CCI is used for entries, range bars are used to help trader's cut down on the noise, and the optional addition of Woodie's Pivot Points can be used as further confirmation of support and resistance. This is my take on his famous "Woodie's CCI" that has become standard on many charting packages through the years, including a TradingView sponsored version as one of the many stock indicators provided by TradingView. Woodie has updated his CCI through the years to include several very cool additions outside of the standard CCI. I will have to say, I am a bit biased, but I think this is hands down one of the best indicators I have ever used, and I am far too young to have been part of the original CCI Club. Being a daytrader primarily, this fits right in my timeframe wheel house. Woodie designed this indicator to work on a day-trading time scale and he frequently uses this to trade futures and commodity contracts on the 30 minute, often even down to the one minute timeframe. This makes it unique in that it is probably one of the only daytrading-designed indicators out there that I am aware of that was not a popular indicator, like the MACD or RSI, that was just adopted by daytraders.
The CCI was originally created by Donald Lambert in 1980. Over time, it has become an extremely popular house-hold indicator, like the Stochastics, RSI, or MACD. However, like the RSI and Stochastics, there are extensive debates on how the CCI is actually meant to be used. Some trade it like a reversal indicator, where values greater than 100 or less than -100 are considered overbought or oversold, respectively. Others trade it like a typical zero-line cross indicator, where once the value goes above or below the zero-line, a trade should be considered in that direction. Lastly, some treat it as strictly a momentum indicator, where values greater than 100 or less than -100 are seen as strong momentum moves and when these values are reached, a new strong trend is establishing in the direction of the move. The CCI itself is nothing fancy, it just visualizes the distance of the closing price away from a user-defined SMA value and plots it as a line. However, Woodie's CCI takes this simple concept and adds to it with an indicator with 5 pieces to it designed to help the trader enter into the highest probability setups. Bear with me, it initially looks super complicated, but I promise it is pretty straight-forward and a fun indicator to use.
1) The CCI Histogram. This is your standard CCI value that you would find on the normal CCI. Woodie's CCI uses a value of 14 for most trades and a value of 20 when the timeframe is equal to or greater than 30minutes. I personally use this as a 20-period CCI on all time frames, simply for the fact that the 20 SMA is a very popular moving average and I want to know what the crowd is doing. This is your coloured histogram with 4 colours. A gray colouring is for any bars above or below the zero line for 1-4 bars. A yellow bar is a "trend bar", where the long period CCI has been above/below the zero line for 5 consecutive bars, indicating that a trend in the current direction has been established. Blue bars above and red bars below are simply 6+n number of bars above or below the zero line confirming trend. These are used for the Zero-Line Reject Trade (explained below). The CCI Histogram has a matching long-period CCI line that is painted the same colour as the histogram, it is the same thing but is used just to outline the Histogram a bit better.
2) The CCI Turbo line. This is a sped-up 6 period CCI. This is to be used for the Zero-Line Reject trades, trendline breaks, and to identify shorter term overbought/oversold conditions against the main trend. This is coloured as the white line.
3) The Least Squares Moving Average Baseline (LSMA) Zero Line. You will notice that the Zero Line of the indicator is either green or red. This is based on when price is above or below the 25-period LSMA on the chart. The LSMA is a 25 period linear regression moving average and is one of the best moving averages out there because it is more immune to noise than a typical MA. Statistically, an LSMA is designed to find the line of best fit across the lookback periods and identify whether price is advancing, declining, or flat, without the whipsaw that other MAs can be privy to. The zero line of the indicator will turn green when the close candle is over the LSMA or red when it is below the LSMA. This is meant to be a confirmation tool only and the CCI Histogram and Turbo Histogram can cross this zero line without any corresponding change in the colour of the zero line on that immediate candle.
4) The +100 and -100 lines are used in two ways. First, they can be used by the CCI Histogram and CCI Turbo as a sort of minor price resistance and if the CCI values cannot get through these, it is considered weakness in that trade direction until they do so. You will notice that both of these lines are multi-coloured. They have been plotted with the ChopZone Indicator, another TradingView built-in indicator. The ChopZone is a trend identification tool that uses the slope and the direction of a 34-period EMA to identify when price is trending or range bound. While there are ~10 different colours, the main two a trader needs to pay attention to are the turquoise/cyan blue, which indicates price is in an uptrend, and dark red, which indicates price is in a downtrend based on the slope and direction of the 34 EMA. All other colours indicate "chop". These colours are used solely for the Zero-Line Reject and pattern trades discussed below. They are plotted both above and below so you can easily see the colouring no matter what side of the zero line the CCI is on.
5) The +200 and -200 lines are also used in two ways. First, they are considered overbought/oversold levels where if price exceeds these lines then it has moved an extreme amount away from the average and is likely to experience a pullback shortly. This is more useful for the CCI Histogram than the Turbo CCI, in all honesty. You will also notice that these are coloured either red, green, or yellow. This is the Sidewinder indicator portion. The documentation on this is extremely sparse, only pointing to a "relationship between the LSMA and the 34 EMA" (see here: tlc.thinkorswim.com). Since I am not a member of Woodie's CCI Club and never intend to be I took some liberty here and decided that the most likely relationship here was the slope of both moving averages. Therefore, the Sidewinder will be green when both the LSMA and the 34 EMA are rising, red when both are falling, and yellow when they are not in agreement with one another (i.e. one rising/flat while the other is flat/falling). I am a big fan of Dr. Alexander Elder as those who follow me know, so consider this like Woodie's version of the Elder Impulse System. I will fully admit that this version of the Sidewinder is a guess and may not represent the real Sidewinder indicator, but it is next to impossible to find any information on this, so I apologize, but my version does do something useful anyways. This is also to be used only with the Zero-Line Reject trades. They are plotted both above and below so you can easily see the colouring no matter what side of the zero line the CCI is on.
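For readers who prefer code, here is a rough sketch of how the components described above fit together. This is an approximation for illustration only, not the published source; the ChopZone colouring is omitted and the Sidewinder logic follows the slope-agreement guess explained in point 5:

//@version=5
indicator("Woodies CCI components sketch")
cciLong  = ta.cci(hlc3, 20)            // main CCI (14 is also common intraday)
cciTurbo = ta.cci(hlc3, 6)             // turbo CCI
lsma     = ta.linreg(close, 25, 0)     // 25-period least squares moving average
ema34    = ta.ema(close, 34)

// Consecutive bars on one side of zero: 5 in a row prints the yellow "trend bar".
barsUp   = ta.barssince(cciLong <= 0)
barsDown = ta.barssince(cciLong >= 0)
histCol  = barsUp == 5 or barsDown == 5 ? color.yellow : barsUp > 5 ? color.blue : barsDown > 5 ? color.red : color.gray

// Zero line coloured by price vs. the LSMA; Sidewinder guess: agreement of LSMA and 34 EMA slopes.
zeroCol = close > lsma ? color.green : color.red
swUp    = ta.rising(lsma, 1) and ta.rising(ema34, 1)
swDown  = ta.falling(lsma, 1) and ta.falling(ema34, 1)
swCol   = swUp ? color.green : swDown ? color.red : color.yellow

plot(cciLong, "CCI histogram", histCol, style = plot.style_columns)
plot(cciTurbo, "Turbo CCI", color.white)
plot(0, "Zero line", zeroCol)
plot(200, "+200 / Sidewinder", swCol)
plot(-200, "-200 / Sidewinder", swCol)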
How to Trade It According to Woodie's CCI Club:
Now that I have all of my components and history out of the way, this is what you all care about. I will only provide a brief overview of the trades in this system, but there are quite a few more detailed descriptions listed in the Woodie's CCI Club pamphlet. I have had little success trading the "patterns" but they do exist and do work on occasion. I just prefer to trade with the flow of the markets rather than getting overly scalpy. If you are interested in these patterns, see the pamphlet here (www.trading-attitude.com), hop into the forums and see for yourself, or check out a couple of the YouTube videos.
1) Zero line cross. As simple as any other momentum oscillator out there. When the long period CCI crosses above or below the zero line open a trade in that direction. Extra confirmation can be had when the CCI Turbo has already broken the +100/-100 line "resistance or support". Trend traders may wish to wait until the yellow "trend confirmation bar" has been printed.
2) Zero Line Reject. This is when the CCI Turbo heads back down to the zero line and then bounces back in the same direction of the prevailing trend. These are fantastic continuation trades if you missed the initial entry either on the zero line cross or on the trend bar establishment. ZLR trades are only viable when you have the ChopZone indicator showing a trend (turquoise/cyan for uptrend, dark red for downtrend), the LSMA line is green for an uptrend or red for a downtrend, and the SideWinder is either green confirming the uptrend or red confirming the downtrend.
3) Hook From Extreme. This is the exact same as the Zero Line Reject trade, however, the CCI Turbo now goes to the +100/-100 line (whichever is opposite the currently established trend) and then hooks back into the established trend direction. Ideally the HFE trade needs to have the Long CCI Histogram above/below the corresponding 100 level and the CCI Turbo both breaks the 100 level on the trend side and when it does break it has increased ~20 points from the previous value (i.e. CCI Histogram = +150 with LSMA, CZ, and SW all matching up and trend bars printed on CCI Histogram, CCI Turbo went to -120 and bounced to +80 on last 2 bars, current bar closes with CCI Turbo closing at +110).
4) Trend Line Break. Either the CCI Turbo or CCI Histogram, whichever you prefer (I find the Turbo a bit more accurate since its a faster value) creates a series of higher highs/lows you can draw a trend line linking them. When the line breaks the trendline that is your signal to take a counter trade position. For example, if the CCI Turbo is making consistently higher lows and then breaks the trendline through the zero line, you can then go short. This is a good continuation trade.
5) The Tony Trade. Consider this like a combination zero line reject, trend line break, and weak zero line cross all in one. The idea is that the SW, CZ, and LSMA values are all established in one direction. The CCI Histogram should be in an established trend and then cross the zero line but never break the 100 level on the new side as long as it has not printed more than 9 bars on the new side. If the CCI Histogram prints 9 or less bars on the new side and then breaks the trendline and crosses back to the original trend side, that is your signal to take a reversal trade. This is best used in the Elder Triple Screen method (discussed in final section) as a failed dip or rip.
6) The GB100 Trade. This is a similar trade as the Tony Trade, however, the CCI Histogram can break the 100 level on the new side but has to have made less than 6 bars on the new side. A trendline break is not necessary here either, it is more of a "pop and drop" or "momentum failure" trade trying in the new direction.
7) The Famir Trade. This is a failed CCI Long Histogram ZLR trade and is quite complicated. I have never traded this but it is in the pamphlet. Essentially you have a typical ZLR reject (i.e. all components saying it is likely a long/short continuation trade), but the ZLR only stays around the 50 level, goes back to the trend side, fails there as well immediately after 1 bar and then rebreaks to the new side. This is important to be considered with the LSMA value matching the side of the trade, so if the Famir says to go long, you need the LSMA indicator to also say to go long.
8) The Vegas Trade. This is essentially a trend-reversal trade that takes into account the LSMA and a cup-and-handle formation on the CCI Long Histogram after it has reached an extreme value (+200/-200). You will see the CCI Histogram hit the extreme value, head towards the zero line, and then round back out in the direction of the extreme. The low point where it reversed back toward the extreme can be considered support or resistance on the CCI. Once the CCI Long Histogram breaks this level again, with LSMA confirmation, you can take a counter-trend trade with a stop under/over the highest/lowest point of the last 2 bars: you want to be out quickly and without much damage if you are wrong, but you can get a huge win if you are right and add to the position later once a new trade has formed.
9) The Ghost Trade. This is nothing more than a head and shoulders (or inverse head and shoulders) pattern formed on the CCI. Draw a trend line connecting the head and shoulders and trade a reversal once the CCI Long Histogram breaks the trend line. Same deal as the Vegas Trade: stop over/under the most recent 2-bar high/low, add later if it is a winner, and cut quickly if it is a loser.
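For those who prefer to see the first two setups expressed as code, here is a minimal Pine Script v5 sketch. It assumes the common Woodie's settings of a 14-period CCI and a 6-period Turbo CCI, and a deliberately simplified Zero Line Reject test; it illustrates the logic above rather than reproducing the full Woodie's panel.
//@version=5
indicator("Woodie's CCI setups (illustrative sketch)")
cciLong  = ta.cci(hlc3, 14)  // long-period CCI ("CCI Histogram")
cciTurbo = ta.cci(hlc3, 6)   // fast CCI ("CCI Turbo")
// 1) Zero line cross, with the optional Turbo +100/-100 confirmation
zlcLong  = ta.crossover(cciLong, 0) and cciTurbo > 100
zlcShort = ta.crossunder(cciLong, 0) and cciTurbo < -100
// 2) Zero Line Reject (simplified): the Turbo pulls back toward zero,
//    then turns back in the direction of the prevailing trend
zlrLong  = cciLong > 0 and cciTurbo[1] < 25 and cciTurbo > cciTurbo[1]
zlrShort = cciLong < 0 and cciTurbo[1] > -25 and cciTurbo < cciTurbo[1]
plot(cciLong, "CCI Histogram", style=plot.style_columns, color=color.gray)
plot(cciTurbo, "CCI Turbo", color=color.blue)
hline(100)
hline(0)
hline(-100)
plotshape(zlcLong or zlrLong, "Long setup", style=shape.triangleup, location=location.bottom, color=color.green)
plotshape(zlcShort or zlrShort, "Short setup", style=shape.triangledown, location=location.top, color=color.red)
The ChopZone, SideWinder, and LSMA filters described above would sit on top of these raw conditions.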
Like I said, this is a complicated system and could quite literally take years to master if you wanted to learn all the patterns. I prefer to trade it in a much simpler format, using the Elder Triple Screen System. First, since I am a day trader, I use the 20-period Woodie's on the hourly and check that the CZ, SW, and LSMA values all match the direction of the CCI Long Histogram (a trend establishment is not necessary here); this gives the hourly trend as the "tide". I then drill down to the 15-minute time frame and use a CCI Turbo break in the opposite direction of the trend as my "wave", indicating a dip or rip against the main trend. Lastly, I drill down to a 3-minute time frame and enter when the CCI Long Histogram turns back to match the main trend (the "ripple"), as long as the CCI Turbo has broken the 100 level in the matched direction.
Enjoy, and please read the pamphlet if you have questions about the individual patterns; they are not how I use this system, so I will not be able to answer those questions.
Backtesting on Non-Standard Charts: Caution! - PineCoders FAQ
Much confusion exists in the TradingView community about backtesting on non-standard charts. This script tries to shed some light on the subject in the hope that traders make better use of those chart types.
Non-standard charts are:
Heikin Ashi (HA)
Renko
Kagi
Point & Figure
Range
These chart types are called non-standard because they all transform market prices into synthetic views of price action. Some focus on price movement and disregard time. Others like HA use the same division of bars into fixed time intervals but calculate artificial open, high, low and close (OHLC) values.
Non-standard chart types can provide traders with alternative ways of interpreting price action, but they are not designed to test strategies or run automated trading systems where results depend on the ability to enter and exit trades at precise price levels at specific times, whether orders are issued manually or algorithmically. Ironically, the same characteristics that make non-standard chart types interesting from an analytical point of view also make them ill-suited to trade execution. Why? Because of the dislocation that a synthetic view of price action creates between its non-standard chart prices and real market prices at any given point in time. Switching from a non-standard chart price point into the market always entails a translation of time/price dimensions that results in uncertainty—and uncertainty concerning the level or the time at which orders are executed is detrimental to all strategies.
The delta between the chart’s price when an order is issued (which is assumed to be the expected price) and the price at which that order is filled is called slippage. When working from normal chart types, slippage can be caused by one or more of the following conditions:
• Time delay between order submission and execution. During this delay the market may move normally or be subject to large orders from other traders that will cause large moves of the bid/ask levels.
• Lack of bids for a market sell or lack of asks for a market buy at the current price level.
• Spread taken by middlemen in the order execution process.
• Any other event that changes the expected fill price.
When a market order is submitted, matching engines attempt to fill at the best possible price at the exchange. TradingView strategies usually fill market orders at the opening price of the next candle. A non-standard chart type can produce misleading results because the open of the next candle may or may not correspond to the real market price at that time. This creates artificial and often beneficial slippage that would not exist on standard charts.
Consider an HA chart. The open for each candle is the average of the previous HA bar’s open and close prices. The open of the HA candle is a synthetic value, but the real market open at the time the new HA candle begins on the chart is the unrelated, regular open at the chart interval. The HA open will often be lower on long entries and higher on short entries, resulting in unrealistically advantageous fills.
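The size of that distortion is easy to measure yourself. Here is a minimal Pine Script v5 sketch meant to be run on a standard candle chart: it requests the synthetic Heikin Ashi open for the same symbol and timeframe and plots its distance from the real market open, i.e., the artificial "slippage" an HA-chart backtest would silently collect on every next-bar fill.
//@version=5
indicator("HA open vs. market open (illustrative sketch)")
// Synthetic Heikin Ashi open for the chart's symbol and timeframe
haOpen = request.security(ticker.heikinashi(syminfo.tickerid), timeframe.period, open)
// Positive values: the HA fill would sit above the real open; negative: below it
fillDelta = haOpen - open
plot(fillDelta, "HA open minus market open", style=plot.style_columns, color=fillDelta >= 0 ? color.red : color.green)
hline(0)
The delta is rarely exactly zero, which is the dislocation described above.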
Another example is a Renko chart, a chart type that only measures price movement. The purpose of a Renko chart is to cluster price action into regular intervals, which removes the time element. Because TradingView does not provide tick data as a price source, it relies on chart interval close values to construct Renko bricks. As a consequence, a new brick is constructed only when the interval close penetrates one or more brick thresholds. When a new brick starts on the chart, it is because the previous interval’s close was above or below the next brick threshold. The open price of the next brick will likely not represent the current price at the time the new brick begins, so correctly simulating an order is impossible.
Some traders have argued with us that backtesting and trading off HA charts and other non-standard charts is useful, and so we have written this script to show traders what happens when order fills from backtesting on non-standard charts are compared to real-world fills at market prices.
Let’s review how TV backtesting works. TV backtesting uses a broker emulator to execute orders. When an order is executed by the broker emulator on historical bars, the price used for the fill is either the close of the order’s submission bar or, more often, the open of the next. The broker emulator only has access to the chart’s prices, and so it uses those prices to fill orders. When backtesting is run on a non-standard chart type, orders are filled at non-standard prices, and so backtesting results are non-standard—i.e., as unrealistic as the prices appearing on non-standard charts. This is not a bug; where else is the broker emulator going to fetch prices than from the chart?
This script is a strategy that you can run on either standard or non-standard chart types. It is meant to help traders understand the differences between backtests run on both types of charts. For every backtest, a label at the end of the chart shows two global net profit results for the strategy:
• The net profits (in currency) calculated by TV backtesting with orders filled at the chart’s prices.
• The net profits (in currency) calculated from the same orders, but filled at market prices (fetched through security() calls from the underlying real market prices) instead of the chart’s prices.
If you run the script on a non-standard chart, the top result in the label will be the result you would normally get from the TV backtesting results window. The bottom result will show you a more realistic result because it is calculated from real market fills.
If you run the script on a normal chart type (bars, candles, hollow candles, line, area or baseline) you will see the same result for both net profit numbers since both are run on the same real market prices. You will sometimes see slight discrepancies due to occasional differences between chart prices and the corresponding information fetched through security() calls.
Features
• Results shown in the Data Window (third icon from the top right of your chart) are:
— Cumulative results
— For each order execution bar on the chart, the chart and market previous and current fills, and the trade results calculated from both chart and market fills.
• You can choose between 2 different strategies, both elementary.
• You can use HA prices for the calculations determining entry/exit conditions. You can use this to see how a strategy calculated from HA values can run on a normal chart. You will notice that such strategies will not produce the same results as the real market results generated from HA charts. This is due to the different environment the backtest runs in; for example, position sizes for entries on the same bar will be calculated differently because HA and standard chart close prices differ.
• You can choose repainting/non-repainting signals.
• You can show MAs, entry/exit markers and market fill levels.
• You can show candles built from the underlying market prices.
• You can color the background for occurrences where an order is filled at a different real market price than the chart’s price.
Notes
• On some non-standard chart types you will not obtain any results. This is sometimes due to how certain non-standard chart types work, and sometimes because the script will not emit orders if no underlying market information is detected.
• The script illustrates how those who want to use HA values to calculate conditions can do so from a standard chart. They will then be getting orders emitted on HA conditions but filled at more realistic prices because their strategy can run on a standard chart.
• On some non-standard chart types you will see market results surpass chart results. While this may seem interesting, our way of looking at it is that it points to how unreliable non-standard chart backtesting is, and why it should be avoided.
• In order not to extend an already long description, we do not discuss the particulars of executing orders on the realtime bar when using non-standard charts. Unless you understand the minute details of what’s going on in the realtime bar on a particular non-standard chart type, we recommend staying away from this.
• Some traders ask us: Why does TradingView allow backtesting on non-standard chart types if it produces unrealistic results? That’s somewhat like asking a hammer manufacturer why it makes hammers if hammers can hurt you. We believe it’s a trader’s responsibility to understand the tools he is using.
Takeaways
• Non-standard charts are not bad per se, but they can be badly used.
• TV backtesting on non-standard charts is not broken and doesn’t require fixing. Traders asking for a fix are in dire need of learning more about trading. We recommend they stop trading until they understand why.
• Stay away from—even better, report—any vendor presenting you with strategies running on non-standard charts and implying they are showing reliable results.
• If you don’t understand everything we discussed, don’t use non-standard charts at all.
• Study carefully how non-standard charts are built and the inevitable compromises used in calculating them so you can understand their limitations.
Thanks to @allanster and @mortdiggiddy for their help in editing this description.
Look first. Then leap.
Heikin-Ashi Mean Reversion Oscillator [Alpha Extract]
The Heikin-Ashi Mean Reversion Oscillator combines the smoothing characteristics of Heikin-Ashi candlesticks with mean reversion analysis to create a powerful momentum oscillator. This indicator applies Heikin-Ashi transformation twice - first to price data and then to the oscillator itself - resulting in smoother signals while maintaining sensitivity to trend changes and potential reversal points.
🔶 CALCULATION
Heikin-Ashi Transformation: Converts regular OHLC data to smoothed Heikin-Ashi values
Component Analysis: Calculates trend strength, body deviation, and price deviation from mean
Oscillator Construction: Combines components with weighted formula (40% trend strength, 30% body deviation, 30% price deviation)
Double Smoothing: Applies EMA smoothing and second Heikin-Ashi transformation to oscillator values
Signal Generation: Identifies trend changes and crossover points with overbought/oversold levels
Formula:
HA Close = (Open + High + Low + Close) / 4
HA Open = (Previous HA Open + Previous HA Close) / 2
Trend Strength = Normalized consecutive HA candle direction
Body Deviation = (HA Body - Mean Body) / Mean Body * 100
Price Deviation = ((HA Close - Price Mean) / Price Mean * 100) / Standard Deviation * 25
Raw Oscillator = (Trend Strength * 0.4) + (Body Deviation * 0.3) + (Price Deviation * 0.3)
Final Oscillator = 50 + (EMA(Raw Oscillator) / 2)
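As a rough Pine Script v5 translation of the formula above (not the published source; the "consecutive direction" normalization and other internals here are simplifying assumptions):
//@version=5
indicator("HA mean-reversion oscillator sketch")
len    = input.int(14, "Oscillator period")
smooth = input.int(2, "Smoothing")
// Heikin Ashi transformation of the chart's candles
haClose = (open + high + low + close) / 4
haOpen  = float(na)
haOpen := na(haOpen[1]) ? (open + close) / 2 : (haOpen[1] + haClose[1]) / 2
// Components, loosely following the published formula
dir           = haClose > haOpen ? 1.0 : -1.0
trendStrength = ta.sma(dir, len) * 100                      // crude proxy for "consecutive direction"
haBody        = math.abs(haClose - haOpen)
meanBody      = ta.sma(haBody, len)
bodyDev       = meanBody != 0 ? (haBody - meanBody) / meanBody * 100 : 0.0
priceMean     = ta.sma(haClose, len)
priceStd      = ta.stdev(haClose, len)
priceDev      = priceStd != 0 ? (haClose - priceMean) / priceMean * 100 / priceStd * 25 : 0.0
rawOsc   = trendStrength * 0.4 + bodyDev * 0.3 + priceDev * 0.3
finalOsc = 50 + ta.ema(rawOsc, smooth) / 2
plot(finalOsc, "Oscillator", color=color.teal)
hline(70)
hline(50)
hline(30)
The second Heikin-Ashi transformation the indicator applies to the oscillator itself is omitted here for brevity.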
🔶 DETAILS
Visual Features:
Heikin-Ashi Candlesticks: Smoothed oscillator representation using HA transformation with vibrant teal/red coloring
Overbought/Oversold Zones: Horizontal lines at customizable levels (default 70/30) with background highlighting in extreme zones
Moving Averages: Optional fast and slow EMA overlays for additional trend confirmation
Signal Dashboard: Real-time table showing current oscillator status (Overbought/Oversold/Bullish/Bearish) and buy/sell signals
Reference Lines: Middle line at 50 (neutral), with 0 and 100 boundaries for range visualization
Interpretation:
Above 70: Overbought conditions, potential selling opportunity
Below 30: Oversold conditions, potential buying opportunity
Bullish HA Candles: Green/teal candles indicate upward momentum
Bearish HA Candles: Red candles indicate downward momentum
MA Crossovers: Fast EMA above slow EMA suggests bullish momentum, below suggests bearish momentum
Zone Exits: Price moving out of extreme zones (above 70 or below 30) often signals trend continuation
🔶 EXAMPLES
Mean Reversion Signals: When the oscillator reaches extreme levels (above 70 or below 30), it identifies potential reversal points where price may revert to the mean.
Example: Oscillator reaching 80+ levels during strong uptrends often precedes short-term pullbacks, providing profit-taking opportunities.
Trend Change Detection: The double Heikin-Ashi smoothing helps identify genuine trend changes while filtering out market noise.
Example: When oscillator HA candles change from red to teal after oversold readings, this confirms potential trend reversal from bearish to bullish.
Moving Average Confirmation: Fast and slow EMA crossovers on the oscillator provide additional confirmation of momentum shifts.
Example: Fast EMA crossing above slow EMA while oscillator is rising from oversold levels provides strong bullish confirmation signal.
Dashboard Signal Integration: The real-time dashboard combines oscillator status with directional signals for quick decision-making.
Example: Dashboard showing "Oversold" status with "BUY" signal when HA candles turn bullish provides clear entry timing.
🔶 SETTINGS
Customization Options:
Calculation: Oscillator period (default 14), smoothing factor (1-50, default 2)
Levels: Overbought threshold (50-100, default 70), oversold threshold (0-50, default 30)
Moving Averages: Toggle display, fast EMA length (default 9), slow EMA length (default 21)
Visual Enhancements: Show/hide signal dashboard, customizable table position
Alert Conditions: Oversold bounce, overbought reversal, bullish/bearish MA crossovers
The Heikin-Ashi Mean Reversion Oscillator provides traders with a sophisticated momentum tool that combines the smoothing benefits of Heikin-Ashi analysis with mean reversion principles. The double transformation process creates cleaner signals while the integrated dashboard and multiple confirmation methods help traders identify high-probability entry and exit points during both trending and ranging market conditions.
Adaptive Candlestick Pattern Recognition System
█ INTRODUCTION
Nearly three years in the making, intermittently worked on in the few spare hours of weekends and time off, this is a passion project I undertook to flesh out my skills as a computer programmer. This script currently recognizes 85 different candlestick patterns ranging from one to five candles in length. It also performs statistical analysis on those patterns to determine prior performance and changes the coloration of those patterns based on that performance. In searching TradingView's script library for scripts similar to this one, I found a handful. However, when I reviewed the ones that were open source, I did not see many that truly captured the power of PineScript or leveraged the way it works to create efficient and reliable code; this was one of the main driving factors for releasing this 5,000+ line behemoth open source.
Please take the time to review this description and source code to utilize this script to its fullest potential.
█ CONCEPTS
This script covers the following topics: Candlestick Theory, Trend Direction, Higher Timeframes, Price Analysis, Statistic Analysis, and Code Design.
Candlestick Theory - This script focuses solely on the concept of Candlestick Theory: arrangements of candlesticks may form certain patterns that can potentially influence the future price action of assets which experience those patterns. A full list of patterns (grouped by pattern length) will be in its own section of this description. This script contains two modes of operation for identifying candlestick patterns, 'CLASSIC' and 'BREAKOUT'.
CLASSIC: In this mode, candlestick patterns will be identified whenever they appear. The user has a wide variety of inputs to manipulate that can change how certain patterns are identified and even enable alerts to notify themselves when these patterns appear. Each pattern selected to appear will have its Profit or Loss (P/L) calculated, starting from the open of the first candle succeeding the pattern to a candle close a specified number of candles ahead. These P/L calculations are then collected for each pattern, and split among partitions of prior price action of the asset the script is currently applied to (more on that in Higher Timeframes).
BREAKOUT: In this mode, P/L calculations are held off until a breakout direction has been confirmed. The user may specify the number of candles ahead of a pattern's appearance (from one to five) that a pattern has to confirm a breakout in either an upward or downward direction. A breakout is constituted when there is a candle following the appearance of the pattern that closes above/at the highest high of the pattern, or below/at its lowest low. Only then will percent return calculations be performed for the pattern that's been identified, and these percent returns are broken up not only by the partition they had appeared in but also by the breakout direction itself. Patterns which do not breakout in either direction will be ignored, along with having their labels deleted.
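A schematic of the breakout check in BREAKOUT mode, assuming for illustration that a pattern ended exactly confirmWithin bars ago (the real script tracks each pattern with its own object rather than the fixed offsets used here):
//@version=5
indicator("Breakout confirmation sketch", overlay=true)
patternLen    = 3                                          // length of the hypothetical pattern
confirmWithin = input.int(3, "Bars allowed to confirm", minval=1, maxval=5)
// Extremes of a pattern assumed to have ended `confirmWithin` bars ago
patternHigh = ta.highest(high, patternLen)[confirmWithin]
patternLow  = ta.lowest(low, patternLen)[confirmWithin]
// Did any close since the pattern ended break out of its range?
brokeUp   = false
brokeDown = false
for i = 0 to confirmWithin - 1
    brokeUp   := brokeUp or close[i] >= patternHigh
    brokeDown := brokeDown or close[i] <= patternLow
bgcolor(brokeUp ? color.new(color.green, 85) : brokeDown ? color.new(color.red, 85) : na)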
In both of these modes, patterns may be overridden. Overrides occur when a smaller pattern has been detected and ends up becoming one (or more) of the candles of a larger pattern. A key example of this would be the Bearish Engulfing and the Three Outside Down patterns. A Three Outside Down necessitates a Bearish Engulfing as the first two candles in it, while the third candle closes lower. When a pattern is overridden, the return for that pattern will no longer be tracked. Overrides will not occur if the tail end of a larger pattern occurs at the beginning of a smaller pattern (Ex: if a Bullish Engulfing forms from the third candle of a Three Outside Down and the candle immediately following that pattern, the Three Outside Down will not be overridden).
Important Functionality Note: These patterns are only searched for at the most recently closed candle, not on the currently closing candle, which creates an offset of one for this script's execution. (SEE LIMITATIONS)
Trend Direction - Many of the patterns require a trend direction prior to their appearance. Following TradingView's own publication of candlestick patterns, I utilize a similar method for determining trend direction: Moving Averages are used to determine which trend is currently in place when candlestick patterns are sought out. The user has access to two Moving Averages and may individually modify each of the following for both: the Moving Average type (a list of 9), its length, width, source values, and all variables associated with the two special Moving Averages (Least Squares and Arnaud Legoux).
There are 3 settings for these Moving Averages: the first two switch between the two Moving Averages, and the third uses both. When using individual Moving Averages, the user may select a 'price point' to compare against the Moving Average (default is close). This price point is compared to the Moving Average at the candles prior to the appearance of candle patterns. Meaning: the close compared to the Moving Average two candles behind determines the trend direction used for Candlestick Analysis of one-candle patterns; three candles behind for two-candle patterns, and so on. If the selected price point is above the Moving Average, then the current trend is an 'uptrend', 'downtrend' otherwise.
The third setting using both Moving Averages will compare the lengths of each, and trend direction is determined by the shorter Moving Average compared to the longer one. If the shorter Moving Average is above the longer, then the current trend is an 'uptrend', 'downtrend' otherwise. If the lengths of the Moving Averages are the same, or both Moving Averages are Symmetrical, then MA1 will be used by default. (SEE LIMITATIONS)
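A minimal sketch of the trend-direction check, assuming a simple SMA and a two-candle pattern (the script itself offers nine MA types plus the single-MA and dual-MA comparisons described above):
//@version=5
indicator("Trend direction sketch", overlay=true)
maLen      = input.int(50, "MA length")
patternLen = 2                       // e.g. checking a two-candle pattern
offset     = patternLen + 1          // three candles behind for a two-candle pattern, per the description above
ma        = ta.sma(close, maLen)
uptrend   = close[offset] > ma[offset]   // single-MA mode: price point vs. Moving Average
downtrend = not uptrend
// "BOTH" mode: the shorter MA compared against the longer one instead
maFast      = ta.sma(close, 20)
maSlow      = ta.sma(close, 50)
uptrendBoth = maFast[offset] > maSlow[offset]
plot(ma, "Trend MA", color=uptrend ? color.green : color.red)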
Higher Timeframes - This script employs the use of Higher Timeframes with a few request.security calls. The purpose of these calls is strictly for the partitioning of an asset's chart, splitting the returns of patterns into three separate groups. The four inputs in control of this partitioning split the chart based on: A given resolution to grab values from, the length of time in that resolution, and 'Upper' and 'Lower Limits' which split the trading range provided by that length of time in that resolution that forms three separate groups. The default values for these four inputs will partition the current chart by the yearly high-low range where: the 'Upper' partition is the top 20% of that trading range, the 'Middle' partition is 80% to 33% of the trading range, and the 'Lower' partition covers the trading range within 33% of the yearly low.
Patterns which are identified by this script will have their returns grouped together based on which partition they had appeared in. For example, a Bullish Engulfing which occurs within a third of the yearly low will have its return placed separately from a Bullish Engulfing that occurred within 20% of the yearly high. The idea is that certain patterns may perform better or worse depending on when they had occurred during an asset's trading range.
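A sketch of the partitioning logic, with assumed defaults of a daily resolution, roughly one year of daily bars, and the 20%/33% limits described above (the input names here are illustrative, not the script's actual ones):
//@version=5
indicator("Range partition sketch", overlay=true)
res      = input.timeframe("D", "Partition resolution")
lenBack  = input.int(252, "Bars of that resolution to sample")
upperPct = input.float(0.80, "Upper limit (fraction of range)")
lowerPct = input.float(0.33, "Lower limit (fraction of range)")
// Approximate yearly high/low pulled from the higher timeframe
rangeHigh = request.security(syminfo.tickerid, res, ta.highest(high, lenBack))
rangeLow  = request.security(syminfo.tickerid, res, ta.lowest(low, lenBack))
upperLine = rangeLow + (rangeHigh - rangeLow) * upperPct
lowerLine = rangeLow + (rangeHigh - rangeLow) * lowerPct
// 2 = upper, 1 = middle, 0 = lower partition for the current close
partition = close >= upperLine ? 2 : close <= lowerLine ? 0 : 1
plot(upperLine, "Upper partition boundary", color=color.red)
plot(lowerLine, "Lower partition boundary", color=color.green)
bgcolor(partition == 2 ? color.new(color.red, 90) : partition == 0 ? color.new(color.green, 90) : na)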
Price Analysis - Price Analysis is a major part of this script's functionality as it can fundamentally change how patterns are shown to the user. The settings related to Price Analysis include setting the number of candles ahead of a pattern's appearance to determine the return of that pattern. In 'BREAKOUT' mode, an additional setting allows the user to specify where the P/L calculation will begin for a pattern that had appeared and confirmed. (SEE LIMITATIONS)
The calculation for percent returns of patterns is illustrated with the following pseudo-code (CLASSIC mode, this is a simplified version of the actual code):
type patternObj
    int ID
    int partition

type returnsArray
    float[] returns

// No pattern found = na returned
patternObj TEST_VAL = f_FindPattern()
priorTestVal = TEST_VAL

if not na(priorTestVal)
    pnlMatrixRow  = priorTestVal.ID
    pnlMatrixCol  = priorTestVal.partition
    matrixReturn  = matrix.get(PERCENT_RETURNS, pnlMatrixRow, pnlMatrixCol)
    percentReturn = ((close - open) / open) * 100
    array.push(matrixReturn.returns, percentReturn)
Statistic Analysis - This script uses Pine's built-in array functions to conduct the Statistic Analysis for patterns. When a pattern is found and its P/L calculation is complete, its return is added to a 'Return Array' User-Defined-Type that contains numerous fields which retain information on a pattern's prior performance. The actual UDT is as follows:
type returnArray
    float[] returns = na
    int size = 0
    float avg = 0
    float median = 0
    float stdDev = 0
    int[] polarities = na
All values within this UDT are updated when a return is added to it (some based on user input). The results of array.avg(), array.median(), and array.stdev() are run and saved into their respective fields after a return is placed in the 'returns' array. The 'polarities' integer array is what changes based on user input: the user specifies two percentages that define 'Positive' and 'Negative' returns for patterns, and when a pattern returns above, below, or between these two values, the corresponding index of this array is incremented to reflect the kind of return that pattern just experienced.
These values (plus the full name, partition the pattern occurred in, and a 95% confidence interval of expected returns) will be displayed to the user on the tooltip of the labels that identify patterns. Simply scroll over the pattern label to view each of these values.
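The statistics themselves are ordinary array operations. A self-contained sketch, assuming a hypothetical trigger (a pattern "completing" every 20 bars) and the normal-approximation confidence interval the LIMITATIONS section mentions; the tolerances and names are illustrative:
//@version=5
indicator("Pattern return stats sketch")
negTol = input.float(-3.0, "Negative return tolerance (%)")
posTol = input.float(3.0, "Positive return tolerance (%)")
// Hypothetical containers for one pattern/partition combination
var float[] returns    = array.new_float()
var int[]   polarities = array.new_int(3, 0)   // [negative, neutral, positive] counts
// Illustration only: pretend a pattern completes every 20 bars with a 10-bar return
if bar_index > 10 and bar_index % 20 == 0
    ret = (close - open[10]) / open[10] * 100
    array.push(returns, ret)
    idx = ret <= negTol ? 0 : ret >= posTol ? 2 : 1
    array.set(polarities, idx, array.get(polarities, idx) + 1)
n      = array.size(returns)
avg    = n > 0 ? array.avg(returns) : na
med    = n > 0 ? array.median(returns) : na
sd     = n > 1 ? array.stdev(returns) : na
ciHalf = n > 1 ? 1.96 * sd / math.sqrt(n) : na   // 95% CI half-width under a normal approximation
plot(avg, "Average return (%)")
plot(avg + ciHalf, "CI upper", color=color.gray)
plot(avg - ciHalf, "CI lower", color=color.gray)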
Code Design - Overall this script is as much of an art piece as it is functional. Its design features numerous depictions of ASCII Art that illustrate what is being attempted by the functions that identify patterns, and an incalculable amount of time was spent rewriting portions of code to improve its efficiency. Admittedly, this final version is nearly 1,000 lines shorter than a previous version (one which took nearly 30 seconds after compilation to run, and didn't do nearly half of what this version does). The use of UDTs, especially the 'patternObj' one crafted and redesigned from the Hikkake Hunter 2.0 I published last month, played a significant role in making this script run efficiently. There is a slight rigidity in some of this code mainly around pattern IDs which are responsible for displaying the abbreviation for patterns (as well as the full names under the tooltips, and the matrix row position for holding returns), as each is hard-coded to correspond to that pattern.
However, one thing I would like to mention is the extensive use of global variables for pattern detection. Many scripts I looked over for ideas on how to identify candlestick patterns had the same idea: break the pattern into a set of logical 'true/false' statements derived from historically referencing candle OHLC values. Some scripts that identified upwards of 20 to 30 patterns would reference Pine's built-in OHLC values for each pattern individually, potentially requesting information from TradingView's servers numerous times when it could easily be saved into a variable, re-used, and requested only once per candle (which is what this script does).
█ FEATURES
This script features a massive amount of switches, options, floating point values, detection settings, and methods for identifying/tailoring pattern appearances. All modifiable inputs for patterns are grouped together based on the number of candles they contain. Other inputs (like those for statistics settings and coloration) are grouped separately and presented in a way I believe makes the most sense.
Not mentioned above is the coloration settings. One of the aims of this script was to make patterns visually signify their behavior to the user when they are identified. Each pattern has its own collection of returns which are analyzed and compared to the inputs of the user. The user may choose the colors for bullish, neutral, and bearish patterns. They may also choose the minimum number of patterns needed to occur before assigning a color to that pattern based on its behavior; a color for patterns that have not met this minimum number of occurrences yet, and a color for patterns that are still processing in BREAKOUT mode.
There are also an additional three settings which alter the color scheme for patterns: Statistic Point-of-Reference, Adaptive coloring, and Hard Limiting. The Statistic Point-of-Reference decides which value (average or median) will be compared against the 'Negative' and 'Positive Return Tolerance'(s) to guide the coloration of the patterns (or for Adaptive Coloring, the generation of a color gradient).
Adaptive Coloring will have this script produce a gradient that patterns will be colored along. The more bullish or bearish a pattern is, the further along the gradient those patterns will be colored starting from the 'Neutral' color (hard lined at the value of 0%: values above this will be colored bullish, bearish otherwise). When Adaptive Coloring is enabled, this script will request the highest and lowest values (these being the Statistic Point-of-Reference) from the matrix containing all returns and rewrite global variables tied to the negative and positive return tolerances. This means that all patterns identified will be compared with each other to determine bullish/bearishness in Adaptive Coloring.
Hard Limiting will prevent these global variables from being rewritten, so patterns whose Statistic Point-of-Reference exceed the return tolerances will be fully colored the bullish or bearish colors instead of a generated gradient color. (SEE LIMITATIONS)
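A sketch of the gradient idea using Pine's built-in color.from_gradient(); statRef stands in for a pattern's average (or median) return, which in the real script comes from the returns matrix rather than the placeholder series used here:
//@version=5
indicator("Adaptive coloring sketch")
negTol = input.float(-3.0, "Negative return tolerance (%)")
posTol = input.float(3.0, "Positive return tolerance (%)")
bearColor    = input.color(color.red, "Bearish color")
neutralColor = input.color(color.gray, "Neutral color")
bullColor    = input.color(color.green, "Bullish color")
// Placeholder for a pattern's average/median percent return
statRef = ta.roc(close, 10)
// Gradient anchored at 0%: negative values fade toward the bearish color, positive toward the bullish color
patternColor = statRef < 0 ? color.from_gradient(statRef, negTol, 0, bearColor, neutralColor) : color.from_gradient(statRef, 0, posTol, neutralColor, bullColor)
plot(statRef, "Stat reference", color=patternColor, linewidth=3)
Values beyond the tolerances are clamped to the end colors by color.from_gradient(), which is roughly the behavior Hard Limiting preserves; without it, the description says the tolerances are rewritten to the matrix-wide minimum and maximum.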
Apart from the Candle Detection Modes (CLASSIC and BREAKOUT), there's an additional two inputs which modify how this script behaves grouped under a "MASTER DETECTION SETTINGS" tab. These two "Pattern Detection Settings" are 'SWITCHBOARD' and 'TARGET MODE'.
SWITCHBOARD: Every single pattern has a switch that is associated with its detection. When a switch is enabled, the code which searches for that pattern will be run. With the Pattern Detection Setting set to this, all patterns that have their switches enabled will be sought out and shown.
TARGET MODE: There is an additional setting which operates on top of 'SWITCHBOARD' that singles out an individual pattern the user specifies through a drop down list. The names of every pattern recognized by this script will be present along with an identifier that shows the number of candles in that pattern (Ex: " (# candles)"). All patterns enabled in the switchboard will still have their returns measured, but only the pattern selected from the "Target Pattern" list will be shown. (SEE LIMITATIONS)
The vast majority of other features are held in the one, two, and three candle pattern sections.
For one-candle patterns, there are:
3 — Settings related to defining 'Tall' candles:
The number of candles to sample for previous candle-size averages.
The type of comparison done for 'Tall' Candles: Settings are 'RANGE' and 'BODY'.
The 'Tolerance' for tall candles, specifying what percent of the 'average' size candles must exceed to be considered 'Tall'.
When the 'Tall Candle Setting' is set to RANGE, the current candle's high-low range is compared against the average high-low range of the sampled candles to determine whether a candle is 'Tall'. Otherwise, the candle bodies (absolute value of close - open) are compared instead; a minimal sketch follows this section. (SEE LIMITATIONS)
Hammer Tolerance - How large a 'discarded wick' may be before it disqualifies a candle from being a 'Hammer'.
Discarded wicks are compared to the size of the Hammer's candle body and are dependent upon the body's center position. Hammer bodies closer to the high of the candle will have the upper wick used as its 'discarded wick', otherwise the lower wick is used.
9 — Doji Settings, some pulled from an old Doji Hunter I made a while back:
Doji Tolerance - How large the body of a candle may be compared to the range to be considered a 'Doji'.
Ignore N/S Dojis - Turns off Trend Direction for non-special Dojis.
GS/DF Doji Settings - 2 Inputs that enable and specify how large wicks that typically disqualify Dojis from being 'Gravestone' or 'Dragonfly' Dojis may be.
4 Settings related to 'Long Wick Doji' candles detailed below.
A Tolerance for 'Rickshaw Man' Dojis specifying how close the center of the body must be to the range to be valid.
The 4 settings the user may modify for 'Long Legged' Dojis are: A Sample Base for determining the previous average of wicks, a Sample Length specifying how far back to look for these averages, a Behavior Setting to define how 'Long Legged' Dojis are recognized, and a tolerance to specify how large in comparison to the prior wicks a Doji's wicks must be to be considered 'Long Legged'.
The 'Sample Base' list has two settings:
RANGE: The wicks of prior candles are compared to their candle ranges and the 'wick averages' will be what the average percent of ranges were in the sample.
WICKS: The size of the wicks themselves are averaged and returned for comparing against the current wicks of a Doji.
The 'Behavior' list has three settings:
ONE: Only one wick length needs to exceed the average by the tolerance for a Doji to be considered 'Long Legged'.
BOTH: Both wick lengths need to exceed the average of the tolerance of their respective wicks (upper wicks are compared to upper wicks, lower wicks compared to lower) to be considered 'Long Legged'.
AVG: Both wicks and the averages of the previous wicks are added together, divided by two, and compared. If the 'average' of the current wicks exceeds this combined average of prior wicks by the tolerance, then this would constitute a valid 'Long Legged' Doji. (For Dojis in general - SEE LIMITATIONS)
The final input is one related to candle patterns which require a Marubozu candle in them. The two settings for this input are 'INCLUSIVE' and 'EXCLUSIVE'. If INCLUSIVE is selected, any opening/closing variant of Marubozu candles will be allowed in the patterns that require them.
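To make the 'Tall' and Doji definitions concrete, here is a small sketch using assumed values (the sample length, tolerances, and input names are illustrative; see the inputs' tooltips for the script's actual defaults):
//@version=5
indicator("Tall candle / Doji sketch", overlay=true)
sampleLen = input.int(14, "Candles sampled for averages")
tallTol   = input.float(125.0, "Tall tolerance (% of average)")
dojiTol   = input.float(5.0, "Doji body tolerance (% of range)")
useRange  = input.string("RANGE", "Tall candle setting", options=["RANGE", "BODY"])
candleRange = high - low
candleBody  = math.abs(close - open)
// 'Tall' candle: current size exceeds the prior average size by the tolerance
avgRange = ta.sma(candleRange, sampleLen)[1]
avgBody  = ta.sma(candleBody, sampleLen)[1]
avgSize  = useRange == "RANGE" ? avgRange : avgBody
currSize = useRange == "RANGE" ? candleRange : candleBody
isTall   = currSize > avgSize * (tallTol / 100)
// Doji: the body is a tiny fraction of the candle's range
isDoji = candleRange > 0 and candleBody <= candleRange * (dojiTol / 100)
plotshape(isTall, "Tall", style=shape.circle, location=location.abovebar, color=color.orange)
plotshape(isDoji, "Doji", style=shape.cross, location=location.belowbar, color=color.blue)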
For two-candle patterns, there are:
2 — Settings which define 'Engulfing' parameters:
Engulfing Setting - Two options, RANGE or BODY which sets up how one candle may 'engulf' the previous.
Inclusive Engulfing - Boolean which enables if 'engulfing' candles can be equal to the values needed to 'engulf' the prior candle.
For the 'Engulfing Setting':
RANGE: If the second candle's high-low range completely covers the high-low range of the prior candle, this is recognized as 'engulfing'.
BODY: If the second candle's open-close range completely covers the open-close range of the previous candle, this is recognized as 'engulfing'; a minimal sketch follows this section. (SEE LIMITATIONS)
4 — Booleans specifying different settings for a few patterns:
One which allows for 'opens within body' patterns to let the second candle's open/close values match the prior candles' open/close.
One which forces 'Kicking' patterns to have a gap if the Marubozu setting is set to 'INCLUSIVE'.
And Two which dictate if the individual candles in 'Stomach' patterns need to be 'Tall'.
8 — Floating point values which affect 11 different patterns:
One which determines the distance the close of the first candle in a 'Hammer Inverted' pattern must be to the low to be considered valid.
One which affects how close the opens/closes need to be for all 'Lines' patterns (Bull/Bear Meeting/Separating Lines).
One that allows some leeway with the 'Matching Low' pattern (gives a small range the second candle close may be within instead of needing to match the previous close).
Three tolerances for On Neck/In Neck patterns (2 and 1 respectively).
A tolerance for the Thrusting pattern which give a range the close the second candle may be between the midpoint and close of the first to be considered 'valid'.
A tolerance for the two Tweezers patterns that specifies how close the highs and lows of the patterns need to be to each other to be 'valid'.
The first On Neck tolerance specifies how large the lower wick of the first candle may be (as a % of that candle's range) before the pattern is invalidated. The second tolerance specifies how far up the first candle's lower wick, toward its close, the second candle's close may be for this pattern. The third tolerance, for the In Neck pattern, determines how far into the body of the first candle the second may close to be 'valid'.
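A sketch of the two 'Engulfing' settings (the RANGE/BODY switch and the inclusive option), again as a simplified stand-in for the script's own checks:
//@version=5
indicator("Engulfing sketch", overlay=true)
setting   = input.string("BODY", "Engulfing setting", options=["RANGE", "BODY"])
inclusive = input.bool(false, "Inclusive engulfing")
// Current candle's extent vs. the prior candle's extent
currTop = setting == "RANGE" ? high : math.max(open, close)
currBot = setting == "RANGE" ? low : math.min(open, close)
prevTop = setting == "RANGE" ? high[1] : math.max(open[1], close[1])
prevBot = setting == "RANGE" ? low[1] : math.min(open[1], close[1])
engulfs = inclusive ? (currTop >= prevTop and currBot <= prevBot) : (currTop > prevTop and currBot < prevBot)
bullishEngulfing = engulfs and close > open and close[1] < open[1]
bearishEngulfing = engulfs and close < open and close[1] > open[1]
plotshape(bullishEngulfing, "Bull engulfing", style=shape.triangleup, location=location.belowbar, color=color.green)
plotshape(bearishEngulfing, "Bear engulfing", style=shape.triangledown, location=location.abovebar, color=color.red)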
For the remaining patterns (3, 4, and 5 candles), there are:
3 — Settings for the Deliberation pattern:
A boolean which forces the open of the third candle to gap above the close of the second.
A tolerance which changes the proximity of the third candle's open to the second candle's close in this pattern.
A tolerance that sets the maximum size the third candle may be compared to the average of the first two candles.
One boolean value for the Two Crows patterns (standard and Upside Gapping) that forces the first two candles in the patterns to completely gap if disabled (candle 1's close < candle 2's low).
10 — Floating point values for the remaining patterns:
One tolerance for defining how much the size of each candle in the Identical Black Crows pattern may deviate from the average of themselves to be considered valid.
One tolerance for setting how close the opens/closes of certain three candle patterns may be to each other's opens/closes.*
Three floating point values that affect the Three Stars in the South pattern.
One tolerance for the Side-by-Side patterns - looks at the second and third candle closes.
One tolerance for the Stick Sandwich pattern - looks at the first and third candle closes.
A floating value that sizes the Concealing Baby Swallow pattern's 3rd candle wick.
Two values for the Ladder Bottom pattern which define a range that the third candle's wick size may be.
* This affects the Three Black Crows (non-identical) and Three White Soldiers patterns, each require the opens and closes of every candle to be near each other.
The first tolerance of the Three Stars in the South pattern affects the first candle body's center position, and defines where it must be above to be considered valid. The second tolerance specifies how close the second candle must be to this same position, as well as the deviation the ratio the candle body to its range may be in comparison to the first candle. The third restricts how large the second candle range may be in comparison to the first (prevents this pattern from being recognized if the second candle is similar to the first but larger).
The last two floating point values define upper and lower limits to the wick size of a Ladder Bottom's fourth candle to be considered valid.
█ HOW TO USE
While there are many moving parts to this script, I attempted to set the default values with what I believed may help identify the most patterns within reasonable definitions. When this script is applied to a chart, the Candle Detection Mode (along with the BREAKOUT settings) and all candle switches must be confirmed before patterns are displayed. All switches are on by default, so this gives the user an opportunity to pick which patterns to identify first before playing around in the settings.
All of the settings/inputs described above are meant for experimentation. I encourage the user to tweak these values at will to find which set ups work best for whichever charts they decide to apply these patterns to.
Refer to the patterns themselves during experimentation. The statistic information provided on the tooltips of the patterns are meant to help guide input decisions. The breadth of candlestick theory is deep, and this was an attempt at capturing what I could in its sea of information.
█ LIMITATIONS
DISCLAIMER: While it may seem a bit paradoxical that this script aims to use past performance to potentially measure future results, past performance is not indicative of future results. Markets are highly adaptive and often unpredictable. This script is meant as an informational tool to show how patterns may behave. There is no guarantee that confidence intervals (or any other metric measured with this script) are accurate to the performance of patterns; caution must be exercised with all patterns identified regardless of how much information regarding prior performance is available.
Candlestick Theory - As the name suggests, Candlestick Theory is a theory, and all theories come with their own limits. Some patterns identified by this script may be completely useless/unprofitable/unpredictable regardless of whatever combination of settings is used to identify them. However, if I truly believed this theory had no merit, this script would not exist. It is important to understand that this is a tool meant to be utilized with an array of others to procure positive (or negative, looking at you, short sellers) results when navigating the complex world of finance.
To address the functionality note, however: this script has an offset of 1 by default. Patterns will not be identified on the currently closing candle, only on the candle which has most recently closed. Attempting to have this script do both (offset by one or identify on close) led to more trouble than it was worth. I just want users to be aware that patterns will not be identified immediately when they appear.
Trend Direction - Moving Averages - There is a small quirk with how MA settings will be adjusted if the user inputs two moving averages of the same length when the "MA Setting" is set to 'BOTH'. If the Moving Averages have the same length, this script will default to using only MA 1, regardless of whether the types of Moving Averages are different. I will experiment in the future to alleviate/reduce this restriction.
Price Analysis - BREAKOUT mode - With how identifying patterns with a look-ahead confirmation works, the percent returns for patterns that break out in either direction will be calculated on the same candle regardless of whether the P/L Offset is set to 'FROM CONFIRMATION' or 'FROM APPEARANCE'. This same issue is present in the Hikkake Hunter script mentioned earlier. This does not mean the P/L calculations are incorrect; the offset for the calculation is set by the number of candles required to confirm the pattern if 'FROM APPEARANCE' is selected. It just means that these two different P/L calculations will complete at the same time, independent of the setting that has been selected.
Adaptive Coloring/Hard Limiting - Hard Limiting is only used with Adaptive Coloring and has no effect outside of it. If Hard Limiting is used, it is recommended to increase the 'Positive' and 'Negative' return tolerance values as a pattern's bullish/bearishness may be disproportionately represented with the gradient generated under a hard limit.
TARGET MODE - This mode will break rules regarding patterns that are overridden on purpose. If a pattern selected in TARGET mode would have otherwise been absorbed by a larger pattern, it will have that pattern's percent return calculated; potentially leading to duplicate returns being included in the matrix of all returns recognized by this script.
'Tall' Candle Setting - This is a wide-reaching setting, as approximately 30 different patterns or so rely on defining 'Tall' candles. Changing how 'Tall' candles are defined whether by the tolerance value those candles need to exceed or by the values of the candle used for the baseline comparison (RANGE/BODY) can wildly affect how this script functions under certain conditions. Refer to the tooltip of these settings for more information on which specific patterns are affected by this.
Doji Settings - There are roughly 10 or so two to three candle patterns which have Dojis as a part of them. If all Dojis are disabled, it will prevent some of these larger patterns from being recognized. This is a dependency issue that I may address in the future.
'Engulfing' Setting - Functionally, the two 'Engulfing' settings are quite different. Because of this, the 'RANGE' setting may cause certain patterns that would otherwise be valid under textbook and online references/definitions to not be recognized as such (like the Upside Gap Two Crows or Three Outside Down).
█ PATTERN LIST
This script recognizes 85 patterns upon initial release. I am open to adding additional patterns to it in the future and any comments/suggestions are appreciated. It recognizes:
15 — 1 Candle Patterns
4 Hammer type patterns: Regular Hammer, Takuri Line, Shooting Star, and Hanging Man
9 Doji Candles: Regular Dojis, Northern/Southern Dojis, Gravestone/Dragonfly Dojis, Gapping Up/Down Dojis, and Long-Legged/Rickshaw Man Dojis
White/Black Long Days
32 — 2 Candle Patterns
4 Engulfing type patterns: Bullish/Bearish Engulfing and Last Engulfing Top/Bottom
Dark Cloud Cover
Bullish/Bearish Doji Star patterns
Hammer Inverted
Bullish/Bearish Haramis + Cross variants
Homing Pigeon
Bullish/Bearish Kicking
4 Lines type patterns: Bullish/Bearish Meeting/Separating Lines
Matching Low
On/In Neck patterns
Piercing pattern
Shooting Star (2 Lines)
Above/Below Stomach patterns
Thrusting
Tweezers Top/Bottom patterns
Two Black Gapping
Rising/Falling Window patterns
29 — 3 Candle Patterns
Bullish/Bearish Abandoned Baby patterns
Advance Block
Collapsing Doji Star
Deliberation
Upside/Downside Gap Three Methods patterns
Three Inside/Outside Up/Down patterns (4 total)
Bullish/Bearish Side-by-Side patterns
Morning/Evening Star patterns + Doji variants
Stick Sandwich
Downside/Upside Tasuki Gap patterns
Three Black Crows + Identical variation
Three White Soldiers
Three Stars in the South
Bullish/Bearish Tri-Star patterns
Two Crows + Upside Gap variant
Unique Three River Bottom
3 — 4 Candle Patterns
Concealing Baby Swallow
Bullish/Bearish Three Line Strike patterns
6 — 5 Candle Patterns
Bullish/Bearish Breakaway patterns
Ladder Bottom
Mat Hold
Rising/Falling Three Methods patterns
█ WORKS CITED
Because of the amount of time needed to complete this script, I am unable to provide exact dates for when some of these references were used. I will also not provide every single reference, as citing a reference for each individual pattern and the place it was reviewed would lead to a bibliography larger than this script and its description combined. There were five major resources I used when building this script: one book, two websites (for various reasons including patterns, moving averages, and other articles of information), various scripts from TradingView's public library (including TradingView's own source code for *all* candle patterns), and PineScript's reference manual.
Bulkowski, Thomas N. Encyclopedia of Candlestick Patterns. Hoboken, New Jersey: John Wiley & Sons Inc., 2008. E-book (Google Books).
Various. Numerous webpages. CandleScanner. 2023. Online. Accessed 2020-2023.
Various. Numerous webpages. Investopedia. 2023. Online. Accessed 2020-2023.
█ ACKNOWLEDGEMENTS
I want to take the time here to thank all of my friends and family, both online and in real life, for the support they've given me over the last few years in this endeavor. My pets, who tried their hardest to keep me from completing it. And work, for the grit to continue pushing through until this script's completion.
This belongs to me just as much as it does anyone else. Whether you are an institutional trader, gold bug hedging against the dollar, retail ape who got in on a squeeze, or just parents trying to grow their retirement/save for the kids. This belongs to everyone.
Private Beta for new features to be tested can be found here .
Vires In Numeris
Hikkake Hunter 2.0
This script serves as a successor to a previous script I wrote for identifying Hikkakes nearly two years ago.
The old version has been preserved here:
█ OVERVIEW
This script is a rework of an old script that identified the Hikkake candlestick pattern. While this pattern is not usually considered part of the standard candlestick pattern set, I found a lot of value in working out a way to identify it. A Hikkake is a 3-candle pattern where the middle candle is nested within the range of the prior candle, and the candle that follows has a higher high and a higher low (bearish setup) or a lower high and a lower low (bullish setup). What makes this pattern unique is its "confirmation" status: within 3 candles of the pattern's appearance, there must be a candle that closes above the high (bullish setup) or below the low (bearish setup) of the second candle. Additional flexibility has been added which allows the user to specify the number of candles (up to 5) that the pattern has to confirm after its appearance.
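For readers unfamiliar with the setup, here is a bare-bones Pine Script v5 sketch of the identification and confirmation rules just described (the real script tracks each pattern with objects rather than the fixed-offset loop used here):
//@version=5
indicator("Hikkake sketch", overlay=true)
confirmWithin = input.int(3, "Candles allowed to confirm", minval=2, maxval=5)
// Candle 2 is an inside bar relative to candle 1
insideBar = high[1] < high[2] and low[1] > low[2]
// Candle 3 pokes out of the inside bar in one direction (the potential "false" move)
bearishSetup = insideBar and high > high[1] and low > low[1]   // higher high and higher low
bullishSetup = insideBar and high < high[1] and low < low[1]   // lower high and lower low
// Confirmation: within `confirmWithin` candles, a close back through the inside bar's extreme
bullConfirmed = false
bearConfirmed = false
for k = 1 to confirmWithin
    bullConfirmed := bullConfirmed or (bullishSetup[k] and close > high[k + 1])
    bearConfirmed := bearConfirmed or (bearishSetup[k] and close < low[k + 1])
plotshape(bullishSetup, "Bullish Hikkake setup", style=shape.triangleup, location=location.belowbar, color=color.green)
plotshape(bearishSetup, "Bearish Hikkake setup", style=shape.triangledown, location=location.abovebar, color=color.red)
bgcolor(bullConfirmed ? color.new(color.green, 85) : bearConfirmed ? color.new(color.red, 85) : na)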
█ CONCEPTS
This script will cover concepts mainly focusing on candlestick analysis, price analysis (with higher timeframes), and statistical analysis. I believe there is also educational value presented with the use of user-defined-types (UDTs) in accomplishing these concepts that I hope others will find useful.
Candlestick Analysis - Identification and confirmation of the patterns in the deprecated script were clunky and inefficient. While the previous script required the use of 6 candles to perform the confirmations of patterns (restricted solely to identifying patterns that confirmed in 3 candles or less), this script only requires 3 candles to identify and process patterns by utilizing a UDT representing a 'pattern object'. An object representing a pattern will be created when it has been identified, and fields within that object will be set for processing by the functions it is passed to. Pattern objects are held by a var array (values within the array persist between bars) and will be removed from this array once they have been confirmed or non-confirmed.
This is a significant deviation from the previous script's methods, as it prevents unnecessary re-evaluations of the confirmation status of patterns (i.e. Hikkakes confirmed on the first candle will no longer need to be checked for confirmations on the second or third; a pitfall of the deprecated version which required multiple booleans tracking prior confirmation statuses). This deviation is also what provides the flexibility in changing the number of candles that can pass before a pattern is deemed non-confirmed.
As multiple patterns can be confirmed simultaneously, this script uses another UDT representing a linked-list reduction of the pattern object used to process it. This liked-list object will then be used for Price Analysis.
Price Analysis - This script employs the use of a UDT which contains all the returns of confirmed patterns. The user specifies how many candles ahead of the confirmed pattern to calculate its return, as well as where this calculation begins. There are two settings: FROM APPEARANCE and FROM CONFIRMATION (default). Price differences are calculated from the open of the candle immediately following the candle which had confirmed the pattern to the close of the candle X candles ahead (default 10). ( SEE FEATURES )
Because of how Pine functions, this calculation necessitates a lookback on prior candles to identify when a pattern had been confirmed. This is accomplished with the following pseudo-code:
if not na(confirmed linked-list)
    for all confirmed in list
        GET MATRIX PLACEMENT
        offset = FROM CONFIRMATION ? 0 : # of candles to confirm
        openAtFind = open
        percent return = ((close - openAtFind) / openAtFind) * 100
        ADD percent return TO UDT IN MATRIX
All return UDTs are held in a matrix which breaks up these patterns into specific groups covered in the next section.
Higher Timeframes - This script makes a request.security call to a higher timeframe in order to identify a price range which breaks up these patterns into groups based on the 'partition' they had appeared in. The default values for this partitioning will break up the chart into three sections: upper, middle, and lower. The upper section represents the highest 20% of the yearly trading range that an asset has experienced. The lower section represents the trading range within a third (33%) of the yearly low. And the middle section represents the yearly high-low range between these two partitions.
The matrix containing all return UDTs will have these returns split up based on the number of candles required to confirm the pattern as well as the partition the pattern had appeared in. The underlying rationale is that patterns may perform better or worse at different parts of an asset's trading range.
Statistical Analysis - Once a pattern has been confirmed, the matrix containing all return UDTs will be queried to check if a 'returnArray' object has been created for that specific pattern. If not, one will be initialized and a confirmed linked-list object will be created that contains information pertinent to the matrix position of this object.
This matrix contains the returns of both the Bullish and Bearish Hikkake patterns, separated by the number of candles needed to confirm them, and by the partitions they had appeared in. For the standard 3 candles to confirm, this means the matrix will contain 18 elements (dependent on the number of candles allowed for confirmations; its size will range from 12 to 30).
When the required number of candles for Price Analysis passes, a percent return is calculated and added to the returnArray contained in the matrix at the location derived from the confirmed linked-list object's values. The return is added, and all values in the returnArray are updated using Pine's built in array.___ functions. This returnArray object contains the array of all returns, its size, its average, the median, the standard deviation of returns, and a separate 3-integer array which holds values that correspond to the types of returns experienced by this pattern (negative, neutral, and positive)*.
After a pattern has been confirmed, this script will place the partition and all of the aforementioned stats values (plus a 95% confidence interval of expected returns) related to that pattern onto the tooltip of the label that identifies it. This allows users to scroll over the label of a confirmed pattern to gauge its prior performance under specific conditions. The percent return of the specific pattern identified will later be placed onto the label tooltip as well. ( SEE LIMITATIONS )
The stats portion of this script also plays a significant role in how patterns are presented when using the Adaptive Coloring mode described in FEATURES .
*These values are incremented based on user-input related to what constitutes a 'negative' or 'positive' return. Default values would place any return by a pattern between -3% and 3% in the 'neutral' category, and values exceeding either end will be placed in the 'negative' or 'positive' categories.
█ FEATURES
This script contains numerous inputs for modifying its behavior and how patterns are presented/processed, separated into 5 groups.
Confirmation Setting - The most important input for this script's functioning. This input is a 'confirm=true' input and must be set by the user before the script is applied to the chart. It sets the number of candles that a pattern has to confirm once it has been identified.
Alert Settings - This group of booleans sets which types of alerts will fire during the scripts execution on the chart. If enabled, the four alerts will trigger when: a pattern has been identified, a pattern has been confirmed, a pattern has been non-confirmed, and show the return for that confirmed pattern in an alert. Because this script uses the 'alert' function and not 'alertcondition', these must be enabled before 'any alert() function call' is set in TradingView's 'alerts' settings.
Partition Settings - This group of inputs are responsible for creating (and viewing) the partitions that breaks the returns of the patterns identified up into their respective groups. The user may set the resolution to grab the range from, the length back of this resolution the partitions get their values from, the thresholds which breaks the partitions up into their groups, and modify the visibility (if they're shown, the colors, opacity) of these partitions.
Stats Settings - These inputs will drastically alter how patterns are presented and the resulting information derived from them after their appearance. Because of this section's importance, some of these inputs will be described in more detail.
P/L Sample Length - Defines the number of candles after the starting point to grab values from in the % return calculation for that pattern.
P/L Starting Point - Defines the starting point where the P/L calculation will take place. 'FROM APPEARANCE' will set the starting point at the candle immediately following the pattern's appearance. 'FROM CONFIRMATION' will place the starting point immediately following the candle which had confirmed the pattern. ( SEE LIMITATIONS )
Min Returns Needed - Sets how many times a specific pattern must appear (both by number of candles needed to confirm and by partition) before the statistics for that pattern are displayed onto the tooltip (and for gradient coloration in Adaptive Coloring mode).
Enable Adaptive Coloring - Changes the coloration of the patterns based on the bullish/bearishness of the specified Gradient Reference value of that pattern compared to the Return Tolerance values OR the minimum and maximum values of that specified Gradient Reference value contained in the matrix of all returns. This creates a color from a gradient using the user-specified colors and alters how many of the patterns may appear if prior performance is taken into account.
Gradient Reference - Defines which stats measure of returns will be used in the gradient color generation. The two settings are 'AVG' and 'MEDIAN'.
Hard Limit - This boolean sets whether the Return Tolerance values are held fixed instead of being replaced by larger values from the matrix of returns during color gradient generation. This changes the scale of the gradient: any pattern whose Gradient Reference value exceeds these tolerances is colored with the full bullish or bearish gradient color, and anything in between is given a color from the gradient. (A minimal sketch of this gradient logic appears after the settings list below.)
Visibility Settings - This last section includes all settings associated with the overall visibility of patterns found with this script. This includes the position of the labels and their colors (+ pattern colors without Adaptive Coloring being enabled), and showing patterns that were non-confirmed.
Most of the script's inputs have tooltip descriptions like these explaining what they do.
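As promised above, here is a minimal Pine v5 sketch of the kind of gradient coloring Adaptive Coloring describes, using the built-in color.from_gradient. The stand-in statistic and every variable name are assumptions for illustration, not the script's actual identifiers.
//@version=5
indicator("Adaptive coloring sketch", overlay=false)
// Map a reference statistic onto a bearish-to-bullish gradient between two tolerances.
// color.from_gradient clamps values outside the bounds to the end colors,
// which is effectively the "Hard Limit" behavior described above.
retTolBear = input.float(-3.0, "Bearish tolerance (%)")
retTolBull = input.float(3.0, "Bullish tolerance (%)")
gradRef  = ta.sma(100 * (close - close[1]) / close[1], 20)  // stand-in for a pattern's AVG return
patColor = color.from_gradient(gradRef, retTolBear, retTolBull, color.red, color.green)
plot(gradRef, "Gradient reference", color = patColor, linewidth = 2)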
█ HOW TO USE
I attempted to make this script much easier to use in terms of analyzing the patterns and displaying the information to the user. The previous script would have the user go to the 'data window' sidebar on TradingView to view the returns of a pattern after they had specified which pattern to analyze through the settings, which was needlessly convoluted. This aim at simplicity was achieved through the use of UDTs and specific code design.
To use, simply apply the indicator to a chart, set the number of candles (between 2 and 5) for confirming this specific pattern and adjust the many settings described above at your leisure.
█ LIMITATIONS
Disclaimer - This is a tool created with the hope of helping to identify a specific pattern and providing an informative view of that pattern's performance. Previous performance is not indicative of future results. None of this constitutes any form of financial advice, *use at your own risk*.
Statistical Analysis - This script assumes that all patterns will yield a NORMAL DISTRIBUTION regarding their returns which may not be reflective of reality. I personally have limited experience within the field of statistics apart from a few high school/college courses and make no guarantees that the calculation of the 95% confidence interval is correct. Please review the source code to verify for yourself that this interval calculation is correct (Function Name: f_DisplayStatsOnLabel).
P/L Starting Point - Because of when the object related to the confirmation status of a pattern is created (specifically the linked-list object), setting the 'P/L Starting Point' to 'FROM APPEARANCE' will yield the results of that P/L calculation at the same time as 'FROM CONFIRMATION'.
█ EXAMPLES
Default Settings:
Partition Background (default):
Partition Background (Resolution D : Length 30):
Adaptive Coloration:
Show Non-Confirmed:
STD-Adaptive T3 [Loxx]
STD-Adaptive T3 is a standard deviation adaptive T3 moving average filter. This indicator acts more like a trend overlay indicator with gradient coloring.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
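For concreteness, here is a minimal Pine v5 sketch of the three benchmarks just described, assuming ta.linreg for the regression endpoint. The running-sum form of ILRS below is a direct reading of "integrate the slope, seeded with an SMA", not Tillson's original code.
//@version=5
indicator("ILRS / EPMA / IE2 sketch", overlay=true)
n = input.int(20, "Length")
// EPMA: the endpoint of the length-n linear regression line
epma = ta.linreg(close, n, 0)
// Per-bar slope of the regression line (endpoint minus the point one bar back)
slope = ta.linreg(close, n, 0) - ta.linreg(close, n, 1)
// ILRS: running sum of the slope, seeded with an SMA once enough bars exist
var float ilrs = na
ilrs := na(ilrs) ? ta.sma(close, n) : ilrs + slope
// IE/2: the average of the two
ie2 = (ilrs + epma) / 2
plot(ilrs, "ILRS")
plot(epma, "EPMA")
plot(ie2, "IE/2")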
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to IE /2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA (n) = EMA (n) + EMA (time series - EMA (n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA (n) + EMA (time series - EMA (n))*.7;
This is algebraically the same as:
EMA (n)*1.7-EMA( EMA (n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD (n,v) = EMA (n)*(1+v)-EMA( EMA (n))*v,
Where v ranges between 0 and 1. When v=0, GD is just an EMA , and when v=1, GD is DEMA . In between, GD is a cooler DEMA . By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD ( GD ( GD (n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
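Transcribed directly from the formulas above, a minimal Pine v5 sketch of GD and T3 follows. It is the textbook construction (three passes of GD over an EMA), not necessarily byte-for-byte the code used in the indicators on this page.
//@version=5
indicator("T3 sketch", overlay=true)
len = input.int(14, "Length")
v   = input.float(0.7, "Volume factor", step = 0.1)
// Generalized DEMA: GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v
gd(src, n, vf) =>
    ta.ema(src, n) * (1 + vf) - ta.ema(ta.ema(src, n), n) * vf
// T3(n) = GD(GD(GD(n)))
t3 = gd(gd(gd(close, len, v), len, v), len, v)
plot(t3, "T3", linewidth = 2)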
Included
Bar coloring
Loxx's Expanded Source Types
Softmax Normalized T3 Histogram [Loxx]
Softmax Normalized T3 Histogram is a T3 moving average that is morphed into a normalized oscillator from -1 to 1.
What is the Softmax function?
The softmax function, also known as softargmax or the normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and is used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
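A minimal Pine v5 sketch of the standard softmax over an array follows (outputs in 0..1 that sum to 1). How this indicator then rescales its T3 histogram to the -1 to 1 range mentioned above is not spelled out in the description, so that step is intentionally left out; the function name and the demo inputs are assumptions.
//@version=5
indicator("Softmax sketch")
// Standard softmax: exp(x_i - max) / sum_j exp(x_j - max).
// Subtracting the max is the usual numerical-stability trick. Assumes a non-empty array.
f_softmax(v) =>
    mx  = array.max(v)
    den = 0.0
    out = array.new<float>()
    for i = 0 to array.size(v) - 1
        e = math.exp(array.get(v, i) - mx)
        array.push(out, e)
        den += e
    for i = 0 to array.size(out) - 1
        array.set(out, i, array.get(out, i) / den)
    out
// Demo: softmax over three momentum readings; the three outputs sum to 1.
probs = f_softmax(array.from(ta.mom(close, 5), ta.mom(close, 10), ta.mom(close, 20)))
plot(array.get(probs, 0), "p1")
plot(array.get(probs, 1), "p2")
plot(array.get(probs, 2), "p3")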
What is the T3 moving average? See Tim Tillson's "Better Moving Averages" write-up reproduced in full under STD-Adaptive T3 [Loxx] above.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
T3 Velocity Candles [Loxx]
T3 Velocity Candles is a candle coloring overlay that calculates its gradient coloring using T3 velocity.
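The description does not give the exact velocity formula, so treat the following Pine v5 sketch as an assumption: "velocity" is taken as the bar-to-bar change of a textbook T3, mapped through a gradient and painted onto the candles. Every name and the scaling choice are illustrative.
//@version=5
indicator("T3 velocity coloring sketch", overlay=true)
len = input.int(14, "T3 length")
vf  = input.float(0.7, "Volume factor")
lkb = input.int(50, "Velocity scaling lookback")
gd(src, n, v) =>
    ta.ema(src, n) * (1 + v) - ta.ema(ta.ema(src, n), n) * v
t3  = gd(gd(gd(close, len, vf), len, vf), len, vf)
vel = t3 - t3[1]  // assumed "velocity": one-bar change of T3
// Scale the velocity against its recent extremes and paint the candles
hiV = ta.highest(vel, lkb)
loV = ta.lowest(vel, lkb)
barcolor(color.from_gradient(vel, loV, hiV, color.red, color.green))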
What is the T3 moving average? See Tim Tillson's "Better Moving Averages" write-up reproduced in full under STD-Adaptive T3 [Loxx] above.
T3 Striped [Loxx]
Theory:
Although T3 is widely used, some of the details on how it is calculated are less known. T3 has, internally, 6 "levels" or "steps" that it uses for its calculation.
This version:
Instead of showing the final T3 value, this indicator shows those intermediate steps. This shows the "building steps" of T3 and can be used for trend assessment as well as for possible support / resistance values.
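The six internal "levels" are the six chained EMAs that T3 is built from. The widely published closed-form expansion of GD(GD(GD(n))) is sketched below in Pine v5; it is offered as a plausible reading of what the "striped" intermediate steps are, not as this indicator's exact source.
//@version=5
indicator("T3 internal steps sketch", overlay=true)
n = input.int(14, "Length")
a = input.float(0.7, "Volume factor")
// The six chained EMAs ("levels")
e1 = ta.ema(close, n)
e2 = ta.ema(e1, n)
e3 = ta.ema(e2, n)
e4 = ta.ema(e3, n)
e5 = ta.ema(e4, n)
e6 = ta.ema(e5, n)
// Weights from expanding GD(GD(GD(n))) with volume factor a
c1 = -math.pow(a, 3)
c2 = 3 * math.pow(a, 2) + 3 * math.pow(a, 3)
c3 = -6 * math.pow(a, 2) - 3 * a - 3 * math.pow(a, 3)
c4 = 1 + 3 * a + math.pow(a, 3) + 3 * math.pow(a, 2)
t3 = c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3
plot(e3, "level 3")
plot(e4, "level 4")
plot(e5, "level 5")
plot(e6, "level 6")
plot(t3, "T3", linewidth = 2)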
What is the T3 moving average? See Tim Tillson's "Better Moving Averages" write-up reproduced in full under STD-Adaptive T3 [Loxx] above.
Included
Alerts
Signals
Bar coloring
Loxx's Expanded Source Types
R-squared Adaptive T3 Ribbon Filled Simple [Loxx]
R-squared Adaptive T3 Ribbon Filled Simple is a T3 ribbons indicator that uses a special implementation of T3 that is R-squared adaptive.
What is the T3 moving average? See Tim Tillson's "Better Moving Averages" write-up reproduced in full under STD-Adaptive T3 [Loxx] above.
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination (R-squared), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y-axis) and time (the X-axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average.
R-squared is used here to derive a T3 factor used to modify price before passing price through a six-pole non-linear Kalman filter.
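In Pine v5, R-squared of price against time over a window can be obtained as the square of ta.correlation between the source and bar_index. How this indicator then folds it into the T3 factor is not detailed above, so the scaling line below is only an illustrative guess.
//@version=5
indicator("R-squared sketch")
len = input.int(32, "R-squared window")
// Coefficient of determination of price vs. time over the window
r  = ta.correlation(close, bar_index, len)
r2 = r * r
// Illustrative only: scale a base T3 volume factor by trend strength
baseV     = input.float(0.7, "Base volume factor")
adaptiveV = baseV * r2
plot(r2, "R-squared")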
Included:
Alerts
Signals
Loxx's Expanded Source Types
T3 Slope Variation [Loxx]
T3 Slope Variation is an indicator that uses T3 moving average to calculate a slope that is then weighted to derive a signal.
The center line
The center line changes color depending on the value of the:
Slope
Signal line
Threshold
If the value is above the signal line (which is not visible on the chart) and exceeds the required threshold, the main trend turns up; the reverse applies for the trend down.
Colors and style of the histogram
A colored histogram bar is drawn only when the conditions line up: the value is on the appropriate side of zero, the trend described above "agrees" with it (green above zero, red below zero), and the high is higher than the previous high or the low is lower than the previous low; the corresponding type of histogram bar is then drawn.
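The exact weighting this indicator applies is not given in the description, so the Pine v5 sketch below only captures the general shape of the logic just described: a T3 slope compared against a threshold, with histogram bars gated by the higher-high / lower-low condition. All names, and the omission of the weighting step, are assumptions.
//@version=5
indicator("T3 slope / threshold sketch")
len    = input.int(14, "T3 length")
vf     = input.float(0.7, "Volume factor")
thresh = input.float(0.0, "Slope threshold")
gd(src, n, v) =>
    ta.ema(src, n) * (1 + v) - ta.ema(ta.ema(src, n), n) * v
t3    = gd(gd(gd(close, len, vf), len, vf), len, vf)
slope = t3 - t3[1]
trendUp   = slope > thresh
trendDown = slope < -thresh
// Histogram drawn only when the trend agrees with price making a new extreme
histUp   = trendUp and high > high[1]
histDown = trendDown and low < low[1]
plot(histUp ? slope : na, "Up histogram", style = plot.style_histogram, color = color.green)
plot(histDown ? slope : na, "Down histogram", style = plot.style_histogram, color = color.red)
plot(slope, "Center line", color = trendUp ? color.green : trendDown ? color.red : color.gray)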
What is the T3 moving average? See Tim Tillson's "Better Moving Averages" write-up reproduced in full under STD-Adaptive T3 [Loxx] above.
Included
Alerts
Signals
Bar coloring
Loxx's Expanded Source Types
Multi T3 Slopes [Loxx]
Multi T3 Slopes is an indicator that checks the slopes of 5 T3 moving averages of different periods and adds them up to show the overall trend. To use this, watch for color changes: when green outweighs red, no red is shown; when red outweighs green, no green is shown; when both red and green show up, it is a sign of chop.
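Only the "add up the slopes of five T3s of different periods" idea comes from the description above; the Pine v5 sketch below fills in the rest with assumptions (the five lengths, the +1/-1 scoring, and the stacked column display).
//@version=5
indicator("Multi T3 slopes sketch")
vf = input.float(0.7, "Volume factor")
gd(src, n, v) =>
    ta.ema(src, n) * (1 + v) - ta.ema(ta.ema(src, n), n) * v
t3(src, n, v) =>
    gd(gd(gd(src, n, v), n, v), n, v)
// +1 if the T3 of this period slopes up, -1 if down, 0 if flat
slopeSign(n) =>
    s = t3(close, n, vf)
    s > s[1] ? 1 : s < s[1] ? -1 : 0
// Sum of the five slope signs: +5 = all rising, -5 = all falling, mixed = chop
score = slopeSign(10) + slopeSign(20) + slopeSign(30) + slopeSign(40) + slopeSign(50)
plot(math.max(score, 0), "Bullish", style = plot.style_columns, color = color.green)
plot(math.min(score, 0), "Bearish", style = plot.style_columns, color = color.red)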
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e., use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has a phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
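To make the two benchmarks concrete, here is a minimal Python sketch (an editor's illustration, not code from any of the indicators described here) that computes ILRS, EPMA, and IE/2 as described above; on a unit-slope line it shows ILRS lagging by roughly n/2 while EPMA has no lag.

```python
# Illustrative sketch of ILRS, EPMA, and IE/2 as described in the text.
# (Editor's example; not taken from the indicator's source code.)

def linreg_slope_and_endpoint(window):
    """Least-squares fit y = a + b*x over x = 0..n-1; return (slope, endpoint)."""
    n = len(window)
    xs = range(n)
    sum_x, sum_y = sum(xs), sum(window)
    sum_xy = sum(x * y for x, y in zip(xs, window))
    sum_x2 = sum(x * x for x in xs)
    b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    a = (sum_y - b * sum_x) / n
    return b, a + b * (n - 1)          # slope, value of the fitted line at the last bar

def ilrs_epma_ie2(prices, n):
    """Return three lists (ILRS, EPMA, IE/2); None where a full window is not yet available."""
    ilrs, epma, ie2 = [None] * len(prices), [None] * len(prices), [None] * len(prices)
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1 : i + 1]
        slope, endpoint = linreg_slope_and_endpoint(window)
        if ilrs[i - 1] is None:
            # constant of integration: simple moving average of the first n points
            ilrs[i] = sum(window) / n
        else:
            ilrs[i] = ilrs[i - 1] + slope   # integrate the regression slope
        epma[i] = endpoint                  # endpoint of the regression line
        ie2[i] = (ilrs[i] + epma[i]) / 2    # split the difference
    return ilrs, epma, ie2

if __name__ == "__main__":
    prices = [float(p) for p in range(1, 31)]          # a unit-slope line
    ilrs, epma, ie2 = ilrs_epma_ie2(prices, 10)
    print(prices[-1], ilrs[-1], epma[-1], ie2[-1])     # EPMA shows no lag on linear input
```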
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA , popularized by Patrick Mulloy in TASAC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
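Because an EMA is a linear filter, the two forms of twicing above are algebraically identical. The short Python check below (an editor's sketch, not indicator code) confirms numerically that L(x) + L(x - L(x)) matches 2L(x) - L(L(x)) when L is an EMA.

```python
# Numerical check that twicing, L(x) + L(x - L(x)), equals 2L(x) - L(L(x))
# when L is a (linear) EMA. Editor's illustration only.
import random

def ema(series, n):
    alpha = 2 / (n + 1)
    out, prev = [], series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

random.seed(1)
x = [100 + random.gauss(0, 1) for _ in range(200)]

L = ema(x, 10)
residual = [xi - li for xi, li in zip(x, L)]
twiced = [li + ri for li, ri in zip(L, ema(residual, 10))]   # L(x) + L(x - L(x))
dema   = [2 * li - lli for li, lli in zip(L, ema(L, 10))]    # 2L(x) - L(L(x))

print(max(abs(a - b) for a, b in zip(twiced, dema)))         # ~0, up to float error
```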
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
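That smoothness claim is easy to reproduce. The sketch below (an editor's illustration; the roughness metric, mean absolute second difference, is just one reasonable choice) compares EMA(3) run through itself five times against a single EMA(11) on a noisy random walk.

```python
# Compare EMA(3) run through itself 5 times against a single EMA(11).
# Roughness here is the mean absolute second difference (editor's choice of metric).
import random

def ema(series, n):
    alpha = 2 / (n + 1)
    out, prev = [], series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def roughness(series):
    return sum(abs(series[i] - 2 * series[i - 1] + series[i - 2])
               for i in range(2, len(series))) / (len(series) - 2)

random.seed(7)
prices = [100.0]
for _ in range(499):
    prices.append(prices[-1] + random.gauss(0, 1))      # a noisy random walk

ema3_x5 = prices
for _ in range(5):
    ema3_x5 = ema(ema3_x5, 3)                            # EMA(3) through itself 5 times

ema11 = ema(prices, 11)
print("roughness EMA(3)^5:", roughness(ema3_x5))
print("roughness EMA(11): ", roughness(ema11))           # typically larger
```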
This suggests that, since EPMA and DEMA have zero or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain much greater than 1 at these frequencies when they are run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA(n) = EMA(n) + EMA(time series - EMA(n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA. The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA(n) + EMA(time series - EMA(n))*.7
This is algebraically the same as:
EMA(n)*1.7 - EMA(EMA(n))*.7
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized DEMA") is:
GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v,
where v ranges between 0 and 1. When v=0, GD is just an EMA, and when v=1, GD is DEMA. In between, GD is a cooler DEMA. By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average, T3, that does not overshoot the data:
T3(n) = GD(GD(GD(n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
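Putting the pieces together, here is a minimal Python sketch of GD(n, v) and T3(n) exactly as defined above (an editor's illustration; the published indicators are written in Pine Script and may differ in details such as initialization).

```python
# GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v, and T3(n) = GD(GD(GD(n))).
# Editor's sketch of the formulas above; not the indicator's Pine Script source.

def ema(series, n):
    alpha = 2 / (n + 1)
    out, prev = [], series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def gd(series, n, v=0.7):
    """Generalized DEMA: v=0 gives EMA(n), v=1 gives DEMA(n)."""
    e1 = ema(series, n)
    e2 = ema(e1, n)
    return [a * (1 + v) - b * v for a, b in zip(e1, e2)]

def t3(series, n, v=0.7):
    """Tillson T3: GD run through itself three times."""
    return gd(gd(gd(series, n, v), n, v), n, v)

if __name__ == "__main__":
    prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110,
              112, 115, 114, 117, 119, 118, 121, 124, 123, 126]
    print([round(x, 2) for x in t3(prices, 5)])
```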
Included
Signals: long, short, continuation long, continuation short.
Alerts
Bar coloring
Loxx's expanded source types
T3 Volatility Quality Index (VQI) w/ DSL & Pips Filtering [Loxx]T3 Volatility Quality Index (VQI) w/ DSL & Pips Filtering is a VQI indicator that uses T3 smoothing and discontinued signal lines to determine breakouts and breakdowns. This also allows filtering by pips.***
What is the Volatility Quality Index ( VQI )?
The idea behind the volatility quality index is to point out the difference between bad and good volatility in order to identify better trade opportunities in the market. This forex indicator works using the True Range algorithm in combination with the open, close, high and low prices.
What are DSL (Discontinued Signal Lines)?
A lot of indicators use signal lines to make it easier to determine the trend (or some desired state of the indicator). The idea of a signal line is simple: by comparing a value to its smoothed (slightly lagging) state, an idea of the current momentum/state is formed.
A discontinued signal line inherits that simple signal-line idea and extends it: instead of a single signal line, there are multiple lines, depending on the current value of the indicator.
The "signal" line is calculated the following way:
When a certain level is crossed in the desired direction, the EMA of that value is calculated for the desired signal line.
When that level is crossed in the opposite direction, the previous "signal" line value is simply "inherited" and it becomes a kind of level.
This way it becomes a combination of signal lines and levels, taking the good from both methods.
In simple terms, DSL uses the concept of a signal line and betters it by inheriting the previous signal line's value & makes it a level.
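As one concrete reading of the two rules above, the sketch below (an editor's illustration; the level, period, and function name dsl_up are assumptions, not the indicator's actual parameters) updates an EMA-based signal line only while the indicator is on the favorable side of a level and otherwise inherits the previous value, so the line behaves like a level until that side is reclaimed.

```python
# Discontinued Signal Line (DSL) sketch for the "up" side of an indicator:
# while the indicator is above the level, smooth it with an EMA; when it drops
# below, freeze (inherit) the previous value so it acts like a level.
# Editor's illustration; the level (0.0) and period (9) are arbitrary assumptions.

def dsl_up(indicator_values, level=0.0, n=9):
    alpha = 2 / (n + 1)
    out, prev = [], None
    for v in indicator_values:
        if v > level:                       # in the desired direction: update the EMA
            prev = v if prev is None else alpha * v + (1 - alpha) * prev
        # else: inherit the previous value unchanged (it becomes a kind of level)
        out.append(prev)
    return out

if __name__ == "__main__":
    values = [-0.2, 0.1, 0.4, 0.6, 0.3, -0.1, -0.3, 0.2, 0.5, 0.7]
    print(dsl_up(values))
```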
What is the T3 moving average?
See Tim Tillson's "Better Moving Averages" article (November 1, 1998), reproduced in full in the Multi T3 Slopes [Loxx] description above.
Included
Signals
Alerts
Related indicators
Zero-line Volatility Quality Index (VQI)
Volatility Quality Index w/ Pips Filtering
Variety Moving Average Waddah Attar Explosion (WAE)
***This indicator is tuned to Forex. If you want to make it useful for other tickers, you must change the pip-filtering value to match the asset. For BTC, for example, you will likely need to use a value of 10,000 or more for the pips filter.
T3 PPO [Loxx]T3 PPO is a percentage price oscillator indicator that uses a T3 moving average. This indicator is used to spot reversals. Dark red is upward price exhaustion, dark green is downward price exhaustion.
What is Percentage Price Oscillator (PPO)?
The percentage price oscillator (PPO) is a technical momentum indicator that shows the relationship between two moving averages in percentage terms. The moving averages are a 26-period and 12-period exponential moving average (EMA).
The PPO is used to compare asset performance and volatility, spot divergence that could lead to price reversals, generate trade signals, and help confirm trend direction.
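For reference, the standard PPO line is the difference between the fast and slow EMAs expressed as a percentage of the slow EMA. The sketch below shows that textbook calculation with the usual 12/26 periods (an editor's illustration; the T3 PPO indicator itself substitutes T3 smoothing, which is not shown here).

```python
# Classic PPO: (EMA12 - EMA26) / EMA26 * 100.
# Editor's illustration of the standard formula; the T3 PPO indicator replaces
# the EMAs with T3 moving averages, which is not shown here.

def ema(series, n):
    alpha = 2 / (n + 1)
    out, prev = [], series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def ppo(prices, fast=12, slow=26):
    f, s = ema(prices, fast), ema(prices, slow)
    return [(a - b) / b * 100 for a, b in zip(f, s)]

if __name__ == "__main__":
    prices = [100 + i * 0.5 for i in range(60)]        # a steadily rising series
    print(round(ppo(prices)[-1], 3))                   # positive: fast EMA above slow EMA
```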
What is the T3 moving average?
See Tim Tillson's "Better Moving Averages" article (November 1, 1998), reproduced in full in the Multi T3 Slopes [Loxx] description above.
VHF-Adaptive T3 iTrend [Loxx]VHF-Adaptive T3 iTrend is an iTrend indicator with T3 smoothing and a Vertical Horizontal Filter (VHF) adaptive-period input. iTrend is used to determine where the trend starts and ends. You'll notice that the noise filter on this one is extreme. Adjust the period inputs to suit your approach and your backtest requirements. This is also useful for scalping on lower timeframes. Enjoy!
What is VHF Adaptive Period?
Vertical Horizontal Filter (VHF) was created by Adam White to identify trending and ranging markets. VHF measures the level of trend activity, similar to ADX DI. Vertical Horizontal Filter does not, itself, generate trading signals, but determines whether signals are taken from trend or momentum indicators. Using this trend information, one is then able to derive an average cycle length.
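White's filter is commonly computed as the range of closing prices over the lookback divided by the sum of absolute bar-to-bar changes over the same lookback. The sketch below uses that common formulation (an editor's illustration; how the indicator maps VHF readings to an adaptive period is not shown, and the 28-bar default is an assumption).

```python
# Vertical Horizontal Filter (VHF), common formulation:
# VHF = (highest close - lowest close) / sum(|close - previous close|) over n bars.
# High values suggest trending, low values suggest ranging.
# Editor's illustration; the indicator's adaptive-period mapping is not shown.

def vhf(closes, n=28):
    out = [None] * len(closes)
    for i in range(n, len(closes)):
        window = closes[i - n + 1 : i + 1]
        directional = max(window) - min(window)
        noise = sum(abs(closes[j] - closes[j - 1]) for j in range(i - n + 1, i + 1))
        out[i] = directional / noise if noise else 0.0
    return out

if __name__ == "__main__":
    trending = [100 + i for i in range(60)]                     # straight trend -> VHF near 1
    choppy   = [100 + (1 if i % 2 else -1) for i in range(60)]  # oscillation -> VHF near 0
    print(round(vhf(trending)[-1], 3), round(vhf(choppy)[-1], 3))
```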
What is the T3 moving average?
See Tim Tillson's "Better Moving Averages" article (November 1, 1998), reproduced in full in the Multi T3 Slopes [Loxx] description above.
Included
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types