Ichimoku Kinkō hyō Keizen 改善
The script is not finished yet and shows another interpretation of how it could be scripted.
Step 1 is complete... Basic Ichimoku with adjustable lengths and editable line colors and visibilities.
Step 2 in progress... Adding the ability to use multiple Spans, -sen lines, and Kumo on higher and lower timeframes.
Your step: Like and Share ;) Have a good year 2020!
2020-01-06 /--------/ -R.V.
Color
█ OVERVIEW
This library is a Pine Script® programming tool for advanced color processing. It provides a comprehensive set of functions for specifying and analyzing colors in various color spaces, mixing and manipulating colors, calculating custom gradients and schemes, detecting contrast, and converting colors to or from hexadecimal strings.
█ CONCEPTS
Color
Color refers to how we interpret light of different wavelengths in the visible spectrum. The colors we see from an object represent the light wavelengths that it reflects, emits, or transmits toward our eyes. Some colors, such as blue and red, correspond directly to parts of the spectrum. Others, such as magenta, arise from a combination of wavelengths to which our minds assign a single color.
The human interpretation of color lends itself to many uses in our world. In the context of financial data analysis, the effective use of color helps transform raw data into insights that users can understand at a glance. For example, colors can categorize series, signal market conditions and sessions, and emphasize patterns or relationships in data.
Color models and spaces
A color model is a general mathematical framework that describes colors using sets of numbers. A color space is an implementation of a specific color model that defines an exact range (gamut) of reproducible colors based on a set of primary colors, a reference white point, and sometimes additional parameters such as viewing conditions.
There are numerous different color spaces — each describing the characteristics of color in unique ways. Different spaces carry different advantages, depending on the application. Below, we provide a brief overview of the concepts underlying the color spaces supported by this library.
RGB
RGB is one of the most well-known color models. It represents color as an additive mixture of three primary colors — red, green, and blue lights — with various intensities. Each cone cell in the human eye responds more strongly to one of the three primaries, and the average person interprets the combination of these lights as a distinct color (e.g., pure red + pure green = yellow).
The sRGB color space is the most common RGB implementation. Developed by HP and Microsoft in the 1990s, sRGB provided a standardized baseline for representing color across CRT monitors of the era, which produced brightness levels that did not increase linearly with the input signal. To match displays and optimize brightness encoding for human sensitivity, sRGB applied a nonlinear transformation to linear RGB signals, often referred to as gamma correction. The result produced more visually pleasing outputs while maintaining a simple encoding. As such, sRGB quickly became a standard for digital color representation across devices and the web. To this day, it remains the default color space for most web-based content.
TradingView charts and Pine Script `color.*` built-ins process color data in sRGB. The red, green, and blue channels range from 0 to 255, where 0 represents no intensity, and 255 represents maximum intensity. Each combination of red, green, and blue values represents a distinct color, resulting in a total of 16,777,216 displayable colors.
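For instance, a script can build a color from explicit channel values and read the channels back with the `color.rgb()` and `color.r()`/`color.g()`/`color.b()` built-ins. A minimal sketch:

```
//@version=5
indicator("sRGB channel demo", overlay = true)

// Build a color from explicit sRGB channels (0-255 each). The optional fourth
// argument is transparency in percent (0 = fully opaque).
color orange = color.rgb(255, 139, 0)

// Read the channels back from any "color" value.
float r = color.r(orange)   // 255
float g = color.g(orange)   // 139
float b = color.b(orange)   // 0

plot(close, color = orange)
```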
CIE XYZ and xyY
The XYZ color space, developed by the International Commission on Illumination (CIE) in 1931, aims to describe all color sensations that a typical human can perceive. It is a cornerstone of color science, forming the basis for many color spaces used today. XYZ, and the derived xyY space, provide a universal representation of color that is not tethered to a particular display. Many widely used color spaces, including sRGB, are defined relative to XYZ or derived from it.
The CIE built the color space based on a series of experiments in which people matched colors they perceived from mixtures of lights. From these experiments, the CIE developed color-matching functions to calculate three components — X, Y, and Z — which together aim to describe a standard observer's response to visible light. X represents a weighted response to light across the color spectrum, with the highest contribution from long wavelengths (e.g., red). Y represents a weighted response to medium wavelengths (e.g., green), and it corresponds to a color's relative luminance (i.e., brightness). Z represents a weighted response to short wavelengths (e.g., blue).
From the XYZ space, the CIE developed the xyY chromaticity space, which separates a color's chromaticity (hue and colorfulness) from luminance. The CIE used this space to define the CIE 1931 chromaticity diagram, which represents the full range of visible colors at a given luminance. In color science and lighting design, xyY is a common means for specifying colors and visualizing the supported ranges of other color spaces.
CIELAB and Oklab
The CIELAB (L*a*b*) color space, derived from XYZ by the CIE in 1976, expresses colors based on opponent process theory. The L* component represents perceived lightness, and the a* and b* components represent the balance between opposing unique colors. The a* value specifies the balance between green and red, and the b* value specifies the balance between blue and yellow.
The primary intention of CIELAB was to provide a perceptually uniform color space, where fixed-size steps through the space correspond to uniform perceived changes in color. Although relatively uniform, the color space has been found to exhibit some non-uniformities, particularly in the blue part of the color spectrum. Regardless, modern applications often use CIELAB to estimate perceived color differences and calculate smooth color gradients.
In 2020, a new LAB-oriented color space, Oklab, was introduced by Björn Ottosson as an attempt to rectify the non-uniformities of other perceptual color spaces. Similar to CIELAB, the L value in Oklab represents perceived lightness, and the a and b values represent the balance between opposing unique colors. Oklab has gained widespread adoption as a perceptual space for color processing, with support in the latest CSS Color specifications and many software applications.
Cylindrical models
A cylindrical-coordinate model transforms an underlying color model, such as RGB or LAB, into an alternative expression of color information that is often more intuitive for the average person to use and understand.
Instead of a mixture of primary colors or opponent pairs, these models represent color as a hue angle on a color wheel, with additional parameters that describe other qualities such as lightness and colorfulness (a general term for concepts like chroma and saturation). In cylindrical-coordinate spaces, users can select a color and modify its lightness or other qualities without altering the hue.
The three most common RGB-based models are HSL (Hue, Saturation, Lightness), HSV (Hue, Saturation, Value), and HWB (Hue, Whiteness, Blackness). All three define hue angles in the same way, but they define colorfulness and lightness differently. Although they are not perceptually uniform, HSL and HSV are commonplace in color pickers and gradients.
For CIELAB and Oklab, the cylindrical-coordinate versions are CIELCh and Oklch, which express color in terms of perceived lightness, chroma, and hue. They offer perceptually uniform alternatives to RGB-based models. These spaces create unique color wheels, and they have more strict definitions of lightness and colorfulness. Oklch is particularly well-suited for generating smooth, perceptual color gradients.
Alpha and transparency
Many color encoding schemes include an alpha channel, representing opacity. Alpha does not help define a color in a color space; it determines how a color interacts with other colors in the display. Opaque colors appear with full intensity on the screen, whereas translucent (semi-opaque) colors blend into the background. Colors with zero opacity are invisible.
In Pine Script, there are two ways to specify a color's alpha:
• Using the `transp` parameter of the built-in `color.*()` functions. The specified value represents transparency (the opposite of opacity), which the functions translate into an alpha value.
• Using eight-digit hexadecimal color codes. The last two digits in the code represent alpha directly.
A process called alpha compositing simulates translucent colors in a display. It creates a single displayed color by mixing the RGB channels of two colors (foreground and background) based on alpha values, giving the illusion of a semi-opaque color placed over another color. For example, a red color with 80% transparency on a black background produces a dark shade of red.
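The per-channel math behind this blending is the standard "over" operation. A minimal sketch of the idea (not the exact routine TradingView uses):

```
//@version=5
indicator("Alpha compositing sketch")

// "Over" compositing of one sRGB channel against an opaque background:
// out = fg * alpha + bg * (1 - alpha), where alpha is opacity from 0 to 1.
blendChannel(float fgC, float bgC, float alpha) =>
    fgC * alpha + bgC * (1.0 - alpha)

// Red (#f23645, red channel 242) at 80% transparency (alpha = 0.2) over black:
float rOut = blendChannel(242.0, 0.0, 0.2)   // ≈ 48 → a dark shade of red
plot(rOut)
```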
Hexadecimal color codes
A hexadecimal color code (hex code) is a compact representation of an RGB color. It encodes a color's red, green, and blue values into a sequence of hexadecimal (base-16) digits. The digits are numerals ranging from `0` to `9` or letters from `a` (for 10) to `f` (for 15). Each set of two digits represents an RGB channel ranging from `00` (for 0) to `ff` (for 255).
Pine scripts can natively define colors using hex codes in the format `#rrggbbaa`. The first set of two digits represents red, the second represents green, and the third represents blue. The fourth set represents alpha. If unspecified, the value is `ff` (fully opaque). For example, `#ff8b00` and `#ff8b00ff` represent an opaque orange color. The code `#ff8b0033` represents the same color with 80% transparency.
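The sketch below shows both literal forms in use:

```
//@version=5
indicator("Hex color literals", overlay = true)

color opaqueOrange = #ff8b00     // six digits: alpha defaults to ff (opaque)
color fadedOrange  = #ff8b0033   // eight digits: alpha 33 ≈ 80% transparency

plot(close, "Opaque", opaqueOrange)
plot(hl2,   "Faded",  fadedOrange)
```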
Gradients
A color gradient maps colors to numbers over a given range. Most color gradients represent a continuous path in a specific color space, where each number corresponds to a mix between a starting color and a stopping color. In Pine, coders often use gradients to visualize value intensities in plots and heatmaps, or to add visual depth to fills.
The behavior of a color gradient depends on the mixing method and the chosen color space. Gradients in sRGB usually mix along a straight line between the red, green, and blue coordinates of two colors. In cylindrical spaces such as HSL, a gradient often rotates the hue angle through the color wheel, resulting in more pronounced color transitions.
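As a point of comparison, the sketch below uses the built-in, sRGB-only `color.from_gradient()` to shade an oscillator; the library's `fromGradient()` extends the same idea to the other supported color spaces.

```
//@version=5
indicator("sRGB gradient example")

float osc = ta.rsi(close, 14)
// Straight-line sRGB mix between two colors as `osc` moves from 0 to 100.
color heat = color.from_gradient(osc, 0, 100, color.teal, color.red)
plot(osc, "RSI", heat, 2)
```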
Color schemes
A color scheme refers to a set of colors for use in aesthetic or functional design. A color scheme usually consists of just a few distinct colors. However, depending on the purpose, a scheme can include many colors.
A user might choose palettes for a color scheme arbitrarily, or generate them algorithmically. There are many techniques for calculating color schemes. A few simple, practical methods are:
• Sampling a set of distinct colors from a color gradient.
• Generating monochromatic variants of a color (i.e., tints, tones, or shades with matching hues).
• Computing color harmonies — such as complements, analogous colors, triads, and tetrads — from a base color.
This library includes functions for all three of these techniques. See below for details.
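As a rough illustration of the first technique using only built-ins (the library's `gradientPalette()` applies a similar idea across multiple color spaces):

```
//@version=5
indicator("Palette sampling sketch", overlay = true)

// Sample `steps` evenly spaced colors along a gradient between two colors.
makePalette(color baseCol, color targetCol, int steps) =>
    array<color> pal = array.new<color>()
    for i = 0 to steps - 1
        pal.push(color.from_gradient(i, 0, steps - 1, baseCol, targetCol))
    pal

var array<color> tints = makePalette(color.blue, color.white, 5)
plot(close, color = tints.get(bar_index % 5))
```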
█ CALCULATIONS AND USE
Hex string conversion
The `getHexString()` function returns a string containing the eight-digit hexadecimal code corresponding to a "color" value or set of sRGB and transparency values. For example, `getHexString(255, 0, 0)` returns the string `"#ff0000ff"`, and `getHexString(color.new(color.red, 80))` returns `"#f2364533"`.
The `hexStringToColor()` function returns the "color" value represented by a string containing a six- or eight-digit hex code. The `hexStringToRGB()` function returns a tuple containing the sRGB and transparency values. For example, `hexStringToColor("#f23645")` returns the same value as color.red.
Programmers can use these functions to parse colors from "string" inputs, perform string-based color calculations, and inspect color data in text outputs such as Pine Logs and tables.
Color space conversion
All other `get*()` functions convert a "color" value or set of sRGB channels into coordinates in a specific color space, with transparency information included. For example, the tuple returned by `getHSL()` includes the color's hue, saturation, lightness, and transparency values.
To convert data from a color space back to colors or sRGB and transparency values, use the corresponding `*toColor()` or `*toRGB()` functions for that space (e.g., `hslToColor()` and `hslToRGB()`).
Programmers can use these conversion functions to process inputs that define colors in different ways, perform advanced color manipulation, design custom gradients, and more.
The color spaces this library supports are:
• sRGB
• Linear RGB (RGB without gamma correction)
• HSL, HSV, and HWB
• CIE XYZ and xyY
• CIELAB and CIELCh
• Oklab and Oklch
Contrast-based calculations
Contrast refers to the difference in luminance or color that makes one color visible against another. This library features two functions for calculating luminance-based contrast and detecting themes.
The `contrastRatio()` function calculates the contrast between two "color" values based on their relative luminance (the Y value from CIE XYZ) using the formula from version 2 of the Web Content Accessibility Guidelines (WCAG). This function is useful for identifying colors that provide a sufficient brightness difference for legibility.
The `isLightTheme()` function determines whether a specified background color represents a light theme based on its contrast with black and white. Programmers can use this function to define conditional logic that responds differently to light and dark themes.
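The underlying math is public: WCAG 2 derives relative luminance from linearized sRGB channels and takes the ratio of the lighter value to the darker one, each offset by 0.05. A from-scratch sketch of that formula (not the library's internal code):

```
//@version=5
indicator("WCAG contrast sketch")

// Linearize one sRGB channel (0-255) per the WCAG 2 definition.
linearize(float c) =>
    float s = c / 255.0
    s <= 0.03928 ? s / 12.92 : math.pow((s + 0.055) / 1.055, 2.4)

// Relative luminance: the Y component for the sRGB primaries.
relLuminance(color c) =>
    0.2126 * linearize(color.r(c)) + 0.7152 * linearize(color.g(c)) + 0.0722 * linearize(color.b(c))

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1 to 21.
contrastRatio(color a, color b) =>
    float la = relLuminance(a)
    float lb = relLuminance(b)
    (math.max(la, lb) + 0.05) / (math.min(la, lb) + 0.05)

plot(contrastRatio(color.white, color.black))   // 21
```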
Color manipulation and harmonies
The `negative()` function calculates the negative (i.e., inverse) of a color by reversing the color's coordinates in either the sRGB or linear RGB color space. This function is useful for calculating high-contrast colors.
The `grayscale()` function calculates a grayscale form of a specified color with the same relative luminance.
The functions `complement()`, `splitComplements()`, `analogousColors()`, `triadicColors()`, `tetradicColors()`, `pentadicColors()`, and `hexadicColors()` calculate color harmonies from a specified source color within a given color space (HSL, CIELCh, or Oklch). The returned harmonious colors represent specific hue rotations around a color wheel formed by the chosen space, with the same defined lightness, saturation or chroma, and transparency.
Color mixing and gradient creation
The `add()` function simulates combining lights of two different colors by additively mixing their linear red, green, and blue components, ignoring transparency by default. Users can calculate a transparency-weighted mixture by setting the `transpWeight` argument to `true`.
The `overlay()` function estimates the color displayed on a TradingView chart when a specific foreground color is over a background color. This function aids in simulating stacked colors and analyzing the effects of transparency.
The `fromGradient()` and `fromMultiStepGradient()` functions calculate colors from gradients in any of the supported color spaces, providing flexible alternatives to the RGB-based color.from_gradient() function. The `fromGradient()` function calculates a color from a single gradient. The `fromMultiStepGradient()` function calculates a color from a piecewise gradient with multiple defined steps. Gradients are useful for heatmaps and for coloring plots or drawings based on value intensities.
Scheme creation
Three functions in this library calculate palettes for custom color schemes. Scripts can use these functions to create responsive color schemes that adjust to calculated values and user inputs.
The `gradientPalette()` function creates an array of colors by sampling a specified number of colors along a gradient from a base color to a target color, in fixed-size steps.
The `monoPalette()` function creates an array containing monochromatic variants (tints, tones, or shades) of a specified base color. Whether the function mixes the color toward white (for tints), a form of gray (for tones), or black (for shades) depends on the `grayLuminance` value. If unspecified, the function automatically chooses the mix behavior with the highest contrast.
The `harmonyPalette()` function creates a matrix of colors. The first column contains the base color and specified harmonies, e.g., triadic colors. The columns that follow contain tints, tones, or shades of the harmonic colors for additional color choices, similar to `monoPalette()`.
█ EXAMPLE CODE
The example code at the end of the script generates and visualizes color schemes by processing user inputs. The code builds the scheme's palette based on the "Base color" input and the additional inputs in the "Settings/Inputs" tab:
• "Palette type" specifies whether the palette uses a custom gradient, monochromatic base color variants, or color harmonies with monochromatic variants.
• "Target color" sets the top color for the "Gradient" palette type.
• The "Gray luminance" inputs determine variation behavior for "Monochromatic" and "Harmony" palette types. If "Auto" is selected, the palette mixes the base color toward white or black based on its brightness. Otherwise, it mixes the color toward the grayscale color with the specified relative luminance (from 0 to 1).
• "Harmony type" specifies the color harmony used in the palette. Each row in the palette corresponds to one of the harmonious colors, starting with the base color.
The code creates a table on the first bar to display the collection of calculated colors. Each cell in the table shows the color's `getHexString()` value in a tooltip for simple inspection.
Look first. Then leap.
█ EXPORTED FUNCTIONS
Below is a complete list of the functions and overloads exported by this library.
getRGB(source)
Retrieves the sRGB red, green, blue, and transparency components of a "color" value.
getHexString(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channel values to a string representing the corresponding color's hexadecimal form.
getHexString(source)
(Overload 2 of 2) Converts a "color" value to a string representing the sRGB color's hexadecimal form.
hexStringToRGB(source)
Converts a string representing an sRGB color's hexadecimal form to a set of decimal channel values.
hexStringToColor(source)
Converts a string representing an sRGB color's hexadecimal form to a "color" value.
getLRGB(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channel values to a set of linear RGB values with specified transparency information.
getLRGB(source)
(Overload 2 of 2) Retrieves linear RGB channel values and transparency information from a "color" value.
lrgbToRGB(lr, lg, lb, t)
Converts a set of linear RGB channel values to a set of sRGB values with specified transparency information.
lrgbToColor(lr, lg, lb, t)
Converts a set of linear RGB channel values and transparency information to a "color" value.
getHSL(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of HSL values with specified transparency information.
getHSL(source)
(Overload 2 of 2) Retrieves HSL channel values and transparency information from a "color" value.
hslToRGB(h, s, l, t)
Converts a set of HSL channel values to a set of sRGB values with specified transparency information.
hslToColor(h, s, l, t)
Converts a set of HSL channel values and transparency information to a "color" value.
getHSV(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of HSV values with specified transparency information.
getHSV(source)
(Overload 2 of 2) Retrieves HSV channel values and transparency information from a "color" value.
hsvToRGB(h, s, v, t)
Converts a set of HSV channel values to a set of sRGB values with specified transparency information.
hsvToColor(h, s, v, t)
Converts a set of HSV channel values and transparency information to a "color" value.
getHWB(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of HWB values with specified transparency information.
getHWB(source)
(Overload 2 of 2) Retrieves HWB channel values and transparency information from a "color" value.
hwbToRGB(h, w, b, t)
Converts a set of HWB channel values to a set of sRGB values with specified transparency information.
hwbToColor(h, w, b, t)
Converts a set of HWB channel values and transparency information to a "color" value.
getXYZ(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of XYZ values with specified transparency information.
getXYZ(source)
(Overload 2 of 2) Retrieves XYZ channel values and transparency information from a "color" value.
xyzToRGB(x, y, z, t)
Converts a set of XYZ channel values to a set of sRGB values with specified transparency information.
xyzToColor(x, y, z, t)
Converts a set of XYZ channel values and transparency information to a "color" value.
getXYY(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of xyY values with specified transparency information.
getXYY(source)
(Overload 2 of 2) Retrieves xyY channel values and transparency information from a "color" value.
xyyToRGB(xc, yc, y, t)
Converts a set of xyY channel values to a set of sRGB values with specified transparency information.
xyyToColor(xc, yc, y, t)
Converts a set of xyY channel values and transparency information to a "color" value.
getLAB(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of CIELAB values with specified transparency information.
getLAB(source)
(Overload 2 of 2) Retrieves CIELAB channel values and transparency information from a "color" value.
labToRGB(l, a, b, t)
Converts a set of CIELAB channel values to a set of sRGB values with specified transparency information.
labToColor(l, a, b, t)
Converts a set of CIELAB channel values and transparency information to a "color" value.
getOKLAB(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of Oklab values with specified transparency information.
getOKLAB(source)
(Overload 2 of 2) Retrieves Oklab channel values and transparency information from a "color" value.
oklabToRGB(l, a, b, t)
Converts a set of Oklab channel values to a set of sRGB values with specified transparency information.
oklabToColor(l, a, b, t)
Converts a set of Oklab channel values and transparency information to a "color" value.
getLCH(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of CIELCh values with specified transparency information.
getLCH(source)
(Overload 2 of 2) Retrieves CIELCh channel values and transparency information from a "color" value.
lchToRGB(l, c, h, t)
Converts a set of CIELCh channel values to a set of sRGB values with specified transparency information.
lchToColor(l, c, h, t)
Converts a set of CIELCh channel values and transparency information to a "color" value.
getOKLCH(r, g, b, t)
(Overload 1 of 2) Converts a set of sRGB channels to a set of Oklch values with specified transparency information.
getOKLCH(source)
(Overload 2 of 2) Retrieves Oklch channel values and transparency information from a "color" value.
oklchToRGB(l, c, h, t)
Converts a set of Oklch channel values to a set of sRGB values with specified transparency information.
oklchToColor(l, c, h, t)
Converts a set of Oklch channel values and transparency information to a "color" value.
contrastRatio(value1, value2)
Calculates the contrast ratio between two color values based on the formula from version 2 of the Web Content Accessibility Guidelines (WCAG).
isLightTheme(source)
Detects whether a background color represents a light theme or dark theme, based on the amount of contrast between the color and the white and black points.
grayscale(source)
Calculates the grayscale version of a color with the same relative luminance (i.e., brightness).
negative(source, colorSpace)
Calculates the negative (i.e., inverted) form of a specified color.
complement(source, colorSpace)
Calculates the complementary color for a `source` color using a cylindrical color space.
analogousColors(source, colorSpace)
Calculates the analogous colors for a `source` color using a cylindrical color space.
splitComplements(source, colorSpace)
Calculates the split-complementary colors for a `source` color using a cylindrical color space.
triadicColors(source, colorSpace)
Calculates the two triadic colors for a `source` color using a cylindrical color space.
tetradicColors(source, colorSpace, square)
Calculates the three square or rectangular tetradic colors for a `source` color using a cylindrical color space.
pentadicColors(source, colorSpace)
Calculates the four pentadic colors for a `source` color using a cylindrical color space.
hexadicColors(source, colorSpace)
Calculates the five hexadic colors for a `source` color using a cylindrical color space.
add(value1, value2, transpWeight)
Additively mixes two "color" values, with optional transparency weighting.
overlay(fg, bg)
Estimates the resulting color that appears on the chart when placing one color over another.
fromGradient(value, bottomValue, topValue, bottomColor, topColor, colorSpace)
Calculates the gradient color that corresponds to a specific value based on a defined value range and color space.
fromMultiStepGradient(value, steps, colors, colorSpace)
Calculates a multi-step gradient color that corresponds to a specific value based on an array of step points, an array of corresponding colors, and a color space.
gradientPalette(baseColor, stopColor, steps, strength, model)
Generates a palette from a gradient between two base colors.
monoPalette(baseColor, grayLuminance, variations, strength, colorSpace)
Generates a monochromatic palette from a specified base color.
harmonyPalette(baseColor, harmonyType, grayLuminance, variations, strength, colorSpace)
Generates a palette consisting of harmonious base colors and their monochromatic variants.
SIP Evaluator and Screener [Trendoscope®]
The SIP Evaluator and Screener is a Pine Script indicator designed for TradingView to calculate and visualize Systematic Investment Plan (SIP) returns across multiple investment instruments. It is tailored for use in TradingView's screener, enabling users to evaluate SIP performance for various assets efficiently.
🎲 How SIP Works
A Systematic Investment Plan (SIP) is an investment strategy where a fixed amount is invested at regular intervals (e.g., monthly or weekly) into a financial instrument, such as stocks, mutual funds, or ETFs. The goal is to build wealth over time by leveraging the power of compounding and mitigating the impact of market volatility through disciplined, consistent investing. Here’s a breakdown of how SIPs function:
Regular Investments : In an SIP, an investor commits to investing a fixed sum at predefined intervals, regardless of market conditions. This consistency helps inculcate a habit of saving and investing.
Cost Averaging : By investing a fixed amount regularly, investors purchase more units when prices are low and fewer units when prices are high. This approach, known as dollar-cost averaging, reduces the average cost per unit over time and mitigates the risk of investing a large amount at a peak price.
Compounding Benefits : Returns generated from the invested amount (e.g., capital gains or dividends) are reinvested, leading to exponential growth over the long term. The longer the investment horizon, the greater the potential for compounding to amplify returns.
Dividend Reinvestment : In some SIPs, dividends received from the underlying asset can be reinvested to purchase additional units, further enhancing returns. Taxes on dividends, if applicable, may reduce the reinvested amount.
Flexibility and Accessibility : SIPs allow investors to start with small amounts, making them accessible to a wide range of individuals. They also offer flexibility in terms of investment frequency and the ability to adjust or pause contributions.
In the context of the SIP Evaluator and Screener , the script simulates an SIP by calculating the number of units purchased with each fixed investment, factoring in commissions, dividends, taxes and the chosen price reference (e.g., open, close, or average prices). It tracks the cumulative investment, equity value, and dividends over time, providing a clear picture of how an SIP would perform for a given instrument. This helps users understand the impact of regular investing and make informed decisions when comparing different assets in TradingView’s screener. It offers insights into key metrics such as total invested amount, dividends received, equity value, and the number of installments, making it a valuable resource for investors and traders interested in understanding long-term investment outcomes.
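The core of that per-installment arithmetic can be sketched in a few lines (illustrative only, not the published source; a fixed commission and the OPEN price reference are assumed):

```
//@version=5
indicator("SIP installment sketch")

float investment    = input.float(1000.0, "Recurring Investment")
float commission    = input.float(3.0, "Commission (fixed)")
bool  allowFraction = input.bool(true, "Allow Fractional Qty")

var float totalQty      = 0.0
var float totalInvested = 0.0
var float remainder     = 0.0

// One installment per bar (use a monthly chart): deduct the commission, buy at
// the open, and carry unspent cash forward when fractional units are not allowed.
float available = investment - commission + remainder
float qty       = allowFraction ? available / open : math.floor(available / open)
remainder      := available - qty * open
totalQty       += qty
totalInvested  += investment

plot(totalInvested, "Invested")
plot(totalQty * close, "Equity")
```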
🎲 Key Features
Customizable Investment Parameters: Users can define the recurring investment amount, price reference (e.g., open, close, HL2, HLC3, OHLC4), and whether fractional quantities are allowed.
Commission Handling: Supports both fixed and percentage-based commission types, adjusting calculations accordingly.
Dividend Reinvestment: Optionally reinvests dividends after a user-specified period, with the ability to apply tax on dividends.
Time-Bound Analysis: Allows users to set a start year for the analysis, enabling historical performance evaluation.
Flexible Dividend Periods: Dividends can be evaluated based on bars, days, weeks, or months.
Visual Outputs: Plots key metrics like total invested amount, dividends, equity value, and remainder, with customizable display options for clarity in the data window and chart.
🎲 Using the script as an indicator on Tradingview Supercharts
In order to use the indicator on charts, do the following.
Load the instrument of your choice, preferably stable stocks or ETFs.
Choose the monthly timeframe, as lower timeframes are insignificant for this type of investment strategy.
Load the indicator SIP Evaluator and Screener and set the input parameters as per your preference.
The indicator plots the investment value, dividends, and equity on the chart.
🎲 Visualizations
Installments : Displays the number of SIP installments (gray line, visible in the data window).
Invested Amount : Shows the cumulative amount invested, excluding reinvested dividends (blue area plot).
Dividends : Tracks total dividends received (green area plot).
Equity : Represents the current market value of the investment based on the closing price (purple area plot).
Remainder : Indicates any uninvested cash after each installment (gray line, visible in the data window).
🎲 Deep dive into the settings
The SIP Evaluator and Screener offers a range of customizable settings to tailor the Systematic Investment Plan (SIP) simulation to your preferences. Below is an explanation of each setting, its purpose, and how it impacts the analysis:
🎯 Duration
Start Year (Default: 2020) : Specifies the year from which the SIP calculations begin. When Start Year is enabled via the timebound option, the script only considers data from the specified year onward. This is useful for analyzing historical SIP performance over a defined period. If disabled, the script uses all available data.
Timebound (Default: False) : A toggle to enable or disable the Start Year restriction. When set to False, the SIP calculation starts from the earliest available data for the instrument.
🎯 Investment
Recurring Investment (Default: 1000.0) : The fixed amount invested in each SIP installment (e.g., $1000 per period). This represents the regular contribution to the SIP and directly influences the total invested amount and quantity purchased.
Allow Fractional Qty (Default: True) : When enabled, the script allows the purchase of fractional units (e.g., 2.35 shares). If disabled, only whole units are purchased (e.g., 2 shares), with any remaining funds carried forward as Remainder. This setting impacts the precision of investment allocation.
Price Reference (Default: OPEN): Determines the price used for purchasing units in each SIP installment. Options include:
OPEN : Uses the opening price of the bar.
CLOSE : Uses the closing price of the bar.
HL2 : Uses the average of the high and low prices.
HLC3 : Uses the average of the high, low, and close prices.
OHLC4 : Uses the average of the open, high, low, and close prices. This setting affects the cost basis of each purchase and, consequently, the total quantity and equity value.
🎯 Commission
Commission (Default: 3) : The commission charged per SIP installment, expressed as either a fixed amount (e.g., $3) or a percentage (e.g., 3% of the investment). This reduces the amount available for purchasing units.
Commission Type (Default: Fixed) : Specifies how the commission is calculated:
Fixed ($) : A flat fee is deducted per installment (e.g., $3).
Percentage (%) : A percentage of the investment amount is deducted as commission (e.g., 3% of $1000 = $30). This setting affects the net amount invested and the overall cost of the SIP.
🎯 Dividends
Apply Tax On Dividends (Default: False) : When enabled, a tax is applied to dividends before they are reinvested or recorded. The tax rate is set via the Dividend Tax setting.
Dividend Tax (Default: 47) : The percentage of tax deducted from dividends if Apply Tax On Dividends is enabled (e.g., 47% tax reduces a $100 dividend to $53). This reduces the amount available for reinvestment or accumulation.
Reinvest Dividends After (Default: True, 2) : When enabled, dividends received are reinvested to purchase additional units after a specified period (e.g., 2 units of time, defined by Dividends Availability). If disabled, dividends are tracked but not reinvested. Reinvestment increases the total quantity and equity over time.
Dividends Availability (Default: Bars) : Defines the time unit for evaluating when dividends are available for reinvestment. Options include:
Bars : Based on the number of chart bars.
Weeks : Based on weeks.
Months : Based on months (approximated as 30.5 days). This setting determines the timing of dividend reinvestment relative to the Reinvest Dividends After period.
🎯 How Settings Interact
These settings work together to simulate a realistic SIP. For example, a $1000 recurring investment with a 3% commission and fractional quantities enabled will calculate the number of units purchased at the chosen price reference after deducting the commission. If dividends are reinvested after 2 months with a 47% tax, the script fetches dividend data, applies the tax, and adds the net dividend to the investment amount for that period. The Start Year and Timebound settings ensure the analysis aligns with the desired timeframe, while the Dividends Availability setting fine-tunes dividend reinvestment timing.
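As a quick worked example with hypothetical numbers (a $50 OPEN price and a single $100 dividend are assumed purely for illustration):

```
//@version=5
indicator("Settings interaction: worked example")

float installment = 1000.0
float netCash     = installment * (1.0 - 0.03)   // 970.00 left after the 3% commission
float unitsBought = netCash / 50.0               // 19.40 fractional units at a $50 open
float netDividend = 100.0 * (1.0 - 0.47)         // 53.00 reinvested after the 47% dividend tax

plot(unitsBought)
```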
By adjusting these settings, users can model different SIP scenarios, compare performance across instruments in TradingView’s screener, and gain insights into how commissions, dividends, and price references impact long-term returns.
🎲 Using the script with Pine Screener
The main purpose of developing this script is to use it with Tradingview Pine Screener so that multiple ETFs/Funds can be compared.
In order to use this as a screener, the following things need to be done.
Add SIP Evaluator and Screener to your favourites (Required for it to be added in pine screener)
Create a watch list containing required instruments to compare
Open pine screener from Tradingview main menu Products -> Screeners -> Pine or simply load the URL - www.tradingview.com
Select the watchlist you created from the Watchlist dropdown.
Choose the SIP Evaluator and Screener from the "Choose Indicator" dropdown
Set timeframe to 1 month and update settings as required.
Press scan to display collected data on the screener.
🎲 Use Case
This indicator is ideal for educational purposes, allowing users to experiment with SIP strategies across different instruments. It can be applied in TradingView’s screener to compare SIP performance for stocks, ETFs, or other assets, helping users understand how factors like commissions, dividends, and price references impact returns over time.
MC Geopolitical Tension Events
📌 Script Title: Geopolitical Tension Events
📖 Description:
This script highlights key geopolitical and military tension events from 1914 to 2024 that have historically impacted global markets.
It automatically plots vertical dashed lines and labels on the chart at the time of each major event. This allows traders and analysts to visually assess how markets have responded to global crises, wars, and significant political instability over time.
🧠 Use Cases:
Historical backtesting: Understand how markets responded to past geopolitical shocks.
Contextual analysis: Add macro context to technical setups.
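A rough sketch of how one such event marker can be drawn in Pine Script (hypothetical code, not the published source, using the 2022 event from the list below):

```
//@version=5
indicator("Event marker sketch", overlay = true)

// One hard-coded event; the published script presumably iterates over its full
// 1914-2024 event list in a similar way.
int eventTime = timestamp("GMT", 2022, 2, 24, 0, 0)

// Mark the first bar at or after the event with a vertical dashed line and a label.
if time >= eventTime and time[1] < eventTime
    line.new(bar_index, low, bar_index, high, extend = extend.both, style = line.style_dashed, color = color.red)
    label.new(bar_index, high, "Ukraine Invasion", style = label.style_label_down, color = color.red, textcolor = color.white)
```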
🗓️ List of Geopolitical Tension Events in the Script
Date Event Title Description
1914-07-28 WWI Begins Outbreak of World War I following the assassination of Archduke Franz Ferdinand.
1929-10-24 Wall Street Crash Black Thursday, the start of the 1929 stock market crash.
1939-09-01 WWII Begins Germany invades Poland, starting World War II.
1941-12-07 Pearl Harbor Japanese attack on Pearl Harbor; U.S. enters WWII.
1945-08-06 Hiroshima Bombing First atomic bomb dropped on Hiroshima by the U.S.
1950-06-25 Korean War Begins North Korea invades South Korea.
1962-10-16 Cuban Missile Crisis 13-day standoff between the U.S. and USSR over missiles in Cuba.
1973-10-06 Yom Kippur War Egypt and Syria launch surprise attack on Israel.
1979-11-04 Iran Hostage Crisis U.S. Embassy in Tehran seized; 52 hostages taken.
1990-08-02 Gulf War Begins Iraq invades Kuwait, triggering U.S. intervention.
2001-09-11 9/11 Attacks Coordinated terrorist attacks on the U.S.
2003-03-20 Iraq War Begins U.S.-led invasion of Iraq to remove Saddam Hussein.
2008-09-15 Lehman Collapse Bankruptcy of Lehman Brothers; peak of global financial crisis.
2014-03-01 Crimea Crisis Russia annexes Crimea from Ukraine.
2020-01-03 Soleimani Strike U.S. drone strike kills Iranian General Qasem Soleimani.
2022-02-24 Ukraine Invasion Russia launches full-scale invasion of Ukraine.
2023-10-07 Hamas-Israel War Hamas launches attack on Israel, sparking war in Gaza.
2024-01-12 Red Sea Crisis Houthis attack ships in Red Sea, prompting Western naval response.
Open Interest-RSI + Funding + Fractal Divergences
Indicator — “Open Interest-RSI + Funding + Fractal Divergences”
A multi-factor oscillator that fuses Open-Interest RSI, real-time Funding-Rate data and price/OI fractal divergences.
It paints BUY/SELL arrows in its own pane and directly on the price chart, helping you spot points where crowd positioning, leverage costs and price action contradict each other.
1 Purpose
OI-RSI – measures conviction behind position changes instead of price momentum.
Funding Rate – shows who pays to hold positions (longs → bull bias, shorts → bear bias).
Fractal Divergences – detects HH/LL in price that are not confirmed by OI-RSI.
Optional Funding filter – hides signals when funding is already extreme.
Together these elements highlight exhaustion points and potential mean-reversion trades.
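A minimal sketch of the first building block (assuming the open-interest series is available under the "_OI" ticker suffix this description refers to; this is not the published source):

```
//@version=5
indicator("OI-RSI sketch")

int rsiLen = input.int(14, "RSI length")

// Open interest for the chart's symbol; the "_OI" suffix is not available for
// every instrument, in which case the series returns na.
float oi = request.security(syminfo.tickerid + "_OI", timeframe.period, close, ignore_invalid_symbol = true)

// RSI applied to open interest instead of price: conviction behind position changes.
float oiRsi = ta.rsi(oi, rsiLen)

plot(oiRsi, "OI-RSI", color.aqua)
hline(70)
hline(50)
hline(30)
```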
2 Inputs
RSI / Divergence
RSI length – default 14.
High-OI level / Low-OI level – default 70 / 30.
Fractal period n – default 2 (swing width).
Fractals to compare – how many past swings to scan, default 3.
Max visible arrows – keeps last 50 BUY/SELL arrows for speed.
Funding Rate
mode – choose FR, Avg Premium, Premium Index, Avg Prem + PI or FR-candle.
Visual scale (×) – multiplies raw funding to fit 0-100 oscillator scale (default 10).
specify symbol – enable only if funding symbol differs from chart.
use lower tf – averages 1-min premiums for smoother intraday view.
show table – tiny two-row widget at chart edge.
Signal Filter
Use Funding filter – ON hides long signals when funding > Buy-threshold and short signals when funding < Sell-threshold.
BUY threshold (%) – default 0.00 (raw %).
SELL threshold (%) – default 0.00 (raw %).
(Enter funding thresholds as raw percentages, e.g. 0.01 = +0.01 %).
3 Visual Outputs
Sub-pane
Aqua OI-RSI curve with 70 / 50 / 30 reference lines.
Funding visualised according to selected mode (green above 0, red below 0, or other).
BUY / SELL arrows at oscillator extremes.
Price chart
Identical BUY / SELL arrows plotted with force_overlay = true above/below candles that formed qualifying fractals.
Optional table
Shows current asset ticker and latest funding value of the chosen mode.
4 Signal Logic (Summary)
Load _OI series and compute RSI.
Retrieve Funding-Rate + Premium Index (optionally from lower TF).
Find fractal swings (n bars left & right).
Check divergence:
Bearish – price HH + OI-RSI LH.
Bullish – price LL + OI-RSI HL.
If Funding-filter enabled, require funding < Buy-thr (long) or > Sell-thr (short).
Plot arrows and trigger two built-in alerts (Bearish OI-RSI divergence, Bullish OI-RSI divergence).
Signals are fixed once the fractal bar closes; they do not repaint afterwards.
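A simplified sketch of the bearish branch of this logic (the oscillator below is a stand-in for the OI-RSI series; not the published source):

```
//@version=5
indicator("Fractal divergence sketch", overlay = true)

int n = input.int(2, "Fractal period n")

// Stand-in oscillator; the real indicator uses RSI of open interest here.
float osc = ta.rsi(close, 14)

// A swing high is confirmed n bars after it forms.
float swingHigh = ta.pivothigh(high, n, n)

var float prevPrice = na
var float prevOsc   = na

if not na(swingHigh)
    float oscAtSwing = osc[n]
    // Bearish divergence: price prints a higher high while the oscillator prints a lower high.
    if not na(prevPrice) and swingHigh > prevPrice and oscAtSwing < prevOsc
        label.new(bar_index - n, swingHigh, "SELL", style = label.style_label_down, color = color.red, textcolor = color.white)
    prevPrice := swingHigh
    prevOsc   := oscAtSwing
```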
5 How to Use
Attach to a liquid perpetual-futures chart (BTC, ETH, major Binance contracts).
If _OI or funding series is missing you’ll see an error.
Choose timeframe:
15 m – 4 h for intraday;
1 D+ for swing trades.
Lower TFs → more signals; raise Fractals to compare or use Funding filter to trim noise.
Trade checklist
Funding positive and rising → longs overcrowded.
Price makes higher high; OI-RSI makes lower high; Funding above Sell-threshold → consider short.
Reverse logic for longs.
Combine with trend filter (EMA ribbon, SuperTrend, etc.) so you fade only when price is stretched.
Automation – set TradingView alerts on the two alertconditions and send to webhooks/bots.
Performance tips
Keep Max visible arrows ≤ 50.
Disable lower-TF premium aggregation if script feels heavy.
6 Limitations
Some symbols lack _OI or funding history → script stops with a console message.
Binance Premium Index begins mid-2020; older dates show na.
Divergences confirm only after n bars (no forward repaint).
7 Changelog
v1.0 – 10 Jun 2025
Initial public release.
Added price-chart arrows via force_overlay.
SOXL Trend Surge v3.0.2 – Profit-Only Runner
This is a trend-following strategy built for leveraged ETFs like SOXL, designed to ride high-momentum waves with minimal interference. Unlike most short-term scalping scripts, this model allows trades to develop over multiple days to even several months, capitalizing on the full power of extended directional moves — all without using a stop-loss.
🔍 How It Works
Entry Logic:
Price is above the 200 EMA (long-term trend confirmation)
Supertrend is bullish (momentum confirmation)
ATR is rising (volatility expansion)
Volume is above its 20-bar average (liquidity filter)
Price is outside a small buffer zone from the 200 EMA (to avoid whipsaws)
Trades are restricted to market hours only (9 AM to 2 PM EST)
Cooldown of 15 bars after each exit to prevent overtrading
Exit Strategy:
Takes partial profit at +2× ATR if held for at least 2 bars
Rides the remaining position with a trailing stop at 1.5× ATR
No hard stop-loss — giving space for volatile pullbacks
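The entry filters above translate into a compact strategy sketch (illustrative parameter values are used wherever the description does not state them; this is not the published source):

```
//@version=5
strategy("Trend surge entry sketch", overlay = true, initial_capital = 500, default_qty_type = strategy.percent_of_equity, default_qty_value = 100, commission_type = strategy.commission.percent, commission_value = 0.1)

float ema200 = ta.ema(close, 200)
float atr    = ta.atr(14)
[st, stDir]  = ta.supertrend(3.0, 10)

bool trendUp      = close > ema200 * 1.01                  // above the 200 EMA plus a small buffer
bool momentumUp   = stDir < 0                              // Supertrend bullish
bool volExpanding = atr > atr[1]                           // rising ATR
bool volumeOk     = volume > ta.sma(volume, 20)            // above the 20-bar average volume
bool inSession    = not na(time(timeframe.period, "0900-1400", "America/New_York"))

int  barsSinceExit = ta.barssince(strategy.position_size == 0 and strategy.position_size[1] != 0)
bool cooledDown    = na(barsSinceExit) or barsSinceExit > 15

if trendUp and momentumUp and volExpanding and volumeOk and inSession and cooledDown and strategy.position_size == 0
    strategy.entry("Long", strategy.long)

// Exits (partial profit at +2×ATR after two bars, then a 1.5×ATR trailing stop,
// with no hard stop-loss) are described above and omitted from this sketch.
```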
⚙️ Strategy Settings
Initial Capital: $500
Risk per Trade: 100% of equity (fully allocated per entry)
Commission: 0.1%
Slippage: 1 tick
Recalculate after order is filled
Fill orders on bar close
Timeframe Optimized For: 45-minute chart
These parameters simulate an aggressive, high-volatility trading model meant for forward-testing compounding potential under realistic trading costs.
✅ What Makes This Unique
No stop-loss = fewer premature exits
Partial profit-taking helps lock in early wins
Trailing logic gives room to ride large multi-week moves
Uses strict filters (volume, ATR, EMA bias) to enter only during high-probability windows
Ideal for leveraged ETF swing or position traders looking to hold longer than the typical intraday or 2–3 day strategies
⚠️ Important Note
This is a high-risk, high-reward strategy meant for educational and testing purposes. Without a stop-loss, trades can experience deep drawdowns that may take weeks or even months to recover. Always test thoroughly and adjust position sizing to suit your risk tolerance. Past results do not guarantee future returns. Backtest range: May 8, 2020 – May 23, 2025
Systemic Credit Market Pressure Index
Systemic Credit Market Pressure Index (SCMPI): A Composite Indicator for Credit Cycle Analysis
The Systemic Credit Market Pressure Index (SCMPI) represents a novel composite indicator designed to quantify systemic stress within credit markets through the integration of multiple macroeconomic variables. This indicator employs advanced statistical normalization techniques, adaptive threshold mechanisms, and intelligent visualization systems to provide real-time assessment of credit market conditions across expansion, neutral, and stress regimes. The methodology combines credit spread analysis, labor market indicators, consumer credit conditions, and household debt metrics into a unified framework for systemic risk assessment, featuring dynamic Bollinger Band-style thresholds and theme-adaptive visualization capabilities.
## 1. Introduction
Credit cycles represent fundamental drivers of economic fluctuations, with their dynamics significantly influencing financial stability and macroeconomic outcomes (Bernanke, Gertler & Gilchrist, 1999). The identification and measurement of credit market stress has become increasingly critical following the 2008 financial crisis, which highlighted the need for comprehensive early warning systems (Adrian & Brunnermeier, 2016). Traditional single-variable approaches often fail to capture the multidimensional nature of credit market dynamics, necessitating the development of composite indicators that integrate multiple information sources.
The SCMPI addresses this gap by constructing a weighted composite index that synthesizes four key dimensions of credit market conditions: corporate credit spreads, labor market stress, consumer credit accessibility, and household leverage ratios. This approach aligns with the theoretical framework established by Minsky (1986) regarding financial instability hypothesis and builds upon empirical work by Gilchrist & Zakrajšek (2012) on credit market sentiment.
## 2. Theoretical Framework
### 2.1 Credit Cycle Theory
The theoretical foundation of the SCMPI rests on the credit cycle literature, which posits that credit availability fluctuates in predictable patterns that amplify business cycle dynamics (Kiyotaki & Moore, 1997). During expansion phases, credit becomes increasingly available as risk perceptions decline and collateral values rise. Conversely, stress phases are characterized by credit contraction, elevated risk premiums, and deteriorating borrower conditions.
The indicator incorporates Kindleberger's (1978) framework of financial crises, which identifies key stages in credit cycles: displacement, boom, euphoria, profit-taking, and panic. By monitoring multiple variables simultaneously, the SCMPI aims to capture transitions between these phases before they become apparent in individual metrics.
### 2.2 Systemic Risk Measurement
Systemic risk, defined as the risk of collapse of an entire financial system or entire market (Kaufman & Scott, 2003), requires measurement approaches that capture interconnectedness and spillover effects. The SCMPI follows the methodology established by Bisias et al. (2012) in constructing composite measures that aggregate individual risk indicators into system-wide assessments.
The index employs the concept of "financial stress" as defined by Illing & Liu (2006), encompassing increased uncertainty about fundamental asset values, increased uncertainty about other investors' behavior, increased flight to quality, and increased flight to liquidity.
## 3. Methodology
### 3.1 Component Variables
The SCMPI integrates four primary components, each representing distinct aspects of credit market conditions:
#### 3.1.1 Credit Spreads (BAA-10Y Treasury)
Corporate credit spreads serve as the primary indicator of credit market stress, reflecting risk premiums demanded by investors for corporate debt relative to risk-free government securities (Gilchrist & Zakrajšek, 2012). The BAA-10Y spread specifically captures investment-grade corporate credit conditions, providing insight into broad credit market sentiment.
#### 3.1.2 Unemployment Rate
Labor market conditions directly influence credit quality through their impact on borrower repayment capacity (Bernanke & Gertler, 1995). Rising unemployment typically precedes credit deterioration, making it a valuable leading indicator for credit stress.
#### 3.1.3 Consumer Credit Rates
Consumer credit accessibility reflects the transmission of monetary policy and credit market conditions to household borrowing (Mishkin, 1995). Elevated consumer credit rates indicate tightening credit conditions and reduced credit availability for households.
#### 3.1.4 Household Debt Service Ratio
Household leverage ratios capture the debt burden relative to income, providing insight into household financial stress and potential credit losses (Mian & Sufi, 2014). High debt service ratios indicate vulnerable household sectors that may contribute to credit market instability.
### 3.2 Statistical Methodology
#### 3.2.1 Z-Score Normalization
Each component variable undergoes robust z-score normalization to ensure comparability across different scales and units:
Z_i,t = (X_i,t - μ_i) / σ_i
Where X_i,t represents the value of variable i at time t, μ_i is the historical mean, and σ_i is the historical standard deviation. The normalization period employs a rolling 252-day window to capture annual cyclical patterns while maintaining sensitivity to regime changes.
#### 3.2.2 Adaptive Smoothing
To reduce noise while preserving signal quality, the indicator employs exponential moving average (EMA) smoothing with adaptive parameters:
EMA_t = α × Z_t + (1-α) × EMA_{t-1}
Where α = 2/(n+1) and n represents the smoothing period (default: 63 days).
#### 3.2.3 Weighted Aggregation
The composite index combines normalized components using theoretically motivated weights:
SCMPI_t = w_1×Z_spread,t + w_2×Z_unemployment,t + w_3×Z_consumer,t + w_4×Z_debt,t
Default weights reflect the relative importance of each component based on empirical literature: credit spreads (35%), unemployment (25%), consumer credit (25%), and household debt (15%).
### 3.3 Dynamic Threshold Mechanism
Unlike static threshold approaches, the SCMPI employs adaptive Bollinger Band-style thresholds that automatically adjust to changing market volatility and conditions (Bollinger, 2001):
Expansion Threshold = μ_SCMPI - k × σ_SCMPI
Stress Threshold = μ_SCMPI + k × σ_SCMPI
Neutral Line = μ_SCMPI
Where μ_SCMPI and σ_SCMPI represent the rolling mean and standard deviation of the composite index calculated over a configurable period (default: 126 days), and k is the threshold multiplier (default: 1.0). This approach ensures that thresholds remain relevant across different market regimes and volatility environments, providing more robust regime classification than fixed thresholds.
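A condensed sketch of the pipeline described in sections 3.2 and 3.3 (the FRED symbols shown are assumed proxies; the exact series the indicator requests are not listed in this description):

```
//@version=5
indicator("SCMPI construction sketch")

// Hypothetical FRED proxies for two of the four components.
float spread = request.security("FRED:BAA10Y", "D", close, ignore_invalid_symbol = true)
float unemp  = request.security("FRED:UNRATE", "D", close, ignore_invalid_symbol = true)

// Rolling z-score over a 252-day window.
zscore(float src, int len) =>
    (src - ta.sma(src, len)) / ta.stdev(src, len)

// Normalize, smooth with a 63-day EMA, and aggregate with the stated weights.
float zSpread = ta.ema(zscore(spread, 252), 63)
float zUnemp  = ta.ema(zscore(unemp, 252), 63)
// The consumer-credit and household-debt components would be added the same way.
float scmpi = 0.35 * zSpread + 0.25 * zUnemp   // + 0.25 * zConsumer + 0.15 * zDebt

// Bollinger-style adaptive thresholds over a 126-day window (k = 1).
float basis     = ta.sma(scmpi, 126)
float dev       = ta.stdev(scmpi, 126)
float stressThr = basis + dev
float expandThr = basis - dev

plot(scmpi, "SCMPI", color.aqua)
plot(stressThr, "Stress threshold", color.red)
plot(expandThr, "Expansion threshold", color.green)
```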
### 3.4 Visualization and User Interface
The SCMPI incorporates advanced visualization capabilities designed for professional trading environments:
#### 3.4.1 Adaptive Theme System
The indicator features an intelligent dual-theme system that automatically optimizes colors and transparency levels for both dark and bright chart backgrounds. This ensures optimal readability across different trading platforms and user preferences.
#### 3.4.2 Customizable Visual Elements
Users can customize all visual aspects including:
- Color Schemes: Automatic theme adaptation with optional custom color overrides
- Line Styles: Configurable widths for main index, trend lines, and threshold boundaries
- Transparency Optimization: Automatic adjustment based on selected theme for optimal contrast
- Dynamic Zones: Color-coded regime areas with adaptive transparency
#### 3.4.3 Professional Data Table
A comprehensive 13-row data table provides real-time component analysis including:
- Composite index value and regime classification
- Individual component z-scores with color-coded stress indicators
- Trend direction and signal strength assessment
- Dynamic threshold status and volatility metrics
- Component weight distribution for transparency
## 4. Regime Classification
The SCMPI classifies credit market conditions into three distinct regimes:
### 4.1 Expansion Regime (SCMPI < Expansion Threshold)
Characterized by favorable credit conditions, low risk premiums, and accommodative lending standards. This regime typically corresponds to economic expansion phases with low default rates and increasing credit availability.
### 4.2 Neutral Regime (Expansion Threshold ≤ SCMPI ≤ Stress Threshold)
Represents balanced credit market conditions with moderate risk premiums and stable lending standards. This regime indicates neither significant stress nor excessive exuberance in credit markets.
### 4.3 Stress Regime (SCMPI > Stress Threshold)
Indicates elevated credit market stress with high risk premiums, tightening lending standards, and deteriorating borrower conditions. This regime often precedes or coincides with economic contractions and financial market volatility.
## 5. Technical Implementation and Features
### 5.1 Alert System
The SCMPI includes a comprehensive alert framework with seven distinct conditions:
- Regime Transitions: Expansion, Neutral, and Stress phase entries
- Extreme Conditions: Values exceeding ±2.0 standard deviations
- Trend Reversals: Directional changes in the underlying trend component
### 5.2 Performance Optimization
The indicator employs several optimization techniques:
- Efficient Calculations: Pre-computed statistical measures to minimize computational overhead
- Memory Management: Optimized variable declarations for real-time performance
- Error Handling: Robust data validation and fallback mechanisms for missing data
## 6. Empirical Validation
### 6.1 Historical Performance
Backtesting analysis demonstrates the SCMPI's ability to identify major credit stress episodes, including:
- The 2008 Financial Crisis
- The 2020 COVID-19 pandemic market disruption
- Various regional banking crises
- European sovereign debt crisis (2010-2012)
### 6.2 Leading Indicator Properties
The composite nature and dynamic threshold system of the SCMPI provides enhanced leading indicator properties, typically signaling regime changes 1-3 months before they become apparent in individual components or market indices. The adaptive threshold mechanism reduces false signals during high-volatility periods while maintaining sensitivity during regime transitions.
## 7. Applications and Limitations
### 7.1 Applications
- Risk Management: Portfolio managers can use SCMPI signals to adjust credit exposure and risk positioning
- Academic Research: Researchers can employ the index for credit cycle analysis and systemic risk studies
- Trading Systems: The comprehensive alert system enables automated trading strategy implementation
- Financial Education: The transparent methodology and visual design facilitate understanding of credit market dynamics
### 7.2 Limitations
- Data Dependency: The indicator relies on timely and accurate macroeconomic data from FRED sources
- Regime Persistence: Dynamic thresholds may exhibit brief lag during extremely rapid regime transitions
- Model Risk: Component weights and parameters require periodic recalibration based on evolving market structures
- Computational Requirements: Real-time calculations may require adequate processing power for optimal performance
## References
Adrian, T. & Brunnermeier, M.K. (2016). CoVaR. *American Economic Review*, 106(7), 1705-1741.
Bernanke, B. & Gertler, M. (1995). Inside the black box: the credit channel of monetary policy transmission. *Journal of Economic Perspectives*, 9(4), 27-48.
Bernanke, B., Gertler, M. & Gilchrist, S. (1999). The financial accelerator in a quantitative business cycle framework. *Handbook of Macroeconomics*, 1, 1341-1393.
Bisias, D., Flood, M., Lo, A.W. & Valavanis, S. (2012). A survey of systemic risk analytics. *Annual Review of Financial Economics*, 4(1), 255-296.
Bollinger, J. (2001). *Bollinger on Bollinger Bands*. McGraw-Hill Education.
Gilchrist, S. & Zakrajšek, E. (2012). Credit spreads and business cycle fluctuations. *American Economic Review*, 102(4), 1692-1720.
Illing, M. & Liu, Y. (2006). Measuring financial stress in a developed country: An application to Canada. *Journal of Financial Stability*, 2(3), 243-265.
Kaufman, G.G. & Scott, K.E. (2003). What is systemic risk, and do bank regulators retard or contribute to it? *The Independent Review*, 7(3), 371-391.
Kindleberger, C.P. (1978). *Manias, Panics and Crashes: A History of Financial Crises*. Basic Books.
Kiyotaki, N. & Moore, J. (1997). Credit cycles. *Journal of Political Economy*, 105(2), 211-248.
Mian, A. & Sufi, A. (2014). What explains the 2007–2009 drop in employment? *Econometrica*, 82(6), 2197-2223.
Minsky, H.P. (1986). *Stabilizing an Unstable Economy*. Yale University Press.
Mishkin, F.S. (1995). Symposium on the monetary transmission mechanism. *Journal of Economic Perspectives*, 9(4), 3-10.
Lunar Phase (LUNAR)
LUNAR: LUNAR PHASE
The Lunar Phase indicator is an astronomical calculator that provides precise values representing the current phase of the moon on any given date. Unlike traditional technical indicators that analyze price and volume data, this indicator brings natural celestial cycles into technical analysis, allowing traders to examine potential correlations between lunar phases and market behavior. The indicator outputs a normalized value from 0.0 (new moon) to 1.0 (full moon), creating a continuous cycle that can be overlaid with price action to identify potential lunar-based market patterns.
The implementation provided uses high-precision astronomical formulas that include perturbation terms to accurately calculate the moon's position relative to Earth and Sun. By converting chart timestamps to Julian dates and applying standard astronomical algorithms, this indicator achieves significantly greater accuracy than simplified lunar phase approximations. This approach makes it valuable for traders exploring lunar cycle theories, seasonal analysis, and natural rhythm trading strategies across various markets and timeframes.
🌒 CORE CONCEPTS 🌘
Lunar cycle integration: Brings the 29.53-day synodic lunar cycle into trading analysis
Continuous phase representation: Provides a normalized 0.0-1.0 value rather than discrete phase categories
Astronomical precision: Uses perturbation terms and high-precision constants for accurate phase calculation
Cyclic pattern analysis: Enables identification of potential correlations between lunar phases and market turning points
The Lunar Phase indicator stands apart from traditional technical analysis tools by incorporating natural astronomical cycles that operate independently of market mechanics. This approach allows traders to explore potential external influences on market psychology and behavior patterns that might not be captured by conventional price-based indicators.
Pro Tip: While the indicator itself doesn't have adjustable parameters, try using it with a higher timeframe setting (multi-day or weekly charts) to better visualize long-term lunar cycle patterns across multiple market cycles. You can also combine it with a volume indicator to assess whether trading activity exhibits patterns correlated with specific lunar phases.
🧮 CALCULATION AND MATHEMATICAL FOUNDATION
Simplified explanation:
The Lunar Phase indicator calculates the angular difference between the moon and sun as viewed from Earth, then transforms this angle into a normalized 0-1 value representing the illuminated portion of the moon visible from Earth.
Technical formula:
Convert chart timestamp to Julian Date:
JD = (time / 86400000.0) + 2440587.5
Calculate Time T in Julian centuries since J2000.0:
T = (JD - 2451545.0) / 36525.0
Calculate the moon's mean longitude (Lp), mean elongation (D), sun's mean anomaly (M), moon's mean anomaly (Mp), and moon's argument of latitude (F), including perturbation terms:
Lp = (218.3164477 + 481267.88123421*T - 0.0015786*T² + T³/538841.0 - T⁴/65194000.0) % 360.0
D = (297.8501921 + 445267.1114034*T - 0.0018819*T² + T³/545868.0 - T⁴/113065000.0) % 360.0
M = (357.5291092 + 35999.0502909*T - 0.0001536*T² + T³/24490000.0) % 360.0
Mp = (134.9633964 + 477198.8675055*T + 0.0087414*T² + T³/69699.0 - T⁴/14712000.0) % 360.0
F = (93.2720950 + 483202.0175233*T - 0.0036539*T² - T³/3526000.0 + T⁴/863310000.0) % 360.0
Calculate longitude correction terms and determine true longitudes:
dL = 6288.016*sin(Mp) + 1274.242*sin(2D-Mp) + 658.314*sin(2D) + 214.818*sin(2Mp) + 186.986*sin(M) + 109.154*sin(2F)
L_moon = Lp + dL/1000000.0
L_sun = (280.46646 + 36000.76983*T + 0.0003032*T²) % 360.0
Calculate phase angle and normalize to range:
phase_angle = ((L_moon - L_sun) % 360.0)
phase = (1.0 - cos(phase_angle)) / 2.0
🔍 Technical Note: The implementation includes high-order terms in the astronomical formulas to account for perturbations in the moon's orbit caused by the sun and planets. This approach achieves much greater accuracy than simple harmonic approximations, with error margins typically less than 0.1% compared to ephemeris-based calculations.
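For readers who want to see the arithmetic in code form, here is a condensed Pine Script sketch of the calculation above. It keeps only the perturbation terms listed and drops the higher-order T², T³, and T⁴ terms, so it is an approximation for illustration rather than a copy of the published script.

```pine
//@version=6
indicator("Lunar Phase (sketch)")

sinDeg(float x) => math.sin(math.toradians(x))
cosDeg(float x) => math.cos(math.toradians(x))

// Julian Date from the bar's millisecond timestamp, then Julian centuries since J2000.0.
jd = time / 86400000.0 + 2440587.5
t  = (jd - 2451545.0) / 36525.0

// Mean elements in degrees (higher-order T terms from the formulas above are omitted).
lp = (218.3164477 + 481267.88123421 * t) % 360.0   // moon's mean longitude
d  = (297.8501921 + 445267.1114034 * t)  % 360.0   // mean elongation
m  = (357.5291092 + 35999.0502909 * t)   % 360.0   // sun's mean anomaly
mp = (134.9633964 + 477198.8675055 * t)  % 360.0   // moon's mean anomaly
f  = (93.2720950 + 483202.0175233 * t)   % 360.0   // argument of latitude

// Largest longitude perturbation terms (in millionths of a degree).
dl = 6288.016 * sinDeg(mp) + 1274.242 * sinDeg(2 * d - mp) + 658.314 * sinDeg(2 * d) +
     214.818 * sinDeg(2 * mp) + 186.986 * sinDeg(m) + 109.154 * sinDeg(2 * f)

lMoon = lp + dl / 1000000.0
lSun  = (280.46646 + 36000.76983 * t) % 360.0

phase = (1.0 - cosDeg((lMoon - lSun) % 360.0)) / 2.0   // 0 = new moon, 1 = full moon
plot(phase, "Lunar phase", color=color.silver)
```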
🌝 INTERPRETATION DETAILS 🌚
The Lunar Phase indicator provides several analytical perspectives:
New Moon (0.0-0.1, 0.9-1.0): Often associated with reversals and the beginning of new price trends
First Quarter (0.2-0.3): Can indicate continuation or acceleration of established trends
Full Moon (0.45-0.55): Frequently correlates with market turning points and potential reversals
Last Quarter (0.7-0.8): May signal consolidation or preparation for new market moves
Cycle alignment: When market cycles align with lunar cycles, the effect may be amplified
Phase transition timing: Changes between lunar phases can coincide with shifts in market sentiment
Volume correlation: Some markets show increased volatility around full and new moons
⚠️ LIMITATIONS AND CONSIDERATIONS
Correlation vs. causation: While some studies suggest lunar correlations with market behavior, they don't imply direct causation
Market-specific effects: Lunar correlations may appear stronger in some markets (commodities, precious metals) than others
Timeframe relevance: More effective for swing and position trading than for intraday analysis
Complementary tool: Should be used alongside conventional technical indicators rather than in isolation
Confirmation requirement: Lunar signals are most reliable when confirmed by price action and other indicators
Statistical significance: Many observed lunar-market correlations may not be statistically significant when tested rigorously
Calendar adjustments: The indicator accounts for astronomical position but not calendar-based trading anomalies that might overlap
📚 REFERENCES
Dichev, I. D., & Janes, T. D. (2003). Lunar cycle effects in stock returns. Journal of Private Equity, 6(4), 8-29.
Yuan, K., Zheng, L., & Zhu, Q. (2006). Are investors moonstruck? Lunar phases and stock returns. Journal of Empirical Finance, 13(1), 1-23.
Kemp, J. (2020). Lunar cycles and trading: A systematic analysis. Journal of Behavioral Finance, 21(2), 42-55. (Note: fictional reference for illustrative purposes)
DDDDD: SET50 (40 Stocks) - % New 52W Lows
DDDDD: SET50 - % New 52W Lows (40 Stocks)
This indicator measures the percentage of selected SET50 stocks making a new 52-week low, helping identify periods of extreme market fear that often align with long-term buying opportunities.
How It Works:
Tracks the daily closing prices of 40 major SET50 constituents.
A stock is counted when it closes at its lowest price over the past 252 trading days (approximately 1 year).
Calculates the percentage of new 52-week lows relative to the 40 tracked stocks (see the sketch after this list).
Displays threshold lines to highlight levels of market panic.
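A minimal Pine Script sketch of this counting logic is shown below. The four symbols are examples only; the published script repeats the same request for 40 SET50 constituents and may differ in detail.

```pine
//@version=6
indicator("% New 52W Lows (sketch)")

// Hypothetical helper: 1 when the symbol's daily close equals its 252-day low, else 0.
isNewLow(simple string sym) =>
    request.security(sym, "D", close <= ta.lowest(close, 252) ? 1 : 0)

// Only four symbols shown here; the full indicator tracks 40 SET50 constituents.
lows = isNewLow("SET:PTT") + isNewLow("SET:AOT") + isNewLow("SET:CPALL") + isNewLow("SET:KBANK")
pctNewLows = lows / 4.0 * 100.0   // divide by the number of tracked symbols

plot(pctNewLows, "% new 52W lows", color=color.red)
hline(30, "30% Threshold", color=color.orange)
hline(33, "33% Threshold", color=color.yellow)
hline(50, "50% Panic Zone", color=color.red)
```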
📈 Threshold Levels:
| Threshold Line | Color | Level (%) | Interpretation | Action |
|----------------|--------|-----------|-----------------------|--------------------------------|
| 30% Threshold | Orange | 30% | Early signs of stress | Start monitoring opportunities |
| 33% Threshold | Yellow | 33% | Confirmed panic | Consider gradual accumulation |
| 50% Panic Zone | Red | 50% | Extreme market panic | Aggressive accumulation zone |
📌 Important Notes:
Why not use the full 50 stocks?
Due to TradingView Pine Script's current technical limits, a script cannot request data for more than 40 symbols efficiently.
Therefore, this indicator uses 40 representative SET50 stocks to ensure optimal performance without exceeding system limits.
The selected stocks are diversified across major sectors to maintain reliability.
🔥 Key Insights:
Historically, spikes above 30%-50% of stocks making new lows have coincided with major market bottoms (e.g., 2011, 2020).
Higher simultaneous new lows = stronger potential for long-term recovery.
Express Generator Strategy
Pine Script™ v6
The Express Generator Strategy is an algorithmic trading system that harnesses confluence from multiple technical indicators to optimize trade entries and dynamic risk management. Developed in Pine Script v6, it is designed to operate within a user-defined backtesting period—ensuring that trades are executed only during chosen historical windows for targeted analysis.
How It Works:
- Entry Conditions:
The strategy relies on a dual confirmation approach (see the sketch at the end of this description):
  - A moving average crossover system, where a fast SMA (default 9-period) crossing above or below a slower SMA (default 21-period) signals a potential trend reversal.
  - MACD confirmation: trades are only initiated when the MACD line crosses its signal line in the direction of the moving average signal.
  - An RSI filter that refines these signals by preventing entries when the market may be overextended, ensuring that long entries only occur when the RSI is below an overbought level (default 70) and short entries only when it is above an oversold level (default 30).
- Risk Management & Dynamic Position Sizing:
The strategy takes a calculated approach to risk by enabling the adjustment of position sizes using:
  - A pre-defined percentage of equity risk per trade (default 1%, adjustable between 0.5% and 3%).
  - A stop-loss set in pips (default 100 pips, with customizable ranges), which is then adjusted by market volatility measured through the ATR.
  - Trailing stops (default 50 pips) to help protect profits as the market moves favorably.
This combination of volatility-adjusted risk and equity-based position sizing aims to harmonize trade exposure with prevailing market conditions.
- Backtest Period Flexibility:
Users can define the start and end dates for backtesting (e.g., January 1, 2020 to December 31, 2025). This ensures that the strategy only opens trades within the intended analysis window. Moreover, if the strategy is still holding a position outside this period, it automatically closes all trades to prevent unwanted exposure.
- Visual Insights:
For clarity, the strategy plots the fast (blue) and slow (red) moving averages directly on the chart, allowing for visual confirmation of crossovers and trend shifts.
By integrating multiple technical indicators with robust risk management and adaptable position sizing, the Express Generator Strategy provides a comprehensive framework for capturing trending moves while prudently managing downside risk. It’s ideally suited for traders looking to combine systematic entries with a disciplined and dynamic risk approach.
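Under the assumption that the defaults above are used, the entry logic might look roughly like the Pine Script sketch below. It relaxes the MACD cross to directional agreement for simplicity and omits the position sizing, stops, and backtest-window handling described above.

```pine
//@version=6
strategy("Express Generator entry logic (sketch)", overlay=true)

fastLen = input.int(9, "Fast SMA length")
slowLen = input.int(21, "Slow SMA length")

fastMa = ta.sma(close, fastLen)
slowMa = ta.sma(close, slowLen)
[macdLine, signalLine, macdHist] = ta.macd(close, 12, 26, 9)
rsi = ta.rsi(close, 14)

// Dual confirmation: MA crossover plus MACD agreement, filtered by RSI extremes.
longEntry  = ta.crossover(fastMa, slowMa) and macdLine > signalLine and rsi < 70
shortEntry = ta.crossunder(fastMa, slowMa) and macdLine < signalLine and rsi > 30

if longEntry
    strategy.entry("Long", strategy.long)
if shortEntry
    strategy.entry("Short", strategy.short)

plot(fastMa, "Fast MA", color=color.blue)
plot(slowMa, "Slow MA", color=color.red)
```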
FunctionSurvivalEstimation
Library "FunctionSurvivalEstimation"
The Survival Estimation function, also known as Kaplan-Meier estimation or product-limit method, is a statistical technique used to estimate the survival probability of an individual over time. It's commonly used in medical research and epidemiology to analyze the survival rates of patients with different treatments, diseases, or risk factors.
What does it do?
The Survival Estimation function takes into account censored observations (i.e., individuals who are still alive at a certain point) and calculates the probability that an individual will survive beyond a specific time period. It's particularly useful when dealing with right-censoring, where some subjects are lost to follow-up or have not experienced the event of interest by the end of the study.
Interpretation
The Survival Estimation function provides a plot of the estimated survival probability over time, which can be used to:
1. Compare survival rates between different groups (e.g., treatment arms)
2. Identify patterns in the data that may indicate differences in mortality or disease progression
3. Make predictions about future outcomes based on historical data
4. In a trading context, it may be used to ascertain the survival ratio of trades or strategies under specific conditions.
Reference:
www.global-developments.org
"Beyond GDP" ~ www.aeaweb.org
en.wikipedia.org
www.kdnuggets.com
survival_probability(alive_at_age, initial_alive)
Kaplan-Meier Survival Estimator.
Parameters:
alive_at_age (int) : The number of subjects still alive at a given age.
initial_alive (int) : The Total number of initial subjects.
Returns: The probability that a subject lives longer than a certain age.
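The export's body is not reproduced here, but based on the description above its behaviour can be approximated by the stand-in below; the implementation shown is an assumption for illustration, and the library's actual code may differ.

```pine
//@version=6
indicator("survival_probability (usage sketch)")

// Stand-in with the same signature as the library export: the share of the
// initial cohort still "alive" at the given age.
survival_probability(int alive_at_age, int initial_alive) =>
    float s = initial_alive > 0 ? alive_at_age / float(initial_alive) : na
    s

// Example: 70 of 100 initial subjects (e.g. open trades not yet stopped out) survive.
plot(survival_probability(70, 100), "S(age)")
```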
utility(c, l)
Captures the utility value from consumption and leisure.
Parameters:
c (float) : Consumption.
l (float) : Leisure.
Returns: Utility value from consumption and leisure.
welfare_utility(age, b, u, s)
Calculate the welfare utility value based age, basic needs and social interaction.
Parameters:
age (int) : Age of the subject.
b (float) : Value representing basic needs (food, shelter..).
u (float) : Value representing overall well-being and happiness.
s (float) : Value representing social interaction and connection with others.
Returns: Welfare utility value.
expected_lifetime_welfare(beta, consumption, leisure, alive_data, expectation)
Calculates the expected lifetime welfare of an individual based on their consumption, leisure, and survival probability over time.
Parameters:
beta (float) : Discount factor.
consumption (array) : List of consumption values at each step of the subjects life.
leisure (array) : List of leisure values at each step of the subjects life.
alive_data (array) : List of subjects alive at each age, the first element is the total or initial number of subjects.
expectation (float) : Optional, `default=1.0`. Expectation or weight given to this calculation.
Returns: Expected lifetime welfare value.
Buffett Indicator with Historical Bubbles (Clean)
The Buffett Indicator is a trusted macroeconomic gauge that compares the total US stock market capitalization to the nation’s GDP. Popularized by Warren Buffett, this metric highlights periods of overvaluation and undervaluation in the market.
This tool offers a clean and accurate visualization of the Buffett Indicator, enhanced with historical bubble annotations for key market events:
Dot-com Bubble (2000)
Global Financial Crisis Peak (2007)
COVID-19 Pre-crash Peak (2020)
Post-COVID Bull Market Peak (2021)
Features:
Dynamic Buffett Ratio (%) calculation using the Wilshire 5000 Index as the market cap proxy (see the sketch after this list).
Customizable GDP input for accuracy (update quarterly).
Visual thresholds for fair value, undervaluation, and overvaluation zones.
Historical event markers for educational and analytical context.
Optimized to display clearly across all timeframes: Daily, Weekly, Monthly.
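A bare-bones version of that calculation could look like the Pine Script sketch below. The ticker symbol, the default GDP figure, and the threshold levels are placeholders to be replaced with your own choices, not values taken from the published script.

```pine
//@version=6
indicator("Buffett Indicator (sketch)")

// GDP is entered manually (in billions of USD) and should be refreshed each quarter.
gdpBillions = input.float(27000.0, "US GDP (billions USD)")

// Market-cap proxy: swap in your preferred Wilshire 5000 symbol here.
marketCap = request.security("TVC:W5000", timeframe.period, close)

// Wilshire 5000 index points roughly track total US market capitalization in billions of USD.
buffettRatio = marketCap / gdpBillions * 100.0

plot(buffettRatio, "Buffett ratio (%)", color=color.teal)
hline(100, "Fair value", color=color.gray)
hline(80,  "Undervalued", color=color.green)
hline(140, "Overvalued", color=color.red)
```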
How to Use:
Manually update the GDP input as new data is released.
Use this indicator for macro-level market sentiment analysis and valuation tracking.
Combine with other tools and risk management strategies for comprehensive market insights.
Disclaimer:
This indicator is for educational purposes only. It does not constitute financial advice. Always perform your own research and analysis.
Version: 1.0
we ask Allah reconcile and repay
#BuffettIndicator #MarketValuation #MacroAnalysis #BubbleDetector #LongTermInvestor #USMarket #Wilshire5000 #TradingViewScript
Bitcoin Halving Dates
Bitcoin Halving Dates Indicator
This custom indicator automatically marks Bitcoin's key halving events by drawing vertical lines on your chart. It highlights the historical halving dates (2012, 2016, 2020) and includes an estimated date for the upcoming halving in 2024, making it easy to visualize significant supply events that can influence market trends.
Features:
Automated Markings: Displays vertical lines on the first bar of each halving day.
Customizable: Easily adjust halving dates and styling options to suit your analysis.
Built for Traders: Enhance your technical analysis by keeping track of pivotal market events.
Use this indicator to gain a visual edge by integrating critical Bitcoin halving events into your trading strategy. Happy Trading!
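A minimal sketch of the drawing logic, assuming the four dates mentioned above (the 2024 entry is the estimate referenced in the description), might look like this:

```pine
//@version=6
indicator("Bitcoin Halving Dates (sketch)", overlay=true)

// Halving timestamps (UTC); the 2024 entry was an estimate when the script was written.
halvings = array.from(
     timestamp("2012-11-28"),
     timestamp("2016-07-09"),
     timestamp("2020-05-11"),
     timestamp("2024-04-20"))

// True on the first bar at or after any halving date.
isHalvingBar = false
for ts in halvings
    if time >= ts and time[1] < ts
        isHalvingBar := true

plotshape(isHalvingBar, "Halving", shape.labeldown, location.abovebar, color.orange)
if isHalvingBar
    line.new(bar_index, low, bar_index, high, extend=extend.both, color=color.orange, width=2)
```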
Quantitative Easing and Tightening Periods
Quantitative Easing (QE) and Quantitative Tightening (QT) periods based on historical events from the Federal Reserve:
Quantitative Easing (QE) Periods:
QE1:
Start: November 25, 2008
End: March 31, 2010
Description: The Federal Reserve initiated QE1 in response to the financial crisis, purchasing mortgage-backed securities and Treasuries.
QE2:
Start: November 3, 2010
End: June 29, 2011
Description: QE2 involved the purchase of $600 billion in U.S. Treasury bonds to further stimulate the economy.
QE3:
Start: September 13, 2012
End: October 29, 2014
Description: QE3 was an open-ended bond-buying program with monthly purchases of $85 billion in Treasuries and mortgage-backed securities.
QE4 (COVID-19 Pandemic Response):
Start: March 15, 2020
End: March 10, 2022
Description: The Federal Reserve engaged in QE4 in response to the economic impact of the COVID-19 pandemic, purchasing Treasuries and MBS in an effort to provide liquidity.
Quantitative Tightening (QT) Periods:
QT1:
Start: October 1, 2017
End: August 1, 2019
Description: The Federal Reserve began shrinking its balance sheet in 2017, gradually reducing its holdings of U.S. Treasuries and mortgage-backed securities. This period ended in August 2019 when the Fed decided to stop reducing its balance sheet.
QT2:
Start: June 1, 2022
End: Ongoing (as of March 2025)
Description: The Federal Reserve started QT again in June 2022, reducing its holdings of U.S. Treasuries and MBS in response to rising inflation. The Fed has continued this tightening cycle.
These periods are key moments in U.S. monetary policy, where the Fed either injected liquidity into the economy (QE) or reduced its balance sheet by not reinvesting maturing securities (QT). The exact dates and nature of these policies may vary based on interpretation and adjustments to the Fed's actions during those times.
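One simple way to visualize these windows in Pine Script is to shade the chart background. The sketch below hard-codes the start and end dates listed above and treats QT2 as still open; it is an illustration, not the published script.

```pine
//@version=6
indicator("QE / QT periods (sketch)", overlay=true)

inPeriod(int startTs, int endTs) =>
    time >= startTs and time <= endTs

// QE windows (green) and QT windows (red), using the dates listed above.
qe = inPeriod(timestamp("2008-11-25"), timestamp("2010-03-31")) or
     inPeriod(timestamp("2010-11-03"), timestamp("2011-06-29")) or
     inPeriod(timestamp("2012-09-13"), timestamp("2014-10-29")) or
     inPeriod(timestamp("2020-03-15"), timestamp("2022-03-10"))

qt = inPeriod(timestamp("2017-10-01"), timestamp("2019-08-01")) or
     time >= timestamp("2022-06-01")   // QT2 treated as ongoing

bgcolor(qe ? color.new(color.green, 85) : qt ? color.new(color.red, 85) : na)
```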
Market Crashes & Recessions (1907-Present)
Included Recession Periods:
Panic of 1907 (1907–1908)
Post-WWI Recession (1918–1919)
Great Depression (1929–1933)
1937–1938 Recession
1953, 1957, & 1973 Oil Crises Recessions
Early 1980s Recession (1980–1982)
Early 1990s Recession (1990–1991)
Dot-com Bubble (2000–2002)
Global Financial Crisis (2007–2009)
COVID-19 Recession (2020)
2022 Market Correction
[GYTS] FiltersToolkit Library
FiltersToolkit Library
🌸 Part of GoemonYae Trading System (GYTS) 🌸
🌸 --------- 1. INTRODUCTION --------- 🌸
💮 What Does This Library Contain?
This library is a curated collection of high-performance digital signal processing (DSP) filters and auxiliary functions designed specifically for financial time series analysis. It includes a shortlist of our favourite and best performing filters — each rigorously tested and selected for their responsiveness, minimal lag and robustness in diverse market conditions. These tools form an integral part of the GoemonYae Trading System (GYTS), chosen for their unique characteristics in handling market data.
The library contains two main categories:
1. Smoothing filters (low-pass filters and moving averages) for e.g. denoising, trend following
2. Detrending tools (high-pass and band-pass filters, known as "oscillators") for e.g. mean reversion
This collection is finely tuned for practical trading applications and is therefore not meant to be exhaustive. However, it will continue to expand as we discover and validate new filtering techniques. I welcome collaboration and suggestions for novel approaches.
🌸 ——— 2. ADDED VALUE ——— 🌸
💮 Unified syntax and comprehensive documentation
The FiltersToolkit Library brings together a wide array of valuable filters under a unified, intuitive syntax. Each function is thoroughly documented, with clear explanations and academic sources that underline the mathematical rigour behind the methods. This level of documentation not only facilitates integration into trading strategies but also helps clarify the underlying concepts and rationale.
💮 Optimised performance and readability
The code prioritizes computational efficiency while maintaining readability. Key optimizations include:
- Minimizing redundant calculations in recursive filters
- Smart coefficient caching
- Efficient state management
- Vectorized operations where applicable
💮 Enhanced functionality and flexibility
Some filters in this library introduce extended functionality beyond the original publications. For instance, the MESA Adaptive Moving Average (MAMA) and Ehlers’ Combined Bandpass Filter incorporate multiple variations found in the literature, thereby providing traders with flexible tools that can be fine-tuned to different market conditions.
🌸 ——— 3. THE FILTERS ——— 🌸
💮 Hilbert Transform Function
This function implements the Hilbert Transform as utilised by John Ehlers. It converts a real-valued time series into its analytic signal, enabling the extraction of instantaneous phase and frequency information—an essential step in adaptive filtering.
Source: John Ehlers - "Rocket Science for Traders" (2001), "TASC 2001 V. 19:9", "Cybernetic Analysis for Stocks and Futures" (2004)
💮 Homodyne Discriminator
By leveraging the Hilbert Transform, this function computes the dominant cycle period through a Homodyne Discriminator. It extracts the in-phase and quadrature components of the signal, facilitating a robust estimation of the underlying cycle characteristics.
Source: John Ehlers - "Rocket Science for Traders" (2001), "TASC 2001 V. 19:9", "Cybernetic Analysis for Stocks and Futures" (2004)
💮 MESA Adaptive Moving Average (MAMA)
An advanced dual-stage adaptive moving average, this function outputs both the MAMA and its companion FAMA. It combines adaptive alpha computation with elements from Kaufman’s Adaptive Moving Average (KAMA) to provide a responsive and reliable trend indicator.
Source: John Ehlers - "Rocket Science for Traders" (2001), "TASC 2001 V. 19:9", "Cybernetic Analysis for Stocks and Futures" (2004)
💮 BiQuad Filters
A family of second-order recursive filters offering exceptional control over frequency response:
- High-pass filter for detrending
- Low-pass filter for smooth trend following
- Band-pass filter for cycle isolation
The quality factor (Q) parameter allows fine-tuning of the resonance characteristics, making these filters highly adaptable to different market conditions.
Source: Robert Bristow-Johnson's Audio EQ Cookbook, implemented by @The_Peaceful_Lizard
💮 Relative Vigor Index (RVI)
This filter evaluates the strength of a trend by comparing the closing price to the trading range. Operating similarly to a band-pass filter, the RVI provides insights into market momentum and potential reversals.
Source: John Ehlers – “Cybernetic Analysis for Stocks and Futures” (2004)
💮 Cyber Cycle
The Cyber Cycle filter emphasises market cycles by smoothing out noise and highlighting the dominant cyclical behaviour. It is particularly useful for detecting trend reversals and cyclical patterns in the price data.
Source: John Ehlers – “Cybernetic Analysis for Stocks and Futures” (2004)
💮 Butterworth High Pass Filter
Inspired by the classical Butterworth design, this filter achieves a maximally flat magnitude response in the passband while effectively removing low-frequency trends. Its design minimises phase distortion, which is vital for accurate signal interpretation.
Source: John Ehlers – “Cybernetic Analysis for Stocks and Futures” (2004)
💮 2-Pole SuperSmoother
Employing a two-pole design, the SuperSmoother filter reduces high-frequency noise with minimal lag. It is engineered to preserve trend integrity while offering a smooth output even in noisy market conditions.
Source: John Ehlers – “Cybernetic Analysis for Stocks and Futures” (2004)
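For reference, a commonly cited formulation of Ehlers' 2-pole SuperSmoother is sketched below in Pine Script; the library's own implementation may differ in coefficients, naming, or edge handling.

```pine
//@version=6
indicator("2-pole SuperSmoother (sketch)", overlay=true)

superSmoother(float src, simple int length) =>
    a1 = math.exp(-1.414 * math.pi / length)
    b1 = 2.0 * a1 * math.cos(1.414 * math.pi / length)
    c2 = b1
    c3 = -a1 * a1
    c1 = 1.0 - c2 - c3
    float filt = na
    // Recursive second-order smoothing of the averaged input.
    filt := c1 * (src + nz(src[1], src)) / 2.0 + c2 * nz(filt[1], src) + c3 * nz(filt[2], src)
    filt

plot(superSmoother(close, 20), "SuperSmoother(20)", color=color.orange)
```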
💮 3-Pole SuperSmoother
An extension of the 2-pole design, the 3-pole SuperSmoother further attenuates high-frequency noise. Its additional pole delivers enhanced smoothing at the cost of slightly increased lag.
Source: John Ehlers – “Cybernetic Analysis for Stocks and Futures” (2004)
💮 Adaptive Directional Volatility Moving Average (ADXVma)
This adaptive moving average adjusts its smoothing factor based on directional volatility. By combining true range and directional movement measurements, it remains exceptionally flat during ranging markets and responsive during directional moves.
Source: Various implementations across platforms, unified and optimized
💮 Ehlers Combined Bandpass Filter with Automated Gain Control (AGC)
This sophisticated filter merges a highpass pre-processing stage with a bandpass filter. An integrated Automated Gain Control normalises the output to a consistent range, while offering both regular and truncated recursive formulations to manage lag.
Source: John F. Ehlers – “Truncated Indicators” (2020), “Cycle Analytics for Traders” (2013)
💮 Voss Predictive Filter
A forward-looking filter that predicts future values of a band-limited signal in real time. By utilising multiple time-delayed feedback terms, it provides anticipatory coupling and delivers a short-term predictive signal.
Source: John Ehlers - "A Peek Into The Future" (TASC 2019-08)
💮 Adaptive Autonomous Recursive Moving Average (A2RMA)
This filter dynamically adjusts its smoothing through an adaptive mechanism based on an efficiency ratio and a dynamic threshold. A double application of an adaptive moving average ensures both responsiveness and stability in volatile and ranging markets alike. Very flat response when properly tuned.
Source: @alexgrover (2019)
💮 Ultimate Smoother (2-Pole)
The Ultimate Smoother filter is engineered to achieve near-zero lag in its passband by subtracting a high-pass response from an all-pass response. This creates a filter that maintains signal fidelity at low frequencies while effectively filtering higher frequencies at the expense of slight overshooting.
Source: John Ehlers - TASC 2024-04 "The Ultimate Smoother"
Note: This library is actively maintained and enhanced. Suggestions for additional filters or improvements are welcome through the usual channels. The source code contains a list of tested filters that did not make it into the curated collection.
McRib Bull Market Indicator
# McRib Bull Market Indicator
## Overview
The McRib Bull Market Indicator is a unique technical analysis tool that marks McDonald's McRib sandwich release dates on your trading charts. While seemingly unconventional, this indicator serves as a fascinating historical reference point for market analysis, particularly for studying periods of market expansion.
## Key Features
- Visual yellow labels marking verified McRib release dates from 2012 to 2024
- Clean, unobtrusive design that overlays on any chart timeframe
- Covers both U.S. and international releases (including UK and Australia)
## Historical Reference Points
The indicator includes release dates from:
- December 2012
- October-December 2014
- January 2015
- October 2016
- November 2017
- October 2018
- October 2019
- December 2020
- October 2022
- November 2023
- December 2024
## Usage Guide
1. Add the indicator to any chart by searching for "McRib Bull Market Indicator"
2. The indicator will automatically display yellow labels above price candles on McRib release dates
3. Use these reference points to:
- Analyze market conditions during McRib releases
- Study potential correlations between releases and market movements
- Compare market behavior across different McRib release periods
- Identify any patterns in market expansion phases coinciding with releases
## Trading Application
While initially created as a novelty indicator, it can be used to:
- Mark specific historical points of reference for broader market analysis
- Study potential market psychology around major promotional events
- Compare seasonal market patterns with recurring release dates
- Analyze market expansion phases that coincide with releases
Remember: While this indicator provides interesting historical reference points, it should be used as part of a comprehensive trading strategy rather than as a standalone trading signal.
Performance Summary and Shading (Offset Version)
Modified "Recession and Crisis Shading" indicator by @haribotagada (Original Link: )
The updated indicator accepts an offset in days (positive or negative) and calculates performance between the offset date and the input date.
Potential uses include identifying performance one week after company earnings or an FOMC meeting.
This feature simplifies input by enabling standardized offset dates, while still allowing flexibility to adjust ranges by overriding inputs as needed.
Summary of added features and indicator notes:
Accepts both positive and negative offsets.
By default, the script calculates performance from the close of the input date to the close of the date at (input date + offset) for positive offsets, and from the close of (input date - offset) to the close of the input date for negative offsets (see the sketch after this list). For example, with an input date of November 1, 2024, an offset of 7 calculates performance from the close on November 1 to the close on November 8, while an offset of -7 calculates from the close on October 25 to the close on November 1.
Allows the user to perform the calculation using the open price on the input date instead of the close price.
The input format has been modified to allow overrides for the default duration, while retaining the original capabilities of the indicator.
The calculation shows both the average change and the average annualized change. For bar-wise calculations, annualization assumes 252 trading days per year. For date-wise calculations, it assumes 365 days for annualization.
Carries over all previous inputs to retain functionality of the previous script. Changes a few small settings:
Calculates start to end date performance by default instead of peak to trough performance.
Updates visuals of label text to make it easier to read and less transparent.
Changed stat box color scheme to make the text easier to read
Updated default input data to new format of input with offsets
Changed default duration statistic to number of days instead of number of bars with an option to select number of bars.
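As a rough, hedged illustration of the date-plus-offset lookup (not the script's actual code), the core of the default close-to-close calculation could be approximated like this:

```pine
//@version=6
indicator("Offset performance (sketch)", overlay=true)

eventDate  = input.time(timestamp("2024-11-01"), "Event date")
offsetDays = input.int(7, "Offset (calendar days)")

targetDate = eventDate + offsetDays * 86400000   // offset expressed in milliseconds

// Close of the first bar at or after each date (positive offsets shown; negative ones swap the roles).
startClose = ta.valuewhen(time >= eventDate and time[1] < eventDate, close, 0)
endClose   = ta.valuewhen(time >= targetDate and time[1] < targetDate, close, 0)

perfPct = (endClose / startClose - 1) * 100
plot(perfPct, "Performance over offset (%)")
```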
Potential Features to Add:
Import dataset from CSV files or by plugging into TradingView calendar
Example Input Datasets:
Recessions:
2020-02-01,COVID-19,59
2007-12-01,Subprime mortgages,547
2001-03-01,Dot-com,243
1990-07-01,Oil shock,243
1981-07-01,US unemployment,788
1980-01-01,Volcker,182
1973-11-01,OPEC,485
Japan Revolving Door Elections
2006-09-26, Shinzo Abe
2007-09-26, Yasuo Fukuda
2008-09-24, Taro Aso
2009-09-16, Yukio Hatoyama
2010-07-08, Naoto Kan
2011-09-02, Yoshihiko Noda
Hope you find the modified indicator useful and let me know if you would like any features to be added!
STANDARD DEVIATION INDICATOR BY WISE TRADER
WISE TRADER STANDARD DEVIATION SETUP: The Ultimate Volatility and Trend Analysis Tool
Unlock the power of STANDARD DEVIATIONS like never before with this indicator, a versatile and comprehensive tool designed for traders who seek deeper insights into market volatility, trend strength, and price action. This advanced indicator simultaneously plots three sets of customizable deviation bands, each with unique settings for moving average types, standard deviations, and periods. Whether you’re a swing trader, day trader, or long-term investor, the STANDARD DEVIATION indicator provides a dynamic way to spot potential reversals, breakouts, and trend-following opportunities.
Key Features:
STANDARD DEVIATIONS Configuration: Monitor three different Bollinger Bands at the same time, allowing for multi-timeframe analysis within a single chart (see the sketch after this list).
Customizable Moving Average Types: Choose from SMA, EMA, SMMA (RMA), WMA, and VWMA to calculate the basis of each band according to your preferred method.
Dynamic Standard Deviations: Set different standard deviation multipliers for each band to fine-tune sensitivity for various market conditions.
Visual Clarity: Color-coded bands with adjustable thicknesses provide a clear view of upper and lower boundaries, along with fill backgrounds to highlight price ranges effectively.
Enhanced Trend Detection: Identify potential trend continuation, consolidation, or reversal zones based on the position and interaction of price with the three bands.
Offset Adjustment: Shift the bands forward or backward to analyze future or past price movements more effectively.
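The sketch below shows how one of the three bands might be assembled, with the moving-average type selected by name. Parameter values are placeholders, and the published script's structure may differ.

```pine
//@version=6
indicator("Triple deviation bands (sketch)", overlay=true)

maOf(simple string maType, float src, simple int len) =>
    sma  = ta.sma(src, len)
    ema  = ta.ema(src, len)
    wma  = ta.wma(src, len)
    vwma = ta.vwma(src, len)
    rma  = ta.rma(src, len)   // SMMA (RMA)
    maType == "EMA" ? ema : maType == "WMA" ? wma : maType == "VWMA" ? vwma : maType == "SMMA (RMA)" ? rma : sma

band(simple string maType, simple int len, float mult) =>
    basis = maOf(maType, close, len)
    dev   = mult * ta.stdev(close, len)
    [basis + dev, basis - dev]

// One of the three configurable bands; the full script repeats this with separate inputs.
[upper1, lower1] = band("SMA", 20, 2.0)
plot(upper1, "Band 1 upper", color=color.aqua)
plot(lower1, "Band 1 lower", color=color.aqua)
```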
Why Use Triple STANDARD DEVIATIONS ?
STANDARD DEVIATIONS are a popular choice among traders for measuring volatility and anticipating potential price movements. This indicator takes STANDARD DEVIATIONS to the next level by allowing you to customize and analyze three distinct bands simultaneously, providing an unparalleled view of market dynamics. Use it to:
Spot Volatility Expansion and Contraction: Track periods of high and low volatility as prices move toward or away from the bands.
Identify Overbought or Oversold Conditions: Monitor when prices reach extreme levels compared to historical volatility to gauge potential reversal points.
Validate Breakouts: Confirm the strength of a breakout when prices move beyond the outer bands.
Optimize Risk Management: Enhance your strategy's risk-reward ratio by dynamically adjusting stop-loss and take-profit levels based on band positions.
Ideal For:
Forex, Stocks, Cryptocurrencies, and Commodities Traders looking to enhance their technical analysis.
Scalpers and Day Traders who need rapid insights into market conditions.
Swing Traders and Long-Term Investors seeking to confirm entry and exit points.
Trend Followers and Mean Reversion Traders interested in combining both strategies for maximum profitability.
Harness the full potential of STANDARD DEVIATIONS with this multi-dimensional approach. The "STANDARD DEVIATIONS " indicator by WISE TRADER will become an essential part of your trading arsenal, helping you make more informed decisions, reduce risks, and seize profitable opportunities.
Who is WISE TRADER ?
Wise Trader is a highly skilled trader who launched his channel in 2020 during the COVID-19 pandemic, quickly building a loyal following. With thousands of paid subscribed members and over 70,000 YouTube subscribers, Wise Trader has become a trusted authority in the trading world. He is known for his ability to navigate significant events, such as the Indian elections and stock market crashes, providing his audience with valuable insights into market movements and volatility. With a deep understanding of macroeconomics and its correlation to global stock markets, Wise Trader shares informed strategies that help traders make better decisions. His content covers technical analysis, trading setups, economic indicators, and market trends, offering a comprehensive approach to understanding financial markets. The channel serves as a go-to resource for traders who want to enhance their skills and stay informed about key market developments.
Bitcoin Logarithmic Growth Curve 2024
The Bitcoin logarithmic growth curve is a concept used to analyze Bitcoin's price movements over time. The idea is based on the observation that Bitcoin's price tends to grow exponentially, particularly during bull markets. It attempts to give a long-term perspective on the Bitcoin price movements.
The curve includes an upper and lower band. These bands often represent zones where Bitcoin's price is overextended (upper band) or undervalued (lower band) relative to its historical growth trajectory. When the price touches or exceeds the upper band, it may indicate a speculative bubble, while prices near the lower band may suggest a buying opportunity.
Unlike most Bitcoin growth curve indicators, this one includes a logarithmic growth curve optimized using the latest 2024 price data, making it, in our view, superior to previous models. Additionally, it features statistical confidence intervals derived from linear regression, compatible across all timeframes, and extrapolates the data far into the future. Finally, this model allows users the flexibility to manually adjust the function parameters to suit their preferences.
The Bitcoin logarithmic growth curve has the following function:
y = 10^(a * log10(x) - b)
In the context of this formula, the y value represents the Bitcoin price, while the x value corresponds to the time, specifically indicated by the weekly bar number on the chart.
How is it made (You can skip this section if you’re not a fan of math):
To optimize the fit of this function and determine the optimal values of a and b, the previous weekly cycle peak values were analyzed. The corresponding x and y values were recorded as follows:
113, 18.55
240, 1004.42
451, 19128.27
655, 65502.47
The same process was applied to the bear market low values:
103, 2.48
267, 211.03
471, 3192.87
676, 16255.15
Next, these values were converted to their linear form by applying the base-10 logarithm. This transformation allows the function to be expressed in a linear state: y = a * x − b. This step is essential for enabling linear regression on these values.
For the cycle peak (x,y) values:
2.053, 1.268
2.380, 3.002
2.654, 4.282
2.816, 4.816
And for the bear market low (x,y) values:
2.013, 0.394
2.427, 2.324
2.673, 3.504
2.830, 4.211
Next, linear regression was performed on both these datasets. (Numerous tools are available online for linear regression calculations, making manual computations unnecessary).
Linear regression is a method used to find a straight line that best represents the relationship between two variables. It looks at how changes in one variable affect another and tries to predict values based on that relationship.
The goal is to minimize the differences between the actual data points and the points predicted by the line. Essentially, it aims to optimize for the highest R-Square value.
The resulting slope (a-value) and y-intercept (b-value) of each fit, together with their standard errors, define the functions given below. These standard errors can be used to calculate confidence intervals by multiplying them by the t-values (two degrees of freedom) from the linear regression.
These t-values can be found in a t-distribution table. For the top cycle confidence intervals, we used t10% (0.133), t25% (0.323), and t33% (0.414). For the bottom cycle confidence intervals, the t-values used were t10% (0.133), t25% (0.323), t33% (0.414), t50% (0.765), and t67% (1.063).
The final bull cycle function is:
y = 10^((4.058 ± 0.133) * log10(x) - (6.44 ± 0.324))
The final bear cycle function is:
y = 10^((4.684 ± 0.025) * log10(x) - (9.034 ± 0.063))
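Plotting just the two centre lines from these fits on a weekly BTC chart takes only a few lines of Pine Script. The sketch below ignores the ± confidence-interval offsets and assumes the chart's full weekly history is loaded so that the bar index matches the x variable.

```pine
//@version=6
indicator("BTC logarithmic growth curve (sketch)", overlay=true)

// x is the weekly bar number (1-based); apply on a weekly BTC chart with full history.
growthCurve(float a, float b) =>
    x = bar_index + 1
    math.pow(10.0, a * math.log10(x) - b)

// Centre lines only, using the fitted values above; the confidence offsets are omitted.
topCentre    = growthCurve(4.058, 6.44)
bottomCentre = growthCurve(4.684, 9.034)

plot(topCentre, "Bull-cycle centre", color=color.red)
plot(bottomCentre, "Bear-cycle centre", color=color.green)
```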
The main Criticisms of growth curve models:
The Bitcoin logarithmic growth curve model faces several general criticisms that we’d like to highlight briefly. The most significant, in our view, is its heavy reliance on past price data, which may not accurately forecast future trends. For instance, previous growth curve models from 2020 on TradingView were overly optimistic in predicting the last cycle’s peak.
This is why we aimed to present our process for deriving the final functions in a transparent, step-by-step scientific manner, including statistical confidence intervals. It's important to note that the bull cycle function is less reliable than the bear cycle function, as the top band is significantly wider than the bottom band.
Even so, we still believe that the Bitcoin logarithmic growth curve presented in this script is overly optimistic, since it goes partly against the concept of diminishing returns which we discussed in this post:
This is why we also propose alternative parameter settings that align more closely with the theory of diminishing returns.
Our recommendations:
Drawing on the concept of diminishing returns, we propose alternative settings for this model that we believe provide a more realistic forecast aligned with this theory. The adjusted parameters apply only to the top band: a-value: 3.637 ± 0.2343 and b-parameter: -5.369 ± 0.6264. However, please note that these values are highly subjective, and you should be aware of the model's limitations.
Conservative bull cycle model:
y = 10^((3.637 ± 0.2343) * log10(x) - (5.369 ± 0.6264))
Season Chart
This overlay is built on the idea of seasonal charts.
It is constructed by taking the percentage change from each close and recording that change for every trading day of any year that is within the sample. We then take the average for each day of all the years.
These averages are then cumulated to create the chart as per traditional seasonal chart construction.
I have also taken a trimmed mean of the averages to try to dampen the impact of one-off moves that may have a dramatic effect on the daily averages (for example the crash to $0 in oil in April 2020); however, even removing 10% may not guarantee one-off moves won’t affect the average.
The construction of the chart is completely dependent on the data provided by TradingView and so it is recommended that if longer sample sizes are used, the user go back to check that the years contained within the sample have a full history. Some data may have large gaps in their history and this can distort the seasonality readings.
I have attempted to align the chart with the first trading day of the year, but the start of some months may be out by a day or two, as it becomes difficult to track all weeks with differing market holiday closures each year; this in turn varies the total number of actual trading days in each year, as do leap years.
This overlay is designed for the Daily time frame only and will not work on Crypto or any other instrument that trades outside of usual business weekdays. Future updates may include the ability to adapt to Crypto instruments.
All feedback and comments welcome!
Wedge Pop & Drop [QuantVue]
A "Wedge Pop" is a trading pattern popularized by Oliver Kell, a notable trader who won the 2020 US Investing Championship with a remarkable return of 941%. This pattern, often referred to as "The Money Pattern" in his trading strategy, serves as a critical signal indicating the beginning of a new uptrend in a stock.
A Wedge Pop occurs when a stock first trades up through the moving averages after reaching a downside extension. Conversely, a Wedge Drop refers to the first time a stock trades down through the moving averages after reaching an upside extension.
How the Indicator Works:
The indicator uses the Average True Range (ATR) and the 10-period Exponential Moving Average (10 EMA) to identify upside and downside extensions. An upside extension occurs when the low of the current bar is greater than 1.5 (default) times the ATR above the moving average. A downside extension occurs when the high of the current bar is less than 1.5 times the ATR below the moving average.
Once an extension has been reached, the first time the security trades back through the moving averages, it triggers a Wedge Pop/Drop.
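A stripped-down Pine Script sketch of that state machine is shown below. It uses only the 10 EMA (the published script may check additional moving averages), and the parameter values are assumptions.

```pine
//@version=6
indicator("Wedge Pop & Drop (sketch)", overlay=true)

atrMult = input.float(1.5, "ATR multiplier")
ema10   = ta.ema(close, 10)
atr     = ta.atr(14)

// Extensions: price stretched well away from the 10 EMA.
upExtension   = low  > ema10 + atrMult * atr
downExtension = high < ema10 - atrMult * atr

// Remember the most recent extension side until price trades back through the EMA.
var int extSide = 0
extSide := upExtension ? 1 : downExtension ? -1 : extSide

wedgePop  = extSide == -1 and ta.crossover(close, ema10)
wedgeDrop = extSide ==  1 and ta.crossunder(close, ema10)

if wedgePop or wedgeDrop
    extSide := 0

plotshape(wedgePop,  "Wedge Pop",  shape.triangleup,   location.belowbar, color.green)
plotshape(wedgeDrop, "Wedge Drop", shape.triangledown, location.abovebar, color.red)
```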
Give this indicator a BOOST and COMMENT your thoughts below!
We hope you enjoy.
Cheers!
Intellect_city - Halvings Bitcoin Cycle
What is halving?
The halving timer shows when the next Bitcoin halving will occur, as well as the dates of past halvings. This event occurs every 210,000 blocks, which is approximately every 4 years. Halving reduces the emission reward by half. The original Bitcoin reward was 50 BTC per block found.
Why is halving necessary?
Halving allows the network to maintain an algorithmically specified emission schedule. Anyone can verify that no more than 21 million bitcoins can ever be issued under this algorithm. Moreover, everyone can see how much was issued earlier, at what speed emission is happening now, and how many bitcoins remain to be mined in the future. Even a sharp increase or decrease in mining capacity will not significantly affect this process: at the next difficulty recalculation, which occurs every 2016 blocks, the mining difficulty is adjusted so that blocks are still found approximately once every ten minutes.
How does halving work in Bitcoin blocks?
The miner who assembles a block adds a so-called coinbase transaction. This transaction has no inputs, only an output that pays the newly issued coins to the miner's own address. If the miner's block wins, the entire network considers these coins to have been obtained legitimately. The maximum reward size is determined by the algorithm; the miner can claim the maximum reward for the current period or less. If the miner claims more than is allowed, the network rejects the block and the miner receives nothing. After each halving, miners must halve the reward they assign to themselves, otherwise their blocks will be rejected and will not make it into the main branch of the blockchain.
The impact of halving on the price of Bitcoin
It is believed that with constant demand, a halving of supply should double the value of the asset. In practice, the market knows when the halving will occur and prepares for this event in advance. Typically, the Bitcoin rate begins to rise about six months before the halving, and during the halving itself it does not change much. On average for past periods, the upper peak of the rate can be observed more than a year after the halving. It is almost impossible to predict future periods because, in addition to the reduction in emissions, many other factors influence the exchange rate. For example, major hacks or bankruptcies of crypto companies, the situation on the stock market, manipulation of “whales,” or changes in legislative regulation.
---------------------------------------------
Table - Past and future Bitcoin halvings:
---------------------------------------------
No. - Date - Block number - Reward:
0 - 03-01-2009 - 0 block - 50 BTC
1 - 28-11-2012 - 210000 block - 25 BTC
2 - 09-07-2016 - 420000 block - 12.5 BTC
3 - 11-05-2020 - 630000 block - 6.25 BTC
4 - 20-04-2024 - 840000 block - 3.125 BTC
5 - 24-03-2028 - 1050000 block - 1.5625 BTC
6 - 26-02-2032 - 1260000 block - 0.78125 BTC
7 - 30-01-2036 - 1470000 block - 0.390625 BTC
8 - 03-01-2040 - 1680000 block - 0.1953125 BTC
9 - 07-12-2043 - 1890000 block - 0.09765625 BTC
10 - 10-11-2047 - 2100000 block - 0.04882813 BTC
11 - 14-10-2051 - 2310000 block - 0.02441406 BTC
12 - 17-09-2055 - 2520000 block - 0.01220703 BTC
13 - 21-08-2059 - 2730000 block - 0.00610352 BTC
14 - 25-07-2063 - 2940000 block - 0.00305176 BTC
15 - 28-06-2067 - 3150000 block - 0.00152588 BTC
16 - 01-06-2071 - 3360000 block - 0.00076294 BTC
17 - 05-05-2075 - 3570000 block - 0.00038147 BTC
18 - 08-04-2079 - 3780000 block - 0.00019073 BTC
19 - 12-03-2083 - 3990000 block - 0.00009537 BTC
20 - 13-02-2087 - 4200000 block - 0.00004768 BTC
21 - 17-01-2091 - 4410000 block - 0.00002384 BTC
22 - 21-12-2094 - 4620000 block - 0.00001192 BTC
23 - 24-11-2098 - 4830000 block - 0.00000596 BTC
24 - 29-10-2102 - 5040000 block - 0.00000298 BTC
25 - 02-10-2106 - 5250000 block - 0.00000149 BTC
26 - 05-09-2110 - 5460000 block - 0.00000075 BTC
27 - 09-08-2114 - 5670000 block - 0.00000037 BTC
28 - 13-07-2118 - 5880000 block - 0.00000019 BTC
29 - 16-06-2122 - 6090000 block - 0.00000009 BTC
30 - 20-05-2126 - 6300000 block - 0.00000005 BTC
31 - 23-04-2130 - 6510000 block - 0.00000002 BTC
32 - 27-03-2134 - 6720000 block - 0.00000001 BTC
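The schedule in the table follows one simple rule: every 210,000 blocks the block subsidy halves, starting from 50 BTC. A tiny Pine Script sketch of that arithmetic:

```pine
//@version=6
indicator("Halving schedule (sketch)")

// The subsidy starts at 50 BTC and halves every 210,000 blocks.
rewardAfterHalving(int n) =>
    50.0 / math.pow(2.0, n)

firstBlockOfEpoch(int n) =>
    210000 * n

// Example: the 4th halving starts at block 840,000 with a 3.125 BTC subsidy.
plot(rewardAfterHalving(4), "Reward after 4th halving (BTC)")
plot(firstBlockOfEpoch(4), "First block of epoch 4", display=display.data_window)
```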