Markov Chain Trend Probability

A Markov Chain is a mathematical model that predicts future states based on the current state, assuming that the future depends only on the present (not the past). Originally developed by Russian mathematician Andrey Markov, this concept is widely used in:
Finance: Risk modeling, portfolio optimization, credit scoring, algorithmic trading
Weather Forecasting: Predicting sunny/rainy days, temperature patterns, storm tracking
Here's an example of a Markov chain: if the weather is sunny, the probability that it is still sunny 30 minutes later might be 90%. If the state changes and it starts raining, the probability that it is still raining 30 minutes later might be 70%, with only a 30% chance of turning sunny again.
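Written as a transition matrix (rows = current weather, columns = weather 30 minutes later), the example above is:

         Sunny   Rainy
Sunny    0.90    0.10
Rainy    0.30    0.70

The long-run behaviour follows from the steady-state condition π = πP (introduced below): here 0.1·π_sunny = 0.3·π_rainy together with π_sunny + π_rainy = 1 gives π_sunny = 0.75 and π_rainy = 0.25, i.e. this model is sunny 75% of the time in the long run.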
A similar concept can be applied to market price action and trends.
Mathematical Foundation
The core principle follows the Markov Property: P(X_{t+1}|X_t, X_{t-1}, ..., X_0) = P(X_{t+1}|X_t)
Transition Matrix (rows = current state, columns = next state):

         Up     Down
Up       P11    P12
Down     P21    P22
Probability Calculations:
P(Up→Up) = Count(Up→Up) / Count(Up states)
P(Down→Down) = Count(Down→Down) / Count(Down states)
Steady-state probability: π = πP (where π is the stationary distribution)
State Definition:
State = UPTREND if (Price_t - Price_{t-n})/ATR > threshold
State = DOWNTREND if (Price_t - Price_{t-n})/ATR < -threshold
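As a rough illustration, here is a minimal Pine Script v5 sketch of the state definition and transition counting described above. It is not the indicator's source: the names are illustrative, and it counts transitions over the whole history rather than the rolling "Historical Periods" window used by the script.

```pine
//@version=5
indicator("Markov Trend Probability (sketch)", overlay=false)

// Inputs mirror the parameters listed below.
lookback  = input.int(14, "Lookback Period")
threshold = input.float(0.5, "ATR Threshold")

// State definition: price change over `lookback` bars, measured in ATR units.
move = (close - close[lookback]) / ta.atr(lookback)
var int state = 0
state := move > threshold ? 1 : move < -threshold ? -1 : state  // 1 = UPTREND, -1 = DOWNTREND

// Count state transitions to estimate P(Up→Up) and P(Down→Down).
var float upUp  = 0.0
var float upAll = 0.0
var float dnDn  = 0.0
var float dnAll = 0.0
if barstate.isconfirmed and state[1] != 0
    if state[1] == 1
        upAll += 1
        upUp  += state == 1 ? 1 : 0
    else
        dnAll += 1
        dnDn  += state == -1 ? 1 : 0

plot(upAll > 0 ? upUp / upAll : na, "P(Up→Up)",     color=color.green)
plot(dnAll > 0 ? dnDn / dnAll : na, "P(Down→Down)", color=color.red)
```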
How It Works in Trading
This indicator applies Markov Chain theory to market trends by:
Defining States: Classifies market conditions as UPTREND or DOWNTREND based on price movement relative to ATR (Average True Range)
Learning Transitions: Analyzes historical data to calculate probabilities of moving from one state to another
Predicting Probabilities: Estimates the likelihood of future trend continuation or reversal
How to Use
Parameters:
Lookback Period: Number of bars to analyze for trend detection (default: 14)
ATR Threshold: Sensitivity multiplier for state changes (default: 0.5)
Historical Periods: Sample size for probability calculations (default: 33)
Trading Applications:
Trend confirmation for entry/exit decisions
Risk assessment through probability analysis
Market regime identification
Early warning system for potential trend reversals
The indicator works on any timeframe and asset class. Enjoy!
[SGM Markov Chain]

Introduction
A Markov chain is a mathematical model that describes a system evolving over time among a finite number of states. This model is based on the assumption that the future state of the system depends only on the current state and not on previous states, the so-called Markov property. In the context of financial markets, Markov chains can be used to model transitions between different market conditions, for example, the probability of a price going up after going up, or going down after going down.
Script Description
This script uses a Markov chain to calculate closing price transition probabilities across the entire accessible chart. It displays the probabilities of the following transitions:
- Up after Up (HH): Probability that the price rises after going up.
- Down after Down (BB): Probability that the price will go down after going down.
- Up after Down (HB): Probability that the price goes up after going down.
- Down after Up (BH): Probability that the price will go down after going up.
Features
- Color customization: Choose colors for each transition type.
- Table Position: Select the position of the probability display table (top/left, top/right, bottom/left, bottom/right).
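For reference, a minimal Pine v5 sketch of the counting and table display described above (not the script's source; the colors and table position are hard-coded here, whereas the script exposes them as inputs):

```pine
//@version=5
indicator("Close-to-Close Transitions (sketch)", overlay=true)

// Count the four close-to-close transitions on confirmed bars.
var float hh = 0.0
var float bb = 0.0
var float hb = 0.0
var float bh = 0.0
up  = close > close[1]
upP = close[1] > close[2]
if barstate.isconfirmed and bar_index > 1
    hh += upP and up ? 1 : 0
    bh += upP and not up ? 1 : 0
    hb += not upP and up ? 1 : 0
    bb += not upP and not up ? 1 : 0

// Conditional probabilities, shown in a table on the last bar.
var table t = table.new(position.top_right, 2, 4, border_width=1)
if barstate.islast
    upN = hh + bh  // transitions starting from an up close
    dnN = hb + bb  // transitions starting from a down close
    table.cell(t, 0, 0, "HH", text_color=color.green)
    table.cell(t, 1, 0, str.tostring(upN > 0 ? hh / upN : na, "#.##"))
    table.cell(t, 0, 1, "BB", text_color=color.red)
    table.cell(t, 1, 1, str.tostring(dnN > 0 ? bb / dnN : na, "#.##"))
    table.cell(t, 0, 2, "HB", text_color=color.teal)
    table.cell(t, 1, 2, str.tostring(dnN > 0 ? hb / dnN : na, "#.##"))
    table.cell(t, 0, 3, "BH", text_color=color.orange)
    table.cell(t, 1, 3, str.tostring(upN > 0 ? bh / upN : na, "#.##"))
```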
MarkovAlgorithm

Library "MarkovAlgorithm"
Markov algorithm is a string rewriting system that uses grammar-like rules to operate on strings of
symbols. Markov algorithms have been shown to be Turing-complete, which means that they are suitable as a
general model of computation and can represent any mathematical expression from its simple notation.
~ wikipedia
.
reference:
en.wikipedia.org
rosettacode.org
parse(rules, separator)
Parameters:
rules (string)
separator (string)
Returns: - `array _rules`: List of rules.
---
Usage:
- `parse("|0 -> 0|| 1 -> 0| 0 -> ")`
apply(expression, rules)
Applies rules to an expression.
Parameters:
expression (string) : `string`: Text expression to be formatted by the rules.
rules (rule ) : Parsed rules to apply to the expression.
Returns: - `string _result`: Formatted expression.
---
Usage:
- `apply("101", parse("|0 -> 0|| 1 -> 0| 0 -> "))`
apply(expression, rules)
Parameters:
expression (string)
rules (string)
Returns: - `string _result`: Formatted expression.
---
Usage:
- `apply("101", parse("|0 -> 0|| 1 -> 0| 0 -> "))`
rule
String pair that represents `pattern -> replacement`; each rule may be ordinary or terminating.
Fields:
pattern (series string) : Pattern to replace.
replacement (series string) : Replacement patterns.
termination (series bool) : Termination rule.
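For intuition, the usage example above (`|0 -> 0|| 1 -> 0| 0 -> `) is the classic binary-to-unary ruleset: applied to "101" (binary five) it terminates at "|||||", five in unary. A minimal Pine v5 sketch of the rewriting loop, handling ordinary rules only (the `markov_apply` helper is illustrative, not this library's API, and it ignores termination rules):

```pine
//@version=5
indicator("Markov algorithm (sketch)", overlay=true)

// Apply the first matching rule to the leftmost occurrence, then restart
// from the first rule; stop when no rule matches.
markov_apply(string expression, array<string> patterns, array<string> replacements) =>
    string result  = expression
    bool   changed = true
    while changed
        changed := false
        for i = 0 to array.size(patterns) - 1
            if str.contains(result, array.get(patterns, i))
                result := str.replace(result, array.get(patterns, i), array.get(replacements, i), 0)
                changed := true
                break
    result

// Binary-to-unary rules: "|0" -> "0||", "1" -> "0|", "0" -> "".
var string[] pats = array.from("|0", "1", "0")
var string[] reps = array.from("0||", "0|", "")
if barstate.islastconfirmedhistory
    label.new(bar_index, high, markov_apply("101", pats, reps))  // "|||||"
```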
MarkovChain

Library "MarkovChain"
Generic Markov Chain type functions.
---
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the
probability of each event depends only on the state attained in the previous event.
---
reference:
Understanding Markov Chains, Examples and Applications. Second Edition. Book by Nicolas Privault.
en.wikipedia.org
www.geeksforgeeks.org
towardsdatascience.com
github.com
stats.stackexchange.com
timeseriesreasoning.com
www.ris-ai.com
gist.github.com
writings.stephenwolfram.com
kevingal.com
spedygiorgio.github.io
www.projectrhea.org
method to_string(this)
Translate a Markov Chain object to a string format.
Namespace types: MC
Parameters:
this (MC) : `MC` . Markov Chain object.
Returns: string
method to_table(this, position, text_color, text_size)
Namespace types: MC
Parameters:
this (MC)
position (string)
text_color (color)
text_size (string)
method create_transition_matrix(this)
Namespace types: MC
Parameters:
this (MC)
method generate_transition_matrix(this)
Namespace types: MC
Parameters:
this (MC)
new_chain(states, name)
Parameters:
states (state )
name (string)
from_data(data, name)
Parameters:
data (string )
name (string)
method probability_at_step(this, target_step)
Namespace types: MC
Parameters:
this (MC)
target_step (int)
method state_at_step(this, start_state, target_state, target_step)
Namespace types: MC
Parameters:
this (MC)
start_state (int)
target_state (int)
target_step (int)
method forward(this, obs)
Namespace types: HMC
Parameters:
this (HMC)
obs (int )
method backward(this, obs)
Namespace types: HMC
Parameters:
this (HMC)
obs (int )
method viterbi(this, observations)
Namespace types: HMC
Parameters:
this (HMC)
observations (int )
method baumwelch(this, observations)
Namespace types: HMC
Parameters:
this (HMC)
observations (int )
Node
Target node.
Fields:
index (series int) : . Key index of the node.
probability (series float) : . Probability rate of activation.
state
State reference.
Fields:
name (series string) : . Name of the state.
index (series int) : . Key index of the state.
target_nodes (Node ) : . List of index references and probabilities to target states.
MC
Markov Chain reference object.
Fields:
name (series string) : . Name of the chain.
states (state ) : . List of state nodes with their name, index, targets, and transition probabilities.
size (series int) : . Number of unique states.
transitions (matrix) : . Transition matrix.
HMC
Hidden Markov Chain reference object.
Fields:
name (series string) : . Name of the hidden chain.
states_hidden (state ) : . List of hidden state nodes with their name, index, targets, and transition probabilities.
states_obs (state ) : . List of observed state nodes with their name, index, targets, and transition probabilities.
transitions (matrix) : . Transition matrix.
emissions (matrix) : . Emission matrix.
initial_distribution (float )
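As a library-independent illustration of an n-step query (compare `probability_at_step` and `state_at_step` above): the probability of reaching a state n steps ahead is the corresponding entry of the n-th power of the transition matrix. A Pine v5 sketch with assumed example numbers (the `n_step` helper is not this library's API):

```pine
//@version=5
indicator("n-step transition (sketch)", overlay=true)

// n-th power of a transition matrix (assumes n >= 2).
n_step(matrix<float> p, int n) =>
    matrix<float> result = matrix.copy(p)
    for i = 2 to n
        result := matrix.mult(result, p)
    result

if barstate.islastconfirmedhistory
    // Rows = current state, columns = next state (0 = Up, 1 = Down).
    matrix<float> p = matrix.new<float>(2, 2, 0.0)
    matrix.set(p, 0, 0, 0.9)
    matrix.set(p, 0, 1, 0.1)
    matrix.set(p, 1, 0, 0.3)
    matrix.set(p, 1, 1, 0.7)
    p3 = n_step(p, 3)
    // P(Down three steps after Up) = entry (0, 1) of p^3 = 0.196 here.
    label.new(bar_index, high, str.tostring(matrix.get(p3, 0, 1), "#.###"))
```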
FunctionProbabilityViterbi

Library "FunctionProbabilityViterbi"
The Viterbi Algorithm calculates the most likely sequence of hidden states *(called Viterbi path)*
that results in a sequence of observed events.
viterbi(observations, transitions, emissions, initial_distribution)
Calculate most probable path in a Markov model.
Parameters:
observations (int ) : array . Observation states data.
transitions (matrix) : matrix . Transition probability table, (HxH, H:Hidden states).
emissions (matrix) : matrix . Emission probability table, (OxH, O:Observed states).
initial_distribution (float ) : array . Initial probability distribution for the hidden states.
Returns: array. Most probable path.
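A self-contained Pine v5 sketch of the recursion (the `viterbi` helper below is illustrative, not this library's exported function, and the numbers are toy values). It follows the conventions above: `a` is (H x H), `b` is (O x H) with rows indexed by the observed symbol, and backpointers recover the path:

```pine
//@version=5
indicator("Viterbi (sketch)", overlay=true)

viterbi(array<int> obs, matrix<float> a, matrix<float> b, array<float> pi) =>
    int h = matrix.rows(a)
    int n = array.size(obs)
    matrix<float> delta = matrix.new<float>(n, h, 0.0)  // best path probabilities
    matrix<int>   psi   = matrix.new<int>(n, h, 0)      // argmax backpointers
    for i = 0 to h - 1
        matrix.set(delta, 0, i, array.get(pi, i) * matrix.get(b, array.get(obs, 0), i))
    for k = 1 to n - 1
        for i = 0 to h - 1
            float best = -1.0
            int   arg  = 0
            for j = 0 to h - 1
                float v = matrix.get(delta, k - 1, j) * matrix.get(a, j, i)
                if v > best
                    best := v
                    arg  := j
            matrix.set(delta, k, i, best * matrix.get(b, array.get(obs, k), i))
            matrix.set(psi, k, i, arg)
    // Backtrack from the most probable final state.
    array<int> path = array.new<int>(n, 0)
    float bestEnd = -1.0
    for i = 0 to h - 1
        if matrix.get(delta, n - 1, i) > bestEnd
            bestEnd := matrix.get(delta, n - 1, i)
            array.set(path, n - 1, i)
    int t = n - 1
    while t > 0
        array.set(path, t - 1, matrix.get(psi, t, array.get(path, t)))
        t -= 1
    path

if barstate.islastconfirmedhistory
    matrix<float> a = matrix.new<float>(2, 2, 0.5)  // symmetric hidden transitions
    matrix<float> b = matrix.new<float>(2, 2, 0.0)  // emissions: hidden 0 favors obs 0, hidden 1 favors obs 1
    matrix.set(b, 0, 0, 0.8)
    matrix.set(b, 0, 1, 0.2)
    matrix.set(b, 1, 0, 0.3)
    matrix.set(b, 1, 1, 0.7)
    path = viterbi(array.from(0, 0, 1), a, b, array.from(0.6, 0.4))
    string s = ""
    for i = 0 to array.size(path) - 1
        s := s + str.tostring(array.get(path, i))
    label.new(bar_index, high, s)  // "001": the hidden path tracks the observations
```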
FunctionBaumWelch

Library "FunctionBaumWelch"
The Baum-Welch algorithm uses the well-known EM procedure to find the maximum-likelihood estimate of the parameters of a hidden Markov model given a set of observed feature vectors, relying on the forward-backward algorithm for the expectation step.
---
### Function List:
> `forward (array pi, matrix a, matrix b, array obs)`
> `forward (array pi, matrix a, matrix b, array obs, bool scaling)`
> `backward (matrix a, matrix b, array obs)`
> `backward (matrix a, matrix b, array obs, array c)`
> `baumwelch (array observations, int nstates)`
> `baumwelch (array observations, array pi, matrix a, matrix b)`
---
### Reference:
> en.wikipedia.org
> github.com
> www.rdocumentation.org
forward(pi, a, b, obs)
Computes forward probabilities for state `X` up to observation at time `k`, defined as the probability of observing the sequence of observations `e_1 ... e_k` and that the state at time `k` is `X`.
Parameters:
pi (float ) : Initial probabilities.
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of changing between hidden states; size (M x M), where M is the number of states.
b (matrix) : Emissions: observation probability matrix `b` (beta); size (M x O), where M is the number of states and O is the number of possible observations.
obs (int ) : List with actual state observation data.
Returns: - `matrix _alpha`: Forward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first
dimension refers to the state and the second dimension to time.
forward(pi, a, b, obs, scaling)
Computes forward probabilities for state `X` up to observation at time `k`, defined as the probability of observing the sequence of observations `e_1 ... e_k` and that the state at time `k` is `X`.
Parameters:
pi (float ) : Initial probabilities.
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of changing between hidden states; size (M x M), where M is the number of states.
b (matrix) : Emissions: observation probability matrix `b` (beta); size (M x O), where M is the number of states and O is the number of possible observations.
obs (int ) : List with actual state observation data.
scaling (bool) : Normalize `alpha` scale.
Returns: - #### Tuple with:
> - `matrix _alpha`: Forward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first
dimension refers to the state and the second dimension to time.
> - `array _c`: Array with normalization scale.
backward(a, b, obs)
Computes backward probabilities for state `X` and observation at time `k`, defined as the probability of observing the sequence of observations `e_k+1, ... , e_n` under the condition that the state at time `k` is `X`.
Parameters:
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of changing between hidden states; size (M x M), where M is the number of states.
b (matrix) : Emissions: observation probability matrix `b` (beta); size (M x O), where M is the number of states and O is the number of possible observations.
obs (int ) : Array with actual state observation data.
Returns: - `matrix _beta`: Backward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
backward(a, b, obs, c)
Computes backward probabilities for state `X` and observation at time `k`, defined as the probability of observing the sequence of observations `e_k+1, ... , e_n` under the condition that the state at time `k` is `X`.
Parameters:
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of changing between hidden states; size (M x M), where M is the number of states.
b (matrix) : Emissions: observation probability matrix `b` (beta); size (M x O), where M is the number of states and O is the number of possible observations.
obs (int ) : Array with actual state observation data.
c (float ) : Array with normalization scaling coefficients.
Returns: - `matrix _beta`: Backward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
baumwelch(observations, nstates)
**(Random Initialization)** Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the
unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm
to compute the statistics for the expectation step.
Parameters:
observations (int ) : List of observed states.
nstates (int) : Number of hidden states.
Returns: - #### Tuple with:
> - `array _pi`: Initial probability distribution.
> - `matrix _a`: Transition probability matrix.
> - `matrix _b`: Emission probability matrix.
---
requires: `import RicardoSantos/WIPTensor/2 as Tensor`
baumwelch(observations, pi, a, b)
Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the
unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm
to compute the statistics for the expectation step.
Parameters:
observations (int ) : List of observed states.
pi (float ) : Initial probability distribution.
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of changing between hidden states; size (M x M), where M is the number of states.
b (matrix) : Emissions: observation probability matrix `b` (beta); size (M x O), where M is the number of states and O is the number of possible observations.
Returns: - #### Tuple with:
> - `array _pi`: Initial probability distribution.
> - `matrix _a`: Transition probability matrix.
> - `matrix _b`: Emission probability matrix.
---
requires: `import RicardoSantos/WIPTensor/2 as Tensor`
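For intuition, a self-contained Pine v5 sketch of the forward recursion described above (illustrative only, not the library's exported `forward`: it skips the natural-log scale and the scaling option the library provides):

```pine
//@version=5
indicator("Forward algorithm (sketch)", overlay=true)

// alpha[k][i] = P(e_1 .. e_k, X_k = i); a is (M x M), b is (M x O),
// pi is the initial distribution, as in the descriptions above.
forward(array<float> pi, matrix<float> a, matrix<float> b, array<int> obs) =>
    int m = matrix.rows(a)
    int n = array.size(obs)
    matrix<float> alpha = matrix.new<float>(n, m, 0.0)
    for i = 0 to m - 1
        matrix.set(alpha, 0, i, array.get(pi, i) * matrix.get(b, i, array.get(obs, 0)))
    for k = 1 to n - 1
        for i = 0 to m - 1
            float s = 0.0
            for j = 0 to m - 1
                s += matrix.get(alpha, k - 1, j) * matrix.get(a, j, i)
            matrix.set(alpha, k, i, s * matrix.get(b, i, array.get(obs, k)))
    alpha

if barstate.islastconfirmedhistory
    // Uniform toy model: every probability 0.5, so any 3-symbol sequence has likelihood 0.5^3.
    matrix<float> a = matrix.new<float>(2, 2, 0.5)
    matrix<float> b = matrix.new<float>(2, 2, 0.5)
    alpha = forward(array.from(0.6, 0.4), a, b, array.from(0, 1, 0))
    // Sequence likelihood = sum of the last row of alpha (0.125 here).
    label.new(bar_index, high, str.tostring(array.sum(matrix.row(alpha, 2)), "#.####"))
```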
FunctionSMCMC

Library "FunctionSMCMC"
Methods to implement Markov Chain Monte Carlo Simulation (MCMC)
markov_chain(weights, actions, target_path, position, last_value) A basic implementation of the Markov chain algorithm.
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
target_path : float array, target path array.
position : int, index of the path.
last_value : float, base value to increment.
Returns: void; updates the target array in place.
mcmc(weights, actions, start_value, n_iterations) Uses a Monte Carlo algorithm to simulate a Markov chain at each step.
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
start_value : float, base value to start simulation.
n_iterations : integer, number of iterations to run.
Returns: float array with path.
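A minimal Pine v5 sketch of the simulation idea (illustrative only, not the library's exported code): each step draws one action with probability proportional to its weight and accumulates it onto the path.

```pine
//@version=5
indicator("SMCMC path (sketch)", overlay=true)

// Draw an action index with probability proportional to `weights`.
sample(array<float> weights) =>
    float total = array.sum(weights)
    float r     = math.random(0, total)
    int   pick  = array.size(weights) - 1
    float acc   = 0.0
    for i = 0 to array.size(weights) - 1
        acc += array.get(weights, i)
        if r < acc
            pick := i
            break
    pick

// Monte Carlo simulation of the chain: one weighted-random action per step.
mcmc(array<float> weights, array<float> actions, float start_value, int n_iterations) =>
    array<float> path = array.new<float>()
    float value = start_value
    for i = 1 to n_iterations
        value += array.get(actions, sample(weights))
        array.push(path, value)
    path

if barstate.islastconfirmedhistory
    // Illustrative: up 1 / flat / down 1 with weights 0.5 / 0.3 / 0.2.
    path = mcmc(array.from(0.5, 0.3, 0.2), array.from(1.0, 0.0, -1.0), close, 10)
    label.new(bar_index, high, str.tostring(array.get(path, 9), "#.##"))
```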
FunctionDecisionTree

Library "FunctionDecisionTree"
Method to generate a decision tree based on weights.
decision_tree(weights, depth) Method to generate a decision tree based on weights.
Parameters:
weights : float array, weights for decision consideration.
depth : int, depth of the tree.
Returns: int array