KernelLens

🟦 OVERVIEW
KernelLens is a professional kernel regression library for Pine Script v6, providing eight mathematically rigorous Nadaraya–Watson estimators, a three-mode filter layer, a unified string dispatcher, and a suite of trading utilities — all built from the ground up on correct non-parametric statistics. Unlike existing Pine smoothing libraries — which inherit a decade-old loop-bound bug that silently reduces every kernel window to a handful of bars, regardless of the bandwidth parameter — KernelLens is built with auditable math, NA-safe iteration, input validation at every entry point, and academic references cited inline next to the formulas they describe.
The library integrates eight independent kernel families — Rational Quadratic, Gaussian, Periodic, Locally Periodic, Epanechnikov, Tricube, Triangular, and Cosine — behind a consistent API, with every raw estimator wrapped in a filter layer (None / Smooth / Zero Lag), a unified dispatcher for dropdown-driven kernel selection, and five utility exports covering slope detection, trend state, crossover signaling, residual confidence bands, and Silverman's rule-of-thumb bandwidth recommendation. Every public function validates its inputs, raises descriptive runtime errors on misuse, and returns `na` only when there is genuinely no data — never as a silent fallback.
🟦 MATHEMATICAL FOUNDATION
**The Nadaraya–Watson Estimator**
Given a source series `y_t` and a symmetric kernel `K` with scale parameter `ℓ` (the "bandwidth"), the Nadaraya–Watson estimator of the regression function `m(x) = E[y | x]` evaluated at the current bar is:
```
Σᵢ K(dᵢ / ℓ) · y_{t−i}
ŷ(t) = ───────────────────────
Σᵢ K(dᵢ / ℓ)
```
where `dᵢ` is the bar-distance from the kernel center and the sum runs over a finite window determined by the effective support of `K`.
The estimator is a locally weighted average: bars close to the kernel center contribute heavily, distant bars contribute proportionally less, and bars outside the support contribute nothing. It is asymptotically unbiased up to `O(ℓ²)` for twice-differentiable `m`, with variance of order `(n·ℓ)⁻¹` — the classical bias–variance trade-off that defines all non-parametric smoothers.
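As a concrete reference, the estimator above can be sketched in a few lines of Python. This is an illustrative stand-alone sketch, not library code; `nw_estimate` and `gaussian_kernel` are hypothetical names, and a Gaussian kernel is assumed:

```python
import math

def gaussian_kernel(d, bw):
    # K(d / ℓ) for the Gaussian (RBF) kernel with bandwidth ℓ = bw
    return math.exp(-(d * d) / (2.0 * bw * bw))

def nw_estimate(y, t, bw, depth):
    # Nadaraya–Watson weighted average at index t over the `depth` nearest past bars
    num, den = 0.0, 0.0
    for i in range(depth + 1):
        if t - i < 0:            # window extends past available history
            break
        w = gaussian_kernel(i, bw)
        num += w * y[t - i]
        den += w
    return num / den if den > 0.0 else None  # None only when there is no data

prices = [100.0, 101.0, 102.0, 101.0, 103.0, 104.0, 103.0, 105.0]
est = nw_estimate(prices, len(prices) - 1, bw=2, depth=6)
```

A constant series returns that constant, and the estimate always stays inside the range of the bars it saw, which is a quick sanity check for any weighted-average smoother.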
**Why Kernel Regression Beats Rolling Means**
A simple moving average gives every bar in the window the same weight. Kernel regression gives each bar a weight that decays smoothly with distance, producing:
- **Smoother output** — no step artifacts when bars enter / leave the window
- **Better bias control** — the peak of the kernel sits exactly on the point being estimated
- **Kernel-specific behavior** — compact-support kernels eliminate tail contamination entirely; Rational Quadratic's `α` parameter exposes multi-scale mixing; Periodic kernels resonate with known cycle lengths
The math has been the academic standard for non-parametric regression since Nadaraya (1964) and Watson (1964). KernelLens brings it to Pine Script v6 in its correct, bug-free form.
🟦 THE EIGHT KERNELS
All eight kernels implement the Nadaraya–Watson weighting scheme. They differ in support (compact versus infinite), smoothness (how many times differentiable), and how weight decays with distance.
| # | Kernel | Formula | Support | Smoothness | Character |
|---|---|---|---|---|---|
| 1 | **Rational Quadratic** | `(1 + d² / (2·α·ℓ²))^(−α)` | ℝ | C∞ | Multi-scale mixer — `α` controls stretch versus wiggle |
| 2 | **Gaussian (RBF)** | `exp(−d² / (2·ℓ²))` | ℝ | C∞ | The canonical smoother — smoothest possible with L² optimality |
| 3 | **Periodic** | `exp(−2·sin²(π·d/p) / ℓ²)` | ℝ | C∞ | Resonates with repetition distance `p` — ideal for cycles |
| 4 | **Locally Periodic** | Periodic · Gaussian | ℝ | C∞ | Seasonal patterns that slowly drift with trend |
| 5 | **Epanechnikov** | `(3/4)(1 − u²) · 𝟙{\|u\|≤1}` | [−ℓ, +ℓ] | C⁰ | Asymptotically MSE-optimal (Epanechnikov 1969) — no tail contamination |
| 6 | **Tricube** | `(70/81)(1 − \|u\|³)³ · 𝟙{\|u\|≤1}` | [−ℓ, +ℓ] | C² | The LOWESS standard — near-Gaussian with compact support |
| 7 | **Triangular** | `(1 − \|u\|) · 𝟙{\|u\|≤1}` | [−ℓ, +ℓ] | C⁰ | Simplest non-uniform kernel — fastest to compute |
| 8 | **Cosine** | `(π/4)·cos(π·u/2) · 𝟙{\|u\|≤1}` | [−ℓ, +ℓ] | C¹ | Raised-cosine taper — smoother boundary than Epanechnikov |
where `u = d/ℓ` and `𝟙` is the indicator function.
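For readers who want to check the formulas in the table, here is a direct transcription into Python. This is an illustrative sketch; the function names are ours, not library exports, and `u = d/ℓ` for the scale-free kernels:

```python
import math

def rq(u, alpha=1.0):
    # Rational Quadratic: (1 + u²/(2α))^(−α); approaches the Gaussian as α → ∞
    return (1.0 + u * u / (2.0 * alpha)) ** (-alpha)

def gauss(u):
    # Gaussian (RBF): exp(−u²/2)
    return math.exp(-u * u / 2.0)

def periodic(d, p, bw):
    # Periodic: exp(−2·sin²(π·d/p) / ℓ²) — depends on d, p, ℓ separately
    return math.exp(-2.0 * math.sin(math.pi * d / p) ** 2 / (bw * bw))

def locally_periodic(d, p, bw):
    # Periodic envelope multiplied by a Gaussian decay
    return periodic(d, p, bw) * gauss(d / bw)

def epanechnikov(u):
    return 0.75 * (1.0 - u * u) if abs(u) <= 1 else 0.0

def tricube(u):
    return (70.0 / 81.0) * (1.0 - abs(u) ** 3) ** 3 if abs(u) <= 1 else 0.0

def triangular(u):
    return 1.0 - abs(u) if abs(u) <= 1 else 0.0

def cosine(u):
    return (math.pi / 4.0) * math.cos(math.pi * u / 2.0) if abs(u) <= 1 else 0.0
```

Note that `rq` converges to `gauss` as `α → ∞`, which is why the Rational Quadratic is described as a multi-scale mixer: finite `α` behaves like a blend of Gaussians over a range of bandwidths.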
**Infinite-Support vs Compact-Support — Why Both Matter**
| | Infinite Support (RQ, Gauss, Periodic, LocPeriodic) | Compact Support (Epa, Tricube, Triangular, Cosine) |
|---|---|---|
| **Tail weight** | Never exactly zero | Exactly zero beyond ±ℓ |
| **Loop depth** | `3·ℓ` (3-σ cutoff, ≈99.7% mass) | Exactly `ℓ` |
| **Bar contamination** | Distant bars still pull the estimate a tiny amount | Distant bars cannot affect the estimate at all |
| **Best for** | Smooth trends, Gaussian-process intuition | Robust regression, outlier resistance |
KernelLens picks the correct loop depth automatically based on kernel family: `_depthInfinite` for Gaussian-family kernels, `_depthCompact` for bounded kernels, `_depthPeriodic` for Periodic (which must span enough cycles to reach stable weights).
**Why Eight, Not Four**
Most Pine kernel libraries ship only the four kernels from MacKay's Gaussian process tutorial. KernelLens adds the four compact-support classical kernels because:
- **Epanechnikov** minimizes asymptotic mean squared error among all non-negative kernels of bounded support (Epanechnikov 1969) — it is the MSE-optimal baseline against which all other kernels are measured
- **Tricube** is the kernel used by LOWESS (Cleveland 1979), the de-facto standard for robust locally weighted scatterplot smoothing
- **Triangular** is the cheapest non-uniform compact kernel — useful when loop-budget matters on intraday charts with huge dataset size
- **Cosine** is C¹-continuous at the support boundary, unlike Epanechnikov's C⁰ discontinuity, producing visibly smoother transitions at kernel edges
Adding them makes the library an academically complete toolkit, not just a Pine port of one tutorial.
🟦 FILTER LAYER — NONE / SMOOTH / ZERO LAG
Every kernel export accepts a `_filter` parameter with three valid values. The filter layer is implemented identically across all eight kernels, so switching kernel families does not change filter behavior.
**"No Filter" — Single-Pass Raw Estimate**
```
ŷ = K(y)
```
One Nadaraya–Watson pass over the source. Cheapest mode, most reactive, fully represents the underlying kernel. Use this when you want the kernel's raw behavior with no additional smoothing or lag correction.
**"Smooth" — Double-Pass Estimate**
```
ŷ = K(K(y))
```
The kernel is applied once to the source, then applied again to its own output using the same bandwidth and the same parameters. The result is a more strongly smoothed curve at the cost of one extra loop pass per bar.
This is mathematically equivalent to convolving the kernel with itself — the effective kernel is wider and flatter, pulling longer-range context into each estimate without requiring the user to double the bandwidth.
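The self-convolution claim is easy to verify numerically. The sketch below (plain Python, illustrative only) smooths a series twice with a small normalized kernel and compares it against a single pass with the kernel convolved with itself; away from the array edges the two agree exactly:

```python
def convolve(a, b):
    # full linear convolution of two sequences
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def smooth(series, w):
    # centered weighted average; w has odd length and sums to 1
    half = len(w) // 2
    out = []
    for t in range(len(series)):
        acc = 0.0
        for k, wk in enumerate(w):
            idx = min(max(t + k - half, 0), len(series) - 1)  # clamp at edges
            acc += wk * series[idx]
        out.append(acc)
    return out

w = [0.25, 0.5, 0.25]                  # a tiny normalized kernel
ww = convolve(w, w)                    # the effective double-pass kernel
series = [float(i % 9) for i in range(40)]
twice = smooth(smooth(series, w), w)   # K(K(y))
once = smooth(series, ww)              # one pass with the self-convolved kernel
interior_match = all(abs(a - b) < 1e-9 for a, b in zip(twice[4:-4], once[4:-4]))
```

The effective kernel `ww` is longer and flatter than `w`, which is exactly the wider-kernel-without-doubling-the-bandwidth effect described above.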
**"Zero Lag" — Ehlers De-Lagged Estimate**
```
ŷ = 2·K(y) − K(K(y))
```
The ZLEMA identity from Ehlers (*Rocket Science for Traders*, 2000): subtract the smoothing lag from the raw estimate, effectively shifting the output back in time to match the source more closely.
The intuition: `K(y)` lags `y` by some amount; `K(K(y))` lags `K(y)` by the same amount; so `K(y) − K(K(y))` is an estimate of the lag itself, and adding it back to `K(y)` cancels out. The result tracks the source more tightly than either pass alone, at the cost of slightly noisier turning points.
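The lag-cancellation argument can be checked with any linear smoother standing in for `K`. The sketch below uses a plain EMA in Python (illustrative, not library code): on a steadily trending series the single pass settles at a constant lag, while `2·K(y) − K(K(y))` settles at essentially zero lag.

```python
def ema(series, alpha):
    # exponential moving average as a stand-in linear smoother K
    out, s = [], None
    for x in series:
        s = x if s is None else alpha * x + (1 - alpha) * s
        out.append(s)
    return out

ramp = [float(t) for t in range(200)]   # steadily trending "price"
alpha = 0.2
k1 = ema(ramp, alpha)                   # K(y)
k2 = ema(k1, alpha)                     # K(K(y))
zerolag = [2 * a - b for a, b in zip(k1, k2)]

lag_single = ramp[-1] - k1[-1]          # steady-state lag of one pass ((1−α)/α = 4)
lag_zl = ramp[-1] - zerolag[-1]         # lag after the 2·K − K·K correction (≈ 0)
```

On the ramp, `K(y)` settles exactly `(1−α)/α` bars behind the source and `K(K(y))` twice that, so the correction cancels the lag completely in steady state.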
**Lazy Evaluation — No Wasted Cycles**
In `"No Filter"` mode, the second pass is skipped entirely — it never runs. The filter branch uses an `if` block rather than a ternary: Pine evaluates both arms of a ternary, but only the taken branch of an `if`, so the unused pass is never computed. A single kernel call costs one pass; `"Smooth"` or `"Zero Lag"` costs two. You only pay for what you use.
🟦 KERNEL CENTER OFFSET — THE `_phase` PARAMETER
Every KernelLens kernel takes a `_phase` parameter that shifts the kernel center into the past by `_phase` bars. It is the library's non-repainting knob.
**_phase = 0 — Live Estimate**
The kernel is centered on the current bar. The most recent price has maximum weight, and the estimate is as fresh as possible. Suitable for live signal generation, but the most recent bar can re-evaluate as it develops within its interval — standard Pine real-time behavior.
**_phase > 0 — Non-Repainting Historical Estimate**
The kernel center is moved `_phase` bars into the past. The estimate becomes the smoothed value *at that historical bar*, not the current bar. Once the bar at `bar_index − _phase` is fully confirmed (`barstate.isconfirmed`), its estimate cannot change again.
This is the standard trick for publishing kernel indicators that do not repaint: you get a stable, historically accurate curve at the cost of shifting the entire output `_phase` bars to the right on the chart. A `_phase = 25` call gives a curve that lags live price by 25 bars but is guaranteed stable for every past bar.
**Why It Belongs in the Library, Not the Caller**
Pushing `_phase` into the kernel's own loop is not the same as evaluating the kernel at a shifted source (`K(src[_phase])`). Shifting the source just feeds a stale input to a current-bar-centered kernel, which still produces a fresh estimate of a stale series. KernelLens's `_phase` genuinely moves the kernel center, producing a historical-bar estimate computed over the correct surrounding window.
🟦 NON-REPAINTING BEHAVIOR
Repainting is the single most-asked question about any Pine indicator, and the single most common source of silent failure when a retail trader moves from backtest to live. A strategy that looks flawless on historical bars and then bleeds money the moment it is deployed is almost always suffering from some form of repainting. KernelLens is engineered from first principles to eliminate every class of repainting by construction — not by patching symptoms, but by removing the dependencies that cause repainting in the first place.
**The Two Forms of Repainting**
| Form | Symptom | Typical Cause |
|---|---|---|
| **Historical repainting** | A bar that was closed days or weeks ago silently changes its plotted value when the chart is refreshed or scrolled | `request.security()` with `lookahead = barmerge.lookahead_on`, un-gated higher-timeframe data, or incorrect array rotation that reads into future bars |
| **Real-time repainting** | The plotted value on the live (current developing) bar flickers tick-by-tick as new price ticks arrive, then freezes at a final value when the bar closes | The indicator reads `close[0]` (or any other current-bar value) inside a weighted sum — the current-bar weight changes every tick |
KernelLens avoids the first kind **entirely and unconditionally**: the library contains no `request.security` calls, no higher-timeframe lookups, no `lookahead_on` usage, and no array rotation that could leak future bars into the window. Every historical bar plotted by any KernelLens kernel is computed exclusively from bars that existed at the time that bar was closed. The plotted history is immutable.
Real-time repainting is controlled explicitly by the `_phase` parameter — it is the user's choice whether to accept tick-by-tick flicker on the live bar in exchange for zero lag (`_phase = 0`) or to eliminate the flicker entirely at the cost of a small fixed lag (`_phase ≥ 1`).
**Why Kernel Regression Normally Repaints (And How KernelLens Stops It)**
A traditional Nadaraya–Watson call centered on the current bar evaluates:
```
ŷ(t) = Σᵢ K(dᵢ/ℓ) · y_{t−i} for i = 0 … depth
```
On the live bar, the term `y_{t−0} = close[0]` is the current real-time price — which changes on every tick. Every tick moves the weighted sum, every tick moves the estimate, and the trader watching the chart sees the kernel plot flicker as the bar develops. The historical bars (where `close[i]` for each past bar is now fixed) are stable, but the live plot is unstable.
KernelLens's `_phase` parameter shifts the loop so the kernel runs over `i = _phase … _phase + depth`. With `_phase = 2`:
```
ŷ(t) = Σᵢ K((i−2)/ℓ) · y_{t−i} for i = 2 … 2 + depth
```
The sum no longer touches `close[0]` or `close[1]` — every bar it reads is already confirmed and cannot change. The live-bar kernel output is therefore identical from the first tick of the bar to the last, and identical again when the bar finally closes. There is no flicker and nothing to repaint.
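A small Python model of this loop (illustrative; `kernel_estimate` is a hypothetical name) makes the non-repainting property concrete: with `phase ≥ 1`, mutating the live bar's value cannot change the estimate, because the loop never reads it.

```python
import math

def kernel_estimate(y, bw, depth, phase):
    # the sum runs over bar offsets i = phase … phase + depth
    num = den = 0.0
    for i in range(phase, phase + depth + 1):
        if i >= len(y):              # window extends past available history
            break
        w = math.exp(-((i - phase) ** 2) / (2.0 * bw * bw))
        num += w * y[len(y) - 1 - i]
        den += w
    return num / den

closes = [100.0, 101.0, 102.0, 103.0]          # last element = live, forming bar
before = kernel_estimate(closes, bw=2, depth=6, phase=1)
closes[-1] = 250.0                              # a wild tick rewrites the live bar
after = kernel_estimate(closes, bw=2, depth=6, phase=1)
# phase ≥ 1 never touches the live bar, so before == after
```

Running the same experiment with `phase=0` produces two different estimates, which is precisely the live-bar flicker described above.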
**The Lag / Stability Trade-Off**
| `_phase` | Lag on Live Bar | Live-Bar Flicker | Historical Repainting | Best For |
|---|---|---|---|---|
| **0** | 0 bars | Yes (real-time only; history is stable) | None | Scalping, academic research, calibration |
| **1** | 1 bar | None | None | Fast day-trading; minimum acceptable lag for a live trading desk |
| **2** | 2 bars | None | None | Default for most users — the sweet spot between freshness and stability |
| **3** | 3 bars | None | None | Swing trading — extra margin against false flickers from erratic ticks |
| **5+** | 5+ bars | None | None | Position trading, long-term chart analysis, published signal marks |
Even at `_phase = 0`, **historical repainting never occurs** — only the live bar flickers during its own development. Once a bar closes, its plotted value is final; scrolling away and back, refreshing the chart, or re-opening TradingView will never change that historical plot. The flicker is exclusively a live-bar tick-by-tick phenomenon.
**KernelLens as a Non-Repainting Primitive**
KernelLens exposes real-time flicker as an explicit, user-controlled trade-off rather than a hidden behavior. The caller picks any point on the spectrum from "fully live" (`_phase = 0`, maximum reactivity with tick-by-tick flicker) to "fully confirmed" (`_phase ≥ 1`, one or more bars of lag in exchange for a curve that never redraws) with a single integer parameter. Historical repainting — the dangerous form that silently rewrites past plots — is eliminated unconditionally regardless of `_phase`.
**How to Verify Non-Repainting Yourself**
Do not trust the word "non-repainting" from any library — always verify. KernelLens can be verified in about thirty seconds:
1. Load a chart with KernelLens on it using `_phase = 2` (or any value > 0).
2. Take a screenshot at any specific historical bar.
3. Scroll far to the left, refresh the chart, or reload the indicator.
4. Return to the same bar. The plotted value at that bar must be pixel-identical to the screenshot — because the computation on that bar used only the bars before it, which have not changed.
5. Repeat with `_phase = 0`. The historical bars must still be pixel-identical — only the live bar's plot can differ between observations, and only because the live bar's `close` is now a different number than it was when you took the screenshot.
For a stricter test, use TradingView's **Bar Replay** mode. Enable Bar Replay, step forward one bar at a time, and watch the kernel plot on each newly-closed bar. With `_phase ≥ 1`, the value plotted on each newly-closed bar will exactly match what the indicator shows after you exit replay mode and view the same bar normally. This is the gold-standard test — Bar Replay reproduces live-bar tick arrival in a controlled way.
**Common Misconceptions**
> *"Any Pine indicator that uses `close` repaints."*
False. Using `close` on a confirmed bar does not repaint — the confirmed bar's close is locked. What can repaint is using `close` on the live bar, and only within that live bar's interval. KernelLens with `_phase > 0` never reads the live-bar close at all.
> *"`lookahead = barmerge.lookahead_on` is always wrong."*
Context-dependent. `lookahead_on` is used correctly in some multi-timeframe indicators to request a higher-TF value that is already settled on the lower TF. KernelLens does not use `request.security` at all, so this question does not apply — but for libraries that do, `lookahead_on` is only problematic when it leaks values from bars that were not yet closed at the lower-TF time of evaluation.
> *"Non-repainting means zero lag."*
False. Zero lag and non-repainting are orthogonal properties. KernelLens `_phase = 0` is zero lag with real-time flicker; `_phase = 2` is two-bar lag with no flicker. You can have any combination of the two, and the right choice depends on the trading style.
> *"The `FILTER_ZEROLAG` mode makes the indicator non-repainting."*
False. `FILTER_ZEROLAG` is an Ehlers-style de-lagging filter applied to the kernel output; it reduces the perceived lag of the estimate, but it does not affect whether the live bar flickers. Non-repainting is controlled exclusively by `_phase`. Choose `_phase` for repainting behavior, and `_filter` for smoothness / lag shape — they are independent knobs.
**When to Accept Real-Time Flicker (`_phase = 0`)**
Despite everything above, there are legitimate reasons to deliberately use `_phase = 0`:
- **Academic research and backtesting** — you want the kernel mathematics in its classical form, centered on the point being estimated, with no phase adjustment
- **Scalping on very short timeframes** — a 2-bar lag on a 1-minute chart is a 2-minute delay, which can matter when you are exiting within a 4-minute window
- **Visual calibration** — when you are choosing a bandwidth by eye, the live-bar flicker actually helps: you see how sensitive the curve is to each incoming tick, which is diagnostic information
- **Indicators that read the kernel output only on `barstate.isconfirmed`** — if your signal logic is gated by `if barstate.isconfirmed`, then live-bar flicker is invisible to your signal (it sees only the frozen close-of-bar value), and you can safely use `_phase = 0` with no practical consequence
For every other case — and especially for any live alert or automated trading system — use `_phase ≥ 1`. Two bars of lag on a clean, stable curve is almost always worth more than zero lag on a curve that redraws itself several times per bar.
🟦 UNIFIED DISPATCHER — `estimate()`
For indicators where the user picks a kernel from a dropdown, writing eight separate ternary branches is tedious and error-prone. KernelLens ships with a unified dispatcher that routes to the correct kernel based on a string argument:
```pine
import a_jabbaroff/KernelLens/1 as kl
line = kl.estimate(
kernelType = kl.KERNEL_GAUSS,
src = close,
bandwidth = 32,
shapeAlpha = 1.0,
period = 1,
phase = 2,
filter = kl.FILTER_SMOOTH)
```
The dispatcher forwards to the matching typed export, so there is no performance penalty versus calling the kernel directly — it is a compile-time routing pass. Unknown kernel names raise a descriptive `runtime.error` naming every valid alternative, so typos fail loudly instead of silently returning `na`.
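The dispatch-plus-loud-failure pattern looks roughly like the following Python analogue (illustrative only; KernelLens itself uses Pine's `runtime.error()` and routes to the real kernel exports, and only two toy kernels are shown here):

```python
import math

def _gauss(u):
    return math.exp(-u * u / 2.0)

def _triangular(u):
    return max(0.0, 1.0 - abs(u))

_KERNELS = {"Gaussian": _gauss, "Triangular": _triangular}  # subset for brevity

def estimate(kernel_type, u):
    # route by name; unknown names fail loudly, listing every valid alternative
    k = _KERNELS.get(kernel_type)
    if k is None:
        valid = ", ".join(sorted(_KERNELS))
        raise ValueError(f"estimate(): unknown kernel {kernel_type!r}; valid: {valid}")
    return k(u)
```

The important property is that a typo raises immediately with the full list of valid names, instead of silently falling through to a default or returning a null value.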
**Public Constants**
KernelLens exposes its string constants so callers never type the magic values by hand:
| Constant | Value |
|---|---|
| `FILTER_NONE` | `"No Filter"` |
| `FILTER_SMOOTH` | `"Smooth"` |
| `FILTER_ZEROLAG` | `"Zero Lag"` |
| `KERNEL_RQ` | `"Rational Quadratic"` |
| `KERNEL_GAUSS` | `"Gaussian"` |
| `KERNEL_PERIODIC` | `"Periodic"` |
| `KERNEL_LOCPER` | `"Locally Periodic"` |
| `KERNEL_EPA` | `"Epanechnikov"` |
| `KERNEL_TRICUBE` | `"Tricube"` |
| `KERNEL_TRIANG` | `"Triangular"` |
| `KERNEL_COSINE` | `"Cosine"` |
Using the constants in your caller code means the Pine compiler — not a runtime string compare — catches typos at edit time.
🟦 UTILITY LAYER — FIVE PROFESSIONAL HELPERS
KernelLens ships with five utility exports that complement the core estimators. They are the functions you almost always write immediately after getting a smoothed line, factored out so you don't rewrite them in every indicator.
**`slope(estimate, step)` — Discrete First Derivative**
Returns `(y_t − y_{t−step}) / step`, the normalized rate of change over `step` bars. Use it to detect whether a kernel output is trending up, flat, or down — the foundation for any trend-following signal built on top of KernelLens.
```pine
rising = kl.slope(line, 3) > 0.0
```
**`trendState(estimate, step)` — Ternary Trend Indicator**
Returns `+1` if the estimate is rising, `−1` if falling, `0` if exactly flat over the window. A single-call replacement for hand-rolled `line > line[step] ? 1 : line < line[step] ? -1 : 0` ladders.
**`crossSignal(fast, slow)` — Bi-directional Crossover**
Returns `+1` on the bar where `fast` crosses above `slow` (bullish), `−1` on a bearish cross, and `0` otherwise. Built on `ta.crossover` / `ta.crossunder`, so the signal is non-repainting once the bar is confirmed.
**`confidenceBand(src, estimate, window)` — Residual Standard Deviation**
Computes the rolling standard deviation of `(src − estimate)` over a user-defined window. Use the return value as the half-width of a confidence band around the estimate:
```pine
est = kl.gaussian(close, 32, 2, kl.FILTER_SMOOTH)
sigma = kl.confidenceBand(close, est, 50)
upper = est + 1.96 * sigma
lower = est - 1.96 * sigma
```
This is a computationally cheap proxy for the full kernel-weighted local variance — ideal when you need visual bands without paying for a second weighted pass.
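In plain Python the helper amounts to the following sketch. One labeled assumption: the exact variance estimator the library uses (population vs sample) is not specified above, so `statistics.pstdev` is our choice here:

```python
import statistics

def confidence_band(src, estimate, window):
    # rolling standard deviation of the residual (src − estimate)
    # NOTE: population std (pstdev) is an assumption, not a documented choice
    residuals = [s - e for s, e in zip(src, estimate)]
    return statistics.pstdev(residuals[-window:])

src = [100.0, 102.0, 101.0, 103.0, 102.0, 104.0, 103.0]
est = [100.0, 101.0, 101.0, 102.0, 102.0, 103.0, 103.0]  # some smoothed line
sigma = confidence_band(src, est, window=5)
upper = est[-1] + 1.96 * sigma
lower = est[-1] - 1.96 * sigma
```

Because the band width comes from the residuals rather than from raw price variance, it automatically tightens when the estimate tracks price well and widens when it does not.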
**`silvermanBandwidth(src, window)` — Optimal ℓ Suggestion**
Returns the Silverman rule-of-thumb bandwidth:
```
h ≈ 1.06 · σ · n^(−1/5)
```
where `σ` is the rolling standard deviation of the source and `n` is the window size. This is the classical starting point for Gaussian-family bandwidths in academic texts (Silverman 1986). Because Pine requires `simple int` for kernel bandwidth, the returned value is intended for diagnostic display — plot it, read it off the chart, then hard-code the rounded integer into the kernel call.
🟦 INPUT VALIDATION — FAIL LOUDLY, FAIL EARLY
Every public function in KernelLens validates its inputs through a set of internal `_assert*` helpers. Invalid arguments never produce silent `na` fallbacks or buried zero-divisions — they raise `runtime.error` with a descriptive message identifying the function, the parameter, and the expected range.
| Helper | Checks | Raises On |
|---|---|---|
| `_assertFilter` | Filter string is `FILTER_NONE`, `FILTER_SMOOTH`, or `FILTER_ZEROLAG` | Typos like `"No FIlter"` (capital I) — a bug that exists in at least one published kernel indicator |
| `_assertBandwidth` | Bandwidth is a strictly positive integer | Negative or zero bandwidth, which would cause division by zero or infinite loops |
| `_assertPeriod` | Period is a strictly positive integer | Zero period, which would cause `sin(π·d/0)` in Periodic kernels |
| `_assertAlpha` | Rational Quadratic shape parameter is strictly positive | Zero or negative `α`, which would invert the RQ formula |
Error messages are prefixed `KernelLens:` so they are easy to spot in the TradingView runtime log. Every message names the parameter that failed, the value that was passed, and the set of valid alternatives — so a misconfigured chart tells you exactly what to fix.
🟦 LOOP DEPTH — THE BUG FIX THAT MOTIVATED KERNELLENS
The two most popular Pine kernel libraries on TradingView share the same fatal bug: both compute their loop depth as
```pine
_size = array.size(array.from(_src))
```
where `array.from(_src)` creates a **one-element array containing the current value of `_src`**, so `_size` is always `1`. The loop then runs `for i = 0 to 1 + startAtBar`, effectively using only `startAtBar + 2` bars — completely ignoring the user's bandwidth. Every published kernel indicator built on those libraries inherits this silent miscalculation.
KernelLens replaces the broken helper with three explicit depth selectors:
| Helper | Depth | Used By |
|---|---|---|
| `_depthInfinite(bw)` | `max(bw · 3, 4)` | Gaussian, Rational Quadratic, Locally Periodic |
| `_depthCompact(bw)` | `max(bw, 4)` | Epanechnikov, Tricube, Triangular, Cosine |
| `_depthPeriodic(bw, p)` | `max(bw · 3, p · 10, 4)` | Periodic |
For Gaussian-family kernels, the `3·ℓ` cutoff captures approximately 99.7% of the kernel mass (the three-sigma rule). For compact-support kernels, the depth equals the bandwidth exactly — the loop terminates at the kernel's natural zero point. For Periodic kernels, the depth is the larger of the scale-based and cycle-based minima, so the loop always spans enough periods to produce a stable weighted average.
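The three selectors reduce to one-line maximum expressions; here is a Python transcription of the table (illustrative, mirroring the depths stated above):

```python
def depth_infinite(bw):
    # 3-σ cutoff: captures ≈99.7% of the Gaussian-family kernel mass
    return max(bw * 3, 4)

def depth_compact(bw):
    # the loop stops exactly at the kernel's natural zero point
    return max(bw, 4)

def depth_periodic(bw, p):
    # span the larger of the scale-based (3·ℓ) and cycle-based (10·p) minima
    return max(bw * 3, p * 10, 4)
```

The floor of 4 bars guards against degenerate windows at very small bandwidths.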
The loop counter `i` runs over bar offsets starting at `_phase`, every bar lookup is NA-checked before being incorporated into the sum, and the final `num / den` division is guarded against zero denominators. On a fresh chart, the kernel gracefully returns `na` for bars where the window extends past available history, rather than producing poisoned sums from implicit NA arithmetic.
🟦 API REFERENCE
**Core Kernel Estimators — Eight Exports**
| Export | Signature |
|---|---|
| `rationalQuadratic` | `(src, bandwidth, shapeAlpha, phase, filter) → float` |
| `gaussian` | `(src, bandwidth, phase, filter) → float` |
| `periodic` | `(src, bandwidth, period, phase, filter) → float` |
| `locallyPeriodic` | `(src, bandwidth, period, phase, filter) → float` |
| `epanechnikov` | `(src, bandwidth, phase, filter) → float` |
| `tricube` | `(src, bandwidth, phase, filter) → float` |
| `triangular` | `(src, bandwidth, phase, filter) → float` |
| `cosineKernel` | `(src, bandwidth, phase, filter) → float` |
**Unified Dispatcher**
| Export | Signature |
|---|---|
| `estimate` | `(kernelType, src, bandwidth, shapeAlpha, period, phase, filter) → float` |
**Utility Layer — Five Exports**
| Export | Signature |
|---|---|
| `slope` | `(estimate, step) → float` |
| `trendState` | `(estimate, step) → int` |
| `crossSignal` | `(fast, slow) → int` |
| `confidenceBand` | `(src, estimate, window) → float` |
| `silvermanBandwidth` | `(src, window) → float` |
**Parameter Types**
| Name | Pine Type | Description |
|---|---|---|
| `src` | `series float` | Source series (close, hl2, ohlc4, or any other price-derived series) |
| `bandwidth` | `simple int` | Kernel scale `ℓ`, must be `> 0` |
| `shapeAlpha` | `simple float` | Rational Quadratic shape parameter, must be `> 0` |
| `period` | `simple int` | Periodic repetition distance, must be `> 0` |
| `phase` | `simple int` | Kernel center offset in bars, must be `≥ 0` |
| `filter` | `simple string` | One of `FILTER_NONE`, `FILTER_SMOOTH`, `FILTER_ZEROLAG` |
| `kernelType` | `simple string` | One of the eight `KERNEL_*` constants |
| `step` | `simple int` | Finite-difference step for `slope` / `trendState`, must be `≥ 1` |
| `window` | `simple int` | Rolling window for `confidenceBand` / `silvermanBandwidth`, must be `≥ 2` |
🟦 USAGE EXAMPLES
**Minimal — One Gaussian Curve**
```pine
//@version=6
indicator("KernelLens — Gaussian Demo", overlay = true)
import a_jabbaroff/KernelLens/1 as kl
line = kl.gaussian(close, 32, 2, kl.FILTER_SMOOTH)
plot(line, "Gaussian", color = color.orange, linewidth = 2)
```
**Fast / Slow Crossover System**
```pine
//@version=6
indicator("KernelLens — RQ Crossover", overlay = true)
import a_jabbaroff/KernelLens/1 as kl
fast = kl.rationalQuadratic(close, 8, 1.0, 2, kl.FILTER_NONE)
slow = kl.rationalQuadratic(close, 32, 1.0, 2, kl.FILTER_SMOOTH)
cross = kl.crossSignal(fast, slow)
plot(fast, "Fast", color = color.aqua, linewidth = 2)
plot(slow, "Slow", color = color.orange, linewidth = 2)
plotshape(cross == 1, "Bull", location = location.belowbar,
color = color.lime, style = shape.triangleup, size = size.tiny)
plotshape(cross == -1, "Bear", location = location.abovebar,
color = color.red, style = shape.triangledown, size = size.tiny)
```
**Confidence Band Envelope**
```pine
//@version=6
indicator("KernelLens — Confidence Band", overlay = true)
import a_jabbaroff/KernelLens/1 as kl
est = kl.tricube(close, 48, 2, kl.FILTER_SMOOTH)
sigma = kl.confidenceBand(close, est, 50)
k = 1.96
upper = est + k * sigma
lower = est - k * sigma
plot(est, "Estimate", color = color.orange, linewidth = 2)
p1 = plot(upper, "+1.96σ", color = color.new(color.aqua, 70))
p2 = plot(lower, "−1.96σ", color = color.new(color.aqua, 70))
fill(p1, p2, color = color.new(color.aqua, 92))
```
**Dropdown-Driven Kernel Selection**
```pine
//@version=6
indicator("KernelLens — Dropdown", overlay = true)
import a_jabbaroff/KernelLens/1 as kl
kernelType = input.string(kl.KERNEL_GAUSS, "Kernel",
     options = [kl.KERNEL_RQ, kl.KERNEL_GAUSS, kl.KERNEL_PERIODIC, kl.KERNEL_LOCPER,
                kl.KERNEL_EPA, kl.KERNEL_TRICUBE, kl.KERNEL_TRIANG, kl.KERNEL_COSINE])
bandwidth = input.int(32, "Bandwidth", minval = 2)
alphaRQ = input.float(1.0,"RQ Alpha", minval = 0.01, step = 0.25)
period = input.int(20, "Period", minval = 1)
phase = input.int(2, "Phase", minval = 0)
filter = input.string(kl.FILTER_SMOOTH, "Filter",
     options = [kl.FILTER_NONE, kl.FILTER_SMOOTH, kl.FILTER_ZEROLAG])
line = kl.estimate(kernelType, close, bandwidth, alphaRQ, period, phase, filter)
plot(line, "KernelLens", color = color.orange, linewidth = 2)
```
🟦 TIMEFRAME PRESETS — BANDWIDTH BY STYLE
Kernel bandwidth is the single most important parameter. It controls the trade-off between reactivity (small `ℓ`, tight fit, noisier) and stability (large `ℓ`, smooth curve, slower to react). The presets below are tested starting points — adjust by ±25 % to taste.
---
**SCALPER — 1m / 3m / 5m**
| Parameter | Value |
|---|---|
| Bandwidth (ℓ) | 8 |
| Phase | 1 |
| Filter | `FILTER_NONE` |
| Best Kernel | Rational Quadratic or Gaussian |
| RQ shapeAlpha | 1.0 |
**Why:** Short bandwidth means the kernel reacts within a handful of bars. `FILTER_NONE` removes the double-pass lag, so the estimate tracks price as tightly as possible. Phase 1 keeps the estimate nearly live while still avoiding the current-bar tick noise.
---
**DAY TRADER — 15m / 30m / 1H**
| Parameter | Value |
|---|---|
| Bandwidth (ℓ) | 16 |
| Phase | 2 |
| Filter | `FILTER_SMOOTH` |
| Best Kernel | Gaussian or Tricube |
| RQ shapeAlpha | 1.0 |
**Why:** Balanced reactivity — the 16-bar Gaussian sits in the typical Silverman range for intraday price data, and `FILTER_SMOOTH` removes most of the bar-to-bar chop without significantly increasing lag. Tricube provides near-identical behavior with strict compact support and is preferred on noisy assets where outlier bars should not influence the curve.
---
**SWING TRADER — 4H / 1D**
| Parameter | Value |
|---|---|
| Bandwidth (ℓ) | 32 |
| Phase | 3 |
| Filter | `FILTER_SMOOTH` |
| Best Kernel | Rational Quadratic |
| RQ shapeAlpha | 2.0 |
**Why:** Swing trades need structural signals, not intraday noise. Rational Quadratic with `α = 2.0` mixes medium and long length scales, producing a curve that ignores transient spikes but catches genuine regime shifts. Phase 3 shifts the estimate three bars back so each swing decision is made against a fully confirmed kernel output.
---
**POSITION / LONG-TERM — 1D / 1W / 1M**
| Parameter | Value |
|---|---|
| Bandwidth (ℓ) | 64 |
| Phase | 5 |
| Filter | `FILTER_SMOOTH` or `FILTER_ZEROLAG` |
| Best Kernel | Gaussian or Locally Periodic |
| Period (if LP) | 52 (weekly cycle) |
**Why:** Position traders care about the macro trajectory. A Gaussian with ℓ = 64 produces a curve that only turns on genuine multi-month inflections. Locally Periodic with `period = 52` is the ideal choice when a clear seasonal cycle is present — it uses both the long-range Gaussian envelope and the 52-bar periodicity to highlight cycle turns that align with trend.
---
**RESEARCH — Academic / Backtest**
| Parameter | Value |
|---|---|
| Bandwidth (ℓ) | Compute via `silvermanBandwidth(src, 200)` |
| Phase | 0 |
| Filter | `FILTER_NONE` |
| Best Kernel | Epanechnikov |
**Why:** Epanechnikov is the MSE-optimal kernel; `FILTER_NONE` keeps the estimator in its classical single-pass form; `phase = 0` centers the kernel on the bar being evaluated. This is the configuration that matches the statistical literature exactly — use it when publishing research, running Monte-Carlo studies, or calibrating against reference implementations.
🟦 BANDWIDTH SELECTION
Bandwidth `ℓ` is the single most consequential choice in kernel regression. Too small and the estimate overfits local noise; too large and it flattens real structure. KernelLens exposes two helpers to support both manual and semi-automated bandwidth selection.
**Manual — Start with ℓ ≈ √n**
A practical starting point for financial time series: set `ℓ ≈ √window_of_interest`. If you care about 100-bar structure, try `ℓ = 10`. If you care about 400-bar structure, try `ℓ = 20`. Adjust by ±25 % based on how noisy the result looks.
**Silverman's Rule of Thumb**
The closed-form bandwidth that minimizes asymptotic MISE for a Gaussian kernel when the source distribution is approximately normal:
```
h ≈ 1.06 · σ · n^(−1/5)
```
Call `silvermanBandwidth(src, window)` to compute this value live. Because Pine requires `simple int` bandwidth at compile time, the returned value is for diagnostic use — plot it, read the stable value off the chart, then hard-code the rounded integer into your kernel calls.
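The same formula is easy to verify offline. The sketch below is an illustrative Python cross-check of the rule of thumb, not the library's `silvermanBandwidth` implementation; it assumes the population standard deviation of the window as `σ`:

```python
import math

def silverman_bandwidth(values):
    """Silverman's rule of thumb: h ≈ 1.06 · σ · n^(−1/5).

    Offline illustration of the formula; sigma is the population
    standard deviation of the supplied window.
    """
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return 1.06 * sigma * n ** (-1 / 5)
```

Run it on a recent window of closes, round the result, and hard-code that integer as the bandwidth in your kernel calls.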
**Leave-One-Out Cross-Validation (Manual)**
For academic rigor, compute the leave-one-out mean squared error for a range of bandwidths and pick the minimum. KernelLens does not automate this (it would require `series int` bandwidth, which Pine does not support inside kernel loops), but the formula is straightforward:
```
LOOCV(ℓ) = (1/n) · Σᵢ (yᵢ − ŷᵢ⁻ⁱ(ℓ))²
```
where `ŷᵢ⁻ⁱ` is the kernel estimate at bar `i` computed without including bar `i` in the sum. Evaluate offline, pick the minimum, hard-code the result.
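The offline evaluation can be sketched in a few lines. This is a minimal Python illustration assuming a Gaussian kernel over bar-index distance — not the library's code — that computes `LOOCV(ℓ)` for one bandwidth; scan a grid of bandwidths and keep the argmin:

```python
import math

def nw_loo_estimate(y, i, bw):
    """Nadaraya–Watson estimate at index i, excluding point i (Gaussian kernel)."""
    num = den = 0.0
    for j, yj in enumerate(y):
        if j == i:
            continue
        w = math.exp(-((i - j) ** 2) / (2.0 * bw ** 2))
        num += w * yj
        den += w
    return num / den if den > 0.0 else float("nan")

def loocv(y, bw):
    """Leave-one-out mean squared error for bandwidth bw."""
    n = len(y)
    return sum((y[i] - nw_loo_estimate(y, i, bw)) ** 2 for i in range(n)) / n
```

For example, evaluate `loocv(closes, bw)` for `bw` in 2…64, pick the minimizer, and hard-code it.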
🟦 FILTER SELECTION — WHEN TO USE EACH
| Filter | Best For | Avoid When |
|---|---|---|
| `FILTER_NONE` | Live signal generation, research / calibration, compact-support kernels on noisy data | Choppy markets where you need extra smoothing |
| `FILTER_SMOOTH` | Swing and position trades, confidence band midlines, most day-trading setups | Scalping — the double pass adds measurable lag |
| `FILTER_ZEROLAG` | Regime detection, crossover systems that need the curve to track price tightly | Low-volume assets — Zero Lag amplifies high-frequency noise |
The three filters use the same underlying kernel with the same bandwidth, so switching between them does not require re-tuning. Default to `FILTER_SMOOTH` when in doubt — it is the best-behaved option across the widest range of assets and timeframes.
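One common way to build these three modes from a single smoother is worth seeing explicitly. The sketch below is a generic Python composition — the zero-lag form `2·K(y) − K(K(y))` is Ehlers' lag-cancellation trick cited in the references; the library's exact filter formulas may differ in detail:

```python
def apply_filter(kernel, y, mode):
    """Compose three filter modes from one smoothing function.

    'kernel' is any function mapping a series (list) to a smoothed
    series of the same length. Illustrative only, not library code.
    """
    first = kernel(y)
    if mode == "none":
        return first                  # single classical pass
    second = kernel(first)            # second pass, same bandwidth
    if mode == "smooth":
        return second                 # double smoothing, extra lag
    if mode == "zerolag":
        # Ehlers-style lag cancellation: 2·K(y) − K(K(y))
        return [2.0 * a - b for a, b in zip(first, second)]
    raise ValueError(f"unknown filter mode: {mode}")
```

Because all three modes reuse the same kernel and bandwidth, switching modes changes lag characteristics without re-tuning.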
🟦 COMPATIBILITY
KernelLens targets Pine Script v6 and runs on every TradingView chart — no exchange, asset class, or timeframe restriction.
- **Crypto** — Spot, futures, perpetual contracts
- **Forex** — All majors, minors, and exotics
- **Equities** — Stocks, ETFs, indices
- **Commodities** — Metals, energy, agriculture
- **Timeframes** — 1 minute through Monthly
The library is deterministic — given the same source and parameters, every bar of every symbol produces the same estimate. No calibration is needed across assets; the bandwidth parameter alone controls smoothness, and the kernel formulas are scale-free in the source dimension. Silverman's bandwidth helper automatically adapts to each asset's volatility.
🟦 TECHNICAL NOTES
- **Pine Script v6** — uses the modern type system, strict type checking, and the `switch` expression in the unified dispatcher
- **Non-repainting** — kernel outputs for any confirmed bar depend only on that bar's history; there is no look-ahead, no `request.security` with lookahead, and no dependency on the unconfirmed current bar unless `_phase = 0` is deliberately chosen
- **NA-safe iteration** — every bar lookup inside a kernel loop is guarded by `if not na(y)`, so chart history gaps and warm-up bars cannot poison the weighted sum
- **Division-by-zero protection** — every kernel's final division checks `den > 0.0` and returns `na` if the denominator collapses (which can only happen on truly empty windows)
- **Input validation** — every public function asserts its preconditions up front via `_assertFilter`, `_assertBandwidth`, `_assertPeriod`, `_assertAlpha`, and raises `runtime.error` with a descriptive message on misuse — no silent `na` fallbacks
- **Lazy filter evaluation** — the `"No Filter"` path never executes the second kernel pass; the `if`-branch check short-circuits, so single-pass mode is as cheap as a raw kernel call
- **Correct loop bounds** — `_depthInfinite`, `_depthCompact`, and `_depthPeriodic` compute the correct window size per kernel family, fixing the silent `_size = 1` bug that plagues every other published Pine kernel library
- **No persistent state** — the library is purely functional: no `var`, no arrays, no history buffers that grow over time; every export is a pure expression of `(inputs) → output`, so Pine's `max_*_count` limits cannot be exceeded and the library cannot leak memory
- **O(bandwidth) per bar per kernel call** — the loop depth is bounded by the constants in Section 0; there is no hidden quadratic behavior and the cost scales linearly with the user-chosen bandwidth
- **Unicode-safe comments** — the source uses academic notation (`σ`, `ℓ`, `α`, `ŷ`, `ℝ`) where it improves readability; all strings are plain ASCII for runtime compatibility
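Several of the guarantees above — the NA guard, the denominator check, the validated inputs, and the bandwidth-bounded loop — can be seen together in one place. The following Python sketch mirrors their combined effect for a Gaussian kernel; the `4·ℓ` effective-support cutoff is an assumption for illustration, not the library's exact `_depth*` rule:

```python
import math

def nw_estimate(history, bandwidth, depth_mult=4):
    """NA-safe Nadaraya–Watson estimate at the latest bar.

    history[-1] is the current bar; None stands in for Pine's na.
    Illustrative sketch, not the library's implementation.
    """
    if bandwidth < 1:
        raise ValueError("bandwidth must be a positive integer")  # input validation
    depth = min(len(history), depth_mult * bandwidth)  # bandwidth-bounded loop
    num = den = 0.0
    for i in range(depth):
        y = history[-1 - i]
        if y is None:                 # NA guard: skip gaps and warm-up bars
            continue
        w = math.exp(-(i * i) / (2.0 * bandwidth ** 2))
        num += w * y
        den += w
    return num / den if den > 0.0 else None  # division-by-zero protection
```

Note that `None` is returned only when every bar in the window is missing — the same "na only when there is genuinely no data" contract the library documents.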
🟦 ACADEMIC REFERENCES
Every kernel and every formula in KernelLens is cited inline in the source. The combined bibliography:
- **Nadaraya, E. A. (1964).** On estimating regression. *Theory of Probability & Its Applications*, 9(1), 141–142.
- **Watson, G. S. (1964).** Smooth regression analysis. *Sankhyā: The Indian Journal of Statistics, Series A*, 26(4), 359–372.
- **Cleveland, W. S. (1979).** Robust locally weighted regression and smoothing scatterplots. *Journal of the American Statistical Association*, 74(368), 829–836. *(Tricube kernel, LOWESS.)*
- **Silverman, B. W. (1986).** *Density Estimation for Statistics and Data Analysis*. Chapman & Hall, London. *(Bandwidth rule of thumb.)*
- **Wand, M. P. & Jones, M. C. (1995).** *Kernel Smoothing*. Chapman & Hall. *(Unified treatment of all eight kernels.)*
- **MacKay, D. J. C. (1998).** Introduction to Gaussian Processes. *NIPS Tutorial*. *(Periodic and Rational Quadratic kernels.)*
- **Ehlers, J. F. (2000).** *Rocket Science for Traders*. John Wiley & Sons. *(Zero-lag smoothing trick.)*
- **Rasmussen, C. E. & Williams, C. K. I. (2006).** *Gaussian Processes for Machine Learning*. MIT Press. *(Locally Periodic and Rational Quadratic kernels.)*
🟦 VERSIONING & LICENSE
- **Version** — 1.0.0
- **Pine Script** — v6
- **License** — Mozilla Public License 2.0
- **Status** — Production-ready
KernelLens follows semantic versioning. Minor versions add new exports without breaking existing ones; patch versions fix bugs; major versions may change function signatures and will be announced in the changelog.
🟦 DISCLAIMER
KernelLens is a mathematical library for non-parametric regression on financial time series using the Nadaraya–Watson method. The library is provided solely for educational and research purposes and does not constitute financial, investment, or trading advice.
Kernel regression is a local smoothing technique. It estimates the mean of a source series in the neighborhood of the current bar based on historical data, but it does not predict future prices, does not generate trading signals on its own, and does not guarantee the profitability of any strategy built on top of its output.
Past performance of any model does not guarantee future results. Markets contain systemic risks that cannot be eliminated by any amount of mathematical rigor in the kernel itself. Responsibility for any trading decisions made using this library rests entirely with the user. Always apply sound capital management, conduct your own independent analysis, and never risk capital you are not prepared to lose.
The author assumes no liability for direct or indirect losses incurred through the use of KernelLens or any indicator built on top of it.