Methodology

Conceptual Framework

The Positioning Levels Model is built on the insight that financial markets often gravitate around certain key price levels where buying or selling pressure intensifies. These "reaction levels" — analogous to support and resistance zones — are price areas where the market has repeatedly shown a response, either bouncing off the level or breaking through it. The goal of this model is to identify those levels objectively, using data-driven analysis rather than subjective chart-watching. By quantifying where significant price reactions occurred in the past, the model captures the "memory" of the market — essentially highlighting where traders are likely positioned in size. This provides a tangible edge in anticipating short-term market direction, an edge that has been validated through extensive historical testing with strict no-lookahead evaluation (meaning future data is never used to identify past levels). In other words, these levels are discovered and evaluated in a way that could be done in real time, ensuring any observed performance is genuine and not an artifact of hindsight.

It is important to clarify that these levels are not derived from a standard volume profile or any traditional indicator. They are inferred from time-anchored price & volume reaction events, where the joint behavior of price movement and market participation at specific moments indicates that significant positions were established or defended at a given price and time. (In fact, the volume profile shown on the charts is provided only for context and often reflects the positions the model detects; it is an effect of positioning, not the source of the levels.) In short, the model finds where the market has repeatedly reacted in the past, under comparable conditions, to map out price zones that matter. Each identified level represents an area that the data shows the market has respected with a measurable edge, giving traders actionable insight grounded in objective analysis.

Data Sources & Significance

Accurate, high-quality intraday data is the backbone of this model. I use intraday price and volume history at minute-level granularity for each instrument, ensuring fine-grained coverage of market dynamics. The analysis is session-aware: each trading session's data is treated discretely, with an understanding of where the regular trading day begins and ends versus overnight trading. This context allows apples-to-apples comparisons (for example, day-session reactions versus overnight reactions) and prevents mixing different market regimes or liquidity conditions. By anchoring calculations to session start times, the model can identify patterns like post-open reactions or other time-of-day effects, comparing like with like across days.

Alongside price, the model analyzes trading volume (market participation) at each price and moment in time. High participation at a specific price and moment often signals significant institutional interest or positions being built at that level. By examining price and volume together with an explicit time anchor, the model can detect where and when meaningful reactions occurred — for example, a sharp price reversal on unusually high volume, or a sudden breakout accompanied by a volume spike. The model even considers "absorption" events (instances of exceptionally high volume with very little price movement) as significant because they indicate heavy positions being absorbed at a level. Any abnormal combination of price change and volume surge (or a conspicuous lack of price change despite heavy volume) can flag a potential positioning level.
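
To make the absorption idea concrete, the sketch below shows one simple way such bars could be flagged from minute data. The column names, rolling window, and multipliers are illustrative assumptions, not the model's actual parameters.

```python
import pandas as pd

def flag_absorption(bars: pd.DataFrame, window: int = 60,
                    vol_mult: float = 3.0, range_frac: float = 0.25) -> pd.Series:
    """Flag bars with unusually heavy volume but very little price movement.

    `bars` is assumed to hold minute bars with 'high', 'low', and 'volume'
    columns; the window and multipliers are illustrative, not calibrated.
    """
    bar_range = bars["high"] - bars["low"]
    med_range = bar_range.rolling(window).median()
    med_volume = bars["volume"].rolling(window).median()

    heavy_volume = bars["volume"] > vol_mult * med_volume   # abnormal participation
    tiny_range = bar_range < range_frac * med_range         # price barely moved
    return heavy_volume & tiny_range                        # candidate absorption bars
```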

All data are sourced from reliable historical market feeds covering a wide range of instruments (equity index futures, commodities, bond futures, major Forex futures, and leading stocks) and are stored locally for analysis. This rich intraday history is updated on a daily schedule, so the model always reflects the latest market conditions. In practice, the algorithm focuses on the most recent few months of intraday data to ensure that the levels identified remain relevant to the current market regime rather than drifting toward stale history.

Analytical Process

The Positioning Levels Model follows a structured analytical pipeline to derive the current key levels from raw data. In simplified terms, the process unfolds in four stages, each building on the previous one:

1. Event Detection

The algorithm begins by scanning through each trading session's intraday data to find significant price-and-volume events. These events are moments where price action and trading activity combine in an unusual way, indicating a notable market reaction. Examples include a sharp reversal on very high volume, a rapid breakdown through a price area accompanied by a volume spike, or a stall in price (a very small range) despite huge volume that signals absorption. Each such event is treated as a potential "seed" pointing to a notable price level in that session.

To quantify significance objectively, the model applies statistical scoring to each price bar. It measures how abnormal the price move was relative to recent volatility and how abnormal the volume was at that moment, combining the two into a joint significance metric. Only events that exceed adaptive, data-driven thresholds are flagged as seeds. These thresholds calibrate to each instrument and session so that only the most notable events (roughly the top handful per session) are selected, keeping the focus on meaningful candidate events and avoiding noise. By being selective and rule-based at this stage, the model lets the data surface actionable levels instead of imposing subjective opinions.
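
A minimal sketch of this kind of joint significance scoring is shown below. The z-score construction, window length, and fixed top-N cutoff are simplifications of the adaptive, per-instrument calibration described above, not the production logic.

```python
import pandas as pd

def score_session_events(bars: pd.DataFrame, window: int = 120,
                         top_n: int = 5) -> pd.DataFrame:
    """Rank bars by a joint price/volume abnormality score and keep the top few.

    Illustrative only: the real thresholds are adaptive per instrument and
    session rather than a fixed rolling window and top-N cutoff.
    """
    ret = bars["close"].pct_change()
    price_z = (ret - ret.rolling(window).mean()) / ret.rolling(window).std()
    vol_z = (bars["volume"] - bars["volume"].rolling(window).mean()) \
        / bars["volume"].rolling(window).std()

    # Joint metric: the price move and the participation must both be abnormal.
    joint = price_z.abs() * vol_z.clip(lower=0)

    scored = bars.assign(score=joint).dropna(subset=["score"])
    return scored.nlargest(top_n, "score")   # roughly the "top handful" of seeds
```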

2. Clustering into Candidate Levels

Markets often react multiple times around the same price zone on different days, so the next step consolidates the raw events into distinct candidate price levels. The model clusters event prices while accounting for price proximity and instrument volatility. If multiple detected events occur at nearly the same price (for example, within a few ticks or a small fraction of the instrument's typical daily range), they likely represent the same underlying level. Rather than treating them separately, the model groups them and treats the cluster as one candidate level.

This clustering uses a volatility-sensitive spacing rule: identified levels must be a minimum distance apart in price, typically on the order of one standard deviation of recent movement (or an analogous volatility measure). The algorithm defines a kill radius around each level so that any lower-scoring events inside that radius merge into the stronger one or are discarded as redundant. The result is a de-duplicated set of candidate levels that highlight price areas with repeatable impact.
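
The following sketch illustrates the greedy merge logic under a given kill radius. The data structures and the way merged evidence is accumulated are assumptions made for illustration; the radius itself would come from the volatility measure described above.

```python
def cluster_levels(event_prices: list[float], scores: list[float],
                   kill_radius: float) -> list[tuple[float, float]]:
    """Greedily merge event prices into distinct candidate levels.

    `kill_radius` would come from a volatility estimate (on the order of one
    sigma of recent movement); here it is simply supplied by the caller.
    """
    # Visit events from strongest to weakest so that weaker events falling
    # inside the kill radius of an accepted level are absorbed, not kept.
    order = sorted(range(len(event_prices)), key=lambda i: scores[i], reverse=True)
    levels: list[tuple[float, float]] = []   # (level price, accumulated score)

    for i in order:
        price, score = event_prices[i], scores[i]
        for j, (lvl_price, lvl_score) in enumerate(levels):
            if abs(price - lvl_price) <= kill_radius:
                levels[j] = (lvl_price, lvl_score + score)   # merge the evidence
                break
        else:
            levels.append((price, score))    # sufficiently distinct: new candidate
    return levels
```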

3. Level Testing (Forward Reaction Outcomes)

For each candidate level, the model conducts a rigorous forward-looking evaluation to measure how price behaves when revisiting that level. This is where the model's strict no-lookahead principle matters: all testing is run as if in real time, using only data that appears after the level is identified. Every subsequent touch of the level is classified using a first-passage analysis that looks for which predefined boundary is hit first (one boundary confirms a bounce, the other a break).

These outcome thresholds are volatility-adjusted. The model requires larger moves to confirm a bounce or break in more volatile markets and enforces minimum absolute thresholds (microstructure floors) so that tiny wiggles never count as decisive outcomes. The evaluation runs separately for each session, so a level's behavior during the primary day session can be distinguished from its behavior overnight.
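
As a rough illustration of how such thresholds might be derived, the sketch below scales bounce and break distances off a volatility estimate and enforces a tick-based floor. The multipliers and the four-tick floor are placeholders, not the model's calibrated values.

```python
def outcome_thresholds(sigma: float, tick_size: float,
                       bounce_mult: float = 1.0, break_mult: float = 0.5,
                       floor_ticks: int = 4) -> tuple[float, float]:
    """Scale bounce/break confirmation distances off a volatility estimate.

    `sigma` is recent volatility in price units; the multipliers and the
    four-tick microstructure floor are illustrative, not calibrated values.
    """
    floor = floor_ticks * tick_size                   # microstructure floor
    bounce_dist = max(bounce_mult * sigma, floor)     # move away that confirms a bounce
    break_dist = max(break_mult * sigma, floor)       # move through that confirms a break
    return bounce_dist, break_dist
```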

4. Scoring and Selection

Once the model has outcome statistics for each candidate level (bounce frequency, break frequency, and the magnitudes of moves), it assigns an expected value (EV) score. The score blends the probability of a favorable reaction with the typical reward-versus-risk profile. High-scoring levels are those where touches tend to produce positive expectancy outcomes, while low or negative scores point to levels that do not add value.

To keep the scores robust, the model applies Bayesian-style shrinkage and minimum-evidence gates. Levels with sparse history are pulled toward a neutral baseline until enough touches accumulate, and any level that fails to meet minimum evidence requirements is excluded or heavily discounted. The final step ranks the remaining levels by score and enforces the volatility-based spacing rule again so that only distinct, high-conviction levels make the dashboard.
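
A simplified version of this scoring step might look like the sketch below. The prior weight, the evidence gate, and the exact EV formula are illustrative stand-ins for the model's calibrated scheme.

```python
def level_ev(bounces: int, breaks: int,
             avg_bounce_move: float, avg_break_move: float,
             prior_touches: float = 10.0, min_touches: int = 5) -> float | None:
    """Expected-value score for one level, shrunk toward a neutral baseline.

    The prior weight, evidence gate, and EV formula are illustrative
    stand-ins; move magnitudes are expressed in the instrument's price units.
    """
    touches = bounces + breaks
    if touches < min_touches:
        return None                                   # fails the minimum-evidence gate

    p_bounce = bounces / touches
    raw_ev = p_bounce * avg_bounce_move - (1 - p_bounce) * avg_break_move

    # Bayesian-style shrinkage: sparse history is pulled toward EV = 0.
    weight = touches / (touches + prior_touches)
    return weight * raw_ev
```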

Technical Details (Deeper Dive)

Session-Aware Analysis

Markets trade in distinct sessions (for example, day versus overnight for many futures, or discrete trading days for stocks). The model analyzes each session independently and anchors calculations to session start times. This prevents distortion from overnight gaps or thin-liquidity periods and allows session-specific parameters, such as expected session length, to inform the analysis. Levels identified during overnight activity do not pollute the day-session evaluation and vice versa.

The model also knows when not to analyze, allowing maintenance breaks or illiquid segments to be skipped. Outcomes can be weighted by session type so that, for instance, primary day-session evidence carries more influence than overnight touches if that aligns with how the instrument trades.
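
For illustration, session tagging on minute bars could be as simple as the sketch below. The session boundaries and the excluded maintenance window are placeholder times, not the per-instrument schedules the model actually uses.

```python
import pandas as pd

def label_sessions(bars: pd.DataFrame, rth_start: str = "09:30",
                   rth_end: str = "16:00") -> pd.Series:
    """Tag each minute bar as regular-session, overnight, or excluded.

    Assumes a tz-aware DatetimeIndex in exchange time; the session times and
    the maintenance window are placeholders, not a real instrument schedule.
    """
    minutes = bars.index.strftime("%H:%M")
    labels = pd.Series("overnight", index=bars.index)
    labels[(minutes >= rth_start) & (minutes < rth_end)] = "rth"
    labels[(minutes >= "17:00") & (minutes < "18:00")] = "excluded"  # e.g. maintenance break
    return labels
```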

Volatility-Calibrated Thresholds

Every key threshold in the methodology scales with recent volatility. The model continuously estimates volatility using robust measures (such as bipower variation) and Average True Range, and it uses those metrics to set level zone widths, bounce and break distances, and even reasonable time windows for reactions. Floors ensure the thresholds never fall below sensible minimums, preventing microstructure noise from masquerading as genuine signals.
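
The sketch below shows one way bipower variation and ATR could be blended into a single sigma estimate with a floor. The window length (one day of minute bars here) and the simple averaging rule are illustrative choices rather than the model's actual blend.

```python
import numpy as np
import pandas as pd

def robust_sigma(bars: pd.DataFrame, window: int = 390) -> float:
    """Blend bipower variation and ATR into one volatility estimate with a floor.

    Both the window length and the equal-weight blend are illustrative.
    """
    ret = np.log(bars["close"]).diff()
    # Realized bipower variation: robust to isolated jump returns.
    bpv = (np.pi / 2) * (ret.abs() * ret.abs().shift(1)).rolling(window).sum()
    bpv_sigma = np.sqrt(bpv.iloc[-1]) * bars["close"].iloc[-1]   # back to price units

    prev_close = bars["close"].shift(1)
    true_range = np.maximum(bars["high"] - bars["low"],
                            np.maximum((bars["high"] - prev_close).abs(),
                                       (bars["low"] - prev_close).abs()))
    atr = true_range.rolling(window).mean().iloc[-1]

    return max((bpv_sigma + atr) / 2, 1e-9)   # floor keeps the estimate positive
```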

First-Passage Outcome Classification

When price approaches a candidate level, the model defines two boundaries around it: an "away" boundary whose first touch confirms a bounce and a "fail" boundary whose first touch confirms a break, each set a measured distance from the level. Whichever boundary price reaches first after visiting the level determines the classification. This first-passage framing offers a consistent, unbiased lens for evaluating level effectiveness and allows for efficient, vectorized computation across years of data.
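
A minimal sketch of this classification for a single touch is shown below (looping over the path for clarity rather than vectorized). The function signature, the `defended_side` convention, and the "unresolved" label are assumptions made for illustration.

```python
def classify_touch(path: list[float], level: float,
                   away_dist: float, fail_dist: float,
                   defended_side: int) -> str:
    """Classify one touch of a level by whichever boundary price reaches first.

    `path` is the price sequence after the touch; `defended_side` is +1 for a
    level acting as support, -1 for resistance. Names are illustrative.
    """
    away_boundary = level + defended_side * away_dist   # back toward the approach side: bounce
    fail_boundary = level - defended_side * fail_dist   # through the level: break

    for price in path:
        if defended_side * (price - away_boundary) >= 0:
            return "bounce"
        if defended_side * (price - fail_boundary) <= 0:
            return "break"
    return "unresolved"   # neither boundary reached inside the evaluation window
```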

EV-Only Scoring with Evidence Shrinkage

Rather than relying on ad hoc rules, the model evaluates each level purely on expected value. Scores factor in both win rate and payoff asymmetry, so a level that wins slightly less often but delivers much larger favorable moves can outrank one with a high win rate but tiny payoffs. Bayesian shrinkage tempers scores when sample sizes are small, and minimum evidence gates ensure that one-off reactions never masquerade as high-conviction levels.

Clustering and Sigma-Based Spacing

Even after scoring, the model enforces volatility-based spacing so that reported levels remain meaningfully distinct. If two candidate levels fall within the same kill radius, the higher-scoring one survives and absorbs the supporting evidence. This keeps the dashboard clean and focused on the zones that matter most.

Continuous Update and Output Generation

The entire pipeline is rerun on a daily cadence with fresh intraday data, strictly respecting the no-lookahead rule. Once processing completes, the model renders each instrument's chart with horizontal level bands, annotations, and a volume-profile side pane for context. Bolder bands highlight areas where the evidence is strongest, and the dashboard refreshes automatically after each overnight run so the analysis always reflects current market behavior.
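
The rendering stack itself is not described here; purely as an illustration of the horizontal-band output, a matplotlib sketch might look like the following, with hypothetical field names for the level records and an opacity mapping chosen only for the example.

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_levels(bars: pd.DataFrame, levels: list[dict]) -> plt.Figure:
    """Draw a close-price chart with a shaded horizontal band per level.

    Each level record is assumed to look like
    {"price": 4510.0, "half_width": 2.5, "score": 0.8}; the field names and
    the opacity mapping are hypothetical, not the production output format.
    """
    fig, ax = plt.subplots(figsize=(10, 5))
    ax.plot(bars.index, bars["close"], lw=0.8, color="black")

    for lvl in levels:
        # Bolder (more opaque) bands where the evidence is strongest.
        strength = max(min(lvl["score"], 1.0), 0.0)
        ax.axhspan(lvl["price"] - lvl["half_width"],
                   lvl["price"] + lvl["half_width"],
                   color="tab:blue", alpha=0.2 + 0.5 * strength)
        ax.annotate(f'EV {lvl["score"]:+.2f}',
                    xy=(bars.index[-1], lvl["price"]))
    return fig
```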