Echo — Finding What You're Looking For

DriftMind detects unknown anomalies — points that deviate from learned behaviour. Echo detects known patterns — reference signatures you've already identified and want to spot again. Together, they answer two different operational questions on the same stream: is this unusual? and is this the bearing-failure signature we saw last month?

Streaming · Multivariate · Amplitude-sensitive · Sub-millisecond · CPU-only
Zero training · Zero allocations per point · Multivariate signatures · Attach / detach at runtime · Per-pattern severity

Overview

Every operations team eventually builds a list of failure modes they've seen before. "When the bearing starts to fail, the vibration RMS climbs exponentially over about fifty samples." "A signaling storm looks like a sharp RRC connection burst co-occurring with PRB saturation." "Thermal runaway has a characteristic shape — and it's only dangerous above a certain magnitude."

Echo is a streaming engine that takes those signatures as input and watches live data for them. It runs alongside DriftMind inside the same edge runtime, sharing the data stream but answering a fundamentally different question.

DriftMind asks

"Is this data point statistically unusual given what I've learned so far?"

Echo asks

"Does the current window match a signature I was explicitly told to watch for?"

The two engines are complementary. A high DriftMind anomaly score tells you something is wrong. A high Echo score tells you which specific known failure mode is occurring.

Why Anomaly Detection Isn't Enough

Generic anomaly detection is powerful for catching things you didn't predict. But it has a predictable limitation in production: it tells you something is unusual, not what is unusual or how to respond. Every anomaly looks the same to the scoring system, even though operationally some mean "wake up the on-call engineer at 3am" and others mean "open a ticket for daytime investigation."

Teams compensate by building rule engines on top of the anomaly score. Those rules are brittle: they're written against generic anomaly output, not against the signatures themselves. A rule like "if anomaly score > 0.8 and value > 50, treat as critical" will eventually fire on something that happens to look like that threshold crossing but isn't the actual failure mode.

Echo flips the polarity. Instead of detecting deviation and then interpreting it, you give the engine the signature you care about, and it tells you — with a correlation score — how closely the current window matches. No interpretive layer between detection and action.

How Echo Works

1. Pattern definition

A pattern is a reference signal — a short time-series (typically 30–100 samples) that represents the shape you want to detect. It can be a single feature (vibration RMS over time) or a coordinated set of features (temperature and vibration, RRC connections and PRB utilisation).

2. Pre-processing

When a pattern is registered, Echo smooths it with a Gaussian kernel (bandwidth auto-scaled to pattern length) and computes its statistical profile once. From then on, the pattern data is immutable — no retraining, no updates, no drift compensation.
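A minimal sketch of that registration step, assuming a truncated Gaussian kernel and a hypothetical profile structure (Echo's exact bandwidth rule and profile fields are not published):

```python
import numpy as np

def preprocess(pattern, bandwidth_frac=0.05):
    """Illustrative pre-processing: Gaussian smoothing with a bandwidth
    scaled to pattern length, plus a one-time statistical profile.
    The 0.05 fraction and the profile fields are assumptions."""
    p = np.asarray(pattern, dtype=float)
    sigma = max(1.0, bandwidth_frac * len(p))      # auto-scale bandwidth
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                          # normalise to unit mass
    # Edge-pad so the smoothed signal keeps the pattern's length.
    smoothed = np.convolve(np.pad(p, radius, mode="edge"), kernel, mode="valid")
    profile = {"mean": smoothed.mean(), "std": smoothed.std(),
               "l1_mass": np.abs(smoothed).mean()}
    return smoothed, profile
```

Both outputs are computed exactly once; nothing about the stored pattern changes afterwards.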

3. Streaming detection

Every time a new observation arrives, Echo advances a sliding window of the pattern's length across the stream and scores its similarity to the reference using a two-stage algorithm:

  • Magnitude pre-filter — an O(m) L1 distance check that rejects windows whose amplitude is clearly wrong. Cheap, fast, eliminates most non-matches.
  • Pearson correlation — measures shape similarity on windows that pass the magnitude filter. The final score combines correlation and magnitude into a [0, 1] match confidence.
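The two stages can be sketched as follows; the tolerance value and the way the two scores combine are assumptions for illustration, not Echo's exact internals:

```python
import numpy as np

def match_score(window, pattern, mag_tolerance=0.3):
    """Illustrative two-stage score: L1 magnitude pre-filter, then
    Pearson correlation, combined into a [0, 1] match confidence."""
    w = np.asarray(window, dtype=float)
    p = np.asarray(pattern, dtype=float)
    # Stage 1: O(m) L1 magnitude check, relative to the pattern's own scale.
    scale = np.abs(p).mean()
    mag_penalty = np.abs(w - p).mean() / (scale if scale > 0 else 1.0)
    if mag_penalty > mag_tolerance:
        return 0.0  # amplitude clearly wrong: reject before the correlation stage
    # Stage 2: Pearson correlation measures shape similarity.
    r = np.corrcoef(w, p)[0, 1]
    shape = max(float(r), 0.0)  # a negative correlation is not a match
    # Combine shape and magnitude into the final confidence.
    return shape * (1.0 - mag_penalty / mag_tolerance)
```

A window identical to the pattern scores 1.0; a window at ten times the pattern's amplitude is rejected by the pre-filter without the correlation stage ever running.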

4. State machine

Echo runs a two-state machine per pattern: Searching (looking for the early portion of the signature) and Tracking (watching the rest of the pattern unfold). A tolerance counter absorbs single-point noise so a transient spike doesn't reset tracking. The output is a continuously updated match score that grows as the signature progresses.
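The state machine can be sketched like this; the state names follow the text, while the thresholds and the per-point scoring are placeholders:

```python
from enum import Enum

class State(Enum):
    SEARCHING = 0   # looking for the early portion of the signature
    TRACKING = 1    # watching the rest of the pattern unfold

class PatternTracker:
    """Illustrative two-state tracker. SEARCHING waits for a strong enough
    point score; TRACKING follows the signature, absorbing up to `tolerance`
    consecutive bad points before resetting. Thresholds are assumptions."""
    def __init__(self, enter=0.6, keep=0.4, tolerance=1):
        self.enter, self.keep, self.tolerance = enter, keep, tolerance
        self.state, self.misses, self.progress = State.SEARCHING, 0, 0

    def update(self, point_score):
        if self.state is State.SEARCHING:
            if point_score >= self.enter:
                self.state, self.progress, self.misses = State.TRACKING, 1, 0
        else:
            if point_score >= self.keep:
                self.progress += 1
                self.misses = 0
            else:
                self.misses += 1
                if self.misses > self.tolerance:   # noise budget exhausted
                    self.state, self.progress = State.SEARCHING, 0
        return self.state, self.progress
```

A single weak point leaves the tracker in Tracking; only a run of weak points longer than the tolerance resets it to Searching.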

Every step runs in bounded memory per stream (O(m), where m is the pattern length) with zero allocations on the hot path. An edge device monitoring thousands of streams with dozens of attached patterns stays memory-stable.

Multivariate Matching

Real failures rarely show up in a single metric. A telecom signaling storm looks like elevated PRB utilisation and a burst of RRC connection requests. Either alone is ambiguous — PRB spikes on ordinary traffic, RRC spikes during mobility events. Only the coordinated pattern indicates the storm.

Echo patterns can declare multiple features. When a multi-feature pattern is attached to a forecaster, the engine computes a score for each feature independently and averages them. A high final score requires all features to align with the reference shape — single-feature decoys are naturally suppressed.

// A two-feature signaling-storm pattern
POST /patterns
{
  "patternName": "signaling-storm",
  "features": {
    "rrc": [ 100, 280, 460, 640, 820, 950, 950, 950, ... ],
    "prb": [  40,  53,  66,  79,  85,  92,  92,  92, ... ]
  }
}

A decoy PRB-only burst won't trigger this pattern because the RRC feature hasn't followed the expected shape. The detector fires only when both streams exhibit the coordinated signature.
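The per-feature averaging described above can be sketched with any single-feature scorer plugged in (the scorer here is a stand-in, not Echo's):

```python
def multivariate_score(window_by_feature, pattern_by_feature, score_fn):
    """Score each declared feature independently and average the results.
    A decoy that matches only one feature is pulled down by the others."""
    scores = [score_fn(window_by_feature[f], pattern_by_feature[f])
              for f in pattern_by_feature]
    return sum(scores) / len(scores)
```

With an exact-match toy scorer, a window matching both features scores 1.0, while a PRB-only decoy scores 0.5 and stays below any reasonable alert threshold.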

Amplitude Sensitivity

Most shape-matching methods normalise amplitude away — MASS and Matrix Profile, for example, use z-score normalisation, which treats the same shape at different magnitudes as equivalent. For a lot of problems this is the right choice. For many industrial and infrastructure problems, it's catastrophic.

A heartbeat at 70 bpm is healthy. The same waveform at 350 bpm means cardiac arrest. A vibration signature at 2 mm/s is normal. The same signature at 10 mm/s means the bearing is failing. Same shape, opposite meaning.

Echo is amplitude-sensitive by design. The magnitude pre-filter enforces an implicit operating-envelope check before the correlation stage even runs. Patterns only fire when both the shape and the magnitude match the reference. This is a deliberate deviation from the time-series literature, and it's what makes Echo usable for operational monitoring out of the box.
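A toy comparison makes the difference concrete: z-normalised correlation reports a 5x-amplitude window as a perfect shape match, while a simple relative-magnitude check flags it immediately. The signature values here are made up for illustration.

```python
import numpy as np

ref = np.array([2.0, 2.2, 2.6, 3.2, 4.0])   # signature at normal magnitude
high = ref * 5                               # same shape, 5x the amplitude

def znorm(x):
    """Z-score normalisation, as used by MASS / Matrix Profile methods."""
    return (x - x.mean()) / x.std()

# Shape-only view: magnitude vanishes, the two windows look identical.
shape_only = float(np.corrcoef(znorm(ref), znorm(high))[0, 1])

# Amplitude-aware view: relative L1 distance exposes the off-scale window.
rel_l1 = float(np.abs(high - ref).mean() / np.abs(ref).mean())
```

`shape_only` comes out at 1.0, so a z-normalised matcher would accept the 5x window; `rel_l1` comes out at 4.0, so any magnitude gate rejects it.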

Severity-Based Routing

Patterns carry operational meaning. Some are informational, some are urgent. Echo models this directly: every attachment between a pattern and a forecaster specifies a severity.

WARN

Known condition that deserves visibility but not immediate action. Examples: sustained elevated load, reversible drift, capacity approach.

MAJOR

Known failure mode starting. Operator should investigate; degradation is likely if unaddressed.

CRITICAL

Known catastrophic signature. Triggers immediate escalation. Thermal runaway, cascading failure, safety-critical state transitions.

When a prediction comes back with matched Echo patterns, each carries its own score and its severity. Your alerting pipeline can route directly on severity — no interpretive layer between detection and the runbook.
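A sketch of routing directly on the returned severity; the handler names and the 0.8 threshold are hypothetical choices for your pipeline, not part of the API:

```python
def route(echo_patterns, notify):
    """Map each matched pattern's severity straight to an action.
    `echo_patterns` has the shape returned under "echoPatterns" above."""
    actions = {"CRITICAL": "page-oncall", "MAJOR": "open-incident", "WARN": "log-only"}
    for name, match in echo_patterns.items():
        if match["score"] >= 0.8:                  # operational threshold (assumption)
            notify(actions[match["severity"]], name, match["score"])
```

Because the severity travels with the match, there is no score-interpretation logic between the prediction response and the runbook.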

API Walkthrough

Echo is exposed through the same REST API as DriftMind. Three entities: patterns, forecasters, and attachments. Create them independently, attach and detach at runtime.

1. Create a pattern

POST /patterns
{
  "patternName": "bearing-failure",
  "features": {
    "vibration": [ 0.5, 0.52, 0.56, 0.61, 0.68, 0.78, 0.92, ... ]
  }
}

// -> { "patternId": "bddb52d3-...", "patternName": "bearing-failure" }

2. Attach the pattern to a forecaster

POST /forecasters/{forecasterId}/attachments
{
  "patternId": "bddb52d3-...",
  "severity":  "CRITICAL"
}

3. Feed data as usual

POST /forecasters/{forecasterId}/observations
{ "vibration": [0.51, 0.53, 0.49, ...] }

4. Read predictions

Predictions include both DriftMind's anomaly score and the per-pattern Echo matches:

GET /forecasters/{forecasterId}/predictions

{
  "anomalyScore": 0.18,
  "features": { ... },
  "echoPatterns": {
    "bearing-failure": { "score": 0.92, "severity": "CRITICAL" }
  }
}

The same pattern can be attached to many forecasters. Detaching from one doesn't affect the others. Patterns can be added and removed at runtime without restarting the engine.

Using Echo on the Edge

Echo ships as part of the DriftMind Edge container — the same ~70 MB native binary that runs the forecasting engine also runs the pattern matcher. Nothing to install, nothing extra to enable. Pull the image, run it, and Echo is live on the same port as the rest of the API.

1. Pull and run the container

Two images are published on Docker Hub. Most users will want the lab image first — it bundles a Jupyter server with a pre-loaded validation notebook so you can see Echo working on real data in under a minute.

# Lab image — includes Jupyter and the Echo validation notebook
docker run -p 8080:8080 -p 8888:8888 thngbk/driftmind-edge-lab:latest

# Minimal image — API only, ~70 MB, ideal for production
docker run -p 8080:8080 thngbk/driftmind-edge:latest

Once the container is up, open http://localhost:8080/ in a browser to see the self-hosted documentation, or http://localhost:8888 to land in Jupyter.

2. Two ways to use Echo

Pick the interface that matches how the data reaches you.

REST API — for live streams

Feed data as it arrives. Call POST /forecasters/{id}/observations each time new points are available. Predictions (forecasts + anomaly scores + Echo matches) come back on GET /forecasters/{id}/predictions. This is the default path for agents, microservices, and SCADA bridges.

CSV CLI — for offline datasets

Run driftmind-benchmark config.json data.csv against a stored dataset. The CLI creates a forecaster, attaches the patterns declared in your config, streams the rows through both engines, and writes per-row predictions plus per-pattern Echo scores to a result CSV. No server required.

3. The validation notebook

The lab image ships with echo_validation.ipynb, a notebook that walks through four realistic scenarios drawn from our target verticals:

  • Bearing failure signature — Industrial IoT single-feature detection.
  • Cell signaling storm — Telecom RAN multivariate pattern that rejects PRB-only decoys.
  • Thermal runaway — amplitude-sensitive detection across three scale levels (0.5×, 1.0×, 2.0×).
  • Multi-pattern severity — three patterns attached to one forecaster, each routed by severity.

Each scenario creates its own forecaster and patterns, feeds a synthetic stream, and plots the match score alongside the DriftMind anomaly score. It's the fastest way to see the two engines side by side on the same data.

4. CSV CLI configuration

To attach patterns in the CLI, list them under echoAttachments in your config JSON. Each entry points to a pattern JSON file and declares its severity:

// config.json — fed to driftmind-benchmark
{
  "forecasterName": "pump-monitor",
  "features": ["temperature", "vibration"],
  "inputSize": 15,
  "outputSize": 1,
  "echoAttachments": [
    {
      "patternFile": "patterns/bearing-failure.json",
      "patternName": "bearing-failure",
      "severity":    "CRITICAL"
    },
    {
      "patternFile": "patterns/thermal-runaway.json",
      "patternName": "thermal-runaway",
      "severity":    "MAJOR"
    }
  ]
}

// patterns/bearing-failure.json
{
  "temperature": [22, 24, 28, 35, 44, 55, 68],
  "vibration":   [0.3, 0.5, 0.8, 1.2, 2.1, 3.0, 4.5]
}

Run it the same way whether you're inside the lab container or mounting your dataset from the host:

# Inside the lab container
./driftmind-benchmark config.json data.csv

# From the host, mounting a local directory as /data
docker run --rm -v $(pwd):/data thngbk/driftmind-edge-lab:latest \
    ./driftmind-benchmark /data/config.json /data/data.csv

5. The result CSV

Each row in the output includes the actual value, DriftMind's prediction and absolute error, the anomaly score, and one pair of columns per attached pattern:

row, temperature_actual, temperature_predicted, temperature_ae,
     vibration_actual,   vibration_predicted,   vibration_ae,
     anomaly_score,
     echo_bearing-failure_score,  echo_bearing-failure_severity,
     echo_thermal-runaway_score,  echo_thermal-runaway_severity

When a known failure signature starts to appear, you'll see the corresponding echo score climb from 0 toward 1. Filter rows where that score crosses your operational threshold to isolate exactly when the known pattern fired — independently of the generic anomaly score.
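Filtering the result CSV for those crossings takes a few lines of standard-library Python; the column name follows the echo_<pattern>_score convention shown above, and the threshold is yours to choose:

```python
import csv

def firing_rows(path, column, threshold=0.8):
    """Return the rows where a per-pattern echo score meets the threshold."""
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if float(row[column]) >= threshold]
```

Each returned row still carries the anomaly score and per-feature predictions, so the known-pattern firings can be compared against the generic anomaly signal directly.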

As of this release, the CSV also includes a {feature}_sequence column when outputSize > 1, containing the full predicted vector (pipe-separated) alongside the existing first-value columns. This is backward-compatible: outputs for outputSize = 1 are byte-identical to earlier releases.
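Reading a pipe-separated sequence cell back into numbers is a one-liner:

```python
def parse_sequence(cell):
    """Split a {feature}_sequence cell into a list of floats."""
    return [float(v) for v in cell.split("|")]
```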

6. No persistence

The edge container is stateless by design. Forecasters, patterns, and attachments live in memory and vanish when the container stops. This is a deliberate choice for edge deployments: the operational state lives in your orchestration layer, not inside the engine. For persistent deployments (SaaS or on-prem Kubernetes), the same API is backed by a durable store — but the edge binary itself is pure compute.

Where Echo Applies

Vertical · Example pattern · Why shape matters
  • Industrial IoT · bearing failure, pump cavitation, thermal runaway · catch a known degradation curve before the asset fails
  • Telecom RAN · signaling storm, handover storm, radio link failure · distinguish multivariate storm signatures from generic load spikes
  • Data centres · cooling failure, thermal cascade, power anomaly · recognise PUE/thermal transitions with operational severity
  • Energy / grid · partial discharge, transformer saturation · amplitude-sensitive detection of known electrical faults
  • Financial / transactional · fraud sequence, card-testing burst · match known attack signatures, not just generic outliers

Echo vs Anomaly Detection

Dimension · DriftMind anomaly · Echo pattern match
  • Detects · what you didn't expect · what you were explicitly looking for
  • Requires · nothing (learns online) · a reference signature
  • Output · continuous anomaly score [0, 1] · per-pattern match score [0, 1] plus severity
  • Action · investigate (something is off) · execute the runbook (you know exactly what's happening)
  • Amplitude handling · learns from the data distribution · explicit, pattern-bound

Neither replaces the other. Production systems use both: DriftMind catches the unknown, Echo names the known, and the combined signal is more actionable than either alone.

Takeaway

Most operational intelligence lives in two buckets: anomalies we couldn't predict and failure modes we've seen before. DriftMind has always addressed the first. Echo addresses the second, with streaming efficiency, multivariate support, amplitude sensitivity, and direct severity attribution.

Together, they close the operational loop: detect the deviation, recognise the signature, route the alert with the right urgency — all on a single CPU, from the first data point, with no retraining phase.

Try Echo today

Echo is included in every DriftMind Edge tier. Pull the container, attach a pattern, and watch it match against live data in under a minute.