Neural Harmonica

Phase-coherent AV entrainment generator scaffold
Audio: cosine-phase start
Video: FPS = beat × N
Motion: down→right→up→left
Calibration: flash + tick
Coherent stimulation across channels

Neural Harmonica treats entrainment as a clocking problem: align audio, luminance, and spatial motion to a shared phase origin, then keep them locked with harmonic frame rates and calibration markers.

Pillbox Flow

1) Choose set → 2) Lock phase origin → 3) Pick harmonic FPS → 4) Render luminance + motion → 5) Add calibration → 6) Mux A/V


Quick Start

Keep your pipeline deterministic: cosine-phase audio start, harmonic FPS, sinusoidal luminance envelope, and mandatory directional spatial motion. Use calibration markers to measure device latency.

Neural Harmonica is a structured audiovisual entrainment generation pipeline designed to produce phase-coherent multimodal rhythmic stimulation (audio + visual + spatial motion) aligned to a selected neural-frequency target.

It is not a musical harmonica; the name refers to harmonic synchronization across sensory channels.

Core concept

The system works by ensuring that every stimulus layer — audio tones, luminance envelope, and spatial motion — shares the same temporal reference:

All layers are phase-locked so that their peaks, transitions, and directional changes occur at deterministic positions relative to the same oscillatory cycle.

How it operates (algorithmic flow)

1. Frequency selection

Choose the target entrainment (beat) frequency and the carrier tones that produce it.

This defines the temporal oscillation the system will synchronize to.
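A minimal sketch of this parameter choice. The specific values (a 10 Hz alpha-band beat on a 200 Hz left carrier) are illustrative assumptions, not prescribed by the pipeline:

```python
# Hypothetical frequency selection: values are examples only.
f_left = 200.0           # left-ear carrier (Hz)
beat = 10.0              # target entrainment frequency (Hz)
f_right = f_left + beat  # right-ear carrier; the beat is f_right - f_left

assert f_right - f_left == beat
```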

2. Phase-coherent audio generation

Stereo tones are produced using a cosine-phase start, where fL and fR are the left and right carrier frequencies:


L(t) = cos(2π fL t)

R(t) = cos(2π fR t)

Starting at cosine phase ensures the waveform peak occurs exactly at t = 0, creating a defined phase origin.
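The cosine-phase generation above can be sketched with NumPy; the carrier values and sample rate are assumptions for illustration:

```python
import numpy as np

def cosine_phase_stereo(f_left, f_right, duration, sample_rate=48000):
    """Generate a stereo tone pair that starts at cosine phase (peak at t = 0)."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    left = np.cos(2 * np.pi * f_left * t)
    right = np.cos(2 * np.pi * f_right * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

audio = cosine_phase_stereo(200.0, 210.0, duration=1.0)
# Both channels equal 1.0 at sample 0 — the shared phase origin.
```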

3. Harmonic frame-rate selection

The rendering frame rate is chosen as an integer harmonic multiple of the beat:


FPS = beat × N

This guarantees that each visual cycle is sampled deterministically and does not drift relative to the audio oscillation.
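One way to pick N is to target a common display rate; the 60 FPS target here is an assumption, not part of the specification:

```python
def harmonic_fps(beat, target_fps=60.0):
    """Pick FPS = beat * N with integer N, as close to target_fps as possible."""
    n = max(1, round(target_fps / beat))
    return beat * n, n

fps, n = harmonic_fps(10.0)  # a 10 Hz beat yields 60 FPS with N = 6
```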

4. Visual luminance entrainment envelope

A sinusoidal brightness modulation is generated:


Brightness(t) = 0.5 (1 + cos(2π beat t))

Because both audio and visual signals start at the same phase, their oscillations remain aligned across the entire duration.
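Sampling the envelope at the harmonic frame rate gives one brightness value per frame; a sketch assuming the 10 Hz / 60 FPS example above:

```python
import numpy as np

def luminance_envelope(beat, fps, n_frames):
    """Per-frame brightness in [0, 1], cosine-phased like the audio."""
    t = np.arange(n_frames) / fps
    return 0.5 * (1.0 + np.cos(2 * np.pi * beat * t))

env = luminance_envelope(beat=10.0, fps=60.0, n_frames=60)
# Frame 0 is full brightness (1.0), matching the audio peak at t = 0.
```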

5. Spatial motion activation

A deterministic directional cycle (down → right → up → left) is applied, where each motion phase begins on a beat-aligned boundary.

This adds a spatial entrainment dimension synchronized with the temporal rhythm.
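The directional cycle can be driven by a simple frame-index lookup; this sketch assumes six frames per beat (60 FPS at a 10 Hz beat):

```python
DIRECTIONS = ["down", "right", "up", "left"]

def motion_direction(frame_idx, frames_per_beat):
    """Direction for a frame; each beat-aligned boundary advances the cycle."""
    beat_idx = frame_idx // frames_per_beat
    return DIRECTIONS[beat_idx % len(DIRECTIONS)]

# With 6 frames per beat: frames 0-5 -> "down", 6-11 -> "right", and so on.
```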

6. Calibration header

A frame-0 visual flash and audio impulse tick provide an external synchronization reference, allowing playback latency to be measured and compensated so the intended phase alignment survives real hardware buffering.
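A minimal sketch of stamping the calibration markers onto already-rendered buffers; the array shapes and unit-impulse tick are assumptions about the representation, not a fixed format:

```python
import numpy as np

def add_calibration(frame_brightness, audio, copy=True):
    """Stamp a frame-0 white flash and a sample-0 audio tick (illustrative).

    frame_brightness: 1-D array of per-frame luminance in [0, 1]
    audio: 2-D array of stereo samples, shape (n_samples, 2)
    """
    if copy:
        frame_brightness = frame_brightness.copy()
        audio = audio.copy()
    frame_brightness[0] = 1.0  # full-white calibration flash on frame 0
    audio[0, :] = 1.0          # unit-impulse tick at sample 0, both channels
    return frame_brightness, audio
```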

7. Muxed audiovisual output

The synchronized video and audio streams are combined into a single file whose oscillatory structure is internally phase-consistent.
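Muxing is typically delegated to an external tool such as ffmpeg; this sketch only assembles a plausible command (the file paths, codecs, and the assumption that ffmpeg is on PATH are all illustrative):

```python
import subprocess

def mux_command(frames_pattern, audio_path, out_path, fps):
    """Build an ffmpeg command that muxes a frame sequence with audio."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps), "-i", frames_pattern,  # rendered frames
        "-i", audio_path,                              # phase-locked audio
        "-c:v", "libx264", "-c:a", "aac",
        "-shortest", out_path,
    ]

# subprocess.run(mux_command("frames/%05d.png", "tones.wav", "out.mp4", 60),
#                check=True)
```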

Why the system is called “harmonica”

The “harmonica” metaphor refers to harmonic alignment across multiple modalities:

Instead of independent stimuli, the system produces a multi-channel coherent oscillatory field where every component is mathematically locked to the same temporal waveform.

---

If extended further, the next logical step is implementing an automatic harmonic scheduler, which selects the optimal FPS, envelope resolution, and motion-phase timing directly from the chosen beat frequency so the neural harmonica pipeline becomes frequency-agnostic.
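A rough sketch of what such a scheduler could look like; the preference for native display rates and the fallback near 60 FPS are design assumptions, not a committed implementation:

```python
def schedule(beat, display_rates=(60.0, 120.0, 144.0)):
    """Hypothetical harmonic scheduler: derive FPS and motion-phase timing
    from the beat frequency alone."""
    # Prefer an FPS that is both an integer harmonic of the beat and a
    # native display rate; otherwise fall back to the nearest harmonic of 60.
    for rate in display_rates:
        if rate % beat == 0:
            fps = rate
            break
    else:
        fps = beat * max(1, round(60.0 / beat))
    frames_per_beat = int(fps / beat)
    return {"fps": fps,
            "frames_per_beat": frames_per_beat,
            "motion_phase_frames": frames_per_beat}  # one direction per beat
```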