Sunset Sonata maps atmospheric scattering parameters to both visual rendering and sound synthesis. Each physical property of the atmosphere controls specific aspects of the sunset visualization and of the generated ambient music in real time.
The visual rendering uses a WebGL shader implementing atmospheric scattering physics. The Henyey–Greenstein phase function models how light scatters off particles in the atmosphere, creating the characteristic sunset colors.
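The Henyey–Greenstein phase function itself is compact enough to restate. A sketch in JavaScript (the function name and argument convention here are illustrative, not taken from the demo's shader source):

```javascript
// Henyey–Greenstein phase function: probability density of light
// scattering at angle theta, where cosTheta = dot(viewDir, sunDir)
// and g in (-1, 1) is the asymmetry parameter (g > 0 favors
// forward scattering, as with larger atmospheric particles).
function henyeyGreenstein(cosTheta, g) {
  const g2 = g * g;
  return (1 - g2) / (4 * Math.PI * Math.pow(1 + g2 - 2 * g * cosTheta, 1.5));
}
```

With g = 0 the function reduces to isotropic scattering, 1/(4π) in every direction; positive g makes forward scattering (cosTheta near 1) dominate, which is what produces the bright glow around the sun at sunset.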
Particle density: controls the number of particles per unit volume in the atmosphere. Higher density increases the amount of light scattering, making the sky more vibrant and the colors more saturated. It directly affects the optical depth calculation in the scattering equation.
Particle size (mean): the average size of atmospheric particles. Smaller particles scatter blue light preferentially (Rayleigh scattering), while larger particles scatter all wavelengths more uniformly (Mie scattering). This parameter controls the asymmetry parameter g in the Henyey–Greenstein phase function.
Particle size (spread): the variance in particle sizes. A wider distribution creates more complex scattering patterns and smoother color transitions across the sky.
Optical thickness: measures how much light is absorbed or scattered as it passes through the atmosphere. Higher values create deeper, more intense sunsets with stronger color separation. The optical depth is calculated as:

opticalDepth = thickness × density
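Combined with the standard Beer–Lambert extinction law (an assumption about how the shader consumes optical depth; the source only states the product formula), the mapping can be sketched as:

```javascript
// Optical depth as defined in this demo: thickness × density.
function opticalDepth(thickness, density) {
  return thickness * density;
}

// Beer–Lambert transmittance: the fraction of light that survives a
// path with the given optical depth. This is the standard relation in
// scattering models; the demo's shader may differ in detail.
function transmittance(depth) {
  return Math.exp(-depth);
}
```

A thick, dense atmosphere (large optical depth) transmits almost no direct light, which is why the remaining scattered light dominates and the colors deepen.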
Color curve: a lookup table controlling the spectral response of each color channel. This allows direct manipulation of the RGB color balance in the rendered sky; the curve values are multiplied with the computed scattering colors.
Sun position: the angular position of the sun in the sky. Elevation controls how high the sun sits above the horizon (negative values place it below, creating sunset/dusk effects), and azimuth controls the compass direction. The sun's position determines the scattering angle and so feeds the phase function calculation.
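A common convention for turning elevation/azimuth into the sun direction used for the scattering angle (the axis convention here, y-up with azimuth measured from +z, is an assumption, not taken from the demo):

```javascript
// Convert sun elevation/azimuth (degrees) to a unit direction vector.
// Assumed convention: y is up, azimuth 0° points along +z.
function sunDirection(elevationDeg, azimuthDeg) {
  const el = (elevationDeg * Math.PI) / 180;
  const az = (azimuthDeg * Math.PI) / 180;
  return [
    Math.cos(el) * Math.sin(az), // x
    Math.sin(el),                // y: up (negative below the horizon)
    Math.cos(el) * Math.cos(az), // z
  ];
}

// The scattering angle for a view ray is then
// cosTheta = dot(viewDir, sunDir), which feeds the phase function.
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
```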
Location: geographic coordinates that can be used to compute accurate sun positions from time and date (currently simplified in the demo).
The audio system uses the Web Audio API to create an ambient synthesizer with multiple oscillators, filters, and effects. Each atmospheric parameter directly controls specific sound synthesis parameters in real time, creating a sonic representation of the visual sunset. The system includes dynamic variations that simulate air movement and particle motion, making the sound less static and more organic: wind-like amplitude modulations, random noise fluctuations, and oscillator detuning create a living, breathing soundscape that responds to atmospheric conditions.
Particle density controls multiple aspects of sound modulation. First, it controls the rate and depth of frequency modulation (FM) applied to all oscillators, creating pitch variations that simulate the dynamic movement of particles. Higher density also creates more wind-like amplitude variation, simulating how particles blow and drift through the air. The system uses multiple low-frequency oscillators (LFOs) to create complex, organic variation in both pitch and amplitude.
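The density mappings reduce to a pure function (constants restated from the formulas given in this section; the function name and the 0–5 density range implied by the /5.0 normalization are assumptions):

```javascript
// Map particle density (assumed range 0–5) to FM and wind-modulation
// parameters, per the formulas in this section.
function densityToModulation(density) {
  return {
    fmRate: density * 1.0,                  // Hz: LFO rate for pitch FM
    fmDepth: density * 0.5,                 // FM modulation depth
    windRate: 0.1 + (density / 5.0) * 2.0,  // Hz: wind amplitude LFO rate
    windDepth: 0.05 + (density / 5.0) * 0.15, // wind amplitude depth
  };
}
```

At maximum density (5) the wind modulation reaches 2.1 Hz at depth 0.2, while a still atmosphere (density 0) keeps only a slow 0.1 Hz residual drift.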
FM Rate = density × 1.0 Hz
FM Depth = density × 0.5
Wind Rate = 0.1 + (density / 5.0) × 2.0 Hz
Wind Depth = 0.05 + (density / 5.0) × 0.15

The particle size parameters control multiple aspects of sound generation. The mean size determines the filter cutoff frequency (higher = brighter), while the spread controls both the filter resonance/Q factor and the complexity of variations. A higher spread creates more varied particle sizes, which maps to increased detuning between oscillators (creating beating/chorus effects) and more random noise modulation (simulating the chaotic movement of differently sized particles in the air). This creates a direct correlation between particle characteristics and both the brightness/texture and the organic, living quality of the sound.
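The size mappings likewise reduce to a pure function (constants restated from this section's formulas; the function name and the 0–2 / 0–3 ranges implied by the normalizations are assumptions):

```javascript
// Map particle size mean (assumed 0–2) and spread (assumed 0–3)
// to filter and texture parameters, per the formulas in this section.
function sizeToFilter(sizeMean, sizeSpread) {
  return {
    cutoff: 200 + (sizeMean / 2.0) * 3800,     // Hz: 200–4000, brightness
    resonance: 1.0 + (sizeSpread / 3.0) * 9.0, // filter Q: 1–10
    detune: (sizeSpread / 3.0) * 5.0,          // cents between oscillators
    noise: (sizeSpread / 3.0) * 0.1,           // random amplitude modulation
  };
}
```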
Cutoff = 200 + (sizeMean / 2.0) × 3800 Hz
Resonance = 1.0 + (sizeSpread / 3.0) × 9.0
Detune Amount = (sizeSpread / 3.0) × 5.0 cents
Noise Amount = (sizeSpread / 3.0) × 0.1

Optical thickness controls the decay time of the reverb/convolution effect. Thicker atmospheres create longer, more spacious reverb tails, simulating how light (and, by analogy, sound) travels through dense atmospheric layers. The reverb impulse response is dynamically regenerated based on this parameter.
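A sketch of the thickness-to-reverb mapping, including one typical way to synthesize a decaying-noise impulse response for a convolution reverb (the envelope shape and helper names are assumptions; the demo's regeneration code may differ):

```javascript
// Map optical thickness (assumed 0–10 from the /10.0 normalization)
// to reverb decay time: 0.5–8.0 seconds.
function thicknessToDecay(thickness) {
  return 0.5 + (thickness / 10.0) * 7.5;
}

// Common impulse-response recipe for a ConvolverNode: white noise
// under a polynomial decay envelope. Returns one channel of samples.
function makeImpulseResponse(decaySeconds, sampleRate = 44100) {
  const length = Math.floor(decaySeconds * sampleRate);
  const samples = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const envelope = Math.pow(1 - i / length, 2); // fades to silence
    samples[i] = (Math.random() * 2 - 1) * envelope;
  }
  return samples;
}
```

In a browser, the samples would be copied into an AudioBuffer and assigned to a ConvolverNode's buffer each time the thickness slider moves.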
Decay Time = 0.5 + (thickness / 10.0) × 7.5 seconds

Each color channel controls both the amplitude and the pitch of different harmonic partials in the synthesizer. The system uses 9 oscillators organized by color channel: red controls the fundamental and lower harmonics (the bass foundation), green controls mid harmonics (body and warmth), and blue controls upper harmonics (brightness and texture). This creates a spectral mapping in which the color balance directly shapes both the harmonic content and the pitch range of the sound.
Each RGB color channel controls a distinct pitch range. As you adjust the color intensity, the base pitch of that channel's oscillators changes within its range. This creates a direct mapping between color and pitch, where brighter colors produce higher pitches within their respective ranges. The pitch mapping works in conjunction with the harmonic distribution, so each color channel controls both the frequency and amplitude of its associated oscillators.
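The per-channel pitch mapping reduces to octave-related base frequencies (constants restated from this section's formulas; the 0–2 channel range implied by the /2.0 normalization is an assumption):

```javascript
// Map RGB channel intensities (assumed 0–2) to oscillator base pitches.
// The three starting points are an octave apart (A1, A2, A3), and each
// channel sweeps up to one further octave at full intensity.
function colorToPitch(red, green, blue) {
  return {
    redHz: 55 + (red / 2.0) * 55,       // 55–110 Hz: bass foundation
    greenHz: 110 + (green / 2.0) * 110, // 110–220 Hz: body and warmth
    blueHz: 220 + (blue / 2.0) * 220,   // 220–440 Hz: brightness
  };
}
```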
Red Pitch = 55 + (red / 2.0) × 55 Hz
Green Pitch = 110 + (green / 2.0) × 110 Hz
Blue Pitch = 220 + (blue / 2.0) × 220 Hz

The synthesizer consists of nine oscillators organized by color channel, modulation LFOs, a filter, gain staging, and a convolution reverb:
Signal flow: Oscillators → Amplitude Modulation (Wind LFOs + Noise) → Mixer → Filter → Gain → Reverb → Master → Output
Modulation routing: Frequency Modulation LFO → Oscillator frequencies | Wind LFOs + Noise → Amplitude modulation
All parameters update in real time as you adjust the sliders. The visual rendering updates immediately via WebGL uniforms, and the audio synthesis parameters are updated continuously while sound is playing. This creates a live, interactive experience for exploring the relationship between atmospheric physics and both visual and sonic phenomena.
The weather presets (clear, hazy, polluted, pristine) automatically adjust particle density, optical thickness, and particle size mean to simulate different atmospheric conditions. These provide starting points for exploring different sonic and visual landscapes.
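A preset table might look like the following sketch (the numeric values are illustrative placeholders consistent with the parameter ranges described above, not the demo's actual presets; `applyPreset` and its callback are hypothetical names):

```javascript
// Hypothetical preset values: density 0–5, thickness 0–10, sizeMean 0–2.
const WEATHER_PRESETS = {
  pristine: { density: 0.5, thickness: 1.0, sizeMean: 0.2 },
  clear:    { density: 1.0, thickness: 2.0, sizeMean: 0.4 },
  hazy:     { density: 2.5, thickness: 5.0, sizeMean: 1.0 },
  polluted: { density: 4.5, thickness: 8.5, sizeMean: 1.8 },
};

// Applying a preset writes each field to the matching parameter, so the
// visual uniforms and the audio mappings update together.
function applyPreset(name, setParam) {
  for (const [key, value] of Object.entries(WEATHER_PRESETS[name])) {
    setParam(key, value);
  }
}
```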