Sunset Sonata is a real-time interactive system that translates atmospheric scattering physics into both visual and sonic outputs. The project establishes mathematical mappings between atmospheric parameters and their representations in two perceptual domains: visual (sunset rendering) and auditory (ambient music synthesis).
The core concept is that the same physical properties that create visual sunset phenomena can be algorithmically mapped to sound synthesis parameters, creating a unified representation of atmospheric conditions. This dual-domain mapping allows for exploration of the relationship between physical reality and perceptual experience.
The visual rendering is based on atmospheric scattering physics, using the Henyey-Greenstein phase function to model light scattering off particles in the atmosphere. The phase function describes the angular distribution of scattered light:

P_HG(cos θ, g) = (1 − g²) / (4π (1 + g² − 2g cos θ)^(3/2))
where θ is the scattering angle and g is the asymmetry parameter, derived from μ_size, the mean particle size parameter.
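For reference, the phase function can be evaluated directly; a minimal JavaScript sketch (the function name is illustrative, not from the project source):

```javascript
// Henyey-Greenstein phase function: angular distribution of scattered light.
// cosTheta: cosine of the scattering angle; g: asymmetry parameter in (-1, 1).
function hgPhase(cosTheta, g) {
  const g2 = g * g;
  const denom = Math.pow(1 + g2 - 2 * g * cosTheta, 1.5);
  return (1 - g2) / (4 * Math.PI * denom);
}
```

With g = 0 the function reduces to isotropic scattering, 1/(4π); g > 0 biases energy toward forward scattering (cos θ → 1), which is what larger atmospheric particles do.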
The optical depth τ is calculated as the product of particle density ρ and optical thickness T: τ = ρ × T.
The scattering intensity combines Rayleigh-like scattering (for small particles, creating blue sky) and Mie-like scattering (for larger particles, creating red sunsets):

I_rayleigh = 0.3 × (max(0, v̂_y))^0.5
I_mie = P_HG(cos θ, g) × τ
The wavelength-dependent scattering is then:

I_λ = (I_rayleigh + I_mie) ⊙ LUT

where LUT = [R, G, B] is the spectral lookup table (color data) and ⊙ denotes element-wise multiplication.
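Putting the visual pipeline together, a sketch of the per-pixel scattering computation, assuming the two intensity terms combine additively (function and parameter names are illustrative):

```javascript
// Henyey-Greenstein phase function (as above).
function hgPhase(cosTheta, g) {
  const g2 = g * g;
  return (1 - g2) / (4 * Math.PI * Math.pow(1 + g2 - 2 * g * cosTheta, 1.5));
}

// Combine Rayleigh-like and Mie-like intensities, then weight each
// color channel by the spectral lookup table LUT = [R, G, B].
function scatterColor(viewY, cosTheta, g, rho, T, lut) {
  const tau = rho * T;                               // optical depth τ = ρ × T
  const iRayleigh = 0.3 * Math.sqrt(Math.max(0, viewY));
  const iMie = hgPhase(cosTheta, g) * tau;
  const i = iRayleigh + iMie;                        // additive combination assumed
  return lut.map(c => c * i);                        // element-wise weighting
}
```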
Light attenuation through the atmosphere follows an exponential decay, e^(−τ), with the traversed path length governed by v̂_y, the vertical component of the view direction.
The audio system uses additive synthesis with multiple oscillators, filters, and modulation sources. Each atmospheric parameter maps to specific synthesis parameters.
Particle density ρ ∈ [0.1, 5.0] maps to frequency modulation (FM) parameters, the rate and depth of a shared FM LFO. Additionally, particle density controls wind-like amplitude modulation. The resulting frequency modulation is applied to all oscillators.
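Assuming the FM is a vibrato-style modulation of each oscillator's base frequency (the exact ρ → rate/depth mapping is not given here), the instantaneous frequency could be sketched as:

```javascript
// Instantaneous frequency of an oscillator under a shared FM LFO.
// baseFreq: oscillator base frequency (Hz); fmRate: LFO rate (Hz);
// fmDepth: modulation depth as a fraction of baseFreq; t: time (s).
function fmFrequency(baseFreq, fmRate, fmDepth, t) {
  return baseFreq * (1 + fmDepth * Math.sin(2 * Math.PI * fmRate * t));
}
```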
Particle size mean μ_size ∈ [0.01, 2.0] and spread σ_size ∈ [0.1, 3.0] map to the lowpass filter characteristics (cutoff frequency and resonance). The size spread also controls variation complexity across the oscillator bank.
Each oscillator receives random detuning, scaled by a per-oscillator random value r_i ∈ [−0.5, 0.5].
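The detuning could be realized in cents, for example (the cents scaling is an assumed choice, not confirmed by the source):

```javascript
// Detune an oscillator by r ∈ [-0.5, 0.5] scaled to a span in cents.
// spreadCents: maximum detune span (assumed mapping from σ_size).
function detunedFreq(baseFreq, r, spreadCents) {
  const cents = r * spreadCents;             // r ∈ [-0.5, 0.5]
  return baseFreq * Math.pow(2, cents / 1200);
}
```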
Optical thickness T ∈ [0.1, 10.0] maps to the reverb decay time τ_decay.
The reverb impulse response is dynamically generated as white noise rand(n) shaped by an exponentially decaying envelope over N = sampleRate × τ_decay samples.
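A sketch of generating the decaying-noise impulse response as a plain array; the decay constant k, chosen here so the tail reaches roughly −60 dB at n = N, is an assumption:

```javascript
// White noise shaped by an exponential decay envelope.
// sampleRate in Hz, tauDecay in seconds; returns N = sampleRate * tauDecay samples.
function makeImpulseResponse(sampleRate, tauDecay) {
  const N = Math.floor(sampleRate * tauDecay);
  const k = 6.91;                             // e^-6.91 ≈ 0.001, about -60 dB
  const h = new Float32Array(N);
  for (let n = 0; n < N; n++) {
    const noise = Math.random() * 2 - 1;      // rand(n) ∈ [-1, 1]
    h[n] = noise * Math.exp(-k * n / N);
  }
  return h;
}
```

In a Web Audio graph, such an array would be copied into an AudioBuffer and assigned to a ConvolverNode's buffer.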
Each RGB color channel controls both harmonic amplitude and pitch frequency.
The harmonic distribution function maps the color intensities R, G, B ∈ [0.0, 2.0] to oscillator amplitudes; the amplitude for each harmonic h is given by the piecewise formula in the full thesis.
Each color channel controls a distinct pitch range.
To create dynamic, non-static sound, the system simulates air movement through multiple modulation sources.
Two low-frequency oscillators create wind-like amplitude variations, which are combined into a single amplitude modulation signal A_total(t).
This modulates the gain of each oscillator: G_osc,i(t) = A_base,i × A_total(t).
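The two wind LFOs might combine as sinusoids offset around unity gain (the rates and depths below are illustrative values, not from the source):

```javascript
// Combined wind-like amplitude modulation from two slow LFOs,
// centered on 1 so the average gain is unchanged.
function windAmplitude(t, f1 = 0.13, f2 = 0.07, d1 = 0.2, d2 = 0.15) {
  return 1 + d1 * Math.sin(2 * Math.PI * f1 * t)
           + d2 * Math.sin(2 * Math.PI * f2 * t);
}

// Per-oscillator gain: G_osc,i(t) = A_base,i × A_total(t).
function oscGain(aBase, t) {
  return aBase * windAmplitude(t);
}
```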
The complete signal flow can be expressed as:

y(t) = Reverb(Filter(Σᵢ G_osc,i(t) × sin(φ_i + 2π ∫₀ᵗ f_osc,i(u) du)))

where f_osc,i(t) includes FM modulation and detuning, G_osc,i(t) includes wind and noise amplitude modulation, φ_i is the phase offset, Filter is a lowpass filter with cutoff f_cutoff and resonance Q, and Reverb is a convolution reverb with decay time τ_decay.
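An offline sketch of the pre-filter mix (static per-voice frequencies for brevity; in the project f_osc,i(t) is time-varying, and the filter and reverb stages would follow):

```javascript
// Sum of sinusoidal voices, each with its own frequency, gain, and
// phase offset. Filter and reverb stages are omitted here; in the
// browser they are a BiquadFilterNode and a ConvolverNode.
function renderMix(voices, sampleRate, seconds) {
  const out = new Float32Array(Math.floor(sampleRate * seconds));
  for (let n = 0; n < out.length; n++) {
    const t = n / sampleRate;
    let s = 0;
    for (const v of voices) {
      s += v.gain * Math.sin(2 * Math.PI * v.freq * t + v.phase);
    }
    out[n] = s;
  }
  return out;
}
```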
All mappings are updated in real-time as parameters change: y(t) = M(p(t)), where p(t) is the parameter vector, M is the mapping function, and y(t) is the output vector (visual and audio). The system maintains continuity by updating parameters smoothly.
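Smooth parameter updates can be sketched with a one-pole smoother; the coefficient below is illustrative, and in a Web Audio implementation AudioParam automation such as setTargetAtTime plays the same role:

```javascript
// One-pole smoothing toward a target: each step moves a fraction alpha
// of the remaining distance, avoiding audible or visible parameter jumps.
function smoothStep(current, target, alpha = 0.1) {
  return current + alpha * (target - current);
}

// Repeatedly applying smoothStep converges on the target.
let p = 0;
for (let i = 0; i < 200; i++) p = smoothStep(p, 1);
```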
Visual Rendering: WebGL fragment shaders implementing the atmospheric scattering equations.
Audio Synthesis: Web Audio API with 9 oscillators, 1 FM LFO, 2 wind LFOs, 1 filtered noise source, 1 lowpass filter per voice, 1 shared convolution reverb. All parameters updated in real-time through AudioParam automation.
Sunset Sonata demonstrates how the same physical parameters can be mapped to multiple perceptual domains simultaneously.
Sunset Sonata presents a novel approach to sonification and visualization, where atmospheric physics becomes a compositional tool. The project opens possibilities for interactive art installations, educational tools, data sonification, and exploration of cross-modal perception. Future work could explore more sophisticated atmospheric models, additional mapping strategies, machine learning for parameter optimization, and multi-user collaborative experiences.
This project explores the intersection of atmospheric physics, computer graphics, and sound synthesis, creating a unified representation of natural phenomena across visual and auditory domains.
Henyey & Greenstein (1941). Diffuse radiation in the galaxy. Astrophysical Journal, 93, 70-83.
Preetham, Shirley & Smits (1999). A practical analytic model for daylight. Proceedings of SIGGRAPH.
Nishita et al. (1996). Display method of the sky color taking into account multiple scattering. Pacific Graphics.
Chowning (1973). The synthesis of complex audio spectra by means of frequency modulation. JAES.
Roads (1996). The Computer Music Tutorial. MIT Press.
Web Audio API. W3C Recommendation. https://www.w3.org/TR/webaudio/