Sunset Sonata: Mapping Atmospheric Scattering Parameters to Visual and Sonic Phenomena

Xinnan Yu · Scientific-Art Thesis

This thesis presents Sunset Sonata, an interdisciplinary project that maps atmospheric scattering parameters to both visual rendering and real-time sound synthesis. The system creates a direct correspondence between physical properties of the atmosphere (particle density, size distribution, optical thickness, and spectral characteristics) and both visual sunset phenomena and ambient music generation. Through mathematical mappings and algorithmic transformations, atmospheric physics becomes a compositional tool, where each parameter simultaneously controls visual rendering via WebGL shaders and sound synthesis via the Web Audio API. This work explores the relationship between physical phenomena and their perceptual representations across visual and auditory domains.

1. Introduction

Sunset Sonata is a real-time interactive system that translates atmospheric scattering physics into both visual and sonic outputs. The project establishes mathematical mappings between atmospheric parameters and their representations in two perceptual domains: visual (sunset rendering) and auditory (ambient music synthesis).

The core concept is that the same physical properties that create visual sunset phenomena can be algorithmically mapped to sound synthesis parameters, creating a unified representation of atmospheric conditions. This dual-domain mapping allows for exploration of the relationship between physical reality and perceptual experience.

2. Atmospheric Scattering Model

The Henyey-Greenstein Phase Function

The visual rendering is based on atmospheric scattering physics, using the Henyey-Greenstein phase function to model light scattering off particles in the atmosphere. The phase function describes the angular distribution of scattered light:

P_HG(cos θ, g) = (1 − g²) / (4π(1 + g² − 2g cos θ)^(3/2))

where θ is the scattering angle and g is the asymmetry parameter, which depends on particle size:

g = clamp(u_size − 0.5, −0.9, 0.9)

Here, u_size represents the mean particle size parameter.
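The phase function and the size-to-asymmetry mapping above can be sketched as plain functions. This is a minimal illustration, not the project's shader code; the function and variable names (`phaseHG`, `asymmetry`, `uSize`) are chosen here for clarity.

```javascript
// Clamp helper, mirroring GLSL's clamp(x, lo, hi).
const clamp = (x, lo, hi) => Math.min(Math.max(x, lo), hi);

// Asymmetry parameter from mean particle size:
// g = clamp(u_size − 0.5, −0.9, 0.9)
function asymmetry(uSize) {
  return clamp(uSize - 0.5, -0.9, 0.9);
}

// Henyey-Greenstein phase function:
// P_HG(cos θ, g) = (1 − g²) / (4π (1 + g² − 2g cos θ)^(3/2))
function phaseHG(cosTheta, g) {
  const denom = 1 + g * g - 2 * g * cosTheta;
  return (1 - g * g) / (4 * Math.PI * Math.pow(denom, 1.5));
}
```

For g = 0 the function reduces to the isotropic value 1/(4π); for g > 0 forward scattering (cos θ = 1) dominates, which is what produces the bright glow around the sun at sunset.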

Optical Depth and Scattering

The optical depth τ is calculated as the product of particle density ρ and optical thickness T:

τ = T × ρ

The scattering intensity combines Rayleigh-like scattering (for small particles, creating blue sky) and Mie-like scattering (for larger particles, creating red sunsets):

I_scatter = I_rayleigh + I_mie

where I_rayleigh = 0.3 × (max(0, v̂_y))^0.5, I_mie = P_HG(cos θ, g) × τ, and v̂_y is the vertical component of the view direction.

The wavelength-dependent scattering is then:

I(λ) = [0.1×I_rayleigh+2.0×I_mie, 0.3×I_rayleigh+1.5×I_mie, 0.5×I_rayleigh+1.0×I_mie] ⊙ LUT

where LUT = [R, G, B] is the spectral lookup table (color data) and ⊙ denotes element-wise multiplication.

Attenuation

Light attenuation through the atmosphere follows an exponential decay:

A = exp(−τ × (1 − v̂_y) × 0.5)

where v̂_y is the vertical component of the view direction.
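The optical-depth, scattering, and attenuation terms above can be sketched as plain functions (in the project these computations live in a WebGL fragment shader; the names here are illustrative). The Mie term takes the already-evaluated phase value P_HG(cos θ, g) as an input so the sketch stays self-contained.

```javascript
// τ = T × ρ
function opticalDepth(thickness, density) {
  return thickness * density;
}

// I_rayleigh = 0.3 × (max(0, v̂_y))^0.5
function rayleigh(viewY) {
  return 0.3 * Math.sqrt(Math.max(0, viewY));
}

// I_mie = P_HG(cos θ, g) × τ, with the phase value passed in directly.
function mie(phaseValue, tau) {
  return phaseValue * tau;
}

// A = exp(−τ × (1 − v̂_y) × 0.5)
function attenuation(tau, viewY) {
  return Math.exp(-tau * (1 - viewY) * 0.5);
}
```

Note that attenuation is 1 (no loss) when looking straight up (v̂_y = 1) and strongest near the horizon, which is why the low sun reddens: the longer path removes more short-wavelength light.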

3. Audio Synthesis Architecture

The audio system uses additive synthesis with multiple oscillators, filters, and modulation sources. Each atmospheric parameter maps to specific synthesis parameters.

Particle Density → Frequency Modulation

Particle density ρ ∈ [0.1, 5.0] maps to frequency modulation (FM) parameters:

f_LFO = ρ × 1.0 Hz, d_FM = ρ × 0.5

Additionally, particle density controls wind-like amplitude modulation:

f_wind = 0.1 + (ρ/5.0)×2.0 Hz, d_wind = 0.05 + (ρ/5.0)×0.15

The frequency modulation is applied to all oscillators:

f_osc(t) = f_base + d_FM × sin(2π f_LFO t)
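The density-to-modulation mapping can be sketched as follows. This is a minimal illustration of the formulas above; the function and field names (`densityToFM`, `lfoFreq`, etc.) are chosen here, not taken from the project's source.

```javascript
// Map particle density ρ ∈ [0.1, 5.0] to FM and wind parameters.
function densityToFM(density) {
  return {
    lfoFreq: density * 1.0,                   // f_LFO (Hz)
    fmDepth: density * 0.5,                   // d_FM
    windFreq: 0.1 + (density / 5.0) * 2.0,    // f_wind (Hz)
    windDepth: 0.05 + (density / 5.0) * 0.15  // d_wind
  };
}

// f_osc(t) = f_base + d_FM × sin(2π f_LFO t)
function oscFreq(fBase, fm, t) {
  return fBase + fm.fmDepth * Math.sin(2 * Math.PI * fm.lfoFreq * t);
}
```

Denser atmospheres thus produce both faster vibrato (higher f_LFO) and stronger wind movement, keeping the two domains perceptually coupled.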

Particle Size Distribution → Timbre

Particle size mean μ_size ∈ [0.01, 2.0] and spread σ_size ∈ [0.1, 3.0] map to filter characteristics:

f_cutoff = 200 + (μ_size/2.0)×3800 Hz, Q = 1.0 + (σ_size/3.0)×9.0

The size spread also controls variation complexity:

Δ_detune = (σ_size/3.0)×5.0 cents, A_noise = (σ_size/3.0)×0.1

Each oscillator receives random detuning:

f_osc,i = f_base,i × 2^((Δ_detune × r_i)/1200)

where r_i ∈ [−0.5, 0.5] is a random value per oscillator.
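The size-to-timbre mapping can be sketched the same way. Names are illustrative; `sizeMean` is μ_size and `sizeSpread` is σ_size.

```javascript
// Map particle size statistics to filter and detune parameters.
function sizeToTimbre(sizeMean, sizeSpread) {
  return {
    cutoff: 200 + (sizeMean / 2.0) * 3800,  // f_cutoff (Hz)
    q: 1.0 + (sizeSpread / 3.0) * 9.0,      // filter resonance Q
    detuneCents: (sizeSpread / 3.0) * 5.0,  // Δ_detune (cents)
    noiseAmp: (sizeSpread / 3.0) * 0.1      // A_noise
  };
}

// f_osc,i = f_base,i × 2^((Δ_detune × r_i)/1200), with r_i ∈ [−0.5, 0.5].
function detunedFreq(fBase, detuneCents, r) {
  return fBase * Math.pow(2, (detuneCents * r) / 1200);
}
```

Larger mean particle size opens the filter (brighter timbre), while wider size spread increases resonance, detune, and noise — a broader particle distribution literally sounds more diffuse.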

Optical Thickness → Resonance Decay

Optical thickness T ∈ [0.1, 10.0] maps to reverb decay time:

τ_decay = 0.5 + (T/10.0) × 7.5 seconds

The reverb impulse response is dynamically generated with a quadratic decay envelope (a simple approximation of exponential decay):

h[n] = rand(n) × (1 − n/N)²

where N = sampleRate × τ_decay and rand(n) is white noise.
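The impulse-response generation can be sketched as below: white noise shaped by the (1 − n/N)² envelope. In the project this would fill an AudioBuffer feeding a ConvolverNode; here it returns a plain Float32Array so the mapping stands alone. Function name is illustrative.

```javascript
// Generate a noise impulse response whose length follows
// τ_decay = 0.5 + (T/10) × 7.5 seconds, with envelope (1 − n/N)².
function makeImpulse(sampleRate, thickness) {
  const decay = 0.5 + (thickness / 10.0) * 7.5; // τ_decay (s)
  const N = Math.floor(sampleRate * decay);
  const h = new Float32Array(N);
  for (let n = 0; n < N; n++) {
    const env = Math.pow(1 - n / N, 2);      // quadratic decay envelope
    h[n] = (Math.random() * 2 - 1) * env;    // white noise × envelope
  }
  return h;
}
```

At maximum optical thickness (T = 10) the tail stretches to 8 seconds, so a hazier atmosphere sounds like a larger, more reverberant space.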

Spectral LUT → Harmonic Distribution and Pitch

Each RGB color channel controls both harmonic amplitude and pitch frequency.

Harmonic Amplitude Distribution

The harmonic distribution function maps RGB values to oscillator amplitudes:

A_fundamental = R, A_low = 0.7R+0.3G, A_mid = 0.8G+0.2B, A_high = 0.6B+0.4G

where R, G, B ∈ [0.0, 2.0] are the color intensities. The amplitude for each harmonic h is given by the piecewise formula in the full thesis.

Pitch Frequency Mapping

Each color channel controls a distinct pitch range:

f_red ∈ [55, 110] Hz, f_green ∈ [110, 220] Hz, f_blue ∈ [220, 440] Hz
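The RGB-to-amplitude weights above, plus one way to place a channel value inside its pitch range, can be sketched as follows. The amplitude weights come directly from the formulas above; the linear pitch interpolation is an assumption made here for illustration — the thesis specifies only the ranges, not the interpolation law.

```javascript
// Amplitude weights per oscillator group, from R, G, B ∈ [0, 2].
function lutToAmps(r, g, b) {
  return {
    fundamental: r,              // A_fundamental = R
    low: 0.7 * r + 0.3 * g,      // A_low
    mid: 0.8 * g + 0.2 * b,      // A_mid
    high: 0.6 * b + 0.4 * g      // A_high
  };
}

// ASSUMPTION: linear map of a channel value in [0, 2] into its pitch
// range, e.g. red → [55, 110] Hz. The thesis gives only the ranges.
function channelPitch(value, fMin, fMax) {
  const t = Math.min(Math.max(value / 2.0, 0), 1);
  return fMin + t * (fMax - fMin);
}
```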

4. Wind and Air Movement Simulation

To create dynamic, non-static sound, the system simulates air movement through multiple modulation sources.

Wind LFOs

Two low-frequency oscillators create wind-like amplitude variations. Together with the filtered noise source, they produce the combined amplitude modulation:

A_total(t) = 1.0 + A_wind,1(t) + A_wind,2(t) + A_noise(t)

This modulates the gain of each oscillator: G_osc,i(t) = A_base,i × A_total(t).
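The combined modulation can be sketched per sample. The noise term is passed in by the caller here, since in the project it comes from a filtered noise source; names are illustrative.

```javascript
// One wind LFO: A_wind(t) = depth × sin(2π f t)
function windAmp(t, freq, depth) {
  return depth * Math.sin(2 * Math.PI * freq * t);
}

// A_total(t) = 1.0 + A_wind,1(t) + A_wind,2(t) + A_noise(t)
function totalAmp(t, lfo1, lfo2, noise) {
  return 1.0
    + windAmp(t, lfo1.freq, lfo1.depth)
    + windAmp(t, lfo2.freq, lfo2.depth)
    + noise;
}
```

Using two LFOs at unrelated rates avoids an obviously periodic tremolo: their sum drifts in and out of phase, much like gusting wind.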

5. Synthesis Signal Flow

The complete signal flow can be expressed as:

y(t) = Reverb[ Filter[ Σ G_osc,i(t) × sin(2π f_osc,i(t) t + φ_i) ] ]

where f_osc,i(t) includes FM modulation and detuning, G_osc,i(t) includes wind and noise amplitude modulation, φ_i is the phase offset, Filter is a lowpass filter with cutoff f_cutoff and resonance Q, and Reverb is a convolution reverb with decay time τ_decay.
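The oscillator-sum stage of this signal flow can be sketched per sample; the filter and reverb stages are omitted here, since in the project they are Web Audio nodes (a BiquadFilterNode and a ConvolverNode) rather than per-sample code. The plain-object oscillator representation is an assumption for illustration.

```javascript
// y_raw(t) = Σ G_osc,i(t) × sin(2π f_osc,i(t) t + φ_i)
// Each oscillator is { gain, freq, phase } evaluated at time t.
function mixSample(t, oscs) {
  let y = 0;
  for (const o of oscs) {
    y += o.gain * Math.sin(2 * Math.PI * o.freq * t + o.phase);
  }
  return y;
}
```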

6. Real-Time Parameter Updates

All mappings are updated in real time as parameters change: y(t) = M(p(t)), where p(t) is the parameter vector, M is the mapping function, and y(t) is the output vector (visual and audio). The system maintains continuity by interpolating parameter values smoothly rather than jumping to new targets, avoiding audible clicks and visible pops.
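One generic way to achieve such continuity is a one-pole smoother that moves each parameter a fraction of the way toward its target on every control tick. This is a sketch of the general technique, not the project's code; in Web Audio the equivalent is handled natively by AudioParam automation methods such as setTargetAtTime.

```javascript
// Move `current` a fraction `coeff` ∈ (0, 1] of the way to `target`.
// Repeated calls converge exponentially, so sudden target changes
// never produce a discontinuity in the smoothed value.
function smooth(current, target, coeff) {
  return current + (target - current) * coeff;
}
```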

7. Implementation Details

Visual Rendering: WebGL fragment shaders implementing the atmospheric scattering equations.

Audio Synthesis: Web Audio API with 9 oscillators, 1 FM LFO, 2 wind LFOs, 1 filtered noise source, 1 lowpass filter per voice, 1 shared convolution reverb. All parameters updated in real-time through AudioParam automation.

8. Discussion

Sunset Sonata demonstrates how physical parameters can be mapped to multiple perceptual domains simultaneously. Key insights:

  1. Unified Parameter Space: The same physical parameters control both visual and audio outputs.
  2. Non-Linear Mappings: The mappings involve complex relationships (exponential decay, harmonic series, phase functions).
  3. Dynamic Variations: Wind and noise modulation prevent static output, creating organic soundscapes.
  4. Real-Time Interactivity: The system responds immediately to parameter changes.

9. Conclusion

Sunset Sonata presents a novel approach to sonification and visualization, where atmospheric physics becomes a compositional tool. The project opens possibilities for interactive art installations, educational tools, data sonification, and exploration of cross-modal perception. Future work could explore more sophisticated atmospheric models, additional mapping strategies, machine learning for parameter optimization, and multi-user collaborative experiences.

Acknowledgments

This project explores the intersection of atmospheric physics, computer graphics, and sound synthesis, creating a unified representation of natural phenomena across visual and auditory domains.

References

Henyey, L. G., & Greenstein, J. L. (1941). Diffuse radiation in the galaxy. Astrophysical Journal, 93, 70–83.

Preetham, A. J., Shirley, P., & Smits, B. (1999). A practical analytic model for daylight. Proceedings of SIGGRAPH 99.

Nishita, T., Dobashi, Y., Kaneda, K., & Yamashita, H. (1996). Display method of the sky color taking into account multiple scattering. Pacific Graphics '96.

Chowning, J. M. (1973). The synthesis of complex audio spectra by means of frequency modulation. Journal of the Audio Engineering Society, 21(7), 526–534.

Roads, C. (1996). The Computer Music Tutorial. MIT Press.

Web Audio API. W3C Recommendation. https://www.w3.org/TR/webaudio/