Wave Field Synthesis
WFSCollider-Class-Library/Reference (extension) | WFSCollider

Extension

An introduction to Wave Field Synthesis

General

Wave Field Synthesis (WFS) is a sound reproduction technology designed specifically for spatial audio rendering. Virtual acoustic environments are simulated and synthesised using large numbers of loudspeakers. The innovation of this technology is that sound can appear to emanate from any desired virtual starting point and then move through space along many possible defined spatial pathways. The WFS system of The Game of Life consists of 192 speakers, arranged in a square formation of 10 by 10 metres. Within this formation sounds can be composed to move through the square space; more interestingly, it is also possible to move sounds outside of the loudspeaker square. So what makes this phenomenon possible, considering that the loudspeakers are all physically directed inwards?

Sound waves can be compared, by analogy, to concentric water waves. Suppose one throws a stone into the water. At the point where the stone makes contact with the water surface, circular waves are created. Imagine that these waves encounter a series of boundary points consisting of a row of poles: behind these poles new small waves emerge. If the poles are placed equidistant from each other, these small waves eventually recombine into the original wave. The WFS system works in a similar fashion. Instead of poles in water, the sound waves encounter the boundary points of individual loudspeakers placed equidistant from each other.

The composer can ‘throw’ their sounds into the space like stones into the water. Using specially designed software, the composer can programme sounds to move in space and to follow many possible trajectories. A sound can appear to originate from a fixed point and remain there, or it can be programmed to move in patterns within and outside the square formation of the loudspeaker arrays. This possibility of moving sounds ‘inside’ and ‘outside’ the direct listening environment offers endless creative opportunities for the artist. The composer can also choose to have sound events manifest physically both inside and outside the speaker array formation. For example, it is possible to render the sound of thunder rumbling in the far-off distance just as one would experience it in nature, simply by reproducing the appropriate sound pressure levels within the loudspeaker square; the ‘outer’ sound waves are then reconstructed by multiple loudspeakers.

Conventional sound reproduction techniques such as stereo and surround merely suggest spatial movement, perceptually tricking the brain using principles of psychoacoustics. The spatiality of WFS, however, is real. This is elaborated on further below.

Technical information

The WFS technique is based on a theoretical principle formulated in the 17th century by the Dutch mathematician and physicist Christiaan Huygens. According to this principle, a spherical wave passes its energy to its neighbouring ‘particles’, each of which in turn radiates another spherical wave, in such a way that the wave at position X can be predicted. In other words, any wave front can be regarded as a superposition of elementary spherical waves, and any wave front can therefore be synthesised from such elementary waves.
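Huygens’s principle can be illustrated numerically. The sketch below (plain Python, with an arbitrary, hypothetical geometry) places a row of secondary sources — the ‘poles’ of the water analogy above — between a primary source and a listener, and shows that the shortest path through any secondary source equals the direct path: the re-radiated elementary waves rebuild a wave front that arrives exactly when the original would.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points (metres)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical geometry: a primary source behind a line of
# secondary sources ("poles"), and a listener in front of them.
source    = (0.0, -2.0)
listener  = (0.0,  3.0)
secondary = [(x / 10.0, 0.0) for x in range(-50, 51)]  # poles along y = 0

# Path length through each secondary source: primary -> pole -> listener.
paths = [dist(source, p) + dist(p, listener) for p in secondary]

# Huygens/Fermat: the shortest two-leg path equals the direct path, so
# the wave front reconstructed by the poles arrives exactly on time.
print(min(paths))              # shortest path via a pole
print(dist(source, listener))  # direct path: the same length
```

The pole lying on the straight line from source to listener carries the first arrival; its neighbours fill in the rest of the curved wave front slightly later, which is exactly how a loudspeaker array reconstructs the field.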

The WFS system synthesises wave fronts according to this principle of Huygens. The basic procedure of WFS was developed in 1988 by Professor Berkhout at the Delft University of Technology. Unlike conventional audio procedures (e.g. stereo/surround), the perception of these wave fields does not depend on psychoacoustic phantom sound source perception: the WFS sound field is actually reconstructed physically.

The positioning of sound within the stereo system (on which surround is also based) derives from intensity differences or short time differences between the two speakers. In the real world, sound from a natural source in the environment arrives at one ear slightly earlier than at the other (unless the source is situated straight in front of the listener). For frequencies below 1,500 Hz we use this inter-aural time difference to determine which physical direction the sound is coming from. With the stereo reproduction technique, the sound of the left loudspeaker arrives first at the left ear and later at the right ear, and sound from the right loudspeaker arrives first at the right ear and later at the left ear. The human brain ‘makes sense’ of these time and intensity differences to determine the ‘apparent location’ of the sound source.

For the perception of natural sound sources above 1,500 Hz, use is made of inter-aural intensity differences as well as a colouration of the sound, caused amongst other factors by the filtering effect of the human head and outer ears. Again the normal stereo system is deficient in rendering this spatial information and fails to convince, because the sound comes from only two loudspeakers: phantom locations emerge, and the sound placement is very difficult to localise with any precision. In this situation the brain determines the apparent location of the sound from the apparent intensity difference. The listener must be situated exactly in the middle between the speakers in order to perceive the positioning of stereo sound correctly, because this is the only spot where the ratios of the intensity differences are accurate.

With the WFS technique, the wave front itself is reconstructed by means of a large number of loudspeakers. Any desired wave front can be reconstructed — even one whose virtual source appears closer to the listener than the loudspeakers themselves. This cannot be achieved with the stereo technique, because there the loudspeakers are the sources themselves and do not reconstruct a source.

Practically speaking, a computer controls a large number of individual loudspeakers, arranged in arrays around the listener, and activates each individual loudspeaker membrane at exactly the moment the virtual wave front would pass through its position. Sounds are no longer simulated by psychoacoustic ‘tricks’, as in stereo and surround systems; sound reproduction is based not on psychoacoustic principles but on purely physical ones.
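The per-loudspeaker timing described above can be sketched as follows. This is a deliberately simplified illustration, not the actual WFSCollider implementation: each speaker is driven with the delay the virtual wave front needs to reach its position and a gain that falls off with distance, whereas real WFS driving functions also include a square-root distance weighting and frequency-dependent filtering.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def driving_params(virtual_source, speakers):
    """Per-speaker (delay in seconds, linear gain) for a virtual point source.

    Simplified sketch: delay is the travel time from the virtual source
    to the speaker, and gain falls off as 1/distance.
    """
    out = []
    for sx, sy in speakers:
        d = math.hypot(sx - virtual_source[0], sy - virtual_source[1])
        out.append((d / SPEED_OF_SOUND, 1.0 / max(d, 1e-6)))
    return out

# Hypothetical line array: 8 speakers along y = 0, 0.5 m apart,
# with a virtual source 3 m behind the array's centre.
speakers = [(x * 0.5, 0.0) for x in range(8)]
params = driving_params((1.75, -3.0), speakers)
for (delay, gain), (sx, _) in zip(params, speakers):
    print(f"speaker at x={sx:4.1f} m: delay={delay * 1000:6.2f} ms, gain={gain:.3f}")
```

The speakers nearest the virtual source fire first and loudest; the outer speakers follow slightly later and softer, so their superposed elementary waves trace out the curved wave front of a source that is not physically there.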