Users approaching Ambisonics are usually presented with two avenues for authoring an Ambisonic soundfield: capture a natural soundfield directly with a Soundfield microphone, or author a planewave from a monophonic signal. SuperCollider's inbuilt PanB provides the latter solution.
The ATK provides a much wider palette of authoring tools via FoaEncode. These include omnidirectional, frequency spread and frequency diffuse, stereo, Super Stereo, UHJ, A-format, pantophonic, periphonic and ZoomH2 encoders.
In combination with the ATK's imaging tools, sound images can then be compositionally controlled as required.
The examples below are intended to briefly illustrate some of the first order encoding options made available in the Ambisonic Toolkit.
As the Ambisonic technique is a hierarchical system, numerous options for playback are possible. These include two channel stereo, two channel binaural, pantophonic and full 3D periphonic. With the examples below, we'll take advantage of this by first choosing a suitable decoder with which to audition.
Choose a decoder suitable for your system, as illustrated here. You'll end up defining ~decoder and ~renderDecode.
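As a minimal sketch, and assuming the ATK decoder kernels are installed, a binaural choice might look like this. The shape of ~renderDecode here, a function taking the encoded signal and the decoder, is the convention the sketches below rely on:

// boot the server first; kernel decoders load buffers
s.boot;

// choose a decoder matching your playback system
// (binaural here; e.g. FoaDecoderMatrix.newQuad for a quad rig)
~decoder = FoaDecoderKernel.newCIPIC;

// render function: decode a B-format signal via the chosen decoder
~renderDecode = { arg sig, decoder;
    FoaDecode.ar(sig, decoder)
};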
We have many choices to make when encoding a mono source. These examples include following an encoder with a transformer.
Encoded as an omnidirectional soundfield (no space!), PinkNoise is used as the example sound source.
In a well-aligned, dampened studio environment, this usually sounds "in the head". FoaPush is used to "push" the omnidirectional soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains omnidirectional. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
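Here is a minimal sketch of the omni encoder followed by FoaPush, assuming ~decoder and ~renderDecode are defined as above:

(
{
    var sig, angle, azim;
    var encoder = FoaEncoderMatrix.newOmni;    // omnidirectional encoder

    // mouse control: Y = push angle (bottom = omni, top = planewave)
    //                X = incident azimuth (left = pi, right = -pi)
    angle = MouseY.kr(pi/2, 0);
    azim = MouseX.kr(pi, pi.neg);

    sig = PinkNoise.ar(-6.dbamp);              // mono test source
    sig = FoaEncode.ar(sig, encoder);          // encode to B-format
    sig = FoaTransform.ar(sig, 'push', angle, azim);

    ~renderDecode.value(sig, ~decoder)
}.play
)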
Encoded as a frequency spread soundfield, PinkNoise is used as the example sound source. This sounds spread across the soundfield, with the various frequency components appearing in various places. FoaPush is used to "push" the spread soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains spread. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
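A sketch follows; the spread encoder is kernel-based, so create it with the server booted and free it when done:

// create the kernel encoder (server must be booted)
~encoder = FoaEncoderKernel.newSpread;

(
{
    var sig, angle, azim;

    angle = MouseY.kr(pi/2, 0);      // push angle: bottom = spread, top = planewave
    azim = MouseX.kr(pi, pi.neg);    // incident azimuth

    sig = PinkNoise.ar(-6.dbamp);
    sig = FoaEncode.ar(sig, ~encoder);
    sig = FoaTransform.ar(sig, 'push', angle, azim);

    ~renderDecode.value(sig, ~decoder)
}.play
)

// free the kernel when finished
~encoder.free;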
Encoded as a frequency diffused soundfield, PinkNoise is used as the example sound source. This sounds diffuse across the soundfield, with the various frequency components appearing in various places, with various phases. FoaPush is used to "push" the diffuse soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains diffuse. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
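The sketch is the same as for the spread encoder, swapping in the diffuse kernel:

~encoder = FoaEncoderKernel.newDiffuse;   // kernel encoder; server must be booted

(
{
    var angle = MouseY.kr(pi/2, 0);    // push angle: bottom = diffuse, top = planewave
    var azim = MouseX.kr(pi, pi.neg);  // incident azimuth
    var sig = FoaEncode.ar(PinkNoise.ar(-6.dbamp), ~encoder);

    sig = FoaTransform.ar(sig, 'push', angle, azim);
    ~renderDecode.value(sig, ~decoder)
}.play
)

~encoder.free;   // free the kernel when finished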
As with mono sources, we have numerous options when encoding stereo sources. Here are a few.
In this example we first encode a single channel of PinkNoise into a stereophonic signal with Pan2. FoaZoom is then used to balance the soundfield across the x-axis (front/back).
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the left to right position of the stereo panned source and MouseY the FoaZoom front to back position (distortion angle). Moving the mouse in a circular motion results in a circular motion of the sound.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
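A sketch of this arrangement; encoding the Pan2 output with the stereo encoder at pi/2 (hard left/right) and a zoom range of +/- pi/4 are illustrative choices:

(
{
    var sig, pan, angle;
    var encoder = FoaEncoderMatrix.newStereo(pi/2);   // L/R encoded at +/- pi/2

    pan = MouseX.kr(-1, 1);             // Pan2 position, left to right
    angle = MouseY.kr(pi/4, pi.neg/4);  // zoom: top = front, bottom = back

    sig = PinkNoise.ar(-6.dbamp);
    sig = Pan2.ar(sig, pan);            // stereo pan the mono source
    sig = FoaEncode.ar(sig, encoder);   // encode the stereo pair to B-format
    sig = FoaTransform.ar(sig, 'zoom', angle);   // balance front/back on the x-axis

    ~renderDecode.value(sig, ~decoder)
}.play
)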
For this example we'll look at encoding stereo soundfiles.
The stereo encoder places the left channel at +pi/4 and the right at -pi/4. Compare to the Super Stereo encoder below.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
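A sketch, with a hypothetical soundfile path to substitute with your own:

// read a stereo soundfile (hypothetical path)
~buf = Buffer.read(s, "/path/to/stereo/soundfile.wav");

(
{
    var sig;
    var encoder = FoaEncoderMatrix.newStereo(pi/4);   // L at +pi/4, R at -pi/4

    sig = PlayBuf.ar(2, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, encoder);
    ~renderDecode.value(sig, ~decoder)
}.play
)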
Super Stereo is the classic Ambisonic method for encoding stereophonic files, and is considered optimal for frontal stereo encoding.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
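A sketch using the kernel-based Super Stereo encoder (hypothetical soundfile path):

~buf = Buffer.read(s, "/path/to/stereo/soundfile.wav");   // stereo source
~encoder = FoaEncoderKernel.newSuper;                     // server must be booted

(
{
    var sig = PlayBuf.ar(2, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, ~encoder);   // Super Stereo encode to B-format
    ~renderDecode.value(sig, ~decoder)
}.play
)

~encoder.free;   // free the kernel when finished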
Ambisonic UHJ is the native stereo-compatible format for Ambisonics.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
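A sketch; the input here must already be UHJ-encoded stereo (hypothetical path):

~buf = Buffer.read(s, "/path/to/uhj/soundfile.wav");   // UHJ-encoded stereo
~encoder = FoaEncoderKernel.newUHJ;                    // server must be booted

(
{
    var sig = PlayBuf.ar(2, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, ~encoder);   // transcode 2-channel UHJ to B-format
    ~renderDecode.value(sig, ~decoder)
}.play
)

~encoder.free;   // free the kernel when finished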
You may like to review the discussion on Gerzon's experimental tetrahedral recording, along with the discussion of the spatial domains.
Here we encode four channels of decorrelated PinkNoise as a decorrelated soundfield, resulting in a maximally diffuse soundfield. FoaPush is used to "push" the soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction. This technique gives the opportunity to continuously modulate between a directional and a diffuse soundfield.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains omnidirectional. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
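A sketch of the A-format encoding, where multichannel expansion yields four decorrelated noise sources:

(
{
    var sig, angle, azim;
    var encoder = FoaEncoderMatrix.newAtoB;   // matrix A-format to B-format

    angle = MouseY.kr(pi/2, 0);      // push angle: bottom = diffuse, top = planewave
    azim = MouseX.kr(pi, pi.neg);    // incident azimuth

    sig = PinkNoise.ar(-12.dbamp.dup(4));   // four decorrelated noise sources (A-format)
    sig = FoaEncode.ar(sig, encoder);
    sig = FoaTransform.ar(sig, 'push', angle, azim);

    ~renderDecode.value(sig, ~decoder)
}.play
)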
This example is somewhat unconventional as regards the literature. Four microphones (omnis) are placed around the performer in a tetrahedron. This arrangement is then matrixed into B-format.
As the performer rotates and moves about, the image shifts through the sound-scene. In a compositional context, FoaPush could be used to control the soundfield.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
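A sketch, assuming a 4-channel recording whose channel order matches the encoder's A-format convention (hypothetical path):

~buf = Buffer.read(s, "/path/to/tetrahedral/recording.wav");

(
{
    var sig;
    var encoder = FoaEncoderMatrix.newAtoB;   // matrix the four omnis to B-format

    sig = PlayBuf.ar(4, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, encoder);
    ~renderDecode.value(sig, ~decoder)
}.play
)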
The pantophonic encoder may be used to transcode from one format to another. This example transcodes an octophonic recording to the decoder you've chosen.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
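A sketch, assuming eight equally spaced horizontal channels and a hypothetical path:

~buf = Buffer.read(s, "/path/to/octophonic/recording.wav");

(
{
    var sig;
    var encoder = FoaEncoderMatrix.newPanto(8);   // 8 equally spaced horizontal positions

    sig = PlayBuf.ar(8, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, encoder);    // transcode to B-format
    ~renderDecode.value(sig, ~decoder)   // then decode to your chosen rig
}.play
)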
The directions encoder may be used to transcode from one format to another. This example transcodes a periphonic 12-channel recording to the decoder you've chosen.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
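A sketch; the two-ring layout below is an assumed channel arrangement, so match the directions to your recording (hypothetical path):

// 12 directions as [azimuth, elevation] pairs: two rings of six at +/- pi/6
~directions = 6.collect({ arg i; [i * pi/3, pi/6] })
    ++ 6.collect({ arg i; [i * pi/3, pi.neg/6] });

~buf = Buffer.read(s, "/path/to/periphonic/recording.wav");

(
{
    var sig;
    var encoder = FoaEncoderMatrix.newDirections(~directions);

    sig = PlayBuf.ar(12, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, encoder);    // transcode to B-format
    ~renderDecode.value(sig, ~decoder)
}.play
)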
While no longer manufactured, the ZoomH2 is a convenient, portable handheld digital audio recorder, and the ATK includes an encoder for it. The device only records horizontal surround (pantophonic), so we don't get height.
As the ZoomH2 is a relatively inexpensive piece of equipment, its imaging isn't always as consistent as we'd prefer. To remedy this, the Y gain is tweaked to widen the image, and dominance is applied to stabilise the front.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
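A sketch; the k (Y gain) and dominance values are illustrative, to be tuned by ear (hypothetical path):

~buf = Buffer.read(s, "/path/to/zoomh2/recording.wav");   // 4-channel ZoomH2 recording

(
{
    var sig;
    var encoder = FoaEncoderMatrix.newZoomH2(k: 1.5);   // boost Y to widen the image (illustrative)
    var xformer = FoaXformerMatrix.newDominateX(3);     // ~3 dB dominance to stabilise the front

    sig = PlayBuf.ar(4, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, encoder);
    sig = FoaXform.ar(sig, xformer);
    ~renderDecode.value(sig, ~decoder)
}.play
)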
As described here, the ZoomH2 encoder reverses the labels for front and back of the ZoomH2. This is done to favour the use of the device as a roving, hand-held recorder, with the display facing the operator.
If we wish to respect the labelled orientation of the device, as Courville does in the example below, we'll need to either adjust the angles argument or apply FoaXform with FoaXformerMatrix's *newMirrorX. For this example, we'll set angles = [3/4*pi, pi/3], which are the angles specified in the ZoomH2 documentation.
As the ZoomH2 is a relatively inexpensive piece of equipment, its imaging isn't always as consistent as we'd prefer. To remedy this, the Y gain is tweaked to widen the image.
If you haven't already chosen a ~decoder and defined ~renderDecode, do so now.
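A sketch with the angles argument set as above; k remains an illustrative tweak (hypothetical path):

~buf = Buffer.read(s, "/path/to/zoomh2/recording.wav");   // 4-channel ZoomH2 recording

(
{
    var sig;
    // angles respecting the labelled orientation of the device
    var encoder = FoaEncoderMatrix.newZoomH2([3/4*pi, pi/3], k: 1.5);

    sig = PlayBuf.ar(4, ~buf, BufRateScale.kr(~buf), loop: 1);
    sig = FoaEncode.ar(sig, encoder);
    ~renderDecode.value(sig, ~decoder)
}.play
)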