FluidBuf* Multithreading Behaviour
Extension

A tutorial on the multithreading behaviour of offline processes of the Fluid Corpus Manipulation Toolkit

Description

The Fluid Corpus Manipulation Toolkit [1] provides an open-ended, loosely coupled set of objects to break up and analyse sound in terms of slices (segments in time), layers (superpositions in time and frequency) and objects (configurable or discoverable patterns in sound). Many objects have audio-rate and buffer-based versions.

Some buffer-based processes can be very CPU intensive, and so require some consideration of SuperCollider's underlying architecture. The FluidBuf* objects have different entry points, from transparent usage to more advanced control, to allow the creative coder to care as much as they need to. The overarching principle is to send the CPU-intensive tasks to their own background thread to avoid blocking the Server and its Non-Real-Time thread, whilst providing ways to cancel the tasks and monitor their progress.

In SuperCollider, the server can delegate to a non-real-time thread any task that is unsuitable for the real-time context (too long, too intensive), for instance loading a soundfile into a buffer. This process is explained in Buffer and Client vs Server. For comprehensive detail, see Ross Bencina's 'Inside scsynth' in Chapter 26 of the SuperCollider Book.

Basic Usage

Some FluidBuf* tasks can take much longer than these native tasks, so we can run them in their own worker thread to avoid clogging the server's command queue, which would interfere with, for example, filling buffers whilst these processes are running.

There are two basic approaches to interacting with these objects.

The first is simply to use the process and processBlocking methods. process will use a worker thread (for those objects that allow it), whereas processBlocking will run the job in the Server command queue.

NOTE: 'blocking' in this context refers to the server's command queue, not to the language. Both methods return immediately in the language.

It is important to understand that there are multiple asynchronous things at work here, which can make reasoning about all this a bit tricky. First, and most familiar, the language and the server are asynchronous, and we are used to the role that things like action functions play in managing this asynchrony. When non-real-time jobs, like allocating buffers or running our Buf* objects in processBlocking mode, are submitted, they are processed in order by the server's command queue thread, and so will complete in the order in which they were invoked. However, when we launch jobs in their own worker threads, they can complete in any order, so we have a further layer of asynchronous behaviour to think about.

If we wish to block sclang on a Buf* job, then this can be done in a Routine by calling wait on the instance object that process and processBlocking return.

It is also possible to invoke these Buf* objects directly on the server through a *kr method, which makes a special UGen to dispatch the job from a synth. This is primarily useful for running a lot of jobs as a batch process, without needing to communicate too much with the language. Meanwhile, the object instances returned by process expose an instance kr method, which can be useful for monitoring the progress of a job running in a worker thread via a scope.

For this tutorial, we will use a demonstrative class, FluidBufThreadDemo, which does nothing except wait on its thread of execution before sending back one value – the amount of time it waited – via a Buffer.

This code will wait for 1000ms, and then print 1000 to the console:
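Something along these lines should work (a sketch: the argument order of server, destination buffer and wait time in milliseconds, plus the action keyword, is assumed from the description above):

// allocate a one-frame destination buffer (run this line first)
b = Buffer.alloc(s, 1);

// run the demo job in a worker thread; the action function receives the destination buffer
c = FluidBufThreadDemo.process(s, b, 1000, action: { |buf|
	buf.get(0, { |val| val.postln });
});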

As an alternative to using a callback function, we could use a Routine and wait:
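A sketch, reusing the destination buffer b from above:

(
Routine {
	c = FluidBufThreadDemo.process(s, b, 1000);
	c.wait;                            // blocks this Routine until the job is done
	b.get(0, { |val| val.postln });    // prints 1000
}.play;
)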

What is happening:

  1. The class will check the arguments' validity
  2. The job runs on a new thread (in this case, doing nothing but waiting for 1000 ms, then writing that number to index [0] of a destination buffer)
  3. The language receives an acknowledgment that the job is done
  4. It calls the user-defined function with the destination buffer as its argument. In this case, we call get on the buffer and print the value at index 0.

Cancelling

The 'process' method returns an instance of FluidBufProcessor, which manages communication with a job on the server. This gives us a simple interface to cancel a job:
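For example (a sketch, assuming the cancelling method is simply called cancel):

// start a very long job...
c = FluidBufThreadDemo.process(s, b, 100000, action: { "this should never print".postln });

// ...and cancel it: the action function will not be called
c.cancel;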

.kr and .*kr Usage

The FluidBuf* classes all have both instance-scope and class-scope kr and *kr methods, which do slightly different things.

The instance method can be used to instantiate a UGen on the server that will monitor a job in progress; however, the UGen plays no role in the lifetime of the job. It is intended as a convenient way to look at the progress of a threaded job using scope or poll. Importantly, note that killing the synth has no effect on the job that's running.
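A sketch of such monitoring, assuming the instance kr method outputs the progress of the running job:

// start a long job, keeping hold of the processor instance
c = FluidBufThreadDemo.process(s, b, 10000);

// watch its progress on a scope
x = { c.kr }.scope;

// freeing the monitoring synth does not affect the running job
x.free;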

The class method, *kr – more common with UGens – works differently. The UGen that this creates actually spawns a non-real-time job from the synth (so is like calling process from the server), and there is no further interaction with the language. In this context, killing the synth cancels the job.

To cancel a job set up in this way, we just free the synth and the background thread will be killed.
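A sketch, assuming the destination buffer and the wait time are the first two arguments of *kr:

// spawn the job from a synth: no further interaction with the language is needed
x = { FluidBufThreadDemo.kr(b, 10000) }.play;

// freeing the synth cancels the job and kills its background thread
x.free;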

Monitoring .*kr Task Completion

When running a job wholly on the server with *kr, you may still want to know in the language when it has finished. The UGens spawned with *kr set the done flag, so they can be used with UGens like Done and FreeSelfWhenDone to manage things when a job finishes.

For instance, using Done and SendReply, we can send a message back to the language upon completion:
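A sketch (the OSC path and the OSCdef key are arbitrary names chosen for this example):

(
OSCdef(\threadDemoDone, { "job done".postln }, '/threadDemoDone');
x = {
	var done = Done.kr(FluidBufThreadDemo.kr(b, 3000));
	SendReply.kr(done, '/threadDemoDone', 1);
	FreeSelf.kr(done);   // free this synth once the job has finished
	Silent.ar;
}.play;
)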

Retriggering

FluidBuf* *kr methods all have a trigger argument, which defaults to 1 (meaning that, by default, the job will start immediately). This can be useful for either deferring execution, or for repeatedly triggering a job for batch processing.
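A sketch of deferring and retriggering, assuming the trigger is the third argument of FluidBufThreadDemo.kr:

// start with the trigger at 0, so nothing runs yet
x = { FluidBufThreadDemo.kr(b, 500, \trig.tr(0)) }.play;

// each trigger from the language starts the job again
x.set(\trig, 1);
x.set(\trig, 1);

x.free;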

Opting Out of Worker Threads

Whilst using a worker thread makes sense for long-running jobs, the overhead of creating the thread may outweigh any advantages for very small tasks. This is because a certain amount of work needs to be done before and after each job, particularly copying the buffers involved to temporary memory to avoid working on scsynth's memory outside of scsynth's official threads.

For these small jobs, you can opt out of using a worker thread by calling 'processBlocking' on a FluidBuf* object, instead of 'process'. This will run a job directly in the server's command FIFO. If your SCIDE status bar turns yellow, then be aware that this means you are clogging the queue and should consider using a thread instead.
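For example (a sketch, reusing the destination buffer b from above):

// a short job run directly in the command queue: no worker thread is created
c = FluidBufThreadDemo.processBlocking(s, b, 10, action: { |buf|
	buf.get(0, { |val| val.postln });
});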

It is worth mentioning one exception to this behaviour among the FluidBuf* objects: FluidBufCompose will always run directly in the command FIFO, because the overhead of setting up a worker thread would always be greater than the amount of work this object has to do.

You can compare these behaviours below. The blocking version will run slightly faster than the default non-blocking one.
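A rough comparison, timing both variants from a Routine (a sketch; the exact figures will depend on your machine):

(
Routine {
	var t;

	t = Main.elapsedTime;
	FluidBufThreadDemo.processBlocking(s, b, 10).wait;
	("blocking: % s".format(Main.elapsedTime - t)).postln;

	t = Main.elapsedTime;
	FluidBufThreadDemo.process(s, b, 10).wait;
	("threaded: % s".format(Main.elapsedTime - t)).postln;
}.play;
)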

[1] - This toolkit was made possible thanks to the FluCoMa project, https://www.flucoma.org, funded by the European Research Council ( https://erc.europa.eu ) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 725899)