making of    

  Liszt From Space


This page is an exploratory essay about how Liszt From Space came into existence.
Behind this minimalist animation lies a lot of research and work, eventually culminating in the development of the CymaSonics Modular Editor. I hope this little journey will be interesting to the reader, but otherwise no claims are made about its value ;)


The text is split into these categories: background, CymaSoniXplorer, modular basics, torus-helix-helix, arbitrary shapes, multiple shapes, particle scene and Liszt From Space.

background 

In 2009, Jan Zehn introduced me to a concept of form-creation and form-description called CymaSonics. It is based on arithmetic, the laws of motion and wave propagation, quantum mechanics and a fair piece of numerology. Certainly worth its own books, CDs, universities and all that kind of reception, but I'm only going to scratch the surface of the relevant aspects here. CymaSonics is about the properties of Lissajous curves as well as water surfaces, grave chambers and speaker systems, and it is an endless source of visual, acoustic and digital art.

I soon found myself exploring the numerical realms, especially those related to harmonics, and a lot of tiny programs, patches and blatant hacks originated, each trying to get a glimpse of an overview of some of the spaces that opened up. As an example, imagine the rich universe of Lissajous forms.

In its basic form, the two-dimensional Lissajous curve has three parameters: f1, f2 and p, meaning two frequencies and a phase. The frequencies are normally integers and the phase is normally an integer fraction of 360, which still results in an overwhelming variety of forms. There are repetitions and mirrored copies, and the space is somewhat predictable. The same goes for three dimensions, although it already gets more complex there. If you add another set of sine waves, each with its own frequency and phase settings, that modulate the base curve, it gets less predictable and you are left to try out many parameters in order to understand the general properties. Of course, the resulting shapes are always related to properties of integer arithmetic and as such to music theory. Indeed, everything seems to be connected by fundamental mathematical laws, and an increasing number of little programs were created, each helping to investigate a piece of the puzzle.
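To make this concrete, here is a minimal Python sketch of a basic and a once-modulated Lissajous curve. This is not code from any of those little programs; the modulator settings fm, pm and amount are made up for illustration.

    import numpy as np

    # basic two-dimensional Lissajous curve: two frequencies and a phase
    f1, f2, p = 3, 4, np.pi / 2          # p: an integer fraction of a full turn
    t = np.linspace(0.0, 2.0 * np.pi, 2000)
    x = np.sin(f1 * t)
    y = np.sin(f2 * t + p)

    # one extra level: a second set of sine waves modulates the base curve
    fm, pm, amount = 7, 0.0, 0.3         # made-up modulator settings
    xm = np.sin(f1 * t + amount * np.sin(fm * t + pm))
    ym = np.sin(f2 * t + p + amount * np.sin(fm * t + pm))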


To cope with the complexity, certain easy-to-use interfaces had to be developed. Let's stick with the example of the level-2 Lissajous curve mentioned above. Three base oscillators plus three modulating oscillators, with all methods of modulation described in a modulation matrix, result in about 65 parameters. Each new level of modulation increases that number exponentially. So there needed to be a way to explore those higher levels intuitively, without maintaining huge, screen-filling lists of numbers of which only a certain subset is needed most of the time.


One day in 2010, I tried to implement a node-based modular version of the Lissajous generator in Processing. I found that object-oriented languages made it quite easy to program the node-based GUI as well as the functionality behind it. Much later I also gained the experience to implement these things efficiently. :)


CymaSoniXplorer

So this was a bit of the background of my inspiration. Later in 2010, Jan and I started working on the Modular Editor. It was designed as an environment in which logical, mathematical and geometrical nodes would interact freely, mainly for the creation of sound. It was really a prototyping tool to quickly wire up and test circuits for the creation of scientific art. After a while, the palette of available modules was sufficient to create some really nice self-running, audio-visual patches. By far the biggest of them was Liszt From Space, which then won the VisuaLiszt Award at the Fulldome Festival Jena, 2011.

modular basics 

The development of the Editor was greatly inspired by Native Instruments' Reaktor, with which I had played around since 2000. A few decisions were made regarding its design and workings. First of all, each parameter of each module should be visible as a plain number and should be modulatable. To make the implementation easier and the whole concept more consistent, it was also decided to calculate everything at the sample rate. There is no distinction between events and audio samples. For each audio sample the whole patch is calculated. That means that each module introduces a one-sample delay: five serial modules need five sample-steps until the input is processed by the last module. While this made programming easy, it later led to some brain-twisting in more advanced patches where sample-accurate processing was significant.
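A rough Python sketch of that processing model, assuming a chain of trivial gain modules (the class and names are mine, not the Editor's): every module reads the output its neighbour produced on the previous sample-step, so the signal travels one module per sample.

    class Gain:
        """a trivial module: scales its input by a constant factor"""
        def __init__(self, factor):
            self.factor = factor
            self.inp = 0.0
            self.out = 0.0   # what other modules see this sample-step

        def step(self):
            self.out = self.inp * self.factor

    chain = [Gain(0.5) for _ in range(5)]

    def process_sample(x):
        chain[0].inp = x
        for a, b in zip(chain, chain[1:]):
            b.inp = a.out        # previous step's output: one-sample delay
        for m in chain:
            m.step()
        return chain[-1].out     # non-zero only after five sample-steps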

The image below shows the interface containing a very basic patch, namely one drawing a circle.

Most of these images are much larger than this article's style allows. To see the details, please right-click and view the image.

You can see the canonical oscillator, which is phase-synced by a start gate, a convenient module that issues a gate whenever the processing (or off-line rendering) starts. Two 2d-oscillographs show the signals of the sine and cosine outputs of the oscillator. These outputs are also fed to the scope 3d, the main drawing device in the Modular Editor. It has a lot of parameters, the most important being X, Y and Z. On every sample-step the scope 3d draws a pixel at the given location. One can ignore the Z input, in which case it acts as a 2d-oscillograph. When Z is used, the 3d-position is projected onto the screen by standard perspective projection or optionally by fisheye-lens projection, which was used for the fulldome productions. There are lots of details about this module, but I'll keep it simple here. As said, each sample-step causes one drawn pixel. At a sample rate of 44.1 kHz and a screen rate of 30 fps, that makes 1470 pixels per frame. The whole screen fades to black within an adjustable time, so the screen shows the trail or history of everything that got drawn.
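In pseudo-numbers, the drawing loop of this circle patch might look like the following Python sketch. The screen size, fade time and coordinate scaling are assumptions of mine, not the Editor's values.

    import numpy as np

    SR, FPS = 44100, 30
    PIXELS_PER_FRAME = SR // FPS         # 44100 / 30 = 1470 pixels per frame
    FADE_TIME = 0.5                      # assumed fade-to-black time in seconds

    screen = np.zeros((480, 640))
    phase = 0.0

    def render_frame(freq=1.0):
        global phase
        # fade the whole screen towards black, keeping the trail
        screen[...] = np.maximum(screen - 1.0 / (FADE_TIME * FPS), 0.0)
        for _ in range(PIXELS_PER_FRAME):
            x = np.sin(2 * np.pi * phase)    # the oscillator's sine output
            y = np.cos(2 * np.pi * phase)    # the oscillator's cosine output
            px = int((x * 0.4 + 0.5) * 639)  # assumed mapping to pixels
            py = int((y * 0.4 + 0.5) * 479)
            screen[py, px] = 1.0             # scope 3d: one pixel per sample-step
            phase += freq / SR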

Through a small modification in the next patch, the visual output gets much more interesting. A second oscillator is used to rotate the circle about its y-axis while it is drawn. The result is a spiral on the surface of a sphere. Note that the saw output (also called ramp) is used to control the rotation angle. The second oscillator runs at ten hertz, so for every full xy-circle, ten rotations about the y-axis are performed. The inputs of the rotation-module are called X, Y and rotate Z, but any axis-aligned rotation can be performed by the right wiring, though it takes some thought to correctly set up a stack of rotations.
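Numerically, the modification amounts to something like this sketch (frequencies as in the patch; the rest is my transcription, not the Editor's code):

    import numpy as np

    SR = 44100
    t = np.arange(SR) / SR              # one second of sample-steps

    # oscillator A (1 Hz) draws the circle in the xy-plane
    x = np.sin(2 * np.pi * t)
    y = np.cos(2 * np.pi * t)

    # oscillator B's ramp (10 Hz) drives the rotation about the y-axis
    angle = 2 * np.pi * 10.0 * t
    xr = x * np.cos(angle)              # rotate (x, 0) about the y-axis
    zr = -x * np.sin(angle)
    # (xr, y, zr) trace a ten-turn spiral on the surface of a sphere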


That was all very simple, so let's move on to more advanced geometries.

torus-helix-helix

Shown here are two methods to generate a two-fold helix on a torus surface. The first one uses amplitude- and phase-modulation.

First note the frequency settings. Oscillator B is 20 times faster than A, and C is 300 times faster than A and 15 times faster than B. This helps to figure out which oscillator is responsible for which features in the image. Oscillator A is used for generating the circle as above. The sine signal of B modulates the amplitude of A. This alone would draw a two-dimensional form resembling a star or sun, but the cosine of B is also used as the Z-position, so the setup actually generates a torus-helix. Now C comes into play. Similarly, its sine output modulates B's amplitude. This leads to the finer helix-helix ripples in the z-direction as well as the corresponding amplitude modulation of A (displacement along the radius of the circle). The next step is a trick and not completely geometrically correct, but it enables this very small setup. The cosine signal of C modulates the phase of oscillator A (the math module performs a multiply and scales the signal down a bit). This is as if the position of A is constantly shifted back and forth along the perimeter of the circle. Together with the effect of C's sine output this convincingly creates the two-fold helix.
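In formulas, the patch boils down to something like this sketch. The frequency ratios follow the text; the modulation depths 0.3 and 0.05 are guesses of mine.

    import numpy as np

    SR = 44100
    t = np.arange(4 * SR) / SR
    fa, fb, fc = 0.25, 5.0, 75.0        # B = 20 x A, C = 300 x A = 15 x B

    # C's cosine shifts A's phase back and forth along the perimeter
    pa = 2 * np.pi * fa * t + 0.05 * np.cos(2 * np.pi * fc * t)

    # C's sine modulates B's amplitude...
    amp_b = 0.3 * (1.0 + 0.3 * np.sin(2 * np.pi * fc * t))

    # ...and B in turn modulates A's amplitude and supplies the Z-position
    x = (1.0 + amp_b * np.sin(2 * np.pi * fb * t)) * np.sin(pa)
    y = (1.0 + amp_b * np.sin(2 * np.pi * fb * t)) * np.cos(pa)
    z = amp_b * np.cos(2 * np.pi * fb * t)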

Now, what is this good for? Well, first it's an interesting mind-game; secondly, it can be used to create complex but naturally looking and inspiring shapes that are easily parameterized; and thirdly, it's an audio process! It can be used as a synthesizer, an effect engine or to pan sounds in three dimensions.

And because this is so much fun, here's another example of how the same output can be achieved with the help of rotation-modules.

The frequency ratios are a little different here. This is also a quite small and simple patch, and this time it is geometrically correct. I leave it as an exercise to you to figure out how it works. Just a tip: the right-most rotate-module and the bottom oscillator create the basic circle.

It is not especially trivial to go up the dimensional ladder. I myself got stuck at the torus-helix-helix-helix-helix-level ;-)

arbitrary shapes

We still haven't created actual sound, but all these techniques are simply slow sound, so I'll stick with the visual examples for now. So far, all shapes were based on sine waveforms. As acoustic theory suggests, all other waveforms, and all kinds of noise or stock-exchange charts, are composed of sines. For example, the infamous saw wave can be described as a mix of an unlimited number of sine waves with increasing frequency (one for every integer harmonic) and decreasing amplitude. It is, however, not computationally affordable to construct a saw wave like this.
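For illustration, here is a truncated additive saw in Python, using 64 harmonics instead of the unlimited ideal:

    import numpy as np

    SR = 44100
    t = np.arange(SR) / SR
    f = 110.0

    # harmonic k contributes a sine at k * f with amplitude 1 / k
    saw = np.zeros_like(t)
    for k in range(1, 65):
        saw += np.sin(2 * np.pi * k * f * t) / k
    saw *= 2.0 / np.pi                  # scale to roughly -1..1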

What follows are examples of waveforms that are created through interpolation between fixed values in a lookup table. This one is very simple. The new module here is the table 2d. It takes a position from 0.0 to 1.0 and outputs the (interpolated) value corresponding to that position. These values can be set by drawing with the mouse.

It might not be immediately clear to the reader what the specific sequences generate. The top table controls the X- and the lower one the Y-position. Each column represents one corner of a hexagon. Between the corner points, a simple linear interpolation takes care of connecting them. Can you find out where the shape starts? (The range of the values is -1.0 to 1.0; the screen coordinates are about -1.2 to 1.2 from left to right and bottom to top.)
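A sketch of the lookup logic, assuming six corner values per table and simple wrapping linear interpolation (the function name is mine):

    import numpy as np

    # hexagon corners, one lookup table per axis (values in -1..1)
    angles = np.pi / 2 + np.arange(6) * np.pi / 3
    table_x, table_y = np.cos(angles), np.sin(angles)

    def table_2d(table, pos):
        """interpolated read at a position from 0.0 to 1.0, wrapping around"""
        idx = (pos % 1.0) * len(table)
        i = int(idx)
        frac = idx - i
        return table[i] * (1 - frac) + table[(i + 1) % len(table)] * frac

    phase = np.linspace(0.0, 1.0, 600, endpoint=False)
    xy = [(table_2d(table_x, p), table_2d(table_y, p)) for p in phase]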


Here's a slightly more complicated shape.

The details of the cube-generator are hidden inside a container-module and look like this:

Once again, can you match the sequencer-columns with the corners of the cube? The resulting waveform on each of the three axes could be called trapezoidal, and together they look like this:

It is important to note that all the possible shapes, no matter how complex, must be continuous. They need to be closed loops without gaps in order to work with the interpolation or oscillation method. Therefore non-continuous shapes are put together from smaller pieces.

multiple shapes

There is no inherent concept of polyphony in the Modular Editor. As you will see later in the Liszt patch, this led to a lot of wiring-up of similar structures. While true parallel processing is needed on the audio side, e.g. to have multiple synthesizer voices, for the creation of graphics one can often create multiple instances without copying and pasting modules. Here's a simple patch drawing 10 circles of different sizes.

New is the sample & hold module. It passes the value from its input to the output whenever a gate is received; otherwise the output stays the same. The lower oscillator generates a circle through the rotate-module at 10 hertz. Once a full rotation/circle is done, the sample & hold is triggered and sets a new radius for the circle by passing the upper oscillator's value to the X-input of the rotate-module.
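As a sketch of that mechanism (here a random value stands in for the upper oscillator that supplies the new radius):

    import numpy as np

    SR = 44100
    radius, phase = 1.0, 0.0
    points = []

    for n in range(SR):                    # one second of sample-steps
        points.append((radius * np.sin(2 * np.pi * phase),
                       radius * np.cos(2 * np.pi * phase)))
        phase += 10.0 / SR                 # the circle oscillator runs at 10 Hz
        if phase >= 1.0:                   # gate: a full circle is done, so...
            phase -= 1.0
            radius = np.random.uniform(0.1, 1.0)   # ...hold a new radius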

The s & h is very useful for multiple instancing. In the next patch, it is used to set a cube's xy-position. The modules to the left are constants that are used to name and centralize certain parameters. Since all frequencies depend on NR of CUBES and CUBE PAINT SPEED, the wiring gets a bit complicated.

The resulting image:

Oscillator A generates the circle for placing the cubes. B draws the cube (well, it generates the phase that the CUBE-container uses to draw the cube) and triggers the s & h after a full phase. Oscillator C is only a safety-device that keeps A and B in sync.

These two examples showed how parameters (size, position) are changed after a complete draw of one object. In some situations, it is also possible to change the parameters during the drawing of an object, or even on every new sample-step. This has the advantage that all instances are drawn simultaneously, so one is not as dependent on a long fade-out time of the scope 3d. However, this works only for deterministic things like the lookup-table objects, which produce fixed data for a given phase. Free-running oscillation systems cannot be driven that way. They need to be synced once and then calculated/advanced until they have produced the required data.

Another problem is the maximum number of samples per frame that can be drawn by one scope 3d. If any shape requires, say, a thousand pixels per frame to be drawn adequately, you cannot split the drawing into ten instances. Well, after all, the CymaSonics Modular Editor was never meant to be a movie-production tool. However, the Fulldome Festival 2011 was approaching and I thought that maybe our contribution could be done with the Editor, although it would probably mean a lot of tedious wiring.

The multiple-instance and maximum-samples-per-frame problems were solved by enabling many slave scope 3ds to render their data into one master scope 3d. This technique is described in the following section, where we also finally come to the audio-processing capabilities of the Editor.

particle scene 

I've designed a little patch that includes most of the aspects that were needed for the Liszt From Space film. It is much smaller and easier to understand and can be described step by step.

First of all here's a snapshot of the animation.

It doesn't really work as a still image so please check out the video on vimeo.com.

What happens in the animation? Some particles make their way through the universe, passing the camera every now and then, each having a distinct tone that can be heard when they are close enough. The main container looks like this:

As can hopefully be seen, three particles are generated. Each of them consists of a path generator that determines its current position as a function of time. Furthermore, there is a VOICE-container which generates the sound for each particle. Finally they are displayed, each by its own scope 3d inside the VOICE DISPLAY-container.

Two essential modules connect everything. These are the view 7 and the microphone 3d:

    

The view 7 is more of a convenience module. It performs the same stack of transformations on a number of xyz-channels. All in all it does seven serial transformations, hence the name. It is used to transform the world coordinates of any number of objects into camera space. In most cases these transformations are sufficient for fluid camera control.

The microphone 3d is the heart of three-dimensional audio processing in the Editor. It simulates microphones that record the scene in space. Both the number of recorded objects and the number of microphones can be chosen. Each microphone has the duty of regulating the amplitude of each sound according to its direction, and the delay time according to its distance to the sound. It also dampens the amplitude within the maximum listening radius. The microphones are placed around a center position, pointing away from it. An ideal case would be if the microphone positions matched the speaker positions of the sound system that reproduces the scene afterwards. There are a few more parameters which I won't describe here. In this particular setup, the microphone 3d module outputs two audio channels that are adequate for listening on stereo headphones or on a standard stereo hi-fi system.
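A strongly simplified sketch of what one microphone might do per sample-step. The directivity and fall-off formulas here are my assumptions, not the module's exact math; positions and directions are numpy arrays, and the direction is assumed to have unit length.

    import numpy as np

    SR, SPEED_OF_SOUND = 44100, 340.0      # samples/s, m/s

    def mic_sample(mic_pos, mic_dir, src_pos, src_history, n):
        """amplitude from direction, delay from distance, at sample-step n"""
        to_src = src_pos - mic_pos
        dist = np.linalg.norm(to_src)
        facing = np.dot(mic_dir, to_src / dist)     # 1 facing source, -1 behind
        gain = max(0.0, 0.5 + 0.5 * facing) / (1.0 + dist)
        delay = int(dist / SPEED_OF_SOUND * SR)     # travel time in samples
        return gain * src_history[max(0, n - delay)]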

Each position in time module generates a 3d-position which is transformed into camera space by the view 7. Together with the pure synthesizer sound from the VOICE-module, these are fed to the microphone 3d. The microphone 3d is centered around 0,0,0, so it also lives in camera space. The speed of sound and the distance of the microphones to the center are not scientifically important here, since the perceived size of the final scene is probably very subjective. Most people have no experience of floating in free space and watching sound particles fly by.

Let's have a quick look into the path generator.

The T-input is the current scene time in seconds. The particle repeats its path over and over; the length of one sequence is set with the duration. The length of the path along the Z-axis is set with from and to, and the particle performs a number of helical spins around this axis. The G-output is not used in this patch, but it would provide a gate signal whenever the particle restarts its path. All three containers are connected to the scene time and have different settings for the number of spins and the duration.
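Functionally, the container computes something like the following sketch. The parameter names follow the text; the radius and the default values are my additions.

    import numpy as np

    def position_in_time(t, duration=20.0, z_from=-50.0, z_to=50.0,
                         spins=8.0, radius=2.0):
        """helical path along the Z-axis, repeating every `duration` seconds"""
        phase = (t / duration) % 1.0            # 0..1 along one pass
        angle = 2 * np.pi * spins * phase
        x = radius * np.cos(angle)
        y = radius * np.sin(angle)
        z = z_from + (z_to - z_from) * phase
        gate = 1.0 if phase < 1e-4 else 0.0     # G-output: fires on restart
        return x, y, z, gate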

Each VOICE-container looks like this:

First, the note-input, which represents the MIDI note number, is converted into hertz with the note to freq-module. Now we could just have an oscillator running at this frequency, but that would sound a bit dull. In this case I decided to use a built-in polyphonic oscillator that is retriggered all the time to produce parallel triangle waveforms that are slightly detuned against each other by the noise white-module. The counter module on top produces the sequence 0,1,2,0,1,2,... which is converted into the power-of-2 series by the power-module (1,2,4,1,2,4,...). This is multiplied with the original frequency to create higher octaves. The polyphonic oscillator has a built-in amplitude envelope, so the three unison-detuned octaves seamlessly fade together to create one standing and rich sound. Finally, everything is pushed through the clipper soft-module, which applies a hyperbolic tangent to the signal to keep it in the -1.0/+1.0 range and to saturate it a bit.
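Boiled down to a formulaic sketch (the envelope and retriggering are omitted, and the detune amount is a guess):

    import numpy as np

    SR = 44100

    def note_to_freq(note):
        """MIDI note number to hertz (A4 = note 69 = 440 Hz)"""
        return 440.0 * 2.0 ** ((note - 69) / 12.0)

    def triangle(phase):
        return 4.0 * np.abs(phase % 1.0 - 0.5) - 1.0

    def voice(note, seconds=1.0, detune=0.002):
        t = np.arange(int(seconds * SR)) / SR
        f = note_to_freq(note)
        out = np.zeros_like(t)
        for octave in (1, 2, 4):            # counter 0,1,2 -> powers of two
            jitter = 1.0 + detune * np.random.randn()   # white-noise detune
            out += triangle(f * octave * jitter * t) / 3.0
        return np.tanh(1.5 * out)           # clipper soft: saturate to -1..1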

The VOICE DISPLAY-modules receive the transformed xyz-position and the audio signal of each voice and send this data to a scope 3d.

So that the particles do not appear as mere floating pixels, the audio signal is added to the xy-position. The delay and the oscillator make two different but related signals from the one audio source. The distance factor-module is a convenience module. It performs a calculation that would be tedious to wire up, namely:
(1 / (1 + a * sqrt((X2-X1)^2 + (Y2-Y1)^2 + (Z2-Z1)^2))) ^ b.
In other words, it outputs a value inversely proportional to the distance between two 3d-positions. In this case it is simply used to dampen the brightness of the particles when they are far away.
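Or, transcribed directly into a small Python function:

    import numpy as np

    def distance_factor(p1, p2, a=1.0, b=1.0):
        """falls off inversely with the distance between two 3d-positions"""
        d = np.linalg.norm(np.asarray(p2) - np.asarray(p1))
        return (1.0 / (1.0 + a * d)) ** b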

The last piece in the scene are the stars. Especially in fulldome productions it is important to supply the viewer with a frame of reference. A completely uniform background would give the viewer no clue whether the objects are moving or the camera is. Furthermore, in this simplistic scene the objects are sparsely placed, which makes it hard to recognize the slow rotation of the camera. The star sphere-module generates a sequence of 3d-positions similar to the lookup-table objects discussed earlier. Each position/pixel is randomly chosen to lie roughly on the surface of a sphere. The number of stars is fixed and the sequence repeats after the last position. The star positions are also transformed into camera space and sent to a scope 3d in the STARS DISPLAY-container.
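One way to generate such a fixed sequence of star positions; the fuzz parameter, standing in for "roughly on the surface", is my assumption.

    import numpy as np

    def star_sphere(n_stars=500, radius=100.0, fuzz=0.05, seed=1):
        """random positions roughly on the surface of a sphere"""
        rng = np.random.default_rng(seed)
        v = rng.normal(size=(n_stars, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)  # onto the unit sphere
        r = radius * (1.0 + fuzz * rng.normal(size=(n_stars, 1)))
        return v * r    # fixed seed: the sequence repeats identically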

Congratulations! You have made it this far! Now take a rest if you like before we enter the Big Patch!

Liszt From Space 

or when the universe feels unwitnessed

All that technical talk has made me really tired. In this section, I'd rather focus on the artistic reasoning behind the film. Please come back soon to get the whole picture.