In a message dated 5/5/98 9:51:02 AM, Nick Rothwell wrote:
>>This is a real musical instrument, we're not just triggering
>> sequences; players trigger INDIVIDUAL NOTE EVENTS with POSITIVE
>> FEEDBACK AS TO THEIR INPUT. These note events, however, are
>> 'invisibly' adjusted behind the scenes in real time, so the music
>> aesthetic is coherent as to pitch (note/chord/scale), timbre
>> (instrumentation/effects), and rhythmic/temporal/duration alignment.
>Let me turn this example on its head: if somebody were to design a
>system allowing "anybody to choreograph", featuring a lifelike
>simulated dancer, plus a few simple controls marked "head, left arm,
>right arm" and so on, and if there were "adjustment functions" which
>would take a novice's input on these controls in order to make the
>movement coherent, smooth and articulate, would any novice now be a
>choreographer?
Good question! My answer would be, "NO", of course not! A good metaphoric way
to raise the question, though!
Usually, in the conventional situation, we have either of the following two
cases: (A) a musical composition is written by a composer, and a performer
(who might also be the composer) subsequently plays it more or less exactly,
of course allowing for expressivity of tempo, timbre, vibrato, etc. etc., but
essentially the precise chord/scale note event structures and rhythmic
structures are adhered to, OR, case (B) where a performer ALSO is the
composer on the fly, as in flat-out and unpredictable improvisation.
Our environment is similar and yet different, in that it actually BLENDS
elements of both (A) AND (B) above. There IS a composing phase, some of which
is conventional, but some of which is quite novel (as to the tool
environments) and which lays out the "potentials" which subsequent
performer(s) may "extract" via a quite large repertoire of permutations of
possible sensor trigger event scenarios resulting from any imaginable body
positions and movements. This overall approach is not totally unique,
GENERALLY speaking; much of the evolution of computer-assisted and body-
sensor-interfaced performance work (including such pioneers as Mark Coniglio
and the MIT Media Lab, among many others) is roughly similar in that aspect.
I guess what's different in our case is, with reference to a particular input
device, we've (a) focused it with the specific aim of allowing non-musicians
as well as musicians to perform recognizable, conventional, 'popular' music
('popular' in the sense of demographic/sales % of total media
distribution), and (b) we've taken ALL musical parameters into account as to
the computer-assistance or "leveraging" of transfer functions. Note that
there is NOTHING "random" or purely "computer generated" in our case, it is
all a combination of pre-runtime, human-input compositional elements PLUS the
human-generated sensor interactions at runtime (dance time); the computers
serve just to CONNECT these two human elements to achieve the final results.
This largely "human" orientation, harnessing the computers in that way, I
think, is part of what makes the resulting music sound like, well,
recognizable/enjoyable music!
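The "invisible adjustment" idea described above — taking a raw, sensor-derived note event and leveraging it through a transfer function so it lands on the composer-authored chord/scale — can be illustrated with a minimal sketch. To be clear, everything below (the function name, the scale representation, the nearest-tone strategy) is my own hypothetical illustration of the general technique, not Dance Media's actual implementation.

```python
# Illustrative sketch only: one way an "adjustment function" might snap a
# raw, sensor-derived pitch onto the chord/scale the composer authored for
# the current moment of the song. All names here are hypothetical.

C_MAJOR_PENTATONIC = [0, 2, 4, 7, 9]  # pitch classes considered "safe"

def quantize_pitch(raw_midi_note: int, scale=C_MAJOR_PENTATONIC) -> int:
    """Return the nearest MIDI note whose pitch class is in `scale`."""
    octave, _pitch_class = divmod(raw_midi_note, 12)
    # Consider candidate scale tones in this octave and its neighbors,
    # then pick whichever is closest to the raw input.
    candidates = [
        12 * (octave + shift) + pc
        for shift in (-1, 0, 1)
        for pc in scale
    ]
    return min(candidates, key=lambda n: abs(n - raw_midi_note))

# A gesture lands on MIDI 66 (F#4), outside the pentatonic scale, and is
# invisibly adjusted to the nearest scale tone, 67 (G4).
print(quantize_pitch(66))  # 67
```

A real system would of course apply analogous corrections along the other axes the quoted passage names (timbre selection, rhythmic/temporal quantization to the song's meter), and the authored scale would change bar by bar with the song's harmony.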
We're working today with songs in many genres including blues, pop, classical,
rock, jazz, country, dance, techno, tribal, trance, world, folk, etc., etc.
Our composer(s) initially completes an "authoring phase" to create the virtual
"song" environment (or in the case of a CD-audio 'play-along', inputs the
chord/scale, meter/tempo and other information according to the musical
structure of the particular CD track). Much of our work has been to try to
tightly integrate certain key third-party studio developer tools with our own
custom-made tools, to result in an interactive content authoring SUITE which
can generate new content at a VERY efficient rate of production. We've still
got further to go, but it's getting quite efficient.
So there is a pre-play phase (one-time only) which involves/applies the full
expertise of a knowledgeable studio musician/composer/engineer/sound
designer/etc. - however using our suite of authoring tools we've taken this
studio-time process down to literally as little as two person-hours per five
minutes of fully interactive linear content time; our average runs from
about 1 to 1-1/2 studio-person-days per 5-minute "song". Compared to
generating conventional original (non-interactive) music content, this is
incredibly efficient - and allows for true "repurposing" of existing content;
thus, this process could become a new type of revenue stream for existing
publishers, etc. - but now we're digressing into marketing issues....
Part of this "authoring" process (for non-CD cases) involves setting up
multiple layers of conventional MIDI and/or digital audio "accompaniment
tracks", which can vary greatly according to the musical genre, etc., and
which can be switched ON or OFF during play for a dancer/player's SOLO vs.
degrees of accompanied performance mode. In either CD or non-CD cases, these
accompaniment tracks are synchronized with player-generated musical events and
are seamlessly, aesthetically, and automatically 'meshed' in realtime. (Hence
several synchronized runtime computers.) Much of our learning has been
discovering just how critical and delicate is this stored-linear vs.
interactive "mix", so that the overall result is appreciated as "real music",
while at the SAME TIME not being so sonically dense or OVER-mixed as to
confuse the player as to what they are doing vs. the accompaniment - it's a
VERY fine balance. And we do have a number of pieces which come across quite
well which have NO accompaniment: the player(s) do it all (e.g., if they stop
still in the center of the platform, or move away from it, you get total
silence).
So during the runtime interactive 'song' play, AFTER the authoring phase, the
dancer/player "performs" by extracting from among the moving palettes of
available events pre-authored by 'real musicians'. It's very dynamic and
sophisticated in result, yet VERY simple to do for the player - even a
complete novice gets musical results by merely "waving their arms, legs, and
buns around, rolling, falling down, doing handstands, flips, etc.", yet a
skilled dancer and/or musician who pays attention, even practices (what a
concept), WILL achieve an even MUCH MORE refined, aesthetic result. There's
really NO wrong way to play, except NOT to MOVE.
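The runtime model described in the last two paragraphs — each sensor hit "extracts" one event from a moving palette pre-authored by real musicians, accompaniment layers mesh in while the player moves, and standing still yields silence — can be sketched as follows. All class and method names here are hypothetical illustrations of that general architecture, not the actual Dance Media engine.

```python
# Illustrative sketch only (hypothetical names throughout): a runtime engine
# in which each body-sensor trigger "extracts" a note from the palette the
# composer authored for the current point in the song, and the pre-authored
# accompaniment layers fall silent when the player stops moving.

from dataclasses import dataclass, field

@dataclass
class SongMoment:
    palette: list[int]        # MIDI notes legal at this point in the song
    accompaniment: list[str]  # pre-authored backing layers for this moment

@dataclass
class Engine:
    moments: list[SongMoment]
    bar: int = 0
    active_layers: set[str] = field(default_factory=set)

    def on_sensor_hit(self, sensor_index: int) -> int:
        """Map a body-sensor trigger to one of the pre-authored notes."""
        moment = self.moments[self.bar]
        # The player chooses WHICH event fires and WHEN; the authored palette
        # guarantees whatever fires fits the current chord/scale.
        note = moment.palette[sensor_index % len(moment.palette)]
        self.active_layers = set(moment.accompaniment)  # movement: backing on
        return note

    def on_no_movement(self) -> None:
        """Standing still (or leaving the platform) silences everything."""
        self.active_layers.clear()

engine = Engine([SongMoment(palette=[60, 64, 67], accompaniment=["bass", "drums"])])
print(engine.on_sensor_hit(4))  # sensor 4 -> second palette note: 64
engine.on_no_movement()
print(engine.active_layers)     # set()
```

The design point the email stresses falls out naturally here: the computer never invents material (nothing random or generated); it only CONNECTS the composer's pre-authored palette to the player's runtime gestures.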
Hope that helps 'paint the picture' in your imagination, at least a little.
Thanks for your excellent and thought-provoking questions and feedback!
Any plans to visit the US in coming months? Do you attend shows such as NAMM?
(We probably won't have a system in Europe before early '99.)
David Clark, Founder & CEO
Dance Media, Inc.
Rancho Santa Fe, CA