[NetBehaviour] Visceral circuitry emergent body
Marco Donnarumma
lists at marcodonnarumma.com
Tue Feb 18 12:10:55 CET 2014
Hi Johannes,
> Hi
> fascinating work Marco, and enjoyed seeing you demonstrate it at CYNETarts
> and in London when we met;
>
> here the work you posted (called sound sculpture)
> again raises interesting questions for me that I am not sure how to
> formulate;
> performance wise and presentationally (lighting) the focus is placed on
> hands, but it's the
> muscles and arms that also are factors (can't see the sensors/wiring),
> perhaps
> hands are at outer perimeter of the flexed muscles of whole arms / lower
> arms your sensors sense,
> and perhaps you do perform with a whole "circuitry" / Visceral circuitry
> emergent body
> as you say, or not? but how does the "whole" become performed-embodied
> into the sound (and our
> image of hands clasping)?
>
There are at least three layers composing the staging of the performance.
They are entangled with one another, without hierarchy.
The first is the metaphorical layer. The "hands holding the void" (ref. to
Alberto Giacometti's sculpture of the same name) are the metaphorical means
bearing the imagery of the work. At first, it seems I'm sculpting an empty
space, but as the performance unfolds, the sound forms reveal the physical
qualities of the invisible object. It is a perceptual illusion.
The second layer is the physical/sonic gestures and the lighting playing
along with them. The gesture vocabulary is designed to embody the metaphor
above and, at the same time, to produce given muscular sounds. The
lighting focus on the hands is meant to outline an imaginary shape of the
sonic energy that my body is trying to contain. When I close and open my
hands, the vertical beam of light is blocked or spread into the space. In so
doing, it illuminates the audience members' bodies differently, as if there
were an actual physical, luminous object in my hands, exploding and
imploding gesture after gesture.
The third layer is the body/machine coupling, that is, the performance of
my body through the computational system, the Xth Sense. It is a feedback
loop of mediation that starts when I modulate visceral processes
within my body through physical effort, torsions, and whole-body movements
to produce specific bioacoustic sounds. Those sounds yield unique features
that the sensing software extracts. According to the features of the muscle
movement, I modulate the processing of the bioacoustic sounds. What is
heard during the concert is, at the same time, the raw muscle sound
diffused by the subwoofers and the digitally processed muscle sound
diffused by the rest of the loudspeakers.
I produce and modulate a visceral sound, process it through the
computational system, listen to the result and modulate it once again. It
is a perceptual feedback loop.
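To make that loop more concrete, below is a minimal Python sketch of a
feature-driven feedback chain. It is only an illustration, not the actual
Xth Sense software: the block size, sample rate, RMS feature and the
saturation used as "processing" are assumptions chosen for clarity.

import numpy as np

BLOCK = 512    # samples per processing block (assumed)
SR = 48000     # sample rate in Hz (assumed)

def rms(block: np.ndarray) -> float:
    """Root-mean-square amplitude: a crude stand-in for muscular effort."""
    return float(np.sqrt(np.mean(block ** 2)))

def process(block: np.ndarray, effort: float) -> np.ndarray:
    """Placeholder digital processing whose depth is driven by the feature."""
    drive = 1.0 + 10.0 * effort          # more effort -> more saturation
    return np.tanh(block * drive)

def run(blocks):
    """Yield (raw, processed) pairs: the raw sound would go to the
    subwoofers, the processed sound to the rest of the loudspeakers."""
    for block in blocks:
        effort = rms(block)              # feature extracted from the body
        yield block, process(block, effort)

# Usage with synthetic noise standing in for the microphone input:
if __name__ == "__main__":
    noise = np.random.randn(10, BLOCK) * 0.1
    for raw, wet in run(noise):
        print(f"effort={rms(raw):.4f}  wet peak={np.abs(wet).max():.4f}")

The point of the sketch is only the shape of the loop: a feature extracted
from the body's own sound immediately becomes the parameter that shapes how
that same sound is processed and diffused.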
> -- I also felt there was a range of sound that obviously flows and changes
> through your patch
> (software) and not through your body/only from body, i.e. how does your
> sound emerge
> from body except metaphorically? or is what we hear coming from "Xth
> Sense" amplified muscle contractions --
>
Perhaps I partly answered this above. To elaborate further: the sounds
literally emerge from the body. There is no metaphor involved here. What is
heard consists exclusively of bioacoustic sounds from the human body, in
both raw and processed form. Amplified raw muscle sounds and digitally
processed muscle sounds become one sound form.
> how does Xth Sense do that? I think I asked you this before, so maybe
> accept it please as naive audience questions
>
Muscle sounds, and other visceral sounds, are very low frequency vibrations
of the flesh tissues. Think of the heartbeat: it is the sound of the heart
tissue contracting and expanding. The Xth Sense captures those sounds using
a tiny microphone. The sound is then fed to the computational system as an
audio input.
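As a rough illustration of that capture stage, the fragment below band-limits
a microphone signal to keep only the slow tissue vibrations. The 1-40 Hz band
and the filter order are assumptions for the sake of the example, not a
description of the Xth Sense hardware or software.

import numpy as np
from scipy.signal import butter, sosfilt

SR = 48000  # assumed sample rate

def muscle_band(signal: np.ndarray, low=1.0, high=40.0, sr=SR) -> np.ndarray:
    """Band-pass the raw microphone signal to retain only the
    low-frequency vibration of the flesh, discarding room noise."""
    sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

# Example: one second of synthetic input standing in for the sensor.
if __name__ == "__main__":
    t = np.arange(SR) / SR
    tremor = 0.5 * np.sin(2 * np.pi * 12 * t)   # 12 Hz "muscle" component
    hiss = 0.1 * np.random.randn(SR)            # broadband noise
    captured = muscle_band(tremor + hiss)
    print("peak of band-limited signal:", float(np.abs(captured).max()))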
> in one's mind, how do you choose what sounds we hear (how is interactivity
> per-formed?) and
> how do the soundfiles on your laptop get processed/affected by your
> muscular activity? are
> there no soundfiles until the Xth Sense sends signals?
>
No, there are no pre-recorded soundfiles, and no sound unless I move and
produce one. Everything that is heard is produced in real time by the body.
As described above, each physical gesture produces a different sound. Each
sound yields given features. The features are used by the software to
process the sound itself. And so the loop is closed and starts again.
The question of how interactivity is performed is very interesting to me.
Ominous is a quasi-improvised piece, that is, the only aspect of the
performance that I define beforehand is how certain muscle movements will
sound. However, I do not define static qualities of the sound interaction,
but rather complex behaviours. To give an extremely simple example, imagine
that with a gentle torsion of my wrist I increase the reverberation of the
sound, and as soon as the wrist tension increases beyond a certain
threshold an echo delay effect is activated and modulated. Building upon
this method, one can perform complex sonic events in a playful and physical
way.
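The wrist example can be written down as a toy mapping. The sketch below is
hypothetical (the threshold value, the parameter names and the idea of
driving a reverb and an echo delay from a single "tension" number are
invented for illustration), but it shows the kind of behaviour, rather than
static setting, that such a mapping defines.

ECHO_THRESHOLD = 0.6   # assumed value, for illustration only

def map_gesture(tension: float) -> dict:
    """Turn a normalised amount of muscular tension (0..1) into effect
    parameters: reverb always follows it, the echo only above a threshold."""
    params = {"reverb_mix": min(tension, 1.0),
              "echo_on": False,
              "echo_feedback": 0.0}
    if tension > ECHO_THRESHOLD:
        params["echo_on"] = True
        # Above the threshold, the same tension keeps modulating the delay.
        params["echo_feedback"] = (tension - ECHO_THRESHOLD) / (1.0 - ECHO_THRESHOLD)
    return params

for t in (0.2, 0.5, 0.7, 0.95):
    print(t, map_gesture(t))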
In Ominous, for instance, if I do not exert a certain amount of tension in
my muscles, the sound will never "explode", as it does towards the end of
the video. But once it does, there is no way to stop it or make it quieter.
I can only modulate it using intense muscle force, until my arms become
too tired and the sound fades out. This is a constraint that I created on
purpose to make the performance more challenging to play. It is an attempt
to move beyond the idea of "control" over the technology. I wanted
something that I could not fully control, an unstable instrument that could
sound in unexpected ways. Performing interactivity is, to me, an exploratory
practice. For each piece, I perform the body differently, according to
tiredness, the food I ate, the room temperature, the stress, the
excitement, and the sleep I had the night before. One concert might sound
louder and more dynamic, another might sound quieter. The instrument
affords variety and difference, but also error and bad performance.
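For the curious, the one-way "explosion" constraint can be pictured as a
simple latch. The numbers below are invented, and fatigue is of course a
bodily fact rather than a line of code, but the sketch shows the logic: once
the threshold is crossed the state cannot be switched off, only modulated,
until the effort itself decays.

class Explosion:
    def __init__(self, trigger=0.8, floor=0.05):
        self.trigger = trigger   # effort needed to set it off (assumed)
        self.floor = floor       # effort below which the sound has faded
        self.active = False

    def update(self, effort: float) -> float:
        """Return the level of the 'exploded' layer for the current moment."""
        if not self.active and effort >= self.trigger:
            self.active = True               # latch: there is no way back
        if self.active:
            if effort < self.floor:          # fatigue ends the section
                self.active = False
                return 0.0
            return effort                    # intense force still modulates it
        return 0.0

exp = Explosion()
for effort in (0.3, 0.9, 0.6, 0.4, 0.02):
    print(effort, exp.update(effort))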
best wishes,
M
>
> regards
> Johannes
>
--
Marco Donnarumma
Performer, body tinkerer, teacher and writer.
#soundandmusic #biotech #freeculture
EAVI - Goldsmiths, University of London
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Portfolio: http://marcodonnarumma.com
Research: http://res.marcodonnarumma.com