roger at eartrumpet.org
Tue Sep 2 04:44:36 CEST 2014
What a beautiful work Wind Array Cascade Machine is.
<I am just wondering how a (physical) body performance, such as Stelarc's, that transposes data to avatar motion/behaviour in Second Life (or another 3D immersive environment) compares to external environmental data (environment, nature, urban space, light, forces etc.) used to influence and actuate sound and object motion in a dislocated / networked space? And how do you work with "music" or sound in this way?>
It's more that the data (light, heat and movement from each node) is visualised within the virtual environment allowing musicians to navigate through it cognisant of each other and the characteristics of their dispersed sensate environments.
Performers can move individually or collectively (synchronously, as three people's hands clasped together). Using an Arduino setup, networked musicians wear wristband accelerometers that map their hand and wrist movements, via a USB input to their machines, into movements visible in the virtual Sea Swallow World.
As they move, they trigger field recordings and videos of significant locations that provide the inspiration for freely improvised musical responses.
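The wristband-to-trigger pipeline described above might be sketched as follows. This is a hypothetical illustration, not the project's actual code: it assumes the Arduino streams comma-separated "x,y,z" accelerometer readings over USB serial, and the serial feed is simulated here with sample strings so the gesture-detection logic stands alone.

```python
import math

def parse_reading(line):
    """Parse one 'x,y,z' line from the Arduino into floats."""
    x, y, z = (float(v) for v in line.strip().split(","))
    return x, y, z

def magnitude(x, y, z):
    """Overall acceleration magnitude of a wrist movement."""
    return math.sqrt(x * x + y * y + z * z)

def movement_events(lines, threshold=1.5):
    """Yield a trigger event whenever a reading exceeds the threshold,
    e.g. to start a field recording or video in the virtual world."""
    for line in lines:
        x, y, z = parse_reading(line)
        if magnitude(x, y, z) > threshold:
            yield ("trigger", (x, y, z))

# Simulated serial feed: two gentle motions, then one sharp gesture.
feed = ["0.1,0.2,0.9", "0.0,0.1,1.0", "1.4,1.2,0.8"]
events = list(movement_events(feed))  # only the sharp gesture fires
```

In a live setup the simulated `feed` would be replaced by reading lines from the serial port (e.g. with pyserial), and the trigger events would be forwarded to whatever plays the field recordings and videos; the threshold value here is an arbitrary placeholder.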
From my small amount of research so far, other cross-reality practice and research seems focussed on technologies, architecture and installations that augment real spaces with virtual ones, or vice versa, for quite utilitarian purposes (smart environments etc.). My original post is an enquiry about networked music or artistic projects that use cross-reality to enhance tele-immersion and presence in networked performance.
This one from the MIT Media Lab is quite nice, http://tidmarsh.media.mit.edu/, but again an installation.
"Knowledge is only rumour until it is in the muscle" - Asaro Mudmen, Papua New Guinea.