[NetBehaviour] Notes on the virtual and others, comments welcome
Alan Sondheim
sondheim at panix.com
Fri Nov 2 03:58:28 CET 2012
Hamilton - notes for the macGRID 2012 Workshop presentation, 11-87-12.
http://hmcwordpress.mcmaster.ca/macgrid/
"macGRID is a robust, archivable, simulation research platform and a
corresponding network of academic, industry and community partners who
wish to engage in multidisciplinary research and creation, and resource
and knowledge sharing, using avatar virtual worlds and mixed reality
systems (currently with OpenSim). This initiative is led by Dr. David
Harris Smith, Assistant Professor in Communication Studies & Multimedia at
McMaster, and has been advanced by collaborators and contributors
including new media artist Ian Murray, Humanities Media Computing,
SHARCNET High Performance Computing, GRAND NCE, Dr. Eleni Stroulia and
students at the Department of Computing Science at the University of
Alberta, and Dr. Suzanne Crosta, Dean of Humanities and Dr. Mo Elbestawi,
Vice President of Research at McMaster." (from the Website)
DIRECTORIES: chic/dance/mocapstills/sl/wvu
Early on, I wanted to work with 'generalized modules' that allowed anything
to be made out of anything - they were analog PCMs, parameter-control
modules. You could build anything from analog synthesizers to video
synthesizers out of them, as well as control units for multi-media displays.
Dreamed of portable keyboards for working while driving; an early adopter
of portable tape technologies (audio/video).
Jump to virtual worlds: In virtual worlds, you can _do anything,_ and _do
it live_ with planetary participants and audiences (blurring the line
between the two).
Early on again, I worked with MOOs and MUDs - text-based virtual realities
- their advantage was that you could quickly set one up yourself - you had
control, and again planetary reach. Here I learned a lesson - any wizard,
any governor, could see everything going on; it was all an open database.
Jump to Second Life under the aegis of Linden Lab - but they mainly
leave you alone. You can build anything, provided you have permissions -
as Ian Murray said, permissions are the key; collaboration is as well.
While MOOs and MUDs could be set up and run by a single user, virtual
worlds are more complex and have different requirements; they foster
collaboration and, like MOOs and MUDs, hopefully allow for pliable
governance.
I believe there are _no_ answers as to just governance in virtual worlds,
as long as they are welcoming. On one hand this is an enormous gift; on
the other, it's led to the splittings of MOOs and the downfalls of
newsgroups, for example.
Every virtuality by the way develops culturally, linguistically, and
institutionally, from within, and every virtual world develops its own
erotics.
Avatars are different in virtual worlds, but one thing is common:
performativity. Simplest example: type _date_ at a prompt and you get
Thu Nov 1 16:32:53 EDT 2012
Something changes: an action leads to a _qualitatively different result_
reminiscent of the jump cut in films.
Aside: Leverage in the _real world_ is constituted by the body itself,
which one lives within (even for cyborgs); the jump cut is a sign of the
digital, in which something produces something _else._
Performativity in virtual worlds is connected with the user-subject by
complex psychoanalytics; I've used the term _jectivity_ to indicate the
projections and introjections that occur across the screen space, which is
always close, itself, to dissolution. In Videodrome and other films, it's
always possible to reach _through_ the screen. Apparently.
So I'm interested in human representation within the virtual; this implies
both an image and a dynamics, the two of course entangled.
At the same time I've been interested in 'alien architectures' - spaces
that appear to be foundationless, ungrounded, spaces that are almost
impossible to navigate, spaces that create a sense of anomaly and wonder.
A lot of my early virtual world work emphasized these spaces and what it
was possible to do within them. Alas, I haven't been a programmer, and so,
even in SL, I have had to rely on others for scripts, which I could then
alter productively.
More to the point, though, has been the issue of movement. On and off for
the past two decades, I've worked with Foofwa d'Imobilite [give background
information here]. Foofwa's work is at foofwa.com. His work has dealt with
any number of issues, from politics through health, sexuality, being-
Swiss, technology to virtuality. We've embedded him in Second Life, and
he's worked with our avatar movement as well. *
So movement has been a natural for me. I'm also a recorded musician, by
the way, so I'm well aware of my own movements, in terms of string
instruments, keyboards, and some woodwinds. All of this has fed into the
work I've done with motion capture equipment, performers, and Second Life.
Motion capture work: Brief history of my work in the Virtual Environments
Lab at West Virginia University, Morgantown, through Sandy Baldwin and
Frances Van Scoy. We used a lot of equipment, some of which was in
storage. We worked with 3d lasers for modeling - including one large laser
that could take in an entire building in one series of scans. We also
worked with some older motion capture equipment, which we applied in two
basic ways:
1. We recorded one or more performers, using remappings of modes,
including some with 'impossible' topologies in terms of human movement;
and
2. We recorded through a rewriting of the motion capture software itself -
what I've called 'dynamic filtering' - transforming standard motion
capture files on the fly by inserting filters between the input data and
the outputted files. These filters parallel the use of filters in Gimp or
Photoshop: just as an image filter transforms pixel values, a dynamic
filter transforms joint coordinates, frame by frame (a rough sketch
follows).
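A minimal sketch of such a filter, in Python - the sine-warp function and
all names here are invented for illustration, not the actual WVU code:

    import math

    def sine_warp(frame, amplitude=15.0, frequency=0.1):
        # Hypothetical filter: perturb each joint coordinate with a
        # sine of its own value, warping the motion nonlinearly.
        return [x + amplitude * math.sin(frequency * x) for x in frame]

    def capture(frames, filt):
        # The filter sits between the raw sensor frames and the bvh
        # writer, transforming the stream 'on the fly'.
        for frame in frames:
            yield filt(frame)

    # Three invented frames of rotation angles (degrees) for one joint.
    raw = [[10.0, 0.0, 5.0], [12.0, 1.0, 4.5], [14.0, 2.0, 4.0]]
    for out in capture(raw, sine_warp):
        print(out)   # these values would be written to the bvh file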
End products - the altered motion capture files were fed into three worlds:
1. The Blender 3d modeling program, where abstract avatars were used to
examine how behavior appears when it's abstracted from the body;
2. The Poser mannequin modeling program, where the motion capture files
were used to 'break' the mannequin bodies, as well as create any number of
videos; and
3. Works in Second Life and OpenSim virtual worlds which involved highly
distorted avatar performances and dances; these were used for live or
mixed reality performance, some augmented reality work, some video work
for conferences, gallery or museum installations, and some pieces made for
live or online choreographies. The ultimate goals of the virtual worlds
work were - what happens when the body is considered completely plastic;
what images of pain, death, wounding, or sexuality are conjured up by
distorted bodies; when does the body become a 'thing' among other things
in the world; what are the politics and anthropology of distorted avatars
and movements - if any.
Last year, Patrick Lichty enabled me to use the highly sophisticated
motion capture equipment at Columbia College, Chicago; here, we didn't
modify any software (we had neither the expertise nor permissions!);
instead, we worked closely with remapping the body in relation to the
30-40 markers that were placed on the body suits. This is where everything
becomes interesting, I think, since we were able to map up to four
dancers/performers into a single avatar output. It was difficult to do
this because the software tended to stop working and 'glitch' the avatar
into a somewhat inert Buddhist image when it could no longer make sense of
the input. But we were able to create complex movements, and one technique
stood out - the 'hive' technique or social avatar 2.0.
The usual mappings we did involved a single performer with the body nodes
remapped on him or her. So there was a topology involved; the hip was
usually the stable or root node. In West Virginia, we started using two
performers; this is what can happen: [demo the torsion/twist]. When I was
in Chicago, I was able to work with four performers, two on trapezes, all
choreographed into a single avatar - and all capable of watching the
results of their movement on a screen. So we tried:
1. Moving the avatar in utterly untoward ways, so that the result was a
limping or broken avatar; and
2. Moving the avatar in utterly normal ways, which meant distorted
movements on the part of the live performers. This was fascinating since
it resonated back to the performers, who themselves were twisted in their
movement. It was amazing choreography, created to 'normalize' the
equipment output.
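A minimal sketch of the 'hive' remapping itself - region names, data, and
the merge function are all hypothetical, not the Columbia College pipeline:

    # Each region of one output avatar is assigned to a different
    # performer's markers; the pelvis acts as the root node.
    assignment = {
        "pelvis": "performer_1",
        "left_arm": "performer_2",
        "right_arm": "performer_3",
        "legs": "performer_4",   # e.g. a performer on a trapeze
    }

    # One captured frame per performer: region -> (x, y, z) marker data.
    frames = {
        "performer_1": {"pelvis": (0.0, 0.9, 0.0)},
        "performer_2": {"left_arm": (-0.4, 1.3, 0.1)},
        "performer_3": {"right_arm": (0.4, 1.2, -0.1)},
        "performer_4": {"legs": (0.0, 0.4, 0.2)},
    }

    def merge(frames, assignment):
        # Build one avatar frame by pulling each region's coordinates
        # from the performer assigned to that region.
        return {region: frames[who][region]
                for region, who in assignment.items()}

    print(merge(frames, assignment))   # one frame of the 'social avatar'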
Foofwa and returning the avatar movements to 'real' life. [examples,
explanation.] The 'smearing' of divides between real and virtual, each
borrowing from, and resonating with, the other.
[Examples]
Note that with all of this, there are no programming errors, only other
avenues, glitches, to be explored. So the aesthetics and phenomenology of
glitch are important here as well. In virtual worlds and with motion
capture, there are in particular 'edge' glitches - within and without
gamespace boundaries - that define, in a sense, _all_ the possibilities of
the avatar, _all_ the possibilities of escape and normalcy...
The imaginaries I work with - virtual worlds; 3d modeling; 3d printing;
very low frequency (VLF) radio; scanner and shortwave radio; augmented
reality; playing music; codework (an entangled amalgam of code, writing,
and computer 'debris'); even birding, which requires abstractions ranging
from migration routes to morphs.
Finally, the idea that the virtual has always been with us, that the body
is always already inscribed, that culture goes all the way down, that
inscription and the digital are entangled amalgams as well, and that
abjection underlies everything, as well as pain, suffering, and death, all
part of it.
Thank you -
=======================================================================
Dance description (for the empyre email list, highly edited here)
I've been following this discussion and thought the best way I might
participate is to describe the work that I've done with Foofwa d'Imobilite
and others over the past decade or so. We went from using video and audio
tracks accompanying choreography, to work in Blender and Poser. The Poser
work was created from bvh (Biovision Hierarchy) files produced with
motion capture (mocap) equipment that used 21 sensors electromagnetically
interacting with an antenna. The antenna fed sensor signals into a hard-
wired 486 microprocessor that output coordinates; these were fed into a
second computer that created the bvh files themselves. We modified the
sensors in a number of ways - some through the software interface, and
some with limb assignment and position. We did a piece called 'heap,' for
example - the sensors were dropped in a heap and the resulting bvh file
fed into Poser. We did a 'star' piece, arranging the sensors in a star
formation on the floor and inverting it by exchanging +r for -r in each
sensor position.
We also reassigned sensors in several ways - dividing them between two
bodies, remapping inversely onto a single body, and so forth. All of this
produced bvh/Poser mannequins that were used as projections in live
performance, or chroma-keyed over dance/performance video.
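A rough sketch of the star inversion - hypothetical Python, whereas the
actual exchange happened inside the capture software's sensor assignments:

    def invert(frame):
        # Hypothetical form of the 'star' inversion: exchange +r for
        # -r by negating each sensor position, reflecting the star
        # formation through the origin.
        return [[-c for c in sensor] for sensor in frame]

    # One frame: (x, y, z) floor positions for three of the 21 sensors.
    star = [[1.0, 0.0, 0.0], [0.3, 0.0, 0.95], [-0.8, 0.0, 0.6]]
    print(invert(star))   # the inverted formation, fed on to Poser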
All of this work was at West Virginia University's Virtual Environments
Lab, headed by Frances Van Scoy. I received an NSF consultancy through
Sandy Baldwin, and a NYSCA grant; through the former, I had a grad assistant
from software engineering, Gary Manes, to assist me. We went into the
mocap software itself and Gary rewrote it, creating a dynamic/behavioral
filter interface, which would produce transforms from the sensor output -
before the 3-d assignment to bvh was made. This was modeled on graphic
software filtering, but the assignments were different - we applied a
function f(x) to the coordinates and/or modified the coordinate mechanism
or input streams themselves. The bvh files that were produced were sent
into Poser for editing; in some cases, Poser mannequin video was output.
But more and more, we edited in Poser to format the bvh for upload to
Second Life; this way we had live 3-d performance based on the transforms.
This performance could interact within Second Life itself - with other
online performers and audience - or through projection, without Second
Life, in real-space where performers might interact with the avatars.
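One hypothetical sketch of modifying the input streams themselves - here,
interleaving coordinate channels from two performers' sensor streams, as
in the division between two bodies mentioned above (all names and data
invented for illustration):

    def interleave(stream_a, stream_b):
        # Hypothetical input-stream modification: alternate coordinate
        # channels from two performers' streams into one output frame,
        # before the 3-d assignment to bvh is made.
        out = []
        for frame_a, frame_b in zip(stream_a, stream_b):
            out.append([a if i % 2 == 0 else b
                        for i, (a, b) in enumerate(zip(frame_a, frame_b))])
        return out

    a = [[1.0, 2.0, 3.0, 4.0]]   # one frame from performer A
    b = [[9.0, 8.0, 7.0, 6.0]]   # one frame from performer B
    print(interleave(a, b))      # [[1.0, 8.0, 3.0, 6.0]]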
The bvh files are complex, and the avatars perform, most often at high
speed, with sudden jumps and motions that involve them intersecting with
themselves. The motions appeared convulsive and sometimes sexualized.
Foofwa d'Imobilite used projections direct from Poser - about 100 files -
as part of Incidences, a piece produced in Geneva and widely shown. Foofwa,
along with Maud Liardon and my partner, Azure Carter, also imitated avatar
movement - and this fed back, from dance/performance into programming and
processing; at times it has been impossible to tell whether a particular
motion stream originated on- or off-line.
In SL, everything is pure, digital, protocol, numeric; by 'smearing' the
animation input, avatar appearance, and location, we create in-world and
out-world experiences that stray from the body and tend towards choratic
pre-linguistic drives. We've performed a lot at various limits of SL - on
sim edges for example, or at 4k 'up', where the physics changes. The
output is the usual - audience in-world or out-world, as well as video and
stills.
Foofwa, Maud, Azure, and I all traveled to the Alps, where avatar work
was re-enacted live; the performances were on the edge of the Aletsch
glacier. (This was sponsored by a Swiss grant.) What interested me most
here was the development and performance of a field - Foofwa dancing
with a VLF (very low frequency) radio antenna, for example - his body
coupled to and modified the electromagnetic capacitance surrounding the
wire.
We had done this indoors with Foofwa and Azure; outdoors, against the
glacier, spherics formed a deep part of the content. This also paralleled
work we did with the mocap sensors at WVU - using high-strength magnets,
we modified the local field lines, almost as if we were modeling general
relativity's gravity/mass interaction in 4-space - the resulting
distortions were similar in form.
I'm fascinated by these 'cosmologies in the small'; at the same time, I
want to avoid any easy and false metaphoric equivalence with scientific
theory.
As for the theory of the work we're doing, at least from a
phenomenological viewpoint, I've put up
http://www.alansondheim.org/sltheory.txt which
has also been published as a book.
At the moment I'm working with sim overload and self-reflexivity: on a
simple and neat level, what if a performing avatar connects to an object
('prim' complex) designed to move away from hir? The result is a total
[avatar/complex] that flees indefinitely - at least until the complex goes
out of world.
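A toy sketch of that runaway dynamic in one dimension - hypothetical
Python standing in for the in-world scripts:

    def step(avatar, obj, follow=1.0, flee=1.5):
        # The avatar is attached to the object and moves toward it;
        # the object is scripted to move away, slightly faster.
        direction = 1.0 if obj > avatar else -1.0
        return avatar + follow * direction, obj + flee * direction

    avatar, obj = 0.0, 1.0
    for t in range(5):
        avatar, obj = step(avatar, obj)
        print(t, round(avatar, 2), round(obj, 2))
    # The coupled pair recedes without bound - 'flees indefinitely' -
    # until, in-world, the complex would leave the sim.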