[NetBehaviour] YouTube Channel Material Culture

Alan Sondheim sondheim at panix.com
Wed Jul 14 03:48:57 CEST 2021



YouTube Channel Material Culture

423 videos
The first 56 include a fair amount of material from the Alps
and Pre-Alps, dancework that still amazes me. Some of it was
shot around the Aletsch Glacier, from above and near the
origin. To access, go to
https://www.youtube.com/user/asondheim/videos
Click on "Sort by" and then "Date Added Oldest" - the videos
are there -


Some background from notes for the SXSW Glitch Panel, 2013

Talk outline for SXSW Glitch Panel and background material

1. Who am I? New media artist who works with virtual worlds,
electronic literature, and musical improvisation. I'm
particularly interested in the relationship of the digital
"clean screen" and issues of abjection - dirtiness, wounding,
ecstasy, and distortions, of avatars and real flesh. In other
words, what lies beneath the surface.

2. I've used motion capture as a way of creating images, 3d
printed objects, virtual world avatars and performances, and
live choreographies - by distorting the mocap "chain" which goes
from live performer to computer file to an avatar representation
based on that file. (Illustrate hand gestures)

3. The chain can be altered in several ways:

a. by remapping motion capture nodes across the body -- in
other words, forcing the software through glitches (see the
sketch after this list)

b. by remapping mocap nodes across several bodies.

c. by reworking the mocap software itself so that it uses
dynamic or behavioral filters that transform the avatar
representation by transforming the files as they are collected.
(Illustrate hand gestures)

d. by moving the performers out to the edge of the game-space
itself -
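
A minimal sketch of (a) and (b) in code - the node names and the
frame format are invented here, not those of any particular
mocap package; a remap is just a permutation applied to every
captured frame:

  # Sketch of node remapping. Hypothetical format: one frame is
  # a dict of node name -> (x, y, z) position.

  def remap(frame, mapping):
      # mapping: output node -> input node it takes its data
      # from; nodes not mentioned pass through unchanged
      out = dict(frame)
      for target, source in mapping.items():
          out[target] = frame[source]
      return out

  # exchange arms and legs - a topology the performer lacks
  mapping = {"left_leg": "left_arm", "left_arm": "left_leg",
             "right_leg": "right_arm", "right_arm": "right_leg"}

  frame = {"hip": (0.0, 1.0, 0.0),
           "left_arm": (0.3, 1.5, 0.0),
           "right_arm": (-0.3, 1.5, 0.0),
           "left_leg": (0.15, 0.5, 0.0),
           "right_leg": (-0.15, 0.5, 0.0)}

  print(remap(frame, mapping))

For (b), the same function applies: merge two performers' frames
into one dictionary first, then remap across the combined body.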

4. The results can be:
distorted avatar movements in video / still images
live performances in Second Life and mixed reality media
3d printed objects
choreographies
images, videos, anything

5. Why?

How to confound the digital with the abject - with issues of
pain, wounding, death, ecstasy.

How to dis-embed the embedded news media - in other words, how
to reveal the deaths of soldiers and civilians around the world,
in spite of or through digital media. Of course avatars can't do
that, but they can resonate with the slaughter that occurs in
the real - rather than constantly resonating with the virtual
slaughter that occurs in video games. Get rid of the radical
disconnect.

1 digital is dirty: potential wells, channel noise, surface
hacking/cracking, leaking through the real
2 digital is corporate: protocols and codecs, open source,
closed source, committees, communities, TAZ (temporary
autonomous zones)
3 digital tends towards eternity, infinity, closure
4 digital forecloses the analogic, dirty, abjection
5 glitch occurs through or within surfaces
6 glitch tends towards novelty
7 glitch like wounding is intermediary between the accidental
and the determinate-stochastic
8 glitch is dead-inert
9 the opposite of glitch may well be suturing
10 glitch: jump-cut, suturing: continuity-girl
11 analog is dead-inert; glitch is the scarring of the analogic
12 reality: neither analog nor digital, entangled

visual data representation - empathetic identifications - the mocap work
social experimentation - four dancers into one avatar, for example
psychological experimentation - "reading" the modified avatar -
  identified as human or organism?
  identified as wounded or dancing?

GLITCH: 1, the use of glitch in virtual worlds and how it
extends avatar possibilities; 2, where glitch kills - errors
which give glitch an abject edge; 3, glitch in my motion-capture
work; 4, psychoanalytics of glitch; 5, glitch and programming -
'not a bug but a feature.'

Difference between unutterable pain and its (external)
representation, and utterable programming of its (external,
at a double remove) representation.


I'm interested in human representation within the virtual; this
implies both an image and a dynamics, the two of course
entangled.

We recorded one or more performers, using remappings of nodes,
including some with 'impossible' topologies, in terms of human
movement; and

We recorded, through a rewriting of the motion capture software
itself - what I've called 'dynamic filtering' - transforming
standard motion capture files on the fly by inserting filters
between the input data and the outputted files. These filters
parallel the use of filters in Gimp or Photoshop [explain].
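
A sketch of the shape of such a filter - the perturbation
functions here are invented for illustration; the actual rewrite
lived inside the capture software itself. A function sits in the
stream between the raw samples and the written file, just as a
Gimp filter sits between the source image and the saved one:

  import math

  # Sketch of 'dynamic filtering': functions inserted between
  # the incoming coordinate stream and the outputted file. The
  # stream format (time, (x, y, z)) and both filters are invented.

  def jitter(t, x, y, z):
      # time-dependent perturbation: the avatar shudders
      return x + 0.1 * math.sin(40 * t), y, z

  def quantize(t, x, y, z, step=0.25):
      # snap coordinates to a grid: stepped, glitched movement
      snap = lambda v: round(v / step) * step
      return snap(x), snap(y), snap(z)

  def filtered(stream, filters):
      # apply each filter in order to every sample before output
      for t, (x, y, z) in stream:
          for f in filters:
              x, y, z = f(t, x, y, z)
          yield t, (x, y, z)

  # a fake capture stream: one point sweeping an arc for a second
  raw = ((i / 100, (math.cos(i / 100), math.sin(i / 100), 0.0))
         for i in range(100))

  for t, pos in list(filtered(raw, [jitter, quantize]))[:3]:
      print(t, pos)  # in the real pipeline, written to the file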

End products - the altered motion capture files were fed into
three worlds:

1. The Blender 3d modeling program, where abstract avatars were
used to examine how behavior appears when it's abstracted from
the body;

2. The Poser mannequin modeling program, where the motion
capture files were used to 'break' the mannequin bodies, as
well as create any number of videos; and

3. Works in Second Life and OpenSim virtual worlds which
involved highly distorted avatar performances and dances; these
were used for live or mixed reality performance, some augmented
reality work, some video work for conferences, gallery or museum
installations, and some pieces made for live or online
choreographies. The ultimate goals of the virtual worlds work
were - what happens when the body is considered completely
plastic; what images of pain, death, wounding, or sexuality are
conjured up by distorted bodies; when does the body become a
'thing' among other things in the world; what are the politics
and anthropology of distorted avatars and movements - if any.

Last year, Patrick Lichty enabled me to use the highly
sophisticated motion capture equipment at Columbia College,
Chicago; here, we didn't modify any software (we had neither the
expertise nor permissions!); instead, we worked closely with
remapping the body in relation to the 30-40 markers that were
placed on the body suits. This is where everything becomes
interesting, I think, since we were able to map up to four
dancers/performers into a single avatar output. It was difficult
to do this because the software tended to stop working and
'glitch' the avatar into a somewhat inert Buddhist image when it
could no longer make sense of the input. But we were able to
create complex movements, and one technique stood out - the
'hive' technique or social avatar 2.0.
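
In code terms, the hive mapping amounts to something like this -
names and frame format invented; the Columbia system did the
assignment inside its own marker software. Each joint of the
single output skeleton is driven by a different performer:

  # Sketch of the 'hive' / social-avatar mapping: one output
  # skeleton, each joint taking its data from a different
  # performer. All names and values invented for illustration.

  def hive_frame(performer_frames, assignment):
      # performer_frames: {performer: {joint: (x, y, z)}}
      # assignment: {avatar joint: (performer, source joint)}
      return {joint: performer_frames[who][src]
              for joint, (who, src) in assignment.items()}

  assignment = {"hip":       ("dancer1", "hip"),
                "spine":     ("dancer2", "spine"),
                "left_arm":  ("trapeze1", "left_arm"),
                "right_arm": ("trapeze2", "right_arm")}

  frames = {"dancer1":  {"hip": (0.0, 1.0, 0.0)},
            "dancer2":  {"spine": (0.0, 1.3, 0.1)},
            "trapeze1": {"left_arm": (0.5, 2.0, 0.0)},
            "trapeze2": {"right_arm": (-0.5, 2.1, 0.0)}}

  print(hive_frame(frames, assignment))

The freeze we hit came when the combined inputs no longer
resolved into one plausible body.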

The usual mappings we did involved a single performer with the
body nodes remapped on him or her. So there was a topology
involved; the hip was usually the stable or root node. In West
Virginia, we started using two performers; this is what can
happen: [demo the torsion/twist]. When I was in Chicago, I was
able to work with four performers, two on trapezes, all
choreographed into a single avatar - and all capable of watching
the results of their movement on a screen. So we tried:

1. Moving the avatar in utterly untoward ways, so that the
result was a limping or broken avatar; and

2. Moving the avatar in utterly normal ways, which meant
distorted movements on the part of the live performers. This was
fascinating since it resonated back to the performers, who
themselves were twisted in their movement. It was amazing
choreography, created to 'normalize' the equipment output.

Note that with all of this, there are no programming errors,
only other avenues, glitches, to be explored. So the aesthetics
and phenomenology of glitch are important here as well. In
virtual worlds and with motion capture, there are in particular
'edge' glitches - within and without gamespace boundaries - that
define, in a sense, _all_ the possibilities of the avatar, _all_
the possibilities of escape and normalcy...

The imaginaries I work with - virtual worlds; 3d modeling; 3d
printing; very low frequency (VLF) radio; scanner and shortwave
radio; augmented reality; playing music; codework (an entangled
amalgam of code, writing, and computer 'debris'); even birding,
which requires abstractions ranging from migration routes to
morphs.

Finally, the idea that the virtual has always been with us, that
the body is always already inscribed, that culture goes all the
way down, that inscription and the digital are entangled
amalgams as well, and that abjection underlies everything, as
well as pain, suffering, and death, all part of it.

Thank you -

============================================================

Dance description (for the empyre)

I've been following this discussion and thought the best way I
might participate is to describe the work that I've done with
collaborators over the past decade or so. We went from using
video and audio tracks accompanying choreography to work in
Blender and Poser. The Poser work was created from bvh
(Biovision Hierarchy) files produced with motion capture
(mocap) equipment that used 21 sensors electromagnetically
interacting with an antenna. The antenna fed sensor signals into
a hard-wired 486 microprocessor that output coordinates; these
were fed into a second computer that created the bvh files
themselves. We modified the sensors in a number of ways - some
through the software interface, and some with limb assignment
and position. We did a piece called 'heap', for example - the
sensors were dropped in a heap and the bvh file fed into Poser.
We did a 'star' piece, arranging the sensors in a star formation
on the floor and inverting it by exchanging each sensor
position's +r for -r. We also reassigned sensors in several
ways - dividing them between two bodies, remapping inversely
onto a single body, and so forth. All of this produced bvh/Poser
mannequins that were used as projections in live performance,
or chroma-keyed over dance/performance video.
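
The star inversion, as a sketch - the coordinates are invented
for illustration: exchanging +r for -r reflects each sensor's
position through the origin, so the figure folds through itself:

  import math

  # Sketch of the 'star' inversion: sensors at the points of a
  # star on the floor, each position's +r exchanged for -r,
  # i.e. reflected through the origin.

  def star(n=5, r=1.0):
      # n sensor positions on the floor plane (y = 0)
      return [(r * math.cos(2 * math.pi * k / n), 0.0,
               r * math.sin(2 * math.pi * k / n)) for k in range(n)]

  def invert(positions):
      # +r -> -r: reflect every sensor through the origin
      return [(-x, -y, -z) for x, y, z in positions]

  for before, after in zip(star(), invert(star())):
      print(before, "->", after)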

All of this work was at West Virginia University's Virtual
Environments Lab, headed by Frances van Scoy. I received an NSF
consultancy through Sandy Baldwin and a NYSCA grant; through the
former, I had a grad assistant from software engineering, Gary
Manes, to assist me. We went into the mocap software itself and
Gary rewrote it, creating a dynamic/behavioral filter interface,
which would produce transforms from the sensor output - before
the 3-d assignment to bvh was made. This was modeled on graphic
software filtering, but the assignments were different - we
applied a function f(x) to the coordinates and/or modified the
coordinate mechanism or input streams themselves. The bvh files
that were produced were sent into Poser for editing; in some
cases, Poser mannequin video was output. But more and more, we
edited in Poser to format the bvh for upload to Second Life;
this way we had live 3-d performance based on the transforms.
This performance could interact within Second Life itself - with
other online performers and audience - or through projection,
without Second Life, in real-space where performers might
interact with the avatars.
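
For readers who haven't met the format: a bvh file is plain text
in two parts, a HIERARCHY of nested joints with offsets and
channel lists, then a MOTION block with one line of channel
values per frame. A toy writer - two joints and two frames,
where real capture skeletons run to dozens of joints - might
look like this:

  # Sketch: writing a toy two-joint bvh file of the kind fed to
  # Poser and Second Life. Offsets and values invented here.

  header = [
      "HIERARCHY",
      "ROOT Hips",
      "{",
      "  OFFSET 0.0 0.0 0.0",
      "  CHANNELS 6 Xposition Yposition Zposition"
      " Zrotation Xrotation Yrotation",
      "  JOINT Chest",
      "  {",
      "    OFFSET 0.0 5.0 0.0",
      "    CHANNELS 3 Zrotation Xrotation Yrotation",
      "    End Site",
      "    {",
      "      OFFSET 0.0 5.0 0.0",
      "    }",
      "  }",
      "}",
      "MOTION",
  ]

  frames = [  # 6 Hips channels + 3 Chest channels per frame
      [0, 0, 0, 0, 0, 0, 0, 0, 0],
      [0, 0, 0, 0, 0, 0, 10, 0, 0],  # Chest bends 10 degrees
  ]

  with open("toy.bvh", "w") as f:
      f.write("\n".join(header) + "\n")
      f.write("Frames: %d\n" % len(frames))
      f.write("Frame Time: 0.033333\n")
      for channels in frames:
          f.write(" ".join("%g" % v for v in channels) + "\n")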

The bvh files are complex, and the avatars perform, most often
at high speed, with sudden jumps and motions that involve them
intersecting with themselves. The motions appear convulsive and
sometimes athletic or erotic.

In SL, everything is pure, digital, protocol, numeric; by
'smearing' the animation input, avatar appearance, and location,
we create in-world and out-world experiences that stray from
body and tend towards choratic and pre-linguistic drives. We've
performed a lot at various limits of SL - on sim edges for
example, or at 4k 'up', where the physics changes. The output is
the usual - audience in-world or out-world, as well as video and
stills.
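
One way to read that 'smearing' in code - a sketch only, with an
invented frame format: blend each incoming frame with the
previous output, so the movement lags and blurs behind itself:

  # Sketch of 'smearing' an animation stream: each output frame
  # is a blend of the new sample and the previous output. A
  # frame here is just a flat list of channel values.

  def smear(stream, alpha=0.2):
      prev = None
      for frame in stream:
          if prev is None:
              prev = list(frame)
          else:
              prev = [alpha * v + (1 - alpha) * p
                      for v, p in zip(frame, prev)]
          yield prev

  ramp = ([float(i), 0.0, 0.0] for i in range(5))
  for frame in smear(ramp):
      print(frame)  # the output trails the input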

___


