[NetBehaviour] Positive AI
Andreas Dzialocha
kontakt at andreasdzialocha.com
Thu Jan 18 13:39:36 CET 2018
Hej Lara, Hej NetBehaviours,
as some already mentioned, I also like to think about AI as something
deeply embedded in human culture. By using machine learning algorithms
(for example) we do not only learn something about the data we are
trying to analyse but also about ourselves as humans living in this
world - working with machines.
Every time an AI "fakes" being very human-like at something ("wow, this
looks like a human painted it; wow, this sounds like a "real" composer"
etc.), it gets extremely sad and boring. These feel like the main
narratives and promises of the startup industry.
Luciana Parisi is brilliant in her work, approaching this topic by
looking at (big-)data processing algorithms as a paradigm shift in the
culture of computation (please send me any recommendations for similar
texts if you have any):
"Soft thought is not the new horizon for cognition, or for the
ontological construction of a new form of rationality. Instead, soft
thought stems from the immanent ingression of incomputable data into
digital programming." (her book "Contagious Architecture" for example)
Here I find potentials of thinking about AI in a more "positive" way:
Large datasets are in themselves streaked with human culture, including
all errors and noise. Through processing that data - in a way neural
networks do - we also encode these "human" characteristics in the
computed results, but interpreted by a rather simple chain of
differentiable layers. Through computational power and research we are
suddenly able to derive or "see" something in these results. I find
this already pretty fascinating.
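To make the "rather simple chain of differentiable layers" concrete, here is a minimal sketch of my own (not from any particular framework, with hand-picked toy numbers rather than learned weights): each layer is just a weighted sum followed by a smooth nonlinearity, and whatever human noise the data carries passes through nothing more exotic than this chained arithmetic.

```python
import math

def dense(inputs, weights, bias):
    """One differentiable layer: weighted sums plus a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, bias)]

# A hypothetical tiny "data point" and hand-picked weights, purely for
# illustration - real networks learn these values from large datasets.
x = [0.5, -1.0, 0.25]
h = dense(x, [[0.1, 0.4, -0.2], [0.3, -0.1, 0.5]], [0.0, 0.1])  # layer 1
y = dense(h, [[0.7, -0.6]], [0.05])                             # layer 2

print(y)  # a single number, squashed into the range (-1, 1) by tanh
```

The point is only how little machinery is involved: the "interpretation" the text above describes is entirely this composition of simple differentiable functions.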
Then it gets even more interesting when we find artefacts of
computation in the results; then one could ask questions like "What
does a machine 'see' in these images?", or, when we generate music
through AI, "I can hear an AI 'hearing' Wagner" - that's especially
interesting when the results are different (!) from what we are used
to, maybe even disturbing - in a good way.
From all of this we can derive questions of encoded inequality, class,
racism, politics of (automated) mass image production / text generation
/ decision making / trading - how do we see and hear, what do we want to
see and hear .. and so on.
The more we understand that the culture of computation has this weird
ambivalence of being both very close to us and very alien at the same
time, the more we can start using it as something "semi-outside" of us
to create alternative narratives told by stupid machines - not ones of
super slick promises to find an answer to all our problems, but
hopefully very lovable forms which are surprising and challenging.
This is a very loose write-up - I hope it makes some sense. But I am
super happy to talk about it!
Greetings!
Andreas