[NetBehaviour] Positive AI

Marco Donnarumma lists at marcodonnarumma.com
Wed Jan 17 13:32:02 CET 2018


Dear Lara,

thanks for your post, I wish there were more students asking themselves the
question you pose!

I have been working with AI in performance art, human-computer interaction
and robotics for about five years, as an artist and a researcher. Like you,
I believe it is crucial to think critically about what the word
"intelligence" really means, and, even more importantly, what it means in
each specific context where it is used.

The kind of AI being hyped in the media and in Silicon Valley these days is
merely a small sub-branch of the broad discipline concerned with creating
computational "intelligence". When most people say AI today, they actually
mean "deep learning", which is simply a mathematical method for teaching
machines to recognise patterns. There are literally hundreds of other
methods, and the reason this particular one is so hyped today is that it
works well **only** with zillions of data points.

A nice selection of methods lives here:
http://www.asimovinstitute.org/neural-network-zoo/
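To make "teaching machines to recognise patterns" concrete, here is a toy
sketch (my own illustration, not from the original post): a single
artificial neuron trained by gradient descent to recognise the logical OR
pattern. Deep learning stacks millions of such units, which is part of why
it needs so much data and compute.

```python
# A single artificial neuron learning the logical OR pattern.
# This is the smallest possible instance of "pattern recognition
# by adjusting weights from data" that deep learning scales up.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: input pairs and the pattern (OR) we want recognised.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, start at zero
lr = 0.5                   # learning rate

for _ in range(5000):      # repeat: predict, measure error, adjust
    for (x1, x2), target in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        err = pred - target           # cross-entropy gradient at the output
        w1 -= lr * err * x1           # nudge each weight against the error
        w2 -= lr * err * x2
        b  -= lr * err

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # the neuron has learned the OR pattern: [0, 1, 1, 1]
```

Four examples suffice here only because the pattern is trivial; richer
patterns need proportionally more data, which is the point above.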

And, guess what, Google is one of the major investors in deep learning
research, because they **have** those zillions of data points. And they are
in a position of power because other researchers need their data to develop
their own projects. But deep learning was first used at least 30 years ago,
and it was discarded precisely because the amount of data it needed to work
properly was ridiculous for the time.

You may know where this argument leads. The comments about "AI" that we are
fed every day by the media are actually arguments about a very specific
piece of technology, whose development is possible only thanks to the
inexhaustible (computational, human and financial) power that major
corporations exert over us by mining our everyday lives. This alone should
give enough food for thought.

In regard to the possibility of an AI nearing human capabilities, rest
assured that is an idiotic mantra whose believers include mostly tech bros,
transhumanists and tech investors, generally white, male and wealthy. For
my current projects, I work closely with a neurorobotics laboratory; they
build humanoid robots to study how they develop intelligence. All the
scientists there, and most scientists in the field, will unsurprisingly
make clear that a crow is far, far smarter than the best AI today.

The development of these kinds of technologies is not constantly evolving;
rather the contrary. It gets stuck many more times than it moves ahead.
So here are my two cents. You can find further ideas along these lines in
this article by one of the fathers of AI, Rodney Brooks. I do not share his
general view of AI either, but he definitely makes several excellent points
here:

https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/

Wishing you well,
and strive to be as critical as you can!

--
Marco Donnarumma, Ph.D.

*Performing bodies, sound and machines*
Universität der Künste Berlin
http://marcodonnarumma.com

Next
Feb 03 | Corpus Nil: Eingeweide - New work premiere, commissioned by CTM
Festival @ HAU2, Berlin
Feb 23-25 | Amygdala MKII @ Dortmund Conference on Digitality and Theater
Mar 13-14 | Biophysical Music Concert + Lecture @ Oxford University

Latest Essay
"Beyond the Cyborg: Performance, attunement and autonomous computation"
<https://www.researchgate.net/publication/316989653_Beyond_the_Cyborg_Performance_attunement_and_autonomous_computation>

Studio
Einsteinufer 43, Raum 212
10587 Berlin, DE
m: +4915221080444



>
> Message: 1
> Date: Tue, 16 Jan 2018 11:59:57 +0100
> From: Lara Stumpf <y at lara-stumpf.de>
> To: netbehaviour at lists.netbehaviour.org
> Subject: [NetBehaviour] Positive AI
> Message-ID: <73D07489-2343-4BF9-AFF4-036C37D6A01A at lara-stumpf.de>
> Content-Type: text/plain; charset="utf-8"
>
> Dear NetBehaviour,
>
> I am a design and art student and have been working on my graduation
> project on the topic of Artificial Intelligence. My approach is to create
> an AI-something to support an everyday activity. However, I am lost. I have
> done a lot of research and most of the time I am very critical: a lot of
> power is given to algorithms, and their reliance on statistics creates a
> big and dangerous mainstream (like those big data algorithms deciding what
> we see online), some inventions are dangerous (like self-driving cars), and
> most of the time inventions could be cool if we ignored the evil people
> behind them.
>
> But I don't want to create a critical art object, I want to create
> positive AI. Something to support us (with a prototype). How could AI
> support us while not replacing us? As Joseph Weizenbaum states, a computer
> cannot be human; but right now, all those AI developers try to make a human
> AI happen. I don't want deep learning algorithms to analyse movies with
> their trailers and success statistics in order to find the solution for the
> perfect trailer, replacing creativity with mainstream in the future.
> So, supporting us could work by assisting us, like Siri or Alexa. Maybe I
> could research an assistant-AI for my graduation presentation? Well, there
> is hardly anything it could assist me with. I don't want to have AI help me
> with my content because I dislike content being built up from statistics.
> And I want to hold the presentation myself; I don't want to listen to
> computers instead of humans. Everything else just feels like small gadgets.
> But maybe AI might help me by creating ideas? Mixing statistically useful
> components or, maybe even more interesting, mixing useless components to
> create new ideas <http://artbot.space/>? Hm…
>
> My thoughts go on and on. So, what would happen if I thought about the
> relationship between AI and us? Or maybe AI could help the relationship
> between humans? But not just like an app where people are assigned to each
> other. I don't know. Would that even really be AI? Or just boring
> algorithms? Where would we really need AI? Maybe in nature, or at least
> outside, where the surroundings keep changing all the time, so we would at
> least need some kind of AI for orientation?
>
> Thinking about nature made me think about bees dying. Maybe AI could help
> us with the environment if we silly humans don't do it? Maybe I could
> create a small robot to drive around and do some guerrilla gardening
> <https://en.wikipedia.org/wiki/Guerrilla_gardening>, like scattering a few
> seeds in order to have more flowers in cities. What do you think?
>
> Thank you!
>
> Lara


More information about the NetBehaviour mailing list