Machinocene and artificial intelligence

It seems I am becoming more attentive to newspaper articles related to the coursework. Just a few days after the ‘Transhumanism’ article appeared, I read in the Swiss newspaper NZZ (22 June 2017) an article by Eduard Kaeser, a scientist and philosopher, titled ‘Auf der Schwelle zum Maschinozän’ (On the threshold of the Machinocene).

The author refers to the Cambridge philosopher Huw Price, who stated that ‘non-biological machines might be much more intelligent than we are’ (Price, 2016). Price points to recent developments in understanding the mechanisms of intelligence, and to the commercial interest in funding this research by companies such as Google, Amazon, Facebook, IBM and Microsoft.

Turing test - https://en.wikipedia.org/wiki/Turing_test

One aspect Price refers to when talking about artificial, or non-biological, intelligence is the Turing test, proposed by Alan Turing in 1950. In this test a human evaluator has to discern whether written responses come from a machine or from another human being. Turing’s initial idea was to find an alternative to asking whether machines can ‘think’. That led to the notion of the ‘thinking machine’ and further to the ‘intelligent machine’. The main critique is that the test set-up merely checks responses from a machine and a human, comparing machine behaviour with human behaviour.

Interestingly, the CAPTCHAs used on web pages nowadays – copying distorted numbers and letters, or assigning pictures to a word (symbol) – are a kind of reverse Turing test, in which the machine, the web page program, checks whether a human or a bot is entering the page. A ‘test’ that can nevertheless be defeated by other ‘machine programs’.
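The reverse-Turing-test idea can be reduced to a simple challenge–response check. The sketch below is my own minimal illustration, not any real CAPTCHA implementation: a real system would render the token as a distorted image that is (ideally) hard for a bot to read, whereas here the distortion step is skipped.

```python
import random
import string

def make_challenge(length=5):
    """Generate a random token the visitor must type back.
    (A real CAPTCHA would render this as a distorted image.)"""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def verify(challenge, response):
    """The page program's check: does the answer match the challenge?"""
    return response.strip().upper() == challenge

challenge = make_challenge()
# A human reads the distorted image and types the token back:
print(verify(challenge, challenge.lower()))   # True
print(verify(challenge, "wrong answer"))      # False
```

Here the machine is the examiner and the human the examinee, the inverse of Turing’s original arrangement, which is exactly why image-recognition software that can read the distorted token defeats the test.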

The main topic explored by Kaeser is the ‘deep learning’ of neural networks, derived from and analogous to the neuro-biological transmission of information in human and animal brains.

Translated into machine learning, this basically means image recognition through a multistep and recursive approach: from dark–light pixel detection, over simple shapes, towards more complex forms. This form of pattern recognition is, for example, behind Google’s reverse image search. The drawback, according to the author, is that the machine at times responds with wrong answers: dark-skinned people, for instance, were identified as gorillas. Another example mentioned is the application of deep learning algorithms in patient diagnosis. Unfortunately, the responses by the machine were neither appropriate nor comprehensible to the people operating it.
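The first step of that hierarchy – detecting dark–light transitions in raw pixels – can be sketched in a few lines. This is a toy illustration of the principle, assuming a hand-written convolution and a made-up 4×4 image, not Google’s actual system: an edge-detecting kernel responds strongly where dark pixels meet light ones, and deeper layers would then combine such edge maps into shapes and forms.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution (valid mode): slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Layer 1: a dark/light detector for vertical edges.
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

# A toy 4x4 "image": dark left half (0), bright right half (1).
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

edges = convolve2d(image, edge_kernel)   # strongest response at the boundary
activated = np.maximum(edges, 0)         # ReLU non-linearity
# A further layer would combine such edge maps into simple shapes,
# and the next into more complex forms -- the multistep hierarchy above.
print(activated)
```

The recursive stacking of such simple detectors is what makes the final result so hard to interpret: each layer only sees the outputs of the previous one, never the original image.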

The key message of the article is the difficulty of understanding and interpreting results coming from a ‘deep learning’ algorithm. The author raises the question of what is more relevant to human beings: understanding the process of arriving at a result or conclusion (the why), or the result as such (the what).

He argues for a sensibility towards human behaviour. Machine intelligence, and especially the people behind the development of such machines, devices and algorithms, need to consider human behaviour and the human condition as such, i.e. the users of those machines. He argues in the context of the critiques of Transhumanism (see my blog post) that defend human uniqueness against the ‘rationality’ of artificiality. The bottom line is the notion of inscrutability as a discourse between artificial intelligence and the human condition.

 

Conclusion:

The article and the author’s argumentation make me wonder whether the reasons to defend human behaviour against deep learning algorithms are merely built on the aspect of understanding the process of arriving at a result. Is the why in problem solving the most important aspect? Do we not use many devices in daily life without knowing how they work, or why a machine is not working as expected? Do we not simply expect results from machines? Or do we really expect that we could, if we wished, reproduce them ourselves?

To me, the mere reduction to the interpretability of results or conclusions seems a weak argument against artificial intelligence. Perhaps it is just a reflection and representation of the human condition as such.

 

Reference:

Kaeser, E. (2017) ‘Auf der Schwelle zum Maschinozän’, Neue Zürcher Zeitung, 22 June.

Price, H. (2016) ‘Now it’s time to prepare for the Machinocene’, Aeon.
