On June 28, 2015, an African-American programmer named Jacky Alciné tweeted in obvious annoyance: “Google Photos, y'all fucked up. My friend's not a gorilla,” after an artificial intelligence (AI) system had decided that the person appearing next to him in one of his selfies was not a dark-skinned woman but a great mountain ape.
Because the people misidentified were Black, accusations of racism soon followed, and some even asserted that such mistakes would never happen with Caucasians. The next day Yonatan Zunger, then a Google engineer and today head of the ethics office at the company Humu, stepped in to clarify: “I wish that were so! Until recently our algorithm was confusing white individuals with dogs and seals. Machine learning is hard!”
On this point, Professor Tom Froese, of the Institute for Research in Applied Mathematics and Systems (IIMAS) at UNAM, explains that although we place too much trust in AI, it stumbles and falls flat on its face more often than expected because, unlike human intelligence, it operates on patterns and does not grasp subjectivity or meaning.
“Take a one and a zero; for us they may represent our nephew's ten years, the members of a family, or the blocks left to get home, but a computer does not know whether those numbers refer to an age, the pixels of a postcard, or the number a soccer player wears on his shirt. To it they are just a bare one and a zero, and it is incapable of assigning them any meaning, something humans do all the time without even noticing it.”
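To put that in concrete terms, consider a minimal sketch (an illustration of the idea, not something from the interview): the same bare value appears in three contexts, and any meaning lives only in the names we humans attach to it.

# The same bare value, 10, placed in three contexts. To the machine all
# three are identical integers; the "meaning" lives only in our labels.
nephews_age_in_years = 10
pixel_brightness = 10
jersey_number = 10

# Python compares them as plain numbers, with no sense that equating an
# age with a shirt number is, for a person, a category mistake.
print(nephews_age_in_years == jersey_number)     # True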
In his article The Problem of Meaning in AI and Robotics: Still with Us after All These Years (just published in the journal Philosophies, from MDPI, and written in collaboration with Shigeru Taguchi, of Hokkaidō University in Japan), Froese dissects one of the most widespread beliefs about AI: that, given its great advances in such a short time, artificial intelligence will become comparable to human intelligence. “My position here is skepticism, especially after seeing it fail in areas where we would not.”
To understand why even the most sophisticated algorithms err, the academic uses GoogLeNet as an example, an AI system that recognizes images through convolutional neural networks and is so effective at this task that some have claimed it actually understands what it is seeing. “But can it, or do we just like to believe that?” asks Tom Froese.
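As a rough illustration of what such a system does, here is a minimal sketch, assuming the PyTorch and torchvision libraries and a hypothetical input file photo.jpg (not code from Froese's paper): a pretrained GoogLeNet is asked to classify an image, and all it returns is an index into a fixed list of categories.

import torch
from torchvision import models, transforms
from PIL import Image

# Load GoogLeNet with weights pretrained on the ImageNet photo collection.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing: resize, crop and normalize pixel values.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # hypothetical input file
x = preprocess(image).unsqueeze(0)               # add a batch dimension

with torch.no_grad():
    logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# The output is only an index into 1,000 ImageNet categories; the network
# attaches no meaning to the label beyond the patterns it was trained on.
print(predicted_class)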
“This system is constantly trained through deep learning, that is, it is exposed to millions of photographs from which it extracts useful patterns for classifying, and for that reason it is said to ‘learn’. But if we introduce a minimal alteration into the image to be analyzed, we can make it fail spectacularly without it even being aware of it.”
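The “minimal alteration” Froese describes corresponds to what the machine-learning literature calls an adversarial perturbation. The sketch below shows one common recipe, the fast gradient sign method; it reuses model, x and predicted_class from the previous sketch and is offered only as an illustration of the general idea, not as the specific manipulation discussed in the paper.

import torch
import torch.nn.functional as F

# Fast gradient sign method: nudge every pixel a tiny amount in the
# direction that most increases the model's loss for its own prediction.
epsilon = 0.01                                   # barely visible change
x_adv = x.clone().detach().requires_grad_(True)

loss = F.cross_entropy(model(x_adv), torch.tensor([predicted_class]))
loss.backward()
x_perturbed = x_adv + epsilon * x_adv.grad.sign()

with torch.no_grad():
    new_class = model(x_perturbed).argmax(dim=1).item()

# A change imperceptible to a person can flip the predicted category,
# and the network gives its new answer without any sign of doubt.
print(predicted_class, "->", new_class)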
No Comment