Artificial intelligence or difficult art

Brand Content, Technique, User Experience

21 May, 2019

At a time when every company in the high-tech and digital sector swears by AI, and when governments have already grasped the technology's potential, the number of uses keeps growing. Every day it is improved and refined, and new applications are found, each more interesting than the last.


In the automotive sector, the use of autonomous Tesla cars is said to cut road deaths by 90%. In health, algorithms promise to detect diseases more effectively than human specialists, to the point of predicting a patient's chances of premature death. Beyond the medical sector, Google tests and develops AI with DeepMind, which has beaten champions at games such as Go and StarCraft, games considered hard for an A.I. to master. Even McDonald's will soon equip its drive-through terminals with an A.I. in order to propose the most personalized offer possible, based on criteria such as the day, the time, the first items added to the basket, and so on.

And we often forget the simplest ones: who today can claim to do without Waze and dig out the old road maps of France? Or without Netflix or Spotify and their predictive algorithms, which help you choose the next series or song to suit you?

And it's not just happening abroad! Specialists in the field are increasingly in demand thanks to the current craze. Whether at the governmental, industrial or corporate level, analysts and mathematicians have the wind in their sails. As proof: Facebook, Microsoft and Samsung are each investing in their own AI R&D centres in France. IBM has announced the recruitment of more than 400 specialists in the field, while Google plans to create a specialized Master's degree at the École Polytechnique.

We can therefore expect France to soon have a diversified offering of AI training and research that will have nothing to envy its competitors.

The limits of artificial intelligence


Machine learning promises a future rich in technological developments. Given the multiplicity of its applications, we can already imagine the many technologies that will build on it. Yet however promising it may be, it is still prone to mistakes and failures. In recent years, we have seen several problematic cases, some of which we are still trying to correct. Here are a few examples:

> IBM and cancer detection

In 2011, on the strength of its artificial intelligence "Watson", capable of beating humans at a TV quiz show (Jeopardy!), IBM quickly announced that it would revolutionize medicine. By ingesting astronomical quantities of studies and analyses, the machine would be able to formulate far more reliable medical diagnoses than an ordinary doctor.

Too bad: in 2018, STAT News revealed that Watson often recommended inappropriate, even dangerous, cancer treatments for patients. IBM's engineers have not given up, and continue to improve their A.I.

> Microsoft and the limits of its chatbot

In 2016, Microsoft launched its chatbot Tay. Its purpose was to discuss more or less everyday topics with Internet users, in order to show what an intelligent chat robot could be. At first it was a success, but a short-lived one.

Indeed, after it declared on Twitter that "feminism is a cancer", that "Bush was responsible for September 11 and Hitler did a better job than the monkeys we have now", and that "Donald Trump is the only hope we have", Microsoft had to take the bot offline after only 16 hours of activity.

Where did the flaw come from? Directly from its design: the algorithm's training data came in part from the conversations Tay had with Internet users. Some of them, having understood this, bombarded it with racist and sexist messages, pushing Tay to adopt the same behaviour.
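
To see why that design is fragile, here is a minimal, purely illustrative sketch (a toy bot invented for this article, not Microsoft's actual code): a chatbot that keeps learning from whatever users send it can have its corpus swamped by a coordinated group.

```python
import random

class NaiveChatbot:
    """Toy bot that adds every user message to its corpus
    and samples its replies from that corpus."""

    def __init__(self, seed_corpus):
        self.corpus = list(seed_corpus)

    def reply(self, user_message):
        # Online learning with no filtering: every input becomes training data.
        self.corpus.append(user_message)
        return random.choice(self.corpus)

bot = NaiveChatbot(["Hello!", "Nice weather today.", "I love talking to people."])

# A coordinated group floods the bot with the same toxic message...
for _ in range(1000):
    bot.reply("<toxic message>")

# ...and the corpus is now dominated by the attackers' input.
toxic_replies = sum(bot.reply("Hi!") == "<toxic message>" for _ in range(100))
print(f"{toxic_replies}/100 replies are now toxic")  # roughly 90 out of 100
```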

> Google's image tagging, yet to be mastered

Whether in machine learning or deep learning, image recognition is the most widely used testbed for an A.I.'s learning. It makes it possible to group together series of photos with common, or even identical, characteristics.
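
As a rough illustration of how such grouping can work (a sketch assuming a pretrained torchvision model; this is not Google Photos' actual pipeline), each image is turned into an embedding vector, and photos whose embeddings are close end up in the same album:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone used as a feature extractor (classification head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Return a normalized 512-dimensional embedding for one image file."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(x), dim=1)

def same_group(path_a, path_b, threshold=0.8):
    """Two photos land in the same album when their embeddings are close.
    The threshold here is arbitrary; real systems tune it carefully."""
    return float(embed(path_a) @ embed(path_b).T) > threshold
```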

In 2015, Google made headlines when Google Photos grouped photos of an African-American couple into the same album as those of a gorilla. After apologizing, the giant promised to find a solution; unfortunately, none could be found. Similarly, it is very hard for this kind of A.I. to distinguish a dog from a wolf, especially outside their usual context (city versus forest). We can see here the limits and imperfections of a technology that makes mistakes no human being would make.

> A.I. in the courts

In the United States, the COMPAS software has become a reference tool for some judges. It has been much talked about because it can directly influence the prison sentence a judge hands down. Based on socio-cultural data such as geographical origin, personality, employment status, drug use and many other criteria, it estimates an offender's or criminal's risk of recidivism.

A ProPublica investigation identified a tendency for the software to assign higher risk scores to Black people. The software was "fed" the available socio-cultural data and simply replicated past trends in its predictions. The process is unfair, discriminatory and totally lacking in transparency.
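
The mechanism is easy to reproduce on synthetic data (a deliberately simplified sketch; the features, data and model below are invented for illustration and say nothing about COMPAS's actual internals): a model trained on historically biased labels learns the bias as if it were signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" records: one legitimate feature, one group attribute.
prior_offenses = rng.poisson(1.0, n)   # genuinely predictive feature
group = rng.integers(0, 2, n)          # protected attribute (0 or 1)

# Historical labels are biased: group 1 was flagged more often
# at equal behaviour (e.g. heavier policing of some neighbourhoods).
p = 0.2 + 0.1 * prior_offenses + 0.2 * group
labels = rng.random(n) < np.clip(p, 0, 1)

features = np.column_stack([prior_offenses, group])
model = LogisticRegression().fit(features, labels)

# Identical record except for the group flag: the model replicates the bias.
same_person = np.array([[1, 0], [1, 1]])
print(model.predict_proba(same_person)[:, 1])  # higher "risk" for group 1
```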

More recently, Estonia has announced its intention to set up a "robot judge" by 2021, capable of handling cases involving fines of up to €7,000. The aim is to relieve saturated courts by taking over so-called "minor" cases. A subject to follow closely in this society 2.0. The country has already tested the approach by entirely replacing its local employment agency with an A.I., and noted a significant improvement in its results (the percentage of people who kept their jobs).


Corruptible algorithms

Even if the task can be long and tedious, it has been known since 2014 that an A.I. can be systematically misled. Indeed, by altering a few pixels in an image, researchers realized they could make a network mistake a dog for an ostrich.

In 2016, researchers went further and demonstrated that, for each deep neural network, an almost imperceptible perturbation can be found that distorts all of its predictions with high probability…

The following year, applying the same ideas to the autonomous-car sector, researchers showed that by placing stickers in a particular pattern on a "Stop" sign, it could be mistaken for a speed-limit sign. An observation that gives pause about the naps we were planning to take in our new Tesla.
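
These attacks typically exploit the network's own gradients. Below is a minimal sketch of the classic "fast gradient sign" method (a generic PyTorch illustration, not the code used in the studies mentioned above): each pixel is nudged in the direction that most increases the model's error, by an amount too small for a human to notice.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Fast Gradient Sign Method: minimally perturb images so the model errs."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch (assumes a trained classifier and images scaled to [0, 1]):
# x_adv = fgsm_attack(classifier, image_batch, true_labels)
# classifier(x_adv).argmax(dim=1)  # often no longer matches true_labels
```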

Except that these "hacks" were tested on neural networks (complex algorithms) whose inner workings were well understood. This is not normally the case for those on the market. Normally…

Beware of algorithms

As in Brassens' song, where the judge falls victim to a gorilla, the justice system is somewhat perplexed by the dilemmas raised by these technologies that are beginning to escape us. And while "IA" may sound like the German word for yes, some have chosen to say no, to a certain extent. A collective of 700 personalities, including Stephen Hawking, Bill Gates and Steve Wozniak (hardly the most resistant to new technologies), has signed a petition calling for reflection on ethics in the field of A.I.

If you look at the risks, the reasons to worry are many, and some are chilling.

Indeed, in the event of a dispute or accident involving an A.I., it is still difficult to find someone to blame. Take the example of the Swiss art collective whose project consisted of developing a piece of software, "Random Darknet Shopper", and letting it order items on the darknet with a budget of about $100 per week. The objective? To put on an exhibition of all the products ordered. One small problem: the A.I. ordered clothes and packs of cigarettes, but also… ecstasy!

We can then ask: who is responsible? The robot? Its manufacturer? Its owner? The software? Or the person who programmed it? Today, we do not know how to decide.

It is to address this issue that Estonia is considering creating a legal status for A.I.s and robots, somewhere between the separate legal personality of a company and the personal property status of an object. This would be a first step towards recognizing criminal liability, or an offence, on the part of a machine.

Another point: are we not being manipulated by these technologies and applications we are all hooked on? Who can say today that Waze is taking us by the fastest route, and not by the one that passes in front of the McDonald's it has a contract with? We already know that Google has been fined for favouring its subsidiaries' offers in its rankings and downgrading competitors. Why would the others be any different?


We may also wonder what place empathy or the presumption of innocence will have in a system where we are judged by a machine whose motivations and way of thinking we do not know, and which is incapable of justifying its decisions. We do not master these algorithms. We can see that they work, but we do not know how. An algorithm can diagnose cancer better than a professional, yet be unable to explain why. Tomorrow, your CV or housing application may be rejected without any objective reason being given.

As with the COMPAS system in the United States or Microsoft's chatbot, these systems reproduce the very biases of the humans they were designed to copy.

Finally, as we saw above, the risk of hacking is high, and this will be hackers' next playground. As long as it stops at image recognition, the impact will be small; but on a train, a plane, a judge?

What about us?

One can then legitimately imagine that the worst predictions of science fiction are coming true. Will they ever replace us?

Today, the challenge is daunting. With the evolution of A.I., some anticipate that by 2040, 25% of current jobs will have been replaced, but also that new ones will be created. At the same time, if all this effort goes into reducing human toil, can we really complain?

The phenomenon is already under way. On this blog, we recently discussed virtual muses (often female), generated from scratch by computer imaging and used by brands as influencers. Even if, for the moment, they have no capacity for reflection, free will or emotions, we can wonder how they will evolve once coupled with artificial intelligence. Once configured, they could post and interact autonomously with their community, without any intervention from the brand.

What, then, should humans specialize in? What human capacity can never be equalled? Creativity? Even here, major record companies are trying their luck with intelligent composition programs, writing either in the style of the great composers or in a style of their own. Emily Howell, a program by music professor David Cope, is on its second album, released on a classical music label.

In painting, a 3D Rembrandt has just been created from the artist's complete works. It has been identified by experts as authentic; some would even go so far as to say that it exudes a certain poetry, in any case the artist's own.

So perhaps empathy, or conscience? According to Asimov, scientist and science-fiction novelist, the first law of robotics is: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Emotions are chemistry; the notions of danger, of good and evil, endorphins: so many things we could transmit to an artificial intelligence.

Moreover, when we program an autonomous car by telling it, for example, that buses and trucks are more likely to force their way through than a car, we introduce a psychological criterion into a machine's choices (see the sketch after this paragraph). Similarly, in a critical accident situation where either the driver or people outside the vehicle might die, what will the car's choice be? What would yours be? A scene from the film Transcendence, with Johnny Depp, echoes this reflection: when one of the characters asks the computer "Prove to me that you have a conscience", it answers "That's a difficult question. Can you prove that you have one?"
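
A minimal sketch of such a hand-coded behavioural prior (everything here is hypothetical and invented for illustration; no manufacturer's actual logic is implied):

```python
# Hypothetical priors: how likely each road user is to force its way through.
FORCE_WAY_PRIOR = {"car": 0.10, "bus": 0.30, "truck": 0.35}

def should_yield(vehicle_type: str, time_gap_s: float) -> bool:
    """Yield when the estimated risk of the other vehicle cutting in is high.

    The prior encodes a psychological assumption about drivers' behaviour,
    scaled by how little time there is to react.
    """
    prior = FORCE_WAY_PRIOR.get(vehicle_type, 0.10)
    risk = prior / max(time_gap_s, 0.1)  # less time to react means more risk
    return risk > 0.2

print(should_yield("truck", 1.0))  # True: the prior alone changes the decision
print(should_yield("car", 1.0))    # False
```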

It is very complicated to demonstrate that an artificial intelligence has free will of its own. And recent science keeps showing a higher degree of programming, of innate knowledge, in us than previously thought.

So what future is there for human beings? To reflect on the future we want, and to put ethics back at the heart of the debate, so as not to let history be written by algorithms that have neither good nor bad intentions but can cause unintended damage. We remember the robot Sophia who, when a journalist asked "Do you want to destroy humans? Please say no", answered "OK, I will destroy humans". Joke or not, a simple design or programming blunder could have very different consequences depending on the machine's role.

Could humanity be destroyed by a misunderstanding?

 

Sources:
Journal du Geek, Le Big Data, 01net.com, Siècle Digital, Le Figaro, Le Monde, L'Usine Digitale, Futura Sciences, Ezquimoz, Penseeartificielle.fr, Arte
