7 Risks Of Artificial Intelligence That We Must Face To Manage It Effectively
According to an OECD report, 14% of jobs worldwide could be affected by the emergence of Artificial Intelligence; in some countries the figure rises to 22%, although overall it is lower than many expected. Not all jobs can be replaced, because automating them will not always be possible or efficient.
The first risk is one we already see daily in many homes. Children treat virtual assistants rudely, since Alexa or Siri obey without needing to be asked politely. The danger is that children transfer these behaviors to their relationships with people, something that, as this Washington Post article warns, is already beginning to happen. The result: more curious but less well-mannered children.
It is getting harder and harder to fool the machines, but it is still possible. “We imagine that there is a small human brain inside the computer, but no, it’s just programming and mathematics,” explains Meredith Broussard, author of the book Artificial Unintelligence. Nor are these systems infallible: there are makeup and costume techniques that can fool facial recognition systems. As Broussard explains, work still needs to be done before Artificial Intelligence is fully reliable.
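The idea behind such attacks can be sketched in a few lines. The following toy example (the "face template", its weights, and every number are invented for illustration) shows how a small, targeted change to the input features, chosen against the model's weights, can flip a linear match score from "recognized" to "rejected":

```python
# Toy sketch (all numbers invented): fooling a linear "face match".
# score = dot(weights, features); score > 0 means "recognized".
weights = [0.9, -0.4, 0.7, 0.2, -0.6]   # hypothetical template weights
face    = [0.5,  0.1, 0.4, 0.5,  0.2]   # hypothetical face features

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

# The "makeup": shift each feature slightly in the direction that
# lowers the score most (against the sign of the matching weight).
eps = 0.3
adversarial = [xi - eps * sign(wi) for xi, wi in zip(face, weights)]

print(score(weights, face) > 0)         # True  -> original face recognized
print(score(weights, adversarial) > 0)  # False -> perturbed face rejected
```

Real recognition systems are far more complex, but the principle is the same: the attacker perturbs the input precisely in the directions the model is most sensitive to, which is why physical tricks like makeup patterns can work.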
Bias And Lack Of Neutrality Of Machines
The artificial intelligence system used as an advisor by judges in the US has a bias: it recommends denying freedom to black citizens more often than to white ones. The algorithm analyzes 173 variables, none of which is race, and produces a recidivism score from 0 to 10. The problem is not the machine itself but the risk that the judge delegates the decision to it.
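How can a model discriminate without ever seeing race? Through proxy variables that correlate with it. The following hypothetical sketch (the score, its weights, the threshold, and the distributions are all invented) rates individuals using a "neighborhood arrest rate" feature; two groups with identical prior offenses end up flagged at very different rates simply because one group lives in more heavily policed areas:

```python
import random

random.seed(0)

# Hypothetical illustration (all weights and distributions invented):
# a risk score that never sees "group" can still treat groups
# differently through a correlated proxy variable.
def risk_score(neighborhood_arrest_rate, prior_offenses):
    # Toy linear score on a 0-10 scale.
    return min(10.0, 6.0 * neighborhood_arrest_rate + 0.8 * prior_offenses)

def flagged_rate(proxy_shift, n=10_000):
    """Fraction of a group flagged 'high risk' (score >= 5).
    proxy_shift raises only the proxy, not actual behavior."""
    flagged = 0
    for _ in range(n):
        proxy = min(1.0, max(0.0, random.gauss(0.3 + proxy_shift, 0.1)))
        priors = random.choice([0, 1, 2, 3])  # same for both groups
        if risk_score(proxy, priors) >= 5.0:
            flagged += 1
    return flagged / n

rate_a = flagged_rate(proxy_shift=0.0)  # group policed at the baseline rate
rate_b = flagged_rate(proxy_shift=0.2)  # group in heavily policed areas
print(rate_a, rate_b)  # group B is flagged far more often
```

The two groups behave identically in this simulation; only their exposure to the proxy differs. Removing the sensitive variable itself, as the real system does, is therefore not enough to guarantee neutrality.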
Artificial Intelligence Security
The video of Barack Obama insulting Donald Trump went viral a while ago in the United States. And yet it was fake. Artificial Intelligence can become a great ally for manufacturing fake news if we do not use it ethically. In this specific case, the video was created with FakeApp, with the help of Adobe After Effects. This software uses machine learning to scan people’s faces in a video and replace them with someone else’s.
In July 2017, all the alarms went off: two Facebook chatbots had developed a language that their programmers did not understand. It turned out, however, to be a simple programming error. An expert in artificial intelligence at the CSIC maintains that no machine has intentions, nor will they ever have them. “They can teach themselves to play Go and beat a champion, but they don’t know what they’re playing. If we put that same machine to distinguishing photos of dogs and cats, it would forget everything else.”
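The forgetting the quote describes can be demonstrated with a toy model. In this invented sketch (the tasks, data, and training setup are all made up for illustration), a tiny perceptron is trained on task A, then retrained on a conflicting task B; its accuracy on task A collapses because the same weights get overwritten:

```python
import random

random.seed(1)

# Invented sketch of "catastrophic forgetting": one tiny perceptron
# trained on task A, then retrained on a conflicting task B,
# loses its ability to do task A.
def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

def train(w, data, epochs=20, lr=0.1):
    for _ in range(epochs):
        for x, label in data:
            if predict(w, x) != label:  # perceptron update on mistakes
                for i in range(len(w)):
                    w[i] += lr * label * x[i]
    return w

def accuracy(w, data):
    return sum(predict(w, x) == label for x, label in data) / len(data)

def make_task(rule, n=200):
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        data.append((x, rule(x)))
    return data

task_a = make_task(lambda x: 1 if x[0] > 0 else -1)
task_b = make_task(lambda x: 1 if x[0] < 0 else -1)  # opposite rule

w = train([0.0, 0.0], task_a)
acc_before = accuracy(w, task_a)  # high: the model "knows" task A
w = train(w, task_b)              # retrain the same weights on task B
acc_after = accuracy(w, task_a)   # low: task A has been forgotten
print(acc_before, acc_after)
```

The model never "decides" to forget anything; retraining simply repurposes the only parameters it has, which is one reason such systems have no intentions in any meaningful sense.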
Finally, the president of the think tank “We the Humans” argued that robots should not have rights because they do not have responsibilities either. “Today a robot is not free, so it should not answer to anyone. It is the people behind an intelligent system who will have to respond if the need arises,” he said.