By Humaira Taz
For FOSEP’s discussion on Thursday 12th October, we decided to address the growing concerns over Artificial Intelligence (AI). AI has been advancing at a rapid rate over the past few years. Voice assistants such as “Siri” and “Alexa”, self-driving cars, and robots that can communicate with each other and compete with humans in certain sports are all examples of AI manifesting in our daily lives.
At the core of all AI are huge amounts of data collected about people, ranging from facial features to a person’s medical history, food habits, friends and family…and the list goes on. One of the major concerns is who gets access to this massive bundle of data and for what purposes it is being used. In addition, we have already seen instances where machines are taught to communicate with each other. In such a case, are we comfortable with the level of privacy of our personal data if the system gets hacked? Recently, a group at Stanford created a system they called “Gaydar” to detect a person’s sexual orientation based solely on a photo. Their goal was not to deploy it, but to demonstrate how big data can be used to create AI systems that could enable discriminatory practices.
Along the same line of thought, Elon Musk has stated that AI could lead to a third world war. While this seems far-fetched at face value, his concerns are grounded in the prospect of lethal autonomous weapons; he was one of over 100 signatories who called for a ban on them. Since machines can communicate much faster than humans, a simple misinterpretation of data by machines could start a war before humans can even grasp what is going on.
That leads to another problem: we often do not understand how AIs make the decisions they do. Neural networks can link parameters in ways humans would never have considered. There is also the added risk of unintentionally encoding our own biases into AI, which raises the question of how an AI makes ethical choices. Following that, we face the questions of how humans should treat AIs once they start making ethical choices, and what happens when AIs begin to replicate themselves to grow their “population”.
Since the AI systems we currently have are very rudimentary, my main concern was the privacy of data: who gets access to it and how it is used. However, a member of FOSEP-UTK pointed out that concerns over AI would escalate once we start building artificial general intelligence, whose knowledge would not be confined to a single domain. Once we got onto the topic of AIs self-replicating, all I could think about was the movie “Avengers: Age of Ultron”, and how we could end up in an era when machines might take over the world.
To get to that point, however, we would need to create AIs with sapience, meaning that they would have a moral conscience. But that raises another conundrum: if we as humans develop a fully sapient AI, is it moral to restrict the operational pathways of a fully reasoning being?
We concluded that since the purpose of AI is to make systems more efficient and easier to operate, there should be no need to create an AI with sapience. It is best to have each AI dedicated to one particular task, and to develop this technology further with Asimov’s Three Laws of Robotics in mind, as presented in his book “I, Robot”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In addition, companies should adopt ethical guidelines for this area of research. DeepMind, Google’s London-based AI research facility, has created an ethics group to fill this need. However, if companies developing AI do not take the initiative to form ethics teams themselves, then governments should step in and require them to do so.
(Image taken from: http://www.gulf-times.com/story/566080/Ethical-dilemmas-and-artificial-intelligence)