Applied ethics is the study of concrete situations (which can also be hypothetical) in light of more abstract ethical theorizing; in simpler terms, it asks how to decide whether a particular action is right or wrong. We can all agree that artificial intelligence and technology have grown exponentially over the years, but the one thing holding them back is ethics and beliefs. The debate has been ongoing for years: we train a machine to think like a human, and when it comes to problem-solving that is great, but what happens when it comes to applying personal beliefs or an ethical judgment to a situation? What does the machine inherit, a generalized outlook or the developer's personal beliefs? If something goes wrong, do we question the machine learning or the developer's personal instincts? What action should be taken when a Tesla self-driving car is involved in a fatal crash, and who do we hold accountable, the autonomous vehicle or Musk?

 

Life 3.0: Being Human in the Age of Artificial Intelligence | Reimagining the Future

Talking about how far we have come: in his TED Talk, Max Tegmark describes a landscape of tasks at different levels. The ocean holds the tasks AI can already perform, such as translation, arithmetic, and rote memorization. The land holds tasks like driving and speech recognition, which AI can perform but has not yet mastered. Finally, the sky holds creative work such as cinematography, poetry writing, art, and management, which AI has not yet reached. When all of these tasks are covered, we arrive at what is called AGI, artificial general intelligence. Tegmark believes that when someone says there will always be jobs humans can do better than machines, they are indirectly saying we can never reach AGI.

My personal opinion is that we can reach a certain level of AGI, but it will not be used to replace human jobs so much as to supplement them and make them more efficient. I disagree with Max when he says reaching AGI will make machines more intelligent than us; at the end of the day, humans are the backbone of AI, and it can never outgrow us. A book can only tell you as much as its author put into it, and there is no exception in technology. Twenty years ago, no one thought a cylindrical device on your table-top could operate your lighting or start playing music, yet these tasks have not replaced or challenged human capability in any way; they have made our lives easier. I believe AI's duty is to re-engineer the human mind to increase productivity in innovation. When Bill Gates said he hires lazy people to do hard jobs because they find an easier way to do them, AI is that path to an easier way of getting tasks done.

The principle I found the most interesting was AI safety research. When it comes to technology, the danger that a piece of innovation entails is the biggest issue. This principle can not only prevent the most harmful bugs but also make systems less prone to hackers. It is the turning point for AI technologies because it reduces the risk of the technology turning against us and "overtaking the world," as people say. It also saves a lot of time and increases the chances of developing the product further.

Then there is the principle of human values, which says that AI systems should be designed and operated to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity. I feel it is a little difficult to specify which ideals would count; this sounds philosophical, but I think it should come down to the best interests of the society that will be using the product.

Social justice is one of the principles common to Tegmark and Andrew Devin. According to Tegmark, any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority. According to Devin, social justice is simply this: setting up society in ways that

  •  Do not discriminate based on irrelevant (and in particular uncontrollable) factors, and
  •  Correct the ways that structures and historical dynamics in society unjustly influence one’s success in life.

This is basically fairness. Both definitions are genuine, and I think they would yield the most fruitful result if they were intertwined into one common principle. The one disadvantage I see is that these principles apply to theoretical situations more than practical ones. They should draw on real court cases and the wicked problems surrounding AI in today’s world, and then be collectively shaped into something practical and feasible.
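As a small illustration of how a fairness principle could be made auditable in practice, here is a minimal, hypothetical Python sketch of one narrow notion of fairness (demographic parity): it simply compares a model's approval rates across groups defined by an attribute that should be irrelevant to the decision. The group labels and decisions are made up for illustration; this is only a sketch of one possible check, not how either author proposes to operationalize the principle.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group, from a list of (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Made-up example: a model's loan decisions tagged with an attribute
# that should be irrelevant to the outcome.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

print(approval_rates(sample))  # group A: ~0.67, group B: ~0.33
print(parity_gap(sample))      # ~0.33 -- a gap an auditor could flag
```

A check like this is obviously far simpler than the principles above, but it shows the kind of concrete, auditable test that a competent human authority could actually run against a deployed system.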

In conclusion, it is all in the hands of humans and developers: how much control they want to transfer to machines, and to what extent. But idealistic ethics and beliefs cannot replace human judgment, because judgment involves too many influencing factors; a machine cannot simulate the same behavior.

AI is good, as long as we enact ethics controls

 
