A few weeks ago I attended a meetup in Amsterdam on the topic of AI: A Humanistic Approach. Although I am far from a tech expert, especially when it comes to Artificial Intelligence, what interests me most are the social and ethical aspects of these developments.

If we look at our daily lives, we are already becoming cyborgs, as Elon Musk puts it: our phones and laptops have become digital extensions of us, and we have developed online identities and virtual lives through social media.

AI is already making decisions for us.

Google decides which answers are best for us. We cannot avoid it, and in many cases it's good that it makes these decisions. We wouldn't want to browse through thousands of articles in search of the best answer.

However, what we need to understand is what is behind the AI: what are the mechanisms and the decision-making processes? On what basis does Google, for example, single out some answers over others?

It is not so complex when it comes to Googling and SEO, but by the time self-driving cars become the norm, these ethical questions need to be settled.

Google has already announced AutoML, an AI capable of generating its own AI, which has already created a ‘child’ that outperformed all of its human-made counterparts. Apple has just patented a self-driving car system that proposes a computerized model for predicting routes using sensors and processors in the vehicle.

So the question with AI is not ‘what if’. The questions that I see are ‘when’, ‘how fast’, and ‘how far’.

What AI currently does is imitate human thinking, but what if we went a step further and could imitate the human brain itself? What if AI could develop consciousness? If we take the definition of consciousness as ‘accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions’, then indeed we could develop machines that are conscious. If AI could imitate the human brain, it would become like a child that learns about the world around it on its own.

I have no doubt that tech can develop algorithms and machines capable of doing what we humans do, and probably a thousand times better. It's easy to look at a system and optimise it only from a tech perspective. However, once it gets complex, it becomes more and more difficult to reverse-engineer. That is why the ethical and social implications must be discussed now.

I don’t see AI as a threat; after all, AI is just a tool. It will be what we want it to be. The challenge I see is that tech is developing at a much more rapid pace than we as a society are, and we don’t have a globally agreed-upon ethical code that everyone follows.

Therefore, when it comes to AI the key words are RESPONSIBILITY and TRANSPARENCY.

Firstly, there is the responsibility of individual developers: are they comfortable developing AI? What are the morals and values influencing their decisions? Everyone has a different value system, so how would we agree on anything?

Secondly, there is responsibility at the company level: the development process should be transparent. As a user, I should be able to know the system behind, for example, self-driving cars: I should be able to ask the car why it took one decision over another. History proves that profit-seeking often trumps social responsibility, so it is important to discuss these questions now and make sure that we as a society have a say in the development of these technologies.

Thirdly, there is responsibility at the institutional level: the same way we trust EFSA (the European Food Safety Authority) today to deliver advice and scientific opinions on food safety and nutrition, there should be a similarly trusted institution that can tell us: ‘Hey, it’s safe to ride in this self-driving car’.

The continued development of AI is beyond doubt. The real questions are: what boundaries do we want to set? And how do we find common ground in our ethics and moral codes?

Coming back to the question I started with: is AI a design or a human question?

I believe it is a human question.

In his book Homo Deus (which I will review soon on STRIVE!), Yuval Noah Harari argues that it will not matter whether computers have consciousness, nor whether AI can produce its own children; what will matter is only what we think of it. Therefore, it is not a matter of design or technology. Eventually we may reach a technological singularity, a point beyond which the technology is almost impossible to reverse-engineer. That is why now is the moment when we must decide as a global society what we want AI to become, what moral codes and ethics we attribute to it, and what our purposes are for developing it.

The future implications of AI don’t depend on AI, they depend on us, humans.

Originally published on: https://strive.student-talks.com/articles/is-the-development-of-ai-a-design-or-a-human-question

Image credits: Andy Kelly
