A good two months ago, I was able to attend an event focused on using data for good. Many of the talks centered on using Artificial Intelligence (hereafter AI) and Machine Learning to tackle societal and health issues: improving life on this planet for its entire population, not just a select few. A few speakers zoomed in on ethics in these booming fields, and I too think it's very important to raise awareness not only in the development community, but also among the public at large. I've let it all sink in, and in this blog post I'll attempt to shed some light on some key concepts and pitfalls in the field of Artificial Intelligence.
Ethics will play a vital role in designing and training next-generation services.
We don’t want these services to inherit the biases and prejudices their designers might hold. That could get very dangerous very fast: say the developer of a system that filters out the best candidates for an IT job believed such a job should predominantly be done by a white male with glasses; then a capable Brazilian woman might never get the chance to apply for the position. What makes this truly dangerous is that most biases are much harder to spot than in this example. It could be something far subtler, so ingrained in the designer’s culture that they are not even aware of it.
The aforementioned example brings me to my next point: filters can be dangerous. Take the recommendation engine of any video provider, or even the news feed of your favourite social media network. It’s not neutral. These algorithms are based on everything they know about you: your likes, dislikes, opinions, mood swings and their implications for the content you want to see at that moment...
Let’s forget the specific likes and dislikes for a minute and consider the fact that a Machine Learning system only serves us the content we like to see. In this way, the consumer’s opinion and perspective are reinforced every time they consume their favourite content. There’s also a high risk of de facto censorship, where different or competing views are suppressed, disallowed or simply not represented. This is the concept often referred to as the “echo chamber”. That ‘echoed’ content might be socially incorrect, unacceptable, or just plain wrong, but the consumer will never be confronted with that, since they don’t want to read something they disagree with. The consumer will, in fact, never know what has been filtered out, or how that decision was made. Filtering out other voices fosters subcultures and divides us as a community, because it becomes harder and harder to understand a different point of view. Some companies, like Cambridge Analytica, even made it their business to target a certain audience and change its behaviour. Take the American electoral system, for example, where two parties gun for the Presidency. A Democrat-leaning voter will only see Democratic posts in their news feed, and will thus be strengthened in the belief that the Democrats are the way to go, and that the entire community thinks so too, while the reverse might be true for a Republican. Cambridge Analytica took it a step further and spread misleading content tailored to each individual in key constituencies. See this Vice article if you want to dig deeper into this case. In the end, the nation is surprised to find itself divided on election night. There’s also a great TED talk on the filter bubble you should watch if you’ve got nine minutes to spare.
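To make the echo-chamber mechanic concrete, here is a deliberately naive toy sketch (not any real platform's algorithm; all data and names are made up) of a recommender that ranks candidate articles purely by how often the user has already engaged with their topic. Notice how content from the unfamiliar topic can never surface:

```python
from collections import Counter

def recommend(history, catalogue, k=3):
    """Naively recommend the k catalogue items whose topic the user
    has engaged with most often -- reinforcing existing preferences."""
    topic_counts = Counter(item["topic"] for item in history)
    # Rank candidates purely by prior engagement with their topic;
    # unseen topics score zero and sink to the bottom.
    ranked = sorted(
        catalogue,
        key=lambda item: topic_counts[item["topic"]],
        reverse=True,
    )
    return ranked[:k]

history = [
    {"title": "Party A rally recap", "topic": "party_a"},
    {"title": "Party A tax plan explained", "topic": "party_a"},
    {"title": "Local sports roundup", "topic": "sports"},
]
catalogue = [
    {"title": "Party A wins debate", "topic": "party_a"},
    {"title": "Party B policy deep-dive", "topic": "party_b"},
    {"title": "Party A fundraiser", "topic": "party_a"},
    {"title": "Opposing view op-ed", "topic": "party_b"},
]

for item in recommend(history, catalogue, k=2):
    print(item["title"])
```

Real systems are vastly more sophisticated, but the feedback loop is the same: yesterday's clicks decide today's feed, and the opposing view op-ed never gets shown.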
In short, while advanced filters like these sound very reasonable and play a key role in digesting the wealth of content on the Internet, they can be quite dangerous. In extreme cases, they can influence our thought patterns and vision of the world, rob us of our empathy and drive division in communities.
That’s heavy. And sensitive. But that has always been the case with moral concepts. As an IT industry, we have to consciously ask ourselves where we want to draw the line between a business opportunity and morally correct behaviour. Some moral questions are easy to answer, but not every case has a clear-cut answer. It is quite evident that not everyone will adhere to the same principles or answer in the same way, especially in the large grey zone between ‘right’ and ‘wrong’, two extremes rarely seen in real-world situations. If even we as humans cannot agree on an acceptable solution, how are we supposed to teach a machine to make these decisions for us? Do we have it learn from all of us?
It is an interesting proposition and, even more so, a huge undertaking. The Moral Machine is a platform set up by MIT to gather a (not the) human perspective on moral decisions made by AI. There, you can browse moral dilemmas, and the challenge soon becomes very clear. The prime example is a hardware malfunction in a self-driving car approaching a crowded pedestrian crossing: the car has to determine the lesser of two evils. By contributing your perspective, you help establish a median human perspective on these dilemmas.
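In its simplest form, distilling many individual judgments into one "median" answer is just aggregation. The toy sketch below (my own illustration, not the Moral Machine's actual methodology; the dilemma names and votes are invented) takes one choice per participant per dilemma and returns the majority preference:

```python
from collections import Counter

def consensus(votes):
    """Return the majority choice per dilemma -- a crude stand-in
    for the crowd-sourced 'median human perspective'."""
    results = {}
    for dilemma, choices in votes.items():
        # most_common(1) yields [(choice, count)] for the top choice.
        results[dilemma] = Counter(choices).most_common(1)[0][0]
    return results

# Hypothetical responses: each list holds one choice per participant.
votes = {
    "swerve_or_brake": ["brake", "swerve", "brake", "brake"],
    "passenger_or_pedestrian": ["pedestrian", "pedestrian", "passenger"],
}
print(consensus(votes))
# {'swerve_or_brake': 'brake', 'passenger_or_pedestrian': 'pedestrian'}
```

Of course, a bare majority vote glosses over exactly the hard part: the grey zone where the crowd splits close to 50/50, which is where these dilemmas actually live.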
It all boils down to one word, and that is trust. We have to take technical limitations and the vision behind the algorithms into account. The algorithms employed have to be geared towards a goal, and their role should be clear to the end user. It will be the industry’s and the makers’ responsibility to educate users about the role algorithms play in AI and in modern IT applications in general. By having everyone understand that fundamental role, you do away with the brunt of the perceived ‘magic’ happening behind the scenes.
Simultaneously, you have to champion privacy by design. This is a very valuable mindset to adopt: not only will it drive usage of your product, it will also increase trust in your product and your company as a whole. Users deserve the option not to share certain data. Not everyone will be willing to share the same amount of data, and they shouldn’t be obliged to do so. To quote Rishi Nalin Kumar: “data is the new oil, privacy is the new climate change”. To say privacy will be important going forward is an understatement.
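What does privacy by design look like in practice? One minimal sketch (my own illustration, with invented names and data) is to make every data category opt-in, so that by default nothing leaves the user's device:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentProfile:
    """Explicit opt-in per data category; nothing is shared by default."""
    granted: set = field(default_factory=set)

    def allow(self, category):
        self.granted.add(category)

def collect(raw_profile, consent):
    """Keep only the fields the user explicitly opted in to share."""
    return {k: v for k, v in raw_profile.items() if k in consent.granted}

consent = ConsentProfile()
consent.allow("email")  # the user opts in to sharing email only

raw = {"email": "ada@example.org", "location": "Brussels", "age": 36}
print(collect(raw, consent))  # {'email': 'ada@example.org'}
```

The design choice worth noting is the default: an empty consent set means an empty payload, so forgetting to ask for permission fails safe instead of leaking data.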
Data is the new oil, privacy is the new climate change
This column was by no means comprehensive, but I hope to have piqued your interest, and that you’ll be conscious of the aforementioned concerns and challenges when working on your next AI-based feature, or when you’re scouting for a new AI-powered platform to use. These are very exciting times to live in, but we have to be careful when shaping the Artificial Intelligence landscape. Caution is king if we are to avoid irreversible repercussions.