I studied Artificial Intelligence at Saïd Business School, which is part of the University of Oxford. The six-week executive education program focused on both the commercial opportunities for, and the operational impact of, Artificial Intelligence, Machine Learning, and Deep Learning. As part of my course deliverables I was tasked with creating an Ethical Charter for AI. This is the result of that project.


Be a good corporate citizen when it comes to the rightful privacy of our users

  • We must always obey local, regional (including GDPR), and international privacy laws. Beyond the question of legality, we must also treat users ethically. This means building AI applications that do not invade their privacy, do not seek to exploit their data, do not collect any data without express (and understood) consent, and do not track users outside of our own walled garden. We do not need data from the rest of their activities, so we should not seek to obtain or use it.

  • As a hard rule, we do not use data to build profiles of our users in order to negatively score, predict, or classify their behaviour. We must never use their personal attributes or sensitive data for any purpose. Neither of these tactics is required for us to make better software for them (which is what we are here to do), so they are inappropriate. We must always understand where our ethical red line is and ensure everything we do stays on the correct side of it.

Ensure we act in an unbiased manner – always – as we’d expect to be treated too

  • We acknowledge that there can be unacceptable bias in all decision making, whether human or machine based. When we create AI applications we must always try to eliminate personal opinion, judgement, or belief, whether conscious or otherwise. Algorithmic bias can be partially mitigated by using accurate and recent data, so we must always do so (a minimal audit sketch follows this list). Remember, a biased AI will produce results of the same quality as a biased human: “garbage in, garbage out” always applies here.

  • We must use AI to augment good and proper human decision making. We do not want, or need, to build technology that makes fully automated decisions. As in other areas of our business, such as recruitment, we have not yet proven the strength of affirmative action (sometimes called positive discrimination), so mathematical de-biasing is not considered an option for us. As such, all decision-making inside any application must include humans. Their skillsets, experience, and emotional intelligence can, and should, then be augmented by AI.

  • We work to the principle of “you get out what you put in” and understand that in order to build technology for the future we can neither look only to the past (by using out-of-date data, for example) nor build AI on top of existing human biases. Bias around gender, ethnicity, age, political views, or sexual orientation (this list is not exhaustive) is discriminatory, and we must proactively exclude these human traits of today and yesterday in our search for the technology solutions of tomorrow.
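
To make the data checks above concrete, here is a minimal sketch of what a pre-training data audit could look like. Everything in it is an illustrative assumption: the column names (group, outcome, recorded), the thresholds, and the toy data are hypothetical stand-ins, not our actual pipeline.

```python
import pandas as pd

# Hypothetical training data; column names and values are illustrative only.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "outcome":  [1, 1, 0, 0, 0, 1, 0, 1],
    "recorded": pd.to_datetime([
        "2024-01-10", "2024-02-03", "2023-11-20", "2019-05-01",
        "2019-06-12", "2024-03-08", "2018-12-30", "2024-01-25",
    ]),
})

# 1. Representation: is any group badly under-represented in the data?
print(df["group"].value_counts(normalize=True))

# 2. Outcome disparity: do positive-outcome rates differ sharply by group?
rates = df.groupby("group")["outcome"].mean()
print(rates)
if rates.max() - rates.min() > 0.2:  # illustrative threshold, not a standard
    print("Warning: outcome rates diverge across groups; investigate before training")

# 3. Recency: how much of the data predates our chosen cut-off?
cutoff = pd.Timestamp("2023-01-01")  # illustrative cut-off
stale = (df["recorded"] < cutoff).mean()
print(f"{stale:.0%} of rows predate {cutoff.date()}")
```

Note that an audit like this detects bias and staleness in the inputs rather than mathematically correcting the outputs, which keeps it consistent with the human-in-the-loop position above.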

Build in the highest level of explainability possible, because output is important

  • We are not interested in building only black-box solutions. If we can’t create defendable IP without doing so, then we’re not doing our jobs properly. Wherever possible we want to be able to explain, replicate, and reproduce the output of a machine we have built (a sketch of one such technique follows this list). We owe this to our users, and it’s also how we’ll get better at what we do. The better we understand what we are building, the quicker we can evolve it.

  • We actively subscribe to the “right to explanation” principle championed by Apple, Microsoft, and others. We must build applications that give users control over their personal data, let them see how decisions have been made, and make it easy to understand the role their data plays in our product development. We can do this without affecting our ability to defend our IP and should therefore do so by default. Whilst full replication is not always possible (within deep neural networks, for example), our mission, and our policy, is to do as much as we feasibly can.
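
To show what “explain the output” can mean in practice, here is a minimal sketch using permutation importance from scikit-learn, a model-agnostic way to report which inputs drive a model’s predictions. The synthetic dataset and random-forest model are stand-ins chosen purely for illustration, not a statement about our actual stack.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for one of our own models and datasets.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature in turn and measure how much
# the model's score drops - a model-agnostic, reproducible account of which
# inputs drive the output.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: {mean:.3f} +/- {std:.3f}")
```

Because the run is seeded, the report can be regenerated identically, which is exactly the replicate-and-reproduce standard this principle asks for.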

Overall, our task is simple: we must build technology that is designed to do good

  • Technology is a wonderful and powerful thing. As a software company, we must believe that. But behind any and every application for good there are usually opportunities for evil too. As we depend more and more on AI, it will take on a bigger role inside our organisation. As we craft and hone it, it is our responsibility to put ethics at the forefront and build responsibly. For now, we are our own regulators. Let’s be the best regulators we can be.
