Ethics in AI, a way to avoid regulation?

In the minute it takes you to read this blog, Google handles 3.9 million searches, Amazon ships 1,100 packages, Uber riders take 1,400 trips and – my personal favourite – Spotify streams over 750,000 songs. All these actions generate an enormous amount of data. By 2020, an estimated 100MB of data will be created every minute for every person on earth. And all this personal data has immense predictive power, well beyond what you and I can imagine. The positive effects can be manifold, but as a society we need to carefully manage any potential conflicts of interest between innovation on the one hand and individual rights on the other.

In my keynote address at the 3rd Annual Conference on Fintech and Regulation in Brussels, I contributed to the debate on ethics in Artificial Intelligence (AI) by outlining criteria that could enhance the added value of principles in AI. One of my key messages was that, to be effective, principles should have a clear link to existing rules in our current regulatory framework. To illustrate this message, and inspired by developments at the AFM, I started my contribution with the following fictional example.

Duty of care versus privacy

The CEO of a retail bank asks his AI experts to use historical payment data to estimate the probability of default on mortgage payments. How can we predict which clients will fail to pay their monthly installments? Any mortgage advisor can tell you that the main reasons for payment arrears are divorce or job loss. So the experts look for correlations and precursor events in the data that predict these two life events. They build an algorithm based on their analysis. The algorithm matches historical patterns in payment data with the payment history of new clients and is able to predict the probability of divorce and job loss with a high degree of certainty. So now your mortgage advisor can tell you that you have a 68% chance of divorce and will therefore not be eligible for a mortgage! This example may seem far-fetched, but AI experts will tell you that it either can already be done or will be possible in the near future.
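To make the example concrete, here is a minimal sketch of what such a predictor could look like: a simple logistic model over precursor signals extracted from payment history. All feature names, weights and the example client are illustrative assumptions of mine, not any bank's actual model.

```python
import math

# Hypothetical precursor signals that might be mined from payment data.
# Feature names and weights are purely illustrative assumptions.
WEIGHTS = {
    "joint_account_outflow_drop": 2.1,  # a partner's transfers stop
    "salary_deposits_stopped": 2.8,     # possible job loss
    "new_single_rent_payment": 1.6,     # payments to a second address
}
BIAS = -3.0

def risk_score(features: dict) -> float:
    """Logistic score in [0, 1] for a life event that predicts default."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# A new client whose payment history shows two precursor signals:
client = {
    "joint_account_outflow_drop": 1.0,
    "salary_deposits_stopped": 0.0,
    "new_single_rent_payment": 1.0,
}
print(f"estimated risk: {risk_score(client):.2f}")
```

The technical simplicity is exactly the point: once the precursor signals are identified, turning them into a life-event prediction is trivial, and the hard questions are ethical and legal, not computational.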

From a commercial perspective this is a highly valuable algorithm, and from a financial duty-of-care perspective the model is also useful. However, it is not completely clear whether our rules on data protection (e.g. the GDPR) allow this kind of data processing. And most importantly, from a societal perspective, clients would probably not want financial institutions or fintechs to use this kind of model. They would likely feel that this type of predictive analysis invades their privacy.

Ethical principles as a solution or an escape from regulation?

Predictive algorithms will be built, whether we like it or not. Society will need to find a way to deal with dilemmas such as the one outlined in the example above. We will need to find a way to curb the dark side of AI: negative biases and the potential for discrimination, exclusion and privacy breaches. We could of course try to regulate AI, as some experts propose. I am relatively optimistic that we can use the current regulatory building blocks in the GDPR and financial regulation to curb the dark side of AI. Principles on ethics in AI, as proposed by large tech firms and industry associations, also have a role to play. However, we will need to look closely at who proposes the principles and why, as there is a strong correlation between enthusiasm for principles and resistance to regulation. To me one thing is clear: we cannot allow a vast array of different AI principles to act as a decoy that hides the dark side of AI.

A proposed Turing test for AI principles

So we need some form of test to separate genuine principles on ethics in AI from fake ones. Here are five criteria I would propose to do just that. Principles on ethics in AI will need to have:

  1. Independent oversight (no black boxes)
  2. A link to existing legal frameworks (use hard law where we have it)
  3. Clear responsibilities (no outsourcing to Cambridge Analytica)
  4. A practicable approach (no pie in the sky, but clear case examples)
  5. Non-arbitrary outcomes (different staff reach similar results)

Hanzo van Beusekom has been a member of the Executive Board of the Dutch Authority for the Financial Markets (AFM) since 1 June 2018. Within the board, he is responsible for cross-sector supervision and renewal, with a particular focus on data- and technology-driven supervision.
