In this column, I’d like to turn my attention to the other side of the fintech coin: what are the unintended consequences of the innovations designed to tackle deficiencies in the existing financial infrastructure?
First, let’s look at the foundations of two very significant problems that are being addressed with technology:
- the fact that 1.1 billion people worldwide lack legal identity (many of them women and children), and
- the fact that 3.5 billion people are underbanked or unbanked (due in part to the limitations of conventional credit modeling systems).
Digital identity is a “keystone” issue that opens up financial access, government benefits, employment, housing and more. Financial inclusion can help mitigate poverty and create foundations of global prosperity.
Very worthy goals, no doubt.
But what does one do “when algorithms attack”? How can one solve for the cases where the innovative models break down? Where new processes fail?
Let us now turn our attention to some examples of failure modes:
- The Government of India decided to bring its population into the identity system with Aadhaar, a biometric identity system. If you are among the very poorest and want your monthly ration of wheat, rice, and sugar, instead of hunting down a ration card and trying to prove your identity, you simply present your fingerprint and are issued your food. Recently in Uttar Pradesh, a woman died after, her family alleges, she was too ill to present herself in person and the store refused to issue the ration without the biometrics. A number of questions remain: the woman had money in her bank account, the government claims she died of illness, and there are alternate means of issuing food if someone cannot present themselves in person. What remains undeniable is that the shop refused to issue the ration because of the lack of in-person biometrics. At a minimum, we are seeing a poor understanding of the new rules in the digital era. Nothing has surfaced to suggest the government did anything wrong; there is a suggestion that the shopkeeper was overzealous – but what responsibility does the government have to educate and police its disbursement agents?
- GDPR, the General Data Protection Regulation, is a new EU regulation coming into effect in 2018. Our colleagues at the Oxford Internet Institute, Sandra Wachter, Brent Mittelstadt and Luciano Floridi, wrote a paper earlier this year explaining how GDPR does not clearly enshrine a right to an explanation of automated decision-making. Why does this matter? One of the rising solutions for making more and lower-cost loans available to more people is the use of new machine-learning techniques to assist with loan decisioning. If care is not exercised in constructing these systems, they can very quickly spiral into discriminatory models (more on that in a minute). With a conventional loan or credit score, the bank is obligated to provide a “reason code” explaining why credit was declined. The new systems make it more difficult to produce a reason code because the human interpreters may not fully understand how a machine came to a certain decision. I spoke recently to a tech executive who had audited a bank’s automated loan decisioning platform. The AI had noticed that if it excluded a certain postal code, the loss rates on the loans were dramatically better – so it did so. This was systematic discrimination of the most illegal sort, and the bank hadn’t even known about it until a retroactive technology audit uncovered the issue. Make no mistake: the European GDPR is a huge step forward for personal data privacy and self-sovereignty. But the language of the law as currently constructed is too vague to clearly require an explanation of reasoning.
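The postal-code exclusion above is exactly what a routine disparate-impact check is designed to catch. A minimal sketch of one common screen, the “four-fifths rule” (flag any group whose approval rate falls below 80% of the best-treated group’s), is below. The data, postal codes, and threshold here are illustrative assumptions, not details from the audit described in this column:

```python
# Minimal "four-fifths rule" disparate-impact check on loan decisions
# grouped by postal code. All data below is synthetic and illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (postal_code, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for code, ok in decisions:
        totals[code] += 1
        if ok:
            approved[code] += 1
    return {code: approved[code] / totals[code] for code in totals}

def four_fifths_violations(rates):
    """Flag groups approved at less than 80% of the best group's rate."""
    best = max(rates.values())
    return {code: rate for code, rate in rates.items() if rate < 0.8 * best}

decisions = [
    ("10001", True), ("10001", True), ("10001", True), ("10001", False),
    ("10452", True), ("10452", False), ("10452", False), ("10452", False),
]
rates = approval_rates(decisions)
print(four_fifths_violations(rates))  # -> {'10452': 0.25}: 25% vs 75% approval
```

A check this simple would not prove intent, but run periodically against live decisions it surfaces the “excluded postal code” pattern long before a retroactive audit does.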
These are growing pains. It is better that India is trying to enfranchise its population than not; it is better Europe is taking steps to protect personal data privacy than not; it is better for European banks to try out new lending models to expand financial access than not. But that said, issues like the ones above prompt a healthy examination of a few inexpensive safeguards that can be put in place to mitigate against potential issues.
A brief selection of ideas to consider:
- Engagement with stakeholders and an invitation to comment on new features or approaches. Monzo, now over 400,000 subscribers strong and growing, has a dynamic and active dialog with its user base. While you may not engineer solutions to every single edge condition, a diverse user base can at least identify them for you and help you prioritize which ones to prepare for first.
- Opening up regulatory advice. It may seem obvious, but distressingly few companies engage external regulatory experts for perspective on new concepts. Regulation in emerging fintech areas is an evolving, moving target. The concept of “open innovation” applied to thinking through the implications of new technologies is a relatively cheap investment and one that may yield unexpected competitive advantage.
- Audit discipline. Algorithm audits are a purchasable service from major tech consultancies. And the research team that produced Distilled Analytics has created a “black box decoder ring” that can make intelligent guesses about what a machine learning system is doing (we are not alone in this; others are pursuing this interpretability approach to AI). Having a verification mechanism for the integrity (and ethics) of machine learning systems is a healthy way to help prevent the inadvertently discriminatory algorithm.
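One standard technique behind such “decoder ring” audits is permutation importance: treat the model as an opaque scoring function, scramble one input feature at a time, and see how much the decisions move. The stand-in model, feature names, and threshold below are hypothetical; the point is only that the auditor never needs to see inside the model:

```python
# Permutation-importance probe of an opaque credit model.
# The scoring function, features, and data are illustrative stand-ins.
import random

def black_box_score(row):
    # Stand-in for an opaque model; an auditor would not see these weights.
    income, debt, postal_flag = row
    return income * 0.5 - debt * 0.3 - postal_flag * 2.0

def approval_rate(rows, threshold=1.0):
    return sum(black_box_score(r) > threshold for r in rows) / len(rows)

def permutation_effect(rows, feature_idx, seed=0):
    """Shuffle one feature column; return the change in approval rate."""
    rng = random.Random(seed)
    col = [r[feature_idx] for r in rows]
    rng.shuffle(col)
    shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, col)]
    return approval_rate(shuffled) - approval_rate(rows)

rows = [(6, 2, 0), (5, 1, 1), (7, 3, 0), (4, 1, 1), (8, 2, 1), (6, 2, 0)]
for i, name in enumerate(["income", "debt", "postal_flag"]):
    print(name, round(permutation_effect(rows, i), 3))
```

If scrambling a geographic or demographic proxy moves approvals sharply, the model is leaning on it, whether or not anyone intended that; this is the kind of signal a retroactive audit can turn into a routine check.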
We are still in the early days of migrating from digital copies of analog models of identity and financial services to truly, natively digital ones. The more we tango with our digital future, the more we risk tangling with missteps. While perhaps the price of progress, it is eminently feasible to engage in “enlightened advances” in lieu of blind technofetishism.
The views expressed here are my own, and may not reflect those of Saïd Business School, University of Oxford, the Massachusetts Institute of Technology or their respective faculties. My startup, Distilled Analytics, is very much in the center of the identity and credit/inclusion discussions, and is providing solutions to each.