In our last article, we gave an overview of developments in the field of finance and, especially, in the domain of fintech. Having found that the largest segment of the global fintech market was digital payments (US$6,752,388 million), followed by neobanking and alternative financing, this article will argue that policy frameworks, laws and regulations have to be aligned with innovation in the finance industry and beyond in order to effectively promote new products and services. AI tools are indispensable for optimizing fintech applications and for making them safer (e.g. by preventing corruption and theft); however, as mentioned in the last article, they also carry risks related to entrenching bias, new forms of cyber attacks and the collection of sensitive data by private actors. Policy frameworks, which lie at the heart of promoting a customer- and rights-oriented fintech services industry, therefore span various domains such as finance, fintech, cryptocurrencies, cyber security and data privacy. This article will first discuss the OECD’s remarks on the safe deployment of AI and then its AI Principles, arguing that policy frameworks regulating the fintech industry should above all ensure a safe and yet explorative deployment of AI technologies.
Trustworthy Deployment of AI in Finance
Back to the OECD Business Outlook 2021
Among other things, the OECD Business Outlook 2021 provides an overview of: 1) the OECD’s AI Principles, 2) its recommendations to protect financial consumers and 3) policy recommendations targeting the intersection of AI and finance. The latter specifically address further risks of deploying AI in finance. Among these are…
“…[d]ata management, privacy/confidentiality and concentration risks; [risks related to] [a]lgorithmic bias and discrimination in AI; [risks related to] [t]he explainability conundrum [that is, the lack of “‘explicit declarative knowledge’” suited to explain the underlying mathematical functioning of machine learning (ML) models]; [risks related to the] [a]uditability and disclosure of AI techniques used by financial service providers; [risks related to lacking] [t]raining, validation and testing of AI models to promote their robustness and resilience; [risks related to the] [g]overnance of AI systems and accountability; [o]ther sources of risks in AI use-cases in finance [related to] regulatory considerations, employment and skills”. (OECD Business Outlook 2021)
Notably, many of the above risks point to a need to further advance algorithmic and ML models as well as AI technologies and their application, while also ensuring the transparency of deployed methods so that consumers and other parties are informed in an understandable and explicable manner. Although this need rightfully emphasizes the necessity to ‘democratize’ the field of AI and its application domains (e.g. fintech, insurtech), that is, to adequately inform consumers about the choices underlying a particular service or product, the OECD Business Outlook 2021 indirectly acknowledges that this might come at the cost of technological innovation. In other words, whereas “the lack of explainability is incompatible with regulations granting citizens a ‘right to explanation’ for decisions made by algorithms and information on the logic involved, such as the EU’s General Data Protection Regulation (GDPR)”, declarative knowledge (i.e. “factual…experiential and…goal-independent [data]”) is not necessarily central to AI deployment and ML.
As explained on Datanami, AI and ML thrive by consuming and interpreting data. Even if the latter might be ‘declarative’, “AI applications [essentially] leverage procedural knowledge”. Especially when fintech and AI-based applications serve to support vulnerable communities in crises, consumers might at some point be confronted with a risky choice: should they trust an application whose functioning is not entirely explicable, or should they acquiesce in the lack of access to finance, credit and loans? At the moment, it seems that consumers are, on the whole, opting to trust fintech solutions. Although fintech had already gained popularity in Sub-Saharan Africa prior to the COVID-19 pandemic, the latter has increased the need for this emerging sector, its products and its services – largely, though not exclusively, under the banner of financial inclusion. While there are no concrete estimates of how many African fintech businesses already rely on AI technologies, a 2020 study led by the Cambridge Centre for Alternative Finance (CCAF) found that 90% of surveyed fintech companies across 33 OECD countries were employing AI technologies. With Interpol having identified online scams and botnets among the top five cyberthreats in Africa in October 2021, fintech businesses certainly have to work on ensuring that their services remain secure.
The OECD’s AI Principles: Reinterpreting What ‘Value-based’ Implies
Even though AI principles for Sub-Saharan Africa might have to take into account the specificities of this region, the OECD’s AI Principles can certainly be used to discuss when the deployment of AI is feasible. In essence, these five principles are value-based:
1. Inclusive growth, sustainable development and well-being
This principle emphasizes that the deployment of AI needs to coincide with the promotion of sustainable development, inclusive growth and well-being, implying that tackling inequality (including with regard to access to AI technologies) and preventing biases (e.g. the afore-mentioned entrenching bias) are important tasks for AI service providers and other actors such as governments and policy-makers.
2. Human-centred values and fairness
Next to promoting sustainable and inclusive development, AI technologies should also uphold and promote “the rule of law, human rights, democratic values and diversity”. As such, human rights impact assessments (HRIAs), due diligence processes and codes of ethical conduct, among others, should be carried out or established by companies that rely on AI. The ultimate goal of such efforts is to promote a “fair and just society”.
3. Transparency and explainability
Whatever the state of the art of declarative knowledge on particular AI technologies, businesses and other stakeholders should promote a general understanding of the deployment of AI technologies. Consumers should particularly be able to understand “the main factors in a decision, the determinant factors, the data, logic or algorithm behind the specific outcome, or explaining why similar-looking circumstances generated a different outcome”.
4. Robustness, security and safety
This principle states that, when deploying AI technologies, businesses and other actors should make sure that their systems are robust, secure and safe – not only in the short term, but also in the long term. Continuous efforts to assess and monitor AI systems and technologies are thereby indispensable. In sum, two recommendations by the OECD aimed at ensuring robustness, security and safety are: “1. traceability and subsequent analysis and inquiry, and 2. applying a risk management approach”.
5. Accountability
Organizations and individuals involved in the development, deployment and operation of AI systems should, so the OECD states, be held accountable based on their compliance with the OECD AI Principles. While accountability is here interpreted as a moral responsibility, it will certainly be interesting to discuss legal obligations in the future. Especially because a range of regulations (e.g. the GDPR) make it difficult for AI developers to offer new services that trade explainability for innovation, policy-makers will need to understand and differentiate between human and machine-made mistakes. While under certain circumstances the lack of declarative knowledge about certain technologies might matter less than their ability to solve problems, in other situations this may not be the case. Policy frameworks should provide room for both, while keeping an eye on the most urgent societal needs.
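To make the transparency and explainability principle a little more concrete, the following minimal sketch (with entirely hypothetical feature names, weights and threshold) shows why simple linear scoring models are easy to explain: each feature’s additive contribution to a credit decision can be read off directly and reported to the consumer. Complex ML models offer no such direct decomposition, which is the heart of the ‘explainability conundrum’ discussed above.

```python
import math

# Hypothetical linear credit-scoring model: weights and features are
# illustrative only, not taken from any real fintech product.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2

def explain_decision(applicant: dict) -> dict:
    """Score an applicant and return each feature's additive contribution.

    Because the model is linear, contribution = weight * value, so the
    'determinant factors' behind the outcome are directly readable.
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    approved = 1 / (1 + math.exp(-score)) >= 0.5  # logistic threshold
    return {"approved": approved, "contributions": contributions}

result = explain_decision({"income": 1.2, "debt_ratio": 0.5, "years_employed": 3})
# The largest positive contributor can be reported to the consumer as
# the main factor in the decision.
top_factor = max(result["contributions"], key=lambda f: result["contributions"][f])
```

For deep-learning models, by contrast, no per-feature decomposition of this kind falls out of the model itself; explanations have to be approximated after the fact, which is precisely where the tension with a ‘right to explanation’ arises.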
Although the OECD’s AI Principles constitute a relevant value-based framework for the deployment of AI technologies, they are not comprehensive. On the one hand, such principles obviously have to remain general so that they can be adopted by as many governments, international businesses and other international actors (e.g. banks) as possible. On the other hand, assuming that adherence to the rule of law will necessarily lead to involving AI actors in the fight for a ‘just and fair society’ appears somewhat one-sided. It is no secret that laws are very capable of both taming and reinforcing inequalities and discrimination. Beyond that, as mentioned earlier, extreme situations may very well require sacrificing (some degree of) explainability in the short term, as long as many individuals can be helped.
Overall, the deployment of AI technologies should certainly be monitored in order to keep balancing out its positive and negative impacts and to optimize the latter – an exercise that still lies in a grey zone. Whereas the OECD’s AI Principles focus on the obligations of AI providers, it should not be forgotten that these providers in turn need comprehensive, target-group-oriented support as well as an enabling policy and economic environment. While the OECD provides some recommendations for policy-makers that underline this necessity (i.e. “[i]nvesting in AI R&D; [f]ostering a digital ecosystem for AI; [p]roviding an enabling policy environment for AI; [b]uilding human capacity and preparing labour market transition; [i]nternational cooperation for trustworthy AI”), the success of such efforts surely depends on how well such measures can be implemented regionally. Political and economic aspects, among other preconditions (e.g. policy-makers trained in understanding AI technologies), certainly play a role in ensuring the quality of support that individual actors (e.g. start-ups, SMEs) will receive.
Whether you are an investor, a founder of a start-up or an SME, our team will effectively support you with regard to legal questions. We employ legal experts with knowledge across various African jurisdictions and have long-standing experience in giving advice on topics such as labour and immigration, tax and customs, contracts and negotiations, corporate governance and compliance as well as data protection. Next to our African offices, we also have an office in Germany! We have an international mindset…You too? Then contact us today to collaborate!