The Artificial Intelligence Challenge
Artificial intelligence (AI) is widely seen as one of this generation's defining challenges. We all employ some form of AI in our daily lives, whether consciously or not. It is deeply embedded in our day-to-day routines: the hospitals we visit, the schools we attend, the social media we browse and certainly the financial institutions we bank with.
Artificial intelligence is expected to bring consequential benefits to society. However, a growing concern is that regulation has not kept pace with the rate at which AI has developed. While the positive outcomes from the application of AI are indisputable, recent years have shown that, if left unchecked, it can have serious repercussions as well.
In digital and social applications, it can reinforce negative social biases, suppress opposing views in the media we consume, expedite the spread of misinformation and affect the overall emotional wellbeing of individuals.
Financial services are not immune: unfair limits on certain consumers' access to credit and the delivery of financial advice without transparency are among the issues encountered with AI-based applications.
What is AI ethics?
As AI becomes a central part of products and services, organisations across sectors are starting to develop AI codes of ethics. These codes are sets of guiding principles, values and techniques crafted to govern the responsible design, development and implementation of artificial intelligence technology.
How is nudge theory impacted by AI?
Nudge theory is a concept in behavioural economics that proposes nudges as positive reinforcement to help individuals progressively develop improved thought processes, decisions and behaviours. As human beings, we are all susceptible to bias and suggestion. The concept of nudging proposes that when organisations understand how people think, they can create environments that simplify the decision-making process and make it easier for people to make better choices.
Nudging is meant to help people make improved decisions without limiting their freedom of choice. Nudging and its effects are so well established that in 2010 the British government set up the Behavioural Insights Team (BIT), also known as the Nudge Unit, to understand how nudges and interventions can be used to motivate and encourage desired outcomes and behaviours.
In a similar fashion to the British government, organisations today are increasingly using AI to manage and steer individuals into action (or inaction) by nudging them towards certain desired behaviours. However, because regulation is currently lacking, data collected on individuals can also be taken advantage of by organisations and governments and used to nudge people into decisions that might not be most favourable for them.
The vulnerability of artificial intelligence only becomes relevant in the context of its relationship with end users. While AI itself is ethically neutral, the human beings developing AI systems have individual opinions and biases. It is therefore important to acknowledge the concerns around applying AI to produce nudges at scale, as it could result in the unintended effect of algorithmic decisioning bias, or of personal bias applied during the development of the algorithms themselves.
What are countries and financial regulators doing about AI ethics?
As artificial intelligence and its application continue to develop, countries and governing bodies across the globe are cultivating policy and strategy to keep up with its progress. AI is a global issue and should be addressed as such. Canada led the way by launching the world’s first national AI strategy in March 2017, and more than 30 countries and regions have since followed suit.
In April 2018, the European Commission put forward its Communication on Artificial Intelligence, the first international strategy on how to confront the challenges and harness the opportunities brought about by AI. Countries and governments recognise the radical nature of AI and its effects and have adopted distinct approaches that mirror their economic, cultural and social systems.
Australia
Australia published an AI Ethics Framework as a guide for organisations and the government to ensure the application of AI is safe, secure and reliable. It proposes that AI systems should be built with human-centred values; consider individual, societal and environmental wellbeing; be inclusive and accessible; uphold privacy rights; be transparent and explainable; and be contestable and accountable.
Applying this framework should help Australian companies build consumer trust in their products and organisations, drive loyalty in AI-enabled services and positively influence outcomes. The AI Ethics Principles were tested on several businesses and the outcomes were recorded and shared.
Use cases implemented by two of the country's largest banks feature among the prominent examples illustrated under this initiative.
Commonwealth Bank of Australia (CBA)
CBA uses AI to deliver personalised digital banking services to its users. It developed an AI-based solution called Bill Sense to give customers more detailed insights into their savings and payment patterns. Bill Sense uses AI and previous transactions to learn regular payment patterns, predict when an upcoming payment is due and help customers understand how much money they will need to pay their bills every month.
No information is shared with or taken from billing organisations, and users have total control over the information that Bill Sense can access to generate its insights and make its predictions.
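CBA has not published how Bill Sense works internally, but the core idea, detecting near-regular payment intervals in a customer's transaction history, can be illustrated with a minimal sketch. The merchant names, thresholds and prediction heuristic below are assumptions for illustration only, not CBA's actual implementation:

```python
from datetime import date, timedelta
from statistics import mean, median

# Hypothetical transaction history: (merchant, date, amount).
transactions = [
    ("PowerCo", date(2022, 5, 2), 84.10),
    ("PowerCo", date(2022, 6, 1), 79.95),
    ("PowerCo", date(2022, 7, 3), 88.40),
    ("CafeCorner", date(2022, 7, 2), 4.50),
]

def predict_next_bill(history, max_jitter_days=5):
    """Treat a merchant as a recurring bill if payment intervals are
    roughly regular, then predict the next due date and typical amount."""
    dates = sorted(d for _, d, _ in history)
    amounts = [a for _, _, a in history]
    if len(dates) < 3:
        return None  # not enough history to call it recurring
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    typical_gap = median(gaps)
    if any(abs(g - typical_gap) > max_jitter_days for g in gaps):
        return None  # intervals too irregular to be a bill
    return dates[-1] + timedelta(days=typical_gap), round(mean(amounts), 2)

power_co = [t for t in transactions if t[0] == "PowerCo"]
print(predict_next_bill(power_co))  # (datetime.date(2022, 8, 3), 84.15)
```

A production system would add far more, such as variable billing cycles, merchant name normalisation and confidence scoring, but the principle of learning patterns from a user's own past transactions remains the same.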
Robust data and risk management guidelines governed the creation of Bill Sense to ensure it functions safely, is accountable and is therefore aligned with both the bank's guidelines and Australia's AI Ethics Framework.
National Australia Bank (NAB)
NAB is another financial institution in Australia that applied the AI Ethics Framework. It implemented a solution using facial recognition technology (FRT) that allows customers to verify their identity digitally by taking a picture of their identification document on their phone and providing images or videos of themselves. The FRT compares the images provided to verify the user's identity.
In this case, an external supplier provided the FRT software being used. NAB implemented it by following the AI Ethics Framework, requiring the third-party provider to design its data systems so that they can be audited externally, and ensuring that the technology is responsible, explainable, sustainable and implemented fairly.
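The vendor's FRT software is proprietary, but the core verification step in most such systems reduces to comparing two face embeddings, numeric vectors produced by a neural network, against a tuned threshold. A minimal sketch of that comparison, with toy vectors and an assumed threshold standing in for a real face model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(id_doc_embedding, selfie_embedding, threshold=0.8):
    """Accept the match only if the two embeddings are close enough.
    The threshold here is a made-up value; real systems tune it to
    balance false accepts against false rejects, and keep the score
    on record so that decisions can be audited externally."""
    score = cosine_similarity(id_doc_embedding, selfie_embedding)
    return score >= threshold, score

# Toy 4-dimensional vectors standing in for the output of a face model.
doc_vec = np.array([0.9, 0.1, 0.3, 0.4])
selfie_vec = np.array([0.8, 0.2, 0.35, 0.38])
accepted, score = verify_identity(doc_vec, selfie_vec)
print(accepted, round(score, 3))  # True 0.99
```

Keeping the comparison score and threshold recorded, rather than returning only a yes/no answer, is one concrete way such a system can remain explainable and externally auditable.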
NAB’s process of data analysis and review ensures that its AI projects are ethical before implementation.
Singapore
In 2018, the Monetary Authority of Singapore (MAS) and the financial industry co-created the Fairness, Ethics, Accountability and Transparency (FEAT) Principles, a guideline for organisations offering financial products and services on the responsible use of AI and data analytics, as well as on strengthening internal governance around data management and use.
Following that, the Veritas Initiative was created as a multi-phased collaborative project stemming from Singapore's National AI Strategy announced in 2019. It focuses on helping financial institutions assess their Artificial Intelligence and Data Analytics (AIDA) driven solutions against the FEAT Principles and ensure compliance.
Based on the financial institutions' immediate requirements, a set of fairness metrics and an assessment methodology were developed for two banking use cases: credit risk scoring and customer marketing.
Upon completion, two white papers will be published to document the assessment methodology for the FEAT Principles and the use cases.
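The white papers document the full methodology; as a flavour of what a fairness metric can look like, here is a minimal sketch of one widely used check, demographic parity, applied to toy credit-scoring decisions. The metric choice and data are illustrative assumptions, not the Veritas methodology itself:

```python
# 1 = loan approved, 0 = declined; toy decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
}

def approval_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in approval rates across groups (0.0 means parity)."""
    rates = [approval_rate(o) for o in decisions_by_group.values()]
    return max(rates) - min(rates)

gap = demographic_parity_difference(decisions)
print(f"Approval-rate gap between groups: {gap:.1%}")  # 37.5%
```

A large gap does not automatically mean a model is unfair, but it flags exactly the kind of disparity that an assessment framework like Veritas asks institutions to examine and justify.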
China
China is likely the largest FinTech market in the world in terms of transaction volume, with online (mobile) payment volume reaching 100 trillion yuan in 2016. In May 2017, the People's Bank of China established a FinTech committee to streamline coordination, research and development efforts in the financial sector.
Later that year, in July 2017, the State Council of China released one of the most comprehensive AI strategies globally. Named the Next Generation Artificial Intelligence Development Plan, it outlines the country's vision to build a domestic AI industry in the coming years and to establish itself as the leading AI power by 2030. The plan highlights finance as a key area for the development of AI applications.
It also proposed the establishment of big data systems in the financial sector, the development of intelligent financial services and products, and the requirement to strengthen intelligent early warning systems to prevent financial risks for the Chinese economy.
In August 2019, the People's Bank of China launched a FinTech Development Plan for 2019 through 2021 that outlines development targets for the financial sector. Based on the current ecosystem, it sets out the requirement for the financial industry to optimise technologies and systems to allow for the integration of AI.
That being said, despite China having one of the most developed AI strategies globally, its approach to ethics in AI remains unclear and will only become apparent in the coming years.
How is AI ethics applicable to digital banking?
In the financial services industry, behavioural science techniques are proving successful in changing customers' attitudes towards their money and helping them manage their finances more effectively. From motivating individuals to set up savings goals to nudging them to track their spending and work on their financial planning, behavioural science techniques are a helpful tool for transforming the way people view and handle their finances.
However, a primary concern that keeps resurfacing around nudging, even more so when it is powered by algorithms, is: when does a nudge stop being a nudge and start being an unethical tool used to manipulate behaviour?
This concern can be partly allayed by openly sharing the nature of the nudge and being transparent about the logic behind its generation. Sharing this information empowers users to make conscious, informed decisions, because they understand the elements of the decision-making process.
Furthermore, nudges should only ever be applied if they enhance the overall wellbeing of the user, and users should always have the final say over the course of action they undertake. Leaving the final decision with the user is key to ensuring that nudges remain ethical and that users retain the freedom to choose what they want in any given situation.
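These two safeguards, openly disclosing the logic and leaving the final decision with the user, can be reflected directly in how a nudge is represented in software. A minimal sketch, in which all field names and figures are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Nudge:
    """A nudge that carries its own plain-language explanation,
    so the user can see why it was generated before acting on it."""
    message: str
    rationale: str          # the logic behind the nudge, disclosed openly
    suggested_action: str

    def deliver(self) -> None:
        print(f"{self.message}\nWhy you are seeing this: {self.rationale}")

    def respond(self, user_accepts: bool) -> str:
        # The user always has the final say; declining is a first-class path.
        return self.suggested_action if user_accepts else "no action taken"

nudge = Nudge(
    message="You could reach your holiday savings goal 2 months sooner.",
    rationale="Your average monthly surplus over the last 3 months "
              "exceeds your current savings plan by $120.",
    suggested_action="increase monthly transfer by $120",
)
nudge.deliver()
print(nudge.respond(user_accepts=False))  # -> no action taken
```

The design choice matters more than the code: a nudge that cannot explain itself, or that executes its action without explicit consent, fails the transparency and freedom-of-choice tests described above.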
As financial institutions increasingly employ artificial intelligence in their digital banking systems to generate personalised insights and actionable nudges, it is important to ensure that AI ethics are taken into consideration. Systems employing AI should be designed to be transparent, accountable and explainable for developers and users alike.
When systems are designed with AI ethics in mind, issues like data privacy concerns, behavioural manipulation and algorithmic bias can be effectively addressed and managed as systems and applications develop.
Conclusion
As AI becomes a prominent feature in our lives, it is key that financial institutions and services remain proactive in fine-tuning and redefining their guiding principles to keep up with its evolving nature. This includes anticipating potential algorithmic biases, monitoring the development of AI-based applications and even re-training systems and models when necessary.
How can Moneythor help?
Moneythor offers an orchestration engine deployed between financial institutions' systems of record and their digital channels to power engaging and tailored experiences for end users.
With the Moneythor solution, banks and FinTech firms can upgrade the digital experiences they offer with personalised insights, actionable recommendations and contextual nudges designed to deepen the relationship banks have with their users.
The algorithms and AI models used by Moneythor are rooted in behavioural science principles and adhere to AI ethics, with a particular focus on:
- Accessibility, to ensure that the suggested behaviour is well explained and accessible in the user’s mind for it to be considered.
- Desirability, where the output of the AI computation transparently highlights the benefits of pursuing a behaviour change as well as the costs of not doing so.
- Feasibility, with built-in personalised calls to action added by the algorithms to enable the user to take a relevant action.
Updated: 18 Aug 2022.
Download AI Ethics Guide
"*" indicates required fields