The use of artificial intelligence poses an ESG headache for the global financial industry


Artificial intelligence (AI) is often touted as a panacea for financial services firms facing the impending data onslaught stemming from environmental, social and governance (ESG) regulation. Yet ESG also poses an existential threat to the industry's use of AI.

The European Union’s Sustainable Finance Disclosure Regulation has required asset management companies to start collecting millions of data points from the companies they invest in, and the upcoming Corporate Sustainability Reporting Directive will only add to the volume. On top of that comes data collected under the Task Force on Climate-Related Financial Disclosures (TCFD) initiative and the International Sustainability Standards Board’s plans to create a baseline for ESG reporting.

Overall, it is becoming clear that AI-based systems will be critical to businesses’ efforts to make sense of, and profit from, all of this data.

However, potential problems for financial services companies using AI lurk under each of the three pillars: E, S and G. The carbon footprint of storing and processing data is huge and growing; algorithms have already been shown to discriminate against certain groups of the population; and a lack of technological skills in the ranks of senior management, and the workforce in general, leaves companies vulnerable to errors.

Environment: Carbon footprint of energy consumption

According to the International Energy Agency, electricity consumption for data center cooling could account for as much as 15% to 30% of a country’s total use by 2030, and running the algorithms that process the data consumes yet more power.

Training AI for business use has a significant environmental impact, says Tanya Goodin, a tech ethics expert and fellow at the Royal Society of Arts in London. “Training in artificial intelligence is a very energy-intensive process,” says Goodin. “AI is trained via deep learning, which involves processing large amounts of data.”

Recent academic estimates suggest that the carbon footprint of training a single AI model is 284 tonnes of CO2, roughly five times the lifetime emissions of an average car. Separate calculations put the power consumption of a supercomputer at the same level as that of 10,000 households. Yet this enormous electricity consumption often goes unaccounted for. When an organization owns its own data centers, the carbon emissions are captured and reported as Scope 1 and 2 emissions under TCFD reporting. However, if, as is the case at a growing number of financial firms, data centers are outsourced to a cloud provider, the emissions fall into Scope 3 of TCFD reporting, which tends to be disclosed only on a voluntary basis.
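For readers wondering where such figures come from, the sketch below follows the standard accounting used in academic training-emissions studies: energy drawn by the training hardware, scaled by the data center’s overhead and the carbon intensity of the local grid. Every number here is a hypothetical placeholder, not a figure from the article or any specific study.

```python
# Minimal sketch of a training-run emissions estimate.
# All inputs are assumed/hypothetical values for illustration only.

avg_power_draw_kw = 300.0          # average power draw of training hardware (kW), assumed
training_hours = 24 * 30           # a hypothetical month-long training run
pue = 1.5                          # data center power usage effectiveness (overhead), assumed
grid_intensity_kg_per_kwh = 0.4    # grid carbon intensity (kg CO2e per kWh), assumed

# Total energy: hardware draw x duration x facility overhead
energy_kwh = avg_power_draw_kw * training_hours * pue

# Emissions: energy x grid intensity, converted from kg to tonnes
emissions_tonnes = energy_kwh * grid_intensity_kg_per_kwh / 1000.0

print(f"Energy consumed: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_tonnes:,.1f} tonnes CO2e")
```

Note that the last two inputs are exactly where the reporting gap described above opens up: a firm renting cloud capacity rarely sees its provider’s actual overhead or grid mix.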

“I think it’s classic misdirection — almost like a misdirection magic trick,” says Goodin. “AI is being sold as a solution to climate change, and if you talk to any of the tech companies, they’ll tell you there’s huge potential for using AI to solve climate problems, but in reality, that’s a big part of the problem.”

Social: Discriminatory algorithms and data labeling

Algorithms are only as good as the people who design them and the data they are trained on, a point acknowledged by the Bank for International Settlements (BIS) earlier this year. “AI/ML [machine learning] models (like traditional models) can reflect biases and inaccuracies in the data on which they are trained, and potentially lead to unethical results if not properly managed,” the BIS said.
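The BIS does not prescribe a method, but the simplest audit its warning implies is a comparison of model outcomes across groups. Below is a minimal, hypothetical sketch of such a check; the records and group labels are invented for illustration, and a gap in approval rates flags potential bias rather than proving it.

```python
# Illustrative fairness audit: compare a model's approval rates across groups.
# All data below is hypothetical.

applicants = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Share of applicants in `group` approved by the model."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(applicants, "A")
rate_b = approval_rate(applicants, "B")

# Demographic-parity difference: a large gap suggests the model may have
# absorbed bias from its training data and warrants closer review.
print(f"Approval rate, group A: {rate_a:.0%}")
print(f"Approval rate, group B: {rate_b:.0%}")
print(f"Parity gap: {abs(rate_a - rate_b):.0%}")
```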

Kate Crawford, co-founder of New York University’s AI Now Institute, went further, warning of the ethical and social risks inherent in many AI systems in her book Atlas of AI. “[The] separation of ethical issues from technique reflects a larger problem in the field [of AI], where liability for harm is not recognized or is considered beyond reach,” Crawford writes.

It’s perhaps no surprise, then, that mortgage, loan and insurance companies have already found themselves on the wrong side of regulators when the AI they used to make lending decisions and price insurance was shown to have absorbed and perpetuated certain biases.

In 2018, for example, researchers at the University of California, Berkeley found that AI used in loan decisions perpetuated racial bias: on average, Latino and African American borrowers paid 5.3 basis points more in interest on their mortgages than white borrowers. In the UK, research by the Institute and Faculty of Actuaries and the charity Fair By Design found that people living in low-income neighborhoods paid £300 more a year for car insurance than people with identical vehicles living in wealthier areas.
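To put the Berkeley figure in context, a basis point is one hundredth of a percentage point. The sketch below shows what a 5.3-basis-point rate premium costs per year on a hypothetical loan balance; the balance is an assumed placeholder, not a figure from the study.

```python
# Illustrative arithmetic: annual cost of a 5.3 basis-point interest premium.
# The loan balance is a hypothetical placeholder.

principal = 300_000    # hypothetical mortgage balance, in dollars
premium_bp = 5.3       # extra interest paid, in basis points (1 bp = 0.01%)

extra_rate = premium_bp / 10_000            # convert basis points to a decimal rate
extra_interest_per_year = principal * extra_rate

print(f"Extra interest per year: ${extra_interest_per_year:,.2f}")
```

Modest per borrower, but multiplied across an entire lending book, and compounded over the life of each loan, such premiums add up to a substantial systematic transfer.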

The UK’s Financial Conduct Authority (FCA) has repeatedly warned firms that it is monitoring how they treat their customers. In 2021, the FCA revised pricing rules for insurers after research showed pricing algorithms generated lower rates for new customers than for existing ones. Similarly, the EU’s legislative package on artificial intelligence is expected to label algorithms used in credit scoring as high risk and impose strict obligations on the companies that use them.

Financial firms also need to be aware of how their data has been labeled, Goodin says. “When you’re building an AI, one of the things that’s still pretty manual is that the data has to be labelled. Data labeling is outsourced by all these big tech companies, mostly to third-world countries that pay [poorly],” she notes, adding that these arrangements are akin to “the disposable fashion industry and its sweatshops”.

Governance: Management does not understand the technology

When it comes to governance, the biggest issue for financial services firms is a lack of technologically skilled staff, including at the senior management level.

“There is a fundamental lack of expertise and experience in the investment industry when it comes to data,” says Dr Rory Sullivan, co-founder and director of Chronos Sustainability and visiting professor at the Grantham Research Institute on Climate Change at the London School of Economics.

Investment firms blindly take data and use it to create products without understanding any of the uncertainties or limitations that might be in the data, Sullivan says. “So we have a capacity and expertise issue, and it’s a very technical capacity issue around the data and the interpretation of the data,” he adds.

Goodin agrees, and argues that all financial company boards should employ ethicists to advise them on the use of AI. “Going forward, a pretty big area will be AI ethicists working with companies to determine the ethical stance of the AI they use,” she says.

“So I think bank boards need to think about how they will access it.”
