Are robo-advisors in the financial industry racist?

Could artificial intelligence make your robo-advisor racist?

While the fiduciary debate around robo-advisors has generally revolved around their ability to provide advice and whether they should be used to create “nudges” to influence investor behavior, there is reason to fear that an algorithm fed with biased data could start making biased recommendations, said Bradley Berman, a corporate and securities lawyer at global law firm Mayer Brown.

“To use machine learning and AI, they feed those algorithms with historical data,” Berman said. “The reasoning is that historical data about people could reflect discrimination that has happened in the past, so the algorithm could become biased even though it was designed to be completely neutral.”

Berman cited an SEC Investor Advisory Committee meeting on “Ethical AI and Fiduciary Responsibilities of RoboAdvisors” earlier this month that turned to the possibility that biased historical data could influence AI and machine learning platforms.


Not a new problem
To some degree, the algorithms are designed to discriminate, Berman said, particularly based on age. It makes sense that an asset allocation offered to a 65-year-old would be different from that recommended for a 25-year-old.

But machine learning models can also use information like location and zip code to make decisions.

“These could reflect housing patterns that have been affected by discrimination,” Berman said. “Redlining, urban renewal, things like that pushed certain types of people into certain neighborhoods, and we are still dealing with the aftermath of that. So there is concern that backward-looking historical data could carry the legacy of those biases.”

Although the algorithms themselves are neutral, the quality of their output depends on the quality of the data fed into them, Berman said, a concept often abbreviated as GIGO (garbage in, garbage out). The committee is concerned that machine learning will make algorithms more biased over time as they are fed more biased data.
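To make that mechanism concrete, here is a minimal, hypothetical sketch in Python: the data is entirely synthetic, the feature names (quality, zip_code, approved) are invented for illustration, and nothing here reflects how any actual robo-advisor is built. The point is only that when historical labels are biased and a proxy such as zip code correlates with group membership, a model can reproduce the bias even though the protected attribute is never an input.

# Illustrative only: synthetic data and hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership -- never shown to the model.
group = rng.integers(0, 2, size=n)

# zip_code acts as a proxy: each group is concentrated in one of two
# neighborhoods, a stand-in for housing patterns shaped by past redlining.
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical labels (past "approvals") are themselves biased: group 1 was
# approved less often at the same underlying quality.
quality = rng.normal(size=n)
approved = (quality - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train only on quality and zip_code...
X = np.column_stack([quality, zip_code])
model = LogisticRegression().fit(X, approved)

# ...yet the model reproduces the old bias through the zip_code proxy.
for z in (0, 1):
    prob = model.predict_proba([[0.0, z]])[0, 1]
    print(f"predicted approval probability at average quality, zip={z}: {prob:.2f}")

Dropping the protected attribute from the inputs is not enough in this sketch, because the correlated proxy carries the same historical pattern into the model's predictions.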

Some early social AI experiments illustrate the potential problem. On March 23, 2016, Microsoft launched Tay, a self-learning chatbot, on Twitter. The chatbot was designed to mimic the rambunctious chatting style of a teenage girl while learning more about language and human interaction from the platform over time. Within hours, Twitter users had taught Tay to tweet highly offensive statements such as “Bush did 9/11 and Hitler would have done a better job.”

After 16 hours and over 95,000 tweets, many of which were offensive, Microsoft decided to shut down Tay.
