Decision makers are becoming more wary about certain uses of AI, including trusting solutions to make money for them. When things go wrong and losses mount, it's tough to know who's responsible – the machine, or the person behind the machine. Bloomberg reports on a recent example of this, where an investor is suing for damages after losing money based on the decisions made by a money management solution.
According to Bloomberg, back in 2017, investor Samathur Li Kin-kan bought into a money management AI solution that another investor, Raffaele Costa, planned on using to manage the money made by his company, Tyndaris. "The idea of a fully automated money manager inspired Li instantly," driving him to entrust the solution with $2.5 billion – $250 million of his own money – to grow his wealth. However, after letting the machine handle his cash, "it was regularly losing money, including over $20 million in a single day."
As a result, Li is taking Costa to court – he is suing Tyndaris "for about $23 million for allegedly exaggerating what the supercomputer could do," Bloomberg says. Simultaneously, Tyndaris is suing Li for $3 million in unpaid fees and denies "that Costa overplayed" the machine's capabilities.
The trial will take place in April 2020.
Takeaways for decision makers:
Bloomberg says that this particular case raises the question of who to blame when AI is at fault. Since technology can't be sued, decision makers and investors may go after the next best thing: the people who sold them that technology. Bloomberg also says that this case will help pave the way for future lawsuits involving technology: "The legal battle is a sign of what's in store as AI is incorporated into all facets of life, from self-driving cars to virtual assistants."
However, one facet of these cases that remains unclear, and probably won't be clear for years to come, is the human interpretation of each case and its fault. In this case, for example, there was considerable back and forth between Li's and Costa's lawyers about whether Costa exaggerated the success and security of the AI solution, and whether the machine would actually make Li more money. In the end, it will be up to human discretion to determine where the fault lies – in the AI machine for not performing to Li's standards, or in Costa for making the machine appear to function better than it did.
Bloomberg also suggests that more challenges will unfold in similar cases, especially once chatbots and other technologies begin selling products and services to customers. Who will be sued then?
“Misrepresentation is about what a person said to you,” Karishma Paroha, a London-based lawyer at Kennedys who specializes in product liability, told Bloomberg. “What happens when we’re not being sold to by a human?”