Published on: 19 May 2025
3 min read
The Financial Times reports that insurers at Lloyd's of London have launched a product to cover losses caused by malfunctioning AI tools.¹
This is a logical play: after all, companies have adopted chatbots for various functions, including customer service and in-house research.² Even the Singapore government has done so.³ However, there have been some high-profile boo-boos.⁴
Enter insurers: are they the solution?
I make three observations for businesses considering integrating chatbots into their workflows or customer-facing functions.
1️⃣ Is a human really more reliable than an AI?
I'm going to start with this massive hot take.
I recognise the concerns about risk when AI is involved in creating material or interfacing with customers, especially where accuracy is critical.
But the more fundamental question to ask is:
How certain are you that your employees or contractors can do the same job better than a well-trained AI?
For example, I'm sure we've all been utterly frustrated by an exchange with a customer service officer who doesn't seem to know what they're doing.
In that situation, would an AI agent necessarily be worse?
When assessing whether to integrate AI into your workflow, I suggest that the question is not whether the AI is infallible. Rather, the question is whether the AI is more reliable than the human who would be performing the same tasks.
And it is for every business to answer this question for itself.
2️⃣ Does the use of AI erode trust?
Suppose you run a retail business, and for customer service, you've decided to use an AI chatbot platform instead of a call centre.
Would customers feel betrayed if they subsequently found out that they had been conversing with an AI and not a human being?
I suppose it depends on a number of factors: the nature of the business, your client profile, whether your clients have been put on notice, and so on.
But what if you run, say, a law firm? Do your clients expect to be conversing with a human being at all times?
I'm not going to try and suggest a one-size-fits-all answer. That would be foolhardy.
I will suggest, however, that businesses may want to think beyond technical possibilities and performance metrics, and consider whether the use of AI would erode consumer trust.
3️⃣ Insurance isn't a panacea.
So should your business go full steam ahead to integrate AI into its workflow, and simply obtain insurance coverage to mitigate the risk?
Well, there are tradeoffs involved with insurance, which:
a) isn't free. Premiums will have to be paid, and management time and resources will be required to handle insurance matters. Factor these costs into the true cost of AI adoption; and
b) doesn't always pay out. Insurers can seek to deny coverage for a variety of reasons.⁵
So, caveat emptor.
Disclaimer:
The content of this article is intended for informational and educational purposes only and does not constitute legal advice.
¹ https://www.ft.com/content/1d35759f-f2a9-46c4-904b-4a78ccc027df
² https://www.reuters.com/technology/artificial-intelligence/jpmorgan-launches-in-house-chatbot-ai-based-research-analyst-ft-reports-2024-07-26/
³ https://www.straitstimes.com/singapore/more-public-officers-in-singapore-use-government-chatbot-to-enhance-productivity
⁴ Look, I could fill up more than one comment box on this alone, but here's a small sampling:
https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/
https://www.bbc.com/news/technology-68025677
https://finance.yahoo.com/news/virgin-money-chatbot-tells-off-125436677.html
https://edition.cnn.com/2024/12/10/tech/character-ai-second-youth-safety-lawsuit
⁵ As reflected in some of my matters. Being an indie conflicts-free practitioner means having the freedom to act against insurers.