Insurance Industry Grapples with the Possibilities and Pitfalls of AI

Published Thursday Apr 4, 2024

Author Adam Drapcho

Artificial intelligence has the potential to create significant efficiencies within the insurance industry, performing some tasks better than human workers. However, there’s also the possibility that the technology could lead to undesirable outcomes and invasions of privacy without well-considered guardrails.

AI is already in use or under strong consideration by a majority of insurers, according to surveys conducted by the National Association of Insurance Commissioners (NAIC), which describes itself as a “standard-setting organization” governed by the chief insurance regulators from all U.S. states, Washington, D.C., and five U.S. territories.

A report released by the NAIC in December 2023 compared survey results from life insurers with earlier results from home insurers, released in August of the same year, and from personal auto insurers, released in December 2022. All three surveys revealed that AI already has a foothold in the insurance industry: 58% of life insurers, 70% of home insurers and 88% of personal auto insurers “use, plan to use or explore” AI.

From the Telegraph to ChatGPT
When Melcher & Prescott Insurance, based in Laconia, was founded in 1862, the telegraph was state-of-the-art technology. While computers and the internet have since transformed the insurance industry, the firm’s current president, William Bald, says, “I really think we’re going to see, in the next few years, AI race forward and create some incredible efficiencies.” That will include taking on some of the more tedious tasks in the industry, such as making sure that all of the data in an application matches the policy the insurer is providing.

“For example, let’s say I’m insuring a marina,” Bald says, and the client wants to insure their trio of specialized $400,000 forklifts used to move powerboats. It’s the insurer’s obligation to make sure what is written into the policy—the make and model of each forklift, down to its vehicle identification number—is accurate. That way, if one of them goes tumbling into Lake Winnipesaukee, the marina wouldn’t have to take a $400,000 hit.

“It’s very time-consuming but very important to our clients,” Bald says. If AI could expedite that process, it would release the agent to spend more time on what humans do best—interacting with other humans, Bald says, “more of the value-added—client consultations, those type of things.”

Bald started using AI programs to improve his writing. “It’s sometimes nice to throw your email into ChatGPT,” and get some feedback, he says.

Many of the respondents in the NAIC survey said they used AI in their marketing, and that concerns Bald. He says AI can open new areas of liability. “You do need to be aware of copyright infringement,” Bald says, noting a recent lawsuit brought by The New York Times, which alleged that AI writing models were plagiarizing from published news articles. “If you are going to be using ChatGPT for marketing, you have to be sure about copyright infringement.”

There’s a more sinister area of liability, too. In the wrong hands, AI could be used to harvest someone’s voice from a YouTube or TikTok video, then manipulate it to send a voicemail. If an employee got an urgent voice memo from someone who sounded like their boss, they might become an unwitting accomplice in a fraud scheme—transferring funds to an offshore account or sharing a valuable passcode, for example.

These new liabilities, of course, mean that insurers will need to advise their clients about keeping their protections in step with the times. “It’s a complex morass out there that you’re adding an additional layer to with AI,” says Bald.

Regulatory Challenges
D.J. Bettencourt, commissioner for the NH Insurance Department, says AI has been a “subject of conversation” since he joined the department last year. He says he found it encouraging that insurers and their regulators across the country are collaborating to navigate a landscape altered by the emergence of new technology.

“One thing I love about the insurance industry is that it is constantly and rapidly developing,” Bettencourt says. “Obviously, AI has been a national trend. We are learning together as insurance commissioners.”

Bettencourt views AI as a blade that could cut both ways. On the one hand, it could lead to faster underwriting, which could allow insurers to serve more people with lower administrative costs and therefore reduced premiums. On the other, it could result in discriminatory practices, assigning unfairly high risk evaluations to individuals because of their demographics rather than their personal attributes. AI’s ability to scour data caches could also lead to invasions of privacy, particularly as it relates to medical information.

“Disparate data points could be linked and violate privacy rights,” Bettencourt says. “We are concerned that AI tools could be used to find correlation instead of pure causation.”

The challenge for regulators will be to strike a balance between protecting individual rights to fairness and privacy and allowing space for the industry to innovate, he says. If done right, the result for consumers would be faster access to insurance products at a lower cost.

“That would certainly be my hope and expectation,” Bettencourt says. “I think it’s all about, how can you improve the overall process.”

The NH Insurance Department is evaluating the laws already on the books, especially those regulating privacy, anti-discrimination and consumer rights, and considering whether and how they could be applied to potential problems caused by the use of AI, Bettencourt says.

In addition to the surveys, the NAIC has issued a model bulletin for its member jurisdictions to modify for their markets and disseminate to insurers. The intended result is to provide clarity for insurers, describing what is a fair use of AI and what isn’t.

The NH Insurance Department sent that bulletin out to insurers for review, with the intent that Bettencourt’s staff could “New Hampshire-ize it.”

Part of the guidance, he says, would be to put the onus on the insurers to test their own systems to ensure that they are operating within the bounds of acceptability. He also says that, to the extent that AI is involved in policy decisions, it should be transparent to the consumer.

“AI, and technology, as it innovates, it will obviously innovate in ways that we can’t imagine today. The reality is it is a tool. It can be a tool that can benefit the industry and consumers, but it has to be held accountable. We cannot outsource human brainpower to technology,” Bettencourt says. “There’s still going to be human accountability over these examples. That’s going to be absolutely critical.”
