Actuarial pricing, capital modelling and reserving

Pricing Squad


Issue 11 -- February 2017

Welcome back to Pricing Squad!

Pricing Squad is a newsletter for fellow pricing practitioners and actuaries in general insurance.

Today's issue shows how GLMs can damage your portfolio when used in conjunction with other pricing models.

You can also read a review of Pietro Parodi's pricing textbook.


Granularity conflicts

Mr Reilly's spies

Mr Reilly employs three spies. Every month he pays agent X 180, agent Y 100 and agent Z 50.

The agents send Reilly intelligence reports. The reports are anonymised so Reilly does not know which of the three agents authored any given report.

One day, Reilly's agency orders him to cut his spy budget because the value added per spy is only 100. This is roughly 9% below the current average cost of 110 per spy.

Reilly thinks: 'Historical reasons behind the pay scale are clearly obsolete. At 100 per spy, agent X sure looks overpaid by a big margin! Cutting X's salary alone would solve my problem. But wait, maybe X provides the best intel. What if he defects? Better cut Y or Z instead. What should I do?'

Reilly faces a "granularity conflict": he must make a decision using two pieces of information at incompatible levels of granularity. On the one hand he has a granular payroll. On the other hand he has an unsegmented new budget.

Granularity conflicts in ratemaking

Car Insurance Ltd built a very predictive loss cost GLM.

After generating each renewal invite, they score the invite with this GLM. Next, they divide the predicted loss cost by the quoted premium to derive a "scored loss ratio". Finally, they adjust the invited premiums accordingly.
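The mechanics can be sketched in a few lines of code. This is a minimal illustration, not Car Insurance Ltd's actual rule: the target loss ratio and the halfway-adjustment heuristic are invented for the example.

```python
# Hypothetical sketch of the "scored loss ratio" adjustment described above.

def scored_loss_ratio(predicted_loss_cost: float, quoted_premium: float) -> float:
    """GLM-predicted loss cost divided by the quoted renewal premium."""
    return predicted_loss_cost / quoted_premium

def adjust_premium(quoted_premium: float, slr: float, target_lr: float = 0.70) -> float:
    """Illustrative rule: scale the premium so the scored loss ratio
    moves halfway towards the target loss ratio."""
    factor = 1 + 0.5 * (slr / target_lr - 1)
    return quoted_premium * factor

slr = scored_loss_ratio(350.0, 420.0)      # 350 / 420, about 0.83
new_premium = adjust_premium(420.0, slr)   # above-target ratio pushes the premium up
```

A scored loss ratio above target pushes the invited premium up; one below target pushes it down.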

Everybody cheers this clever hack.

But despite the increased pricing sophistication, the actual renewal loss ratio deteriorates. This is blamed on increased bodily injury claims, the underwriting cycle and changes to the business mix.

The true cause remains undetected for years.

Granularity conflicts under the surface

A good personal lines motor pricing algorithm is organic.

There are hundreds of pricing rules, exceptions and interactions introduced by generations of analysts and underwriters.

No single person understands all of the rationales behind all pricing calculations. This is OK. It might seem messy, it might be annoying, but it's OK.

Unbeknownst to them, when Car Insurance Ltd compare their organic pricing algorithm with the GLM prediction, they face a granularity conflict. By necessity, the GLM simplifies complex interactions, exceptions and classifications. Therefore, it is less granular than the rating algorithm.

Let's say that the existing pricing algorithm differentiates between a "drink driving" conviction 3 years ago and the same conviction 4 years ago. Two otherwise identical drivers with past drink-driving convictions will be quoted differently if they were convicted in different years.

But a GLM is unable to reflect such a fine level of granularity for a single conviction type.

The driver with the more recent conviction (who is higher risk) will have a higher quoted premium than the other driver. But the GLM predicts the same loss cost for both, so the higher-risk driver ends up with a lower "scored loss ratio". As Car Insurance Ltd adjust invites based on this "scored loss ratio", they reward this driver relative to the other, and retain more bad drivers at reduced premiums.
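In numbers (all figures invented for illustration), the inversion looks like this:

```python
# Two otherwise identical drivers; the organic rating algorithm distinguishes
# conviction recency, the GLM cannot.
glm_predicted_loss_cost = 500.0      # same GLM prediction for both drivers

premium_recent_conviction = 800.0    # organic algorithm: convicted 3 years ago
premium_older_conviction = 700.0     # organic algorithm: convicted 4 years ago

slr_recent = glm_predicted_loss_cost / premium_recent_conviction  # 0.625
slr_older = glm_predicted_loss_cost / premium_older_conviction    # about 0.714

# The higher-risk driver has the LOWER scored loss ratio, so any rule that
# rewards low scored loss ratios discounts exactly the wrong driver.
assert slr_recent < slr_older
```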

The same is happening for every minute interaction, every classification and every other micro-segment where a "granularity conflict" occurs.

Granularity conflict problems are not limited to "scored loss ratio" cases.

They can also bite in lifetime value and price optimisation, for example when an elasticity model has a different granularity from the loss cost model.

What is to be done?

All pieces of information should be brought to a common level of granularity before any valid decision can be made.

For Reilly, a common level of granularity is his entire spy network. In the absence of additional information, he must treat all agents equally, for example by cutting every salary proportionally by about 9% (scaling the total payroll of 330 down to the new budget of 300).
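The arithmetic of the proportional cut is a one-liner (a minimal sketch of the numbers from the story):

```python
# Scale every agent's salary by the ratio of the new budget to the current payroll.
salaries = {"X": 180, "Y": 100, "Z": 50}
new_total = 100 * len(salaries)               # 100 per spy -> budget of 300
factor = new_total / sum(salaries.values())   # 300 / 330, a cut of about 9%
new_salaries = {agent: round(pay * factor, 2) for agent, pay in salaries.items()}
# new_salaries == {"X": 163.64, "Y": 90.91, "Z": 45.45}; the total is 300.
```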

For Car Insurance Ltd, a common level of granularity could be a sub-model of the loss cost GLM. To produce a "scored loss ratio" capable of supporting valid adjustments to individual quotes, they would have to model both the predictions from the loss cost GLM and the quoted organic premium on this common level of granularity first.
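One way to picture this remedy (a hypothetical sketch, not the only valid construction): aggregate both the GLM prediction and the organic premium up to segments the GLM can actually distinguish, and compare them only at that level. The segment keys and figures below are invented for illustration.

```python
from collections import defaultdict

# Each quote: (glm_segment, glm_predicted_loss_cost, organic_quoted_premium).
# The segment key contains only factors the GLM actually models, so the
# comparison never cuts finer than the GLM's own granularity.
quotes = [
    ("conviction:drink_driving", 500.0, 800.0),  # convicted 3 years ago
    ("conviction:drink_driving", 500.0, 700.0),  # convicted 4 years ago
    ("conviction:none", 300.0, 400.0),
]

segment_cost = defaultdict(float)
segment_premium = defaultdict(float)
for segment, loss_cost, premium in quotes:
    segment_cost[segment] += loss_cost
    segment_premium[segment] += premium

segment_slr = {s: segment_cost[s] / segment_premium[s] for s in segment_cost}
# Both drink-driving drivers now share one scored loss ratio (1000 / 1500),
# so neither is singled out for a discount the GLM cannot justify.
```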

This involves some work but it delivers reliable results.


Book review: Pricing in General Insurance by Pietro Parodi

Pricing in General Insurance by Pietro Parodi covers basic and advanced pricing topics. I like that it focuses on pricing itself, as opposed to applied maths, accounting or compliance.

The basics broadly align with the IFoA "ST8" exam. However, the book is significantly more entertaining to read than the exam materials. Theory is sprinkled with colourful anecdotes and insightful quotes from people like Mike Tyson and Albert Einstein. I learned from the book how one unfortunate snail contributed to the birth of tort liability legal theories.

The basics covered include: pricing processes, underwriting cycles, all kinds of pricing triangles, understanding products and risks, claim inflation and development, various exposure bases, typical data structures, data checking, transformations and reconciliations, currency conversions, large loss handling, increased limit curves, frequency, severity and burning cost analysis, pricing factor selection, bootstrapping and more.

The book mostly gives standard advice on building frequency and severity models. For a practitioner, though, this might not be enough to determine rates. If you have very technical questions on specific lines of business and their intricacies (say, how to pragmatically handle trade codes for commercial public liability), you will not find the answers in this book.

Similarly, for personal lines the book explains the main pricing concepts but stops there. There is little discussion of phased implementation, randomised rate testing, mixing GLMs with other models to improve profit, Winner's Curse or the risk of GLMs with wrong granularity.

There is also a factual error: the book claims that Direct Line was the first to implement GLMs. This is not the case; they were late to the GLM game and successful despite that. On the other hand, the GLM example included in the book uses the non-by-peril approach, which others often underrate.

Mr Parodi helpfully covers a number of advanced topics, including periodic payment orders, credit risk, weather derivatives, multivariate (generalised) regression, Fourier Transform and capital allocation. Other advanced topics are skipped, or barely outlined (GAM, non-linear models and machine learning, to name a few), while a few exotic topics are surprisingly included.

Specialist pricing seems to be covered somewhat selectively. To use a few recent examples relevant to me: D&O, terrorism insurance and how to make flood rates from a catastrophe model are not covered.

Some chapters include examples in R but others don't, which I found mystifying.

This book will be most useful to non-practitioners looking for insight into actuarial pricing. Students may also benefit from reading it.


Do you need support?

If you need access to pricing tools that radically simplify your work and deliver a reduced loss ratio quickly, or if you are simply looking for an actuarial contractor, get in touch.

Thank you for reading, and have a great day,
Jan Iwanik, FIA PhD


Copyright © 2017 Jan Iwanik, All rights reserved. You are receiving this email because you subscribed to updates from www.iwanik.co.uk. We publish data and analysis for informational and educational purposes only. You can unsubscribe from this list by emailing us.