The Economics of Data Standards

I was recently invited to present as part of the LMA Bitesize series; the title of the presentation was “Data, have we got it right this time?” SPOILER ALERT: the answer was “I believe we have made great strides in recognising that data is as important as it is, but we still have a long way to go.”

The host, Rob Myers, asked me about the CDR (Core Data Record): where would I like to see the CDR next year? This was in light of the previous week’s presentation by the CDR team from Lloyd’s, and the undertaking given by Bob James, the new Market Transformation Director at Lloyd’s, in his Blueprint Two update the previous day. I answered, as I firmly believe, that the Core Data Record should be core. Just that: not for a specific class, territory, or style of risk (proportional treaty, delegated authority and everything in between). Only after the core data elements are aligned to all the other initiatives (DDM, DCOM, and the placing platforms: Whitespace, PPL, Relay, Insurwave, Dialogue, etc.) can you then extend out for specific classes or territories.

If we could rerun the last five years of digital transformation initiatives (a view confirmed and validated by comments from many sources over the last 18 months), we should first have nailed the data model, then the reference lists, using global standards wherever possible. After all, we happily use ISO currency codes; why not a global class-of-business code? There is nothing that provides a competitive edge in knowing whether a risk is Cyber or Property. Only then should the Transformation team have gone out to the wider world and evaluated off-the-shelf, self-build or hybrid solutions to address the clearly identified problem. All the while, those first two tasks would have enabled technologists to build innovative solutions that they knew could connect, add value and augment the London Market. But we are where we are.
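To make the point about shared reference lists concrete, here is a minimal sketch of why they matter for connectivity. The field names and the class-of-business list are hypothetical (no such global list exists yet, which is the point of the article); only the currency codes come from the real ISO 4217 standard.

```python
# Illustrative sketch: a tiny validator showing how agreed reference lists
# let independently built systems exchange data they both understand.

ISO_4217_CURRENCIES = {"GBP", "USD", "EUR", "JPY"}  # small subset of the real ISO 4217 list
CLASS_OF_BUSINESS = {"CYBER", "PROPERTY", "MARINE"}  # hypothetical shared class-of-business codes

def validate_record(record: dict) -> list:
    """Return a list of validation errors for a placing record."""
    errors = []
    if record.get("currency") not in ISO_4217_CURRENCIES:
        errors.append("unknown currency code: %r" % record.get("currency"))
    if record.get("class_of_business") not in CLASS_OF_BUSINESS:
        errors.append("unknown class of business: %r" % record.get("class_of_business"))
    return errors

good = {"currency": "GBP", "class_of_business": "CYBER"}
bad = {"currency": "Pounds", "class_of_business": "Cyber risk"}

print(validate_record(good))  # [] - coded values match the shared lists
print(validate_record(bad))   # two errors - free-text values cannot be matched
```

Free-text entries like “Pounds” or “Cyber risk” may be obvious to a human underwriter, but to a downstream system they are just unmatched strings, which is why coded reference lists have to come before the platforms that exchange them.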

The other question I was posed was around the economics of data standards. Did I think we would nail an appreciation of the value of data standards in this generation, or would it take the progression of the “WhatsApp” generation to scoff at our ways and overhaul it properly? Now this did make me think. Does the WhatsApp generation (we seem to have moved on from labelling generations with letters from the end of the alphabet) actually hold the answer? I am not so sure. After all, 20 years ago we handled less data, and as a result we could spend time cleaning it up and treasuring it, although we rarely exchanged it in that pre-API world. So, ironically, we may have been better then than we are now. Now data is all around us and, often, someone else’s problem. We are lazy, perhaps by necessity, lapping up radio buttons and drop-downs; in fact we are horrified when we don’t find a form with most of the data prefilled, courtesy of the insatiable effect of cookies.

So just as the amount of data we handle escalates exponentially, to levels that cannot humanly be cared for or cleaned, we also become less aware of how we need to care for it. We are spoon-fed in almost everything we do: wearables, phones, trackers, ticket barriers, online shopping, browsing histories. We don’t have to fill data in; it is silently collected. Designers have tried to remove the poor veracity of data wherever a human is involved. Imagine if, to use a self-scanner, it were less “scanner” and more “barcode input”: you had to type the barcode numbers in rather than scan at all. Not only would it not serve the purpose (to speed up checking out and thus reduce checkout staff costs), but the number of errors would make it effectively pointless. Certainly, Tesco would have an odd set of reordering lists if I were typing barcodes in, especially without my glasses.
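Barcodes are actually a nice case study in designing out human error: the EAN-13 format used on most retail products reserves its last digit as a check digit, so a mistyped number is usually rejected outright. A minimal sketch of the standard check-digit calculation:

```python
def ean13_is_valid(code: str) -> bool:
    """Validate an EAN-13 barcode using its check digit (the 13th digit).

    The first 12 digits are weighted alternately 1, 3, 1, 3, ... and summed;
    the check digit is whatever brings the total up to a multiple of 10.
    """
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]

print(ean13_is_valid("4006381333931"))  # True  - a well-formed EAN-13
print(ean13_is_valid("4006381333930"))  # False - a single wrong digit is caught
```

A scanner reads all 13 digits optically and verifies the checksum in one pass; a human typing them in without glasses gets no such safety net unless the till re-runs this check, which is exactly the kind of validation good data capture builds in.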

It’s a silly analogy in some ways, as these machines arrive to address a problem with a solution and all the tech needed already in place. But it means our next generation is potentially very aware of the value and role of data, yet less aware of how to design good ways to collect it. I can stand testament to this, having observed first-hand my son’s BTEC in IT in 2016. It was not what I expected, both in terms of topics and the level required to send a digitally useful contributor out into industry (he made up the skills gap at University and is now completing a Masters in VR).

But for a sector that is, arguably, no longer in the industry of insurance but in the industry of data, for which the subject matter is insurance (credit J. Ward, 2018), here lies the problem. With projects designed by end users, when the skill required is specialist, we find the client is never happy, the vendor and their product are bent into unrecognisable shapes, and nobody knows how to augment the data with outside sources, because no or limited standards were used.

By Kirstin Duffield

I urge, as I did on the webinar, for us all to actively educate ourselves and the next generation in the value of clean, accurate data. For this market, that means agreeing a single common data model and set of reference tables at the core, and doing this through our Standards body.

Invest in your standards body, agree to use global reference lists where possible, and don’t invent proprietary lists if it can be avoided. If the story is to be believed that in the 1990s the insurance industry recorded most cars in the UK as “Beige”, and that this was because Beige came first alphabetically in the list of colours and users could not be bothered to change it, then what hope do decision makers have relying on data entered by humans, and how can a data-driven industry claim to be just that?
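The “Beige” story also suggests a simple data-quality check any data-driven firm could run. The sketch below is a hypothetical heuristic of my own construction, not an industry method: it flags a field when the dominant value happens to be the first option in the pick list, a classic signature of users leaving the default untouched.

```python
from collections import Counter

def flag_default_bias(values, options, threshold=0.5):
    """Flag a suspicious skew towards the first pick-list option.

    Returns True when the most common value in the data is the
    alphabetically first option AND it accounts for at least
    `threshold` of all entries - a hint that users left the default.
    """
    first_option = sorted(options)[0]
    top_value, top_count = Counter(values).most_common(1)[0]
    return top_value == first_option and top_count / len(values) >= threshold

# 80% of recorded cars are "Beige", the first colour alphabetically: suspicious.
colours = ["Beige"] * 8 + ["Red", "Blue"]
print(flag_default_bias(colours, ["Red", "Blue", "Beige", "Green"]))  # True
```

The threshold of 0.5 is arbitrary; the point is that bias introduced by form design is detectable after the fact, but far cheaper to prevent by not pre-selecting a default at all.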


The full Data Black Hole webinar recording can be viewed on YouTube.


Remember to follow Morning Data on LinkedIn.


For more information on the Lloyd’s Market Association, please see their website.
