I was recently interviewed by Daniel O'Donohue for the excellent Mapscaping Podcast. Mapscaping is on my “must-listen” list for tech and business updates so it was a real privilege to be invited onto the show to tell the Addresscloud story. Daniel is a great interviewer; his relaxed style really puts the interviewee at ease and helps draw out the details. In the interview we explored why getting geocoding right is so important for insurers, and how we use precise property location to provide accurate and highly-detailed risk assessments. You can listen to the interview online, and I encourage you to subscribe to the podcast. An abridged version of the Q&A can be found below.
Mark Varley, the CEO and founder of Addresscloud, reminds us that geocoding is not a solved problem and explains why and how inaccuracies during the geocoding process can have consequences for the risk assessment models used by insurance companies. Mark walks us through how and why his company built its own geocoder and why locating and describing addresses with rooftop-level accuracy is the first step in building risk profiles.
Can you tell us about what Addresscloud, your company, is doing and what problems you're solving?
I started Addresscloud five years ago. It was just me for the first couple of years but there are five of us now. We specifically set about helping insurers with geocoding.
In my previous job, I worked at an insurance company that invested a huge amount of money in third party datasets. To get to rooftop level with geocoding required a good system to match addresses on a large scale. The system that we were using at the time was pretty cumbersome. It was an in-house application that we had to host ourselves and manage all of the data updates. This was very challenging. However, as an insurer you need to have really up-to-date information. So if a property is just being built and you want to price it at address level, you need to know where that address is straightaway and be confident that it’s in the right place. That was a real challenge for us. I was looking at the other solutions available out there and thought there was definitely space for a new one. So, I quit my job and built what is now Addresscloud.
The idea behind Addresscloud was to have a kind of Google-like experience that’s really simple to use, a nice interface, really easy to integrate, fully managed and in the cloud. And that's how Addresscloud was born.
There are lots of different geocoders out there. How are you guys different?
There are some good, general purpose geocoding solutions available, doing a fantastic job on a global scale. But when you get within a country, you really need to have specialist knowledge of the way people refer to addresses, which is very much a country-specific problem. That’s where we thought we could differentiate: by having something that was really trained on the UK and Ireland, gaining that extra degree of accuracy.
Our customers then said, "We're really confident in your results, but what else can you tell us about that address?" So our focus over the last two or three years has been on bringing in as many high-quality datasets as we can, linking those to the address, and making that available as an API and a service that’s very quick, scalable and reliable.
There’s an understanding in the industry that location is really important in terms of insurance and assessing risk. What kind of data are you using to make those assessments?
The big risks in the UK tend to be flood risk and fire risk. Historically, quotations were based at the postcode level - typically somewhere between 15 and 100 properties. We wanted to take it down to an address level. So a classic example is where you've got a street that's on a hill with a river at the bottom. If you've got somebody who's living at the bottom of the hill they could be a worse flood risk than their neighbour living only two or three doors away but located uphill. So it's a very different risk profile. What we are doing is using geography and GIS techniques as part of the quote process to make a more accurate risk assessment at the individual property level.
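The hill-and-river example above can be sketched in a few lines. This is a hypothetical illustration, not Addresscloud's actual model: the addresses, elevations, water level and risk bands are all made up, but they show how two neighbours on the same street can fall into different flood bands once you score each address against the modelled water level rather than the whole postcode.

```python
# Hypothetical sketch: the same street, scored per address by ground elevation
# relative to a modelled flood level. All names and numbers are illustrative.

RIVER_LEVEL_M = 10.0  # modelled flood water level for this street (assumed)

properties = [
    {"address": "1 Hill Street", "elevation_m": 10.4},   # bottom of the hill
    {"address": "5 Hill Street", "elevation_m": 13.8},   # a few doors uphill
]

def flood_band(elevation_m: float, water_level_m: float) -> str:
    """Bucket a property by how far it sits above the modelled water level."""
    clearance = elevation_m - water_level_m
    if clearance < 0.5:
        return "high"
    if clearance < 2.0:
        return "medium"
    return "low"

for p in properties:
    p["flood_risk"] = flood_band(p["elevation_m"], RIVER_LEVEL_M)
    print(p["address"], p["flood_risk"])  # neighbours, very different bands
```

A postcode-level quote would have given both houses the same rating; the per-address comparison is what separates them.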
Can you give us an idea of what other kind of data sources you might use?
We use crime statistics and census data from the Office for National Statistics. We bring in flood data from a third party company, called JBA, who are the market leader in the UK for flood risk. They’ve got some really clever software that basically models the terrain and then simulates water flow; understanding where that will build up and where rivers potentially could burst, as well as surface water and coastal overtopping. We also use fire risk data: things like taking building outline information, dissolving adjoining outlines, and working out where fire could spread by simulating those kinds of events as well.
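The "dissolving building outlines" step can be illustrated with a toy example. In a real pipeline the footprints would be polygons handled by a GIS library; here, as an assumption-laden sketch, each building is an axis-aligned rectangle, and buildings whose footprints touch are merged into one block, since fire can spread across a shared wall.

```python
# Hypothetical sketch of dissolving building outlines for fire-spread
# analysis. Rectangles and addresses are illustrative, not real data.

from itertools import combinations

# (min_x, min_y, max_x, max_y) footprints: a short terrace plus one detached house
buildings = {
    "10 High St": (0, 0, 5, 10),
    "12 High St": (5, 0, 10, 10),   # shares a wall with No. 10
    "14 High St": (10, 0, 15, 10),  # shares a wall with No. 12
    "1 Elm Close": (30, 0, 35, 10), # detached
}

def touches(a, b) -> bool:
    """True if two rectangles overlap or share an edge."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

# Union-find to group touching buildings into connected blocks
parent = {name: name for name in buildings}

def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for a, b in combinations(buildings, 2):
    if touches(buildings[a], buildings[b]):
        parent[find(a)] = find(b)

blocks = {}
for name in buildings:
    blocks.setdefault(find(name), []).append(name)

for members in blocks.values():
    print(sorted(members))  # the terrace dissolves into one block
```

The terrace ends up as a single three-building block while the detached house stands alone, which is the property-level detail a fire-spread simulation needs.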
I'm assuming sometimes there must be some edge cases where you can't locate a property. Can you then look at the risk profile of adjoining properties and assume a certain amount of risk based on them?
We recognise that we can't always get to an address level. Often our insurance customers will take a big portfolio of addresses from a broker. Some UK brokers have a real issue with address quality and so we receive some really messy addresses that contain typos and/or incorrect postcodes. We do our best to get them to address level but when we can’t we offer the insurer the option to drill back. For example, if the property is within an apartment block we will know whether it is on the ground floor, the first floor or the top floor. So, from a flood insurance perspective, it’ll have a different risk profile; you might not want to insure a ground floor property for floods, but you might be quite happy to do that for the property that's on the second floor or above. So when we can't get to an address point level, we can drill back to building level or potentially to the postal code level. But the key thing is being confident and flagging up that level of accuracy to the insurer, who might want to then trigger a referral process to go and check out a map – depending on their risk appetite.
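The "drill back" idea can be sketched as a fallback chain. This is an illustrative sketch, not Addresscloud's actual API or data: the lookup tables, function name and coordinates are all hypothetical. The important part is that the result always carries the level of accuracy achieved, so the insurer can trigger a manual referral when the match is too coarse for their risk appetite.

```python
# Illustrative fallback geocoder: try address level, then building level,
# then postcode, and report which level matched. All data is hypothetical.

from typing import Optional

ADDRESS_POINTS = {"flat 2, 10 river road, ab1 2cd": (51.5010, -0.1420)}
BUILDINGS = {"10 river road, ab1 2cd": (51.5012, -0.1421)}
POSTCODES = {"ab1 2cd": (51.5020, -0.1430)}

def geocode(address: str, building: str, postcode: str) -> Optional[dict]:
    """Return the best available match plus the level of accuracy achieved."""
    for level, table, key in (
        ("address", ADDRESS_POINTS, address),
        ("building", BUILDINGS, building),
        ("postcode", POSTCODES, postcode),
    ):
        coords = table.get(key.lower())
        if coords is not None:
            return {"level": level, "coords": coords}
    return None  # nothing matched: flag for manual review

# A flat we don't have an address point for drills back to the building
result = geocode("Flat 9, 10 River Road, AB1 2CD", "10 River Road, AB1 2CD", "AB1 2CD")
print(result["level"])  # prints "building"
```

An insurer consuming this might auto-accept `"address"` matches, apply floor-level rules at `"building"` level, and route `"postcode"` or `None` results to a referral queue.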
Would you mind telling us about accumulations?
Ultimately, most insurers’ biggest cost is their reinsurance. So if you imagine these big insurance groups, where they might own many different insurance brands and different kinds of subsidiaries, often all writing business and competing with one another, they may not know if they are all insuring the same building.
So accumulation is basically understanding and managing where you're insuring. For example, do you have too much exposure in one place, and could that potentially introduce a risk to the company? So, accumulation management is really about the process of having good quality data, cleansing that data, plotting it on a map, and then looking to see where your hotspots are, where you have too much risk and how you can potentially mitigate that risk.
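A minimal sketch of hotspot detection, under stated assumptions: geocoded policies are snapped to a coarse grid, sums insured are accumulated per cell, and any cell whose total breaches the appetite limit is flagged. The grid size, limit, coordinates and values are all illustrative, and a real accumulation tool would use proper geographic binning rather than a degree grid.

```python
# Hypothetical accumulation check: sum insured values per grid cell and
# flag cells that exceed an appetite limit. All figures are illustrative.

from collections import defaultdict

policies = [
    {"lat": 51.5010, "lon": -0.1420, "sum_insured": 400_000},
    {"lat": 51.5012, "lon": -0.1418, "sum_insured": 350_000},  # same cell as above
    {"lat": 51.5200, "lon": -0.1000, "sum_insured": 300_000},
]

CELL = 0.01          # grid resolution in degrees (assumed)
LIMIT = 500_000      # maximum exposure per cell (assumed risk appetite)

def cell_of(lat: float, lon: float) -> tuple:
    """Snap a point to its grid cell."""
    return (round(lat // CELL), round(lon // CELL))

exposure = defaultdict(int)
for p in policies:
    exposure[cell_of(p["lat"], p["lon"])] += p["sum_insured"]

hotspots = {cell: total for cell, total in exposure.items() if total > LIMIT}
print(hotspots)  # cells where accumulated exposure breaches the limit
```

Here two policies land in the same cell and jointly breach the limit, even though neither does on its own, which is exactly the situation a group with several competing brands might otherwise miss.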
Do insurers get a separate risk profile for things like flooding, fire, crime or other hazards out there in the world, or is it just an overall weighted risk profile?
We give a full breakdown. At any given address, we may have anywhere from 100 to 200 different attributes, and we’ll break that down by the different kinds of risk as opposed to coming up with a black box magic score. What we typically deliver is a very detailed assessment, which the insurer then plugs into their own algorithm to work out a price for the customer.
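The split between a per-peril breakdown and a black-box score can be made concrete with a toy example. This is not the real Addresscloud schema; the field names, bands and pricing formula are hypothetical. The point is the division of labour: the data provider supplies per-hazard attributes, and the insurer owns the weighting and pricing.

```python
# Illustrative per-peril breakdown (hypothetical schema) and a toy
# insurer-side pricing function that consumes it.

assessment = {
    "address": "10 River Road, AB1 2CD",   # hypothetical
    "flood": {"river_band": "medium", "surface_water_band": "low"},
    "fire": {"attached_buildings": 3, "spread_block_size_m2": 450},
    "crime": {"burglary_rate_band": "low"},
}

def price(assessment: dict, weights: dict) -> float:
    """Toy pricing: the insurer, not the data provider, owns this logic."""
    bands = {"low": 1, "medium": 2, "high": 3}
    score = (
        weights["flood"] * bands[assessment["flood"]["river_band"]]
        + weights["crime"] * bands[assessment["crime"]["burglary_rate_band"]]
    )
    return 150.0 + 40.0 * score  # base premium plus weighted loading (illustrative)

print(price(assessment, {"flood": 1.5, "crime": 0.5}))  # prints 290.0
```

Because every attribute is exposed rather than folded into one opaque score, two insurers with different risk appetites can price the same address differently from the same data.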
If people want to reach out to you, what's the best way for them to do that?