“We need it to scale effortlessly and always be on”. These were the now-hallowed words Mark used to define our requirements for Addresscloud’s new architecture. It was 2018 and we knew that we needed to re-architect Addresscloud to support more requests, new services and bigger data. Two days later, sitting at Mark’s kitchen table, surrounded by laptops, hand-drawn flowcharts and programming books, we had something that resembled a plan, and it was built entirely on serverless technology.

We had made the decision to rebuild Addresscloud with serverless architecture. We'd seen other sectors benefit from serverless delivery and were confident that we could do the same for insurance and property tech. Three years on, our service infrastructure is 100% serverless. But what’s the advantage of serverless architecture, and does it benefit our customers?

The answer is quality.

Serverless enables us to provide our geocoding, risk assessment, and property intelligence services with consistency, reliability and speed. It makes our solution scalable, allowing us to provide every customer with an extremely high quality of service no matter their size or transaction volume.

In a non-serverless cloud architecture we would be responsible for the physical machines or virtual containers hosting our services and responding to our customers' requests. We'd be responsible for adding more resources in response to demand, as well as the management and maintenance of this infrastructure. Serverless re-organises this: the business logic stays with us, while infrastructure management is outsourced to cloud providers such as AWS or Google. The result is an architecture where infrastructure at every level automatically scales up and down in response to customer requests. The cloud provider is responsible for ensuring that every request is provisioned with appropriate resources to return an answer to the customer within our SLA.
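To make that concrete, here is a minimal sketch of what a serverless endpoint looks like in practice. The function name, request fields and response shape below are illustrative only (not Addresscloud's actual API): the cloud provider invokes the handler once per request and handles all provisioning and scaling around it.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler behind an API Gateway route.

    The provider provisions compute for each incoming request and
    scales automatically; we only supply the business logic here.
    """
    params = event.get("queryStringParameters") or {}
    postcode = params.get("postcode", "")
    # A real geocoding lookup would run here (illustrative placeholder).
    result = {"postcode": postcode, "matched": bool(postcode)}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```

There is no server process to manage in this model: the handler is stateless, so the provider can run as many copies in parallel as the traffic requires.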

We developed our serverless architecture by working down our stack, replacing non-serverless tech with serverless alternatives one level at a time; this let us test each layer and monitor changes in performance between the old and new infrastructure. By Spring 2020 we’d migrated our production tech services to serverless infrastructure. Using this architecture we were able to extend our data model - during the pandemic we supported Riverford Organics to deliver food to 80,000 British homes each week. Work continued into 2021 when we built an award-winning solution with Compare The Market to pre-fill all the aggregator’s residential property questions in real-time, significantly improving the customer journey. By this time, we were happily powering c. 10 million transactions a month with no server room in sight.

The true test

We knew that “standing on the shoulders of giants” could enable us to provide exceptional services to our customers. But there was always a worry that we might reach an unknown limit. Our tests, simulations and observations showed how our system might behave under different load patterns. What if a customer needed to quickly scale up their transaction volume outside their normal usage pattern? How would our quality of service change if we doubled our transactions in a day? And then doubled them again the next day? Could we still have the same confidence that our systems would provide the same quality of service for every customer?

The true test came in January 2021 when Addresscloud announced its partnership with Flood Re. During April 2022 we onboarded one insurer a week to the Flood Re Property Data Hub, powered by Addresscloud. With regular batches and monitoring we watched our traffic grow to 400% of our previous volumes, while our latency stayed the same.

The Flood Re Property Data Hub provides all UK residential property insurers with the capability to verify whether a property’s flood risk is eligible to be ceded to the reinsurance scheme. All the major insurers built API integrations with our service to automatically determine whether risks could be ceded.

Prior to this we had experience working with Flood Re data for one of our customers, so we were able to directly leverage our existing stack, with a simple API Gateway extension providing a backwards-compatible interface to the new Flood Re Property Data Hub service. This meant that insurers were able to consume the new service without any application logic changes and could move from testing to deployment at speed. We’re now doing our previous monthly volume of transactions every week, with no architectural changes. Our existing customers continue to receive the same quality of service without change.
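In outline, an extension like this is just a thin translation layer in front of the existing stack. The sketch below is hedged and purely illustrative - the field names, function names and eligibility logic are invented for the example, not taken from the real service - but it shows the pattern: a new route accepts the legacy request shape, calls the new logic, and re-shapes the answer so existing clients need no code changes.

```python
import json

def legacy_adapter(event, context):
    """Hypothetical handler behind a new API Gateway route that keeps
    the old request/response contract while calling new service logic."""
    body = json.loads(event.get("body") or "{}")
    # Translate the legacy field name to the new service's parameter
    # (`property_id` and `uprn` are invented names for this sketch).
    uprn = body.get("property_id")
    new_response = check_cede_eligibility(uprn)
    # Re-shape the new response into the fields legacy clients expect.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "property_id": uprn,
            "eligible": new_response["can_cede"],
        }),
    }

def check_cede_eligibility(uprn):
    # Stand-in for the real downstream lookup (illustrative only).
    return {"can_cede": uprn is not None}
```

Because the translation lives at the gateway layer rather than in the core services, the underlying stack serves both old and new consumers unchanged.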

In summary, we re-architected our application to use well-established serverless technology and development patterns, which enabled us to onboard the entire UK residential property market in a matter of days, without a single millisecond change in latency. Next up we’re looking at our system resilience, avoiding single points of failure, and taking advantage of the more distributed architecture that serverless affords. This will enable us to further reduce latency by deploying resources closer to our customers and maintain operational capacity even in the event of large-scale internet outages or regional disruption.

More information:

For a more detailed overview of our architecture see our paper from GIS Research UK 2021 Conference.