Metail is a fashion tech startup whose goal is to reduce the cost and improve the efficiency of a retailer's garment photography process, and to give consumers confidence in the clothes they buy online. By allowing customers to try on clothes online, on their own body shape, we have been able to collect a unique data set of customers' clothes shopping habits.

Metail's analytics platform, now four years old, drives our data science products, as well as internal and external dashboards giving a summarised view of key business metrics. The pipeline is based on the ideas in Nathan Marz's lambda architecture (http://lambda-architecture.net/) and uses the Snowplow Analytics (https://snowplowanalytics.com/) pipeline as a foundation for our event tracking, collection, and first-pass processing. From the start the pipeline has been implemented in [Clojure](https://clojure.org/): we use it to connect our pipeline stages, and its big data libraries are the workhorses of our raw event processing and aggregation.

This talk will show how Gareth and his team used Clojure to provide a solid platform for connecting and managing our AWS-hosted analytics pipeline, and the pitfalls they encountered along the way. Gareth will also talk about some of the difficulties they are currently experiencing and how these are being resolved. He will cover our use of Spark jobs implemented in Clojure, and how these feed our Redshift cluster, which takes advantage of AWS's new Redshift Spectrum technology to build an S3-based data lake.
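To give a flavour of the Spark-in-Clojure style the talk covers, here is a minimal sketch of a Spark aggregation job, written against the open-source Sparkling library (https://github.com/gorillalabs/sparkling). The bucket path, event layout, and counting logic are hypothetical illustrations, not Metail's actual jobs.

```clojure
(ns example.event-counts
  (:require [clojure.string :as str]
            [sparkling.conf :as conf]
            [sparkling.core :as spark]))

;; Count events per event type from tab-separated enriched events
;; stored in S3. In a real deployment the namespace would be
;; AOT-compiled and Sparkling's Kryo serialization configured so the
;; anonymous functions can be shipped to the executors.
(defn -main [& _]
  (spark/with-context sc (-> (conf/spark-conf)
                             (conf/master "local[*]")
                             (conf/app-name "event-counts"))
    (->> (spark/text-file sc "s3a://example-bucket/enriched/")
         ;; assume the event type is the first tab-separated field
         (spark/map-to-pair (fn [line]
                              (spark/tuple (first (str/split line #"\t")) 1)))
         (spark/reduce-by-key +)
         (spark/collect)
         (run! println))))
```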
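As a rough illustration of what "an S3-based data lake" means in practice with Redshift Spectrum, the sketch below issues Spectrum DDL from Clojure via clojure.java.jdbc. The cluster endpoint, credentials, IAM role, schema, table, and column names are all hypothetical.

```clojure
(ns example.spectrum-setup
  (:require [clojure.java.jdbc :as jdbc]))

;; Hypothetical connection map; Redshift speaks the Postgres wire
;; protocol, so the stock PostgreSQL JDBC driver suffices for a sketch.
(def redshift-db
  {:dbtype   "postgresql"
   :host     "example-cluster.abc123.eu-west-1.redshift.amazonaws.com"
   :port     5439
   :dbname   "analytics"
   :user     "analyst"
   :password "example-password"})

(defn create-spectrum-schema!
  "Register an external (Spectrum) schema backed by the AWS data
  catalog. Identifiers and the IAM role ARN are examples only."
  []
  (jdbc/execute! redshift-db
                 [(str "CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum "
                       "FROM DATA CATALOG DATABASE 'datalake' "
                       "IAM_ROLE 'arn:aws:iam::123456789012:role/spectrum-role' "
                       "CREATE EXTERNAL DATABASE IF NOT EXISTS")]
                 ;; Redshift external DDL cannot run inside a transaction
                 {:transaction? false}))

(defn create-events-table!
  "Define an external table over Parquet files in S3; Redshift then
  queries them in place, which is what makes S3 the data lake."
  []
  (jdbc/execute! redshift-db
                 [(str "CREATE EXTERNAL TABLE spectrum.events ("
                       "  event_id    varchar(64),"
                       "  event_type  varchar(128),"
                       "  occurred_at timestamp"
                       ") STORED AS PARQUET "
                       "LOCATION 's3://example-bucket/events/'")]
                 {:transaction? false}))
```

Once the external table exists, it can be joined against ordinary Redshift tables in plain SQL, so hot aggregates stay in the cluster while the full event history remains cheaply in S3.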