A Time Series Anomaly Detection Model for All Types of Time Series


Imagine you own a website and sell your products online. Unfortunately, for your recent product release, you made a mistake and put a ridiculously low price for your products. You are a busy person, of course, and you did not realize that there was a pricing error. Yet, when people discover this “crazy deal”, there will likely be an enormous increase in traffic on your website. If you don’t correct the error fast, you could end up with huge losses, just like Amazon’s pricing error in the 2019 Prime Day event. However, if you have a tool to monitor the number of clicks of the checkout or the add to cart buttons on your website, you could detect the unusually high demand for the product in time to take corrective actions and save your job!


Before I go further into my thought process of modeling, let me clarify the main challenge associated with this project: the anomaly detection model should work well with any arbitrary time series generated by metrics chosen by clients.

Understanding the current model and exploring options

The first logical thing to do when there is an existing model is to understand how the model performs and to tune its parameters to improve its performance. According to the article about Prophet on the Facebook Research blog, its modeling approach is described as:

Getting creative: Double-checking system

Given the difficulties of working within the existing modeling framework currently used by Lazy Lantern, I started from the top again (don't feel bad for me; I gained so many insights from my previous attempts). While I was talking to one of my fellow Insight Fellows about my struggle, she suggested the idea of a double-checking system. So I started searching for a simple, intuitive anomaly detection method that could work together with the current model as an attachment. I ended up implementing a low-pass filter (LPF) using a moving average.

A moving average (rolling mean) takes the average of a sliding subset of the full dataset, while a low-pass filter passes signals whose frequency is below a fixed threshold and attenuates those above it. The idea here is that each newly observed point is compared to the average of the past observations in a fixed window of time (μ), and if the new point is far from μ, with the distance measured in units of the moving standard deviation (σ), it is flagged as an outlier (anomaly).
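To make the idea concrete, here is a minimal sketch of this moving-average check in plain NumPy. The function name, window size, and the three-sigma threshold are my own illustrative choices, not details from the Lazy Lantern system:

```python
import numpy as np

def rolling_anomalies(series, window=30, n_sigmas=3.0):
    """Return indices of points lying more than n_sigmas moving standard
    deviations (sigma) away from the moving average (mu) of the
    preceding `window` observations."""
    series = np.asarray(series, dtype=float)
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]          # fixed window of past observations
        mu, sigma = past.mean(), past.std()
        # Guard against a zero sigma (perfectly flat window)
        if sigma > 0 and abs(series[i] - mu) > n_sigmas * sigma:
            anomalies.append(i)
    return anomalies

# Example: a noisy series with one injected spike
rng = np.random.default_rng(0)
signal = rng.normal(10.0, 1.0, 120)
signal[60] += 20.0                           # the "anomaly"
print(rolling_anomalies(signal))
```

Because each point is judged only against observations that came before it, the check can run online as new data arrives, which is what makes it attractive as an attachment to an existing model.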

Almost there, but not quite.

You may wonder whether the window size, w, changes the results, and why a fixed multiple of σ is chosen as the standard distance for deciding what counts as an outlier.
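Both choices matter, and a quick experiment makes that easy to see. The sketch below, using pandas rolling statistics, compares a short and a long window on the same synthetic series; the three-sigma multiple is the common rule-of-thumb convention, used here only as an assumed default:

```python
import numpy as np
import pandas as pd

def rolling_flags(s, window, n_sigmas=3.0):
    """Boolean mask of points more than n_sigmas rolling standard
    deviations away from the rolling mean of the preceding window."""
    mu = s.rolling(window).mean().shift(1)      # statistics from past points only
    sigma = s.rolling(window).std().shift(1)
    return (s - mu).abs() > n_sigmas * sigma

# Synthetic series with one injected spike at index 150
rng = np.random.default_rng(1)
s = pd.Series(rng.normal(0.0, 1.0, 200))
s.iloc[150] += 8.0

for w in (10, 60):
    flags = rolling_flags(s, w)
    print(f"window={w}: {int(flags.sum())} points flagged")
```

A short window adapts quickly to level shifts but produces noisier estimates of μ and σ, so it tends to raise more false alarms; a long window is more stable but slower to react. Tuning w is therefore a trade-off between sensitivity and robustness.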

