
Understanding the local outlier factor (LOF) algorithm

This article explains the well-known LOF algorithm. We provide intuition for density-based outlier detection, show the problems inherent to this task, and then take a look at how LOF solves them.

Motivation

Outlier detection is important for many real-world applications - typical examples include fraud detection, network intrusion detection, and the detection of faulty sensor data. Although there is no "real" definition of what constitutes an outlier, the well-established definition is due to Hawkins [1]:

[...] an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism.

How does one go about finding these observations? The literature distinguishes different models for detecting outliers [2]: extreme values, clustering models, distance-based models, density-based models, probabilistic models, and information-theoretic models. For our purposes, distance-based and density-based models are the most relevant; note that the two are closely linked.

Intuition

The original paper [3] introduced the local outlier factor (LOF) as a density-based model. Going with the disambiguation from [2], this means that "the local density of a data point is used to define its outlier score". Let's visualize this:

[Figure: outliers depend on local density]

Looking at the figure, we expect $O_1$ and $O_2$ to have the highest outlier scores - even though $O_1$ has the same distance from the cluster in the bottom left as the points in the cluster on the top right have to each other.

The idea of LOF is to look at the neighborhood of each point individually and to compute its "outlierness" based on its neighbors. To this end, one quantifies the difference between the density of the point and the density of its neighboring points. Put more bluntly: imagine living in a villa in central Manhattan - then you are probably an outlier, because Manhattan is a high-density area but you are in a less dense spot. Now, imagine living in a rural area. Here you are also in a less dense spot, but so are all of your neighbors, and you are not an outlier.

Formal Definition

Let's get to the definitions of the paper and see how they map to our intuition of density-based outlier detection. All of the following definitions are repeated in their original form from [3]. Note that $D$ refers to the full data set.

k-distance of an object p: For any positive integer $k$, the $k$-distance of object $p$, denoted as $k\text{-distance}(p)$, is defined as the distance $d(p,o)$ between $p$ and an object $o \in D$ such that:

  1. for at least $k$ objects $o' \in D \setminus \{p\}$ it holds that $d(p,o') \leq d(p,o)$, and
  2. for at most $k-1$ objects $o' \in D \setminus \{p\}$ it holds that $d(p,o') < d(p,o)$.

Put differently, the k-distance of an object is the distance to its $k$-th nearest object. Or, starting from an object $p$, we compute the distances to all objects $o \in D \setminus \{p\}$, sort them in increasing order, and take the $k$-th element of this list.

k-distance neighborhood of an object p: Given the k-distance of $p$, the k-distance neighborhood of $p$ contains every object whose distance from $p$ is not greater than the k-distance, i.e., $N_{k\text{-distance}(p)}(p) = \left\{q \in D \setminus \{p\} : d(p,q) \leq k\text{-distance}(p) \right\}$.

Let's illustrate this with an example. Suppose we have five data points in our data set $D = \{o_1, \dots, o_5\}$ and we are interested in the 3-distance of $o_1$ as well as its neighborhood. We compute $d(o_1, o_n)$ for all $n = 2, \dots, 5$ and order the results. Assume these are the distances:

$$
\begin{aligned}
d(o_1,o_2) &= 0.2\\
d(o_1,o_3) &= 4.0\\
d(o_1,o_4) &= 0.5\\
d(o_1,o_5) &= 0.5
\end{aligned}
$$

Then the 3-distance of $o_1$ is 0.5 and its neighborhood contains the three objects $o_2$, $o_4$, and $o_5$.
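To make this concrete, here is a minimal Python sketch of the worked example; the object names and the dictionary layout are illustrative only.

```python
# Distances from o1 to the other objects, as in the example above.
distances = {"o2": 0.2, "o3": 4.0, "o4": 0.5, "o5": 0.5}
k = 3

# k-distance: the k-th smallest distance from o1 to any other object.
k_distance = sorted(distances.values())[k - 1]

# k-distance neighborhood: every object at distance <= k-distance.
neighborhood = [o for o, d in distances.items() if d <= k_distance]

print(k_distance)    # 0.5
print(neighborhood)  # ['o2', 'o4', 'o5']
```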

In Euclidean space, think of this as putting a (hyper-)sphere with its center at $o_1$ and increasing the radius until at least $k$ elements are in it. As distances can be equal, there might be more than $k$ elements in the sphere.

reachability distance of an object p w.r.t. object o: Let $k$ be a natural number. The reachability distance of object $p$ w.r.t. $o$ is defined as

$$\text{reach-dist}_k(p,o) = \max \{ k\text{-distance}(o),\, d(p,o) \}$$

This is used to smooth the density estimation: for an object $p$ far away from $o$, the reachability distance is simply the actual distance $d(p,o)$. For an object $p$ close to $o$, i.e., within the k-distance neighborhood of $o$, the actual distance is replaced by the k-distance of $o$.

Please note that the reach-dist is not a proper distance measure, as it is not symmetric.
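A tiny numeric illustration of this asymmetry (all values are made up): suppose $d(p,o) = 1.0$, the k-distance of $o$ is $2.0$, and the k-distance of $p$ is $0.5$.

```python
def reach_dist(d_po, k_distance_o):
    # reach-dist_k(p, o) = max(k-distance(o), d(p, o))
    return max(k_distance_o, d_po)

# p w.r.t. o uses the k-distance of o; o w.r.t. p uses the k-distance of p.
print(reach_dist(1.0, 2.0))  # 2.0 = reach-dist_k(p, o)
print(reach_dist(1.0, 0.5))  # 1.0 = reach-dist_k(o, p)
```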

local reachability density of an object p: The local reachability density of p is defined as

$$lrd_{MinPts}(p) = \frac{|N_{MinPts}(p)|}{\sum_{o \in N_{MinPts}(p)} \text{reach-dist}_{MinPts}(p,o)}.$$

Here, MinPts "replaces" the parameter $k$ and we look at the MinPts-neighborhood of $p$.

In general, a density is mass per volume; this definition implicitly assumes each element in the neighborhood of $p$ to have unit mass. "The volume" is given by the sum of all reachability distances of the elements in the neighborhood of the point.
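As a sketch, the local reachability density of a point can be computed directly from the reachability distances to its neighbors (the input list below is assumed to be given).

```python
def lrd(reach_dists_to_neighbors):
    # lrd(p) = |N(p)| / sum of reach-dist(p, o) over the neighbors o of p
    return len(reach_dists_to_neighbors) / sum(reach_dists_to_neighbors)

# Three neighbors, each at reachability distance 0.5:
print(lrd([0.5, 0.5, 0.5]))  # 2.0
```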

With these preliminaries done, we can finally define the local outlier factor:

(local) outlier factor of an object p: The local outlier factor of p is defined as

$$LOF_{MinPts}(p) = \frac{\sum_{o \in N_{MinPts}(p)} \frac{lrd_{MinPts}(o)}{lrd_{MinPts}(p)}}{|N_{MinPts}(p)|}.$$

We can actually rewrite this formula slightly,

$$
\begin{aligned}
\frac{\sum_{o \in N_{MinPts}(p)} \frac{lrd_{MinPts}(o)}{lrd_{MinPts}(p)}}{|N_{MinPts}(p)|}
&= \frac{\sum_{o \in N_{MinPts}(p)} lrd_{MinPts}(o)}{lrd_{MinPts}(p)\,|N_{MinPts}(p)|}\\
&= \frac{\sum_{o \in N_{MinPts}(p)} lrd_{MinPts}(o)}{\frac{|N_{MinPts}(p)|}{\sum_{o \in N_{MinPts}(p)} \text{reach-dist}_{MinPts}(p,o)}\,|N_{MinPts}(p)|}\\
&= \frac{\sum_{o \in N_{MinPts}(p)} lrd_{MinPts}(o) \sum_{o \in N_{MinPts}(p)} \text{reach-dist}_{MinPts}(p,o)}{|N_{MinPts}(p)|^2},
\end{aligned}
$$

so the local outlier factor of a point is the average local reachability density of its neighbors times the average reachability distance to these neighbors.

Based on this last formula, we see that a point has a large local outlier factor if its neighbors lie in dense areas (then the first sum becomes large) and if these neighbors are far away, i.e., the point has no close neighbors of its own. It turns out that this notion captures the locality aspect of outlier detection surprisingly well.
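Putting the definitions together, here is a compact NumPy sketch of the whole computation. It uses brute-force pairwise distances and takes exactly k neighbors per point, which slightly simplifies the paper's handling of ties in the k-distance neighborhood.

```python
import numpy as np

def lof_scores(X, k=3):
    # Plain NumPy LOF sketch for small data sets (O(n^2) distance matrix).
    n = len(X)
    # Pairwise Euclidean distances; exclude each point from its own neighborhood.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)

    # Indices of the k nearest neighbors and the k-distance of each point.
    knn = np.argsort(D, axis=1)[:, :k]
    k_dist = D[np.arange(n), knn[:, -1]]

    # reach-dist_k(p, o) = max(k-distance(o), d(p, o)) for each neighbor o of p.
    reach = np.maximum(k_dist[knn], D[np.arange(n)[:, None], knn])

    # lrd(p) = k / sum of reachability distances to the neighbors of p.
    lrd = k / reach.sum(axis=1)

    # LOF(p) = average of lrd(o) / lrd(p) over the neighbors o of p.
    return (lrd[knn] / lrd[:, None]).mean(axis=1)

# A dense cluster, a sparser cluster, and one isolated point (last row).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [5.0, 5.0], [6.0, 5.0], [5.0, 6.0], [6.0, 6.0],
              [2.5, 8.0]])
print(np.round(lof_scores(X, k=3), 2))  # the isolated point gets the largest score
```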

Discussion

The main algorithmic difficulty of LOF is finding the neighbors of each point in the data set. The scikit-learn implementation mitigates this problem by constructing a ball tree or a k-d tree. To speed up the computation further, one could use a more approximate approach: instead of looking at a specific neighborhood, one looks at a so-called "$\epsilon$-neighborhood" and thus allows a controllable error. See, e.g., [4].
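For reference, this is how the scikit-learn implementation might be used on the toy data from the sketch above; n_neighbors plays the role of MinPts.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [5.0, 5.0], [6.0, 5.0], [5.0, 6.0], [6.0, 6.0],
              [2.5, 8.0]])

lof = LocalOutlierFactor(n_neighbors=3)
labels = lof.fit_predict(X)               # -1 marks predicted outliers, 1 inliers
scores = -lof.negative_outlier_factor_    # the LOF values themselves
print(labels)
print(np.round(scores, 2))
```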

The main difficulty from a user's point of view is setting the parameter k appropriately - the best value of k depends on the data set, and the algorithm is quite sensitive to this parameter.

References


  1. D. Hawkins. Identification of Outliers. Springer Netherlands, 1980.

  2. C. C. Aggarwal. Outlier Analysis. Springer International Publishing, 2nd ed., 2017.

  3. M. M. Breunig et al. LOF: identifying density-based local outliers. Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (New York, NY, USA, May 2000), 93–104.

  4. P. Ram and K. Sinha. Revisiting kd-tree for Nearest Neighbor Search. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (Anchorage, AK, USA, July 2019), 1378–1388.
