Wednesday, December 28, 2011

DADS: Massaging Data

I've decided that "Diary of an Anomaly Detection System" is too wordy to keep writing in the title of the posts in this series, so I'm shortening it to "DADS", hence the title of this post, "DADS: Massaging Data".

Anywho, as I said in the previous post, I'm going to talk a bit about what I needed to do to get my data ready for the anomaly detection algorithm. This post has nothing to do with machine learning, per se, but is an important part of designing an ML algorithm.

I'm going to use six metrics ("features" in ML parlance) to start with: short-, medium-, and long-term load averages; memory use; number of processes; and the number of zombies. You can argue whether or not these are useful metrics, but I'm not interested in that argument at this point. I'm currently building the framework for the ML algo; I'll be adding, subtracting, and inventing metrics once I have something to manipulate them with.

I'm using Python since that is one of the scripting languages of choice at my day job; Perl, unfortunately, is frowned upon and the consensus is Ruby can't do scientific programming just yet. Don't even get me started with Java.

Let's read some data

The data originally resides in RRDtool and needs to be put into a standard matrix form. Shouldn't be that difficult, right? RRDtool has a Python interface, so it's just a matter of reading the data in, right?  I wish! The RRDtool Python API is essentially a wrapper around the command-line tool, but the output is "Python-esque".  For example, the CLI output for the load average looks like this:

[faber@fabers-desktop data] rrdtool fetch load/load.rrd  AVERAGE -s 1321160400

                      shortterm midterm longterm

1321164000: 5.3888888889e-03 1.2805555556e-02 5.0000000000e-02
1321167600: 3.0555555556e-03 1.1388888889e-02 5.0000000000e-02
1321171200: 3.7500000000e-03 1.1861111111e-02 5.0000000000e-02
...


where the first column is the number of seconds from the epoch and the three remaining columns are short-, medium-, and long-term load averages; a very handy format. Unfortunately, the Python output looks like this:

>>> mydata = rrdtool.fetch('load/load.rrd', 'AVERAGE', '--start=1321160400')
>>> mydata
((1321160400, 1325098800, 3600), ('shortterm', 'midterm', 'longterm'), [(0.005388888888888891, 0.012805555555555468, 0.05000000000000019), (0.0030555555555555557, 0.011388888888888818, 0.05000000000000019), (0.0037500000000000016, 0.011861111111111041, 0.0500000000000002), ...])


which is not a very handy format. For reasons which I'll get into later, I want the format to be this:

shortterm = ((1321164000, 5.3888888889e-03),
             (1321167600, 3.0555555556e-03),
             (1321171200, 3.7500000000e-03),
             ...
            )

midterm = ((1321164000, 1.2805555556e-02), (1321167600, 1.1388888889e-02), (1321171200, 1.1861111111e-02), ...)

longterm = ((1321164000, 5.0000000000e-02), (1321167600, 5.0000000000e-02), (1321171200, 5.0000000000e-02), ...)


So the next step is to format the data.

List Comprehensions to the Rescue

I've always thought Python was just an okay language but its list comprehensions are kinda cute.  It wasn't until this project that I found out just how useful they are.  Here's the blow-by-blow action:

# mydata[0] = timestamp begin, end, and interval
# mydata[1] = labels
# mydata[2] = list of 3-tuples
mydata = rrdtool.fetch('load/load.rrd', 'AVERAGE', '--start=1321160400')

# create a list of timestamps at the appropriate intervals
# (the first data row is stamped one interval after the requested start, as in the
#  CLI output above, so shift the range by one step to line the timestamps up)
tses = [ i for i in range(mydata[0][0] + mydata[0][2], mydata[0][1] + mydata[0][2], mydata[0][2]) ]

# create three lists from the 3-tuple list
st, mt, lt = zip(*mydata[2])

mydict = {}
mydict['shortterm'] = zip(tses, st)
mydict['midterm'] = zip(tses, mt)
mydict['longterm'] = zip(tses, lt)


Seven lines of code. I don't know about you, but I'm impressed when a language allows me to do that with native functions.
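
Just to sanity-check (this is Python 2, where zip() hands back a list of tuples), the dictionary now holds exactly the (timestamp, value) pairs I was after:

>>> mydict['shortterm'][:2]
[(1321164000, 0.005388888888888891), (1321167600, 0.0030555555555555557)]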

So what's with the key/value format?

There's a subtle problem with the raw data that's not obvious until you start reading in other RRDtool files and try munging them together: you don't always have data for all the same timestamps. memory.rrd might have data for timestamps t1 and t2 while load.rrd might have data for t2 and t3. How do you manage your lists so that you don't duplicate timestamps (two t2s in the above case) AND fill in values for data you don't have and don't know you don't have? Easy:

SQL.

I'm going to store my data in an SQLite3 database and then generate a matrix from the database table. If I do my SQL correctly (and I will :-), SQLite3 will fill in missing data and order everything by timestamp, and I won't have to keep track of values or timestamps across rrd files! This is why I break every metric.rrd file into a (timestamp, value) data structure and put it into a dictionary entry called mydict['metric']: so I can easily insert and update that metric's column in the database!
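
To give a flavor of what I mean, here's a rough sketch only, not the real code; the metrics.db file name and the metrics table layout are invented for illustration:

import sqlite3

conn = sqlite3.connect('metrics.db')
cur = conn.cursor()

# one row per timestamp, one column per metric; readings I don't have stay NULL
cur.execute("""CREATE TABLE IF NOT EXISTS metrics (
                   timestamp INTEGER PRIMARY KEY,
                   shortterm REAL, midterm REAL, longterm REAL,
                   memory REAL, processes REAL, zombies REAL)""")

def store(metric, pairs):
    # insert/update one metric's (timestamp, value) pairs
    for ts, value in pairs:
        # make sure a row exists for this timestamp (the primary key stops duplicates)...
        cur.execute("INSERT OR IGNORE INTO metrics (timestamp) VALUES (?)", (ts,))
        # ...then fill in just this metric's column; the column name comes from my own
        # dictionary keys, so interpolating it into the SQL is safe enough here
        cur.execute("UPDATE metrics SET %s = ? WHERE timestamp = ?" % metric, (value, ts))

for metric, pairs in mydict.items():
    store(metric, pairs)
conn.commit()

# timestamps come back ordered and de-duplicated; missing readings show up as NULL
rows = cur.execute("SELECT * FROM metrics ORDER BY timestamp").fetchall()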

How that is actually done, I'll talk about in the next post since it's late.


Monday, December 26, 2011

Diary of an Anomaly Detection System

I recently finished Stanford's Machine Learning course offered as part of SEE. It was one of the best courses I've ever taken. Not only did I learn a lot about ML algorithms, but I learned a lot about the applications thereof as well as some math applications.

One of the algorithms that grabbed my attention was "anomaly detection"; this is the algorithm credit card companies use to flag possibly fraudulent activity. It can also be used for monitoring computers and, I believe, web pages, but more about that later.

Which brings us to this series of blog posts. Since I'm on winter break, I decided to spend part of my time designing, coding, and blogging about building an anomaly detection system.

The Motivation


My current and on-going project at work is called Sentry, a system that processes URLs. The system involves 23 (and counting) virtual machines and four physical machines, so obviously system administration is a not-insubstantial part of the project.

Currently, the overall health of the system can be measured by throughput; I use collectd and RRDtool to monitor the system. If the throughput is too low, I know something is wrong with one or more of the machines, but that doesn't tell me which machine(s) is having a problem. So I look at the CPU load graph for each machine to see if anything looks odd; if not, I look at the network transfers graph for each machine to see if anything looks off; if not, then I... Since there are 27 machines, each with over a dozen metrics, this is a chore, as you can imagine.

Anomaly detection will reduce the dozen-plus metrics for each computer down to a range of numbers that we consider "normal behavior" and, by extension, the health of all 27-plus machines down to a range of numbers. So how do we define the range of numbers?


All of my metrics measure different things: load average, memory use, etc. If I plot the range of numbers over time I'll get different-looking graphs, but for now, assume all the graphs are Gaussian. What I'm going to do is take each metric's numbers and figure out which Gaussian curve (in other words, which values of μ and σ) best fits the data. I'll then take the latest reading for, say, memory use and see where on the curve it sits (since I know the equation for the Gaussian distribution function); call that value p(memory). For any value of p(memory) less than some threshold ε, I'll tell the computer to flag that reading as an anomaly: an unlikely reading is an anomalous one.
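
Here's roughly what that per-metric piece looks like in code (just a sketch with numpy; the memory-use numbers and the value of ε are made up for illustration):

import numpy as np

def fit_gaussian(history):
    # estimate mu and sigma for one metric from its past readings
    history = np.asarray(history, dtype=float)
    return history.mean(), history.std()

def p(x, mu, sigma):
    # Gaussian probability density of a single reading x
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

# made-up stand-ins: some past memory-use readings (bytes) plus the latest one
memory_history = [2.1e9, 2.0e9, 2.2e9, 1.9e9, 2.1e9, 2.0e9, 2.3e9, 2.1e9]
latest_memory  = 3.6e9                      # a suspiciously large reading

mu, sigma = fit_gaussian(memory_history)
epsilon = 1e-12                             # threshold, picked by hand for now
if p(latest_memory, mu, sigma) < epsilon:   # low density == unlikely == anomaly
    print("flag: memory use looks anomalous")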

I'll then take all the latest metric readings (the x's), calculate the p(x)'s, and multiply them together; that gives me a single number, super-ε, that tells me if my computer is acting "weird".  If I multiply the p(x)'s for all 27+ computers together, I get another super-ε number telling me if my cluster of computers is acting "weird". Neat, huh?
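
Continuing the sketch above (same p() function, more invented numbers), rolling the per-metric values up into per-machine and per-cluster scores might look like this:

def machine_score(latest, params):
    # product of p(x) over one machine's metrics
    # latest: {metric: latest reading}, params: {metric: (mu, sigma)} fitted from history
    score = 1.0
    for metric, x in latest.items():
        mu, sigma = params[metric]
        score *= p(x, mu, sigma)
    return score

# two toy machines, each with two metrics (all numbers invented)
machines = [
    ({'shortterm': 0.4, 'memory': 2.2e9},
     {'shortterm': (0.5, 0.2), 'memory': (2.0e9, 1.2e8)}),
    ({'shortterm': 3.9, 'memory': 2.1e9},
     {'shortterm': (0.6, 0.3), 'memory': (2.0e9, 1.2e8)}),
]

cluster_score = 1.0
for latest, params in machines:
    cluster_score *= machine_score(latest, params)

if cluster_score < 1e-40:                   # another hand-picked threshold
    print("the cluster is acting weird")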

First Things First


But first, I need to gather the data and put it into a matrix form that the algorithm can handle. The raw data resides in RRDtool tables. I'll be using Python as my main language. The vectorization libraries will be numpy and scipy. Even though my main machine at home is a MacBook Pro, installing numpy and scipy on an MBP is a PITA, so I'll be doing all of my work on a (Ubuntu) Linux box.
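
For the record, "matrix form" just means something like this (a toy numpy example with made-up numbers; each row is one timestamp's readings and each column is one metric):

import numpy as np

# rows = timestamps, columns = metrics (say shortterm, midterm, longterm load)
readings = [
    (1321164000, 0.005, 0.013, 0.05),
    (1321167600, 0.003, 0.011, 0.05),
    (1321171200, 0.004, 0.012, 0.05),
]
X = np.array([row[1:] for row in readings])   # drop the timestamp column
print(X.shape)                                # (3, 3): three samples, three features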

Massaging the data will be the topic of my next blog post.