Article

Online anomaly detection for multi-source VMware using a distributed streaming framework

Journal

SOFTWARE-PRACTICE & EXPERIENCE
Volume 46, Issue 11, Pages 1479-1497

Publisher

WILEY
DOI: 10.1002/spe.2390

Keywords

real-time anomaly detection; incremental clustering; resource scheduling; data center; Apache Spark; Apache Storm

Funding

  1. Laboratory Directed Research and Development program at Sandia National Laboratories
  2. US Department of Energy's National Nuclear Security Administration [DE-AC04-94AL85000]
  3. National Science Foundation (NSF) [CNS 1229652, DUE 1129435]

Abstract

Anomaly detection refers to the identification of patterns in a dataset that do not conform to expected patterns. Such non-conformant patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, and malware. A daunting challenge is to detect anomalies in rapid, voluminous streams of data. This paper presents a novel, generic real-time distributed anomaly detection framework for multi-source stream data. As a case study, we investigate anomaly detection for a multi-source VMware-based cloud data center, which maintains a large number of virtual machines (VMs). The framework continuously monitors VMware performance stream data related to CPU statistics (e.g., load and usage). It collects data simultaneously from all of the VMs connected to the network and notifies the resource manager to reschedule its CPU resources dynamically when it identifies any abnormal behavior in the collected data. A semi-supervised clustering technique is used to build a model from benign training data only. During testing, if a data instance deviates significantly from the model, it is flagged as an anomaly. Effective anomaly detection in this case demands a distributed framework with high throughput and low latency. Distributed streaming frameworks such as Apache Storm, Apache Spark, and S4 are designed for lower data processing time and higher throughput than standard centralized frameworks. We have experimentally compared the average processing latency of a tuple during clustering and prediction in both Spark and Storm and demonstrated that Spark processes a tuple much more quickly than Storm on average. Copyright (c) 2016 John Wiley & Sons, Ltd.
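The semi-supervised scheme described in the abstract, building a cluster model from benign data only and flagging test instances that deviate significantly from it, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cluster count `k`, the distance `threshold`, and the toy (load, usage) samples are all assumptions made for the example.

```python
import math
import random

def centroid(points):
    """Component-wise mean of a non-empty list of points."""
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means over benign training data (illustrative only)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [centroid(cl) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids

def is_anomaly(point, centroids, threshold):
    """Flag a test instance whose distance to every benign cluster
    exceeds the threshold -- the deviation test from the abstract."""
    return min(math.dist(point, c) for c in centroids) > threshold

# Hypothetical benign (CPU load, CPU usage) samples from two VM workload modes.
benign = [(0.20, 0.30), (0.22, 0.31), (0.19, 0.28),
          (0.50, 0.60), (0.52, 0.58), (0.49, 0.61)]
model = kmeans(benign, k=2)

print(is_anomaly((0.95, 0.99), model, threshold=0.2))  # saturated VM: flagged
print(is_anomaly((0.21, 0.29), model, threshold=0.2))  # normal VM: not flagged
```

In the paper's setting this per-tuple distance test is what runs inside the distributed streaming framework (Spark or Storm), with each incoming VM measurement scored against the benign model; the sketch above only shows the scoring logic, not the stream plumbing.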

