
WHITEPAPER

The difference between in-depth analysis of virtual infrastructures & monitoring


Scenarios and use cases


In-depth analysis of Virtual Infrastructures vs. Monitoring

By Dennis Zimmer, CEO of opvizor GmbH, VMware vExpert, VMware VCAP

Table of Contents

1. Introduction
   1.1 Virtual infrastructures are becoming increasingly complex
   1.2 A wide range of virtualization solutions and infrastructure components
   1.3 Keeping systems reliable through monitoring
2. Operating Monitoring Solutions
   2.1 Setting the right threshold
3. In-Depth Analysis
   3.1 Removing ambiguity
   3.2 The difference between in-depth analysis and monitoring
   3.3 How to respond when problems arise
4. A Question of Correct Analysis
DISCLAIMER

1. Introduction

1.1 Virtual infrastructures are becoming increasingly complex

Virtualization is an indispensable part of the modern data center; the degree of virtualization frequently reaches 90 percent or more. What formerly ran on many individual servers today runs on a few hosts. With this high rate of virtualization and the resulting increase in complexity, problems become harder to locate. It is therefore necessary to consider how the infrastructure can be monitored accurately and how potential error situations can be found early, to avoid costly mistakes. Unfortunately, under certain circumstances, even minor problems can significantly impact the entire infrastructure.

1.2 A wide range of virtualization solutions and infrastructure components

There are many virtualization solutions: the selection ranges from KVM and Citrix to Microsoft Hyper-V and the market-leading provider VMware with its vSphere solution. The possible combinations with other infrastructure components are virtually limitless. Reduced to its basic functionality, each of these solutions works much the same way: it partitions resources for optimal, cost-effective use of physical hardware. In addition, completely new high-availability designs become possible.

1.3 Keeping systems reliable through monitoring

What about the reliability of the virtual machines (VMs)? Is the smooth operation of the VMs and the applications running on them guaranteed? Keeping track of this complex infrastructure is only possible by employing various tools, with at least one monitoring solution serving as the base. The aim is to be promptly notified when system loads are exceeded or failures occur. In many organizations, failure-prevention tooling targets 99.9% or even 99.99% availability; such figures are not achievable without appropriate software and automation.
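To put those availability figures in perspective, a quick back-of-the-envelope calculation (a minimal Python sketch, assuming an average year of 365.25 days) shows the yearly downtime budget each level implies:

HOURS_PER_YEAR = 365.25 * 24  # average year, ~8766 hours

for availability in (0.999, 0.9999):
    downtime_h = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability allows "
          f"{downtime_h:.1f} h (~{downtime_h * 60:.0f} min) of downtime per year")

# Output:
# 99.90% availability allows 8.8 h (~526 min) of downtime per year
# 99.99% availability allows 0.9 h (~53 min) of downtime per year

In other words, moving from three to four nines shrinks the annual downtime budget from roughly a working day to under an hour, which is why automation becomes mandatory.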

2. Operating Monitoring Solutions

Monitoring tools are widespread: Nagios or Icinga, Microsoft SCOM, and proprietary, application-specific monitoring tools (e.g. the monitoring integrated in VMware vCenter). They offer real-time insight into whether certain thresholds are exceeded or a failure has occurred. If so, the software alerts the administrator by email or SMS, or sounds an alarm.

2.1 Setting the right threshold

The biggest challenge is setting thresholds correctly, since the threshold determines whether an action is triggered or not. Overly sensitive thresholds lead to many alerts and alarms, flooding administrators with harmless or false messages; truly important messages can then be lost in the noise. But what is the correct threshold? That must be decided for each unique infrastructure, although recommendations and best practices exist that can be implemented and provide guidance.
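As an illustration of the idea (all names and numbers below are hypothetical and not taken from any particular monitoring product), one common noise-reduction technique is to pair warning and critical thresholds with a required number of consecutive breaches before an alert fires:

from collections import defaultdict

WARNING_PCT = 80.0    # warn above 80% CPU (example value)
CRITICAL_PCT = 95.0   # critical above 95% CPU (example value)
REQUIRED_HITS = 3     # consecutive breaches before alerting, to suppress spikes

_breaches = defaultdict(int)

def evaluate_sample(host, cpu_percent):
    """Return "WARNING"/"CRITICAL" once a threshold is breached repeatedly, else None."""
    if cpu_percent >= CRITICAL_PCT:
        level = "CRITICAL"
    elif cpu_percent >= WARNING_PCT:
        level = "WARNING"
    else:
        _breaches[host] = 0   # load recovered, reset the counter
        return None
    _breaches[host] += 1
    # Only alert after several consecutive breaches; a caller would now
    # notify the administrator by email or SMS.
    return level if _breaches[host] >= REQUIRED_HITS else None

for sample in (97.0, 98.0, 99.0):
    print(evaluate_sample("esx01", sample))   # None, None, CRITICAL

The REQUIRED_HITS guard is a crude version of what mature tools implement as soft/hard states or flap detection; Nagios, for example, distinguishes soft and hard states in exactly this spirit.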

3. In-Depth Analysis

3.1 Removing ambiguity

An analysis is, by definition, a systematic examination consisting of two processes: data collection and evaluation. Of particular interest are the relationships, effects, and interactions between the elements under study. Analysis is always about evaluating the data obtained.

3.2 The difference between in-depth analysis and monitoring

Fig. 1: A deep analysis of virtual infrastructures

Fig. 1 shows how an issue can escalate if it is not detected by in-depth analysis. The time available to act increases tremendously when a tool for in-depth detection has been set up in the infrastructure.

An in-depth analysis usually checks the infrastructure against rules, security guidelines, and best practices. It is less about the actual load state and more about the HOW, i.e. how something is configured. For example, a message such as "100% CPU utilization" appearing without further information is not very helpful. Here you can already see a clear distinction between pure monitoring and analysis: you want to know why the reported problem occurred and how it can be fixed. Automatic problem recognition, combined with troubleshooting guidance and documentation, would therefore be ideal.

A typical example, relevant for every virtualization vendor, involves vCPU (virtual CPU) and vMemory (the memory assigned to a virtual machine). Surely every administrator has received a request to create a virtual machine with x vCPUs and y GB of RAM. But how does the administrator notice whether those resources match the actual requirements of the virtual machine, or whether the sizing is completely overprovisioned? This is where a deep analysis comes into play: various metrics are analyzed, and the corresponding recommendations for resource optimization are displayed. A needlessly high number of vCPUs can itself become a performance problem on the respective host system.

Additionally, we must always bear in mind that a virtual machine rarely runs alone; it shares the physical host with as many other systems as can be deployed without interfering with each other. Thus, even where it may not seem directly relevant, an optimally configured resource benefits the overall infrastructure.
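As an illustration only, the following Python sketch flags virtual machines whose vCPU count far exceeds their observed usage. The data structures, the 25% threshold, and the rightsizing formula are invented assumptions for demonstration, not opvizor's actual algorithm; in a real environment the usage data would come from the hypervisor's statistics API.

from dataclasses import dataclass

@dataclass
class VmStats:
    name: str
    vcpus: int                # vCPUs assigned at creation time
    avg_cpu_used_pct: float   # average utilization over an observation window

def rightsizing_hint(vm, low_water_mark=25.0):
    """Flag VMs whose assigned vCPUs are far above what they actually use."""
    if vm.vcpus > 1 and vm.avg_cpu_used_pct < low_water_mark:
        # Rough estimate: enough vCPUs to cover the observed load, plus headroom.
        suggested = max(1, round(vm.vcpus * vm.avg_cpu_used_pct / 100) + 1)
        return (f"{vm.name}: {vm.vcpus} vCPUs at {vm.avg_cpu_used_pct:.0f}% "
                f"average usage; consider ~{suggested} vCPU(s)")
    return None

for vm in (VmStats("db01", 8, 12.0), VmStats("web01", 2, 70.0)):
    hint = rightsizing_hint(vm)
    if hint:
        print(hint)   # only db01 is flagged; web01 is considered well-sized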

The added value of automated analysis is that it screens system configuration information and measures the results against predefined rules. The administrator can, of course, check such items manually against best-practice recommendations, but given the size and complexity of some infrastructures this is daunting and quite error-prone. Based on best practices, further components are evaluated and recommendations are made depending on the current version. In a virtual environment, attention should also be paid to how storage and network components work together; another popular topic is whether clusters are uniformly configured. Through a deep analysis, the administrator wants to be informed preventively. That makes it possible to respond before an error occurs and to avoid breakdowns and losses in productivity. Once you imagine that up to 512 virtual machines can be supported per physical host (of course a very symbolic number), the need to operate optimally becomes clear.
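To make the principle concrete (the rules and host data below are invented examples, not a real rule catalog), each rule can be expressed as a predicate over a configuration snapshot plus a recommendation; the cluster-uniformity check mentioned above then looks roughly like this:

host_configs = [
    {"name": "esx01", "cluster": "prod", "mtu": 9000, "version": "5.5"},
    {"name": "esx02", "cluster": "prod", "mtu": 1500, "version": "5.5"},
]

rules = [
    {
        "id": "NET-MTU-UNIFORM",
        "description": "All hosts in a cluster should use the same MTU",
        "check": lambda hosts: len({h["mtu"] for h in hosts}) == 1,
        "recommendation": "Align the MTU setting across the cluster",
    },
    {
        "id": "HOST-VERSION-UNIFORM",
        "description": "All hosts in a cluster should run the same version",
        "check": lambda hosts: len({h["version"] for h in hosts}) == 1,
        "recommendation": "Upgrade stragglers to the common version",
    },
]

# Evaluate every rule against the snapshot and print only the violations.
for rule in rules:
    if not rule["check"](host_configs):
        print(f"[{rule['id']}] {rule['description']} -> {rule['recommendation']}")
# Only NET-MTU-UNIFORM fires here: esx01 and esx02 disagree on MTU.

Keeping rules as data rather than code is what allows a rule catalog to be updated centrally without touching the checking engine itself.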

Meanwhile, always-on applications such as SAP, Microsoft Exchange, SQL, SharePoint, Tomcat, etc., are business-critical. Yet often only "a virtual machine" is requested, with no knowledge of what will actually run on it. How, then, can the virtual machine be configured optimally for the request? Usually not with the default values, which are sometimes just clicked through in a wizard. Often the little things matter, such as the right choice of virtual network card or the correct SCSI controller in the virtual machine.

3.3 How to respond when problems arise

Through the monitoring system, the administrator receives information that an event has occurred, which must then be routed into troubleshooting. Given the large number of complex components in a virtual infrastructure, troubleshooting is often quite difficult: is it just a storage latency problem, or misconfigured MTU sizes on the switches? In the VMware environment, several tools support the administrator; esxtop is one popular example. However, using it effectively requires some know-how, especially when interpreting thresholds, so the administrator usually has to rely on their own initiative. What's more, an immediate or at least timely solution is needed.

In-depth analysis differs from monitoring in how encountered problems are treated. A virtual machine running at 100% CPU utilization is displayed and reported, but the administrator gets no information on why the CPU problem occurred. In many cases, a CPU limit was once set temporarily in the VM configuration, and removing the limit was later forgotten. In-depth analysis therefore combines a monitoring system with an appropriate expert system.
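As one concrete way to hunt for such forgotten limits, the sketch below uses pyVmomi, VMware's open-source Python SDK, to list VMs that still carry a CPU limit. The connection details are placeholders, and this is an illustrative check rather than opvizor's implementation; verify the calls against your pyVmomi version before relying on it.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="readonly",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        cfg = vm.config
        # ResourceAllocationInfo.limit is -1 when no CPU limit is configured.
        if cfg and cfg.cpuAllocation.limit not in (None, -1):
            print(f"{cfg.name}: CPU limited to {cfg.cpuAllocation.limit} MHz")
    view.Destroy()
finally:
    Disconnect(si)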

A new approach to in-depth analysis is taken by tools such as opvizor.

As Andreas Peetz, vExpert and blog author (http://www.v-front.de), said:

"Opvizor lets you run health checks and predictive analyses in a fully automated way. These are derived from up-to-date rules that are centrally provided by notable virtualization experts. Based on these "cloud rules" you can e.g. create weekly reports that are available anytime, anywhere. This way the virtualization admin is enabled to act preventively, but without burdening himself with maintaining complex software, because that is implemented as a real cloud service. Only one small local agent is needed in your environment. In a nutshell opvizor helps to avoid many issues and outages and makes the administrator's job easier and much more efficient. This software is definitely worth an investment!" ALL A QUESTION OF THE CORRECT ANALYSIS

4. A Question of Correct Analysis

It is not always easy to find THE perfect solution for a given infrastructure. However, you have to consider how individual software products in the areas of in-depth analysis and monitoring work together best, and also what gives the administrator a feeling of security (see also Fig. 2).

Thanks to Big Data techniques, sufficient metadata is usually available from the virtual infrastructure. However, this data needs to be properly evaluated, and that is where in-depth analysis comes in.

A deep analysis underpins a high-performance, secure, and largely error-free infrastructure. It reduces errors and warnings in the monitoring tools and relieves the administrator of much of the troubleshooting burden, freeing time for higher-value projects.

Type              | Use Case                   | Effort to Configure
------------------|----------------------------|--------------------
Monitoring        | Uptime surveillance        | High
In-depth analysis | In-depth compliance check  | Low to medium

Fig. 2

DISCLAIMER

Copyright 2014 opvizor GmbH, all rights reserved

The content and the information in this document are protected by copyright. Reproduction, processing, distribution, or duplication (copying by any means) of this work or portions thereof is not permitted without the consent of the publisher. The information in this document is provided together with the opvizor analysis software for VMware.

This document is for informational purposes only. opvizor GmbH assumes no liability for the accuracy or completeness of the information.

To the extent permitted by applicable law, opvizor GmbH provides this document "as is" without warranty of any kind, including in particular the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall opvizor GmbH be liable for any loss or direct or indirect damages arising from the use of this document, including, without limitation, lost profits, business interruption, loss of goodwill, or lost data, even if opvizor GmbH has been advised of the possibility of such damages.

opvizor GmbH reserves the right to make changes and improvements to the product in the course of product development.

opvizor GmbH, Schönbrunnerstrasse 218-220, staircase A 4.04, A-1120 Vienna, Austria | UID: ATU67195304 | www.opvizor.com | CEO: Dennis Zimmer

Date: May 3, 2014