
Network Monitoring Basics – What, Why, How? (by Keith Bromley)

Network monitoring is rapidly becoming a hot topic for enterprises simply because they rely so heavily on their corporate networks. Network outages or slow performance, especially for critical functions such as ecommerce, credit card point-of-sale transactions, corporate Internet access, email, and unified communications, can have a direct impact on an organization’s ability to be successful. The larger the company, the greater the network complexity and the greater the value of network uptime.

So we all know the issue. The question is: what is the best way to respond?

Is there a better way than how you have been doing things?

The answer depends upon what you have been doing so far. It’s definitely clear that the right type and implementation of network monitoring solution can help IT prevent potential network and application issues, and quickly solve issues that might otherwise have slipped through unnoticed. For instance, would you like to reduce your mean time to repair (MTTR) by up to 80%? Depending upon your current troubleshooting methodologies, it’s possible.

Let’s see how.

LMT Network Mon Image 1

Introducing Visibility

So, where do you start?  A quick and easy way to envision the overall process is as follows:

  • Capture target data from the network
  • Weed out duplicate/uninteresting data and organize it into information
  • And then analyze the information to gain insight into your network operation

Here’s a basic visual overview of the general process:

Moving from the general process into specifics, it works as follows. First, identify your network blind spots. What are you missing, and where? Then add equipment (such as taps and network packet brokers) to access that critical data. Once you have the data, filter and groom it so you’re left with only what is necessary; this is essentially the conversion of data into information. Finally, forward the information to the correct monitoring tool for analysis. After the analysis is done, you should have the actionable information (i.e. insight) you need to make the network corrections and improvements that solve your issue(s). Let’s explore these details further.
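As a toy illustration of this capture → groom → analyze flow, here is a short Python sketch. All names and packet records are invented for the example; in a real deployment this work is done by taps, packet brokers, and dedicated monitoring tools, not by a script:

```python
# Toy model of the monitoring pipeline: capture -> groom -> analyze.
# Plain dicts stand in for captured frames.
captured = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "port": 443, "payload": "a"},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "port": 443, "payload": "a"},  # duplicate copy
    {"src": "10.0.0.7", "dst": "10.0.0.9", "port": 25, "payload": "b"},
]

def groom(packets, wanted_port):
    """Keep only the traffic of interest and drop exact duplicates
    (roughly the job a network packet broker performs)."""
    seen, out = set(), []
    for p in packets:
        key = (p["src"], p["dst"], p["port"], p["payload"])
        if p["port"] == wanted_port and key not in seen:
            seen.add(key)
            out.append(p)
    return out

def analyze(packets):
    """Stand-in for a monitoring tool: count packets per source host."""
    counts = {}
    for p in packets:
        counts[p["src"]] = counts.get(p["src"], 0) + 1
    return counts

information = groom(captured, wanted_port=443)  # data -> information
insight = analyze(information)                  # information -> insight
```

The point of the sketch is the division of labor: grooming happens before the tool, so the tool only ever sees relevant, deduplicated traffic.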

Uncovering Potential Blind Spots

Blind spots in the network are the direct result of not having clear and comprehensive visibility. They obscure IT’s ability to quickly identify where a problem may be hiding, and they lead to:

  • Network problems and outages
  • Increased network security risk
  • Potential regulatory compliance issues

Unknown issues and “soon to be problems” exist in most networks to some degree. For instance, do any of these apply to you?

  • Are your security, network IT and compliance departments talking and sharing data? If not, these silos in an enterprise can be creating blind spots in your network.
  • Are you currently using virtualization technology? According to Gartner Research, up to 80% of virtualized data center traffic is east-west (i.e. inter- and intra-virtual machine traffic), so it never reaches the top of the rack where it can be monitored by traditional tap and SPAN technology, creating blind spots in your network.
  • Are your employees accessing your network with their own devices? If so, are your company security policies being bypassed? Policy violations can open the door to security, compliance and liability issues.
  • Do you use SPAN ports? Do all of your IT groups use the same SPAN ports? It’s important to know that SPAN ports are less secure than taps and can lead to blind spots in your network, especially if multiple people/groups change the SPAN programming to collect different sets of data.
  • Have you recently added new network equipment? When new equipment is added, there may not be a record of who owns it and what it does, and it therefore gets “lost” and forgotten, creating network blind spots and security holes. 

LMT - Network Mon Image 2

I wrote a separate article a while back that gives more examples of blind spots, if you’re interested.

Solving the Network Blind Spot Challenge

The next step is to remove the blind spots. A true visibility architecture is the solution: it enables you to see your network, identify potential problems, and solve them before they impact your business. The visibility architecture exposes the hidden locations where danger, problems, and inefficiencies can lurk, which enables IT to address the people, process, return on investment (ROI), and technology issues facing the business.

There are four basic components to an effective visibility architecture:

  • Access to the network
  • Monitoring middleware functionality (such as filtering and packet grooming)
  • Advanced monitoring functions including application intelligence and NetFlow support
  • Connectivity to monitoring tools

Here’s a simple visual of this process:

LMT Network Mon Image 3
The network access layer refers to the capability of gathering the necessary monitoring data and passing it on to packet processing devices. This includes copper and fiber taps, virtual taps, bypass switches, and monitoring agents. A tap makes a complete copy of the data (both good and bad packets) that passes through the network at that point. The replicated packets are then sent on to a packet processing device (i.e. monitoring middleware), while the original packets continue on into the network. All of these taps can provide data to the monitoring tools for analysis, and they can be deployed inline, out of band, within virtual data centers, and as part of high availability solutions. Note that the situation is a little different for inline monitoring. A special type of tap (called a bypass switch) is normally used for inline security tools. In this case, a copy of the data is not made; the original data is diverted, analyzed by the inline tools, and then returned to the network to continue on to its destination.

The control layer contains monitoring middleware technology called network packet brokers (NPBs). These devices allow you to groom the monitoring data as required. Since the data coming in from the tap is a complete copy of all traffic, some of it will need to be filtered and directed to the appropriate monitoring tool. Other functions, such as deduplication, packet slicing, time stamping, and data masking, can be applied to the data as required to groom it. Packet brokers also provide aggregation and load balancing of information to the proper monitoring tools. This makes the tools more efficient and can save you money in the short term. For instance, load balancing allows you to spread the monitoring traffic across multiple tools if you need to. One use case is to take faster 10 GE traffic and spread it across multiple 1 GE tools, assuming you have enough 1 GE tools for the load. This lets you extend the life of your 1 GE tools a little longer, until you have the budget to purchase more expensive tools that can handle the higher data rates.
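The load-balancing idea can be sketched in a few lines of Python. This is a simplified model, not how a packet broker is actually implemented: each flow is hashed on its 5-tuple so that every packet of a conversation lands on the same lower-rate tool (the tool names here are invented):

```python
import zlib

# Toy flow-aware load balancer: spread 10 GE traffic across four 1 GE
# tools. Hashing the 5-tuple keeps a whole flow on one tool, so each
# tool still sees complete conversations it can analyze correctly.
TOOLS = ["tool-1", "tool-2", "tool-3", "tool-4"]

def pick_tool(src_ip, dst_ip, src_port, dst_port, proto):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return TOOLS[zlib.crc32(key) % len(TOOLS)]

# Every packet of the same conversation maps to the same tool:
a = pick_tool("10.0.0.5", "10.0.0.9", 51000, 443, "tcp")
b = pick_tool("10.0.0.5", "10.0.0.9", 51000, 443, "tcp")
```

Real packet brokers do this in hardware at line rate, but the design choice is the same: hash per flow, not per packet, so no tool sees half a conversation.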

Intelligence services provide an additional level of data monitoring and processing. Examples include filtering at the application level, generation of NetFlow data, geolocation of users and devices, and capture of browser information. This allows you to further isolate where problems may be located within your network and what each problem is.
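To make NetFlow generation concrete, here is a toy sketch of the underlying idea: packets are collapsed into per-flow records keyed by the classic 5-tuple, with packet and byte counters. This is an illustration of the concept only, not the actual NetFlow export format or any vendor's implementation:

```python
from collections import defaultdict

# Toy NetFlow-style aggregation: collapse individual packets into
# per-flow records keyed by the 5-tuple (src, dst, sport, dport, proto).
def build_flow_records(packets):
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["length"]
    return dict(flows)

# Two packets of one HTTPS conversation collapse into a single record:
pkts = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "sport": 51000, "dport": 443,
     "proto": "tcp", "length": 1500},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "sport": 51000, "dport": 443,
     "proto": "tcp", "length": 600},
]
records = build_flow_records(pkts)
```

The win is data reduction: a tool consuming flow records instead of raw packets can characterize who is talking to whom, and how much, at a fraction of the data volume.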

The last component is the tool layer, which contains purpose-built monitoring tools. These come in all sorts of flavors: security tools (e.g. firewalls, threat intelligence gateways, IPS, IDS, SIEM, DLP), network performance and troubleshooting tools (e.g. data capture tools, data recorders, debug tools, logging tools), and application performance monitoring tools (e.g. application monitors, QoS monitors, synthetic monitors). The purpose of these tools is to analyze the information given to them and provide the insight you need to implement an action that solves your problem.

It is important to understand that not all monitoring tools are created equal. Some vendors have tried to create the perception that all a customer needs to do is add their monitoring tool to the network and it will solve every problem. This is obviously not true. If a network engineer is using a SPAN port instead of a tap, then the data reaching the tool may not include all of the pertinent traffic, so the tool is probably missing information. And without a network packet broker, the monitoring tool may be overloaded with the wrong data (duplicate packets, unfiltered irrelevant data, and uncorrelated data), which significantly reduces the tool’s efficiency and accuracy. So to be clear, a network monitoring tool is a key component of an effective visibility architecture, but it’s not the only ingredient.

Realizing the Benefits of a Visibility Architecture

Once implemented, you should see a fairly rapid return on investment from your visibility architecture. This results primarily from the architectural, process, and technical improvements you implement, so actual results depend upon that implementation. However, there are many significant general business benefits you should be able to notice:

  • The ability to deliver an enhanced end-customer experience – fewer network issues should make for happier internal/external customers
  • Greater visibility into both physical and virtual network traffic
  • Delivery of ALL the data needed for true end-to-end visibility and insight as the network scales
  • Ability to leverage your investment in existing monitoring and security tools, even while migrating the network to higher speeds

At Ixia, we have literally seen customers experience up to an 80% reduction in their MTTR. This is documented in case studies on our website. The most common sources of savings are the speed and ease of filtering the packet data before it goes to the monitoring tool, eliminating change board approvals (and the delays they introduce) to capture data, load balancing of higher rate data across multiple lower rate tools, and the elimination of crash carts for network analysis.

The End Goal

The final stage of implementing complete visibility is to merge your visibility architecture with your security architecture. This creates network security resilience: the ability not only to resist attacks but also to deliver the flexibility needed to support self-healing capabilities such as inline security tools, real-time responsiveness to security threats, SSL decryption, application intelligence, and application filtering.

Once the visibility and security architectures are integrated, you can realize a wide array of savings and capabilities benefits. For instance, inline security and performance tools can be quickly and easily implemented for immediate time to value. The correlation of out of band tool data (forensic analysis, recording tools, packet captures, and logs) can be combined with inline tool data to accurately diagnose threats and potential problems faster.

The end goal is to get the right information to the right tool at the right time.  This allows you to gain deep insight into what is, and what is not, happening on your network. It’s all about finding any problems quickly and fixing them before they impact your business.

Want more information on network visibility? Check out this whitepaper and/or give Ixia a call at +1 (877) 367-4942 and we can help you out.

Author: Keith Bromley is a product marketing manager for Ixia, Inc., with more than 20 years of industry experience in marketing and engineering. Keith is responsible for marketing activities for Ixia’s network monitoring switch solutions. As a spokesperson for the industry, he is a subject matter expert on network monitoring, management systems, unified communications, IP telephony, SIP, and wireless and wireline infrastructure. Keith joined Ixia in 2013 and has written many industry whitepapers on network monitoring, network visibility, IP telephony drivers, SIP, and unified communications, as well as ROI and TCO for IP solutions. Prior to Ixia, Keith worked for several national and international high-tech companies including NEC, ShoreTel, DSC, Metro-Optix, Cisco Systems and Ericsson, for whom he was industry liaison to several technical standards bodies. He holds a Bachelor of Science in Electrical Engineering.