45 posts categorized "WildPackets"

Network Monitoring on High-Speed Networks (by Jay Botelho)

While most enterprises are making the jump from 1G to 10G, some early adopters are leaping ahead to 40G and even 100G. These rapid advancements in networking, along with new technologies like virtual networks and software-defined networking (SDN), are driving an equally large change in network monitoring technologies.

The ability to seamlessly monitor, analyze, and troubleshoot, especially in real time, at multi-gigabit speeds is becoming a significant challenge. However, there are some ways to make this process more manageable and ensure that you are not missing the data you need for detailed network analysis.

Prioritize Monitoring Needs

Let’s say your 10G link (full duplex) is 50% utilized. That’s 10Gbps of traffic that requires analysis, or about 75GBytes of data that must be analyzed per minute. Besides supercomputing applications, like weather prediction, there are very few applications that require this level of data processing. And of course supercomputing platforms are not within the budget for most network analysis solutions, so all this analysis must be packed into more traditional and affordable computing platforms.
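The back-of-the-envelope math is worth making explicit. A minimal sketch (the function name is invented for illustration) that converts aggregate link capacity and utilization into bytes per minute:

```python
def bytes_per_minute(link_gbps, utilization):
    """Convert aggregate link capacity and utilization into bytes of
    traffic that must be analyzed per minute."""
    bits_per_second = link_gbps * 1e9 * utilization
    return bits_per_second / 8 * 60  # bits -> bytes, seconds -> minutes

# A full-duplex 10G link (20 Gbps aggregate capacity) at 50% utilization
# carries 10 Gbps of traffic, or 75 GB of data per minute.
print(bytes_per_minute(link_gbps=20, utilization=0.5) / 1e9)  # → 75.0
```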

Continue reading "Network Monitoring on High-Speed Networks (by Jay Botelho)" »

Analyze VoFi With WildPackets (by Jay Botelho)

With all the hype around gigabit wireless - 802.11ac (scheduled for ratification in early 2014) and 802.11ad (ratified December 2012), the delivery of new services like Voice over Wireless (VoFi) is sure to grow in popularity, not only for consumers, but in the enterprise as well. Handling a few simultaneous calls on a home network is not much of a challenge, but handling 10 – 50 simultaneous calls per AP in an enterprise setting, all while continuing to deliver wireless data feeding ever-more-demanding applications, is most certainly a challenge, hence the limited deployment so far. But with much faster wireless network speeds just around the corner, services like VoFi are ready for prime time.

VoFi can provide a real benefit in the workplace, especially in highly mobile environments buried deep inside buildings, like hospitals, warehouses, and customer service in big-box stores. To serve mobile workers today, these industries often use cellular technology, but coverage issues within these facilities significantly reduce call quality, not to mention the cost of service for each cell phone. With VoFi, APs can be placed to ensure optimum call quality throughout the facility, reducing dropped calls and significantly increasing customer satisfaction. And all this for a fixed cost, just the handsets and the APs, with no additional monthly charges.

Whether or not your organization has picked up on the VoFi trend yet, gigabit wireless will be the enabler for many organizations to jump on board. Below are suggested steps for network monitoring and analysis with VoFi, so you can be ready when the time comes.
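Monitoring VoFi ultimately means scoring call quality from measurable network metrics. One common approach, sketched below, is a simplified E-model approximation that maps latency, jitter, and packet loss to an estimated Mean Opinion Score (MOS); the exact coefficients vary by implementation, so treat this as illustrative rather than a WildPackets-specific algorithm:

```python
def estimate_mos(latency_ms, jitter_ms, loss_pct):
    """Estimate a Mean Opinion Score (1-5) for a voice call from network
    metrics, using a simplified E-model (R-factor) approximation."""
    # Jitter is weighted more heavily than raw latency because it
    # disrupts the playout buffer; +10 ms allows for codec delay.
    effective_latency = latency_ms + 2 * jitter_ms + 10
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    r -= 2.5 * loss_pct  # each 1% packet loss costs ~2.5 R-factor points
    r = max(0.0, min(100.0, r))
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(estimate_mos(latency_ms=20, jitter_ms=5, loss_pct=0))    # healthy call
print(estimate_mos(latency_ms=200, jitter_ms=30, loss_pct=10))  # poor call
```

Calls scoring below roughly 3.5 are generally where users start to complain, which makes this a useful threshold for alerting.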

Continue reading "Analyze VoFi With WildPackets (by Jay Botelho)" »

Application Performance Monitoring is all about the Users (by Jay Botelho)

Whether your users are accessing email from a local server, browsing the web, or working with applications running in a virtual server (local or as-a-service), it’s imperative to constantly track and monitor these events for all users on your network. In other words, you need to be performing Application Performance Monitoring (APM). And this doesn’t need to be complex – it can be as simple as noticing which IP conversations and what activities are “normal” for each user. By uncovering and recording this information, you’ll have everything you need to quickly determine when the user experience heads south.
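The "notice what's normal for each user" idea can be as simple as a lookup table keyed by user. The sketch below is hypothetical (the class name, users, and application labels are invented for illustration), assuming flow records that identify the user, peer address, and application:

```python
from collections import defaultdict

class ConversationBaseline:
    """Learn each user's 'normal' IP conversations, then flag anything new."""

    def __init__(self):
        self.normal = defaultdict(set)  # user -> set of (peer_ip, app)

    def learn(self, user, peer_ip, app):
        """Record a conversation observed during the baselining period."""
        self.normal[user].add((peer_ip, app))

    def is_anomalous(self, user, peer_ip, app):
        """A conversation is anomalous if this user has never had it before."""
        return (peer_ip, app) not in self.normal[user]

baseline = ConversationBaseline()
baseline.learn("alice", "10.0.0.5", "smtp")
baseline.learn("alice", "10.0.0.9", "http")
print(baseline.is_anomalous("alice", "10.0.0.5", "smtp"))      # → False
print(baseline.is_anomalous("alice", "203.0.113.7", "ftp"))    # → True
```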

A key metric to monitor as part of this “normal” behavior is Application Response Time, a quantitative measurement determining when applications are experiencing poor performance. Although quantitative, measurements of application response time can be made in different ways, and from different measurement points, leading to ambiguity as to exactly what is being measured. But in most cases application response time will give you a very good idea about the overall user experience, and that’s the primary goal in APM. It’s when you get to the next step, determining the root cause of the problem, that the details of how and where the measurements are made really come into play.
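One common way to measure response time from the network (a sketch under assumed inputs, not a description of any particular product) is to pair each request with the first matching response on the same conversation and take the difference in packet timestamps:

```python
def response_times(packets):
    """Compute application response times by pairing each request with the
    first matching response on the same conversation.

    packets: iterable of (timestamp, conversation_id, direction) tuples,
    where direction is 'req' or 'resp'. Returns a list of response times.
    """
    pending = {}  # conversation_id -> timestamp of outstanding request
    times = []
    for ts, conv, direction in sorted(packets):
        if direction == "req":
            pending[conv] = ts
        elif direction == "resp" and conv in pending:
            times.append(round(ts - pending.pop(conv), 6))
    return times

pkts = [(0.00, "a", "req"), (0.12, "a", "resp"),
        (0.05, "b", "req"), (0.95, "b", "resp")]
print(response_times(pkts))  # → [0.12, 0.9]
```

Note that measuring at the client sees end-to-end delay, while measuring near the server isolates server processing time; that choice is exactly the "where" ambiguity described above.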

Continue reading "Application Performance Monitoring is all about the Users (by Jay Botelho)" »

Role of Packet Capture in Network Security (by Jim MacLeod)

While working on my yard this weekend, I started thinking about the tools that I was using. My favorite is probably the weed whacker. While it’s intended for up-close trimming, its design gives it a great deal of versatility. I can use it to trim, edge, weed, mow, or even dig small holes. However, I recognize that it’s not my only tool, and the best results with the least effort will come from using it in combination with other purpose-built devices. Using a weed whacker to mow the lawn is time consuming and requires more effort than pushing the lawn mower, especially since the weed whacker only covers a small area at once, and forces me to choose how deep to go.

Those of us who love packets tend to feel similarly about our packet capture. We know that professional-grade tools can monitor networks 24x7, providing statistical information about protocol and node usage, as well as deep dives for captured traffic once we’ve identified what we need to analyze. However, other purpose-built tools are better at certain things. Firewall logs show what traffic was forwarded or blocked. Intrusion Detection Systems (IDS) classify traffic based on patterns that have been seen in malicious activity. While we can gather the same information with packet capture, it takes more work to get to the point of finding what needs to be examined, and what can be ignored.
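To make the IDS comparison concrete, here is the kind of signature matching an IDS performs, applied by hand to payloads pulled from a capture. The signatures and payloads are invented for illustration; real IDS rule languages are far richer:

```python
# Toy IDS-style signatures: name -> byte pattern seen in malicious traffic.
SIGNATURES = {
    "sql-injection": b"' OR '1'='1",
    "directory-traversal": b"../../etc/passwd",
}

def classify(payload):
    """Return the names of all signatures a payload matches."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(classify(b"GET /index.php?id=1' OR '1'='1 HTTP/1.1"))  # → ['sql-injection']
print(classify(b"GET /index.html HTTP/1.1"))                 # → []
```

Doing this across every payload in a large capture is exactly the "more work" the paragraph describes: a purpose-built IDS does the matching in real time so the capture tool can focus on the flows the IDS flags.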

Continue reading "Role of Packet Capture in Network Security (by Jim MacLeod)" »

Pinpoint Network Bottlenecks (by Jim MacLeod)

Theory of Constraints (TOC), as popularized by the business novel The Goal in 1984, and recently resurrected for DevOps and IT in The Phoenix Project, holds that any given system is limited in throughput by only a few key bottlenecks. Improvements anywhere else won’t speed things up, but improvements at the bottleneck will have a dramatic impact on the whole system.

On a network, there are two kinds of bottlenecks: bandwidth and latency. While these concepts are familiar to most of us, I’d like to highlight them as a base for the techniques in this post. Bandwidth is the ability to move a large amount of data. I like to think of bandwidth as a cargo ship: lots of containers, lots of capacity, but it takes days to make the journey. Latency is more like a courier: get a small package there as fast as possible. While there’s some overlap between the two – for example, airplanes are fast and have lots of storage – the two concepts have different effects on your data, and different data has a different mix of demands.
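The "different mix of demands" falls out of a simple model, sketched below with invented numbers: delivery time is one-way latency plus serialization time, and which term dominates depends on the payload.

```python
def transfer_time(payload_bytes, bandwidth_bps, latency_s):
    """Time to deliver a payload: one-way latency plus time on the wire."""
    return latency_s + payload_bytes * 8 / bandwidth_bps

# A 1 GB backup over a 1 Gbps link with 50 ms latency: bandwidth
# dominates (8 s of serialization vs 0.05 s of latency).
print(transfer_time(1e9, 1e9, 0.050))  # → 8.05

# A 200-byte VoIP packet on the same link: latency dominates, and
# adding bandwidth would barely help.
print(transfer_time(200, 1e9, 0.050))
```

This is why upgrading a link helps the cargo-ship traffic but does almost nothing for the courier traffic.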


Latency is the delay in moving data. It’s measured in time, and adds up from end to end. Once you lose the time at one location, it’s impossible to make up for it elsewhere. That’s why finding a bottleneck is so important: a single device which adds significant latency slows down the entire trip.
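Because latency is additive, per-segment measurements (for example, from capturing on each side of every device) can be summed end to end, and the bottleneck is simply the slowest hop. A minimal sketch with invented hop names and numbers:

```python
# Hypothetical per-hop latency measurements, in milliseconds.
hops = {
    "client-lan": 0.4,
    "wan-link": 38.0,
    "firewall": 2.1,
    "server-lan": 0.3,
}

total = sum(hops.values())              # latency adds up end to end
bottleneck = max(hops, key=hops.get)    # the single slowest hop

print(f"end-to-end: {total:.1f} ms, bottleneck: {bottleneck}")
# → end-to-end: 40.8 ms, bottleneck: wan-link
```

Per the Theory of Constraints, shaving a millisecond off the firewall barely matters here; only improving the WAN link moves the end-to-end number meaningfully.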

Continue reading "Pinpoint Network Bottlenecks (by Jim MacLeod)" »