What the Heck are Blind Spots? (by Keith Bromley)

What do people mean when they talk about network blind spots? And are they really that important? The answer to the second question is an overwhelming yes. Blind spots correlate directly with network problems and outages, increased network security risk, and potential regulatory compliance issues.

As for the first question, blind spots are the hidden reasons behind a lack of network visibility. Let’s look at some examples of what we’re talking about. Here’s a list (though not an all-inclusive one) of blind spot examples that I’ve compiled. While several of these may not apply to your organization, some probably do, right? Scan the list and see if anything matches your network.

16 important areas to review:

  • Siloed IT organizations – Security, network IT, and compliance groups often don’t talk or share data, and can form silos within an enterprise. This can lead to inconsistent data and compliance policies, SPAN port contention issues, improper SPAN port programming that results in incorrect or missing data captures, and plain old data conflicts that arise from collecting data in the wrong places.
  • Use of virtualization technology – According to Gartner Research, up to 80% of virtualized data center traffic is east-west (i.e. inter- and intra-virtual machine traffic), so it never reaches the top of the rack where it can be monitored by traditional tap and SPAN technology. While virtual tap technology exists to counter this threat, according to the Ixia 2015 virtualization study, 51% of IT personnel don’t know about the technology. For instance, one Ixia customer, a healthcare insurance provider, had zero visibility into 100+ virtual hosts. This was immediately solved once they installed the Ixia Phantom vTap.
  • SPAN port overloading – An Ixia case study shows the various problems that a national pharmacy ran into with SPAN port contention, along with the fact that their SPAN ports were also dropping packets and losing information due to a data overload condition. SPAN ports can, and will, drop monitoring data if the CPU is overloaded (a rough oversubscription sketch follows this list). Besides the port contention problems, the case study also shows that the customer ran into problems splitting and filtering the data from the SPAN ports. For more information on general SPAN port visibility issues, see this article.
  • Rogue IT – When users add their own Ethernet switches or access points (e.g. an iPhone hot spot), use offsite data storage (like Box), or add something else to the network, company security policies are often subverted, which opens the door to security, compliance, and liability issues. IT rarely knows anything about these devices, especially since they can appear sporadically, like Wi-Fi hot spots.
  • Mergers and acquisitions – The blending of disparate equipment and systems often causes interoperability issues, which leads to system/application downtime, system capabilities being turned off to improve network performance, and network and application monitoring being scaled back or eliminated while extensive network re-architecting takes place. This results in very limited visibility (i.e. blind spots) because no one really knows what is happening. With so much M&A activity during 2015, this may be a significant source of blind spots in 2016.
  • Addition of new network equipment – When new equipment is added, there is often no record of who owns it and what it does. The equipment can get “lost” and forgotten about, especially if key IT personnel leave the company or change departments. “Lost” equipment that is still functioning in the network can be a source of security vulnerabilities due to a lack of proper software updates and unknown user access privileges.
  • New equipment complexity – New equipment is often complex to understand, i.e. what it does and how best to use it, and for data networks, complexity never seems to take a rest. The rate of increase has been characterized by David Cappuccio of Gartner, who stated at a Gartner Symposium in late 2012 that for every 25% increase in the functionality of a system, there is a 100% increase in complexity. See the blog that Eric Savitz (with Forbes) wrote about that symposium. If IT doesn’t have time to research new equipment and how to properly program it, they often stop using the equipment and eventually forget about it. The equipment can remain running in the network even though it isn’t being utilized.
  • Network complexity – When new links and office locations are added, they can be set up with different VLANs, subnets, etc. to segment them geographically. These segmented networks often have separate equipment for remote logon, authentication, etc., which makes it hard to track what is happening at those locations.
  • Inconsistent monitoring/data collection policies – This can occur for multiple reasons, but one common effect is that virtual equipment monitoring policies and physical equipment monitoring policies differ, which can cause compliance data mismatches, requisite data that is simply never captured, and security issues. See this case study for an example.
  • Network planning issues – In many cases, the requisite data just doesn’t exist at all. This is a common experience for organizations with external customers. For instance, service providers (especially wireless service providers) need good customer data (service holes, malfunctioning radios, poor coverage, and even customer dissatisfaction) to properly plan their networks and deliver a better quality of experience.
  • Network upgrades that are postponed – Postponing upgrades can mean continuing to use old, antiquated equipment that has limited usefulness on a higher-speed network. Network performance then suffers, which affects IT’s ability to solve problems as fast as required.
  • Network upgrades that are implemented – Even necessary upgrades can result in blind spots. One example is the addition of new, higher-speed equipment, which may end up overloading various components of the network, especially monitoring and security tools, with too much data. This is especially true if the monitoring and performance tools weren’t upgraded at the same time. These tools can become overloaded and lose (i.e. drop) data or overwrite buffers/logs at a faster-than-expected pace. In addition, tool dashboards are often limited in what they can see, which allows the blind spot to remain hidden. Common vulnerabilities can be found in the CVE database.
  • Addition of new applications – A common blind spot for hospitals is access to application data and application performance trending. In this case study, the customer was using the EpicCare Ambulatory Electronic Medical Record (EMR) application from Epic but was having problems correlating all of the information from their different systems.
  • Security and network audits are postponed or rarely occur – This action will often result in a safe and cozy harbor for various threats and malware on your network. It’s hard to say what will be hidden but whatever it is, I’m sure you don’t want it. See this resource for more information.
  • Anomalies – Unexplained network events happen and are often addressed by IT, but if they are spurious and random in nature and go undiagnosed, they can result in larger problems later on. Ixia has several customers who have eliminated their network anomalies and also realized a mean time to repair (MTTR) reduction of up to 80%.
  • Incorrect equipment programming rules – An example of this is firewall programming, which is rules-based and typically processed top-down through access lists. When traffic matches a rule, that rule’s action is applied immediately, even if more specific rules exist further down the list. This can cause gaps in network security because the packet is forwarded before the rule (or security tool) that should have inspected it ever sees it. A small sketch of this first-match behavior follows this list.
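
To make that first-match behavior concrete, here is a minimal Python sketch. The rule names and addresses are made up for illustration; real firewalls implement this in their own rule engines, but the shadowing effect is the same: a broad early rule acts on traffic before a later, more specific rule is ever evaluated.

```python
# Minimal sketch of first-match firewall rule processing.
# Rule names and addresses are hypothetical, for illustration only.
from ipaddress import ip_address, ip_network

# Ordered access list: (rule name, source network, action)
RULES = [
    ("allow-campus", ip_network("10.0.0.0/8"), "permit"),
    # This more specific rule is never reached for 10.1.2.0/24 traffic,
    # because the broader rule above already matched.
    ("block-lab",    ip_network("10.1.2.0/24"), "deny"),
]

def evaluate(src_ip: str) -> str:
    """Return the action of the FIRST matching rule, like a firewall ACL."""
    addr = ip_address(src_ip)
    for name, network, action in RULES:
        if addr in network:
            print(f"{src_ip}: matched '{name}' -> {action}")
            return action
    print(f"{src_ip}: no match -> implicit deny")
    return "deny"

# 10.1.2.5 should arguably be denied, but the broad permit wins first.
evaluate("10.1.2.5")     # matched 'allow-campus' -> permit
evaluate("192.168.1.7")  # no match -> implicit deny
```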
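
And to illustrate the SPAN overload point from earlier in the list, here is a rough back-of-the-envelope sketch. The port names, link speeds, and utilization figures are hypothetical; the point is simply that mirroring several busy full-duplex links into one SPAN destination quickly exceeds what that port can carry, and the excess is silently dropped.

```python
# Rough back-of-the-envelope check for SPAN port oversubscription.
# Port names, speeds, and utilization figures below are hypothetical.

# (source port, link speed in Gbps, average utilization)
span_sources = [
    ("Gi1/0/1", 1.0, 0.60),
    ("Gi1/0/2", 1.0, 0.55),
    ("Te1/0/3", 10.0, 0.30),
]

span_destination_gbps = 1.0  # speed of the port feeding the monitoring tool

# SPAN copies both directions, so a full-duplex 1 Gbps link can
# produce up to 2 Gbps of mirrored traffic on its own.
mirrored_gbps = sum(speed * util * 2 for _, speed, util in span_sources)

print(f"Mirrored traffic: {mirrored_gbps:.2f} Gbps")
print(f"SPAN destination capacity: {span_destination_gbps:.2f} Gbps")

if mirrored_gbps > span_destination_gbps:
    # Anything beyond the destination's line rate is dropped without
    # warning, which is exactly the blind spot described above.
    print("Oversubscribed: the SPAN port will drop monitoring traffic.")
```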

So, when it comes to your specific network, where are your potential blind spots? If some of the blind spots listed above apply to you, you typically have two ways to respond – either proactively or reactively. The reactive approach is straightforward: just wait until something happens and then go fix it. While it’s the simplest approach, it’s also usually the costliest in terms of locating exactly what issue the blind spot caused (which usually increases your mean time to repair). In addition, it often necessitates the purchase and implementation of expensive long-term fixes, or multiple “Band-Aid” fixes that never really fix the problem.

If you want to follow a proactive approach, the best solution is to design a visibility architecture. This involves more upfront cost and planning but will normally pay for itself very quickly. A visibility architecture is a plan you create for organizing exactly how your monitoring tools connect to the network: how they connect (taps or SPAN ports), where they connect (edge, core, which branches, etc.), and how you groom the monitoring data before you send the stream to a tool (packet filtering, application filtering, deduplication, packet trimming, decryption, aggregation, etc.). If you want to learn more about designing a visibility architecture, check out this New Whitepaper.
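
As a rough illustration of the grooming step, here is a small Python sketch of a filter/deduplicate/trim pipeline. The field names, filter value, and snap length are invented for the example; a real packet broker or monitoring switch does this in purpose-built hardware at line rate, but the sequence of operations is the same idea.

```python
# Minimal sketch of a monitoring "grooming" pipeline: filter, deduplicate,
# and trim packet records before forwarding them to a tool.
# Field names, filter values, and snap length are invented for the example.
from hashlib import sha1

def groom(packets, keep_dst_port=443, snap_len=128):
    seen = set()
    for pkt in packets:
        # 1. Packet filtering: only keep traffic the tool actually needs.
        if pkt["dst_port"] != keep_dst_port:
            continue
        # 2. Deduplication: drop copies of the same packet seen on multiple taps.
        digest = sha1(pkt["payload"]).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        # 3. Packet trimming: keep only the first snap_len bytes of payload.
        yield {**pkt, "payload": pkt["payload"][:snap_len]}

packets = [
    {"dst_port": 443, "payload": b"\x16\x03\x01" + b"A" * 500},
    {"dst_port": 443, "payload": b"\x16\x03\x01" + b"A" * 500},  # duplicate copy
    {"dst_port": 53,  "payload": b"dns query"},                  # filtered out
]

for groomed in groom(packets):
    print(groomed["dst_port"], len(groomed["payload"]))  # 443 128
```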

To end blind spots in your network, you need to be able to see everything. Unknown issues and “soon to be problems” exist in every network to some degree, and eliminating them means implementing a visibility architecture. It’s not hard or complicated, but it does require some planning. The sooner you accomplish this and integrate a visibility architecture into your IT network, the sooner you can realize cost and productivity savings.

Author: Keith Bromley is a product marketing manager for Ixia, Inc., with more than 20 years of industry experience in marketing and engineering. Keith is responsible for marketing activities for Ixia’s network monitoring switch solutions. As a spokesperson for the industry, Keith is a subject matter expert on network monitoring, management systems, unified communications, IP telephony, SIP, wireless and wireline infrastructure. Keith joined Ixia in 2013 and has written many industry whitepapers covering topics on network monitoring, network visibility, IP telephony drivers, SIP, unified communications, as well as discussions around ROI and TCO for IP solutions. Prior to Ixia, Keith worked for several national and international Hi-Tech companies including NEC, ShoreTel, DSC, Metro-Optix, Cisco Systems and Ericsson, for whom he was industry liaison to several technical standards bodies. He holds a Bachelor of Science in Electrical Engineering.
