
Cloud Networking – Once More Into The Breach (by Keith Bromley)

Anyone who has been in networking for several years has seen an enormous amount of change. Initially, businesses had physical network and switching equipment (racks and chassis) that resided on their premises. This corporate network consisted of routers, switches, servers, and so on. That simple network concept expanded to include wide area networking, international and distributed offices, and then extensive security measures including IPS, IDS, DLP, WAF, and more.

The concept took a quantum leap as we went through a “virtualization” mania a few years ago, when everything needed to be moved to virtualized servers in the virtual data center. Now we are on the precipice of another quantum leap: cloud networking, where a significant majority of capabilities are being moved to either public or private cloud networks.


Whether you are a proponent of this move to the cloud or not, there are some things to consider if, and when, you decide to take the plunge. While there has been a lot of hype around the benefits of cloud computing, very little is being said about the inherent drawbacks.

One of the current challenges for IT teams is the loss of visibility that comes with the shift to the cloud. For instance, once you give up control of the network infrastructure, you lose the ability to capture important packet data from tap and SPAN ports. This data is necessary for troubleshooting and performance analysis. Monitoring and forensic tools still need to perform deep packet inspection as part of application performance monitoring (APM) and troubleshooting activities. Log data and log files are simply not good enough.

In addition, while many cloud vendors will tell you that they offer security and visibility capabilities, this applies to their portion of the cloud (the infrastructure), not your workspace. Their touted “security solution” is often just an access list. If you’ve operated a data center before, was an access list the only thing you used to secure your network? I think not.

However, there is a remedy. You can deploy a virtual tap into a container within your cloud environment. This allows you to capture the specific types of packet data you are looking for within your portion of the cloud. Once the tap captures the data, it can be copied and sent on to either your cloud-based or on-premises tools for further analysis.
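Conceptually, a virtual tap copies packets that match a filter and forwards the copies to analysis tools, while the original traffic passes through untouched. The following Python sketch illustrates that copy-and-forward behavior; the packet dictionaries and tool callbacks are illustrative stand-ins, not a real capture API:

```python
# Minimal sketch of a virtual tap's copy-and-forward behavior.
# Packet dicts and tool callbacks are hypothetical stand-ins for
# real capture interfaces feeding cloud or on-premises tools.

class VirtualTap:
    def __init__(self, packet_filter):
        self.packet_filter = packet_filter   # which packets to capture
        self.tools = []                      # downstream analysis tools

    def attach_tool(self, tool):
        self.tools.append(tool)

    def observe(self, packet):
        # The tap never alters the original flow: it copies matching packets.
        if self.packet_filter(packet):
            for tool in self.tools:
                tool(dict(packet))           # send a copy, not the original
        return packet                        # traffic passes through untouched

captured = []
tap = VirtualTap(lambda p: p["proto"] == "TCP" and p["dst_port"] == 443)
tap.attach_tool(captured.append)

tap.observe({"proto": "TCP", "dst_port": 443, "len": 1500})
tap.observe({"proto": "UDP", "dst_port": 53, "len": 80})
print(len(captured))  # only the matching HTTPS packet was copied
```

In a real deployment the filter and forwarding destinations are configured in the tap product itself; the point is that capture is selective and non-intrusive.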

One important note: make sure the virtual tap you deploy can scale continuously. Otherwise, you will encounter significant problems as you spin up new apps and services in your cloud network, one of which is lost monitoring data. If a virtual tap is overloaded, it simply cannot collect the requisite data, so that data is lost. At that point, another virtual tap (or set of licenses for the tap) needs to be installed to capture the additional monitoring data. This need for human intervention will throttle your ability to be effective. If the tap can scale continuously, this limitation is removed and the monitoring solution can scale as you spin up more apps and services.

Another major challenge for cloud networks is network and application performance. According to a webinar presented by Viavi, “6 Steps for Maintaining Control in the Cloud,” Gartner Research surveyed IT engineers who had moved workloads to the cloud. The results showed that approximately 53% of respondents were blind to what happens in their cloud network, and 79% were dissatisfied with the monitoring data they get about it. Without proper monitoring data, you cannot accurately understand what your network is doing and how well it is, or is not, performing.

Network performance is obviously important. You need to be able to answer important questions like, “How will the network handle the application data that I currently have?”, “Is the current contracted workspace enough?”, and “Will I encounter performance problems and need to upgrade the CPU and memory in a hurry before I get more user complaints?”

Here are three suggestions to help you with this problem:

  • Test your cloud network for adequate capacity before you migrate from your current on-premises solution
  • Monitor your cloud and on-premises networks during the migration process
  • Continually verify that your cloud provider is delivering upon the contracted SLA

To get the answers you want, the first thing you will want to do is insert virtual taps into your cloud network (as mentioned earlier) so that you get the proper monitoring data you need. The second thing is to create a proactive cloud monitoring solution: a monitoring solution that uses software agents and probes you can place across your cloud and physical infrastructure.

With a proactive monitoring solution, you can use visibility technology to actively test your solution before, during, and after migration. For instance, you can pre-test the network with synthetic traffic to understand how the solution will perform against either specific application traffic or a combination of traffic types. The synthetic traffic gives you the network and/or application load of a “busy hour” and the flexibility to perform evaluations during the network maintenance window.
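As an illustration of composing a synthetic “busy hour,” this small Python sketch splits a target packet rate across a traffic mix and estimates the offered bandwidth per application. The applications, shares, and packet sizes are made-up assumptions; a real test tool would generate this traffic rather than just plan it:

```python
# Sketch of planning a synthetic "busy hour" traffic mix for
# pre-migration testing. The mix below is a hypothetical example.

TRAFFIC_MIX = [
    # (application, share of total load, average packet size in bytes)
    ("voip",  0.20, 214),
    ("web",   0.50, 900),
    ("video", 0.30, 1400),
]

def busy_hour_load(total_pps):
    """Split a target packets-per-second rate across the traffic mix
    and estimate the resulting bandwidth per application."""
    plan = []
    for app, share, pkt_bytes in TRAFFIC_MIX:
        pps = total_pps * share
        mbps = pps * pkt_bytes * 8 / 1e6   # bytes -> bits -> megabits/s
        plan.append({"app": app, "pps": pps, "mbps": round(mbps, 1)})
    return plan

plan = busy_hour_load(total_pps=100_000)
for entry in plan:
    print(entry)
```

Running the plan against the target cloud environment before migration shows whether the contracted capacity can absorb the expected peak mix.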

Once the migration starts, you can measure the ambient latency, throughput, and performance problems on a per-hop basis within the network to see how it is performing. This lets you analyze both your on-premises solution as well as your cloud solution. This can be especially important if you have a hybrid solution right now, and are in the (often multi-year) process of transitioning from the physical to the virtual (cloud) world. A proactive testing and monitoring approach gives you the confidence that your new application rollouts will be successful in either network.
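Per-hop measurements are typically reported as cumulative round-trip times, so the latency of each segment falls out as the difference between successive hops. A small Python sketch of that calculation, with hypothetical RTT values:

```python
# Sketch of deriving per-hop latency from cumulative round-trip times,
# the way a traceroute-style probe reports them. RTTs are hypothetical.

cumulative_rtt_ms = {  # hop number -> cumulative RTT to that hop
    1: 0.4,    # on-prem edge router
    2: 1.1,    # provider ingress
    3: 9.8,    # cloud region gateway
    4: 10.2,   # workload VPC
}

def per_hop_latency(rtts):
    """Return the latency added by each hop (difference between
    successive cumulative RTTs)."""
    deltas = {}
    prev = 0.0
    for hop in sorted(rtts):
        deltas[hop] = round(rtts[hop] - prev, 2)
        prev = rtts[hop]
    return deltas

print(per_hop_latency(cumulative_rtt_ms))
# the largest delta pinpoints the slow segment (hop 3 in this data)
```

Tracking these deltas over time, on both the on-premises and cloud sides, shows exactly which segment degrades when performance problems appear.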

Proactive monitoring also allows you to perform SLA validation during business hours, since it is not service disrupting. This allows you to validate SLA performance at will. The information gathered can then be used to inform management about which goals are being met. If goals are not being met, you can use the impartial data you have collected to have your vendor either fix the observed network problems or give you a discount for failing to meet the agreed-upon SLAs.
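The validation step itself is a simple comparison of measured metrics against the contracted thresholds. A Python sketch of that check, with hypothetical SLA terms and measurements:

```python
# Sketch of checking measured metrics against contracted SLA thresholds.
# The SLA terms and the sample measurements below are hypothetical.

SLA = {
    "latency_ms_max": 50.0,        # contracted maximum latency
    "availability_pct_min": 99.9,  # contracted minimum availability
}

measurements = {"latency_ms": 62.5, "availability_pct": 99.95}

def validate_sla(sla, measured):
    """Return the list of SLA terms the measurements violate."""
    violations = []
    if measured["latency_ms"] > sla["latency_ms_max"]:
        violations.append("latency")
    if measured["availability_pct"] < sla["availability_pct_min"]:
        violations.append("availability")
    return violations

print(validate_sla(SLA, measurements))  # ['latency'] -> grounds for a credit
```

Because the check runs against impartial measured data, the resulting violation list is exactly the evidence to bring to the vendor conversation.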

If you want more information on this topic, there is a list of resources that might also help you, especially if you want more details on the various visibility use cases, which are available in a free book.

Author: Keith Bromley is a senior product marketing manager for Keysight Technologies with more than 20 years of industry experience in marketing and engineering. Keith is responsible for marketing activities for Keysight's network monitoring switch solutions. As a spokesperson for the industry, Keith is a subject matter expert on network monitoring, management systems, unified communications, IP telephony, SIP, wireless and wireline infrastructure. Keith joined Ixia in 2013 and has written many industry whitepapers covering topics on network monitoring, network visibility, IP telephony drivers, SIP, unified communications, as well as discussions around ROI and TCO for IP solutions. Prior to Keysight, Keith worked for several national and international Hi-Tech companies including NEC, ShoreTel, DSC, Metro-Optix, Cisco Systems and Ericsson, for whom he was industry liaison to several technical standards bodies. He holds a Bachelor of Science in Electrical Engineering.

Oldcommguy dubs Keith "One Of The Good Guys" in today's technology!












