
The Cloud - A Reality Check (by Owen O'Neill)

Cloud Computing and the Data Capture Infrastructure 

In its simplest definition, Cloud Computing is “on-demand access to virtualized IT resources that are housed outside of your own data center, shared by others, simple to use, paid for via subscription, and accessed over the Web.”1

Despite the obvious appeal of moving significant capital expenses to the operating side of the ledger, organizations considering a shift to this technology must examine a wealth of issues related to the capture and monitoring of database and application traffic. Troubleshooting, security, performance monitoring, capacity planning and compliance in the cloud are some of the new challenges this technology creates. Getting access to the network traffic to perform this analysis is the job of the Data Capture Infrastructure (DCI). The DCI is a system designed to copy traffic running through a computer network and send that traffic to probes or analyzers that can improve network efficiency or security.
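At its core, that copy-and-forward job can be illustrated in a few lines of code. The sketch below is purely illustrative, standing in for what dedicated tap and aggregation hardware does at far higher speeds and reliability; it assumes the Python scapy library is available, and the interface names are placeholders.

```python
# Illustrative sketch of the DCI idea: copy frames arriving on a monitored
# link and re-emit them toward an analysis tool. Requires scapy and capture
# privileges; "eth1" and "eth2" are placeholder interface names.
from scapy.all import sniff, sendp

MONITOR_IFACE = "eth1"   # receives tapped or SPAN-mirrored traffic
TOOL_IFACE = "eth2"      # connects to the probe or protocol analyzer

def forward(frame):
    # Send an unmodified copy of each captured frame to the tool port.
    sendp(frame, iface=TOOL_IFACE, verbose=False)

# Capture continuously; store=False keeps frames from piling up in memory.
sniff(iface=MONITOR_IFACE, prn=forward, store=False)
```

A software loop like this cannot keep pace with fully loaded gigabit or 10-gigabit links, which is precisely why purpose-built tap, aggregation and matrix switch hardware exists.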

Looking beyond the hype and the promises of reduced costs and increased efficiencies, network administrators, operations and security engineers, managers, and other decision makers responsible for ensuring reliability and security in the enterprise environment have fundamental questions to answer:

  • Will the requirements for troubleshooting network operations issues decrease?
  • Who is responsible for ensuring application and database performance?
  • Will the cloud provider ensure security monitoring and regulatory compliance?
  • Are changes to the Data Capture Infrastructure (network monitoring access systems) required?

Is “the Cloud” Really That New or Different?

Cloud Computing can be thought of as a model in which clients, whether individuals or groups of users, remotely access applications, databases, and file storage systems housed on servers. Cloud service providers, the best-known examples being Google, Amazon and IBM, sell these services on a fee basis that charges for the number of applications and the amount of usage and storage. The number of users, storage space, and required performance levels can be adjusted dynamically in real time because the servers and applications are virtual in nature. These services are billed on actual usage, much as energy suppliers sell electricity. This contrasts with the traditional data network model, in which a business purchases and deploys its own physical servers, buys software subject to annual license renewal fees, and must use its own resources to deploy, maintain and expand this infrastructure as needs change. Services such as personal Web-based email, online gaming, and video streaming could technically be described as “Cloud based”, but the phrase is more accurately used to describe the new generation of commercial application and data storage delivery and access systems.

The growth of network virtualization has paralleled that of Cloud Computing. Defined as a software layer that allows the creation of multiple “virtual” devices within a single physical server, virtualization lets the large storage capacity and processing power of one machine be divided into multiple “virtual servers”. Recent developments have extended this to the point that users with a thin client can perform all their activities on a “virtual desktop” housed remotely on a server.

The user can connect via a traditional wired network connection, but it is increasingly common for secure wireless Internet connections to be used instead – especially for telecommuting employees. Providers of Cloud services have leveraged Virtualization to provide the efficiency and flexibility required for the new data network model. 

Virtualization consolidates and increases the efficiency of server and storage space - it does not reduce the number of connections required to the users or reduce bandwidth requirements!

As with so many “emerging” technologies, the buzz phrase “Cloud Computing” is widely abused in technology marketing. Some critics claim that it’s just a new name for an old concept. In a well-publicized September 2008 speech, Oracle CEO Larry Ellison derided the trend, comparing it to the fashion cycles that typify the clothing industry: “We'll make cloud computing announcements because if orange is the new pink we will make orange clouds.” Nine months later, he announced that Oracle would begin venturing into that technology and start offering Cloud Computing services for its well-known database software.2 This can be seen less as a contradiction and more as a recognition that the technology has evolved rapidly, primarily due to virtualization and widespread access to reasonably priced broadband connectivity.

The flexibility and cost efficiencies that Cloud Computing offers are proving to have strong appeal in today’s competitive global economy. According to research done by the Gartner Group, overall world-wide revenue for these services was $56.3 billion in 2009, an increase of more than 20% from 2008. Annual revenues are projected to reach $150.1 billion in 2013.3

Changes in Traffic Patterns 

A migration to Cloud services will dramatically increase the need for bandwidth at the network edge. Service provider connections that might once have provided adequate bandwidth over T-1, DS-3 or OC-3 links may need to be upgraded to Gigabit/OC-48 or even 10 Gigabit/OC-192, either as direct connections or via a Metro SONET ring. Faster links at the edge may also require increased use of packet shapers: inline devices that categorize and optimize traffic based on class and type.
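To put the jump in perspective, the quick back-of-the-envelope sketch below compares nominal line rates for the links mentioned above, using the standard published rates and ignoring framing overhead.

```python
# Back-of-the-envelope comparison of nominal link rates (Mbps), ignoring
# SONET/Ethernet framing overhead. Values are the standard line rates.
LINK_RATES_MBPS = {
    "T-1": 1.544,
    "DS-3": 44.736,
    "OC-3": 155.52,
    "Gigabit Ethernet": 1000.0,
    "OC-48": 2488.32,
    "10 Gigabit Ethernet": 10000.0,
    "OC-192": 9953.28,
}

legacy, upgraded = "OC-3", "OC-192"
factor = LINK_RATES_MBPS[upgraded] / LINK_RATES_MBPS[legacy]
print(f"{upgraded} carries roughly {factor:.0f}x the traffic of {legacy}")
# -> OC-192 carries roughly 64x the traffic of OC-3
```

Any monitoring or capture device sitting on those upgraded links must be able to keep up with the same multiple.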

In addition to ERP systems (e.g. Oracle’s PeopleSoft), cloud-based CRM systems such as Salesforce.com are already in widespread use and rapidly growing in popularity. As organizations move toward these platforms and the number of telecommuters continues to grow, the bandwidth required for VPN connections will increase dramatically. This is yet another ingress and egress point that must be monitored and secured (and one that is not the responsibility of the cloud services provider).
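One simple, illustrative way to start sizing that VPN load is to measure how much of an existing edge capture is IPsec traffic. The sketch below assumes the Python scapy library is available; “edge.pcap” is a placeholder file name, and a real assessment would also account for SSL VPN traffic.

```python
# Illustration: estimate the IPsec VPN share of a saved capture.
# Requires scapy; "edge.pcap" is a placeholder file name.
from scapy.all import rdpcap, IP, UDP

packets = rdpcap("edge.pcap")
total_bytes = sum(len(p) for p in packets)
vpn_bytes = 0

for p in packets:
    if IP not in p:
        continue
    if p[IP].proto == 50:                 # ESP, the encrypted IPsec payload
        vpn_bytes += len(p)
    elif UDP in p and (p[UDP].dport in (500, 4500) or p[UDP].sport in (500, 4500)):
        vpn_bytes += len(p)               # IKE negotiation / NAT-T encapsulation

if total_bytes:
    print(f"IPsec share of capture: {100.0 * vpn_bytes / total_bytes:.1f}%")
```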

Operations: Networks Will Still Break!

Reliance on an outside entity for the purchase, maintenance and operation of mission-critical server farms will yield substantial savings. The number of network operations troubleshooting personnel can also be expected to decrease, but the responsibility for ensuring optimal performance and minimal latency on the internal network still rests with the owners. The expert operations engineers remaining on the team will be asked to keep “doing more with less.” In many cases these critical human resources will be available only at key locations of a distributed national or global network. Immediate and seamless remote access to distributed troubleshooting equipment such as protocol analyzers and bulk data recorders will continue to be a mission-critical requirement.

Performance: “Stop the Finger Pointing”

It may be a clichéd phrase, but like all such expressions it has an origin in historical fact. Seasoned networking veterans can easily recall the days when WAN service providers would invariably point the finger back at the customer if a service slowdown was reported. The increased use of protocol analyzers and standardization on the .cap capture file format brought about a fundamental change. Customers began taking captures at both ends of their own internal networks to validate performance and latency between access points and servers, correlating that data against the baselines already established for normal transit time between locations, and then presenting quantifiable evidence to the service providers. Performance improved, and protocol analyzers became an industry standard.
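A minimal sketch of that two-ended technique is shown below: it matches packets between two capture files by their IP identification field and reports the timestamp difference, which approximates transit time when the capture clocks are synchronized. It assumes the Python scapy library; the file names are placeholders, and real workflows typically match on richer keys such as TCP sequence numbers or full flow tuples.

```python
# Minimal illustration of two-ended capture correlation.
# Requires scapy and synchronized clocks on both capture hosts.
# "site_a.cap" and "site_b.cap" are placeholder file names.
from scapy.all import rdpcap, IP

site_a = rdpcap("site_a.cap")   # capture taken near the client site
site_b = rdpcap("site_b.cap")   # capture taken near the server site

# Index site B packets by (src, dst, IP ID) so site A packets can be matched.
seen_at_b = {}
for pkt in site_b:
    if IP in pkt:
        key = (pkt[IP].src, pkt[IP].dst, pkt[IP].id)
        seen_at_b.setdefault(key, float(pkt.time))

delays = []
for pkt in site_a:
    if IP in pkt:
        key = (pkt[IP].src, pkt[IP].dst, pkt[IP].id)
        if key in seen_at_b:
            delays.append(seen_at_b[key] - float(pkt.time))

if delays:
    median = sorted(delays)[len(delays) // 2]
    print(f"matched {len(delays)} packets, median transit time {median * 1000:.2f} ms")
```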

Service Level Agreements will be in place for cloud services, but the prudent owner should always assume that providers will claim innocence until proven guilty. Even a minor impairment of mission-critical operations can cripple a company’s revenue flow, customer relationships and public reputation. Ensuring that timely and accurate captures can immediately pinpoint whether issues are internal or external will be of paramount importance.

Security: Less of a Concern?

The requirements for security inside and at the egress points of a cloud services provider’s own network are no different from those in the traditional enterprise: software, hardware and security personnel still monitor the security and integrity of data. Benefits to the owner accrue from the provider’s economy of scale and flexibility, which can reduce the overall cost of security monitoring. Compliance may even be an easier goal to reach, as the best Cloud providers will be SAS 70 Type II certified and also meet PCI DSS and HIPAA standards by offering secure database servers, hardware-based firewalls and customized network implementations. But is this enough?

A fundamental rule of network security is to provide multiple levels or layers that control access, enforce rules, audit and monitor to ensure that these practices are continually effective.  The SLA with a provider may cover many of these areas but the secure data is still traversing the internal enterprise network – even though the host database and application servers may be housed elsewhere.

No matter where servers reside and how well secured – the data must still get to and from your users!

Research indicates that 60 percent or more of data breaches stem from events or sources inside the enterprise network rather than from external threats.4

Monitoring to ensure that internal security practices are properly enforced remains the responsibility of each organization’s own security team. Internal firewalls and IDS will still be required. The decreased traffic on some internal links may provide an ideal opportunity to widen the scope of visibility for the security tools already used for this monitoring. Legacy edge security tools may enjoy an extended life, redeployed against a shared group of targeted internal data sources, while new higher-speed tools are acquired for the network edge.

Tying It All Together

An intelligently designed DCI is required to ensure a smooth transition to Cloud Computing. It must provide the highest hardware reliability standards, afford easy remote access, allow leveraging of the investment in costly new tools, and offer the flexibility required as more critical applications and data storage move outside the confines of the traditional enterprise.  A properly designed DCI may include a variety of products such as taps, bypass switches, and matrix switches.

Key areas to assess when migrating to a cloud environment (a brief illustration follows the list):

  • 10G tapping, aggregation and filtering capability to be added at the edge
  • Bypass switches to provide 100% uptime assurance for newly added packet shapers
  • Matrix switches for affordable, scalable deployment of distributed protocol analyzers
  • User-configurable taps and aggregators for redeployment of existing tools in areas of increased activity or interest (e.g. VPN gateways and other heavily utilized user links)
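As a purely hypothetical illustration (not any vendor's configuration language), the fragment below sketches how such a capture layer might be described: each tapped link is mapped to the tool that should see its traffic, with an optional filter so existing lower-speed tools receive only the slice they can handle. All names and filter strings are placeholders.

```python
# Hypothetical description of a data capture layer; names, links, and filter
# strings are placeholders, not a real product's configuration format.
CAPTURE_FABRIC = [
    {"source": "edge-10G-tap",         "tool": "edge-ids",          "filter": None},
    {"source": "vpn-gateway-tap",      "tool": "protocol-analyzer", "filter": "udp port 500 or udp port 4500 or ip proto 50"},
    {"source": "packet-shaper-bypass", "tool": "bulk-recorder",     "filter": None},
    {"source": "internal-db-link",     "tool": "legacy-1G-ids",     "filter": "tcp port 1521"},  # keep the older tool within its rated speed
]

for rule in CAPTURE_FABRIC:
    flt = rule["filter"] or "all traffic"
    print(f"{rule['source']:22s} -> {rule['tool']:18s} ({flt})")
```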

Tim Croftom - Datacom

 

Author Profile: Owen O'Neill is currently a Senior Sales Engineer at Datacom Systems. Prior to Datacom, Owen was employed by the Cornell University Store, specializing in engineering and architectural supplies. Owen is pursuing a BS in Applied Computing from SUNY Empire State College, and is a coffee and travel enthusiast.

 

Since 1992, Datacom Systems has provided a full product line for passive test and monitoring access, delivering traffic visibility into network links and enabling customers to access critical data from anywhere in their network. With tens of thousands of systems installed globally, Datacom Systems provides best-of-breed data capture infrastructure for all major troubleshooting, security, and application monitoring tools.

1. Foley, John. (2008, September). A Definition Of Cloud Computing. Retrieved from http://www.informationweek.com/cloudcomputing/blog/archives/2008/09/a_definition_of.html

2. Hodgson, Jessica. (2009, June). Oracle CEO Ellison Changes Tack on Cloud Computing. Retrieved from http://online.wsj.com/article/SB124580329161844787.html

3. Gartner Says Worldwide Cloud Services Revenue Will Grow 21.3 Percent in 2009. (2009, March 26). Retrieved from http://www.gartner.com/it/page.jsp?id=920712

4. Bachman, Cooper (CSO). (2008, August). Reflections on a New Internal Data Theft Study. Retrieved from http://www.infoworld.com/d/security-central/reflections-new-internal-data-theft-study-986
