Four Things to Consider Before You Move Everything to the Cloud
Organizations everywhere are moving pieces, if not all, of their workloads to public clouds. This is understandable, as there are clear benefits to the strategy. At the same time, a public cloud instance does not work the same way as a physical on-premises network. This means that when you make your move to the cloud, you need to understand that it is not simply a “lift and shift” endeavor. Making that assumption can prove costly. A new whitepaper (Top Four Considerations When Migrating to Public Cloud) provides an in-depth illustration of why.
Instead of hoping your cloud migration works, a solid approach would be to ask yourself the following four questions before you create this new architecture:
- What is the extent and timeframe of your migration strategy?
- How will you handle the decrease in network visibility as you move to the cloud?
- Will you need to deploy inline security and monitoring tools?
- How do you plan to accurately gauge network performance?
These items present serious challenges for businesses considering cloud deployments. However, there are viable solutions and processes that mitigate these considerations to help make cloud migration as beneficial as possible. Let’s explore the four questions further.
Migration Strategy and Planning Are Critical for Success
Data from surveys show that many IT professionals are disappointed with their leap to the cloud. A survey performed by Dimensional Research showed that 9 out of 10 respondents have seen a direct negative business impact due to lack of visibility into public cloud traffic. This includes application and network troubleshooting and performance issues, as well as delays in resolving security alerts stemming from a lack of visibility.
Sanjit Ganguli of Gartner Research also conducted polling at the Gartner December 2017 Data Center Conference and found that 62 percent were not satisfied with the monitoring data they get from their cloud vendor now that they have moved to the cloud. In addition, 53 percent said that they were blind to what happens in their cloud network.
One common misconception is that everything in your physical network has a cloud equivalent. This is not the case. You are moving from an environment where you have full control to an environment where you have limited control. The situation is akin to moving from owning a house to renting one. You may still be living in a house, but you are now subject to someone else’s rules, and you pay them for the privilege.
Cloud Networks Do Not Offer Native Visibility
During and after the migration process, you will not have clear visibility into the network layer. Cloud-based service providers only expose information about the cloud network and some parts of the operating system: summarized metadata on cloud-centric resources (network, compute, storage). This includes high-level cloud data (e.g., CPU performance, memory consumption) and some log data.
What the cloud providers and other cloud tools do not provide is network packet data. This data is absolutely necessary for security forensics and for troubleshooting via root cause analysis. Data loss prevention (DLP) tools and most application performance management (APM) tools depend upon packet data for problem analysis. Typical cloud tools provide limited data that is often time-delayed, which can dramatically impact tool performance. For instance, tactical data loses 70% of its performance monitoring value after 30 minutes.
In addition, cloud providers do not provide user experience data or the ability to watch conversations. Specifically, this means that you cannot accurately gauge customer quality of experience based upon cloud provider delivered data. Likewise, the flow data provided lets you see who the talkers are, but it reveals nothing about the details of the conversation.
An easy remedy for this issue is to add cloud-based monitoring data sensors (also called virtual taps) to your cloud network. These sensors replicate copies of the desired data packets and send them to your troubleshooting, security, and/or performance tools. This gives your tools the data they need to perform their functions.
One key factor, though, is that the data sensors need to scale automatically as needed. The whole reason you have decided to move to the cloud is to take advantage of its elastic nature. As cloud instances get spun up, the sensors’ capacity needs to scale along with them. As your cloud solution scales, your visibility solution needs to scale with it, automatically and programmatically. Avoid virtual tap solutions that require manual intervention to load licenses or add instances of the virtual taps, as this is a productivity killer.
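The scaling requirement above can be sketched as a simple reconciliation loop: compare the set of running workload instances against the set of instances that already have a virtual tap attached, and attach taps to the difference. The function names and data shapes below are hypothetical illustrations, not any particular vendor’s or cloud provider’s API.

```python
def plan_tap_attachments(running_instances, tapped_instances):
    """Return the instance IDs that still need a virtual tap.

    A real controller would query the cloud provider's API for running
    instances and the tap vendor's API for attached sensors; both are
    stubbed here as plain collections of instance IDs (hypothetical).
    """
    return sorted(set(running_instances) - set(tapped_instances))


def reconcile(running_instances, tapped_instances, attach_tap):
    """Attach a tap to every untapped instance; return the new tapped set."""
    tapped = set(tapped_instances)
    for instance_id in plan_tap_attachments(running_instances, tapped):
        attach_tap(instance_id)  # vendor-specific attach call (hypothetical)
        tapped.add(instance_id)
    return tapped


# Example: two instances were spun up since the last reconciliation pass.
attached = []
new_tapped = reconcile(
    running_instances=["i-01", "i-02", "i-03"],
    tapped_instances=["i-01"],
    attach_tap=attached.append,
)
```

Running such a loop on a schedule (or triggering it from the cloud provider’s scale-out events) is what makes the visibility layer track the elastic workload without manual intervention.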
Inline On-Premises Security and Monitoring Tools Do Not Work the Same in the Cloud
Due to the nature of public clouds, inline tools are not an option. Public cloud vendors do not allow customers access to their network and system layers to deploy inline security tools (e.g., an intrusion prevention system (IPS), data loss prevention (DLP), or web application firewall (WAF)), as this can create a security risk to their network. So, if you plan to deploy inline security protection, you should understand that it will not be the “bump-in-the-wire” configuration that you are used to with on-premises devices, like a typical IPS. When planning your security architecture, make sure you talk to a security vendor that understands how the cloud architecture needs to be configured.
Lack of inline tool deployment obviously creates a risk to your cloud instance that you will need to address. So, how do you secure your environment now? First, you need to deploy an architecture that enables you to be proactive and stay ahead of the bad guys. This includes visibility components (like sensors) that allow you to capture security and monitoring data of interest for analysis.
A second approach is to purchase purpose-built security tools for the cloud. This includes encrypting data at rest and also active threat detection tools like a SIEM or IDS. These tools provide out-of-band anomaly analysis. However, this is still not the same as deploying an inline IPS solution, which would have the ability to investigate and stop threats in real-time. So, trade-offs to your security risk plan will need to be made.
A third option to mitigate the threat is a hybrid architecture that lets you keep your existing security tools on the physical premises to inspect high-risk data (or even general data, if you want). Depending on your risk plan, this may provide the protection you need and reduce business risk to an acceptable level. Note that most cloud computing vendors charge you to export data. However, the data bandwidth costs can be limited by transferring only the relevant data to the on-premises tools.
Cloud Performance Measurement Is Vendor Dependent
Another important question to answer is how you plan to accurately gauge the impact of poor network performance on your cloud-based application workloads. Performance issues are a real consideration for new cloud networks. During and after the migration process, you will not have clear network performance data within your environment; it is up to you to implement this visibility if you want it. Specifically, this means that you cannot natively tell how your applications, or even your cloud instance, are truly performing. Is the instance meeting or exceeding the service level agreement (SLA) that was put in place? Your cloud vendor will probably tell you that it is, but you have no independent data for a “check and balance” on what they are delivering.
Business intelligence applications are one example of a problem area. After porting the service, you may find that it runs slower, often only after you receive multiple customer complaints. The usual response is to provision more CPU, RAM, and interconnect bandwidth, which creates an unplanned and perpetual cost increase.
During the migration process, proactively monitor both your on-premises and cloud environments. Many organizations that blindly port services and applications to the cloud quickly run into cloud network issues, particularly performance problems.
Proactive monitoring allows you to accurately understand what is happening and determine where problems are located within your cloud network. As mentioned earlier, once you migrate to the cloud, application performance monitoring will become difficult if you do not properly plan for it. You will not have the data you need natively from the cloud service provider. This loss of data needs to be planned for so that it can be remedied or mitigated.
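An independent “check and balance” on an SLA can be as simple as collecting your own response-time samples from synthetic probes and comparing a percentile against the SLA target. The sketch below operates on an already-collected list of latency samples; the probe itself (an HTTP request, a ping, etc.), the 200 ms target, and the 95th-percentile criterion are assumptions chosen for illustration.

```python
import math


def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]


def sla_met(samples, target_ms=200.0, p=95):
    """True if the p-th percentile latency is within the SLA target.

    target_ms and p are illustrative assumptions; substitute the
    numbers from your actual service level agreement.
    """
    return percentile(samples, p) <= target_ms


# Hypothetical probe results: mostly healthy, with one long outlier.
latencies_ms = [120, 135, 140, 150, 155, 160, 180, 190, 210, 450]
p95 = percentile(latencies_ms, 95)
```

Because the samples are gathered by your own probes rather than reported by the vendor, a result like the one above gives you independent evidence to bring to an SLA discussion, regardless of what the provider’s dashboard shows.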
If you want more information on this topic or network visibility solutions, check out the whitepaper Top Four Considerations When Migrating to Public Cloud and the ebook The Definitive Guide to Network Visibility Use Cases.
Author: Keith Bromley is a senior product marketing manager for Keysight Technologies with more than 20 years of industry experience in marketing and engineering. Keith is responsible for marketing activities for Keysight's network monitoring switch solutions. As a spokesperson for the industry, Keith is a subject matter expert on network monitoring, management systems, unified communications, IP telephony, SIP, wireless and wireline infrastructure. Keith joined Ixia in 2013 and has written many industry whitepapers covering topics on network monitoring, network visibility, IP telephony drivers, SIP, unified communications, as well as discussions around ROI and TCO for IP solutions. Prior to Keysight, Keith worked for several national and international Hi-Tech companies including NEC, ShoreTel, DSC, Metro-Optix, Cisco Systems and Ericsson, for whom he was industry liaison to several technical standards bodies. Keith holds a Bachelor of Science in Electrical Engineering.
Oldcommguy dubs Keith "One Of The Good Guys" in today's technology!