Written by Roark Pollock and Presented by Ziften CEO Chuck Leaver
According to Gartner, the public cloud services market exceeded $208 billion last year (2016), representing about 17% growth year over year. That is impressive when you consider the ongoing data security concerns most cloud consumers still have. Another particularly interesting Gartner finding is the common practice among cloud consumers of contracting services from multiple public cloud providers.
According to Gartner, “most organizations are already using a combination of cloud services from different cloud providers.” While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create additional complexity in monitoring activity across an organization’s increasingly distributed IT landscape.
While some service providers offer better visibility than others (for instance, AWS CloudTrail can monitor API calls across the AWS infrastructure), companies must understand and address the visibility challenges that come with moving to the cloud, regardless of which cloud provider or providers they work with.
Unfortunately, the ability to monitor application and user activity, and network interactions, from each VM or endpoint in the cloud is limited.
Regardless of where computing resources reside, organizations must answer the question, “Which users, devices, and applications are interacting with each other?” Organizations need visibility across the infrastructure in order to:
- Quickly identify and prioritize problems
- Speed root cause analysis and identification
- Reduce the mean time to resolve problems for end users
- Rapidly detect and eliminate security threats, minimizing overall dwell times
Conversely, poor visibility, or poor access to visibility data, can reduce the effectiveness of existing security and management tools.
Businesses accustomed to the ease, maturity, and reasonably low cost of monitoring physical data centers are likely to be disappointed with their public cloud alternatives.
What has been missing is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.
NetFlow, of course, has had twenty years or so to become a de facto standard for network visibility. A typical implementation involves monitoring traffic and aggregating flows at network choke points, collecting and storing flow data from multiple collection points, and analyzing that flow data.
Flows comprise a basic set of source and destination IP addresses along with port and protocol information, typically gathered from a router or switch. NetFlow data is relatively inexpensive and easy to collect, offers nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
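The flow aggregation described above can be sketched in a few lines. This is an illustrative model only, not an actual NetFlow export format: it groups individual packets by the classic 5-tuple (source/destination IP, source/destination port, protocol) and accumulates per-flow packet and byte counters, which is conceptually what a router or switch exporter does.

```python
from collections import defaultdict

def flow_key(pkt):
    """The classic 5-tuple that identifies a flow."""
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

def aggregate(packets):
    """Aggregate individual packets into per-flow packet/byte counters."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        rec = flows[flow_key(pkt)]
        rec["packets"] += 1
        rec["bytes"] += pkt["length"]
    return dict(flows)

# Two packets on the same connection collapse into one flow record.
packets = [
    {"src_ip": "10.0.0.5", "dst_ip": "93.184.216.34",
     "src_port": 49152, "dst_port": 443, "proto": "tcp", "length": 1500},
    {"src_ip": "10.0.0.5", "dst_ip": "93.184.216.34",
     "src_port": 49152, "dst_port": 443, "proto": "tcp", "length": 600},
]
flows = aggregate(packets)
```

The compactness of this record (a handful of fields per flow, however many packets it covers) is why NetFlow data is so cheap to collect and store.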
Most IT staffs, especially networking and some security teams, are extremely comfortable with the technology.
But NetFlow was designed to solve what has become a rather narrow problem, in the sense that it gathers only network data, and does so at a limited number of potential locations.
To make better use of NetFlow, two key changes are needed.
NetFlow to the edge: First, we need to expand the useful deployment scenarios for NetFlow. Instead of collecting NetFlow only at network choke points, let’s extend flow collection to the edge of the network (clients, cloud, and servers). This would considerably broaden the overall view that any NetFlow analytics provide.
It would also enable companies to leverage and enhance existing NetFlow analytics tools to eliminate the ever-growing blind spot around public cloud activity.
Rich, contextual NetFlow: Second, we need to use NetFlow for more than simple network visibility. Instead, let’s use an extended version of NetFlow that includes information on the device, application, user, and binary responsible for each monitored network connection. That would allow us to quickly link every network connection back to its source.
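Conceptually, the extension amounts to enriching the 5-tuple with endpoint context. The sketch below models this idea; the field names are illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class Flow:
    """Classic NetFlow-style 5-tuple flow record."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

@dataclass
class ContextualFlow(Flow):
    """Extended record with the endpoint context described above.

    Field names here are hypothetical, chosen for illustration.
    """
    device: str       # host that originated the connection
    application: str  # process responsible for the connection
    user: str         # user account running that process
    binary_hash: str  # hash of the executable, for attribution

rec = ContextualFlow(
    src_ip="10.0.0.5", dst_ip="93.184.216.34",
    src_port=49152, dst_port=443, proto="tcp",
    device="laptop-042", application="chrome.exe",
    user="alice", binary_hash="sha256:ab12...",
)
enriched = asdict(rec)  # ready for export alongside the plain 5-tuple
```

With those extra fields attached, any flagged connection can be traced directly to a specific device, process, user, and binary, rather than just an IP address and port.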
In fact, these two changes to NetFlow are precisely what Ziften has achieved with ZFlow. ZFlow provides an extended version of NetFlow that can be deployed at the network edge, including as part of a VM or container image, and the resulting data can be consumed and analyzed with existing NetFlow analysis tools. In addition to standard NetFlow/IPFIX (Internet Protocol Flow Information eXport) network visibility, ZFlow delivers extended visibility by adding information on the device, application, user, and binary for every network connection.
Ultimately, this allows Ziften ZFlow to provide end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots such as east-west traffic in data centers and enterprise cloud deployments.