Written by Dr. Al Hartmann and presented by Ziften CEO Chuck Leaver
The traditional network perimeter is dissolving quickly. So where does that leave the endpoint?
Investment in perimeter security, defined by firewalls, managed gateways, and intrusion detection/prevention systems (IDS/IPS), is changing. These investments are being questioned, because the returns no longer justify the cost and complexity of building, maintaining, and defending these aging protections.
More than that, the paradigm has shifted: employees no longer work solely in the office. Many log time from home or out in the field, and neither location sits under the umbrella of a corporate firewall. Instead of keeping the bad guys out, firewalls often have the opposite effect: they prevent authorized people from being productive. The irony? They create a safe haven where attackers can breach, hide for weeks, and then pivot to critical systems.
So What Exactly Has Changed?
The endpoint has become the last line of defense. With the aforementioned failure of perimeter defense and a "mobile everywhere" workforce, we must now establish trust at the endpoint. Easier said than done, however.
In the endpoint space, identity & access management (IAM) systems are not the silver bullet. Even innovative companies like Okta, OneLogin, and cloud proxy vendors such as Blue Coat and Zscaler cannot overcome one simple truth: trust goes beyond simple identification, authentication, and authorization.
Encryption is a second attempt at securing entire libraries and individual assets. In the most recent (2016) Ponemon study on data breaches, encryption saved only 10% of the cost per breached record (from $158 to $142). This isn't the panacea some make it appear.
The Whole Picture Is Changing
Organizations must be prepared to embrace new paradigms and confront new attack vectors. While companies must grant access to trusted groups and individuals, they need a better way to do it.
Critical business systems are now accessed from anywhere, at any time, not just from desks in corporate office buildings. And contractors (the contingent workforce) are rapidly approaching more than 50% of the total enterprise workforce.
On endpoint devices, the binary is largely the problem. Probably-benign events, such as an executable crash, could indicate something simple, like the Windows 10 Desktop Window Manager (DWM) restarting. Or it could signal a deeper problem, such as a malicious file or the early signs of an attack.
Trusted access doesn't address this vulnerability. According to the Ponemon Institute, between 70% and 90% of all attacks are caused by human error, social engineering, or other human factors. That demands more than simple IAM; it demands behavioral analysis.
Rather than making the good better, perimeter and identity-access vendors have made the bad faster.
When and Where Does the Good News Start?
Stepping back a little: Google (Alphabet Corp.) announced a perimeter-less network design, BeyondCorp, in late 2014, and has made considerable progress since. Other organizations, from corporations to governments, have done the same (quietly and less dramatically), but BeyondCorp has demonstrated its efforts to the world. The design philosophy, endpoint plus (public) cloud displacing the cloistered enterprise network, is the crucial idea.
This reframes the entire view of the endpoint, be it a laptop, desktop, workstation, or server, as subservient to the corporate/enterprise/private network. The endpoint truly is the last line of defense, and it must be protected, yet also report its activity.
Unlike the traditional perimeter security model, BeyondCorp doesn't gate access to tools and services based on a user's physical location or the originating network; instead, access policies are based on information about a device, its state, and its associated user. BeyondCorp considers both external networks and internal networks to be completely untrusted, and gates access to applications by dynamically asserting and enforcing levels, or "tiers," of access.
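To make the tiered, device-trust idea concrete, here is a minimal sketch of how access might be gated by device state rather than network location. The tier names, device attributes, and thresholds below are illustrative assumptions; Google does not publish BeyondCorp's actual policy schema.

```python
from dataclasses import dataclass

# Illustrative trust tiers, lowest to highest (hypothetical names).
TIERS = ["untrusted", "basic", "privileged", "highly_privileged"]

@dataclass
class Device:
    managed: bool          # enrolled in the device inventory?
    encrypted_disk: bool   # full-disk encryption enabled?
    patched: bool          # OS/security patches current?
    cert_valid: bool       # device certificate checks out?

def assign_tier(device: Device) -> str:
    """Map observed device state to an access tier."""
    if not (device.managed and device.cert_valid):
        return "untrusted"
    if not device.encrypted_disk:
        return "basic"
    if not device.patched:
        return "privileged"
    return "highly_privileged"

def may_access(device: Device, required_tier: str) -> bool:
    """Gate an application behind a minimum tier, regardless of
    whether the request came from an internal or external network."""
    return TIERS.index(assign_tier(device)) >= TIERS.index(required_tier)
```

Under this model a payroll application might demand "highly_privileged" while a cafeteria menu requires only "basic"; a device that falls behind on patches silently loses access to the sensitive tier.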
On its face, this seems innocuous. In reality it is a radical new model, though an imperfect one. The access criteria have shifted from network addresses to device trust levels, and the network is heavily segmented by VLANs, rather than a centralized design vulnerable to breaches, hacks, and human-level threats (the "soft chewy center").
The upside of the story? Breaching the perimeter becomes extremely difficult for would-be attackers, and network pivoting, a technique attackers commonly use today, becomes next to impossible once past the reverse proxy (demonstrating that firewalls are often better at keeping the bad guys in than at letting the good guys out). The inverse of the model applies to Google's cloud servers, presumably tightly managed inside the perimeter, versus client endpoints, which are all out in the wild.
Google has made solid improvements on proven security techniques, especially 802.1X and RADIUS, and bundled them into the BeyondCorp architecture, including strong identity and access management (IAM).
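The 802.1X/RADIUS building block in question is dynamic VLAN assignment: a device that authenticates with a valid certificate is placed on a trusted VLAN, while everything else lands on a remediation VLAN. A hedged sketch, using the standard RFC 3580 tunnel attributes; the FreeRADIUS `users`-file form, VLAN IDs, and fall-through policy here are illustrative only, not Google's configuration:

```
# Device passes 802.1X/EAP-TLS certificate auth -> trusted VLAN 100
DEFAULT  EAP-Type == TLS
         Tunnel-Type = VLAN,
         Tunnel-Medium-Type = IEEE-802,
         Tunnel-Private-Group-Id = "100"

# Everything else -> remediation VLAN 666
DEFAULT
         Tunnel-Type = VLAN,
         Tunnel-Medium-Type = IEEE-802,
         Tunnel-Private-Group-Id = "666"
```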
Why Is This Important? And What Are the Gaps?
Ziften believes in this approach because it emphasizes device trust over network trust. However, Google doesn't explicitly describe a device security agent or highlight any client-side monitoring (apart from very strict configuration control). While there may be reporting and forensics, this is something every company needs visibility into, because it's a matter of when, not if, bad things will happen.
As Google reports: "Since implementing the initial phases of the Device Inventory Service, we've ingested billions of deltas from over 15 data sources, at a typical rate of about three million per day, totaling over 80 terabytes. Keeping historical data is essential in allowing us to understand the end-to-end lifecycle of a given device, track and analyze fleet-wide trends, and perform security audits and forensic investigations."
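The core mechanism behind those numbers, merging per-source change records ("deltas") into a device inventory while keeping the raw history, can be sketched as follows. Field names and sources are hypothetical; the real service reconciles feeds from more than 15 systems of record.

```python
from datetime import datetime, timezone

inventory: dict[str, dict] = {}   # device_id -> latest merged record
history: list[dict] = []          # every raw delta, kept for audits/forensics

def ingest_delta(device_id: str, source: str, fields: dict) -> dict:
    """Merge one observed change into the device's record, retaining
    the raw delta so the device's lifecycle can be replayed later."""
    delta = {
        "device_id": device_id,
        "source": source,            # e.g. asset DB, patch system, AV
        "observed": datetime.now(timezone.utc).isoformat(),
        "fields": fields,
    }
    history.append(delta)            # historical data for trend analysis
    record = inventory.setdefault(device_id, {"device_id": device_id})
    record.update(fields)            # simplistic last-writer-wins merge
    return record

# Two sources report on the same laptop; the merged record reflects both.
ingest_delta("laptop-042", "asset_db", {"owner": "alice"})
merged = ingest_delta("laptop-042", "patch_mgmt", {"os_patch_level": "2016-08"})
```

A real system would need conflict resolution smarter than last-writer-wins, which is part of why the process is as data-heavy as Google describes.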
This is an expensive, data-heavy process with two shortcomings. On ultra-high-speed networks (like those of Google, universities, and research organizations), ample bandwidth allows this kind of communication without flooding the pipes. The first problem: in more pedestrian corporate and government settings, it would cause significant user disruption.
Second, machines must have the horsepower to continuously collect and transmit data. While most employees would be delighted to have current developer-class workstations at their disposal, the cost of the hardware and the process of refreshing it regularly make this prohibitive.
A Lack of Lateral Visibility
Very few products actually generate "enhanced" netflow, augmenting conventional network visibility with rich, contextual data.
Ziften's trademarked ZFlow™ provides network flow data generated at the endpoint, which otherwise can be obtained only through brute force (human labor) or expensive network appliances.
ZFlow acts as a "connective tissue" of sorts, extending and completing the end-to-end network visibility cycle and adding context to on-network, off-network, and cloud servers/endpoints, enabling security teams to make faster, better-informed, and more accurate decisions. In essence, investing in Ziften yields labor cost savings, plus improved speed-to-discovery and time-to-remediation, as technology substitutes for human resources.
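What "enhanced netflow" adds can be shown with a small example: a conventional 5-tuple flow record augmented with endpoint-side context that no network appliance can see. The schema and the allow-list check below are illustrative assumptions, not Ziften's actual ZFlow format.

```python
from dataclasses import dataclass, asdict

@dataclass
class EnrichedFlow:
    # conventional netflow fields
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    bytes_sent: int
    # endpoint-side context invisible to a network appliance
    process_name: str     # which binary opened the connection
    process_sha256: str   # hash of that binary, for reputation lookups
    username: str         # which logged-in user owned the process

def suspicious(flow: EnrichedFlow, known_good: set[str]) -> bool:
    """Flag flows originated by binaries outside an allow-list of
    known-good hashes: a decision pure packet-level netflow can't make."""
    return flow.process_sha256 not in known_good

flow = EnrichedFlow("10.0.0.5", "203.0.113.9", 49152, 443, "tcp", 8192,
                    "updater.exe", "ab" * 32, "jsmith")
```

With this context an analyst sees not just that a host talked to 203.0.113.9 over TLS, but which process and which user did so, which is what shortens discovery and remediation.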
For companies moving or migrating to the public cloud (as 56% plan to do by 2021, according to IDG Enterprise's 2015 Cloud Survey), Ziften offers unrivaled visibility into cloud servers to better monitor and secure the complete infrastructure.
In Google's environment, only corporate-owned devices (COPE) are allowed, crowding out bring-your-own-device (BYOD). This works for a company like Google, which can issue new devices to all staff: smartphone, tablet, laptop, and so on. Part of the reason is that identity is vested in the device itself, in addition to the usual user authentication. The device must meet Google requirements, carrying either a TPM or a software equivalent, to hold the X.509 certificate used to verify device identity and to facilitate device-specific traffic encryption. Several agents must run on each endpoint to verify the device identity claims called out in the access policy, which is where Ziften would need to partner with the systems management agent provider, since agent cooperation is likely essential to the process.
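The device-identity step reduces to a simple invariant: the certificate a device presents must match the one enrolled for it in the inventory. A minimal sketch of that lookup, assuming a SHA-256 fingerprint comparison; a real deployment would validate the full X.509 chain and the certificate's TPM residency, and all names here are hypothetical.

```python
import hashlib

enrolled: dict[str, str] = {}   # device_id -> SHA-256 cert fingerprint

def enroll(device_id: str, cert_der: bytes) -> None:
    """Record the fingerprint of a device's certificate at enrollment."""
    enrolled[device_id] = hashlib.sha256(cert_der).hexdigest()

def verify_device(device_id: str, presented_cert_der: bytes) -> bool:
    """Accept the device only if the presented certificate matches
    the enrolled fingerprint; unknown devices are rejected outright."""
    fingerprint = hashlib.sha256(presented_cert_der).hexdigest()
    return enrolled.get(device_id) == fingerprint

# Placeholder bytes stand in for a real DER-encoded certificate.
enroll("laptop-042", b"fake-der-bytes-for-illustration")
```

Because trust hangs off the device certificate rather than the network address, a stolen credential on an unenrolled machine fails this check.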
In summary, Google has built a first-rate solution, but its applicability and practicality are limited to companies like Alphabet.
Ziften delivers the same level of operational visibility and security protection to the masses, using a lightweight agent, metadata/network-flow monitoring (from the endpoint), and a best-in-class console. For organizations with specialized requirements or incumbent tools, Ziften offers both an open REST API and an extension framework (to augment data ingestion and trigger response actions).
This brings the benefits of the BeyondCorp model to the masses while conserving network bandwidth and endpoint (machine) computing resources. Because companies will be slow to move entirely away from the enterprise network, Ziften partners with firewall and SIEM vendors.
Finally, the security landscape is gradually shifting toward managed detection & response (MDR). Managed security service providers (MSSPs) offer standard monitoring and management of firewalls, gateways, and perimeter intrusion detection, but this is insufficient: they lack both the skills and the technology.
Ziften's solution has been evaluated, integrated, approved, and deployed by a number of the emerging MDR providers, demonstrating the scalability and flexibility of the Ziften platform to play a crucial role in remediation and incident response.