Zero Trust and deep observability: Gigamon reveals the cybersecurity paradigm

From healthcare facilities to large financial institutions, no industry is completely safe from ransomware. According to Ian Farquhar, security CTO at Gigamon, ransomware has become a major risk to business continuity and is now a common topic of discussion in boardrooms.

In response to the exponential rise in ransomware, Zero Trust architecture is now widely used around the world as a respected approach to protecting both on-premises and cloud infrastructure. Under Executive Order 14028, the entire US government is transitioning to Zero Trust architectures over the next five years.

However, only 33% of IT and security professionals in Australia and New Zealand said they were comfortable implementing Zero Trust in 2022.

Put simply, a Zero Trust architecture removes the implicit trust given to any part of an organisation's IT infrastructure and erases the distinction between internally and externally visible services. Everything is monitored for trustworthiness, and nothing is assumed to be secure without proof. It is the ultimate implementation of defence in depth.
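
As a rough illustration of that principle, the sketch below shows a per-request trust decision in Python. The signal names (device_posture, mfa_verified and so on) are hypothetical placeholders rather than fields from any particular product; the point is simply that every request is evaluated on its own evidence, and network location confers no trust at all.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_id: str
        device_posture: str   # e.g. "compliant", "unknown", "jailbroken"
        mfa_verified: bool
        source_network: str   # recorded, but deliberately ignored below
        resource: str

    def is_trusted(req: AccessRequest) -> bool:
        """Grant access only when every signal checks out, on every request.

        Note that req.source_network is never consulted: being 'inside'
        the corporate network confers no implicit trust.
        """
        if not req.mfa_verified:
            return False
        if req.device_posture != "compliant":
            return False
        # Further checks (user risk score, time of day, data sensitivity)
        # would run here on every request, not just at login.
        return True

    request = AccessRequest("alice", "compliant", True, "corp-lan", "payroll-db")
    print(is_trusted(request))  # True only because each signal was verified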

With this defence in depth approach to security, businesses can improve both productivity and resilience, as systems run more efficiently and downtime can be reduced. However, to get Zero Trust right, security teams need to ensure they achieve deep observability from cloud to core. Indeed, achieving this ubiquitous visibility will substantially accelerate the Zero Trust journey.

Why Zero Trust matters

Implicit trust within the tech stack can be a huge liability for organisations. IT teams often struggle to implement appropriate trust measures; they tend to take for granted that the organisation owns the system, that the only users are employees, or that the network is secure.

But these measures of trust are not adequate. Trust based on assumption leaves organisations vulnerable and exposed to risk. Threat actors can turn these careless assumptions against an organisation, enabling network infiltration and data breaches.

A Zero Trust framework eradicates any implicit trust and instead analyses whether access should be allowed in each individual case. With bring-your-own-device (BYOD) strategies so prominent following the rise of remote and hybrid working, it is more important than ever before that trust is earned rather than freely given. Everything should be considered a potential threat until proven otherwise, and an attacker should be assumed to already be active in the environment. Monitoring is continual, and just because something was trusted before doesn't mean it is trusted now.

However, it is neither a simple nor a quick architecture to implement. There are many components to Zero Trust, and most organisations are still at the very beginning of their journey.

Micro-segmentation, for example, is an essential part of Zero Trust. It gives organisations granular policy control over workloads in a data centre or multi-cloud environment and restricts the lateral spread of threats. Yet it is only one element of a wider defence in depth strategy that is difficult to implement and carries no guarantee of effectiveness.
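
To make the idea concrete, here is a minimal sketch of the reasoning a micro-segmentation policy engine applies, again in Python with invented workload labels; real platforms express this in their own policy languages, but the default-deny logic is the same.

    # Default-deny: traffic between workloads is blocked unless a rule allows it.
    ALLOWED_FLOWS = {
        ("web-frontend", "payments-api"),   # only the frontend may reach payments
        ("payments-api", "payments-db"),    # only the API may reach its database
    }

    def flow_permitted(src_workload: str, dst_workload: str) -> bool:
        """Return True only for explicitly allowed workload-to-workload flows."""
        return (src_workload, dst_workload) in ALLOWED_FLOWS

    # A compromised frontend cannot reach the database directly, so lateral
    # movement is blocked even though both workloads share a data centre.
    print(flow_permitted("web-frontend", "payments-db"))   # False
    print(flow_permitted("web-frontend", "payments-api"))  # True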

To make it all possible, IT and security teams need absolute visibility and insight into what is happening across their infrastructure. Network segmentation is hard to do and, once implemented, can even blind teams to threats.

The role of deep observability

Deep observability is the addition of real-time network-level intelligence to amplify the power of metric, event, log and trace-based monitoring and observability tools in order to mitigate risk.

With it comes added intelligence to bolster an enterprise's security posture: while threat actors can bypass endpoint detection and response tools or SIEMs, they leave behind a trail of network metadata that deep observability allows security teams to analyse. It is therefore critical to supporting a comprehensive Zero Trust strategy.

Ultimately, Zero Trust's main goal is to assess the trustworthiness of all devices connected to the network, not just those with endpoint agents installed and operational, and to strictly enforce a least-privilege access policy based on a granular analysis of each device. This is impossible to do for assets, devices, users and traffic you cannot see.
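
A simplified sketch of that gap-finding step might look like the following, where the flow records and the endpoint-agent inventory are invented placeholder data rather than output from any specific tool; the aim is just to show how network-derived metadata surfaces devices an agent-only view would miss.

    # Devices known to the endpoint-agent / EDR inventory.
    AGENT_INVENTORY = {"10.0.1.10", "10.0.1.11", "10.0.1.12"}

    # Flow metadata observed on the network: (source IP, destination IP, port).
    observed_flows = [
        ("10.0.1.10", "10.0.2.5", 443),
        ("10.0.3.99", "10.0.2.5", 3389),   # a device no agent has ever reported
    ]

    def unmanaged_talkers(flows, inventory):
        """Return source IPs seen on the wire that no endpoint agent accounts for."""
        return sorted({src for src, _dst, _port in flows if src not in inventory})

    # These are exactly the assets an agent-only view misses, and therefore the
    # ones a Zero Trust policy cannot yet reason about.
    print(unmanaged_talkers(observed_flows, AGENT_INVENTORY))  # ['10.0.3.99']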

Security teams who combine Zero Trust and deep observability will be best placed to prevent cybercriminals from infiltrating their network and therefore to ensure the continuity of their business.

Nation-state actors, hacktivists and cybercriminals are all seeking to penetrate systems, and all are becoming more sophisticated in their techniques. The DPRK runs ransomware operations to generate revenue, while Russian-affiliated actors deploy ransomware with the tacit endorsement of the Putin regime.

With attacks becoming far more strategic, this holistic visibility is essential in helping organisations reduce risk. Adversaries will no longer be able to hide in blind spots and operate undetected.

Today's technology landscape demands change and reliability for the future. The very nature of cloud-based applications and the expansion of SaaS (Software-as-a-Service), combined with the hybrid working model, mean that Zero Trust is becoming increasingly popular with businesses concerned about becoming ransomware's next target.

However, if organisations are going to commit to this security initiative, they need a strategy in place to help them get there. Boardroom discussions need to continue, and IT and security teams should be putting plans in place that span the coming five years and realistically reflect the challenges they face day to day.

It’s no secret that ransomware presents businesses with one of the biggest enterprise risks. Zero Trust enabled by deep observability will be crucial to ensuring business continuity in 2023 and beyond.