Gigamon exec cautions on network and port spoofing vulnerabilities

Stephen Goudreault, cloud security evangelist at Gigamon, advises that, as with all technology, new tools are iterations built on what came before, and classic network logging and metrics are no different.

He says that the tooling, instrumenting, and monitoring of network traffic are virtually unchanged across private cloud and on-premises environments. Many of the logs and metrics in use today are nearly two decades old and were originally designed to solve for billing, among other problems.

“Visibility into traffic flow patterns was an added bonus. Traffic logging just happens to be the use case that has endured,” says Goudreault. “However, this reliance on established methods has left some vulnerabilities in network and port spoofing.”

But what is port spoofing, and why does it matter?

As with much of application and data visibility on the network, many of the rules and RFCs in use today were written over a decade ago. They describe how something ‘should’ work, but there are no real rules enforcing that behaviour.

This leaves a great deal of flexibility for deployment configurations that are rarely used. When an application or service is misconfigured, or when a bad actor wants to evade detection, even the slightest change to standard ports can hamper most current visibility and detection schemes.

Port spoofing is a known technique; MITRE ATT&CK dedicates an entire technique, Non-Standard Port (T1571), to this kind of evasion.

One of the most common and versatile ways of evading visibility is running the Secure Shell (SSH) protocol on a non-standard port. SSH is conventionally assigned to TCP port 22.
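
That assignment, though, is only a registry convention: the well-known port table maps numbers to names, and nothing binds the protocol actually on the wire to its registered port. Python's standard library exposes the same lookup that many tools rely on, reading the same registry as /etc/services:

```python
import socket

# The well-known port registry is just a lookup table; nothing binds
# the protocol actually on the wire to the registered name.
print(socket.getservbyport(22, "tcp"))     # -> 'ssh'
print(socket.getservbyport(443, "tcp"))    # -> 'https'
print(socket.getservbyname("ssh", "tcp"))  # -> 22
```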

Security tools assume SSH traffic will use port 22, and nearly every security team in the world keeps that port tightly locked down. Common practice is to block this port at the perimeter and call things secure. Easy, right?

Not so fast. What if a bad actor changed the default port on their SSH traffic? Port 443 is widely used for HTTPS/TLS and is nearly always kept open.

HTTPS traffic is ubiquitous in the modern enterprise, for both business-critical and personal activities. IT firewalls are not going to routinely block port 443/HTTPS, making it an ideal point of entry for attackers.

Changing SSH to operate on port 443 is a simple task, and many forums provide detailed instructions, for reasons both legitimate and illegitimate. Almost all modern cloud visibility tools will report the traffic as what it appears to be, not what it actually is.
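
To see why those tools are fooled, consider a toy version of port-based classification. The port map and flow below are hypothetical, but the logic mirrors what most flow-level tooling does:

```python
# A toy port-based classifier; the port map and flow are illustrative only.
PORT_LABELS = {22: "SSH", 80: "HTTP", 443: "TLS/HTTPS", 3389: "RDP"}

def classify_by_port(dst_port: int) -> str:
    return PORT_LABELS.get(dst_port, "unknown")

# An SSH session that an attacker has rebound to port 443 comes back
# labelled as ordinary web traffic.
flow = {"src": "10.0.1.5", "dst": "10.0.2.9", "dst_port": 443}
print(classify_by_port(flow["dst_port"]))  # -> 'TLS/HTTPS', not 'SSH'
```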

Even workloads in the cloud can misidentify their own connections. An active SSH session can be misreported as TLS because Linux tooling infers the connection type from the port alone.
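
One way to verify a connection for yourself: an SSH daemon volunteers an identification banner the moment you connect, while a TLS server stays silent until the client sends its ClientHello. A minimal sketch, with a placeholder address and assuming direct TCP reachability:

```python
import socket

def probe_first_bytes(host: str, port: int, timeout: float = 3.0) -> bytes:
    """Connect and passively read whatever the server sends first.

    An SSH daemon volunteers an ASCII banner such as b'SSH-2.0-OpenSSH_9.6';
    a TLS endpoint sends nothing until the client opens the handshake.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(64)
        except socket.timeout:
            return b""  # silence is what a genuine TLS endpoint would give

# Placeholder address for a workload suspected of hiding SSH on 443.
banner = probe_first_bytes("10.0.2.9", 443)
if banner.startswith(b"SSH-"):
    print("SSH masquerading as HTTPS:", banner)
```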

The network gets it wrong, and the operating system's tools get it wrong as well, reporting this traffic as a known known.

Nearly all traffic today is assessed by its TCP or UDP port, which invites assumptions about the nature of the traffic. This is true in the public cloud, in the private cloud, and on-prem.

In today’s ever more security-conscious world, making assumptions about the nature of traffic isn’t as safe as it once was. SSH is a very powerful tool that threat actors can use for file transfers, tunnelling, and lateral movement across any network.

This is just one example of how a single tool can have many uses. Factor in other applications and protocols, and the realisation of how much can't be seen becomes daunting. MITRE ATT&CK tracks port spoofing as its own technique, and the trend is only growing.

East-West traffic requires deep observability too. Next-generation firewalls (NGFWs) have solved this problem on-premises at perimeter points. The public cloud, however, is a different story, and the problem has yet to be solved at scale for East-West, or lateral, traffic.

VPC flow logs record only that a conversation took place, along with addresses and port numbers; they cannot tell what application or protocol was actually in use. Deep observability with deep packet inspection examines the conversation itself and can properly identify the applications and protocols in use.
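
The underlying principle can be sketched in a few lines. This is not Gigamon's implementation, just an illustration of payload-based identification applied to a flow's first payload bytes:

```python
def classify_by_payload(first_bytes: bytes) -> str:
    """Identify a flow from its first payload bytes, ignoring the port.

    SSH starts with the ASCII identification string 'SSH-2.0-...';
    a TLS handshake starts with record type 0x16 and version major 0x03.
    """
    if first_bytes.startswith(b"SSH-"):
        return "SSH"
    if len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03:
        return "TLS"
    return "unknown"

# The same flow the port-based classifier above labelled 'TLS/HTTPS':
print(classify_by_payload(b"SSH-2.0-OpenSSH_9.6"))       # -> 'SSH'
print(classify_by_payload(bytes.fromhex("16030100f5")))  # -> 'TLS'
```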

My company calls this application intelligence; it currently identifies more than 5,000 applications, protocols, and attributes through network traffic inspection.

Application metadata intelligence doesn't just look at outer headers; it looks deeper into the packet, at the unique characteristics that define a given application. This is called deep observability.

If an attacker is connecting via SSH from workload A to workload B in the same subnet, my company’s deep observability pipeline, using application intelligence, sees the traffic for what it really is and reports it to the security tools.

In this case, we can alert security teams that SSH traffic is masquerading as web traffic on port 443. This depth of observability can easily be extended East-West across the entire enterprise, including the public cloud and container-to-container communications.

In the public cloud, deep packet inspection faces a unique set of challenges. There is no broadcast, so to inspect traffic you either need a security VPC to funnel traffic through, or traffic mirroring.

The second and less complicated option is to mirror traffic to the appropriate tools, and this is the approach Gigamon takes. The benefits include lower deployment complexity and operational friction, and, unlike an inline inspection path, it does not impair performance.
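
On AWS, for example, the mirroring itself can be set up with VPC Traffic Mirroring. A minimal boto3 sketch, with every resource ID a placeholder and the mirror target and filter assumed to exist already:

```python
import boto3

# All IDs below are placeholders; the mirror target (e.g. a collector's
# network interface) and the mirror filter are assumed to exist already.
ec2 = boto3.client("ec2", region_name="us-east-1")

session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0123456789abcdef0",     # source workload ENI
    TrafficMirrorTargetId="tmt-0123456789abcdef0",  # where copies are sent
    TrafficMirrorFilterId="tmf-0123456789abcdef0",  # which packets to copy
    SessionNumber=1,  # priority when several sessions share a source
    Description="Mirror East-West traffic to the inspection pipeline",
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```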

The known knowns are that developers will continue to run fast, DevOps will inadvertently deploy unknown or misconfigured applications, and threat actors will continually seek to exploit these vulnerabilities to create blind spots.

SecOps will try to verify rules and protections, which can only really be accomplished with deep observability built on network-derived intelligence and insights.

If an organisation can’t detect a simple use case of SSH on a non-standard port, what other known unknowns could be lurking in its hybrid cloud infrastructure?