Why Being Exploited by Shellshock is Needless


Dan Ennis, CEO

New day, new vulnerability. Today it is Shellshock: a newly discovered vulnerability present in millions of servers, with thousands already compromised. But rather than frantically chasing the latest patch, a different approach to security can keep organizations safe from Shellshock and from the similar vulnerabilities still to come.

The Shellshock vulnerability has apparently existed since the early days of the web – more precisely, since the first versions of the GNU Bash shell – and it is exposed through widespread server software such as the Apache web server's CGI interface. Furthermore, non-web products that run various flavors of the Linux OS may also be affected.
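To make the mechanism concrete, here is a minimal sketch of the widely published Shellshock probe (CVE-2014-6271), driven from Python. The idea: a vulnerable bash, on importing an environment variable whose value looks like a function definition, also executes any commands trailing the definition. The function name and messages below are illustrative, not from this article.

```python
import shutil
import subprocess

# Classic Shellshock probe string: a bash function definition ("() { :;};")
# followed by an extra command. A vulnerable bash runs the trailing
# `echo VULNERABLE` merely while importing the environment variable.
PROBE_ENV = {"x": "() { :;}; echo VULNERABLE"}

def bash_is_vulnerable() -> bool:
    """Run a harmless child bash with the probe in its environment."""
    bash = shutil.which("bash")
    if bash is None:
        raise RuntimeError("bash not found on this system")
    result = subprocess.run(
        [bash, "-c", "echo probe-done"],
        env=PROBE_ENV, capture_output=True, text=True,
    )
    # On a patched bash only "probe-done" appears; on a vulnerable bash
    # the injected echo runs first.
    return "VULNERABLE" in result.stdout

print("vulnerable" if bash_is_vulnerable() else "patched")
```

Any service that copies attacker-controlled input (a User-Agent header, a cookie) into an environment variable before invoking bash – as Apache's CGI handler does – is exposed to exactly this injection.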

Half a year ago it was Heartbleed, and others came before that. In fact, new vulnerabilities and exploits are published daily, and we can expect this scenario to repeat itself in the future.

How long will it take to understand that the existing protection approaches just don't work? Endless patches, endless security mailing lists to follow – just to get the latest, "currently"-known-to-be-safe version. Why only "currently"? Because the black markets are full of sellers who, for the right amount of money, will provide an unpublished exploit targeting products that are currently known to be safe.

What do Shellshock and the other exploits have in common? Each is simply another method of slipping a hostile payload through – another zero-day with a new type of attack pattern.

So as Shellshock becomes more widespread, what can we learn from it and from the exploits yet to come? A different approach is required – one that minimizes a website's attack surface and gives IT and security admins peace of mind by sparing them the endless chase after the latest security patches.

Since blacklist filtering engines have not proven themselves, a positive (whitelist) security approach will do the trick: rather than checking traffic against a known list of vulnerabilities (a blacklist), it analyzes the origin site and learns and enforces only known, validated patterns of traffic, making it possible to automatically build and maintain a complete whitelist representation of the site. Such an approach renders this and any future zero-day attack useless. Users interact with an alternative, fully functional and much more secure website that follows these whitelist rules. Moreover, these alternative sites can be hosted on global cloud farms and gain full geo-availability, on-demand scaling, DDoS mitigation and other cloud benefits.
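A toy sketch of the positive-security idea, assuming a hypothetical validator with hand-written rules standing in for the learned traffic model (the path patterns, header rule and function name below are all illustrative):

```python
import re

# Whitelist of URL patterns the site was observed to serve (illustrative).
ALLOWED_PATHS = [
    re.compile(r"^/$"),
    re.compile(r"^/products/\d+$"),
    re.compile(r"^/search$"),
]
# Header values are restricted to characters seen in legitimate traffic;
# anything outside this set is denied by default.
SAFE_HEADER_VALUE = re.compile(r"^[\w .,;=/+\-]*$")

def is_request_allowed(path: str, headers: dict) -> bool:
    """Return True only if the request matches the learned whitelist."""
    if not any(p.match(path) for p in ALLOWED_PATHS):
        return False
    return all(SAFE_HEADER_VALUE.match(v) for v in headers.values())

# A Shellshock payload smuggled into User-Agent fails the header rule,
# even though the pattern was never blacklisted:
shellshock = {"User-Agent": "() { :;}; /bin/cat /etc/passwd"}
print(is_request_allowed("/products/42", shellshock))                     # False
print(is_request_allowed("/products/42", {"User-Agent": "Mozilla/5.0"}))  # True
```

Note the design point: the validator never needed to know about Shellshock. Because only previously validated patterns pass, an unseen attack pattern is rejected by default – which is why the approach generalizes to zero-days.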

We need to “exploit” new cybersecurity methods – and since the naming game can be entertaining, here’s a new concept: “White is the New Black.”