SpiderLabs Blog

NASDAQ News Renews Focus (sort of)

Written by | Feb 7, 2011 10:47:00 AM

Reactive security is a common theme in many organizations, and the reaction is usually not swift. Anticipating threats from news reports is a dangerous game. Every time there is a major news item about a breach or malware incident, the calls come in, and peers in the information security industry tell me they get the calls too. Fear is a powerful motivator. The fact is that nothing has substantively changed: the world is just as secure, or insecure, as it was last week. Fear and reactive security aside, the renewed focus on security is a good thing.

The old cliché that "generals are always prepared for the last war" can also be applied to information security. Reacting to the last headline may marginally improve security, but is unlikely to prevent most future attacks. Trying to guess the next headline vulnerability is even less likely to succeed.

A strong defense is based on universal principles and practices that collectively protect an organization's assets. More succinctly, this can be described as the ever-popular "best practices". Patching, regular security testing, diligent user account management, application monitoring and filtering, least privilege, log monitoring, and strong network segmentation are just a few examples of security controls that every organization should implement.

Very few breaches are the result of a single mistake. Usually, one vulnerability gives the attacker an initial beachhead in the target's environment, but additional vulnerabilities must be exploited before the final goal is reached. Consider SQL injection in an ecommerce website: the injection gives an attacker access to the application's database, but a second weakness, poor (or absent) encryption of stored data, must also be present before sensitive data can be extracted in usable form. This scenario also implies that regular security testing, developer security training, and application filtering were not in use. All of these deficiencies had to coincide for the breach to succeed.
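To make that chain concrete, here is a minimal Python sketch of the two weaknesses together. It uses sqlite3 and hypothetical table and column names purely for illustration; it is not taken from any real ecommerce application.

    import sqlite3

    # Hypothetical schema for illustration only: a "customers" table with an
    # email column and a sensitive card_number column stored in plaintext.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (email TEXT, card_number TEXT)")
    conn.execute("INSERT INTO customers VALUES ('alice@example.com', '4111-1111-1111-1111')")

    def lookup_vulnerable(email: str):
        # Weakness #1: user input concatenated directly into the SQL string.
        # An input like "' OR '1'='1" turns the WHERE clause into a tautology
        # and returns every row in the table.
        query = "SELECT email, card_number FROM customers WHERE email = '%s'" % email
        return conn.execute(query).fetchall()

    def lookup_parameterized(email: str):
        # Fix: bind user input as a parameter; the driver treats it as data,
        # never as SQL syntax.
        query = "SELECT email, card_number FROM customers WHERE email = ?"
        return conn.execute(query, (email,)).fetchall()

    # Weakness #2: because card_number is stored in plaintext, a successful
    # injection yields immediately usable data. Storing only ciphertext (or a
    # truncated value) would limit what an attacker could extract.
    print(lookup_vulnerable("' OR '1'='1"))    # dumps the whole table
    print(lookup_parameterized("' OR '1'='1")) # returns nothing

Either fix on its own (parameterized queries, or protecting the stored data) would have broken the attack chain, which is the point of layering controls rather than guessing at the next headline.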

Last year, Trustwave's SpiderLabs was testing some well-known third-party software for one of our clients. We found significant vulnerabilities in the software's web-based management tool. In many organizations, those findings would have warranted a high-risk label because of their potential impact. However, our client had a number of strong security controls in place (primarily network filtering) that made exploitation effectively impossible; in fact, the only reason we could find the vulnerabilities at all was that the client disabled some of those controls for our tests. As a result, the practical risk was reduced from "high" to "low". As far as we know, the client did not employ psychics to predict future vulnerabilities, but they were still protected.
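The post doesn't describe the client's filtering in detail, but a common pattern of this kind is restricting a management interface to an administrative subnet, so that even an unpatched flaw in the tool is unreachable from untrusted networks. The short Python sketch below illustrates the idea; the subnet and function names are hypothetical, not the client's actual configuration.

    import ipaddress

    # Hypothetical administrative subnet; in practice this restriction would
    # typically live in a firewall or network ACL rather than application code.
    ADMIN_NETWORK = ipaddress.ip_network("10.20.0.0/24")

    def is_request_allowed(client_ip: str) -> bool:
        """Allow access to the management interface only from the admin subnet."""
        return ipaddress.ip_address(client_ip) in ADMIN_NETWORK

    print(is_request_allowed("10.20.0.15"))   # True  - admin workstation
    print(is_request_allowed("203.0.113.7"))  # False - external address

A vulnerability in the management tool is still worth fixing, but a control like this keeps it from being remotely exploitable in the meantime, which is exactly the "high" to "low" shift described above.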

The bottom line is that security "best practices" weren't invented by consultants as a bizarre form of entertainment; they really do work, while reactive security does not.