Many times, in the course of explaining what I do to people unfamiliar with information security and penetration testing, the question eventually comes up: "So, after you've broken in, do you fix the problems?" At which point I get to say, "No, that's the best part. I only have to break it! Fixing it is someone else's job." That's not entirely true, of course; we always make good recommendations.
Experienced penetration testers rarely break things in the course of testing, but they all started somewhere, and at some point they all either have broken or will break something. In this post I'm going to review some of the common gotchas, faux pas, and unintentional mistakes I've seen and experienced during pen tests over the years.
ARP Spoofing Gone Wild
This one is especially common for new pen testers or if someone is in a hurry and mistypes a network range. For example:
ettercap -Tq // //
ettercap -Tq /[gateway IP]/ //
This is a bad idea. Unless you're running high-performance hardware with a fast network link on a sparsely populated or low-traffic network, trying to route all local network traffic through your box will likely blackhole portions of the network, possibly including the local gateway. Clients don't like it when the network stops working, and if you're performing the test remotely, you may not even be able to reconnect to your box to stop the problem. Always start with a small number of targeted hosts and watch your traffic like a hawk. You can see traffic statistics (including dropped packets) by hitting 's' from within ettercap's text interface.
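Starting small is straightforward: scope the TARGET specs to a single host and its gateway rather than the whole subnet. A narrowly scoped run might look like the sketch below, using the classic 0.7-style target syntax shown above (newer ettercap releases add extra slash-delimited fields; check your man page) and placeholder IPs:

```shell
# ARP-poison only one host <-> gateway pair, not the entire subnet.
# The IPs are placeholders; -M arp:remote poisons both directions of the pair.
ettercap -Tq -M arp:remote /192.168.1.1/ /192.168.1.50/
# While it runs, hit 's' to check traffic stats and dropped packets.
```

If nothing melts, add targets incrementally rather than jumping straight to the whole range.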
My Pipe is Bigger than Yours
This one is a personal, true story. I was performing a basic, run-of-the-mill Man-in-the-Middle (MitM) attack between three hosts (that's all, only three) on the local network. Within a minute of beginning my ARP spoofing, I got a call from my client: their web application was having problems and connections were getting dropped. I immediately stopped, re-ARPed my targets, and everything went back to normal. I pared my attack down to two hosts and tried again, this time paying attention to my traffic stats. The client called again to say the problem was back, and I saw why: dropped packets were going through the roof. While I had the client on the phone, I asked what kind of traffic their server usually handled and how big the network pipe was. It turns out, with a 100Mbit network interface, I was attempting to intercept and route a 2Gbit aggregate link. Since every intercepted packet has to come into your interface and go back out again, that's right: I was trying to shove 4Gbits of traffic over a 100Mbit connection. Be mindful of your link speed and don't bite off more than your interface can chew.
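The arithmetic is worth doing before you start, not after the client calls. A back-of-the-envelope sketch (the numbers here are from the story above; check your own link speed first, e.g. with ethtool on Linux):

```shell
# MitM'd traffic crosses your NIC twice (in, then back out), so you need
# roughly double the bandwidth you intend to intercept.
#   ethtool eth0 | grep Speed    # one way to find your link speed on Linux
link_mbit=100        # your interface speed
intercept_mbit=2000  # aggregate traffic you plan to route through yourself
needed_mbit=$((intercept_mbit * 2))
if [ "$needed_mbit" -gt "$link_mbit" ]; then
  echo "Abort: need ${needed_mbit}Mbit of capacity, have ${link_mbit}Mbit"
fi
```

If the check fails, either narrow the targets or find a faster place to plug in.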
One Guess Too Many
Excessive password guessing is something so obvious it shouldn't need mentioning...unless you have multiple consultants testing multiple environments that share common Windows domains. It becomes even more complicated when those shared domains have different password policies: number of lockout attempts, lockout counter reset, and lockout duration. It's entirely possible to lock out an entire domain guessing just one password...if several consultants try it at once. Tread lightly, communicate with your partners, and know the password policies of the domains and systems being tested.
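Knowing the policy means pulling the real numbers, not guessing them. A minimal sketch of turning a lockout policy into a per-consultant guessing budget (the policy values here are hypothetical; the commands in the comments are standard ways to retrieve the real ones):

```shell
# Hypothetical policy values -- pull the real ones first, e.g. with
#   net accounts /domain             (from a domain-joined Windows host)
#   rpcclient -c getdompwinfo <dc>   (over SMB from Linux)
lockout_threshold=5      # bad attempts before an account locks
reset_window_minutes=30  # minutes before the bad-password counter resets
safe_attempts=$((lockout_threshold - 2))  # leave headroom for users' own typos
echo "Budget: ${safe_attempts} guesses per account per ${reset_window_minutes}-minute window"
```

And remember: if several consultants share a domain, that budget is shared too, so divide it among yourselves before anyone starts guessing.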
Out of Disk Space...
...Not you. Them. Whether it's a tool generating log data or tcpdump capturing packets, any process you start that writes to the disk of a compromised machine (i.e., not your machine) has the potential to bring the system to a halt. Always watch the rate of growth of your logs and dump files, and make sure that if you lose your connection to the machine, you can get back in and stop the process before it gets out of hand. Memory dumpers especially can eat up hard disk space quickly if not handled with care.
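Some tools can cap themselves: tcpdump, for instance, can rotate its output in a ring buffer with -C (file size in MB) and -W (file count). For tools that can't, a throwaway watchdog is cheap insurance. A minimal sketch (the function name and thresholds are mine) that kills a capture process once its output file passes a size cap:

```shell
# Kill a process if its output file grows past cap_kb kilobytes,
# so a forgotten capture can't fill the target's disk.
watch_capture() {  # usage: watch_capture <pid> <file> <cap_kb>
  pid=$1; file=$2; cap_kb=$3
  while kill -0 "$pid" 2>/dev/null; do     # loop while the process lives
    size_kb=$(du -k "$file" 2>/dev/null | cut -f1)
    if [ "${size_kb:-0}" -ge "$cap_kb" ]; then
      kill "$pid"                          # stop the capture at the cap
      break
    fi
    sleep 1
  done
}
```

Run it in the background alongside the capture, and it keeps working even if your own connection to the box drops.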
Deus ex Machina
Sometimes things go horribly wrong and there is no good reason why. I have heard of cases where a simple port scan against some common, basic networking services caused the target system to crash, or where an enterprise-level web application that could service thousands of requests per second froze up when hit with more than 10 requests per second from the same source IP. In these cases, all you can really do is damage control. Don't panic, don't try to shift the blame, and offer any assistance you can in determining the cause of the problem. Be a part of the solution. Who knows, you may have just identified a potential bug or availability weakness.
Keeping It Clean
Cleaning up after yourself is an important part of pen testing. While leftovers often go unnoticed, sometimes forever, a client finding suspicious evidence left behind after a test just looks bad. I've seen old accounts left behind from prior pen tests, and heard of clients panicking when an anti-virus update suddenly flagged password-dumping tools that had been sitting quietly on a disk for years. Sometimes a tool gets whacked by AV midway through execution, and temporary services get left behind or processes hang. Whatever the cause: keep notes, clean up, and leave systems the way you found them.
Moral of the Story
In summation, there is one thing all of the above have in common: presence. If the client never knows you were there, you did it right.