Friday, February 11, 2011

Throttling SSH attacks with pf

After a brief chat with Claudio about ways to throttle SSH brute force attacks, I got inspired to do some testing of my own. There are already plenty of howtos on throttling or even automatic blacklisting, but few of them include real numbers on how effective it can be. I had two requirements for this pf ruleset:
Not deny legit connections
Not permanently block anything

The test system is FreeBSD 6.0-STABLE. Note that some of the features used in this ruleset are only available in pf from OpenBSD 3.7 and later (FreeBSD 6.x is synched with the OpenBSD 3.7 version of pf). I also use 
expiretable to automatically flush old entries from firewall tables. For the brute forcing itself I found this ruby script. It is single threaded and often fails or stalls under non-ideal conditions, but it was better than nothing. 
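The way I hook expiretable in is via cron; a sketch (the path, interval and 24-hour lifetime here are my own choices, not requirements):

```shell
# Illustrative /etc/crontab entry: every 10 minutes, remove entries
# older than 24 hours from the <hammering> table used later in this post.
*/10  *  *  *  *  root  /usr/local/sbin/expiretable -v -t 24h hammering
```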

I don't plan on explaining the rules in detail, as this is not a guide to pf, but contact me if anything is unclear.

LANIF = "em0"
LOIF = "lo0"
set block-policy drop
pass out all keep state
block in all
pass on $LOIF all
pass on $LANIF inet proto tcp from any to $LANIF port ssh keep state

Explanation: Allow all outgoing traffic, and block all incoming traffic except SSH. 
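Whenever I change the ruleset I syntax-check before loading; a quick sketch (the pf.conf path is just the usual default):

```shell
pfctl -nf /etc/pf.conf   # parse the ruleset only; report syntax errors
pfctl -f /etc/pf.conf    # actually load it
pfctl -e                 # enable pf, if not already enabled
```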

sshd_config is pretty much default, except that 'UseDNS' is set to 'no'. I also stress the use of 'AllowUsers' for restricting access. I have met people whose machines were compromised because they temporarily created a backup/backup user and forgot about it. 
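The relevant sshd_config lines would look something like this (the usernames are placeholders, obviously):

```
UseDNS no
AllowUsers alice bob
```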

Now let's see how fast the script is over a 1 Gbit link.

time ./ssh-rbrute.rb -h 10.0.0.4 -u root -l passwords.txt -p 22

real     0m29.602s
user     0m5.091s
sys      0m3.229s

real     0m29.469s
user     0m4.973s
sys      0m3.188s

OK, seems consistent. I also verified that there were 100 attempts in the logs. Let's start throttling!

LANIF = "em0"
LOIF = "lo0"
set block-policy drop
pass out all keep state
block in all
pass on $LOIF all
pass on $LANIF inet proto tcp from any to $LANIF port ssh keep state (max-src-conn 10, max-src-conn-rate 5/60)

Explanation: Allow at most 10 simultaneous connections per source, and at most 5 new connections per source every 60 seconds. 
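You can watch the limits do their work while the script runs; for example (the grep pattern assumes the default SSH port):

```shell
pfctl -s states | grep ':22 '   # current state entries for SSH
pfctl -s info                   # global counters and state-table statistics
```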

The effects are drastic.

real     8m58.501s
user     0m5.203s
sys      0m2.784s

From 30 seconds to 9 minutes. Quite an improvement, but it can be better. What if people hammering me were to suddenly experience, let's say, huge amounts of packet loss?

LANIF = "em0"
LOIF = "lo0"
table <hammering> persist
set block-policy drop
pass out all keep state
block in all
pass on $LOIF all
pass on $LANIF inet proto tcp from any to $LANIF port ssh keep state (max-src-conn 10, max-src-conn-rate 5/60, overload <hammering> flush)
block on $LANIF inet proto tcp from <hammering> to $LANIF port ssh probability 65%

Explanation: Put sources that exceed the limits into the table <hammering>, flush their existing states, and drop 65% of SSH packets from addresses in that table. 
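pfctl can inspect and manipulate the table directly, which is handy for checking who got caught or for releasing someone early (10.0.0.99 is a made-up address):

```shell
pfctl -t hammering -T show              # list addresses currently in the table
pfctl -t hammering -T delete 10.0.0.99  # release a single address
pfctl -t hammering -T flush             # empty the whole table
```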

Now I must be completely honest with you. The script never managed to actually finish 100 login attempts. It failed or stalled every time with 'connection timeout'. However, after many re-runs, the best it managed was roughly 50 login attempts.

real     9m55.291s
user     0m2.214s
sys      0m1.219s

Almost 10 minutes for 50 login attempts. You'd think they were using Ethernet over carrier pigeons during hunting season! 

There are more fun things to do with pf. I decided to forge results when scanned by Nmap, for instance by redirecting connections to every port to port 22 on localhost:

LANIF = "em0"
LOIF = "lo0"
table <hammering> persist
set block-policy drop
rdr pass on $LANIF proto tcp from any os NMAP to any port 1:65535 -> $LOIF port 22
pass out all keep state
block in all
pass on $LOIF all
pass on $LANIF inet proto tcp from any to $LANIF port ssh keep state (max-src-conn 10, max-src-conn-rate 5/60, overload <hammering> flush)
block on $LANIF inet proto tcp from <hammering> to $LANIF port ssh probability 65%

Explanation: Redirect all probes whose source OS fingerprint matches Nmap to port 22 on localhost. 

There you have it: one terribly messed-up firewall. Here's an example of how this looks to Nmap.

% sudo nmap -sS 10.0.0.4 -O -p 100-110
Warning: OS detection will be MUCH less reliable because we did not find at least 1 open and 1 closed TCP port
Insufficient responses for TCP sequencing (0), OS detection may be less accurate
Interesting ports on 10.0.0.4:
PORT STATE SERVICE
100/tcp open newacct
101/tcp open hostname
102/tcp open iso-tsap
103/tcp open gppitnp
104/tcp open acr-nema
105/tcp open csnet-ns
106/tcp open pop3pw
107/tcp open rtelnet
108/tcp open snagas
109/tcp open pop2
110/tcp open pop3
MAC Address: 00:0E:0C:84:82:17 (Intel)

Too many fingerprints match this host to give specific OS details

Everything looks open, but manually connecting to each port will tell you it is closed. Who said firewalling wasn't fun?
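To convince yourself, probe a port from an ordinary client, whose SYNs won't match the NMAP fingerprint (address and port taken from the test setup above):

```shell
# With 'set block-policy drop' the connection simply times out
# rather than being refused, so give nc a short timeout.
nc -vz -w 3 10.0.0.4 105
```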
