Wednesday, November 19, 2014

Security Hardening for Asterisk: Privilege Escalations with Dialplan Functions

Privilege escalations are a nasty class of security problem in which an attacker uses an exploit to gain access to more resources on a computer system than was intended.

Digium has introduced a flag in 'asterisk.conf' that helps prevent one such privilege escalation, which in theory could be used as a remote exploit.

The Asterisk Manager Interface (AMI) makes it possible to send commands to Asterisk, which gives developers an incredible tool for building all sorts of applications on top of Asterisk. The AMI listens on TCP port 5038 by default; that port should be firewalled if you do need the AMI, and the interface should be disabled entirely if you have no use for it. Maintaining strong passwords should also be a given, and since the protocol is completely plain text, extra thought should be put into the network topology between the Asterisk server and the application using the AMI.
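If the AMI must stay enabled, manager.conf itself can also restrict which hosts may connect. Here is a sketch, assuming a hypothetical 'myapp' account and a 192.168.1.0/24 management network (the names and addresses are illustrative, not from any real deployment):

```
; /etc/asterisk/manager.conf (sketch)
[general]
enabled = yes
bindaddr = 192.168.1.10              ; bind only to the management interface

[myapp]
secret = use-a-strong-password-here
deny = 0.0.0.0/0.0.0.0               ; deny everything first...
permit = 192.168.1.0/255.255.255.0   ; ...then allow only the management LAN
read = call,cdr
write = originate
```

The deny-then-permit pattern means a connection from any other network is rejected before the password is even checked.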

If you do need to provide access to the AMI for applications in your environment, I suggest contacting the devs of those apps to see whether they still work with the new flag, 'live_dangerously', set to 'no'.

The following link contains some information on that flag:

The scariest part of this situation is that Digium implies that someone could possibly get shell access on the server and have the ability to change any file on the filesystem. This would be limited to the access rights of the user that Asterisk is running as, but there are two problems with that notion. First, I would wager that many Asterisk processes in the wild are running as root. Second, even if an IT team was on the ball and had Asterisk running as an unprivileged user, an attacker might use another exploit on the system to deploy a further payload, effectively using this exploit as the beachhead in a larger, more sophisticated attack.

To check the current state of this flag in your Asterisk deployments, run the following commands from the CLI:

  grep live_dangerously /etc/asterisk/asterisk.conf | grep -v '^;'

If the command does not print any results, you are effectively running with 'live_dangerously=yes'.  To fix this potential problem, add "live_dangerously = no" somewhere in the [options] section of asterisk.conf:

  sudo vim /etc/asterisk/asterisk.conf
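Inside the editor, the relevant section should end up looking something like this (the [options] header may already exist in your file):

```
[options]
live_dangerously = no
```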

Finally, restart Asterisk so the change takes effect, e.g. with 'sudo service asterisk restart'.
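Putting the check together, here is a small sketch that warns when the flag is not explicitly disabled (it assumes the stock config path, /etc/asterisk/asterisk.conf, and the warning wording is my own):

```shell
#!/bin/sh
# Warn when live_dangerously is not explicitly set to no.
conf=/etc/asterisk/asterisk.conf
if grep -Eq '^[[:space:]]*live_dangerously[[:space:]]*=[[:space:]]*no' "$conf" 2>/dev/null; then
    echo "ok: live_dangerously is disabled"
else
    echo "warning: live_dangerously is not set to no (the effective default may be yes)"
fi
```

The regex deliberately ignores commented-out lines such as ';live_dangerously = no', since those have no effect.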

Tuesday, November 18, 2014

Is fail2ban redundant when a server is firewalled?

The question I was asked today is pretty straightforward: do we still need to run fail2ban on our internet-facing servers if we are running a firewall?

Quite simply, the answer is that you'll want to run 'fail2ban' in nearly every scenario.  'fail2ban' provides a layer of security that is not made redundant by additional layers such as firewalls; they actually complement each other rather nicely.

When it comes to services such as HTTP or SIP, where we have to strike a balance between easy public access and reasonable security to safeguard against abuse and denial-of-service attacks, fail2ban gives us a tool that can stop a would-be attacker after a few failed attempts.
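Recent versions of Fail2Ban ship an 'asterisk' filter for exactly this SIP case. A sketch of enabling it in jail.local follows; the log path, retry count, and ban time are assumptions you should adjust for your own setup:

```
# /etc/fail2ban/jail.local (sketch)
[asterisk]
enabled  = yes
filter   = asterisk
port     = 5060,5061
protocol = udp
logpath  = /var/log/asterisk/full
maxretry = 5
bantime  = 3600
```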

I would argue that even if your service is on a corporate LAN and remote users had to VPN in to access it, fail2ban should be one of the many tools you use on your servers to harden them against attack.

Wednesday, October 29, 2014

What to do when an Asterisk Server is under a dictionary attack

A few days ago, one of my coworkers let me know that our PBX was under an attack to brute-force the credentials of a working SIP account. We all took interest in this attack since there was little else going on that morning.

One of my co-workers started a packet capture during the attack, and we found it was indeed coming from a single IP address. Closer inspection of the SIP packets and some strong google-fu indicated it was an attack by the SIPVicious tool suite.  An attack by this tool set is pretty easy to classify if the attacker has not modified the tools, as it identifies itself as 'friendly-scanner' in the SIP requests it sends.

Instead of waiting for fail2ban to catch the attack, we decided to just block the source IP on the PBX's firewall.
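Blocking a single source address on a Linux firewall is a one-liner. A sketch, with the documentation address 198.51.100.23 standing in for the attacker's real IP:

```
sudo iptables -I INPUT -s 198.51.100.23 -j DROP
```

The -I flag inserts the rule at the top of the INPUT chain so it is evaluated before any accept rules.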

Really, we all expected the attack to be halted at this point.  We did the basics: confirmed where the attack was coming from, decided on corrective measures, and acted on them immediately since they were low-risk changes to our production systems. But we ran into a problem. Even though SIP communications are done over UDP, which is considered by many to be stateless, the firewall was still letting the attack continue!

After a bit more digging, I realized that the Linux kernel and iptables considered the connection 'established'. Most default firewall rules allow 'established' connections to pass through more or less unmolested. That left me with a bit of a problem: how do I tell the Linux kernel to forget about any established connections with the attacker's IP?

Turns out there is a tool for such a problem: conntrack

It's available in the default repositories so installing it is simple:

  $ sudo apt-get install conntrack

Once you have figured out the IP address the attack is coming from, either from a PCAP (packet capture) or from the application log (in this case /var/log/asterisk/full), you can list all the connections the Linux kernel is tracking from that address with the following command:

  $ sudo conntrack -L --orig-src <attacker_ip>

The next step was to remove those established connections so that the firewall rules could block this IP permanently:

  $ sudo conntrack -D --orig-src <attacker_ip>

The output of this command is similar to that of the listing command, but the very last line gives a summary of how many connections were deleted.

Once that was done, the attack was stopped.  It wasn't stopped completely, of course: the attacker's packets were still hitting the server, but the firewall was dropping them before they could reach the application, so the threat was neutralized. Some bandwidth was still being consumed by the attacker, and unfortunately there is little you can do about that on the server itself; you'd have to contact your ISP to block that IP completely.  Some hosted services and co-location facilities do offer that type of service, so it may be worth pursuing that avenue.

Hopefully this information can help other people stop similar DDoS and brute-force attacks quickly.  The methods we used apply to nearly any application or operating system, even if the tools differ.  One of the key things is how well your team can communicate during this type of event; we were all at the office at the time, so that was incredibly simple.  The next important step is to analyse the threat by checking the logs and even starting a packet capture, either for immediate analysis or for full forensics later.  Then decide how to neutralize the threat, and whether that can reasonably be done while in production.  Finally, it's very important to assess whether the attack was truly stopped and whether the attacker actually got anything.


Thursday, September 25, 2014

Shell Shock Vulnerability: How to test and patch your Debian and Ubuntu machines against the "Shell Shock" vulnerability

'Shell Shock' is a very new vulnerability that has just come to light, and it seems like it might be pretty bad.  One of my co-workers just told me about it.

Luckily the fix is pretty straightforward:

$  sudo apt-get update; sudo apt-get install bash

If your system is vulnerable, the following command will print out 'vulnerable':

$  env var='() { ignore this;}; echo vulnerable' bash -c /bin/true

After you have patched your system, the same test will produce different output:

$ env var='() { ignore this;}; echo vulnerable' bash -c /bin/true
bash: warning: var: ignoring function definition attempt
bash: error importing function definition for `var'
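The same check can be wrapped in a small sketch that reports the result either way (the messages are my own wording, not bash output):

```shell
#!/bin/sh
# Report whether bash executes code smuggled in through an
# environment-variable function definition (the Shell Shock bug).
if env x='() { :;}; echo vulnerable' bash -c true 2>/dev/null | grep -q vulnerable; then
    echo "bash is vulnerable to Shell Shock"
else
    echo "bash appears patched"
fi
```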

Happy patching!

Friday, September 5, 2014

GRUB Options to change on your Ubuntu 12.04 Servers

A recent set of policy changes from Canonical (the maintainers of the Ubuntu distribution) have changed the default way that the GRUB boot loader behaves.  The changes hide the boot loader menu by default during the boot process, and they have also changed the default timeout.

I think that their motivations make perfect sense for the average user's desktop PC or the tablet market, but in my not-so-humble opinion it's the wrong direction for a server.  I want the GRUB menu to come up by default and wait for a few seconds.  I want to see all the kernel messages as the machine boots; this is absolutely critical for troubleshooting problems during the boot process. If you are using a remote KVM, a longer delay can really help with latency issues, and bringing up the menu automatically helps with many problems I've run into with KVM keyboards and the slow video-mode changes that can occur with some remote KVM models.

On the servers I'm responsible for, I use the following settings to make sure that GRUB runs in text mode, always brings up the menu, and waits for 10 seconds before booting the default option:

GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_INIT_TUNE="480 440 1"
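A sketch of the /etc/default/grub settings that implement the text-mode, always-visible, 10-second menu described above (the values are illustrative, so adapt them to your hardware):

```
# /etc/default/grub (sketch)
# Always show the menu: disable the hidden-timeout behaviour
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
# Wait 10 seconds at the menu before booting the default entry
GRUB_TIMEOUT=10
# Use the text console instead of the graphical terminal
GRUB_TERMINAL=console
# Show kernel messages during boot (drop "quiet splash")
GRUB_CMDLINE_LINUX_DEFAULT=""
```

After editing, run 'sudo update-grub' so the changes are written to /boot/grub/grub.cfg.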