Consistent logging – good separators

After posting my paper about remote log injection, most of the feedback I received was about how “bad” these tools (e.g. DenyHosts, BlockHosts, etc.) are and how bad the idea of log-based automatic response is.

Some people even said that the best approach is to just ignore these logs, since they are just noise. Yes, ignore their sshd/ftpd logs… Of course, I don’t share this opinion. SSH/FTP scans are real attacks, and some systems end up being compromised because of them (well, if they had good passwords, they wouldn’t, but that’s another discussion). Not only should we monitor for failed password attempts, but also for failed passwords followed by a success, every successful login, etc. Again, that’s another discussion.

Anyway, instead of blaming these tools, I would also put the “blame” on the applications that generate log messages without any good formatting or consistency. In the case of ssh, it uses spaces as a “field separator”, while the user name itself can contain spaces. Not a good choice at all. The same applies to vsftpd, which uses a bracket as a separator while, again, user names can contain brackets.
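To make the ambiguity concrete, here is a minimal sketch (my illustration, not code from any of these tools) of how a space-separated sshd line defeats a simple pattern once an attacker picks a user name containing “ from ”:

import re

# Two sshd "Invalid user" lines. In the second one, the attacker chose
# the user name "admin from 10.0.0.1", which sshd logs verbatim.
lines = [
    "Invalid user admin from 192.168.2.56",
    "Invalid user admin from 10.0.0.1 from 192.168.2.56",
]

# With spaces as the only separator, a simple pattern cannot tell where
# the user name ends and the real source address begins.
pattern = re.compile(r"Invalid user (.*?) from (\S+)")

for line in lines:
    user, ip = pattern.search(line).groups()
    print(f"user={user!r} ip={ip}")
# -> user='admin' ip=192.168.2.56
# -> user='admin' ip=10.0.0.1   (the attacker-chosen address)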

Logs are there for a reason: to be analyzed, monitored, etc. If they are not consistent, or if they can easily be modified via remote log injection, they lose their value. That’s why a lot of people just ignore them…

What do I mean by consistent logging? I mean a log that is well defined and uses good separators, making it easy for anyone to parse and automatically analyze. A good example is ProFTPD’s logs:

proftpd[12564]: test (hostname[192.168.1.1]) - USER xx: Login successful

Why do I think it is well formatted? Well, it starts with the hostname, followed by the IP address. These are not user-provided input, so they cannot influence the other fields. The second good point is that the user name terminator is a “:” (colon), which is not a valid character for user names. Because of that, log analysis tools can use a simple regex looking for “:” as the end of the user name. Third, it has a descriptive message of the event (in this case, a successful login)…
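For contrast, a quick sketch of how the colon terminator simplifies parsing (assuming, per the claim above, that ProFTPD does not allow “:” in user names):

import re

line = ("proftpd[12564]: test (hostname[192.168.1.1]) "
        "- USER xx: Login successful")

# The user name is terminated by ":", a character the server does not
# allow in user names, so matching up to the first colon after "USER"
# is unambiguous no matter what the user name contains.
m = re.search(r"USER (.*?): (.*)", line)
if m:
    user, event = m.groups()
    print(f"user={user!r} event={event!r}")
# -> user='xx' event='Login successful'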

There is a lot more on consistent logging that I would like to talk about. Hopefully, when CEE is really out, it will address some of these issues. Comments?

Remote log injection paper

I just finished an article about “Remote log injection” that, among other things, exposes some vulnerabilities in DenyHosts, Fail2ban and BlockHosts that can lead to the arbitrary injection of IP addresses into /etc/hosts.deny. To make it more “interesting” (i.e. worse), not only can IP addresses be added, but also the wildcard “all”, causing these tools to block the whole Internet (bypassing white lists).

The paper is available here: https://dcid.me/texts/attacking-log-analysis-tools.html

Snippet from the article:

The purpose of this article is to point out some vulnerabilities that I found in open source log analysis tools aimed at stopping brute force scans against SSH and FTP services. Since these tools also perform active response (automatically blocking the offending IP address), they make good examples. However, any tool that parses logs can be equally vulnerable.

We will show three 0-day denial-of-service attacks caused by remote log injection on BlockHosts, DenyHosts and fail2ban.

This paper talks about remote log injection, where an external attacker can modify a log based on the input he provides to an application (in our case, OpenSSH and vsftpd). By controlling what the application logs, we are able to attack these log analysis tools. We are not talking about local log modification or “syslog injection”.
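To illustrate the class of problem (a simplified sketch of my own, not the exact payloads or code from the paper), consider a hypothetical blocker that extracts “from <address>” naively and appends it to hosts.deny:

import re

# Hypothetical, naive blocker in the style of the tools discussed. It
# extracts "from <address>" with a simple pattern and appends it to a
# hosts.deny-style file. Real tools are more elaborate; this only
# shows the failure mode.
def naive_block(line, deny_file="hosts.deny.test"):
    m = re.search(r"Invalid user .*? from (\S+)", line)
    if m:
        with open(deny_file, "a") as f:
            f.write(f"ALL: {m.group(1)}\n")

# The attacker tries to log in as user "x from ALL", so the injected
# token "ALL" is what the pattern extracts -- and the deny file now
# contains the TCP wrappers wildcard that blocks every address.
naive_block("Invalid user x from ALL from 192.168.2.56")
print(open("hosts.deny.test").read())   # -> ALL: ALL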

Sqlmanager scans

I have three honeypots looking for web attacks/scans, and lately all three of them have detected scans looking for sqlmanager (mysqlmanager). It is the first time I have seen scans for it, and I couldn’t find any reference to new vulnerabilities related to it. I changed my honeypots to respond successfully to these scans to be able to see what the exploits are all about.



Received From: hn1->/var/log/httpd/error_log
Rule: 30114 fired (level 10) -> "Multiple attempts to access non-existent files (web scan) from same source."
Portion of the log(s):

[Mon May 28 15:56:00 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/p
[Mon May 28 15:56:00 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/mysqlmanager
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/sqlmanager
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/pma2006
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/PMA2006
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/dbadmin
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/admin
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/PMA
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/web
[Mon May 28 15:55:59 2007] [error] [client 75.xx.xx.xx] File does not exist: /var/www/html/db

–END OF NOTIFICATION

Any ideas out there? Did I miss something?

Log analysis using Snort?

On the Snort mailing list there was a thread about detecting authentication failures (on ssh, apache, ftp, etc.) using Snort. I love Snort, but using a NIDS (network-based IDS) for this kind of stuff is using the right tool for the wrong reasons (yes, we could even write a syslog parser with it).

That’s why we need LIDS (log-based intrusion detection). Check out my reply to this thread:

That’s what I would call using the right tool for the wrong reasons (or something like that).

The provided sshd signature does not detect brute force attacks, but multiple connections from the same
source ip (failed or not). The HTTP signature can easily generate false positives since you are just
looking for the content “404”, and it would not work with SSL…

My point is: why not use log analysis to detect failed logins (and brute force attacks)? sshd, apache,
apache-ssl, ftp, telnet, etc., etc., all log every failed login attempt (and every successful one).

By using log analysis you can reliably detect every failure and you don’t need to worry about encrypted
traffic. Plus, you can do more useful stuff, like detecting multiple failed login attempts followed
by a success (successful brute force attack) and monitoring every successful login to your systems.

I wrote a paper a while back with some patterns that we can look for in authentication logs:

http://www.ossec.net/en/loganalysis.html

And if you are looking for an open source tool to monitor all your logs (from Apache to sshd, proftpd,
Windows logs, etc., etc.), with the ability to execute active responses based on them (blocking IPs,
disabling users, etc.), you can try ossec*:

http://www.ossec.net/

http://www.ossec.net/wiki/index.php/FAQ

*note that I am the author of this tool.

hope it helps.
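The failed-logins-followed-by-a-success correlation mentioned in the reply is easy to sketch. Here is a minimal, illustrative version in Python (thresholds and patterns are mine, not OSSEC’s actual rules):

import re
from collections import defaultdict

# Count sshd failures per source address and flag a success that
# arrives after several failures from the same address.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")
ACCEPT = re.compile(r"Accepted \w+ for (\S+) from (\S+)")

def scan(lines, threshold=3):
    failures = defaultdict(int)
    for line in lines:
        if m := FAILED.search(line):
            failures[m.group(1)] += 1
        elif m := ACCEPT.search(line):
            user, ip = m.groups()
            if failures[ip] >= threshold:
                print(f"ALERT: {failures[ip]} failures then a success "
                      f"for {user} from {ip}")
            failures[ip] = 0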

CEE – Logging standard

If you are not on the log analysis mailing list, you are missing a good discussion regarding the effort to create a new logging standard, CEE (Common Event Expression). MITRE is in charge of the process, but it is probably sponsored by LogLogic [1], since they were the first ones to report about it.

Before I go any further, I would like to say that I am very interested in this initiative and that I have already contacted MITRE to be part of the CEE working group. Unfortunately, I am not very optimistic that it is going to be widely adopted (I hope I am wrong).

First of all, it will require significant changes to all major applications, and if the protocols are not very well designed, no one is going to use them.

Secondly, the protocol must be simple enough to be fast and non-blocking (like syslog), but still reliable, with support for encryption, etc.

Thirdly, I am always worried about protocols designed by security people. Most of them have no software engineering experience, and if CEE looks anything like IDMEF or SDEE, it will go nowhere.

Anyway, despite my lack of optimism, I will still contribute to it, and if it gets past the design phase, I will volunteer to write free libraries (LGPL or BSD licensed) to support it.

If you want more information, check out the following blog entries (by Anton Chuvakin and Raffy):

Finally, Common Event Expression (CEE) is Out!!!
CEE brochure
Standard Logging Format – Common Event Expression (CEE)

[1] Edit to add (Apr 28 2007): Looks like I spoke too soon (actually without any basis) in saying that LogLogic is sponsoring CEE. Thanks, Raffy, for pointing it out in the comments.

Finding ADS on NTFS

ADS (Alternate Data Streams) is a “feature” of NTFS (the file system used on Windows 2000, XP, etc.) that permits files to be completely hidden from the system. You can read more about ADS in these two links: windowsecurity.com ADS and lads.

Currently I am working on porting rootcheck (an anomaly detection module) to Windows, and one of the things it needs to detect is files hidden using NTFS ADS. However, so far, I couldn’t find any open source tool that detects them (yes, there are freeware programs out there, but no source code). Most of the articles I read point to lads, which is free, but not open source.

So, to fill this gap, I am releasing a beta version of a small tool (ads_dump) that scans a given directory and prints every ADS found. It is a standalone tool, but it will soon be included in ossec.

You can download it from here and the source code (GPL v2) from here.

Using this tool is very simple: just execute it and pass the directory to scan as an argument. It will print every ADS found. Example:

C:\>ads_dump.exe
ads_dump.exe dir

C:\>echo hidden > C:\temp\a:hidden
C:\>echo hidden > C:\temp\a:hidden2
C:\>ads_dump.exe C:\temp
Found NTFS ADS: 'C:\temp\a:b'
Found NTFS ADS: 'C:\temp\a:hidden'
Found NTFS ADS: 'C:\temp\a:hidden2'
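For those curious how such a scan can be implemented, here is a rough Python sketch using the Win32 FindFirstStreamW API via ctypes. Two assumptions to note: FindFirstStreamW requires Windows Vista or later, and I am not claiming this is how ads_dump works internally (code of that era typically used BackupRead):

import ctypes
import os
from ctypes import wintypes

# WIN32_FIND_STREAM_DATA: stream size plus a ":name:$DATA" stream name.
class WIN32_FIND_STREAM_DATA(ctypes.Structure):
    _fields_ = [("StreamSize", ctypes.c_longlong),
                ("cStreamName", ctypes.c_wchar * 296)]  # MAX_PATH + 36

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.FindFirstStreamW.restype = wintypes.HANDLE
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

def print_ads(path):
    # Enumerate all data streams of one file. Info level 0 is
    # FindStreamInfoStandard, the only documented level.
    data = WIN32_FIND_STREAM_DATA()
    handle = kernel32.FindFirstStreamW(path, 0, ctypes.byref(data), 0)
    if handle == INVALID_HANDLE_VALUE:
        return
    try:
        while True:
            # "::$DATA" is the normal, unnamed stream; anything else
            # is an alternate data stream.
            if data.cStreamName != "::$DATA":
                print(f"Found NTFS ADS: '{path}{data.cStreamName}'")
            if not kernel32.FindNextStreamW(handle, ctypes.byref(data)):
                break
    finally:
        kernel32.FindClose(handle)

def scan(directory):
    for root, _dirs, files in os.walk(directory):
        for name in files:
            print_ads(os.path.join(root, name))

scan(r"C:\temp")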

*Please note that it is still in beta (comments and suggestions are welcome). It will also be included in the next version of ossec as part of the Windows anomaly detection module.

Eight daily steps to a more secure network

Michael Mullins wrote an interesting article with eight daily steps to secure your network. What I really liked is that at least 3 of these 8 steps involve looking at logs. He mentions looking at antivirus, security, and IDS/firewall logs… great suggestions! However, monitoring these logs MANUALLY every day, as he suggests, can be very hard and time-consuming… In addition, just by browsing the logs you will miss a lot of good information and correlations that only an automated tool can find. What about using a tool designed for this purpose? OSSEC can analyze every log he mentions and much more…

Security Monitoring

Richard Bejtlich posted an excellent entry on his blog (TaoSecurity) about the difference between alert-centric tools and Network Security Monitoring (NSM). He says:

Network Security Monitoring (NSM) is different. Generating statistical, session, full content, and alert data gives analysts some place to go when they get an alert — or when they want to do manual analysis… With NSM, an alert is the beginning of the investigation, not the end..

He also points out that with alert-centric tools (including most SIMs), the investigation ends with the alert.

I agree with everything he says, but I would like to add an important point to his NSM approach: log data. Even with full session/content/alert data, you cannot see everything that is happening. One of the easiest ways to exemplify that is with encrypted protocols. How do you know what is happening in a specific sshd session? What about SSL traffic? In addition to that, what happens at the host level? Kernel or process errors are not going to be seen at all. I will give some examples below:

1- SSHD traffic
You see an SSH connection to your system; how do you know what happened? At the network level there is no difference between a failed login attempt (three password tries) and a quick successful ssh session. However, if you look at your logs, you have:

Jan 10 11:23:02 enigma1 sshd[10732]: Failed password for invalid user tst from 192.168.2.56 port 10395 ssh2

You know that the authentication failed. However, failure events are not the only thing to watch. For example, OSSEC uses FTS (first time seen) to alert on first-time events. Whenever a user logs in to a system that he/she has never logged in to before, you get the following:

Received From: enigma2->/var/log/auth.log
Rule: 10100 fired (level 4) -> “First time user logged in.”
Portion of the log(s):
Jan 10 11:29:36 enigma2 sshd[30291]: Accepted publickey for vuser from 192.168.2.67 port 52636 ssh2

It can also alert on multiple failed login attempts, something you would never know for sure from the network level:

Location: enigma->/var/log/authlog
Src IP: 85.25.147.156
SSHD brute force trying to get access to the system.
Jan 7 23:33:17 enigma sshd[392]: Failed password for invalid user ftp from 85.25.147.156 port 49786 ssh2
Jan 7 23:33:17 enigma sshd[392]: Invalid user ftp from 85.25.147.156
Jan 7 23:33:16 enigma sshd[23470]: Failed password for invalid user admin from 85.25.147.156 port 49717 ssh2
Jan 7 23:33:16 enigma sshd[23470]: Invalid user admin from 85.25.147.156
Jan 7 23:33:13 enigma sshd[29767]: Failed password for invalid user mysql from 85.25.147.156 port 49583 ssh2

And more importantly, how do you know what happens AFTER a valid user gets access to the system? Did he try to increase his privileges using su/sudo? Did he succeed? I don’t think there is any way to know besides monitoring your logs. Using FTS for su/sudo, we know whenever a user succeeds (for the first time) or fails using these commands.

2007 Jan 10 11:49:03
Rule: 5403 (level 4) -> ‘First time user executed sudo.’
Jan 10 11:47:26 localhost sudo: dcid-user : TTY=pts/2 ; PWD=/home/dcid ; USER=root ; COMMAND=/bin/dir
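The FTS idea itself is simple. A minimal conceptual sketch in Python (keys and storage are illustrative, not OSSEC’s actual implementation):

import shelve

# Keep a persistent set of keys already seen; alert only on new ones.
def fts_alert(key, description, db_path="fts.db"):
    with shelve.open(db_path) as seen:
        if key not in seen:
            seen[key] = True
            print(f"ALERT (first time seen): {description}")

# Key on user@host for logins, or user:command-class for su/sudo.
fts_alert("vuser@enigma2", "First time user vuser logged in to enigma2")
fts_alert("dcid-user:sudo", "First time user dcid-user executed sudo")
fts_alert("vuser@enigma2", "already seen, so no alert this time")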

2- SSL traffic
Most NIDS (network-based intrusion detection systems) cannot look at SSL-encrypted traffic, but we use SSL everywhere now. How do we monitor this traffic? Using log analysis you can see exactly what requests the client made and whether they were successful. I always run Snort (with sguil) on my network, and it misses some attacks that OSSEC, using log analysis, alerts me about. The following is a good example:

Received From: (wserver) 192.168.2.0->/var/log/apache-ssl/ssl.error_log
Rule: 30117 fired (level 10) -> “Invalid URI, file name too long.”
Portion of the log(s):
[Tue Jan 9 22:16:07 2007] [error] [client 142.162.1.79] request failed: URI too long

And I have numerous other cases where logging was the only way to know what happened (e.g. SQL injection or other web application attacks against SSL servers).

A few more examples are available in a paper that I wrote a while back: log analysis for intrusion detection.

To conclude, Richard also pointed out in the past: “I guarantee I could determine if the system was compromised, and by how many parties, faster using NSM techniques than manual log analysis”. I completely agree with him. Manual log analysis is very hard to do and provides little value for the time spent… This is why I wrote OSSEC: to provide an automated log analysis tool that is easy to use and extend, delivering alerts in a useful way that adds a lot of value to a security team. Together with NSM, log analysis can detect a compromised system much faster than without it…

Is Open Source Rootkit Detection Behind The Curve?

The guys from Matasano posted an entry on their blog about the current state of open source rootkit detection. While I agree that we are way behind the latest rootkit technologies (especially on Windows), if you look at the publicly known Unix-based rootkits, we are not that bad off. Most of them only use basic system call redirection and can be detected by ossec/rootcheck. It looks like very little has been done focusing on Unix-based systems lately…

Below is my reply on their blog:

I think the tool you mentioned that does the connect/bind + kill stuff is rootcheck (now part of ossec). It basically does four things to detect anomalies in the system (which may indicate the presence of a rootkit):

1- Attempts to bind to every TCP and UDP port. If it can’t bind to a port (the port is in use), we check whether netstat is reporting it.

2- Attempts to kill(0), getsid() and getpgid() every process (from 1 to the maximum PID). We compare the output of these three system calls with ps and /proc (where available).

3- Compares the st_nlink count from stat() with the count from readdir().

4- Attempts to read every file on the system and compares the size read with the one reported by stat().

I know these techniques can be evaded, but they are successful against most of the publicly known Unix-based rootkits (99% are still based on system call redirection). Rootcheck/ossec also has the rootkit signatures stuff…

In addition to that, OSSEC also does file integrity checking and log analysis to complete its HIDS tasks…

In my opinion, the best way to protect against rootkits is to have an updated and “as secure as possible” system. However, as soon as an attacker finds a way in and gets root (kernel) access, the battle becomes much harder… Early-warning systems to detect the attack (not the rootkit) may be the only thing left (anything from log analysis to integrity checking and NIDS).
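As a rough illustration of technique 2 above (and only that technique), here is a Linux-only Python sketch that probes PIDs with kill(0) and compares the result against /proc; rootcheck itself also cross-checks getsid()/getpgid() and the ps output:

import os

MAX_PID = 32768  # illustrative; reading /proc/sys/kernel/pid_max is better

def find_hidden_pids():
    for pid in range(1, MAX_PID + 1):
        try:
            os.kill(pid, 0)          # signal 0: existence check only
        except ProcessLookupError:
            continue                 # no such process
        except PermissionError:
            pass                     # process exists, owned by someone else
        # A PID that answers the probe but is missing from /proc suggests
        # a process hidden by system call redirection.
        if not os.path.isdir(f"/proc/{pid}"):
            print(f"Possible hidden process: pid {pid}")

find_hidden_pids()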

Using sshv1 vs. sshv2

It has become common knowledge that everyone should use SSH protocol version 2 and, whenever possible, disable support for version 1. The initial version of the protocol has some design flaws that make it vulnerable to certain attacks (check out dsniff). However, I just read the following comment from Theo de Raadt on the OpenBSD misc list:

I am actually more worried about security problems in the protocol 2
code which is roughly 4-5x as complicated.  People's fears are
misplaced.  But it is fun to ride a meme, isn't it.

I hope he is not encouraging people to use version 1…