Wednesday, April 16, 2014

Monitoring VPN Logins with Logstash

Most of us have heard the story about the programmer who outsourced his job to China. He was caught when an internal security review noticed logins on his account coming from an IP address originating in China.

While this story was making the rounds in our IT dungeon, I was asked if there was a way to track our VPN logins to see if we had any unusual login sites. After a little work I ended up with a pretty decent dashboard showing where our remote users were logging in from.
https://drive.google.com/file/d/0B1QvyLibr-GKZ1F2alIxbU1Sd0k/edit?usp=sharing

Our dashboard shows successful and failed logins over time and the location of the remote user logging in.

First things first: Logstash. I love this tool! If you aren't using it, you need to start. Log monitoring and analysis is a tough nut to crack, but Logstash is a great tool for making a go of monitoring gigabytes of log data.

Logstash does have a pretty steep learning curve. I would recommend going through the documentation on the Logstash site to familiarize yourself with how it works. I will show how to set up Logstash monitoring for a SonicWall VPN server, but this could easily be modified for other systems.

Logstash can run on Linux or Windows. In my case I will be installing it on an Ubuntu 12.04 server. Check the Elasticsearch download page for the Windows installers. Before getting started, make sure you have Java on the server. From the command line:
java -version

You need to have either the OpenJDK or Oracle runtime installed.

  1. Get a copy of Logstash.
  2. Download the archive:
    wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.0.tar.gz
  3. Extract the program.
    tar -zxvf logstash-1.4.0.tar.gz 
  4. Move the extracted folder to the /opt folder.
    mv logstash-1.4.0 /opt/logstash
  5. Logstash can store the resulting log data in a number of back ends, but it's built around the Elasticsearch search database and the Kibana dashboard. Download Elasticsearch first:
    wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.0.deb
  6. Now install Elasticsearch:
    dpkg -i elasticsearch-1.1.0.deb
  7. Then start Elasticsearch:
    service elasticsearch start
  8. Create the following directories:
    mkdir /etc/logstash
    mkdir /var/log/logstash
  9. Create Logstash Config:
    nano /etc/logstash/logstash.conf
  10. Add the following to the config file (or download it from Git):
    # logstash.conf

    input {
      syslog {
        type => "Sonicwall"
        port => 5514
      }
    }

    filter {
      if [type] == "Sonicwall" {
        kv {
          exclude_keys => [ "c", "id", "m", "n", "pri", "proto" ]
        }
        grok {
          match => [ "src", "%{IP:srcip}:%{DATA:srcinfo}" ]
        }
        grok {
          match => [ "dst", "%{IP:dstip}:%{DATA:dstinfo}" ]
        }
        mutate {
          remove_field => [ "srcinfo", "dstinfo" ]
        }
        geoip {
          add_tag => [ "geoip" ]
          source => "srcip"
          database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
        }
      }
    }

    output {
      elasticsearch { host => "localhost" }
    }
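    The grok patterns above split the src value into an IP and a port. As a quick sanity check outside Logstash, the same extraction can be sketched with standard shell tools against a made-up SonicWall-style message (the field values here are illustrative, not real firewall output):

```shell
# A made-up SonicWall-style syslog body; only the src= field matters here.
msg='id=firewall m=1197 msg="SSLVPN remote user login allowed" src=198.51.100.7:49152 dst=203.0.113.1:443'

# Mimic %{IP:srcip} from the grok pattern: take the IP before the colon.
srcip=$(printf '%s\n' "$msg" | sed -n 's/.*src=\([0-9.]*\):.*/\1/p')
echo "$srcip"
```

    In the real pipeline Logstash does this for every event, leaving a srcip field that the geoip filter can resolve to a location.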
  11. Now let's give it a test run. From the command line type:
    /opt/logstash/bin/logstash --config /etc/logstash/logstash.conf --log /var/log/logstash/logstash.log
  12. Once running you won't see much happen, since nothing is configured to send logs to Logstash yet. So now we need to set up the SonicWall to send its logs to Logstash.
  13. Log into your SonicWall and go to Log, then Syslog. Click Add. Set the IP address of the Logstash server and set the port to 5514.
  14. Then go to Categories under Log on the SonicWall, choose VPN Client activity, and check the Syslog option.
  15. If everything is going right, logs will now be flowing into Logstash. After a few VPN logins we can check whether data is being collected. From a web browser enter the following:
    http://<ip address of logstash server>:9200/_search?pretty
    The returned results will show Elasticsearch's JSON response containing the SonicWall events:
    {
      "took" : 1,
      "timed_out" : false,
      "_shards" : {
        "total" : 20,
        "successful" : 20,
        "failed" : 0
      },
      "hits" : {
        "total" : 63855,
        "max_score" : 1.0,
        "hits" : [ {
          "_index" : "logstash-2014.04.16",
          "_type" : "Sonicwall",  
  16. Finally we need to install Kibana, the web interface that will display the VPN logins. Download the Kibana web software from Elasticsearch:
    wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz
  17. Kibana can be run from the Logstash server if you have Apache or another web server installed, or you can run it from another web server and point it at the Elasticsearch service. In our case we will install Kibana on the same server:
    tar -zxvf kibana-3.0.1.tar.gz
  18. Then move the Kibana install to the Apache web root:
    mv kibana-3.0.1 /var/www/kibana
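    On Ubuntu 12.04 the default Apache document root is /var/www, so the move above is usually enough. If your document root differs, a minimal Apache 2.2 alias sketch (the paths here are assumptions matching the steps above) would look like:

```
Alias /kibana /var/www/kibana
<Directory /var/www/kibana>
    Order allow,deny
    Allow from all
</Directory>
```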
  19. Then from a web browser, navigate to the Kibana interface:
    http://<ip address of logstash>/kibana
  20. Out of the box Kibana has an interface for Logstash, but we need to configure a new dashboard to display the VPN logins. From Kibana, import the template: click the folder icon in the upper right-hand corner, select Advanced, and under Gist add this URL:
    https://gist.github.com/jdnow/10901737
  21. Check out your new VPN Logins dashboard! Make sure to save your new configuration once done.


Tuesday, August 27, 2013

Windows Disk Monitor for Zabbix


With Windows 2012 clusters, Microsoft introduced a new file system: CSVFS. I found that the Zabbix agent currently does not return this file system when using vfs.fs.discovery low-level discovery. Additionally, low-level discovery will not work on volumes that have been mapped to a directory instead of a drive letter.

So I've written a PowerShell script that handles finding file systems and monitoring drive capacity and free space:

param(
    [Parameter(Mandatory=$False)]
    [string]$QueryName,
    [string]$FSName
)

if ($QueryName -eq '') {
   
    $colItems = gwmi win32_volume | select Name,FileSystem

    # Build the Zabbix low-level discovery JSON. Items are joined with
    # commas so the list does not end with a trailing comma (invalid JSON).
    $lines = @()

    foreach ($objItem in $colItems) {
        # Encode volume GUID paths so they pass cleanly through item keys:
        # '\\?\Volume{GUID}\' becomes 'Volume-GUID+'.
        $objItem.Name = $objItem.Name -replace "\\\\\?\\Volume{","Volume-"
        $objItem.Name = $objItem.Name -replace "}\\","`+"
        $objItem.Name = $objItem.Name -replace "\\","/"
        $lines += " { `"{#FSNAME}`":`"" + $objItem.Name + "`" , `"{#FSTYPE}`":`"" + $objItem.FileSystem + "`" }"
    }

    write-host "{"
    write-host " `"data`":["
    write-host ($lines -join ",`n")
    write-host " ]"
    write-host "}"
}

else {
    #$FSName = [regex]::escape($FSName)
    $FSName = $FSName -replace "Volume-","\\\\?\\Volume{"
    $FSName = $FSName -replace "\+","}\\"
    $FSName = $FSName -replace "/","\\"
    switch ($QueryName)
        {
        ('Capacity') {$Results = gwmi win32_volume -Filter "name = '$FSName'" | select Capacity | Format-Table -HideTableHeaders -AutoSize}
        ('FreeSpace') {$Results = gwmi win32_volume -Filter "name = '$FSName'" | select FreeSpace | Format-Table -HideTableHeaders -AutoSize}
        default {$Results = "Incorrect Command Given"}
        }
    $Results = $Results | Out-String
    $Results = $Results.trim()
    Write-Host $Results

}
https://github.com/jdnow/ZabbixFiles/blob/master/Scripts/WinDrives_Status.ps1
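The -replace calls in the script encode volume GUID paths so they can travel through Zabbix item keys without backslashes or braces. The encoding can be sketched in shell with a made-up GUID:

```shell
# Encode a Windows volume GUID path the way the discovery branch does:
# '\\?\Volume{GUID}\' becomes 'Volume-GUID+', remaining '\' become '/'.
name='\\?\Volume{12345678-abcd-ef00-1122-334455667788}\'
enc=$(printf '%s\n' "$name" | sed -e 's/^\\\\?\\Volume{/Volume-/' -e 's/}\\$/+/' -e 's/\\/\//g')
echo "$enc"
```

The item-query branch of the script simply reverses these substitutions before feeding the name back to WMI.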

I have also tested it running on Windows 2008 servers.

Settings for zabbix agent config:
UserParameter=windisk.discover,powershell.exe -NoProfile -ExecutionPolicy Bypass -file "C:\Program Files\Zabbix\WinDrives_Status.ps1"
UserParameter=windisk.check[*],powershell.exe -NoProfile -ExecutionPolicy Bypass -file "C:\Program Files\Zabbix\WinDrives_Status.ps1" $1 "$2"

Zabbix Discovery Rule:

Zabbix Item Prototypes:
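Based on the script's parameters, the item prototypes would use keys along these lines (the exact keys here are an assumption, matching the UserParameter definitions above):

```
windisk.check[Capacity,"{#FSNAME}"]
windisk.check[FreeSpace,"{#FSNAME}"]
```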


Auto Discovery of Windows 2012 Volumes:

Friday, August 23, 2013

Adding a SSL Cert to a Barracuda Device

This week has been a fun one, as I've been trying to get a number of SSL certificates renewed and deployed to our servers. (I thought having them all renew at the same time would be efficient; it ended up being a big pain.)

We needed to get an SSL cert on our Barracuda Spam Firewall so we could enable TLS encryption. I found that adding the cert was a bit more difficult than the documentation provided by Barracuda suggested. The key to adding a Trusted Cert to a Barracuda was having the cert in the PEM file format, something I had not done before.

We have our certs stored as a PFX file, with the private and public keys protected by an encrypted password. As the Barracuda would not take a PFX file, I had to convert it from a PFX file to PEM files: one for the public key and one for the private key.

  1. To separate the certificate, I used OpenSSL. Download and install OpenSSL.
  2. To extract the private key from the PFX file:
    openssl.exe pkcs12 -in SSLCert.pfx -nocerts -out privateKey.pem
  3. To extract the public key from the PFX file:
    openssl.exe pkcs12 -in SSLCert.pfx -clcerts -nokeys -out publicCert.pem
  4. To remove the password from the private key file:
    openssl.exe rsa -in privateKey.pem -out private.pem
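The conversion steps above can be exercised end to end with a throwaway self-signed certificate. Every file name and the pass:secret password here are illustrative, not from the original cert:

```shell
# Create a throwaway key pair and package it as a password-protected PFX.
openssl req -x509 -newkey rsa:2048 -keyout demo-key.pem -out demo-cert.pem \
  -days 1 -nodes -subj "/CN=demo.example.com" 2>/dev/null
openssl pkcs12 -export -in demo-cert.pem -inkey demo-key.pem \
  -out SSLCert.pfx -passout pass:secret

# Steps 2-4 from above: extract the keys, then strip the password.
openssl pkcs12 -in SSLCert.pfx -passin pass:secret -nocerts \
  -out privateKey.pem -passout pass:secret
openssl pkcs12 -in SSLCert.pfx -passin pass:secret -clcerts -nokeys \
  -out publicCert.pem
openssl rsa -in privateKey.pem -passin pass:secret -out private.pem

# Sanity check: the moduli match when the key and cert belong together.
openssl rsa -noout -modulus -in private.pem | openssl md5
openssl x509 -noout -modulus -in publicCert.pem | openssl md5
```

If the two md5 lines differ, the private key and certificate do not belong to the same pair and the Barracuda upload will fail.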
To add the cert to the Barracuda, log into the device and go to Advanced / Secure Administration.

  1. Change the SSL Certificate Configuration to Trusted.
  2. For Upload Trusted Certificate, set the certificate to publicCert.pem.
  3. Set the Certificate Password, if you set one.
  4. For Private Key, set the certificate to private.pem.
  5. If there is an intermediate certificate, set the Certificate Chain Bundle to the intermediate certificate.
  6. Press Upload Certificate Information and then Save Changes.