chickencode: (Default)
We do quite a lot of things through proxies at my place of employment, mainly so we don't get blacklisted: the behavior of our malware/phish processing systems often looks like we're doing some pretty shady stuff.

I had a use case where one of our Nagios checks was designed to hit a vendor API endpoint through our local on-box proxy via specific ports and determine whether the endpoint was reachable. This caused some issues: sometimes the endpoint would be down for a very small window of time, or the proxy was misbehaving for some reason. Either way, I was getting up in the middle of the night for false positives. That makes for an angry systems engineer, so I thought I'd just rewrite the check.

It may be useful for anyone who needs to check the state of something multiple times before sending off the Nagios exit response. Instead of checking for a failure once and alerting, it checks for 3 consecutive failures.

The "Nagios Plugin" script

#!/bin/bash
# Variable initialization. The endpoint URL did not survive in the original
# post - it goes in place of the empty '' argument.
frequency=0
http_response=$(curl -s -o /dev/null -w "%{http_code}" --proxy localhost:$1 '')

# If the HTTP response isn't 200, re-check up to 3 consecutive times,
# incrementing the failure counter each time. A success breaks the loop.
while [ "$http_response" != 200 ]; do
        echo "$http_response"
        frequency=$((frequency + 1))
        if [ "$frequency" -eq 3 ]; then break; fi
        sleep 60
        http_response=$(curl -s -o /dev/null -w "%{http_code}" --proxy localhost:$1 '')
done

# Compare the counter value. If it is less than 3 the check corrected itself
# and prevented a false positive; otherwise it's probably a real alert.
if [ "$frequency" -eq 3 ]; then
        echo "Port $1 not reachable - $http_response response"
        exit 2
else
        echo "Port $1 reachable"
        exit 0
fi
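To see why this suppresses false positives, here's a toy run of the same counter pattern with a stubbed check that fails twice and then recovers; the `check` function is just a stand-in for the curl probe:

```shell
#!/bin/sh
# Toy demonstration of the retry-counter pattern: the stub "fails" twice
# and then succeeds, so the counter stops at 2 and no alert fires.
attempt=0
frequency=0
check() {
    # stand-in for the curl probe: returns 503 twice, then 200
    attempt=$((attempt + 1))
    if [ "$attempt" -ge 3 ]; then
        http_response=200
    else
        http_response=503
    fi
}
check
while [ "$http_response" != 200 ]; do
    frequency=$((frequency + 1))
    if [ "$frequency" -eq 3 ]; then break; fi
    check
done
if [ "$frequency" -eq 3 ]; then
    echo "ALERT"
else
    echo "OK (recovered after $frequency failed checks)"
fi
```

Swap the stub for the real curl command and the same control flow applies.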

NRPE definition
command[check_endpoint_through_proxy]=/usr/lib64/nagios/plugins/check_endpoint_proxy "27845"

Nagios definition

define service{
        use                     remote-service,srv-pnp
        host_name               server.nrpe.response
        service_description     Endpoint Local Proxy Connection
        contact_groups          emailadmins
        max_check_attempts      3
        check_command           check_nrpe!check_endpoint_through_proxy
        }
There are a few moving parts to getting ClamAV installed and set up correctly to scan weekly, so I threw together a useful script that does it automatically on any Red Hat-based machine.
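The script itself isn't reproduced here, so below is a minimal sketch of what such a setup could look like on a RHEL-family box. The package names and log path are assumptions, and the install steps are commented out since they need root and network access:

```shell
#!/bin/sh
# Sketch of an automated ClamAV weekly-scan setup (assumed package names/paths).
# Install steps, shown commented out:
#   yum -y install epel-release
#   yum -y install clamav clamav-update
#   freshclam
# Drop a weekly scan job into cron.weekly (CRON_DIR overridable for testing).
CRON_DIR="${CRON_DIR:-/etc/cron.weekly}"
mkdir -p "$CRON_DIR"
cat > "$CRON_DIR/clamscan-weekly" <<'EOF'
#!/bin/sh
clamscan -ri / --exclude-dir='^/sys' --exclude-dir='^/proc' >> /var/log/clamscan-weekly.log 2>&1
EOF
chmod +x "$CRON_DIR/clamscan-weekly"
echo "installed $CRON_DIR/clamscan-weekly"
```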

Nagios Hearts AWS

Good ol' Nagios. The monitoring system that just won't die, and for good reason: it does what we want and does it well.

I found there were not many good resources on setting up Nagios alerts to publish to SNS topics for SMS subscriptions, so I did the legwork and am presenting it here.

Note: it's expected that you already have Nagios set up on a server and understand at least the basics of configuration definitions.

Setting up the SNS topic in AWS

The first step is to set up the SNS topic via the AWS dashboard.

create the sns topic

Make sure you grab the Topic ARN after creating it; we will need it when we create a service account and IAM policy to publish the Nagios alerts to the topic. It will look like arn:aws:sns:us-east-1:101010101010:nagios-publish, where 101010101010 is your account number.

While you're in the Topics dashboard, create subscriptions for the engineers who may want to receive Nagios alerts by clicking the "create subscription" button.

create sms subscription to the sns topic

Go to the IAM console and create the user that will have access to publish. I always use explicit naming conventions to easily manage accounts via IAM; in this case it will be service.nagios. Save the access key and secret key values that are created for the user.

Now create a policy policy.nagios-sns with the following permissions (the statement body was mangled in this copy, so it is reconstructed here: it allows publishing to the topic ARN from earlier).

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:us-east-1:101010101010:nagios-publish"
            ]
        }
    ]
}

Nagios Integration

The first thing you'll want to do is add the aws_access_key_id and aws_secret_access_key on the server Nagios runs on, under the nagios user (or whatever user the nagios service was set up to run as).

By default the user that runs the nagios service has its shell set to /bin/nologin, but you can get around that by issuing, as root: su - nagios -s /bin/bash

If you're already on an AWS instance you can just issue the command aws configure. If Nagios is not running in Amazon, install the awscli client by following the steps here. Once you add the keys they are stored in ~nagios/.aws/credentials, and the policy you created for the service account tied to these keys will apply.
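After configuring, the stored credentials file looks something like the following; the key values here are obviously placeholders, not real credentials:

```ini
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```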

We are going to create a separate contact template and command inside our Nagios configuration just for SNS alerting. Navigate to the objects directory inside your Nagios installation (my path is /usr/local/nagios/etc/objects) and edit the file templates.cfg. Search for generic-contact: we are pretty much copying everything from that default entry, changing the names, and passing a different command. Below the generic entry is our copied and edited version, named sns-contact.

# Generic contact definition template - This is NOT a real contact, just a template!

define contact{
        name                            generic-contact         ; The name of this contact template
        service_notification_period     24x7                    ; service notifications can be sent anytime
        host_notification_period        24x7                    ; host notifications can be sent anytime
        service_notification_options    w,u,c,r,f,s             ; send notifications for all service states, flapping events, and scheduled downtime events
        host_notification_options       d,u,r,f,s               ; send notifications for all host states, flapping events, and scheduled downtime events
        service_notification_commands   notify-service-by-email ; send service notifications via email
        host_notification_commands      notify-host-by-email    ; send host notifications via email
        register                        0                       ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL CONTACT, JUST A TEMPLATE!
        }

define contact{
        name                            sns-contact         ; The name of this contact template
        service_notification_period     24x7                    ; service notifications can be sent anytime
        host_notification_period        24x7                    ; host notifications can be sent anytime
        service_notification_options    w,u,c,r,f,s             ; send notifications for all service states, flapping events, and scheduled downtime events
        host_notification_options       d,u,r,f,s               ; send notifications for all host states, flapping events, and scheduled downtime events
        service_notification_commands   notify-service-by-sns   ; send service notifications via SNS
        host_notification_commands      notify-host-by-sns      ; send host notifications via SNS
        register                        0                       ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL CONTACT, JUST A TEMPLATE!
        }

The real magic behind the Nagios AWS SNS integration is within the command itself; it's how we publish alerts to the topic.

Edit the configuration file /usr/local/nagios/etc/objects/commands.cfg and add these under the sample notification commands.

# 'notify-host-by-sns' command definition
define command{
        command_name    notify-host-by-sns
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | aws sns publish --topic-arn arn:aws:sns:us-east-1:101010101010:nagios --message "$NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$"
        }

# 'notify-service-by-sns' command definition
define command{
        command_name    notify-service-by-sns
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | aws sns publish --topic-arn arn:aws:sns:us-east-1:101010101010:nagios --message "$NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$"
        }

We're almost done now. The last thing we have to do is define a contact group and a contact that inherits the sns-contact object definition.

Wherever you have your contacts defined, add these entries.

# SMS  Admin contact group                                                      #

define contactgroup{
        contactgroup_name       smsadmins
        alias                   SMS Nagios Administrators
        members                 sns
        }

define contact{
        contact_name                    sns                 ; Short name of user
        use                             sns-contact         ; Inherit default values from sns-contact template (defined above)
        alias                           SNS Alert           ; Full name of user
        service_notification_options    c,r
        }

From here on out, when you define a new server check that you want SMS notifications for, just add the contact group we created to the check.

# Define a service to "ping" the local machine

define service{
        use                             remote-service,srv-pnp  ; Name of service template to use
        host_name                       serverName
        service_description             PING
        contact_groups                  smsadmins,emailadmins
        check_command                   check_ping!250.0,20%!500.0,60%
        }

Finally, after all that work, you can now publish Nagios alerts to the SNS topic and receive them through the SMS subscription.

SNS topic to SMS subscription nagios alerts
As a Linux geek I thought it would be fun to recreate some of the basic core tools I use every day, mostly to see if it could be done and to compare my solution to what's actually implemented. I started off with something super simple: the Unix "cat" command, which, as I'm sure you know, just displays the contents of a file in your shell.
It seemed uber simple and didn't take very long at all. I have recreated it to an extent, and while it's nowhere near as in-depth or efficient as the real implementation of cat, it was a learning experience.
#include <stdio.h>

int main(int argc, char *argv[])
{
    int c;
    FILE *fp;

    fp = fopen(argv[1], "r");
    if (fp == NULL) {
        printf("Can't open file, does it exist?\n");
        return 1;
    }
    while ((c = getc(fp)) != EOF)
        putchar(c);
    fclose(fp);
    return 0;
}

When I compared my implementation to the one actually found in the coreutils library I was a little surprised. The actual cat command is 768 lines of code whereas my toy cat is a whopping 20. Keep in mind, though, that I did not add any flag handling and it is in no way optimized. I was happy that the handling of the command line arguments was the same (how could it not be? derp).

Check out the full source of the cat command here.

Earlier in the week I talked about doing some MOOCs based on the Google guide to technical development because I really want to be a good software engineer. I found one that, while not on Google's list, is still pretty badass because you work in C, which is the language I want to become super proficient in anyway. It's called CS50 and it's a Harvard course; so far I'm blazing through it and on week 3. CS50
The bulk of my day job is actually analyzing phish, phishkits, drop scripts, etc. Lately we have run into an issue where a phishing campaign only accepts local IPs viewing the phishing content, blocking everything else in the .htaccess.

For this reason I wrote a little utility that lets us check whether we get any kind of response from the phish based on the geographic location of a proxy connection.


#!/bin/bash
echo " "
echo "------------------------------"
echo " GeoBlocked? "
echo "------------------------------"
echo " "
echo "Enter proxy list file name, if not in same directory provide full path: "
read LIST
echo "Enter URL to see if it's being geoblocked"
read URL
echo " "
echo "Checking status of: $URL This could take some time"
echo " "
echo " "

PROXY="$(< "$LIST")"
red=$(tput setaf 1)
green=$(tput setaf 2)
reset=$(tput sgr0)

url_check() {
    export http_proxy="http://$i"

    status="$(curl --max-time 15 --connect-timeout 15 -s -o /dev/null -I -w '%{http_code}' "$URL")"
    # the geolocation lookup URL did not survive in the original post
    country="$(curl -s | sed -n 's|.*,\(.*\)|\1|p')"
    DOWN="${red} $i - URL IS DOWN - $country ${reset}"
    UP="${green}$i - URL IS UP - $country ${reset}"
    TIMEOUT="${red}$i - Proxy connection took too long${reset}"

    case "$status" in
        200|201|202|203|204) echo "$UP";;
        400|401|402|403|404|500|501|503) echo "$DOWN";;
        *) echo "$TIMEOUT";;
    esac
    unset http_proxy
}

for i in $PROXY; do
    url_check "$i"
done
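As an aside, the sed expression in url_check just strips everything up to the last comma of the geolocation response and keeps the final field. The sample input below assumes a "City,Region,Country" style response, since the lookup URL didn't survive in the post:

```shell
# Greedy .* matches up to the LAST comma, so only the final field is captured.
echo "Columbus,Ohio,US" | sed -n 's|.*,\(.*\)|\1|p'
# prints: US
```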


After moving to a new city and finding out that, a little less than a decade ago, the neighborhood my lady and I moved to was rife with drugs, prostitution, and theft, I let the paranoia set in, despite the fact that we have had no problems and the neighbors have been exceptionally nice.

No matter, I'm an engineer, I solve problems - right?

So with a spare hour I set off to build a solution to help ease my paranoia of the house getting broken into while we are gone.

This is the result of a super simple webcam recording security solution for Linux based operating systems.

Things you will need.

  1. A webcam

  2. Linux OS (I used xubuntu 14.04)

  3. streamer software

  4. Dropbox

After gathering the materials, the first thing you need to do is install streamer, the software we will use to record our surroundings.

sudo apt-get install streamer

After that, create a Dropbox account if you do not already have one and install it on your system. The default install directory is your home directory.

You can simply install the deb or follow the command line guide here for installation.
Dropbox install for linux

I chose this step because I figured if someone breaks into your house they will probably take the computer that is doing the recording, and if you store the images locally the whole effort is meaningless.

Once you have everything installed, create a new directory under ~/Dropbox (or wherever you installed it). I named mine "security". It will sync up to your Dropbox for review anywhere.

Lastly, the brains of the operation: a super tiny, simple "script" that does the work. Copy the code and save it as a shell script; I named mine "".


#!/bin/bash
# change to the dropbox directory you created to store pics
cd ~/Dropbox/security || exit 1

# continuous loop - 1 million is just arbitrarily big
for i in {1..1000000}; do
	# streamer takes a picture and stores it chronologically
	streamer -c /dev/video0 -o "$i".jpeg
	# wait 20 seconds before running again:
	# 3 pics per minute; change to however long you want
	sleep 20
done

There you have it. Now you just call the script and it will run.
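One small tweak worth considering: the loop restarts at 1.jpeg every time the script is rerun, clobbering earlier captures, so a timestamp-based filename is a bit safer. A sketch (the echo stands in for the actual streamer invocation):

```shell
# Sketch: name each capture by timestamp instead of a loop counter so a
# restart never overwrites earlier pictures.
fname="$(date +%Y%m%d-%H%M%S).jpeg"
echo "would run: streamer -c /dev/video0 -o $fname"
```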


Now anything that shows up in that directory will be synced to your Dropbox, and while you may still get robbed, at least you'll have something to show the police and hopefully get some form of justice.

Thoughts for improvement.

  • Get off of Dropbox - upload to a remote server instead

  • Set this up on multiple Raspberry Pis with wifi to monitor outside entry points


Page generated Oct. 19th, 2017 07:19 am
Powered by Dreamwidth Studios