Thursday, December 19, 2013

This post will go over digitally signing a file.  Combined with encryption under a secure symmetric cipher, a digital signature provides reasonable assurance that an encrypted file came from the source that claims to have sent it, and that it was not modified in transit, as long as the private key used to sign the file has been kept safe.  As before, GnuPG will be used for both encryption and digital signatures.

Step 1:
Generate a GnuPG key pair.

gpg --gen-key

There are a number of options that can be specified when generating a key pair.  For this example, the following values will be used:

Type of key: RSA and RSA (default)
Keysize: 4096
Valid for: Never expires
Real name: Test User
Email address: test@user.com
Comment: A test user.

Verify the key pair has been generated with:

gpg --list-keys

Step 2:
Send your public key, or make it available, to the party you want to send your digitally signed file to.  It is a good idea to also send the fingerprint of the public key through another means of communication, for example, over the phone.

To export the public key to a file, execute:

gpg --armor --export "Test User" > publickey.key
 
To get the fingerprint of the public key, execute:

gpg --fingerprint "Test User"

Step 3:
Import the public key.  Once the public key has been received and its fingerprint verified, the receiver needs to import it.

gpg --import publickey.key 

Step 4:
Encrypt and sign the file.

gpg --sign --symmetric --cipher-algo AES256 secret.txt

Step 5:
Decrypt the file.  Decrypting the file will automatically verify the digital signature.

gpg -d secret.txt.gpg
...gpg: Good signature from "Test User (A test user.) <test@user.com>"...
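
For repeated use, both sides can be scripted.  The sketch below just wraps the commands already shown; the file name and output name are placeholders to adjust.

#!/bin/bash
# sender side: sign with the private key, then symmetrically encrypt with AES256
FILE="secret.txt"
gpg --sign --symmetric --cipher-algo AES256 "$FILE"

# receiver side: decrypt (prompts for the shared passphrase) and verify the
# signature, writing the plaintext next to the original
gpg --output "decrypted_${FILE}" --decrypt "${FILE}.gpg"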

Tuesday, December 17, 2013

This post will go over how to encrypt and decrypt files using GnuPG with a symmetric cipher.  GnuPG stands for "Gnu Privacy Guard", and is an open implementation of the PGP standard.  Note that the method described below does not provide message integrity (this will be described in another post).

Step 1:
Install GnuPG.  GnuPG should be installed by default during a CentOS installation, but if necessary, execute:

yum install gnupg

Step 2:
Encrypt a file.  Running gpg --version will print out a list of the available ciphers, hashes, and key algorithms.

Supported algorithms:
Pubkey: RSA, ELG, DSA
Cipher: 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH, CAMELLIA128, 
        CAMELLIA192, CAMELLIA256
Hash: MD5, SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224
Compression: Uncompressed, ZIP, ZLIB, BZIP2

The default symmetric cipher used on this version of gpg was 3DES.  AES256 will be used instead.

gpg --cipher-algo AES256 -c secret.txt 

This will prompt for a password and produce the encrypted file "secret.txt.gpg".  Checking the file type should yield AES256:

file secret.txt.gpg 
secret.txt.gpg: GPG symmetrically encrypted data (AES256 cipher)

Step 3:
Decrypt a file.  Output goes into secret.txt.

gpg -o secret.txt -d secret.txt.gpg
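
If several files need to be protected with the same passphrase, gpg can also be driven non-interactively.  This is only a sketch, assuming the gpg 1.4/2.0 packages that ship with CentOS 6 (gpg 2.1 and later handle passphrases differently) and a directory of your choosing.

#!/bin/bash
# encrypt every regular file in a directory with AES256, asking for the
# passphrase only once
read -r -s -p "Passphrase: " PASS; echo

for f in /path/to/files/*; do
    [ -f "$f" ] || continue
    echo "$PASS" | gpg --batch --yes --passphrase-fd 0 \
        --cipher-algo AES256 -c "$f"
done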

Thursday, December 12, 2013

This post will go over how to install and use fail2ban, an extremely useful and versatile tool that can ban ip addresses showing malicious behaviour.  In this example, fail2ban will monitor an ssh server for too many failed login attempts and block the offending source ip address.

Step 1:
Install fail2ban.  If you have not already done so, install the epel repository.

wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm

And install the application.

yum install fail2ban

Step 2:
Copy the configuration file.

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Step 3:
Set up the policy of the application.  There are a number of options that you can set in the newly created jail.local configuration file.  For the purposes of this post, only a few options will be modified:

#any ip that crosses the threshold will be banned for 24 hours, or 86400 seconds.
#bantime=600
bantime=86400

#an ip is allowed 5 failed login attempts within the "findtime", defined below, before being banned for 24 hours.
#maxretry=3
maxretry=5

#in other words, an ip gets 5 failed login attempts within one hour before being banned for 24 hours.
#Note that after one hour, the threshold resets and the ip gets another five attempts.
#findtime=600
findtime=3600
 
Modify the jail for ssh so that mail is sent to whatever address you want.  More importantly, ensure the "logpath" option specifies the log file to check for failed login attempts.

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=youraddress@yourdomain.com, sender=fail2ban@example.com]
logpath  = /var/log/secure
maxretry = 5

Step 4:
Enable fail2ban.

service fail2ban start
chkconfig fail2ban on

Verify the iptables chain is now active.  Execute "iptables -L".  The INPUT chain should have the line

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
fail2ban-SSH  tcp  --  anywhere             anywhere            tcp dpt:ssh

And the chain fail2ban-SSH should be available, although empty right now.
Chain fail2ban-SSH (1 references)
 target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere

Step 5:
Verify operation.  Attempting to log in from 192.168.1.15 to the server that now has fail2ban active (192.168.1.16) and failing five times results in the source ip being added to the fail2ban chain.

REJECT     all  --  192.168.1.15         anywhere            reject-with icmp-port-unreachable

And the source machine cannot even make a connection attempt for the next 24 hours.

ssh root@192.168.1.16
ssh: connect to host 192.168.1.16 port 22: Connection refused
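
The ban can also be confirmed from the server side with fail2ban-client.  The jail name below is the ssh jail as shipped in the EPEL jail.conf (ssh-iptables); adjust it if your jail is named differently.

fail2ban-client status
fail2ban-client status ssh-iptables

The second command lists the currently banned ip addresses for that jail.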


Tuesday, December 10, 2013

This post will go over a quick and dirty way to check file integrity on a *nix system.  This may be useful to check for things such as bit rot or malicious tampering, although, in this example, only backups are being checked for bit rot.  There are other tools that can do this better, and some of them will be the topic of future posts.

Step 1:
Specify the hashing algorithm you want to use.  While there are pros and cons for different hashing algorithms, in this example, md5 will be used for one main reason: it is faster than sha256.

Step 2:
Specify the files you want to ensure are still readable.  In this example, the folders "Documents" and "Images", including all sub-folders, are the subject of scrutiny.  These directories are 1.7GB and 178MB, respectively.

Step 3:
Calculate the hashes of the files and store them in a file.  This can be done with the find and xargs commands.

find $DIR/Images -type f -print0 | xargs -0 md5sum > images.md5
find $DIR/Documents -type f -print0 | xargs -0 md5sum > documents.md5

Step 4:
Force the system to re-read the files from disk and calculate the hashes again.  A failed check flags a file that has been tampered with or is suffering from bit rot.  Note that at this point the files are still cached in main memory from generating the checksums, so re-checking right away (shown below) does not actually hit the disk, and the whole point is to verify the on-disk data is still good.

time md5sum -c images.md5 | grep FAIL
real    0m0.542s
time md5sum -c documents.md5 | grep FAIL
real    0m5.346s

To force the system to clear the cached files, first write any data buffered in memory to disk:

sync

Then, clear the cache:

echo 3 > /proc/sys/vm/drop_caches

Re-check the hashes.

time md5sum -c images.md5 | grep FAIL
real    0m19.415s
time md5sum -c documents.md5 | grep FAIL
real    1m6.774s
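
The whole check can be wrapped in a small script, for example for a cron job.  A sketch, assuming the .md5 lists generated above are in the current directory and the script runs as root (dropping caches requires it):

#!/bin/bash
# re-verify the stored checksums against what is actually on disk

# flush dirty pages, then drop the page cache so md5sum really reads the disk
sync
echo 3 > /proc/sys/vm/drop_caches

for list in images.md5 documents.md5; do
    echo "Checking $list ..."
    md5sum -c "$list" | grep FAILED
done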

Thursday, December 5, 2013

This post will go over how to log in to a remote system over ssh using expect.  While it would be a better option to use key-based authentication for non-interactive logins, this is not always an option.

Step 1:
Automatically accept the host fingerprint into known_hosts.  Here is an expect script that will add the remote host's fingerprint to the known_hosts file; otherwise, you would need to accept the key manually.

#!/usr/bin/expect
set timeout 5
set ip [lindex $argv 0]
spawn ssh user@$ip

expect "(yes/no)?"
send "yes\n"

Step 2:
Create a script to log in and do whatever task needs to be automated.  For example, this script logs in to a host, grabs the uptime, and stores the session in the file log.txt.

#!/usr/bin/expect

set timeout 5
set ip [lindex $argv 0]
set password [lindex $argv 1]
# append the entire session to log.txt
log_file -a log.txt

spawn ssh user@$ip

expect "Password:"
send "$password\n"

expect "#"
send "uptime\n"

expect "#"
send "exit\n"

expect eof
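
Assuming the script above is saved as remote_uptime.exp (a name chosen here for illustration) and made executable, it takes the ip and password as positional arguments.

chmod +x remote_uptime.exp
./remote_uptime.exp 192.168.1.16 'thepassword'

Keep in mind that the password ends up in the shell history and process list, which is part of why key-based authentication is preferable when it is available.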


Tuesday, December 3, 2013

This post will go over how to use keys to log in to a remote system over ssh.  Note that once this system is in place, it becomes critical to keep your private key safe and secure, and not give it out to anyone.  In this example, 10.0.0.3 is the client, and 10.0.0.2 is the server.

Step 1:
On the client, generate an ssh key pair.  This example is using rsa keys.

ssh-keygen -t rsa -b 4096

This will create a public key, id_rsa.pub, and a private key, id_rsa, in the .ssh directory of the user's home folder.  During this command, you have the option to encrypt the private key with a password.  Although you can unlock the private key once and log in to remote machines without a password afterwards (shown in step 4), you can also leave the private key unencrypted.  It depends on the convenience/security trade-off you are willing to make.

Step 2:
Copy the public key to the server(s) you want to log in to.  You can use the ssh-copy-id command for convenience.  In this case, the local machine at 10.0.0.3 wants to log in to the remote server at 10.0.0.2.

ssh-copy-id -i ~/.ssh/id_rsa.pub <user>@10.0.0.2

The -i specifies the public key you want to transfer over.  The <user> is the account you want to log in to on the server.  Once this command completes, the public key of the client machine, 10.0.0.3, will have been added to the authorized_keys file on the server, 10.0.0.2.  Logging in to the console of the server at 10.0.0.2 as that user and displaying ~/.ssh/authorized_keys with cat reveals the new public key.

Step 3:
Modify the ssh server config file to only support key based logins.  Although not necessary for key based authentication, the server is still vulnerable to brute force attacks using username/password combinations.  There are tools to limit the number of login attempts and/or block repeated failures, but if possible, it would be best to just disable the option altogether.  In /etc/ssh/sshd_config, change "PasswordAuthentication yes" to "PasswordAuthentication no", and restart the service.
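
One way to make that change non-interactively on CentOS is shown below.  This is only a sketch; check the file afterwards, since the directive may also be present in commented-out form with a different value.

sed -i 's/^#\?PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
service sshd restart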

Step 4 (optional):
Decrypt an encrypted private key once for multiple uses.  If you encrypted your private key, you can use ssh-agent to decrypt it and load it into memory.

Run the "ssh-agent" command.

ssh-agent

This command will output some environment variables that the program needs.  Add these variables manually to your shell session, or let eval set them by executing the modified version of the command.

eval $(ssh-agent)

Once this is done, add your private key(s).

ssh-add
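
To confirm the key is loaded, and then log in without being prompted for the key passphrase:

ssh-add -l
ssh <user>@10.0.0.2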


Thursday, November 21, 2013

This post will go over how to add graphs to nagios using the pnp4nagios plugin.

Step 1:
Install and/or compile the necessary applications.  pnp4nagios was installed as part of the command "yum install nagios*" performed in a previous post.

rpm -q pnp4nagios
pnp4nagios-0.6.20-1.el6.i686

Step 2:
Configure the pnp4nagios commands.  Since this is a small installation with only a few hosts being monitored, most of the defaults will be used.  However, the commands still need to be added to the nagios configuration:

define command {
command_name    process-service-perfdata-file
command_line    /usr/libexec/pnp4nagios/process_perfdata.pl --bulk=/tmp/service-perfdata
}

define command {
command_name    process-host-perfdata-file
command_line    /usr/libexec/pnp4nagios/process_perfdata.pl --bulk=/tmp/host-perfdata
}

Step 3:
Modify nagios.cfg.  The diff with the original cfg is shown below.

diff nagios.cfg nagios.cfg.bak
834,835c834
< 
< process_performance_data=1
---
> process_performance_data=0
857,858c856,857
< host_perfdata_file=/tmp/host-perfdata
< service_perfdata_file=/tmp/service-perfdata
---
> #host_perfdata_file=/tmp/host-perfdata
> #service_perfdata_file=/tmp/service-perfdata
872,873d870
< host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tHOSTOUTPUT::$HOSTOUTPUT$
< service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$\tSERVICEOUTPUT::$SERVICEOUTPUT$
887,888d883
< host_perfdata_file_mode=a
< service_perfdata_file_mode=a
900,901d894
< host_perfdata_file_processing_interval=15
< service_perfdata_file_processing_interval=15
912,913d904
< host_perfdata_file_processing_command=process-host-perfdata-file
< service_perfdata_file_processing_command=process-service-perfdata-file

Step 4:
Restart nagios and verify the page shows up at http://<nagiosip>/pnp4nagios/
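
For example, restarting httpd as well so apache picks up the pnp4nagios web interface configuration:

service nagios restart
service httpd restart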


Step 5:
Add extended info in nagios to create links to graphs of the applicable host and service.  Add to templates.cfg:

define host {
name            host-pnp
action_url      /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=_HOST_
register        0
}

define service {
name            service-pnp
action_url      /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=$SERVICEDESC$
register        0
}

Then add these new templates to the desired host and service definitions.  For example:

define host{
        use                     linux-server,host-pnp
        host_name               puppetmaster
        alias                   puppetmaster
        address                 192.168.1.15
}

define service{
        use generic-service,service-pnp
        host_name puppetmaster
        service_description PING
        check_command check_ping!100.0,20%!500.0,60%
}

Step 6:
Verify functionality.  Note the new graph icons available for the host/services.

Clicking on the icons gives links to the desired rrd graphs.  For example, the custom crond check shows the process has been running as desired.

pnp4nagios can also print out a nicely formatted report of the desired services/hosts to a pdf.

Tuesday, November 19, 2013

This post will go over how to add a custom check to a host being monitored by nagios.  In this case, nagios will check to make sure crond is running on the puppetmaster server, which is a centos machine.

Step 1:
Write the script that will check for the given condition, and verify its functionality.  This is a simple script that will check that the crond process is running. The data after the pipe is interpreted by nagios as performance data, and is being added in so that the status of the process can be graphed over a period of time in an rrd graph. Adding graphs to nagios will be covered in another post.

Note that the exit codes get interpreted by nagios as follows:
0 - OK
1 - WARNING
2 - CRITICAL
3 - UNKNOWN

#!/bin/bash

lineCount=`ps -eaf|grep -v grep|grep " crond"|wc -l`

if [ $lineCount -eq "0" ]; then
        echo "WARNING - crond is not running|proc=$lineCount"
        exit 1;
fi
if [ $lineCount -eq "1" ]; then
        echo "OK - crond is running|proc=$lineCount"
        exit 0;
fi
if [ $lineCount -gt "1" ]; then
        echo "UNKNOWN - crond process count > 1|proc=$lineCount"
        exit 3;
fi
echo "UNKNOWN - crond process count is unknown|proc=$lineCount"
exit 3;

Step 2:
Add the command to /etc/nagios/nrpe.cfg on the host.  The necessary line to add in this case is:

command[check_crond]=/usr/lib/nagios/plugins/check_crond
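
After making the script executable and reloading nrpe on the host (service xinetd restart if nrpe runs under xinetd, as in the earlier post), the command can be tested from the nagios server before any service definition exists.  The plugin path below is the usual one on 32-bit CentOS and may be /usr/lib64/nagios/plugins on 64-bit systems.

/usr/lib/nagios/plugins/check_nrpe -H 192.168.1.15 -c check_crond
OK - crond is running|proc=1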

Step 3:
Add the service check to the nagios server.  Restart the nagios process.

define service{
        use generic-service
        host_name puppetmaster
        service_description Crond Process
        check_command check_nrpe!check_crond

}

service nagios restart

Step 4:
Verify functionality.

Thursday, November 14, 2013

This post will go over how to monitor a host in nagios using nrpe.  nrpe stands for "Nagios Remote Plugin Executor", and allows you to remotely execute commands on another machine and gather desired metrics.  The version of nrpe that was installed on the target centos machine does not allow command line arguments from the nagios server, so all arguments and thresholds must be specified on the machine itself.  The puppetmaster server is the machine being added to nagios.

Step 1:
On the target machine, add the EPEL repository.

wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm

Step 2:
Install nagios and necessary packages.

yum install nagios* xinetd

Step 3:
Add the service to xinetd.

/etc/xinetd.d/nrpe
service nrpe
{
flags = REUSE
type = UNLISTED
port = 5666
socket_type = stream
wait = no
user = nagios
group = nagios
server = /usr/sbin/nrpe
server_args = -c /etc/nagios/nrpe.cfg --inetd
log_on_failure += USERID
disable = no
only_from = 192.168.1.16
}

Restart the service.

service xinetd restart
chkconfig xinetd on 

Step 4:
Make any changes or modifications to the data you want to monitor on the host in /etc/nagios/nrpe.cfg.  In this scenario, the root partition, number of users, current load, and number of processes are being monitored.
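
For reference, the stock nrpe.cfg already ships sample command definitions along these lines; the exact plugin paths and thresholds vary by package and architecture, and the root partition check here is adapted from the sample disk check.

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_root]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
command[check_total_procs]=/usr/lib/nagios/plugins/check_total_procs -w 150 -c 200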

Step 5:
Configure the nagios server.
/etc/nagios/objects/commands.cfg
define command{
        command_name check_nrpe
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Add the host, service, and hostgroup definitions to the necessary config files in /etc/nagios.  The puppet server is being added to the "Linux Servers" hostgroup.
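
A minimal sketch of those definitions for the puppetmaster host is shown below.  The template and hostgroup names are the stock ones from the sample configs, and check_root refers to the nrpe command shown in step 4; adjust to your layout.

define host{
        use                     linux-server
        host_name               puppetmaster
        alias                   puppetmaster
        address                 192.168.1.15
}

define service{
        use                     generic-service
        host_name               puppetmaster
        service_description     Root Partition
        check_command           check_nrpe!check_root
}

define hostgroup{
        hostgroup_name          linux-servers
        alias                   Linux Servers
        members                 localhost,puppetmaster
}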

Restart the service
service nagios restart

Step 6:
Verify functionality.

Tuesday, November 12, 2013

This post will go over installing nagios on a centos machine that lives on a vmware esxi host.  Nagios is an open source monitoring and alerting system that is widely deployed as an infrastructure monitoring solution and can scale from one to thousands of hosts and services.  This post will go over a basic installation.

Step 1:
Clone a machine.

cd /vmfs/volumes/datastore1
mkdir "CentOS 6.4 - Nagios Server"
cd CentOS\ 6.4\ -\ Nagios\ Server/
cp ../Base\ CentOS\ 6.4/Base\ CentOS\ 6.4.vmx ./CentOS\ 6.4\ -\ Nagios\ Server.vmx
vmkfstools -i ../Base\ CentOS\ 6.4/Base\ CentOS\ 6.4.vmdk \
CentOS\ 6.4\ -\ Nagios\ Server.vmdk
vim-cmd solo/registervm \
/vmfs/volumes/datastore1/CentOS\ 6.4\ -\ Nagios\ Server/CentOS\ 6.4\ -\ Nagios\ Server.vmx
vim-cmd vmsvc/power.on 21

Step 2:
If necessary, reset the ip addresses and the interfaces in /etc/sysconfig/network-scripts/ifcfg-*, modify the udev rules in /etc/udev/rules.d/70-persistent-net.rules, reset the hostname in /etc/sysconfig/network, and reset the root password.

Step 3:
Add the EPEL repository.

wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm

Step 4:
Install nagios and necessary packages.

yum install -y nagios* openssl gd gd-devel httpd php gcc glibc glibc-common

Step 5:
Set up apache and enable services.  If the password file does not already exist, add -c to the htpasswd command to create it.

htpasswd /etc/nagios/passwd nagiosadmin
chkconfig httpd on
chkconfig nagios on
service httpd restart
service nagios restart

Step 6:
Verify the system is up and monitoring the localhost at http://<systemip>/nagios

A later post will go over adding new hosts and services to the setup.

Thursday, November 7, 2013

This post will go over how to set up a machine so that the newly installed puppet master can control the system.

Step 1:
Install the puppet application on the local machine.
rpm -ivh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-7.noarch.rpm
yum install puppet

Step 2:
Make necessary config changes to the system.

Edit the hosts file
192.168.1.10 node
192.168.1.15 puppetmaster

Edit the puppet.conf file
server = puppetmaster
report = true
pluginsync = true

chkconfig puppet on
puppet agent --daemonize

Step 3:
Add the certs.
puppet agent --server=puppetmaster -t --waitforcert 15
Notice: Did not receive certificate
Notice: Did not receive certificate
Notice: Did not receive certificate
Notice: Did not receive certificate
Notice: Did not receive certificate
Info: Caching certificate for server1.node
Info: Caching certificate_revocation_list for ca
Info: Retrieving plugin
Info: Caching catalog for server1.node
Info: Applying configuration version '1382998929'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.02 seconds

Sign the cert on the puppetmaster.
puppet cert list
  "server1.node" (SHA256) 8E:09:B0:E9:9C:76:99:4A:94:53:5C:39:FD:3A:32:DA:D3:FF:7C:64:F4:BF:6A:83:40:8F:97:E5:FA:5F:BF:87
puppet cert --sign server1.node
Notice: Signed certificate request for server1.node
Notice: Removing file Puppet::SSL::CertificateRequest server1.node at '/var/lib/puppet/ssl/ca/requests/server1.node.pem'

Step 4:
Create and test a manifest for the configured nodes.  The canonical example: installing ntp.  Add the following to site.pp:
package { 'ntp':
        ensure => installed,
}
file { '/etc/ntp.conf':
        path       => '/etc/ntp.conf',
        ensure     => file,
        require    => Package['ntp'],
}
service { 'ntpd':
        name       => 'ntpd',
        ensure     => running,
        enable     => true,
        require    => Package['ntp'],
        subscribe  => File['/etc/ntp.conf'],
}
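
Before triggering an agent run, the manifest can be syntax checked on the master; the path below is the default site.pp location for puppet 3.x on CentOS.

puppet parser validate /etc/puppet/manifests/site.pp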

On the new node:

rpm -q ntp
package ntp is not installed
puppet agent --server=puppetmaster --test
Info: Retrieving plugin
Info: Caching catalog for server1.node
Info: Applying configuration version '1382999615'
Notice: /Stage[main]//Package[ntp]/ensure: created
Notice: Finished catalog run in 5.41 seconds
rpm -q ntp
ntp-4.2.4p8-3.el6.centos.i686


Tuesday, November 5, 2013

This post will go over how to install a puppet server and a puppet client on a CentOS 6.4 vm running on VMWare 5.1.

Step 1:
Clone a vm.

cd /vmfs/volumes/datastore1
mkdir "CentOS 6.4 - Puppet Server"
cd CentOS\ 6.4\ -\ Puppet\ Server/
cp ../Base\ CentOS\ 6.4/Base\ CentOS\ 6.4.vmx ./CentOS\ 6.4\ -\ Puppet\ Server.vmx
vmkfstools -i "/vmfs/volumes/datastore1/Base CentOS 6.4/Base CentOS 6.4.vmdk" \
"/vmfs/volumes/datastore1/CentOS 6.4 - Puppet Server/CentOS 6.4 - Puppet Server.vmdk"
vim-cmd solo/registervm "/vmfs/volumes/524734d7-f389d00a-4f68-b870f4dd73cf/CentOS 6.4 \
- Puppet Server/CentOS 6.4 - Puppet Server.vmx"
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on 18


Step 2:
Install the puppet server.  Add the puppetlabs repository.

rpm -ivh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-7.noarch.rpm

Install the application.

yum install puppet-server

Step 3:
Enable services.  Start the puppet master.

/etc/init.d/puppetmaster start

Permanently enable services.

puppet resource service puppet ensure=running enable=true
puppet resource service puppetmaster ensure=running enable=true

Modify the config file.

/etc/puppet/puppet.conf
[master]
certname = puppetmaster
autosign = false

Step 4:
Install passenger.  First, install necessary packages:

yum install httpd httpd-devel mod_ssl ruby-devel rubygems gcc make gcc-c++ \
curl libcurl-devel openssl-devel

Install passenger.  The output of the second command will display how to configure the apache vhost.

gem install rack passenger
passenger-install-apache2-module

Install the puppet master rack application

mkdir -p /usr/share/puppet/rack/puppetmasterd
mkdir /usr/share/puppet/rack/puppetmasterd/public /usr/share/puppet/rack/puppetmasterd/tmp
cp /usr/share/puppet/ext/rack/files/config.ru /usr/share/puppet/rack/puppetmasterd/
chown puppet /usr/share/puppet/rack/puppetmasterd/config.ru

Step 5:
Sign certs of new machines.  Another post will go through how to add a node to the server, but the two commands needed are:

puppet cert list
puppet cert --sign <certname>

Thursday, October 31, 2013

This post will go over how to install security onion in a vm on a VMWare ESXi 5.1 host to monitor traffic between the vms on that host.  Security onion is a network security monitoring distribution built on top of ubuntu.

Step 1:
Upload the security onion iso to the esxi host.  Instead of having the esxi host repeatedly read the iso over the network from the local machine, the security onion iso will be copied to the local disk of the esxi host.  Select the host from the left hand window pane, and select the "Summary" tab.  Right click on the local datastore and choose browse datastore.

Click on the icon to upload a file.  In this case, the security onion iso will be uploaded.


Step 2:
Create and install the new virtual machine.  While creating the vm, be sure to use two nics, one in the management port group, and one in the span port group.  The management address for the host will be 192.168.1.14.

Attach the iso from the local datastore in the virtual machine settings.

Install security onion normally.  Security onion is built on top of ubuntu and has a very easy to use installer.

The process eventually finishes and security onion is ready to use.

Step 3:
Configure security onion.  Run the setup utility on the desktop.  The questions are very straightforward.  There are many other tutorials available online for initial configuration.

Eventually, the setup will complete, and there is an initial web page where many of the tools will be available.

Step 4:
Verify functionality.  There are a number of pcaps available on the wireshark website that are captures of attacks.  Performing a basic nmap scan of some of the hosts also generates a number of events in snorby.

More importantly, all of the traffic on the virtual network is now being logged and examined by security onion.  From the snorby interface.

From the squert interface.

One thing to be careful of, though, is that the vm will be capturing and saving all of the packets it sees for analysis.  This can quickly overwhelm the system and possibly slow down other vms on the esxi host.  Reserving and limiting resources on the esxi host will be covered in another post.


Tuesday, October 29, 2013

This post will show how to get a port mirror, or span port, working on VMWare ESXi 5.1.  This will be helpful with troubleshooting, and will be used for another post.

Step 1:
Add the port group to the vswitch.  Select the esxi host, click on the "Configuration" tab, and click on "Networking".  Click on "Add Networking...", choose "Virtual Machine", choose "Use vswitch0", change the network label to "SPAN", and choose all of the vlans.  The network should now look like this.


Step 2:
Edit the vswitch properties.  Click the "Properties..." link for vswitch0.  Click on "Edit..." for the vswitch.  On the "Security" tab, change "Promiscuous Mode" to "Accept".
Step 3:
Edit the port group properties.  In the same dialog for vswitch0 properties, select "SPAN" and click "Edit...".  Go to the security tab and enable "Promiscuous Mode".

Go to the "Traffic Shaping" tab and mark the status disabled checkbox.


Step 4:
Add a nic to a machine to listen in on the span port group.

Step 5:
Verify functionality.  This is the output of a tcpdump on the new interface in the SPAN port group, which is able to see management network traffic as well as production traffic, and can now be used as a central point for IDS, analysis, troubleshooting, etc.
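
The capture came from a plain tcpdump on that nic; the interface name below is an assumption and depends on how the span nic enumerated inside the listening vm.

tcpdump -nni eth1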

15:18:07.033540 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 42, length 64
15:18:07.033608 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 42, length 64
15:18:08.034594 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 43, length 64
15:18:08.034651 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 43, length 64
15:18:09.035933 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 44, length 64
15:18:09.036028 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 44, length 64
15:18:10.036544 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 45, length 64
15:18:10.036597 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 45, length 64
15:18:11.037510 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 46, length 64
15:18:11.037566 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 46, length 64
15:19:44.850525 IP 172.16.0.13 > 172.16.0.10: ICMP echo request, id 24071, seq 1, length 64
15:19:44.850726 IP 172.16.0.10 > 172.16.0.13: ICMP echo reply, id 24071, seq 1, length 64
15:19:45.851507 IP 172.16.0.13 > 172.16.0.10: ICMP echo request, id 24071, seq 2, length 64
15:19:45.851686 IP 172.16.0.10 > 172.16.0.13: ICMP echo reply, id 24071, seq 2, length 64
15:19:46.852256 IP 172.16.0.13 > 172.16.0.10: ICMP echo request, id 24071, seq 3, length 64
15:19:46.852385 IP 172.16.0.10 > 172.16.0.13: ICMP echo reply, id 24071, seq 3, length 64

Thursday, October 24, 2013

This post will go over how to clone a vm from the command line (and thus, make it easily scriptable).

Step 1:
Decide which vm you will be cloning.  The "CentOS 6.4 PXE" server has not been touched since creation, so will make a good base system.

Step 2:
Create the vm config file.  The local datastore for vm's on this esxi server is /vmfs/volumes/datastore1 which is a soft link to /vmfs/volumes/524734d7-f389d00a-4f68-b870f4dd73cf.

The directory contains all of the current vm's on the local datastore.

cd /vmfs/volumes/datastore1
ls -1
CentOS 6.4
CentOS 6.4 PXE
FreeBSD 9.1
Security Onion 12.04.3
Test 1
Ubuntu 10.0.4 x32

Create a directory for a new vm.

mkdir "Base CentOS 6.4"

Change to the new directory and copy the "PXE" config file:

cd Base\ CentOS\ 6.4
cp ../CentOS\ 6.4\ PXE/CentOS\ 6.4\ PXE.vmx "Base CentOS 6.4.vmx"

Make any necessary changes to the configuration file for the new system:

cat Base\ CentOS\ 6.4.vmx 
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "8"
displayName = "Base CentOS 6.4"
floppy0.present = "TRUE"
scsi0.present = "TRUE"
scsi0.sharedBus = "none"
scsi0.virtualDev = "pvscsi"
memsize = "512"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Base CentOS 6.4.vmdk"
scsi0:0.deviceType = "scsi-hardDisk"
ide1:0.present = "TRUE"
ide1:0.clientDevice = "TRUE"
ide1:0.deviceType = "cdrom-raw"
ide1:0.startConnected = "FALSE"
floppy0.startConnected = "FALSE"
floppy0.fileName = ""
floppy0.clientDevice = "TRUE"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "Management"
ethernet0.addressType = "generated"
ethernet1.present = "TRUE"
ethernet1.virtualDev = "vmxnet3"
ethernet1.networkName = "Production"
ethernet1.addressType = "generated"
guestOS = "rhel6"

Step 3:
Copy the hard disk (make sure the vm is powered off).

vmkfstools -i "/vmfs/volumes/datastore1/CentOS 6.4/CentOS 6.4.vmdk" \
"/vmfs/volumes/datastore1/Base CentOS 6.4/Base CentOS 6.4.vmdk"

Step 4:
Register the new machine with esxi. 

vim-cmd solo/registervm "/vmfs/volumes/524734d7-f389d00a-4f68-b870f4dd73cf/Base CentOS 6.4/Base CentOS 6.4.vmx"

Verify the new machine is registered

vim-cmd vmsvc/getallvms
...trim...
17     Base CentOS 6.4          [datastore1] Base CentOS 6.4/Base CentOS 6.4.vmx                 rhel6Guest     vmx-08            
...trim...  

Step 5:
Power on the vm and perform any post installation configuration.  Most notably, new ip addresses will need to be specified.  If necessary, use the dhcp server to get a temporary ip, watch the server to see what leases are handed out, and connect to the newly leased address.  The mac address can be verified in the vmx configuration file.

vim-cmd vmsvc/power.on 17
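
Since every step above is a plain shell command, the whole procedure can be wrapped in a small script and run from the esxi shell.  This is only a sketch: it assumes the source vm's vmx/vmdk files are named after the vm directory (as they are above), that everything lives on datastore1, and that a simple name substitution inside the vmx is enough to rename the clone.

#!/bin/sh
# usage: ./clone_vm.sh "Base CentOS 6.4" "CentOS 6.4 - Web Server"
SRC="$1"
DST="$2"
DS="/vmfs/volumes/datastore1"

mkdir "$DS/$DST"

# copy the vmx, rewriting the display name and disk references inside it
sed "s/$SRC/$DST/g" "$DS/$SRC/$SRC.vmx" > "$DS/$DST/$DST.vmx"

# clone the disk (the source vm must be powered off)
vmkfstools -i "$DS/$SRC/$SRC.vmdk" "$DS/$DST/$DST.vmdk"

# register the clone; registervm prints the new vm id, which power.on needs
VMID=$(vim-cmd solo/registervm "$DS/$DST/$DST.vmx")
vim-cmd vmsvc/power.on "$VMID"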



Tuesday, October 22, 2013

This post will focus on setting up a Red Hat / CentOS Kickstart server so hosts can perform an automated install using pxe.

Step 1:
Install necessary packages on CentOS.

yum -y install wget syslinux syslinux-tftpboot xinetd \
tftp-server tftp dhcp httpd openssh-clients

Step 2:
Create the anaconda directory structure.

mkdir -p /export/anaconda/iso/CentOS
mkdir -p /export/anaconda/media
mkdir -p /export/anaconda/media/CentOS-6.4-x86_64
mkdir -p /export/anaconda/media/CentOS-6.4-i386
mkdir -p /export/anaconda/tftpboot
mkdir -p /export/anaconda/tftpboot/pxelinux.cfg
mkdir -p /export/anaconda/tftpboot/CentOS-6.4-x86_64
mkdir -p /export/anaconda/tftpboot/CentOS-6.4-i386
mkdir -p /export/anaconda/postinstall/
mkdir -p /export/anaconda/cfg/
ln -s /export/anaconda /anaconda

Step 3:
Modify tftp server configuration.

cd /etc/xinetd.d
diff tftp tftp.bak
13,14c13,14
<       server_args             = -s /export/anaconda/tftpboot
<       disable                 = no
---
>       server_args             = -s /var/lib/tftpboot
>       disable                 = yes

Step 4:
Modify dhcp server configuration.

cd /etc/dhcp 
cat dhcpd.conf
subnet 192.168.1.0 netmask 255.255.255.0 {

        option routers 192.168.1.1;
        option domain-name-servers 8.8.8.8;
        option subnet-mask 255.255.255.0;
        range 192.168.1.240 192.168.1.250;

        next-server 192.168.1.10;
        filename "pxelinux.0";
}

Step 5:
Copy CentOS iso and extract the files.

scp CentOS-6.4-i386-minimal.iso root@192.168.1.10:/export/anaconda/iso/CentOS
cd /export/anaconda/tftpboot/CentOS-6.4-i386
mount -o loop /export/anaconda/iso/CentOS/CentOS-6.4-i386-minimal.iso /mnt
cp -Rp /mnt/* ./
umount /mnt 
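
The boot menu below also offers 64-bit entries; for those to work, repeat the copy and extract for the x86_64 iso (the filename is assumed to follow the same pattern as the 32-bit one).

scp CentOS-6.4-x86_64-minimal.iso root@192.168.1.10:/export/anaconda/iso/CentOS
cd /export/anaconda/tftpboot/CentOS-6.4-x86_64
mount -o loop /export/anaconda/iso/CentOS/CentOS-6.4-x86_64-minimal.iso /mnt
cp -Rp /mnt/* ./
umount /mnt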

Step 6:
Configure the boot menu.


cat /export/anaconda/tftpboot/pxelinux.cfg/default
timeout 3600
default menu.c32

menu title Boot Menu

label 1
    menu label ^ 1) CentOS-6.4-x86_64 (64-bit)
    kernel CentOS-6.4-x86_64/vmlinuz
    append initrd=CentOS-6.4-x86_64/initrd.img ramdisk_size=15491 ip=dhcp ksdevice=bootif \
    ks=http://192.168.1.10/anaconda/cfg/CentOS-6.4-x86_64-ks.cfg
    IPAPPEND 2

label 2
    menu label ^ 2) CentOS-6.4-i386 (32-bit)
    kernel CentOS-6.4-i386/vmlinuz
    append initrd=CentOS-6.4-i386/initrd.img ramdisk_size=15491 ip=dhcp ksdevice=bootif \
    ks=http://192.168.1.10/anaconda/cfg/CentOS-6.4-i386-ks.cfg
    IPAPPEND 2

label 3
    menu label ^ 3) Rescue CentOS-6.4-x86_64 (64-bit)
    kernel CentOS-6.4-x86_64/vmlinuz
    append initrd=CentOS-6.4-x86_64/initrd.img ramdisk_size=15491 ip=dhcp \
    repo=http://192.168.1.10/anaconda/CentOS-6.4-x86_64 lang=en_US.UTF-8 keymap=us rescue

label 4
    menu label ^ 4) Rescue CentOS-6.4-i386 (32-bit)
    menu default
    kernel CentOS-6.4-i386/vmlinuz
    append initrd=CentOS-6.4-i386/initrd.img ramdisk_size=15491 ip=dhcp \
    repo=http://192.168.1.10/anaconda/CentOS-6.4-i386 lang=en_US.UTF-8 keymap=us rescue

Step 7:
Configure apache.

cat /etc/httpd/conf.d/anaconda.conf 
Alias /anaconda/cfg /export/anaconda/cfg
<Directory /export/anaconda/cfg>
    Options Indexes FollowSymLinks
    Allow from All
</Directory>

Alias /anaconda/postinstall /export/anaconda/postinstall
<Directory /export/anaconda/postinstall>
    Options Indexes FollowSymLinks
    Allow from All
</Directory>

Alias /anaconda /export/anaconda/media
<Directory /export/anaconda/media>
    Options Indexes FollowSymLinks
    Allow from All
</Directory>

Step 8:
Modify the kickstart files.

cat CentOS-6.4-i386-ks.cfg

install

# Specifies the language
lang en_US.UTF-8

# Specifies the keyboard layout
keyboard us

# Skip Red Hat subscriber key input
key --skip

# Forces the text installer to be used (saves time)
text

# Forces the cmdline installer to be used (debugging)
#cmdline

# Skips the display of any GUI during install (saves time)
skipx

# Used with an HTTP install to specify where the install files are located
url --url http://192.168.1.10/anaconda/CentOS-6.4-i386

# Assign a static IP address upon first boot & set the hostname
network --device eth0 --onboot yes --bootproto static --ip=192.168.1.13 \
--netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8

# Give the second interface a static address on the production network (if you are not using a second interface comment this line out)
network --device eth1 --onboot yes --bootproto static --ip=172.16.0.13 \
--netmask=255.255.0.0

# Set the root password
rootpw password

# Need a repo as only the minimal iso was used
repo --name=es --baseurl=http://linux.mirrors.es.net/centos/6/os/i386/

# Enable the firewall and open port 22 for SSH remote administration
firewall --enabled --port=22:tcp

# Setup security and SELinux levels
authconfig --enableshadow --passalgo=sha512

selinux --disabled

# Set the timezone
timezone --utc Etc/UTC

# Create the bootloader in the MBR with drive sda being the drive to install it on
bootloader --location=mbr --driveorder=sda,sdb --append=audit=1

# Wipe all partitions and build them with the info below
clearpart --all --initlabel

#Disk partitioning information
zerombr

# Create primary partitions
part /boot --fstype ext4 --size=512 --asprimary --ondisk=sda
part swap --size=256 --asprimary --ondisk=sda
part pv.01 --size=4096 --grow --asprimary --ondisk=sda

# Create LVM logical volumes
volgroup system --pesize=4096 pv.01
logvol  /  --vgname=system  --size=3000  --grow  --name=root_vol

# reboot when installation completes
reboot

# Install the Core software packages, aka "minimal"
%packages
%end

%pre
# redirect debugging output to tty3
#exec < /dev/tty3 > /dev/tty3
#chvt 3
%end

%post --log=/var/tmp/install.log
# redirect debugging output to tty3
#exec < /dev/tty3 > /dev/tty3
#chvt 3

echo "Creating CentOS-6.4-i386 post installation directory ..."
mkdir -p /opt/postinstall


echo "Downloading CentOS-6.4-i386 post installation files ..."
cd /opt/postinstall
wget http://192.168.1.10/anaconda/postinstall/CentOS-6.4-i386-postinstall.tgz
tar zxf CentOS-6.4-i386-postinstall.tgz
rm CentOS-6.4-i386-postinstall.tgz > /dev/null 2>&1

echo "Executing CentOS-6.4-i386 post installation script ..."
./CentOS-6.4-i386-postinstall >> CentOS-6.4-i386-postinstall.out 2>&1
echo "Done."
 
Step 9:
Finish configuring pxe.
cp /usr/share/syslinux/pxelinux.0 /export/anaconda/tftpboot/
cp /usr/share/syslinux/menu.c32 /export/anaconda/tftpboot/
 
Step 10:
Enable services.
chkconfig dhcpd on
chkconfig httpd on
chkconfig xinetd on
service dhcpd restart
service httpd restart
service xinetd restart
 

Step 11:
Boot the target machine from the network.  The only option that needs to be specified is on the initial pxe boot menu.

Once selected, the system should perform an unattended install, requiring no user interaction.

After it completes, the system will reboot and present a login screen for the newly installed system.

And the new system is up with the packages and ip addresses specified in the kickstart file.

#ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:75:23:1D  
          inet addr:192.168.1.13  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:231d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:70 errors:0 dropped:0 overruns:0 frame:0
          TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:8224 (8.0 KiB)  TX bytes:13776 (13.4 KiB)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:75:23:27  
          inet addr:172.16.0.13  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe75:2327/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:120 (120.0 b)  TX bytes:720 (720.0 b)

The system can easily be modified to specify different kickstart files for different types of servers.  For example, there could be an option for "Directory Server", "LAMP Server", or "Database Server", all pointing to different kickstart files, with each file specifying the required packages as well as any necessary post installation configuration.


Friday, October 18, 2013

This post will go over how to install a vm in VMWare 5.1 using an iSCSI target on FreeBSD using zfs.  FreeBSD 9.1 will be used in virtualbox with a new virtual drive that will be used as the iscsi target.  The ip of the system will be 10.0.0.3.

Step 1:
Set up the nic of the guest os as a bridged adapter in virtualbox using the host system's ethernet port.

Add a rule to allow the iscsi traffic on the localhost firewall.

iptables -A open -p tcp --dport 3260 -j ACCEPT

Step 2:
Install the necessary packages for iSCSI on FreeBSD.
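
istgt, the user-space iSCSI target commonly used on FreeBSD 9.x, can be installed from packages or ports.  The commands below assume the stock 9.1 package tools.

pkg_add -r istgt
# or, from ports:
cd /usr/ports/net/istgt && make install clean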


Step 3:
Create the zvol that will be used as the block device.
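
A sketch of creating the backing storage, assuming a pool named tank built on the extra virtual disk and a 20GB volume; the device name, pool name, and size are placeholders.

zpool create tank /dev/ada1          # the new virtual drive added to the vm
zfs create -V 20G tank/esxi-lun0     # the zvol istgt will export as LUN0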


Step 4:
Modify the configuration files for istgt.
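
A minimal sketch of what /usr/local/etc/istgt/istgt.conf might look like for exporting the zvol created above.  The node base, addresses, and zvol path are assumptions, and the sample config shipped with the port contains many more tunables worth reviewing.

[Global]
  NodeBase "iqn.2013-10.local.freebsd"
  PidFile /var/run/istgt.pid
  AuthFile /usr/local/etc/istgt/auth.conf

[UnitControl]
  Portal UC1 127.0.0.1:3261
  Netmask 127.0.0.1

[PortalGroup1]
  Portal DA1 10.0.0.3:3260

[InitiatorGroup1]
  InitiatorName "ALL"
  Netmask 10.0.0.0/24

[LogicalUnit1]
  TargetName disk1
  Mapping PortalGroup1 InitiatorGroup1
  AuthMethod Auto
  UnitType Disk
  LUN0 Storage /dev/zvol/tank/esxi-lun0 Auto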


Step 5:
Start the service and verify targets are available.
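
A sketch of enabling and starting the service, then checking that the target portal is listening on 3260; the rc.d path is the default from the port.

echo 'istgt_enable="YES"' >> /etc/rc.conf
/usr/local/etc/rc.d/istgt start
sockstat -4 -l | grep 3260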


Step 6:
Add the iscsi target to vsphere.  From vsphere, select the esxi host and select the "Configuration" tab, then hit "Storage Adapters".  Click "Add..." and select the "iSCSI Software Adapter".  Go to the properties of the new iSCSI Software Adapter  and click the "Dynamic Discovery" tab.  Add the ip of the iSCSI target.


Rescan the adapter, and the target should show up.


Go back to "Storage" on the left hand side of the screen.  Click "Add Storage...", and select "Disk/LUN".  The newly added iscsi target should be available.


Give the new datastore a name, and modify any other options if necessary.  Once done, the new datastore should show up in vsphere.


Step 7:
Install a vm on the iscsi target.  FreeBSD will be installed.  Create a new vm as before, but on the storage section, be sure to select the newly created iscsi device.


And as before, there is a management and a production nic.


Attach an iso as before and go through the installation.



The system will get the next available ip addresses in the management and production networks, those being 192.168.1.12/24 and 172.16.0.12/24, respectively.

Wednesday, October 16, 2013

This post will focus on installing a vm on VMWare ESXi 5.1 using an nfs share.  The nfs share will exist on an archlinux installation.  The VMWare website offers some best practices for using NFS.  Unfortunately, the setup being performed here is on consumer grade hardware (i.e. laptops that were available), so most of the best practices will have to be skipped.  For test purposes, this is considered acceptable.

Step 1:
Install the nfs server on a local machine.  The nfs server will exist on an archlinux machine, which uses pacman for package management and systemd for service management.

pacman -S nfs-utils
systemctl start rpc-idmapd
systemctl start rpc-mountd

Step 2:
Open some ports. This machine uses a separate chain called "open" for open ports and services.

iptables -A open -p tcp --dport 111 -j ACCEPT
iptables -A open -p tcp --dport 2049 -j ACCEPT
iptables -A open -p tcp --dport 20048 -j ACCEPT

Step 3:
Make an nfs directory that will be exported, add to the exports file, re-export the fs.

mkdir /srv/nfs
echo "/srv/nfs 10.0.0.1(rw,no_root_squash,sync)" >> /etc/exports
exportfs -rav
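
Before moving on to vsphere, the export can be confirmed from the nfs server itself with showmount (part of nfs-utils); it should list /srv/nfs restricted to 10.0.0.1.

showmount -e localhost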

Step 4:
In the vsphere client, click on the esxi host, then click on the "Configuration" tab, then the "Storage" option under "Hardware".  Click on "Add Storage..." and choose "Network File System".  Set up the nfs share as shown below.

Step 5:
Install CentOS 6.4 on the nfs share.  Right click the esxi host in the vsphere client, and choose "New Virtual Machine".  On the first screen, choose "Custom".  On the storage screen, choose the newly created nfs share.

As before, on the networking screen, use the "Management" and "Production" port groups for the two interfaces.

As before, edit the boot settings.

 And attach the CentOS iso from the host machine.

CentOS can now be installed and configured similarly to the previous installation.  The management ip will be 192.168.1.11 and the production ip will be 172.16.0.11

Networking on the esxi host should now look like this.


Step 6:
Verify the setup.  The hosts should be able to communicate with each other on the same vlans, but not between.


Taking down the production nic (172.16.0.10) on the vm on the esxi local disk should prevent that node from reaching the vm on the nfs datastore over the same vlan at 172.16.0.11.  Although the vm still has a route through the default gateway at 192.168.1.1, it should not be able to reach the 172.16.0.x network that way.  The hosts appear to be isolated, as a ping to 172.16.0.11 fails.

Upon closer inspection, the packets are still being routed between vlans, but the target vm does not have a route to the 192.168.1.x network out of its 172.16.0.11 interface, so it is just not responding.  Using tcpdump on the local host doing the routing provides insight (traffic on the 192.168.1.x network should not be seen on vlan 100).
tcpdump -nni eth0.100
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0.100, link-type EN10MB (Ethernet), capture size 65535 bytes
03:36:01.036035 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 19, length 64
03:36:02.036025 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 20, length 64
03:36:03.035997 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 21, length 64
03:36:04.035992 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 22, length 64
03:36:05.036125 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 23, length 64
03:36:06.036007 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 24, length 64
03:36:07.035814 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 25, length 64

To prevent communication between the vlans, add some more rules to the firewall.
iptables -I FORWARD -i eth0.101 -o eth0.100 -j DROP
iptables -I FORWARD -i eth0.100 -o eth0.101 -j DROP

Another ping test and tcpdump confirms traffic is not making it between vlans.  The production and management network are isolated from each other as intended.