This post will go over how to install Security Onion in a VM on a VMWare ESXi 5.1 host to monitor traffic from the VMs on that host. Security Onion is a network security monitoring distribution built on top of Ubuntu.
Step 1:
Upload the Security Onion ISO to the ESXi host. Rather than making the ESXi host constantly pull data from an ISO on the local machine, copy the ISO to the ESXi host's local datastore. Select the host in the left-hand pane and open the "Summary" tab. Right-click the local datastore and choose "Browse Datastore...".
Click the upload icon and upload the Security Onion ISO.
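If SSH is enabled on the ESXi host (covered in an earlier post), the ISO can also be copied straight into the datastore from the command line; the ISO filename below is a placeholder.
# Copy the Security Onion ISO into the ESXi host's local datastore over SSH.
# The filename is a placeholder; use the name of the actual download.
scp securityonion-12.04.3.iso root@10.0.0.1:/vmfs/volumes/datastore1/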
Step 2:
Create and install the new virtual machine. While creating the VM, be sure to give it two NICs: one in the management port group and one in the SPAN port group. The management address for this VM will be 192.168.1.14.
Attach the ISO from the local datastore in the virtual machine settings.
Install Security Onion normally. Security Onion is built on top of Ubuntu and has a very easy-to-use installer.
The process eventually finishes and Security Onion is ready to use.
Step 3:
Configure Security Onion. Run the Setup utility on the desktop. The questions are very straightforward, and there are many other tutorials available online for the initial configuration.
Eventually, the setup completes, and a landing web page links to many of the tools.
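To confirm the server and sensor processes actually came up, the NSM control scripts can be checked from a terminal; the script names below are assumed for the 12.04-era Security Onion tooling.
# Check the Security Onion server and sensor process status (script names assumed).
sudo service nsm status
sudo nsm_sensor_ps-status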
Step 4:
Verify functionality. There are a number of pcaps available on the Wireshark website that are captures of attacks. Performing a basic nmap scan of some of the hosts generates a number of events in Snorby.
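A quick scan from another VM on the lab network is enough to generate alerts; the target range below is an assumption based on the management addresses used in these posts.
# Basic SYN and service/version scan of the lab management network (targets assumed).
nmap -sS -sV 192.168.1.10-14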
More importantly, all of the traffic on the virtual network is now being logged and examined by Security Onion. From the Snorby interface:
From the Squert interface:
One thing to be careful of is that the VM will be capturing and saving every packet it sees for analysis. This can quickly overwhelm the system and possibly slow down other VMs on the ESXi host. Reserving and limiting resources on the ESXi host will be covered in another post.
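A rough way to keep an eye on how much space the full packet capture is consuming; /nsm is assumed to be where this version of Security Onion keeps its sensor data.
# Check how much disk the capture and log data are using (path assumed).
sudo du -sh /nsm
df -h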
Thursday, October 31, 2013
Tuesday, October 29, 2013
This post will show how to get a port mirror, or SPAN port, working on VMWare ESXi 5.1. This will be helpful for troubleshooting and will be used in another post.
Step 1:
Add the port group to the vSwitch. Select the ESXi host, click on the "Configuration" tab, and click on "Networking". Click "Add Networking...", choose "Virtual Machine", choose "Use vSwitch0", change the network label to "SPAN", and set the VLAN ID to "All (4095)" so the port group sees every VLAN. The network should now look like this.
Step 2:
Edit the vSwitch properties. Click the "Properties..." link for vSwitch0, then click "Edit..." for the vSwitch. On the "Security" tab, change "Promiscuous Mode" to "Accept".
Step 3:
Edit the port group properties. In the same vSwitch0 properties dialog, select "SPAN" and click "Edit...". Go to the "Security" tab and enable "Promiscuous Mode".
Go to the "Traffic Shaping" tab and make sure the status is set to disabled.
Step 4:
Add a NIC in the SPAN port group to the machine that will listen to the mirrored traffic.
Step 5:
Verify functionality. Below is the output of a tcpdump on the new interface in the SPAN port group, which can see management network traffic as well as production traffic, and can now be used as a central point for IDS, analysis, troubleshooting, etc.
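The capture came from a plain tcpdump on that interface; the interface name below is an assumption and depends on how the NICs are enumerated on the listening machine.
# Capture on the interface attached to the SPAN port group (interface name assumed).
tcpdump -nni eth1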
15:18:07.033540 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 42, length 64
15:18:07.033608 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 42, length 64
15:18:08.034594 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 43, length 64
15:18:08.034651 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 43, length 64
15:18:09.035933 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 44, length 64
15:18:09.036028 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 44, length 64
15:18:10.036544 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 45, length 64
15:18:10.036597 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 45, length 64
15:18:11.037510 IP 192.168.1.13 > 192.168.1.10: ICMP echo request, id 23559, seq 46, length 64
15:18:11.037566 IP 192.168.1.10 > 192.168.1.13: ICMP echo reply, id 23559, seq 46, length 64
15:19:44.850525 IP 172.16.0.13 > 172.16.0.10: ICMP echo request, id 24071, seq 1, length 64
15:19:44.850726 IP 172.16.0.10 > 172.16.0.13: ICMP echo reply, id 24071, seq 1, length 64
15:19:45.851507 IP 172.16.0.13 > 172.16.0.10: ICMP echo request, id 24071, seq 2, length 64
15:19:45.851686 IP 172.16.0.10 > 172.16.0.13: ICMP echo reply, id 24071, seq 2, length 64
15:19:46.852256 IP 172.16.0.13 > 172.16.0.10: ICMP echo request, id 24071, seq 3, length 64
15:19:46.852385 IP 172.16.0.10 > 172.16.0.13: ICMP echo reply, id 24071, seq 3, length 64
Thursday, October 24, 2013
This post will go over how to clone a VM from the command line (and thus make the process easily scriptable).
Step 1:
Decide which VM you will be cloning. The "CentOS 6.4 PXE" server has not been touched since creation, so it will make a good base system.
Step 2:
Create the VM config file. The local datastore for VMs on this ESXi server is /vmfs/volumes/datastore1, which is a symbolic link to /vmfs/volumes/524734d7-f389d00a-4f68-b870f4dd73cf.
The directory contains all of the current VMs on the local datastore.
cd /vmfs/volumes/datastore1
ls -1
CentOS 6.4
CentOS 6.4 PXE
FreeBSD 9.1
Security Onion 12.04.3
Test 1
Ubuntu 10.0.4 x32
Create a directory for a new vm.
mkdir "Base CentOS 6.4"
Change to the new directory and copy the "PXE" config file:
cd Base\ CentOS\ 6.4
cp ../CentOS\ 6.4\ PXE/CentOS\ 6.4\ PXE.vmx "Base CentOS 6.4.vmx"
Make any necessary changes to the configuration file for the new system:
cat Base\ CentOS\ 6.4.vmx
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "8"
displayName = "Base CentOS 6.4"
floppy0.present = "TRUE"
scsi0.present = "TRUE"
scsi0.sharedBus = "none"
scsi0.virtualDev = "pvscsi"
memsize = "512"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Base CentOS 6.4.vmdk"
scsi0:0.deviceType = "scsi-hardDisk"
ide1:0.present = "TRUE"
ide1:0.clientDevice = "TRUE"
ide1:0.deviceType = "cdrom-raw"
ide1:0.startConnected = "FALSE"
floppy0.startConnected = "FALSE"
floppy0.fileName = ""
floppy0.clientDevice = "TRUE"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "Management"
ethernet0.addressType = "generated"
ethernet1.present = "TRUE"
ethernet1.virtualDev = "vmxnet3"
ethernet1.networkName = "Production"
ethernet1.addressType = "generated"
guestOS = "rhel6"
Step 3:
Copy the hard disk (make sure the VM is powered off).
vmkfstools -i "/vmfs/volumes/datastore1/CentOS 6.4/CentOS 6.4.vmdk" \
    "/vmfs/volumes/datastore1/Base CentOS 6.4/Base CentOS 6.4.vmdk"
Step 4:
Register the new machine with ESXi.
vim-cmd solo/registervm "/vmfs/volumes/524734d7-f389d00a-4f68-b870f4dd73cf/Base CentOS 6.4/Base CentOS 6.4.vmx"
Verify the new machine is registered:
vim-cmd vmsvc/getallvms
...trim...
17     Base CentOS 6.4     [datastore1] Base CentOS 6.4/Base CentOS 6.4.vmx     rhel6Guest     vmx-08
...trim...
Step 5:
Power on the VM and perform any post-installation configuration. Most notably, new IP addresses will need to be specified. If necessary, use the DHCP server to get a temporary IP, watch the server to see what leases are handed out, and connect to the new lease. The MAC address can be verified in the vmx configuration file.
vim-cmd vmsvc/power.on 17
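Since the NICs use addressType = "generated", the MACs that ESXi assigned can be pulled straight out of the vmx file; a quick sketch:
# Show the MAC addresses ESXi generated for the cloned VM's NICs.
grep -i generatedaddress "/vmfs/volumes/datastore1/Base CentOS 6.4/Base CentOS 6.4.vmx"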
Tuesday, October 22, 2013
This post will focus on setting up a Red Hat / CentOS Kickstart server so hosts can perform an automated install using PXE.
Step 1:
Install necessary packages on CentOS.
yum -y install wget syslinux syslinux-tftpboot xinetd \
    tftp-server tftp dhcp httpd openssh-clients
Step 2:
Create the anaconda directory structure.
mkdir -p /export/anaconda/iso/CentOS
mkdir -p /export/anaconda/media
mkdir -p /export/anaconda/media/CentOS-6.4-x86_64
mkdir -p /export/anaconda/media/CentOS-6.4-i386
mkdir -p /export/anaconda/tftpboot
mkdir -p /export/anaconda/tftpboot/pxelinux.cfg
mkdir -p /export/anaconda/tftpboot/CentOS-6.4-x86_64
mkdir -p /export/anaconda/tftpboot/CentOS-6.4-i386
mkdir -p /export/anaconda/postinstall/
mkdir -p /export/anaconda/cfg/
ln -s /export/anaconda /anaconda
Step 3:
Modify the TFTP server configuration.
cd /etc/xinetd.d
diff tftp tftp.bak
13,14c13,14
<         server_args             = -s /export/anaconda/tftpboot
<         disable                 = no
---
>         server_args             = -s /var/lib/tftpboot
>         disable                 = yes
Step 4:
Modify the DHCP server configuration.
cd /etc/dhcp
cat dhcpd.conf
subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;
    option domain-name-servers 8.8.8.8;
    option subnet-mask 255.255.255.0;
    range 192.168.1.240 192.168.1.250;
    next-server 192.168.1.10;
    filename "pxelinux.0";
}
Step 5:
Copy the CentOS ISO and extract the files.
scp CentOS-6.4-i386-minimal.iso root@192.168.1.10:/export/anaconda/iso/CentOS
cd /export/anaconda/tftpboot/CentOS-6.4-i386
mount -o loop /export/anaconda/iso/CentOS/CentOS-6.4-i386-minimal.iso /mnt
cp -Rp /mnt/* ./
umount /mnt
Step 6:
Configure the boot menu.
cat /export/anaconda/tftpboot/pxelinux.cfg/default
timeout 3600
default menu.c32
menu title Boot Menu
label 1
menu label ^ 1) CentOS-6.4-x86_64 (64-bit)
kernel CentOS-6.4-x86_64/vmlinuz
append initrd=CentOS-6.4-x86_64/initrd.img ramdisk_size=15491 ip=dhcp ksdevice=bootif \
ks=http://192.168.1.10/anaconda/cfg/CentOS-6.4-x86_64-ks.cfg
IPAPPEND 2
label 2
menu label ^ 2) CentOS-6.4-i386 (32-bit)
kernel CentOS-6.4-i386/vmlinuz
append initrd=CentOS-6.4-i386/initrd.img ramdisk_size=15491 ip=dhcp ksdevice=bootif \
ks=http://192.168.1.10/anaconda/cfg/CentOS-6.4-i386-ks.cfg
IPAPPEND 2
label 3
menu label ^ 3) Rescue CentOS-6.4-x86_64 (64-bit)
kernel CentOS-6.4-x86_64/vmlinuz
append initrd=CentOS-6.4-x86_64/initrd.img ramdisk_size=15491 ip=dhcp \
repo=http://192.168.1.10/anaconda/CentOS-6.4-x86_64 lang=en_US.UTF-8 keymap=us rescue
label 4
menu label ^ 4) Rescue CentOS-6.4-i386 (32-bit)
menu default
kernel CentOS-6.4-i386/vmlinuz
append initrd=CentOS-6.4-i386/initrd.img ramdisk_size=15491 ip=dhcp \
repo=http://192.168.1.10/anaconda/CentOS-6.4-i386 lang=en_US.UTF-8 keymap=us rescue
Step 7:
Configure Apache.
cat /etc/httpd/conf.d/anaconda.conf
Alias /anaconda/cfg /export/anaconda/cfg
<Directory "/export/anaconda/cfg">
    Options Indexes FollowSymLinks
    Allow from All
</Directory>

Alias /anaconda/postinstall /export/anaconda/postinstall
<Directory "/export/anaconda/postinstall">
    Options Indexes FollowSymLinks
    Allow from All
</Directory>

Alias /anaconda /export/anaconda/media
<Directory "/export/anaconda/media">
    Options Indexes FollowSymLinks
    Allow from All
</Directory>
Step 8:
Modify the kickstart files.
cat CentOS-6.4-i386-ks.cfg
install
# Specifies the language
lang en_US.UTF-8
# Specifies the keyboard layout
keyboard us
# Skip Red Hat subscriber key input
key --skip
# Forces the text installer to be used (saves time)
text
# Forces the cmdline installer to be used (debugging)
#cmdline
# Skips the display of any GUI during install (saves time)
skipx
# Used with an HTTP install to specify where the install files are located
url --url http://192.168.1.10/anaconda/CentOS-6.4-i386
# Assign a static IP address upon first boot & set the hostname
network --device eth0 --onboot yes --bootproto static --ip=192.168.1.13 \
    --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8
# Give the second interface a static address (if you are not using a second interface comment this line out)
network --device eth1 --onboot yes --bootproto static --ip=172.16.0.13 \
    --netmask=255.255.0.0
# Set the root password
rootpw password
# Need a repo as only the minimal iso was used
repo --name=es --baseurl=http://linux.mirrors.es.net/centos/6/os/i386/
# Enable the firewall and open port 22 for SSH remote administration
firewall --enabled --port=22:tcp
# Setup security and SELinux levels
authconfig --enableshadow --passalgo=sha512
selinux --disabled
# Set the timezone
timezone --utc Etc/UTC
# Create the bootloader in the MBR with drive sda being the drive to install it on
bootloader --location=mbr --driveorder=sda,sdb --append=audit=1
# Wipe all partitions and build them with the info below
clearpart --all --initlabel
#Disk partitioning information
zerombr
# Create primary partitions
part /boot --fstype ext4 --size=512 --asprimary --ondisk=sda
part swap --size=256 --asprimary --ondisk=sda
part pv.01 --size=4096 --grow --asprimary --ondisk=sda
# Create LVM logical volumes
volgroup system --pesize=4096 pv.01
logvol / --vgname=system --size=3000 --grow --name=root_vol
# reboot when installation completes
reboot
# Install the Core software packages, aka "minimal", plus a couple extras
%packages
%end

%pre
# redirect debugging output to tty3
#exec < /dev/tty3 > /dev/tty3
#chvt 3

%post --log=/var/tmp/install.log
# redirect debugging output to tty3
#exec < /dev/tty3 > /dev/tty3
#chvt 3
echo "Creating CentOS-6.4-i386 post installation directory ..."
mkdir -p /opt/postinstall
echo "Downloading CentOS-6.4-i386 post installation files ..."
cd /opt/postinstall
wget http://192.168.1.10/kickstart/postinstall/CentOS-6.4-i386-postinstall.tgz
tar zxf CentOS-6.4-i386-postinstall.tgz
rm CentOS-6.4-i386-postinstall.tgz > /dev/null 2>&1
echo "Executing CentOS-6.4-i386 post installation script ..."
./CentOS-6.4-i386-postinstall >> CentOS-6.4-i386-postinstall.out 2>&1
echo "Done."
Step 9:
Finish configuring PXE.
cp /usr/share/syslinux/pxelinux.0 /export/anaconda/tftpboot/
cp /usr/share/syslinux/menu.c32 /export/anaconda/tftpboot/
Step 10:
Enable services.
chkconfig dhcpd on
chkconfig httpd on
chkconfig xinetd on
service dhcpd restart
service httpd restart
service xinetd restart
Step 11:
Start the server. The only option that needs to be chosen is on the initial PXE boot menu.
Once selected, the system should perform an unattended install, requiring no user interaction.
After it completes, the system will reboot and present a login screen for the newly installed system.
And the new system is up with the packages and IP addresses specified in the kickstart file.
# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:75:23:1D
          inet addr:192.168.1.13  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:231d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:70 errors:0 dropped:0 overruns:0 frame:0
          TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8224 (8.0 KiB)  TX bytes:13776 (13.4 KiB)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:75:23:27
          inet addr:172.16.0.13  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe75:2327/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:120 (120.0 b)  TX bytes:720 (720.0 b)
The setup can easily be modified to specify different kickstart files for different types of servers. For example, there could be an option for "Directory Server", "LAMP Server", or "Database Server", all pointing to different kickstart files, with each file specifying the required packages as well as any necessary post-installation configuration.
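As a sketch, the extra entries in pxelinux.cfg/default might look like this, following the same syntax as the menu in Step 6; the labels and kickstart filenames are made up for illustration.
# Hypothetical extra entries for /export/anaconda/tftpboot/pxelinux.cfg/default
label 5
menu label ^ 5) CentOS-6.4-x86_64 LAMP Server
kernel CentOS-6.4-x86_64/vmlinuz
append initrd=CentOS-6.4-x86_64/initrd.img ramdisk_size=15491 ip=dhcp ksdevice=bootif \
ks=http://192.168.1.10/anaconda/cfg/CentOS-6.4-x86_64-lamp-ks.cfg
IPAPPEND 2
label 6
menu label ^ 6) CentOS-6.4-x86_64 Database Server
kernel CentOS-6.4-x86_64/vmlinuz
append initrd=CentOS-6.4-x86_64/initrd.img ramdisk_size=15491 ip=dhcp ksdevice=bootif \
ks=http://192.168.1.10/anaconda/cfg/CentOS-6.4-x86_64-db-ks.cfg
IPAPPEND 2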
Friday, October 18, 2013
This post will go over how to install a VM on VMWare ESXi 5.1 using an iSCSI target on FreeBSD backed by ZFS. FreeBSD 9.1 will be used in VirtualBox with a new virtual drive that will serve as the iSCSI target. The IP of the system will be 10.0.0.3.
Step 1:
Set up the NIC of the guest OS as a bridged adapter in VirtualBox using the host system's Ethernet port.
Add a rule to allow the iSCSI traffic through the local host's firewall.
iptables -A open -p tcp --dport 3260 -j ACCEPT
Step 2:
Install the necessary packages for iSCSI on FreeBSD.
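The exact commands aren't preserved in this post; a sketch of installing istgt on FreeBSD 9.1 from packages or from the ports tree:
# Install istgt from a remote package (FreeBSD 9.x pkg_add syntax) ...
pkg_add -r istgt
# ... or build it from the ports tree instead.
cd /usr/ports/net/istgt && make install clean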
Step 3:
Create the zvol that will be used as the block device.
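A sketch of creating a pool on the extra virtual disk and carving a zvol out of it; the device name, pool name, and size are assumptions.
# Create a pool on the second virtual disk (device name assumed).
zpool create tank /dev/ada1
# Create a 20 GB zvol to export as the iSCSI block device (name and size assumed).
zfs create -V 20G tank/iscsi0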
Step 4:
Modify the configuration files for istgt.
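The exact configuration isn't reproduced in this post. As a rough sketch, the sections of /usr/local/etc/istgt/istgt.conf that usually need editing would look something like this; the target name, portal address, and zvol path are assumptions, and the stock [Global] and [UnitControl] sections from the sample config can be left largely as shipped.
# /usr/local/etc/istgt/istgt.conf -- rough sketch only, values are assumptions
[PortalGroup1]
  Portal DA1 10.0.0.3:3260
[InitiatorGroup1]
  InitiatorName "ALL"
  Netmask 10.0.0.0/8
[LogicalUnit1]
  TargetName disk0
  Mapping PortalGroup1 InitiatorGroup1
  UnitType Disk
  LUN0 Storage /dev/zvol/tank/iscsi0 Auto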
Step 5:
Start the service and verify targets are available.
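A sketch of enabling istgt at boot, starting it, and checking that the portal is listening; the verification uses generic FreeBSD tools rather than anything specific to this post.
# Enable istgt at boot and start it (rc.conf knob assumed to be istgt_enable).
echo 'istgt_enable="YES"' >> /etc/rc.conf
service istgt start
# Confirm something is listening on the iSCSI port (3260).
sockstat -4 -l | grep 3260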
Step 6:
Add the iSCSI target to vSphere. From vSphere, select the ESXi host, select the "Configuration" tab, then click "Storage Adapters". Click "Add..." and select the "iSCSI Software Adapter". Go to the properties of the new iSCSI Software Adapter and click the "Dynamic Discovery" tab. Add the IP of the iSCSI target.
Rescan the adapter, and the target should show up.
Go back to "Storage" on the left-hand side of the screen. Click "Add Storage..." and select "Disk/LUN". The newly added iSCSI target should be available.
Give the new datastore a name, and modify any other options if necessary. Once done, the new datastore should show up in vSphere.
Step 7:
Install a VM on the iSCSI target. FreeBSD will be installed. Create a new VM as before, but on the storage section, be sure to select the newly created iSCSI device.
And as before, there is a management and a production NIC.
Attach an ISO as before and go through the installation.
The system will get the next available IP addresses in the management and production networks, that being 192.168.1.12/24 and 172.16.0.12/24, respectively.
Wednesday, October 16, 2013
This post will focus on installing a VM on VMWare ESXi 5.1 using an NFS share. The NFS share will live on an Arch Linux installation. The VMWare website offers some best practices for using NFS, available here. Unfortunately, the setup being performed is on consumer-grade hardware (i.e. the laptops that were available), so most of the best practices will have to be skipped. For test purposes, this is considered acceptable.
Step 1:
Install an NFS server on a local machine. The NFS server will run on an Arch Linux machine, which uses pacman to install packages and systemd to manage services.
pacman -S nfs-utils
systemctl start rpc-idmapd
systemctl start rpc-mountd
Step 2:
Open some ports. This machine uses a separate chain called "open" for open ports and services.
iptables -A open -p tcp --dport 111 -j ACCEPT
iptables -A open -p tcp --dport 2049 -j ACCEPT
iptables -A open -p tcp --dport 20048 -j ACCEPT
Step 3:
Make an NFS directory that will be exported, add it to the exports file, and re-export the filesystems.
mkdir /srv/nfs
echo "/srv/nfs 10.0.0.1(rw,no_root_squash,sync)" >> /etc/exports
exportfs -rav
Step 4:
In the vSphere client, click on the ESXi host, then click on the "Configuration" tab, then the "Storage" option under "Hardware". Click on "Add Storage..." and choose "Network File System". Set up the NFS share as shown below.
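The same datastore can also be added from the ESXi shell instead of the GUI; this is a sketch, with the datastore label made up, using the NFS server address and export path from the earlier steps.
# Add the NFS export as a datastore from the ESXi command line (label is made up).
esxcfg-nas -a -o 10.0.0.2 -s /srv/nfs nfs-datastore
# List NFS datastores to verify it mounted.
esxcfg-nas -l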
Step 5:
Install CentOS 6.4 on the NFS share. Right-click the ESXi host in the vSphere client and choose "New Virtual Machine". On the first screen, choose "Custom". On the storage screen, choose the newly created NFS share.
As before, on the networking screen, use the "Management" and "Production" port groups for the two interfaces.
As before, edit the boot settings.
And attach the CentOS ISO from the host machine.
CentOS can now be installed and configured similarly to the previous installation. The management IP will be 192.168.1.11 and the production IP will be 172.16.0.11.
Networking on the ESXi host should now look like this.
Step 6:
Verify the setup. The hosts should be able to communicate with each other on the same VLANs, but not between them.
Taking down the production NIC (172.16.0.10) on the VM on the ESXi local disk should prevent that node from reaching the VM on the NFS share over the same VLAN, 172.16.0.11. Although the VM still has a route to that host through the default gateway of 192.168.1.1, it should not be able to reach the 172.16.0.x network that way. The hosts appear to be isolated, as a ping to 172.16.0.11 fails.
Upon closer inspection, the packets are still being routed between VLANs, but the target VM does not have a route back to the 192.168.1.x network out of its 172.16.0.11 interface, so it simply does not respond. Using tcpdump on the local machine doing the routing provides insight (traffic from the 192.168.1.x network should not be seen on VLAN 100).
tcpdump -nni eth0.100
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0.100, link-type EN10MB (Ethernet), capture size 65535 bytes
03:36:01.036035 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 19, length 64
03:36:02.036025 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 20, length 64
03:36:03.035997 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 21, length 64
03:36:04.035992 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 22, length 64
03:36:05.036125 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 23, length 64
03:36:06.036007 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 24, length 64
03:36:07.035814 IP 192.168.1.10 > 172.16.0.11: ICMP echo request, id 63236, seq 25, length 64
To prevent communication between the VLANs, add some more rules to the firewall.
iptables -I FORWARD -i eth0.101 -o eth0.100 -j DROP
iptables -I FORWARD -i eth0.100 -o eth0.101 -j DROP
Another ping test and tcpdump confirm traffic is not making it between VLANs. The production and management networks are isolated from each other as intended.
Monday, October 14, 2013
The main goal of this project is to have an ESXi host run several VMs using various datastores, including local, NFS, and iSCSI. There should be a tagged management network on VLAN 101 that the VMs will use for OS updates and administrative intervention, and a tagged production network on VLAN 100. The ISOs used will be FreeBSD 9.1 and CentOS 6.4.
Step 1:
Sign up for an account on vmware.com and download the VMWare ESXi ISO along with the vSphere client. VMWare ESXi 5.1 update 1 will be used.
Step 2:
Burn the ISO to disc or USB. UNetbootin was used.
Step 3:
Install VMWare ESXi on the machine that will host the VMs. Pretty much next, next, finish. The 10.0.0.0/8 network will be used for the ESXi host, 172.16.0.0/16 as production, and 192.168.1.0/24 for management. The VMWare host will get 10.0.0.1.
Step 3a (optional):
Enable SSH on the VMWare ESXi host. From the console of the ESXi host, log in and click on "Troubleshooting Options", then "Enable SSH".
Step 4:
Install the vSphere client on a machine. A Windows XP VM running in VirtualBox will be used. Pretty much next, next, finish.
Step 5:
Set up the local machine network. The ESXi host network will be the native VLAN, the production network will be VLAN 100, and the management network will be VLAN 101. A switch capable of VLAN tagging is not available, so the local machine, which also runs the Windows VM with the vSphere application, will serve as the trunk port. On the local machine, execute:
ip addr add 10.0.0.2/8 dev enp1s0
For VLAN 100, execute:
ip link add link enp1s0 name eth0.100 type vlan id 100
ip addr add 172.16.0.1/16 brd 172.16.255.255 dev eth0.100
ip link set dev eth0.100 up
And verify the link is up with:
ip link
...trim...
eth0.100@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
...trim...
route -n
...trim...
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0.100
...trim...
ping 172.16.0.1
...trim...
64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=0.056 ms
...trim...
For VLAN 101, execute:
ip link add link enp1s0 name eth0.101 type vlan id 101
ip addr add 192.168.1.1/24 brd 192.168.1.255 dev eth0.101
ip link set dev eth0.101 up
And verify the link is up with:
ip link
...trim...
eth0.101@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
...trim...
route -n
...trim...
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0.101
...trim...
ping 192.168.1.1
...trim...
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.068 ms
...trim...
Log in to the ESXi host from the Windows XP VM:
Step 6:
Set up the ESXi host network. Once logged in to the host through vSphere, click on the "Configuration" tab, then on "Networking". On "vSwitch0", create a "Production" port group with VLAN ID 100 and a "Management" port group with VLAN ID 101.
There is now an ESXi host that connects directly to the local machine over Ethernet. The connection carries a native VLAN using 10.0.0.0/8, with 10.0.0.1 as the ESXi host and 10.0.0.2 as the local machine (which runs the Windows XP VM and the vSphere client behind NAT); 172.16.0.0/16 as the production network on VLAN 100, where 172.16.0.1 is an SVI on the local machine; and 192.168.1.0/24 as the management network on VLAN 101, where 192.168.1.1 is an SVI on the local machine.
Step 7:
Install CentOS 6.4 on the ESXi host. Right-click the ESXi host in the vSphere client and choose "New Virtual Machine". On the first screen, choose "Custom". Leave everything at the default with the exception of the networking section, where two interfaces are used: one in the "Management" and one in the "Production" port group.
Once complete, right-click on the "CentOS 6.4" VM under the 10.0.0.1 ESXi host and choose "Edit Settings". Click on the "Options" tab and select "Boot Options". Check the option that says "The next time the virtual machine boots, force entry into the BIOS setup screen."
Power on the VM. The VM should stop at a BIOS screen. Attach the CentOS ISO from the local machine: towards the top of the screen, click on the icon that is a CD with a wrench, and select "Connect to ISO image on local disk...". Select the ISO image on your local machine that you want the ESXi host to use for this VM, and hit OK. Install the CentOS machine normally.
Step 8:
Allow access to the internet from the new VM over the management VLAN, VLAN 101. On the local machine, where wlp2s0 is the external connection and eth0.101 is the VLAN that needs internet access, execute:
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -P FORWARD ACCEPT
iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE
iptables -A FORWARD -i wlp2s0 -o eth0.101 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0.101 -j ACCEPT
iptables -A FORWARD -j DROP
Step 9:
Configure CentOS networking. See the screenshot below; eth0 is management, eth1 is production.
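The screenshot isn't reproduced here, but the equivalent interface configuration files would look roughly like the sketch below, using the addresses this VM has elsewhere in these posts (management 192.168.1.10, production 172.16.0.10); treat the exact contents as an assumption.
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- management interface (sketch)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- production interface (sketch)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.0.10
NETMASK=255.255.0.0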
Add a nameserver.
echo "nameserver 8.8.8.8 > /etc/resolve.conf"
There is now an ESXi host directly attached to a local machine that carries a native VLAN, a production VLAN, and a management VLAN. The management VLAN has internet access, and the local machine can be used to set policy on the traffic coming in on the trunk uplink from the ESXi host.