Build Document for ESX4.0 System in Main Office
I deployed an ESX 4.0 server and VMware Infrastructure in the Davis Server Room to accommodate testing and migration to the new vSphere platform. I have also deployed the same ESX 4.0 server and VMware Infrastructure in the Sacramento colo. This is the build document for those two deployments.
Test the consequences of moving from physical servers to a VMware environment:
- FreeBSD 6.3
- CentOS 5.3
- Solaris 10
- Windows 2003 R2 Server (VCS Server)
- Mac OS X (optional)
Test the capabilities of the following VMware features:
- Failover with Hot Spare.
- Failover with the backup Hot Spare.
- Performance - Net / Disk / CPU / Mem.
- Thin Provisioning.
- Disk Expansion.
I performed the following to deploy the servers:
So far I have put three Aberdeen servers up in the lab, with the SAS 1.4 TB array as external storage for the time being.
IP Addresses and Management
I used the following IP addresses to deploy the Davis VMW setup:
- 192.168.45.35 - Lab Router.
- 192.168.45.51 - SAS 1.4 TB SAN Management Console.
- 192.168.45.40 - ESX 4.0 Host -1.
- 192.168.45.41 - ESX 4.0 Host -2.
- 192.168.45.42 - ESX 4.0 Host -3.
- 192.168.45.43 - ESX 4.0 Host -4.
- 192.168.45.45 - VCS Server - This is what you want to Terminal Services into to manage the Davis VM Environment. It is also a VM.
- 192.168.45.60-80 - Use for your VMs.
- 192.168.133.60 - Eth0 on the SAS SAN.
- 192.168.133.61 - Eth1 on the SAS SAN.
- 192.168.133.10 - iSCSI1 Host1.
- 192.168.133.11 - iSCSI2 Host1.
- 192.168.133.20 - iSCSI1 Host2.
- 192.168.133.21 - iSCSI2 Host2.
- 192.168.133.30 - iSCSI1 Host3.
- 192.168.133.31 - iSCSI2 Host3.
- 192.168.133.40 - iSCSI1 Host4.
- 192.168.133.41 - iSCSI2 Host4.
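The per-host iSCSI addresses above correspond to two VMkernel ports on each ESX host. As a sketch only, Host1's two iSCSI VMkernel ports could be created from the ESX 4.0 service console roughly as follows; the vSwitch and port-group names here are my own placeholders, not taken from the actual build:

```shell
# Sketch only - vSwitch and port-group names are assumptions, not the real build.
# Create a vSwitch for iSCSI traffic and add one port group per path.
esxcfg-vswitch -a vSwitch2            # new vSwitch (physical uplinks added separately)
esxcfg-vswitch -A iSCSI1 vSwitch2     # port group for the first iSCSI path
esxcfg-vswitch -A iSCSI2 vSwitch2     # port group for the second iSCSI path

# Create the VMkernel interfaces with Host1's iSCSI addresses from the list above.
esxcfg-vmknic -a -i 192.168.133.10 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 192.168.133.11 -n 255.255.255.0 iSCSI2
```

Hosts 2-4 would follow the same pattern with their own .20/.21, .30/.31, and .40/.41 addresses.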
VLANs:
- VLAN 134 - Fault Tolerance - 192.168.134.0/24
- VLAN 135 - VMotion - 192.168.135.0/24
- VLAN 2901 - Admin - 10.3.1.0/24
- VLAN 2920 - IHC - 10.3.20.0/24
- VLAN 2921 - IDX - 10.3.21.0/24
- VLAN 2922 - MLS - 10.3.22.0/24
- VLAN 2923 - CRM - 10.3.23.0/24
- VLAN 2924 - CTT - 10.3.24.0/24
- VLAN 2925 - PBX - 10.3.25.0/24
- VLAN 2926 - AUX - 10.3.26.0/24
- VLAN 2927 - Mail - 10.3.27.0/24
- VLAN 2928 - MIS - 10.3.28.0/24
There is one router, lab-router, at 192.168.45.35, which does dot1Q sub-interface encapsulation; on each VLAN the router's IP address is <network>.1, so for AUX the router is 10.3.26.1. The router has another leg, interface 0/1, directly attached to the DSL modem at 220.127.116.11, with a gateway of 18.104.22.168. Its routes are:
- Default = WWW.XXX.YYY.ZZZ
- 10.3.0.0 = 192.168.45.1
- 192.168.45.0 = 192.168.45.1
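The router-on-a-stick setup described above could look roughly like the following IOS-style config fragment. This is a sketch, not the router's actual configuration: the interface names are placeholders, the elided default-route address is left as written above, and only two of the VLAN sub-interfaces are shown.

```
! Sketch only - interface names are placeholders.
interface FastEthernet0/0.134
 encapsulation dot1Q 134
 ip address 192.168.134.1 255.255.255.0
!
interface FastEthernet0/0.2926
 encapsulation dot1Q 2926
 ip address 10.3.26.1 255.255.255.0
!
ip route 0.0.0.0 0.0.0.0 WWW.XXX.YYY.ZZZ
ip route 10.3.0.0 255.255.0.0 192.168.45.1
```

Each remaining VLAN from the list above gets its own sub-interface with the same pattern: dot1Q tag equal to the VLAN number and the <network>.1 address.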
To manage the environment, Terminal Services into 192.168.45.45, launch the VMware Infrastructure Client on the desktop, and log into "localhost" as Administrator / <PW>; you're good to go.
To manage the SAN, point the web browser on the VCS VM (192.168.45.45) to http://192.168.45.51 and log in as grp-admin / <PW>. I have already created three 256 GB volumes, formatted them as VMFS filesystems, and labelled them ESX-Vol1 through ESX-Vol3.
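For reference, a VMFS volume like the ones above can also be created from an ESX host's service console instead of the SAN web UI. This is a sketch only; the naa device path below is a placeholder, not the actual LUN ID of the SAS array:

```shell
# Sketch - the naa.* device path is a placeholder, not the real LUN ID.
# List the SCSI/iSCSI devices visible to this host to find the LUN.
esxcfg-scsidevs -l

# Create a VMFS3 filesystem on the LUN's first partition and label it.
vmkfstools -C vmfs3 -b 1m -S ESX-Vol1 /vmfs/devices/disks/naa.600XXXXXXXXXXXXX:1
```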
Colo Install / Remove
I have removed the cluster from the colo.
OS Specific Installs
Solaris 10 Notes
Solaris 10 and VMWare vmxnet ethernet driver
The problem I had been experiencing was that I was unable to install the VMware vmxnet ethernet driver onto a Solaris 10 x86 server running on a VMware vSphere 4.0 host.
The solution to getting the vmxnet interface to work is as follows:
1) install the Solaris VM with the stock e1000 ethernet interface
2) install the VMware 4.0 Tools in the Solaris VM
3) halt and power off the VM
4) add a second ethernet interface to the VM, using the vmxnet3 driver
5) ifconfig vmxnet3s0 plumb
6) ifconfig e1000g0 unplumb
7) mv /etc/hostname.e1000g0 /etc/hostname.vmxnet3s0
The trick is to have two ethernet interfaces (e1000 and vmxnet3) and simply unplumb the e1000 interface from within Solaris. Don't remove the e1000 interface from the VM's hardware configuration on the VMware host.
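The interface swap in steps 5-7 can be sketched as a single session on the Solaris VM. The IP address shown is a placeholder from the lab's VM range, not an address assigned in the actual build:

```shell
# Run as root on the Solaris 10 VM, after the vmxnet3 NIC has been added in step 4.
ifconfig vmxnet3s0 plumb                                    # bring the vmxnet3 interface into the IP stack
ifconfig vmxnet3s0 192.168.45.60 netmask 255.255.255.0 up   # placeholder address from the 45.60-80 VM range
ifconfig e1000g0 unplumb                                    # detach e1000 from Solaris (leave it in the VM config)
mv /etc/hostname.e1000g0 /etc/hostname.vmxnet3s0            # carry the hostname config over so it persists across reboots
```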