No Time For Narrative – Solaris 11 Express ZFS and NFS share to VMware ESXi 4.1u1

Welcome to my scratchpad. See the links throughout for some excellent write-ups.

This article is based on Solaris 11 Express and VMware ESXi 4.1.

Step 0: Build your hardware and install Solaris.

HCL: http://www.oracle.com/webfolder/technetwork/hcl/index.html

Step 1: Mirror the rpool

zpool status #figure out what’s already allocated

cfgadm -s "select=type(disk)" #list disks

fdisk -W - [rootdisk2] #check

fdisk -B [rootdisk2] #apply a default Solaris partition to the disk

fdisk -W - [rootdisk2] #check again

prtvtoc /dev/rdsk/[rootdisk1] | fmthard -s - /dev/rdsk/[rootdisk2] #slice and dice! You must define slices for rpools, so this step copies the original disk’s slice layout to the second root disk

zpool attach -f rpool [rootdisk1] [rootdisk2] #attach, not add. Attach = mirror. Add = stripe.

zpool status #check

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/[rootdisk2] #install grub onto second boot disk

zpool status #check again

see: http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express

For more on rpool manipulation, namely on how to shrink an rpool, see: http://blogs.oracle.com/mock/entry/how_to_shrink_a_mirrored

Step 2: Configure networking

ifconfig -a #cheat and take advantage of nwam’s device lists

svcadm disable network/physical:nwam #disable nwam

svcadm enable network/physical:default #then enable the default config tools

ipadm create-if e1000g0 #or whatever your nic is

ipadm show-if #check

ipadm create-addr -T static -a mg.mt.ip.addy/prefix e1000g0/v4 #create the address

ipadm show-addr #confirm

#link aggregate group (LACP)

dladm create-aggr -L active -l [nic2] -l [nic3] aggr1 #create the link aggregation; -L active turns on LACP (the switch ports must be configured to match)
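A quick check before layering IP on top:

dladm show-aggr #confirm the aggregation exists; add -L to see LACP state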

Then run the same ipadm commands against the aggregation:

ipadm create-if aggr1 #configure aggr1 for persistence

ipadm show-if #confirm it was added, should show down.

ipadm create-addr -T static -a 1.2.3.4/24 aggr1/v4 #create the address

ipadm show-addr #confirm it was added, show-if should now show “OK”

netstat -rn #check routing tables

route -p add default 1.2.3.1 #add persistent default route

vi /etc/resolv.conf #configure nameservers

vi /etc/hosts #configure name

vi /etc/nsswitch.conf #enable dns lookups for “hosts” line
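The hosts line in /etc/nsswitch.conf should end up looking like this:

hosts: files dns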

svcadm restart ssh #restart ssh after modifying networking

ping 4.2.2.1 #ping Level3's DNS resolvers to confirm routing

reboot #reboot to confirm persistency

Step 3: Do your AD thing. I know you love it.

read this: http://blog.scottlowe.org/2006/08/15/solaris-10-and-active-directory-integration/

also: http://download.oracle.com/docs/cd/E19963-01/html/821-1449/index.html

Step 4: Configure ZFS

format #present the list of disks, then exit out, leaving the list in the screen buffer

or: cfgadm -s "select=type(disk)"

zpool create tank mirror c7t0d0 c7t1d0 mirror c7t2d0 c7t3d0 cache c7t4d0 #create your main zpool. This is 4 drives in RAID 10 plus a read cache. Use "log c#t#d#" to add a ZIL device
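A quick sanity check of the layout before loading data:

zpool status tank #the two mirrors should appear as separate vdevs, with the cache device listed at the bottom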

zfs set dedup=on tank #enable dedup

zfs set compression=on tank #enable compression.

Note that properties set on the pool’s top-level dataset are inherited by its children when they are created.

zfs create tank/VMOS #create your ESXi share
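To confirm the new filesystem picked up the pool-level settings:

zfs get compression,dedup tank/VMOS #the SOURCE column should read "inherited from tank"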

zfs set sharenfs=rw=1.2.3.xx:1.2.3.xy,root=1.2.3.xx:1.2.3.xy,nosuid tank/VMOS #share read-write with 1.2.3.xx and 1.2.3.xy; ESXi mounts NFS as root, so it needs root access
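Confirm the export is live:

share #lists active NFS shares; tank/VMOS should appear with the options above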

From ESXi 4.1:

esxcfg-nas -a -o IP.Or.Host.Name -s /tank/VMOS [esxiDatastoreName] # http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005057
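And to confirm the datastore mounted:

esxcfg-nas -l #lists configured NAS datastores and whether they are mounted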

ZFS /Solaris links

http://lildude.co.uk/zfs-cheatsheet

http://download.oracle.com/docs/cd/E19963-01/index.html

iSCSI on Solaris 11 Express: use COMSTAR.

http://wikis.sun.com/display/OpenSolarisInfo200906/Configuring+an+iSCSI+Storage+Array+With+COMSTAR+%28Task+Map%29
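For reference, a bare-bones COMSTAR sketch, assuming the storage-server packages are installed. The zvol name and size are illustrative, and [GUID] is the GUID printed by sbdadm; see the link above for the full task map:

svcadm enable stmf #start the SCSI target mode framework
zfs create -V 200G tank/esxilun #back the LUN with a zvol
sbdadm create-lu /dev/zvol/rdsk/tank/esxilun #create the logical unit; note the GUID it prints
stmfadm add-view [GUID] #expose the LU to all initiators; use host/target groups to restrict
svcadm enable -r svc:/network/iscsi/target:default #start the iSCSI target service
itadm create-target #create the iSCSI target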

Setting up Hyper-V with NAT

This post was originally posted by me at http://forums.serverbeach.com/showthread.php?t=6411.

I’ve edited out the ServerBeach-specific stuff and will post pictures… soonish.

The following link has some great pictures not included here: http://sqlblog.com/blogs/john_paul_c…h-hyper-v.aspx


CONFIGURE HYPER-V

1. Configure an “Internal” Hyper-V network
2. Set each virtual machine to use the internal network, and put the VMs and your Hyper-V host on the same subnet (in this example, 10.0.0.1 for the host and 10.0.0.10 for the VM). A netsh example follows below.
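If you’d rather assign the host’s 10.0.0.1 address from a command prompt, something like the following should work. The interface name here is a guess; check ncpa.cpl (or netsh interface show interface) for the actual name of the internal virtual NIC:

Code:

netsh interface ip set address name="Local Area Connection 3" static 10.0.0.1 255.255.255.0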

ENABLE ROUTING AND REMOTE ACCESS ON THE HOST MACHINE

1. Click Start -> Administrative Tools -> Routing and Remote Access
2. Right-click on Server#### (local) -> Configure and Enable Routing and Remote Access
3. Click Next on the Welcome window
4. Select Custom Configuration, click Next
5. Select NAT, click Next
6. Select your public interface
7. Select your internal Hyper-V interface
8. Select “I will set up name and address services later”, click Next
9. Click Finish

CONFIGURE ROUTING AND REMOTE ACCESS ON THE HOST MACHINE

1. Routing and Remote Access should be running on the server now
2. Expand out the server
3. Expand out IP Routing
4. Select NAT/Basic Firewall
5. Right-click your public interface and select Properties
6. The Network Address Translation Properties window will open
7. Select the radio button for “Public Interface Connected to the Internet”
8. Select the check boxes for both “Enable NAT on this interface” and “Enable a basic firewall on this interface”
9. Click on the Address Pool tab
10. Click the Add button and add your secondary IP addresses. The “Start Address” and “End Address” will be the same in most cases.

*NOTE* You do not want the secondary IP address configured in the TCP/IP Properties of the host machine.

11. Click the Reservations button and enter your static IP mappings. That is, specify that you want traffic on your secondary IP mapped to your VM’s internal IP.
12. In services.msc, make sure that RRAS is set to start automatically and Windows ICS is disabled. This can also be done from the command line, as shown below.
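For example, with sc.exe; “remoteaccess” is RRAS’s service name (the same one used in the batch file under NOTE #1) and “SharedAccess” is the service name for Windows ICS. The space after “start=” is required by sc’s syntax:

Code:

sc config remoteaccess start= auto
sc config SharedAccess start= disabled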

NOTE #1

When configuring and experimenting with the RRAS firewall, create a batch file to stop the service in case you forget to allow RDC or otherwise render the system inaccessible.

Code:

net stop "remoteaccess"

Then add the batch file to the scheduler and have it run some time after you apply your changes.
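Something like this, with a placeholder path and time; schedule it a few minutes out, make your changes, and delete the task once you confirm you still have access:

Code:

schtasks /create /tn "StopRRAS" /tr "C:\stop-rras.bat" /sc once /st 14:30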

NOTE #2

RRAS is really finicky about the interfaces installed on the server. If an interface is changed in any significant way, RRAS will have to be disabled and reconfigured.

Hyper-V is also similarly finicky about its virtual networks. I can’t count the number of times I had to remove and recreate networks. Thankfully, this was rather painless with only one VM to propagate changes to.

If you should encounter any difficulties with adding additional VMs to the server, try the following, in order: reset Hyper-V networking, reset each VM’s network binding (in the VM’s settings), confirm the physical host interfaces, and then reconfigure RRAS.

NOTE #3

Some people who have had Hyper-V configuration problems solved them by disabling the TCP Offload Engine. Symptoms include RRAS not working at all, or working only sporadically. If in doubt, disable TCP offloading.

http://social.technet.microsoft.com/…8-d22aca6154ee
http://support.microsoft.com/default…b;EN-US;904946

So if this applies to you, run the following on the host and on any Server 2008 VMs:

netsh int ip set global taskoffload=disabled

and add the following registry key to any 2003 VMs:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DisableTaskOffload

This is a DWORD entry that should have a value of 1.
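From a command prompt on the 2003 VM, that is one line:

Code:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v DisableTaskOffload /t REG_DWORD /d 1 /f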