After installing VMware ESXi on Mordor, the next step in my Mordor 1.0 -> Mordor 2.0 project is to regain the basic Mordor 1.0 functionality based on a VM – i.e., to set up an unRAID-powered NAS.
In this post I document the details of my experience setting up unRAID (version 5.0-beta12a) on my ESXi server (running version 5.1.0). I describe creating a VM on ESXi that boots from a physical USB flash drive (using Plop Boot Manager), my adventures trying to pass the onboard SATA controller through to the VM (spoiler: it failed), and other options for configuring the data drives for the VM (Raw Device Mapping, VMDirectPath I/O passthrough).
By far, the most useful resource that guided me in this process was the amazingly detailed documentation by the unRAID user Johnm in his ATLAS post in the Lime Tech forum. I have used guidance and tips from his post extensively, and will probably repeat some in this post.
Challenges and Requirements
So, what are the challenges in running unRAID as a VM? And what are some of the requirements I have from this set up and migration process?
- unRAID boots from a USB flash drive, which also serves as its persistent storage for configuration and plugins, but ESXi offers no “boot a VM from a USB flash drive” option, nor does it support presenting a USB flash drive to a VM as boot media.
- The main function of unRAID is to manage storage. Thus, it makes sense to give unRAID complete and absolute control of the NAS hard drives it will manage. ESXi, running on VT-d-enabled hardware, supports DirectPath I/O passthrough – but it is far from simple or straightforward to use it directly with physical drives…
- I don’t have the luxury of separate “production” and “staging” servers, so I can’t experiment without affecting my actual system and data. Mordor 1.0 managed 3 data drives (2TB each), and those are the only large drives I have, so I must get the unRAID VM working with the existing configuration and data, while minimizing the risk that something bad happens to my data during the process.
Create an unRAID VM that boots from a USB flash drive using Plop Boot Manager
To tackle the first challenge, I decided to go with a solution that allows a VM to boot from a physical USB flash drive, using the Plop Boot Manager.
The Plop Boot Manager is a minimal program that allows booting various operating systems from various devices (e.g. CD, floppy, USB), with no dependency on the BIOS (it comes bundled with its own drivers for IDE CD-ROM and USB).
The game plan is to create a Plop image that boots from USB, and have the unRAID VM use that image as a boot-CD image. Following is a detailed description of my implementation of this plan.
Make a Plop Boot Manager image (on Windows)
- Download the Plop Boot Manager, and the PLPBT-createiso tool (see the download page; I used the latest version available, which was 5.0.14 at the time).
- The default behavior of Plop Boot Manager is to display a menu with all possible boot devices detected, and let the user choose which device to boot from:
I want to change this behavior, so the VM automatically and immediately boots from USB.
- Extract plpbt-5.0.14.zip, and launch the plpcfgbtGUI (under Windows\plpcfgbtGUI.exe).
- Load the plpbt.bin file, found in the same directory, set the Start mode to Hidden, mark the Countdown checkbox, set the Countdown value to 1, choose USB as Default boot (see the screenshot above for the configured settings dialog), and click the Configure plpbt.bin button. This will modify the loaded plpbt.bin file in place, in the same directory.
- Extract plpbt-createiso.zip, and copy the modified plpbt.bin file from the previous step into the extracted directory.
- From a command-line prompt, execute create-iso.bat – it will generate a file in the same directory – plpbtmycd.iso.
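For reference, the Plop package also includes a command-line version of the configuration tool (plpcfgbt), which should be able to apply the same settings non-interactively. A minimal sketch, assuming the option names documented on the Plop site (hidden start mode, 1-second countdown, USB as default boot) – verify against the documentation bundled in the zip before relying on it:

plpcfgbt stm=hidden cnt=on cntval=1 dbt=usb plpbt.bin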
Create the unRAID VM
- Launch VMware vSphere Client and log in to the ESXi host.
- Start the Create New Virtual Machine wizard, and select a Typical configuration:
- Name the VM:
- Select a datastore for the VM files (note: this will just hold the VM configuration files, RAM snapshot, and boot media image – no need for much storage):
- Set the guest OS to something generic (I used “FreeBSD (32-bit)”):
- Leave Network connections with the default settings:
- Create a small disk (e.g. 32MB, Thin Provision) for the VM boot image:
- On the Ready to Complete stage, check the Edit the virtual machine settings before completion checkbox, and hit Continue:
- Increase the RAM to 2GB:
- Click the Add… button to launch the Add Hardware wizard, and add a USB Controller:
- Choose EHCI+UHCI for controller type:
- Launch the Add Hardware wizard again, this time adding a USB Device (make sure the bootable unRAID USB flash drive is plugged in!):
- Select the USB flash drive from the list of available USB devices:
- Confirm and finalize the VM creation process.
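If you want to sanity-check the result outside the GUI, the USB controller and the attached flash drive end up as entries in the VM’s .vmx file. A quick check from an SSH session on the ESXi host – the datastore path is mine, and the unRAID.vmx file name assumes the VM is simply named “unRAID”:

grep -i usb /vmfs/volumes/Patriot-64G-SSD/unRAID/unRAID.vmx     # look for entries like usb.present = "TRUE" plus a device entry for the flash drive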
Make the unRAID VM boot from USB
- Using the Datastore Browser, browse to the directory of the VM, and upload the plpbtmycd.iso file created in a previous step:
- Edit the settings of the VM – select the CD/DVD drive 1 item, check the Connect at power on checkbox, set the Device Type to Datastore ISO File, and browse to select the plpbtmycd.iso file that was uploaded in the previous step:
- Confirm and start the VM. It should successfully boot all the way into the unRAID console, just without the data drives. At this point I checked the MAC address of the virtual NIC (by running the ifconfig command) so I could configure my router’s DHCP server to assign a static IP address to this VM, and then shut the VM down before proceeding to configure the data drives:
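For reference, the MAC address shows up on the HWaddr line of the ifconfig output; something along these lines from the unRAID console (eth0 is an assumption – it was the only NIC in my VM):

ifconfig eth0 | grep -i hwaddr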
This concludes the unRAID VM creation process, but the VM has very little use as long as the data drives are not present – which is what the next section covers.
Configuring the data drives
Let’s review the available solutions for configuring data drives for use in the unRAID VM:
1. Adding the data drives as datastore drives in the ESXi host, formatting them using VMFS, creating maximum-size virtual disks on each drive, and assigning the virtual disks to the unRAID VM.
   - This is probably the simplest method. It is natively supported in ESXi, and doesn’t involve any “hardware magic”.
   - This is also probably the worst possible solution, for any and all of the following reasons:
     - Performance hell: this configuration goes through every possible software and hardware layer for each read/write operation.
     - Possible capacity loss due to limits on the size of virtual disks in VMFS, which might be smaller than the actual capacity of the physical drive.
     - It doesn’t meet my requirement for transparent migration, as it requires reformatting all the drives to VMFS, thus losing the existing data. I don’t have 4TB of swap space lying around…
     - It doesn’t allow unRAID to manage the drives directly, including SMART parameters and various low-level sensors.
2. Using VMDirectPath I/O to pass the disks through directly to the unRAID VM, and letting unRAID manage the disks on its own.
   - This solution sounds great, and meets all the requirements!
   - But, alas, VMDirectPath I/O can only pass through supported PCI and PCIe devices, not individual HDDs…
   - So maybe I can pass through an entire SATA controller to the VM, and assign all the HDDs on that controller to the unRAID VM?
   - This would indeed have been great, if it worked, but it didn’t – more details below.
   - Future note – I still plan to go back to this method, by installing a PCIe RAID card (specifically an IBM M1015) and assigning it to the unRAID VM with passthrough. But this requires getting that RAID card…
3. Using Raw Device Mapping (RDM) to assign the physical HDDs to the unRAID VM.
   - This solution almost meets all of my requirements, with the following disadvantages:
     - It isn’t really direct physical access, so unRAID doesn’t get to manage the drives directly (no SMART parameters and various sensors).
     - It isn’t natively supported in ESXi, and requires some manual ESXi voodoo that isn’t guaranteed to keep working in future versions.
   - But, despite these shortcomings, this method is much better than option #1, and I can implement it without waiting for an expansion card – which makes it the chosen solution!
In the following sections I will give further details on my failure with solution #2, and success with solution #3.
During the attempts and experiments I had only one data HDD connected (out of the three), in case I did something stupid that invalidated the data. This is a reasonable safety net, since the array already had single-drive-failure protection thanks to the parity drive. Of course it would have been better to have a full backup, or to perform the experiments with non-production drives… (living on the edge :-p )
SATA Controller DirectPath I/O Adventures
As mentioned above, the SATA Controller passthrough alternative was the preferred solution for giving unRAID full control over the data HDDs, so this is what I tried first.
Towards that goal, I launched vSphere Client and logged in to the ESXi host. In the host management interface, the DirectPath I/O Configuration can be found under the Configuration tab, in Advanced Settings:
Clicking the Configure Passthrough… link (top-right) will bring up the Mark devices for passthrough dialog, listing all PCI and PCIe devices detected and available for passthrough:
You can see that individual HDDs are not available for passthrough, which is reasonable considering HDDs are not PCI/PCIe devices. But the ASUS motherboard installed in this server (ASUS P8Z68-V Pro, socket 1155) has two onboard SATA controllers, both of which appear in the list above:
- Intel Corporation Cougar Point 6 SATA AHCI Controller – part of the Intel Z68 Chipset, this controller includes 2 SATA 6Gb/s ports (gray), and 4 SATA 3Gb/s ports (blue).
- Marvell Technology Group Ltd. 88SE9172 – additional PCIe SATA controller that includes 2 SATA 6Gb/s ports (navy blue).
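To identify the two controllers (and their PCI addresses) from the ESXi shell rather than the GUI, something like the following over SSH should work – a sketch, not part of my original workflow:

lspci | grep -iE 'sata|marvell'     # both onboard controllers should be listed with their PCI addresses
esxcli hardware pci list            # full details per PCI device, including vendor/device IDs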
Now, remember that the ESXi host itself needs its own datastore drives, so it will be using some SATA ports; but I see no reason to install more than two drives for the datastore, so why not use the Marvell controller for ESXi, and pass the Intel controller through to unRAID? Wouldn’t that be great?
It would! But, sadly, it turns out that the Marvell controller is not supported by ESXi at all… In every configuration I tried (connecting every HDD I have to each Marvell port), ESXi simply did not detect the connected HDD – as if nothing was connected… I assume this is a compatibility issue, as the controller does not appear in the ESXi 5.1 I/O Compatibility Guide. I tried asking about this on VMware Communities, but have gotten no response so far.
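For what it’s worth, the quickest way to confirm this from the ESXi shell is to list the storage adapters ESXi actually claims and the devices it sees behind them – a sketch of the checks, not a transcript of my original troubleshooting:

esxcli storage core adapter list    # adapters ESXi has claimed (vmhba#); a missing Marvell HBA means ESXi isn't driving it
esxcli storage core device list     # every disk ESXi can see, with the adapter path it is reached through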
Another approach I tried was to work around the compatibility issue by letting the unRAID VM deal with the Marvell controller – configuring the Marvell for passthrough to the VM, and letting the VM use it directly. I know unRAID is able to deal with this controller, since it worked fine back in the Mordor 1.0 days. This is of course only a partial workaround, as it gives unRAID just two HDDs, which is not enough…
- So I configured the Marvell controller for passthrough:
- Edit the settings of the unRAID VM, and launch the Add Hardware wizard in order to add the Marvell controller as a PCI device:
- Select the device from the list:
- Confirm and finish:
This appeared to be OK, but once I booted the unRAID VM with HDDs connected to the Marvell ports, unRAID could not see any of the connected HDDs. I was unable to solve this issue (probably another symptom of the Marvell controller’s incompatibility), and decided to drop it in favor of the RDM solution, at least until I get the IBM M1015 RAID card.
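One check I would suggest for anyone hitting the same wall: see whether the passed-through controller at least shows up on the guest’s PCI bus, and whether a driver binds to it. A sketch from the unRAID console (assuming lspci is available on the unRAID image):

lspci | grep -i marvell             # does the guest see the passed-through controller at all?
dmesg | grep -i -e ahci -e sata     # did a SATA/AHCI driver attach, and did it find any ports/disks?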
Raw Device Mapping Configuration
As mentioned, this solution isn’t natively supported in ESXi. This means that vSphere Client access to the host is not sufficient, and some operations must be performed manually on the host, using direct SSH access.
This is not a tutorial on SSHing into ESXi hosts, so I’ll sum it up with this: enable Tech Support Mode on the ESXi host, and use an SSH client (e.g. PuTTY on Windows) to connect to the host.
It should be noted that I found the “RDM mapping of local SATA storage for ESXi” post useful, with some variations based on a LimeTech forum thread.
- In vSphere Client, under the host Configuration tab, in the Storage settings, in the Devices view, verify that the NAS-data-HDD is recognized:
(I have 2 datastore drives, and the 3rd device is one of the NAS data drives – all are WDC)
- In an active SSH session (logged on as root), run ls -l /dev/disks to obtain the full handle of the source HDD:
(my source HDD is t10.ATA_____WDC_WD20EARS2D00MVWB0_________________________WD2DWCAZA1318851, which I will later refer to as <HDD_ID> for brevity and generality)
- Decide on a destination for the RDM handle. For instance, I want to use the RDMs in the unRAID VM, and I want to be able to keep track of which RDM belongs to which physical HDD, so I want to create the RDMs in the unRAID VM directory (e.g. /vmfs/volumes/Patriot-64G-SSD/unRAID), and to name them following the scheme rdm_HDD-ID.vmdk.
- Create the RDM using the vmkfstools program: vmkfstools -a lsilogic -z /vmfs/devices/disks/<HDD_ID> /vmfs/volumes/<RDM-destination>.vmdk:
- Back in vSphere Client, edit the settings on the unRAID VM, and launch the Add Hardware wizard to add a Hard Disk:
- Select Use an existing virtual disk:
- Browse to the destination where the RDM was created and choose it:
- Select a Virtual Device Node of SCSI (#:0), where # is the number of a SCSI controller that is not yet in use by the VM (usually SCSI (1:0)) – this is very important! Also set the disk mode to Independent (Persistent):
- Finish the wizard:
- Note that in addition to the extra Hard Disk, the Hardware inventory of the VM also shows a New SCSI Controller (because I used a yet-unused SCSI controller in a previous step, remember?):
See that it is configured as LSI Logic Parallel? It is very important to change the type of the SCSI Controller to LSI Logic SAS:
I didn’t do that at first, and unRAID did not recognize the RDM’ed HDD as a result… (no harm done, though)
- Boot unRAID, and verify the HDD is recognized (e.g. by running ls -l /dev/disk/by-id/):
Check that it is available in the list of devices in the unRAID management web interface:
- Now that the process is validated using a single HDD – shut down, connect the other HDDs, and repeat:
- The vmkfstools commands:
# vmkfstools -a lsilogic -z /vmfs/devices/disks/t10.ATA_____WDC_WD20EARS2D00MVWB0_________________________WD2DWCAZA1318851 /vmfs/volumes/Patriot-64G-SSD/unRAID/rdm_WD2DWCAZA1318851.vmdk
# vmkfstools -a lsilogic -z /vmfs/devices/disks/t10.ATA_____WDC_WD20EARX2D00PASB0_________________________WD2DWCAZAA396700 /vmfs/volumes/Patriot-64G-SSD/unRAID/rdm_WD2DWCAZAA396700.vmdk
# vmkfstools -a lsilogic -z /vmfs/devices/disks/t10.ATA_____WDC_WD20EARX2D00PASB0_________________________WD2DWCAZAA463853 /vmfs/volumes/Patriot-64G-SSD/unRAID/rdm_WD2DWCAZAA463853.vmdk
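Since the three commands differ only in the disk ID, they can also be wrapped in a small loop on the ESXi host – a sketch using my disk IDs and datastore path; the file name simply reuses everything after the last underscore of the ID (the WD serial), matching the rdm_HDD-ID.vmdk scheme above:

DEST=/vmfs/volumes/Patriot-64G-SSD/unRAID
for DISK in \
  t10.ATA_____WDC_WD20EARS2D00MVWB0_________________________WD2DWCAZA1318851 \
  t10.ATA_____WDC_WD20EARX2D00PASB0_________________________WD2DWCAZAA396700 \
  t10.ATA_____WDC_WD20EARX2D00PASB0_________________________WD2DWCAZAA463853
do
  # create the RDM pointer file next to the unRAID VM, named after the drive serial
  vmkfstools -a lsilogic -z "/vmfs/devices/disks/${DISK}" "${DEST}/rdm_${DISK##*_}.vmdk"
done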
And this concludes another pretty long post, all about setting up a single VM!
Go up to the Mordor project page for more Mordor adventures. Or maybe jump straight to the post describing how I switched from RDM-based NAS drives to a RAID card as a passthrough device.
April 1, 2015
Excellent documentation and very helpful. I will be buying a m1015. Thank you.
April 4, 2015
Although I have an IBM M1015 card on the way, I couldn’t wait and had to test my onboard controller on my SuperMicro X7SBE. So I followed your steps all the way, except that I passed through my onboard controller and added it to the VM as a PCI device. The unRAID VM boots to ‘loading /bzroot…….’ and seems to hang forever. I aborted and rolled back to native unRAID until my card arrives. Thanks!
June 3, 2015
That’s interesting! I think I had a similar experience when trying to pass through “unsupported” stuff (like random onboard controllers).
Did your M1015 arrive? Is it working for you?