Virtualisation with ESXi

27 March 2021 · 7 min read · Homelab

Configure and set up ESXi and vSphere

I'm a data hoarder; that's nothing new to people who know me. This year, I took that passion a step further and streamlined my setup. The following articles form a multi-part series covering the various areas involved.

Choice of server

I used a Mac mini as my media server and a Synology DS1815+ as my media manager previously. The goal was to consolidate both systems into a mighty server using virtualisation to host each service.

At first, I wanted to build the server myself using an Intertech 2U 20255 chassis. The components would have cost me over 1200.- CHF for a decent system that could host about 4-5 demanding virtual machines.

Taking that as a baseline, I scoured Ricardo and Tutti for some Dell PowerEdge or HP ProLiant systems, as they appeared to have a good price/performance ratio overall.

I ultimately settled on a baseline PowerEdge R820 for about 900 bucks and upgraded it through AliExpress for an additional 500.- CHF, ending up with the following specs:

  • 4x Intel® Xeon® Processor E5-4650 (8 Cores up to 3.30 GHz)
  • 768 GB DDR3 1333 MHz ECC Memory (32 GB modules)
  • 4x 300 GB SAS and 4x 900 GB SAS drives
  • 1 TB M.2 Kingston A2000 SSD with PCIe adapter card
  • PERC H710 RAID card
  • 1 Gbps integrated quad-port NIC
  • 1 Gbps I350-T4V2 quad-port PCIe NIC
  • 10 Gbps X540 dual-port PCIe NIC
  • 2x Platinum+ 1100 Watt power supplies
  • MSI GeForce GTX 1050 Ti OC
  • Internal Dual SD Module (2x Sandisk Ultra U1 16GB)
  • iDRAC Enterprise

Not too bad for a total of around 1’500 bucks 🤪
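For the curious, the headline numbers work out like this. A quick back-of-the-envelope script (my own arithmetic, not anything from Dell; the drive total is raw capacity before any RAID):

```python
# Totals for the R820 spec list above.
cpus, cores_per_cpu = 4, 8
ram_modules, module_gb = 24, 32  # 768 GB total in 32 GB modules

total_cores = cpus * cores_per_cpu
total_threads = total_cores * 2             # E5-4650 has Hyper-Threading
total_ram_gb = ram_modules * module_gb
raw_storage_gb = 4 * 300 + 4 * 900          # the eight 2.5-inch drives, raw

print(total_cores, total_threads, total_ram_gb, raw_storage_gb)
# 32 cores, 64 threads, 768 GB RAM, 4800 GB raw spinning storage
```

Thirty-two cores and three-quarters of a terabyte of RAM for the price of a mid-range gaming PC.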

Discovering the basics

I had no experience with server hardware before, so even getting access to the management interface seemed challenging. The server turned on when I pressed the power button, which was a good sign, yes? After getting hold of a VGA cable, mouse, keyboard and monitor, I was ready to dig in.

The server took some time to boot up, finally showing a colourful VMware ESXi “ready” screen. “Nice, whatever this is”, were my first thoughts. There was a yellow-black screen with an IP shown; let’s open it! Ah! That’s a virtualisation UI, excellent.

The first hours of experimenting led me to some insights:

  • The booting sequence takes some time, as the server needs to “rediscover” all the hardware and regenerate its “Inventory”.
  • On boot, I could start the “Dell Lifecycle Controller” which is “an advanced embedded systems management technology… blah” – in short, it’s an advanced UEFI/BIOS interface that allows configuring or “deployment” of the server. We’ll get back to that one later.
  • VMware ESXi is an excellent virtualisation tool to manage virtual machines. The web interface looked decent, and naturally, I broke it after some time 😅
  • There is a dedicated iDRAC (Integrated Dell Remote Access Controller), which provides access to the server’s admin interface through the browser. The IP address is static and was visible after the boot. There was even a Virtual Console so I could ditch the extra monitor and continue from my laptop.

Resolving some issues

On boot, the server showed two issues on its front display. The first indicated that the second power cable wasn't connected; easy to solve.

The second was trickier. A message saying “The storage BP1 SAS A cable is not connected” suggested that the RAID controller wasn't correctly connected to the hard drives. But the cables were plugged in, and the drives even showed up. I first tried disconnecting both SAS cables and reconnecting them, with no success. I finally solved it by swapping the order of the cables (wtf).

Always check that all connections are firm and the cables are in the proper order 🤓.

Updating the firmware

As the iDRAC provided access to almost every function of the server, I booted into the Dell Lifecycle Controller and upgraded the firmware for its hardware and components. There were some hiccups, and I had to download a few of the packages manually and install them by hand. The web interface helped, as it provided a simple upload form with good error feedback.

Setting up the storage

I used the iDRAC to recreate the RAID setup. I chose RAID 6, as double parity was more important to me than total space. I created two RAID groups: “datastore_300” (its name in ESXi) with the four 300 GB SAS drives and “datastore_900” with the four 900 GB SAS drives. The 1 TB M.2 SSD became a third datastore, “datastore_nvme”.

The plan: use “datastore_300” for ISO storage and as scratch space when moving VMs around, “datastore_900” as an archive/backup drive, and “datastore_nvme” as the primary home for the VMs.
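As a sanity check on the double-parity trade-off: RAID 6 dedicates two drives' worth of capacity to parity, so with only four drives per group, half the raw space goes to redundancy. A minimal sketch (the drive sizes are from the arrays above; the helper function is my own, not a Dell or VMware tool):

```python
def raid6_usable(drive_count: int, drive_gb: int) -> int:
    """Usable capacity of a RAID 6 array: two drives' worth is lost to parity."""
    if drive_count < 4:
        raise ValueError("RAID 6 needs at least four drives")
    return (drive_count - 2) * drive_gb

# The two arrays behind the PERC H710:
print(raid6_usable(4, 300))  # datastore_300 -> 600 GB usable
print(raid6_usable(4, 900))  # datastore_900 -> 1800 GB usable
```

Either array survives two simultaneous drive failures, which is exactly the point.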

Installing the Host OS

I looked at other virtualisation software such as Proxmox VE, Hyper-V or Citrix Hypervisor, amongst many others. Though they all seemed fine, the combination of VMware ESXi and vSphere seemed pretty good to me from what I read. Plus, I also got a free enterprise license while at school 🥳.

It took me a while to figure out where the existing ESXi instance was installed. Dell provides a Dual SD Module on which you can install the host system. Of course, you could also use the SAS drives, but ¯\_(ツ)_/¯, the SD cards seemed good enough. I set up the system to mirror both SD cards; that way, recovering from a failure would be easy, and I also back up the contents regularly.

First, I installed a plain VMware ESXi image, which showed some irregularities: missing drivers and weird behaviour. I later discovered that I needed the Dell EMC Customized Image of VMware ESXi. Once that was installed, everything seemed fine.

Virtualisation using ESXi

The default installation already served me pretty well. I tinkered a bit with the networking defaults and promptly set up my datastores afterwards. Everything seemed fine. Creating VMs worked, but I wasn't able to move VMs around freely, which was odd.

I discovered that I needed VMware vSphere (more precisely, the vCenter Server Appliance) for “advanced VM management”. The catch: it has to be installed as a separate VM on the ESXi host, and it then takes over managing that very host. Weird, right? 🤪

Well, I downloaded vSphere, got some licenses via my school account and set it up. The installation procedure is pretty straightforward: vSphere creates a separate VM on the ESXi host that provides an improved administrative interface. Once the setup completed, it took control of the host, and I was able to move VMs around. vSphere even provided additional functionality. However, I didn't understand its inner workings and broke the installation several times. For now, I settled on using it only for creating, editing and moving VMs.

So what’s next

Now I have my Dell PowerEdge R820 set up with the latest firmware, ESXi as the host OS, vSphere as its managing director, and proper storage, with all significant issues resolved. My virtualisation beast was ready, and I moved on to the next step: setting up the storage monster to hold all the data. We'll cover this in the next part.

‘Till next time!