On-Prem DevOps Infrastructure Project (2026 Edition)

In this project, we will design and implement a complete on-premises IT infrastructure within a home lab environment. The objective is to simulate a real-world enterprise setup that includes virtualization, server administration, automation, monitoring, patch management, and performance testing.

This project will cover the following key areas:

  • Installation and management of Windows and Linux servers

  • Deployment of Linux servers on VMware ESXi hosts

  • Centralized management using VMware vCenter

  • Server provisioning and configuration using Ansible

  • Automation of administrative tasks using Ansible and Puppet

  • Monitoring of Linux and Windows servers

  • OS patching and upgrade management

  • Stress testing and performance analysis

The purpose of this project is to build and maintain hands-on experience across VMware technologies, Linux administration, and modern DevOps practices within a controlled lab environment.


Host System Configuration

The entire lab environment is built on the following desktop system:

Device Name: win01
Processor: AMD FX™-8350 Eight-Core Processor @ 4.00 GHz
Installed RAM: 32.0 GB (31.5 GB usable)
Operating System: Windows 10 Pro
Virtualization Platform: VMware® Workstation 16 Pro

VMware Workstation is used to create and manage nested virtualization environments, including VMware ESXi hosts and other virtual machines.


Virtual Machines in the Lab

The following virtual machines will be created on VMware® Workstation 16 Pro:

1. win01

    • Hardware Details: CPU 1, RAM 4GB, Disk 60GB, 2 NIC
    • OS Details: Windows Server 2019 Datacenter (Desktop Experience)

2. esxi01

    • Hardware Details: CPU 2, RAM 14GB, Disk 1000GB, 2 NIC
    • OS Details: VMware ESXi 7.0.3

3. esxi02

    • Hardware Details: CPU 2, RAM 8GB, Disk 1000GB, 2 NIC
    • OS Details: VMware ESXi 7.0.3

4. kub01

    • Hardware Details: CPU 2, RAM 4GB, Disk 40GB, 2 NIC
    • OS Details: Ubuntu 18.04 LTS

5. kub02

    • Hardware Details: CPU 1, RAM 2GB, Disk 40GB, 2 NIC
    • OS Details: Ubuntu 18.04 LTS
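As a sanity check that this plan fits the host's 8 cores and 32 GB of RAM, the allocations above can be tallied; a quick illustrative sketch:

```python
# Planned Workstation VMs: (vCPUs, RAM in GB, disk in GB), per the table above.
vms = {
    "win01":  (1, 4, 60),
    "esxi01": (2, 14, 1000),
    "esxi02": (2, 8, 1000),
    "kub01":  (2, 4, 40),
    "kub02":  (1, 2, 40),
}

total_cpu = sum(v[0] for v in vms.values())
total_ram = sum(v[1] for v in vms.values())
total_disk = sum(v[2] for v in vms.values())
print(total_cpu, total_ram, total_disk)  # 8 vCPUs, 32 GB RAM, 2140 GB disk
```

The totals exactly match the host's 8 cores and 32 GB, so running every VM at full allocation simultaneously leaves nothing for the Windows 10 host itself; some overcommit or staggered power-on is implied.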

We are going to create two networks:

  • 172.16.0.0/16 for internal communication and optimal speed.
  • 192.168.2.0/24 for external communication, such as downloading software from the internet.
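How the two subnets partition addresses can be sketched with Python's standard `ipaddress` module (the `network_for` helper here is illustrative, not part of the lab tooling):

```python
import ipaddress

# The two lab networks described above.
internal = ipaddress.ip_network("172.16.0.0/16")
external = ipaddress.ip_network("192.168.2.0/24")

def network_for(ip: str) -> str:
    """Return which lab network an address belongs to."""
    addr = ipaddress.ip_address(ip)
    if addr in internal:
        return "internal"
    if addr in external:
        return "external"
    return "unknown"

print(network_for("172.16.1.200"))   # internal
print(network_for("192.168.2.205"))  # external
```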

Task 1. Installation and configuration of Windows Server 2019 on a VMware Workstation VM.

1. We initiated the installation of Windows Server 2019 Datacenter Desktop Edition on win01.
2. We set the hostname to "win01."
3. We assigned the following IP addresses:
  •   Internal IP address: 172.16.1.200
  •   External IP address: 192.168.2.200
4. The firewall was disabled.
5. We installed Active Directory Domain Services and created the domain "darole.org."

The Windows Server 2019 installation and configuration on win01 have been completed as outlined above.

Task 2. Add A and MX records to DNS hosted on win01.

DNS is essential for VMware vCenter to function correctly. Since two networks are in use, we will configure DNS so that name resolution prefers the internal network for better performance. We will also add MX records to support sending and receiving email.

A Record Details:

For Container Orchestration:
  • 172.16.1.230 kub01.darole.org
  • 172.16.1.231 kub02.darole.org
  • 172.16.1.240 dock01.darole.org
For VMs:
  • 172.16.1.211 lamp01.darole.org
  • 172.16.1.212 zap01.darole.org
  • 172.16.1.213 pup01.darole.org
  • 172.16.1.221 web01.darole.org
  • 172.16.1.222 db01.darole.org
  • 172.16.1.223 ans01.darole.org
  • 172.16.1.252 jen01.darole.org
  • 172.16.1.253 son01.darole.org
  • 172.16.1.241 gra01.darole.org
For Websites:
  • 172.16.1.215 ninom.darole.org
  • 172.16.1.216 online-education.darole.org
  • 172.16.1.217 organic-farm.darole.org
  • 172.16.1.225 jobsearch.darole.org
  • 172.16.1.218 travel.darole.org
  • 172.16.1.219 jewellery.darole.org
  • 172.16.1.220 carvilla.darole.org
For VMware:
  • 172.16.1.205 esxi01.darole.org
  • 172.16.1.206 esxi02.darole.org
  • 172.16.1.207 vcenter01.darole.org
  • 172.16.1.200 win01.darole.org

MX Record Details:
  • pup01.darole.org (A record 172.16.1.213) as the mail exchanger for darole.org
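Before entering all of these records by hand, the plan can be sanity-checked for duplicate addresses; an illustrative Python sketch built from the table above:

```python
from collections import Counter

# A-record plan from Task 2: hostname -> internal IP.
a_records = {
    # container orchestration
    "kub01.darole.org": "172.16.1.230", "kub02.darole.org": "172.16.1.231",
    "dock01.darole.org": "172.16.1.240",
    # VMs
    "lamp01.darole.org": "172.16.1.211", "zap01.darole.org": "172.16.1.212",
    "pup01.darole.org": "172.16.1.213", "web01.darole.org": "172.16.1.221",
    "db01.darole.org": "172.16.1.222", "ans01.darole.org": "172.16.1.223",
    "jen01.darole.org": "172.16.1.252", "son01.darole.org": "172.16.1.253",
    "gra01.darole.org": "172.16.1.241",
    # websites
    "ninom.darole.org": "172.16.1.215", "online-education.darole.org": "172.16.1.216",
    "organic-farm.darole.org": "172.16.1.217", "jobsearch.darole.org": "172.16.1.225",
    "travel.darole.org": "172.16.1.218", "jewellery.darole.org": "172.16.1.219",
    "carvilla.darole.org": "172.16.1.220",
    # VMware
    "esxi01.darole.org": "172.16.1.205", "esxi02.darole.org": "172.16.1.206",
    "vcenter01.darole.org": "172.16.1.207", "win01.darole.org": "172.16.1.200",
}

# Any IP assigned to more than one hostname would be a conflict.
dupes = [ip for ip, n in Counter(a_records.values()).items() if n > 1]
print(dupes)  # [] — every host in the plan has a unique address
```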

Task 3: ESXi Host Deployment and Configuration on VMware WorkStation VM.

VMware ESXi Free is a bare-metal, type-1 hypervisor that allows you to run multiple virtual machines (VMs) on a single physical server or host. It is a basic virtualization platform that is ideal for testing, development, and smaller-scale deployments.

Here are some of the key features and benefits of ESXi Free:

  • No licensing cost: ESXi Free is available at no cost, making it an attractive option for organizations with budget constraints.
  • Easy to use: ESXi Free has a web-based management interface that makes it easy to create, configure, and manage VMs.
  • Scalable: the free license supports VMs with up to 8 virtual CPUs each.
  • Reliable: ESXi Free is a stable and reliable hypervisor that is used by millions of organizations around the world.

However, ESXi Free also has some limitations:

  • Limited features: ESXi Free lacks some advanced features like vMotion (live migration), High Availability (HA), and Distributed Resource Scheduler (DRS).
  • No official support: VMware does not provide official support for ESXi Free. Users are expected to rely on community forums, documentation, and self-help resources for troubleshooting and support.

A. Installed and Configured ESXi Host VM (VMware Workstation VM - esxi01):

  • CPU: 2 Cores with Virtualization enabled.
  • RAM: 14 GB.
  • Disks: 1000 GB.
  • Internal IP: 172.16.1.205.
  • Host Name: esxi01.darole.org.
  • OS: VMware ESXi 7.0.3
B. Installed and Configured ESXi Host VM (VMware Workstation VM - esxi02):
  • CPU: 2 Cores with Virtualization enabled.
  • RAM: 8 GB.
  • Disks: 1000 GB.
  • Internal IP: 172.16.1.206.
  • Host Name: esxi02.darole.org.
  • OS: VMware ESXi 7.0.3.
After the installation is complete, log in to win01 and check esxi01 and esxi02 through the web console.

Web Console Login Details:
User Name: root
Password: Pass@1234
Ensure that you have access to both esxi01 and esxi02 via the web console using the provided login credentials. This step is essential for further configuration and management of your ESXi hosts.

Task 4. Deploying the vCenter Server Appliance on esxi01.

Before proceeding with the vCenter Server Appliance deployment, ensure that you meet the hardware requirements as specified for a tiny environment:
  • Number of vCPUs: 2
  • Memory: 12 GB
  • Default Storage Size: 1000 GB
Additionally, make sure that DNS resolution is functioning correctly by running the following command on win01:
# nslookup vcenter01.darole.org
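The same DNS check can be scripted; a minimal Python sketch (the `resolves` helper is ours, not part of any VMware tooling):

```python
import socket

def resolves(name):
    """Return the first IPv4 address for name, or None if lookup fails."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# On win01 this should return 172.16.1.207 once the A record is in place.
print(resolves("vcenter01.darole.org"))
```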

Now, follow these steps to deploy the vCenter Server Appliance:
A. Add the vCenter Server Appliance installer ISO (VMware-VCSA-all, not the ESXi VMvisor image) to win01.
B. Navigate to the `vcsa-ui-installer\win32` directory on the mounted disk and run `installer.exe`.
C. There are two stages in the deployment process:
   - Deploy vCenter Server.
   - Set up vCenter Server.
D. After completing the installation, log in to the web console for VMware Appliance Management at:
   - URL: https://vcenter01.darole.org:5480
   - User Name: administrator@vsphere.local
   - Password: Pass@1234
E. In the vCenter Server web console, navigate to:
   - URL: https://vcenter01.darole.org
   - User Name: administrator@vsphere.local
   - Password: Pass@1234
F. Create a datacenter named "darole-dc" and a cluster named "my-cluster."
G. Add the ESXi hosts esxi01 and esxi02 to the cluster.
H. To start vCenter, log in to the console of esxi01 and start the VM.
I. To stop vCenter, log in to the appliance configuration and choose the shutdown option.
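Whether the appliance management UI on port 5480 is answering can be checked with a small helper; a minimal sketch (the `port_open` function is ours, the host and port come from the steps above):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After deployment, the appliance management UI should answer here:
# port_open("vcenter01.darole.org", 5480)
```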

Important Note: Use the internal network (172.16.0.0/16) for the vCenter installation. Using the external network (192.168.2.0/24) may lead to failures due to network issues.

For more detailed instructions, you can refer to the provided link: [VMware vSphere 7 Installation Setup](https://www.nakivo.com/blog/vmware-vsphere-7-installation-setup/)

Follow these steps carefully to ensure a successful deployment of the vCenter Server Appliance on esxi01.

Task 5. Virtual networking setup on VCenter01.

Now that the vCenter installation is complete with a single network card on the internal network (172.16.0.0/16), we will proceed with configuring virtual networking. This involves adding extra network cards to both esxi01 and esxi02 and configuring them for internal and external communication. Follow these steps:

For esxi01:

1. Shut down esxi01.
2. Add 3 extra network cards to esxi01:
  • Ethernet 2: Host-Only (for internal communication)
  • Ethernet 3: Host-Only (for internal communication)
  • Ethernet 4: Bridged (for external communication)
3. Start esxi01.
4. Start vCenter.
5. Once vCenter is running, select esxi01 in the inventory:
  • Navigate to "Configure" -> "Networking."
6. Add 2 NICs to the existing internal network (vSwitch0).
7. Create a new switch for the external network and add the remaining bridged NIC.
8. Internal Network IP: 172.16.1.205 (teaming of 3 NICs)
   External Network IP: 192.168.2.205 (only 1 NIC)

For esxi02:

1. Shut down esxi02.
2. Add 3 extra network cards to esxi02:
  • Ethernet 2: Host-Only (for internal communication)
  • Ethernet 3: Host-Only (for internal communication)
  • Ethernet 4: Bridged (for external communication)
3. Start esxi02.
4. Start vCenter.
5. Once vCenter is running, select esxi02 in the inventory:
  • Navigate to "Configure" -> "Networking."
6. Add 2 NICs to the existing internal network (vSwitch0).
7. Create a new switch for the external network and add the remaining bridged NIC.
8. Internal Network IP: 172.16.1.206 (teaming of 3 NICs)
   External Network IP: 192.168.2.206 (only 1 NIC)

After the network configuration is complete, verify that both ESXi hosts are reachable from the external network:

  • http://192.168.2.205 for esxi01
  • http://192.168.2.206 for esxi02

Ensure that the networking changes have been applied correctly and that both hosts are accessible externally as specified.

Task 6. Create ISO store and VM templates.

Datastore in vCenter:
A datastore in vCenter is a centralized storage location where virtual machine files are stored. It can be a SAN, NAS, or local storage. Datastores hold virtual machine files, ISO images, templates, and snapshots.

Template in vCenter:
A template in vCenter is a master copy of a virtual machine that can be used to create multiple identical VMs quickly. They help maintain consistency and reduce manual setup efforts when creating new VMs.

Now that we have a better understanding of datastores and templates, let's proceed.

1. Uploading ISO Images to ESXi02
As our vCenter is running on ESXi01 and is experiencing high loads, we'll upload ISO images to ESXi02 for better resource distribution. We'll place the ISO files in the /iso folder within the ESXi02 datastore.

The ISO images we're uploading include:
  • Ubuntu 20.04
  • Rocky 8.7
  • Red Hat 8.5 
2. Installing Rocky Linux 8.7. We'll create a VM with the following specifications:
  • Hostname: rocky
  • CPU: 1
  • Memory: 2 GB
  • Disk: 16 GB
  • Internal IP: 172.16.1.228
  • External IP: 192.168.2.228
  • User: root
  • Password: redhat
3. Installing Red Hat 8.5. Another VM with these specifications:
  • Hostname: redhat
  • CPU: 1
  • Memory: 2 GB
  • Disk: 16 GB
  • Internal IP: 172.16.1.226
  • External IP: 192.168.2.226
  • User: root
  • Password: redhat
Note: Upgrading RHEL 7 to RHEL 8 requires a minimum of 2 GB of RAM.

4. Installing Ubuntu 20.04. And a third VM with these specifications:
  • Hostname: ubuntu
  • CPU: 1
  • Memory: 2 GB
  • Disk: 16 GB
  • Internal IP: 172.16.1.227
  • External IP: 192.168.2.227
  • User: vallabh
  • Password: redhat

Converting VMs into Templates
Once the installation and configuration of these VMs are complete, we'll convert them into VM templates. This process essentially captures the VM's current state, allowing us to deploy new VMs based on these templates with ease.
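Every VM deployed from these templates later in the project repeats the same handful of settings. A hypothetical helper (the `clone_spec` function and its field names are ours, not vCenter's API) that captures the lab's paired 172.16.1.x / 192.168.2.x addressing:

```python
# Illustrative sketch: build per-clone customization data from a template
# name, hostname, and the host's last octet, following the addressing
# convention used throughout this lab.
def clone_spec(template, hostname, last_octet):
    return {
        "template": template,
        "hostname": hostname,
        "internal_ip": f"172.16.1.{last_octet}",
        "external_ip": f"192.168.2.{last_octet}",
        "domain": "darole.org",
    }

spec = clone_spec("rocky", "web01", 221)
print(spec["internal_ip"])  # 172.16.1.221
```

Keeping this data in one place makes it easy to feed the same values into vCenter clone customization, DNS records, and later Ansible inventories without re-typing them.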




