Key requirements are as follows:
- Need separate storage clusters for SSD, Hybrid HDD, and HDD
- Storage clusters should be on separate subnets
- Storage should be distributed with high availability and high scalability
For this scenario, each Proxmox node must have at least four network interface cards: three to connect to the three storage cluster subnets and one to connect to the virtual machine subnet. In this example, six virtual machines have access to three storage tiers with different performance characteristics. The following table shows the three Ceph clusters and their performance categories:
| Subnet | Network description |
|---|---|
| 192.168.10.0:6789 | Ceph cluster #1 with SSDs for all OSDs. This subnet is connected to the Proxmox nodes through eth1. This storage is used by VM6. |
| 192.168.20.0:6790 | Ceph cluster #2 with hybrid HDDs for all OSDs. This subnet is connected to the Proxmox nodes through eth2. This storage is used by VM5. |
| 192.168.30.0:6791 | Ceph cluster #3 with HDDs for all OSDs. This subnet is connected to the Proxmox nodes through eth3. This storage is used by VM1, VM2, VM3, and VM4. |
| 10.160.10.0 | This is the main subnet for all virtual machines. |
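The four-NIC layout described above could be sketched in a Proxmox node's `/etc/network/interfaces` as follows. This is a minimal, illustrative sketch: the host addresses (`.11` on each subnet) and the bridge name `vmbr0` are assumptions, not values taken from the scenario.

```
auto lo
iface lo inet loopback

# eth0: main virtual machine subnet (10.160.10.0), bridged for VM traffic
auto vmbr0
iface vmbr0 inet static
    address 10.160.10.11
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# eth1: Ceph cluster #1 subnet (SSD tier)
auto eth1
iface eth1 inet static
    address 192.168.10.11
    netmask 255.255.255.0

# eth2: Ceph cluster #2 subnet (hybrid HDD tier)
auto eth2
iface eth2 inet static
    address 192.168.20.11
    netmask 255.255.255.0

# eth3: Ceph cluster #3 subnet (HDD tier)
auto eth3
iface eth3 inet static
    address 192.168.30.11
    netmask 255.255.255.0
```

Each node would carry an address in all four subnets, so every node can reach all three storage tiers while VM traffic stays on the bridge.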
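On the Proxmox side, the three tiers could then be attached as three RBD storage entries in `/etc/pve/storage.cfg`, one per cluster. The storage IDs, monitor host addresses, and pool names below are hypothetical placeholders; the monitor ports follow the table above.

```
# /etc/pve/storage.cfg (fragment) -- IDs, monitor addresses, and pools are assumptions
rbd: ceph-ssd
    monhost 192.168.10.1:6789
    pool rbd
    content images
    username admin

rbd: ceph-hybrid
    monhost 192.168.20.1:6790
    pool rbd
    content images
    username admin

rbd: ceph-hdd
    monhost 192.168.30.1:6791
    pool rbd
    content images
    username admin
```

With three separate storage IDs, a VM's disk can simply be placed on the tier matching its SLA, e.g. VM6 on `ceph-ssd` and VM1 through VM4 on `ceph-hdd`.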
A multi-tiered infrastructure like this is typical in data centers that serve clients at different SLA levels, each with different storage performance requirements: