When designing a Proxmox VE cluster, one of the most critical decisions is whether to use shared storage or local storage for your virtual machines (VMs) and containers.
Each model offers unique advantages — and tradeoffs — depending on your goals around performance, high availability (HA), scalability, and cost.

In this article, we’ll walk through both approaches — and clarify that true HA requires cluster-aware storage like Ceph — not just any shared storage.


What is Shared Storage?

Shared storage provides a centralized storage platform accessible by all nodes in the Proxmox cluster.
Common shared storage technologies include:

  • NFS (Network File System)

  • iSCSI SANs

  • Fibre Channel SANs

  • Distributed storage systems like Ceph or GlusterFS

With shared storage, nodes can share access to VM disk images, enabling migration and simplified centralized management.
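As a sketch, shared storage such as an NFS export can be attached at the datacenter level with a single command and becomes visible to every node; the server address, export path, and storage ID below are placeholder values:

```shell
# Attach an NFS export as cluster-wide shared storage (run once on any node).
# "nfs-vmstore", 192.0.2.10 and /export/proxmox are example values.
pvesm add nfs nfs-vmstore \
    --server 192.0.2.10 \
    --export /export/proxmox \
    --content images,rootdir

# Verify the storage is visible and active
pvesm status
```

Because the definition lives in the cluster-wide configuration, every node mounts the same export automatically.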


What is Local Storage?

Local storage means that each Proxmox node stores its own VM disks on its own directly attached drives, like:

  • LVM volumes

  • ZFS pools

  • RAID arrays

Local storage is not shared between nodes, and VMs are tied to their originating server unless migrated.
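For illustration, a local ZFS pool can be created and registered as node-specific VM storage; the disk devices, pool name, and node name below are examples:

```shell
# Create a mirrored ZFS pool on two local disks (device names are examples)
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

# Register it with Proxmox as VM disk storage; because it is local,
# restrict it to this node so the cluster does not expect it elsewhere
pvesm add zfspool local-zfs-tank \
    --pool tank \
    --content images,rootdir \
    --nodes pve1
```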


Benefits of Shared Storage

1. Seamless Live Migration

Shared storage allows VMs to move between nodes without transferring their disks, enabling fast, low-downtime migrations.
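With shared storage, a live migration is a single command, since only RAM and device state need to cross the network (the VMID and node name are examples):

```shell
# Live-migrate running VM 100 to node "pve2".
# Disks stay in place on the shared storage; only memory state is transferred.
qm migrate 100 pve2 --online
```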

2. Supports High Availability (only with Ceph)

Important:
In Proxmox VE, true automatic HA with disk access requires Ceph or another cluster-aware storage backend.

Traditional shared storage like a single NFS server or iSCSI SAN supports live migration and can even be used with the HA Manager, but the storage itself remains a single point of failure: if the NFS server or SAN head goes down, every VM on it stops, so it cannot deliver fully redundant automatic HA on its own.
Ceph is tightly integrated with Proxmox’s HA Manager, allowing:

  • Automatic VM failover if a node goes offline

  • Redundant, self-healing disk storage

  • Minimal downtime during failures
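Once a VM's disks live on Ceph, placing it under HA management is a one-liner (the VMID and restart limit are examples):

```shell
# Put VM 100 under HA management: if its node fails, the HA Manager
# restarts it on another node, which can do so because the Ceph-backed
# disks are reachable from anywhere in the cluster.
ha-manager add vm:100 --state started --max_restart 2

# Check cluster-wide HA status
ha-manager status
```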

3. Centralized Management

Snapshots, replication, and backup policies can be centralized across the cluster, streamlining administration.

4. Cluster Scalability

Adding new nodes becomes easier, since they can immediately access the same storage pool without moving VM disks.
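As a sketch, joining a node to an existing cluster takes one command, after which all datacenter-level storage definitions apply to it automatically (the IP address is an example):

```shell
# On the new node, join the cluster by pointing at any existing member
pvecm add 192.0.2.11

# Shared storage defined at the datacenter level (NFS, Ceph pools, etc.)
# is immediately usable from the new node:
pvesm status
```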


Tradeoffs of Shared Storage

1. Higher Cost

Building reliable shared storage (especially Ceph) demands significant investment in:

  • Servers

  • Enterprise disks

  • High-speed networking (10/25/40/100Gbps)

2. Greater Complexity

Shared storage (particularly Ceph) has a steep learning curve and demands careful design for fault tolerance.

3. Potential Performance Bottlenecks

Networked storage can introduce latency, especially if backend networking isn’t fast or reliable enough.

4. Storage Redundancy is Mandatory

Without proper redundancy (triple replication, erasure coding, dual controllers, etc.), storage failures can affect the whole cluster.
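With Ceph, the replication level is set per pool; an illustrative sketch (the pool name is an example):

```shell
# Keep three replicas of every object in "vm-pool", and refuse writes
# if fewer than two replicas are available (Ceph's common 3/2 policy)
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2

# Check overall cluster health and data redundancy
ceph status
```

Note that triple replication means usable capacity is one third of raw capacity, which is part of the cost discussed above.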


Benefits of Local Storage

1. Maximum Performance

Local disks, especially SAS SSDs, offer the highest performance for VMs:

  • Extremely high IOPS

  • Predictable latency even under heavy load

  • Higher write endurance than consumer-grade NVMe drives

Recommendation:
Enterprise SAS SSDs are ideal for Proxmox clusters using local storage, offering the best combination of speed, endurance, and reliability compared to SATA or even many consumer NVMe drives.
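To compare candidate drives before committing, a quick fio run gives a rough 4k random-write picture; the device path is an example and the test is destructive on raw devices:

```shell
# Rough 4k random-write benchmark. WARNING: writing to a raw device
# destroys its data; /dev/sdb here is an example test disk, not a VM disk.
fio --name=randwrite --filename=/dev/sdb \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based --group_reporting
```

Sustained (not just burst) IOPS and latency under load are what distinguish enterprise SAS SSDs from consumer drives.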

2. Simplicity

No need to build, configure, or troubleshoot external storage systems.
Each node handles its own storage locally.

3. Lower Cost

You can build a powerful virtualization host using affordable local storage without needing SAN appliances or extra networking.

4. Fault Isolation

Storage issues (disk failures, filesystem corruption) are isolated to the specific node, not the whole cluster.


Tradeoffs of Local Storage

1. Limited Live Migration

Without shared storage, moving a VM between nodes means copying its disk images over the network, so migrations take far longer, consume significant bandwidth, and, for offline migrations, keep the VM down for the entire copy.
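Proxmox can still migrate such a VM by streaming its local disks to the target node; this keeps the guest running but can take a long time on large disks. A sketch (VMID and node name are examples):

```shell
# Migrate VM 100 to node "pve2", copying its local disks along with it.
# --online keeps the guest running while the disks are streamed over
# the network, but the transfer itself can take hours for large disks.
qm migrate 100 pve2 --online --with-local-disks
```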

2. No Built-In High Availability

Local storage does not support automatic HA.
You must:

  • Manually restart VMs

  • Or use Proxmox storage replication (failover is manual, not instant, and writes since the last replication run can be lost)

3. Backup Complexity

Each node’s local disks must be backed up individually.
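A common way to centralize this is to point every node's backup jobs at one backup storage; as an illustrative sketch (the storage ID is a placeholder, e.g. a Proxmox Backup Server datastore):

```shell
# Back up VM 100 from its current node to a shared backup target.
# "backup-store" is an example storage ID defined at the datacenter level.
vzdump 100 --storage backup-store --mode snapshot --compress zstd
```

Scheduled backup jobs configured in the datacenter GUI achieve the same thing without per-node scripting.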

4. Scaling Challenges

Adding more storage requires expanding each node separately, complicating cluster-wide storage management.


Hybrid Approach: The Best of Both Worlds

Many Proxmox users combine both models:

  • Critical VMs run on a Ceph cluster for HA, live migration, and fault tolerance.

  • Performance-critical VMs (databases, cache servers) live on local SAS SSDs for maximum speed.

Proxmox’s storage replication feature allows you to mirror local storage across nodes, providing redundancy — though without full HA capabilities.
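Storage replication is configured per VM with pvesr; a minimal sketch, where the VMID, target node, and interval are example values:

```shell
# Replicate VM 100's local disks to node "pve2" every 15 minutes.
# The job ID "100-0" follows Proxmox's <vmid>-<index> naming convention.
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# List configured replication jobs and their status
pvesr list
```

Note that replication requires ZFS-backed local storage, and a failover to the replica can lose up to one replication interval of writes.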


Conclusion

Choosing between shared and local storage depends on your specific needs:

  • If you value full automatic HA and redundancy: Ceph shared storage

  • If you value maximum VM performance and simplicity: local storage with SAS SSDs

  • If you value a balance of cost, performance, and resilience: the hybrid model

Key Takeaway:
In Proxmox VE, only Ceph (or equivalent clustered storage) enables full, automatic HA for VMs.
Traditional NFS/iSCSI storage offers shared access but does not guarantee automatic failover.

Choosing the right storage architecture is crucial to building a Proxmox cluster that meets your performance, uptime, and scalability goals for the future.