A Technical Reality Check for Admins and Decision-Makers
TrueNAS and ZFS are now regarded in many IT departments as a capable alternative to traditional SAN or NAS systems. Nevertheless, certain assumptions about performance persist stubbornly — mostly carried over from RAID-based thinking, older storage generations, or misinterpretations of the ZFS architecture. This article debunks the most common misconceptions and explains how ZFS actually works — and why many problems simply arise from incorrect expectations.
1. “RAIDZ is excellent for virtualization.”
RAIDZ is a capacity-optimized layout, not a performance layout. RAIDZ2 and Z3 offer excellent protection mechanisms, but their structure means that each RAIDZ vdev delivers roughly the random IOPS of a single disk. For sequential workloads this may be acceptable; for virtualization it is disastrous. Random I/O in particular, which VMs generate constantly, demands parallel I/O channels, and those are only achieved with multiple vdevs: mirror VDEVs or NVMe mirrors. Anyone running VMs on RAIDZ will inevitably experience fluctuating latencies and poor performance.
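The difference shows up directly in the pool layout. A minimal sketch, assuming four disks; pool and device names are placeholders:

```shell
# Four disks as two mirror vdevs: ZFS stripes across both vdevs,
# so random I/O is served by two parallel channels.
zpool create vmpool mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# The same four disks as a single RAIDZ2 vdev: more usable capacity,
# but roughly the random IOPS of one disk.
# zpool create vmpool raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```

Adding further mirror pairs scales random IOPS almost linearly, which is exactly what VM workloads need.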
2. “L2ARC makes the system faster in general.”
A widespread misconception. L2ARC is not a silver bullet and does not replace insufficient RAM. It is a supplement, not an accelerator card. In many cases, it is used incorrectly — especially when there is barely enough RAM for metadata. Only when the working set exceeds the ARC, the workload is predominantly read-intensive, and the architecture provides sufficient RAM can L2ARC deliver benefits. For most SMB environments, additional RAM is almost always more effective and sustainable.
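Before adding an L2ARC, it is worth checking how the ARC is doing. A sketch with placeholder pool and device names:

```shell
# If the ARC hit ratio is already high, an L2ARC will add little.
arc_summary | grep -i "hit ratio"

# Adding a cache device (L2ARC) is non-destructive:
zpool add tank cache /dev/nvme0n1

# It can be removed again at any time without data loss:
zpool remove tank /dev/nvme0n1
```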
3. “Dedup automatically saves a lot of storage space.”
Dedup is fascinating: technically impressive, yet in practice one of the most expensive features ZFS offers. The reason is the dedup table (DDT), which must stay accessible and can consume enormous amounts of RAM. In many real-world environments, especially heterogeneous VM landscapes or file shares, the savings are negligible. Dedup is only worthwhile in very specific use cases, for example large, highly redundant VDI pools. For typical SMB workloads, enabling it usually amounts to a misconfiguration.
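A quick sanity check before enabling dedup, sketched under the common rule of thumb of roughly 320 bytes of DDT per unique block; pool name and sizes are placeholders:

```shell
# Dry run: let ZFS simulate dedup ratios on an existing pool
# without enabling anything (read-only, can take a long time):
#   zdb -S tank

# Back-of-the-envelope DDT RAM estimate:
# 10 TiB of unique data at a 64 KiB average block size,
# assuming ~320 bytes per DDT entry.
blocks=$(( 10 * 1024 * 1024 * 1024 / 64 ))   # data in KiB / block size in KiB
echo "DDT size: $(( blocks * 320 / 1024 / 1024 )) MiB"
```

At these example numbers the table alone needs about 50 GiB of RAM, which illustrates why dedup rarely pays off in SMB setups.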
4. “The volblocksize can be adjusted later.”
No. Once a ZVOL has been created, its volblocksize is fixed; the only way to change it is to create a new ZVOL with the desired block size and migrate the data. This parameter has a massive impact on performance and efficiency, and with VMs an unsuitable block size is a common cause of poor latencies. The best choice for VM storage remains 16K: a robust, universal default that works well with most hypervisors and performs consistently.
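Since the property is immutable, it has to be set at creation time. A sketch with placeholder names:

```shell
# Create a 100 GiB ZVOL for a VM disk with 16K volblocksize:
zfs create -V 100G -o volblocksize=16K tank/vm-disk1

# Verify; the property is read-only from now on:
zfs get volblocksize tank/vm-disk1
```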
5. “NFS is fundamentally slower than iSCSI.”
That may have been true in the past — but hardly anymore today. Modern implementations such as NFSv4.2, nconnect, and corresponding NIC offloading optimizations allow NFS to reach the same level as iSCSI. The difference today is less about technology and more about the desired behavior: iSCSI is more strictly structured, while NFS offers greater flexibility. “NFS is slow” is almost always the result of poor configuration — not the protocol itself.
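On a Linux client, the relevant options are set at mount time. Hostname and paths below are placeholders:

```shell
# NFSv4.2 with 8 parallel TCP connections (nconnect, kernel 5.3+)
# and 1 MiB read/write transfer sizes:
mount -t nfs -o vers=4.2,nconnect=8,rsize=1048576,wsize=1048576 \
    truenas.example.com:/mnt/tank/share /mnt/share
```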
6. “ZFS barely benefits from more CPU cores.”
ZFS is an exceptionally parallel file system. Internally, it works with many threads that can process checksums, compression, snapshots, ARC calculations, and scrubs in parallel. More CPU cores ensure that these processes do not become bottlenecks. Especially with ZSTD compression or very large ARC caches, additional cores show a clearly measurable effect.
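ZSTD in particular scales with available cores and can be tuned per dataset. Dataset names are placeholders:

```shell
# Default ZSTD level (zstd-3): good ratio at moderate CPU cost.
zfs set compression=zstd tank/data

# Higher levels trade more CPU for a better ratio, e.g. on backup datasets:
zfs set compression=zstd-9 tank/backups

# Check what compression actually achieves:
zfs get compressratio tank/data
```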
7. “NVMe always accelerates the entire system.”
NVMe is fast — but only when used in the right layout. An NVMe RAIDZ group remains limited by the RAIDZ structure despite the fast media. NVMe only realizes its full potential when used in mirror configurations. There it delivers low latencies and very high IOPS — ideal for virtualization and data-intensive services. Used incorrectly, it merely becomes a very expensive capacity solution.
8. “A SLOG accelerates every type of storage workload.”
A SLOG exclusively accelerates synchronous write operations. This applies, for example, to certain VM workloads (depending on hypervisor settings), databases, or NFS with sync=always. For traditional SMB shares or backup targets, a SLOG makes no difference. On the contrary: an incorrectly sized or unmirrored SLOG can become a risk. Anyone using a SLOG should place it exclusively on fast SSDs with power-loss protection (PLP), and always as a mirror.
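A sketch of a correctly built SLOG, assuming two PLP-capable NVMe devices; all names are placeholders:

```shell
# Add a mirrored log vdev; only synchronous writes will touch it:
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Check whether a dataset issues sync writes at all:
zfs get sync tank/vmstore
```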
9. “Snapshots significantly degrade performance.”
In ZFS, a snapshot is not a heavy copy as in conventional file systems, but a reference to existing blocks via copy-on-write metadata. This means snapshots are lightweight, generate barely any overhead, and cost virtually no IOPS. Only when thousands of snapshots accumulate within a single dataset can management overhead become noticeable. For typical SMB environments, this is irrelevant.
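This is easy to observe in practice. Dataset names are placeholders:

```shell
# Created instantly, regardless of dataset size:
zfs snapshot tank/data@before-update

# A fresh snapshot consumes almost no space; usage grows only
# as the live data diverges from it:
zfs list -r -t snapshot -o name,used,referenced tank/data
```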
10. “More RAM only pays off for large systems.”
The opposite is true. RAM is the most important performance factor for ZFS, because the ARC accelerates nearly all recurring operations. More RAM improves:
- Caching
- Metadata access
- Snapshots
- Read performance
- Compression efficiency
The rule of thumb still applies: ZFS loves RAM — and there is hardly such a thing as too much.
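Whether the ARC is under pressure is easy to check; both tools ship with OpenZFS:

```shell
# Summary of ARC size, target size, and hit statistics:
arc_summary

# Live ARC statistics, one sample per second, five samples:
arcstat 1 5
```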
Comparison Table: What Most Myths Have in Common
| Myth | Root Cause in Practice | What ZFS Actually Does |
|---|---|---|
| RAIDZ for VMs is fast | incorrect RAID-based thinking | ZFS needs parallelized VDEVs |
| NFS is slow | old versions / wrong config | modern implementations are performant |
| L2ARC accelerates everything | misunderstanding of ARC | useful only in niche cases, and itself costs RAM |
| Snapshots cost IOPS | comparison with traditional NAS | ZFS COW — virtually no overhead |
| Dedup saves space | incorrect expectations | only suitable for special cases |
Conclusion
Many performance issues in TrueNAS do not stem from insufficient hardware, but from incorrect expectations inherited from a different storage generation. ZFS has its own rules — and those who understand them build systems that are more stable and performant than conventional solutions.
We analyze your existing environment, identify bottlenecks, and develop an optimized ZFS layout — precisely tailored to your workloads.
DATAZONE supports you with implementation — contact us for a no-obligation consultation.