Optional: Setting up your DA node to use ZFS
WARNING
Using ZFS compression may impact node performance depending on your hardware configuration. Ensure your system meets the recommended requirements before proceeding. This is an optional optimization that may not be suitable for all deployments.
Enabling ZFS compression on a DA Node server can significantly optimize storage efficiency by compressing data on the fly. Follow this step-by-step guide to implement ZFS compression without requiring any additional tuning on the DA node.
NOTE
ZFS, compression zstd-3:
$ zfs get compressratio celestia && du -h /celestia/bridge/.celestia-bridge
NAME PROPERTY VALUE SOURCE
celestia compressratio 1.22x -
1.3T /celestia/bridge/.celestia-bridge
EXT4, no compression:
$ du -h ~/.celestia-bridge/
1.8T /home/ubuntu/.celestia-bridge/
Requirements:
- A bare metal server with:
- RAM: 64GB or more
- CPU: Latest generation EPYC or Xeon with:
- Clock speed: 2.1GHz or higher
- Threads: 32 or higher
- Note: Additional CPU overhead is required for ZFS compression
- At least one empty disk (with no filesystem)
Guide:
Get your disk name:
lsblk --nodeps -o name
Verify disk is empty (should show no partitions):
lsblk YOUR_DISK_NAME (e.g. /dev/nvme0n1 or /dev/sda)
Verify disk is not mounted:
mount | grep YOUR_DISK_NAME
Set variables:
ZFS_POOL_NAME="celestia" && ZFS_DATASET_NAME="bridge"
Validate variables are set:
if [ -z "$ZFS_POOL_NAME" ] || [ -z "$ZFS_DATASET_NAME" ]; then
echo "Error: Variables not set correctly"
exit 1
fi
Install ZFS utils:
sudo apt update && sudo apt install zfsutils-linux
Create ZFS pool:
zpool create -o ashift=12 $ZFS_POOL_NAME /dev/nvme0n1
NOTE
If you have more than one disk available, you can add them as well:
zpool create -o ashift=12 $ZFS_POOL_NAME /dev/nvme0n1 /dev/nvme1n1
Verify pool status:
zpool status $ZFS_POOL_NAME
Verify pool properties:
zpool get all $ZFS_POOL_NAME
Create dataset:
zfs create $ZFS_POOL_NAME/$ZFS_DATASET_NAME
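By default, ZFS mounts a new dataset at `/<pool>/<dataset>`, which with the variables above is `/celestia/bridge`. A quick optional check (not part of the original steps) to confirm the mountpoint before pointing the node at it:

```shell
# List the pool and dataset with their mountpoints;
# the dataset should appear at /celestia/bridge.
zfs list -o name,mountpoint $ZFS_POOL_NAME $ZFS_POOL_NAME/$ZFS_DATASET_NAME
```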
Enable compression:
zfs set compression=zstd-3 $ZFS_POOL_NAME/$ZFS_DATASET_NAME
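An optional check to confirm the setting took effect:

```shell
# Should report compression = zstd-3 with SOURCE "local" for the dataset.
zfs get compression $ZFS_POOL_NAME/$ZFS_DATASET_NAME
```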
Set the custom path to the bridge data folder:
# Add flag --node.store /celestia/bridge/.celestia-bridge to your command, example:
celestia bridge start --metrics.tls=true --metrics --metrics.endpoint otel.celestia.observer --p2p.metrics --node.store /celestia/bridge/.celestia-bridge
# Add flag --node.store /celestia/bridge/.celestia-bridge-mocha-4 to your command, example:
celestia bridge start --metrics.tls=true --metrics --metrics.endpoint otel.mocha.celestia.observer --p2p.metrics --node.store /celestia/bridge/.celestia-bridge-mocha-4 --p2p.network mocha
# Add flag --node.store /celestia/bridge/.celestia-bridge-arabica-11 to your command, example:
celestia bridge start --node.store /celestia/bridge/.celestia-bridge-arabica-11 --p2p.network arabica
NOTE
It is recommended to sync from scratch. If you use a snapshot, the local path you pass to --node.store must be identical to the one used in the snapshot.
After completing the steps above, you can begin syncing your DA node.
You can check your compression rate with the following command:
zfs get compressratio $ZFS_POOL_NAME
ZFS Fine-Tuning (Advanced)
DANGER
The following settings can significantly impact data integrity and system stability. Only proceed if you fully understand the implications of each setting. These optimizations should be carefully tested in a non-production environment first.
If you want to increase your I/O performance and sync speed, you can try the following steps:
Disable Auto-Trim
Disabling auto-trim can improve I/O performance, but may lead to increased SSD wear over time.
sudo zpool set autotrim=off $ZFS_POOL_NAME
NOTE
You can always trim manually: sudo zpool trim $ZFS_POOL_NAME
Disable sync
DANGER
Disabling sync provides faster write speeds but significantly increases the risk of data corruption in case of system crashes or power failures. Data in memory may be permanently lost before being written to disk.
This setting should:
- Only be used during initial node sync
- Never be used in production environments
- Be re-enabled immediately after initial sync completes
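The commands themselves are not listed above; assuming the pool and dataset names used throughout this guide, toggling sync looks like this:

```shell
# Disable synchronous writes during initial sync
# (risk of data loss on crash or power failure)
sudo zfs set sync=disabled $ZFS_POOL_NAME/$ZFS_DATASET_NAME

# Re-enable the default behavior once the node has caught up
sudo zfs set sync=standard $ZFS_POOL_NAME/$ZFS_DATASET_NAME
```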
Disable prefetch
Disabling prefetch reduces memory usage but can slow down sequential read workloads.
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_prefetch_disable
NOTE
You can always re-enable it: echo 0 | sudo tee /sys/module/zfs/parameters/zfs_prefetch_disable
Set record size
Setting recordsize=256K defines the maximum block size that ZFS will use when writing data to the dataset.
zfs set recordsize=256K $ZFS_POOL_NAME/$ZFS_DATASET_NAME
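To verify the new record size, note that only newly written blocks use it; data already on disk keeps the record size it was written with:

```shell
# Should report recordsize = 256K for the dataset.
zfs get recordsize $ZFS_POOL_NAME/$ZFS_DATASET_NAME
```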