Ever wanted your remote storage to perform like a local NVMe drive? This guide will show you several approaches to optimize remote storage performance on Debian/RHEL-based systems, with benchmark-proven results.

Benchmark Results

Here's what you can achieve with these optimization techniques:

Traditional Remote Mounting (No Optimization)

dd if=/dev/zero of=./testfile bs=1M count=100
104857600 bytes (105 MB, 100 MiB) copied, 29.1344 s, 3.6 MB/s

With Advanced Optimization Techniques

dd if=/dev/zero of=./testfile bs=1M count=100
104857600 bytes (105 MB, 100 MiB) copied, 0.144608 s, 725 MB/s
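Note that dd reading from /dev/zero measures buffered sequential writes, most of which land in the local cache; that is precisely the effect these methods exploit. For numbers that include a flush to stable storage, fio is a reasonable cross-check (the target directory is whichever mount you are testing):

fio --name=seqwrite --directory=/mnt/storage --rw=write --bs=1M --size=100M --end_fsync=1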

Method 1: Optimized Rclone Caching

This method uses rclone with fine-tuned parameters for maximum performance.

Prerequisites

  • Remote storage with SFTP access
  • 10GB+ free local disk space for caching
  • Debian/Ubuntu: apt install rclone fuse3 (use fuse on older releases)
  • RHEL/CentOS/Fedora: dnf install epel-release && dnf install rclone fuse3

Setup

  1. Create necessary directories
mkdir -p /mnt/storage /mnt/.cache
  2. Configure rclone

For headless servers:

rclone config

For GUI configuration (home desktop):

rclone rcd --rc-web-gui

Create a new remote with these settings:

name = storage-sftp
storage type = sftp
host = [Storage Server IP-address]
username = [Username]
port = [SSH Port - usually 22]
password = [Password]
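Once saved, the entry in rclone.conf (run rclone config file to find its location) looks roughly like this; the host and user values here are examples, and the password is stored obscured rather than in plain text:

[storage-sftp]
type = sftp
host = 203.0.113.10
user = storageuser
port = 22
pass = <obscured password>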
  3. Mount with optimized parameters
rclone mount storage-sftp:/remote/path /mnt/storage --daemon \
    --dir-cache-time 72h --cache-dir=/mnt/.cache \
    --vfs-cache-mode full --vfs-cache-max-size 10G \
    --buffer-size 256M --vfs-read-ahead 512M \
    --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G \
    --transfers 4 --checkers 8 --contimeout 60s \
    --timeout 300s --low-level-retries 10
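A quick sanity check that the mount is live and writable:

findmnt /mnt/storage
touch /mnt/storage/.write-test && rm /mnt/storage/.write-test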
  4. Create systemd service for automatic mounting

Create file /etc/systemd/system/storage-mount.service:

[Unit]
Description=Optimized Rclone Remote Mount
AssertPathIsDirectory=/mnt/storage
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount storage-sftp:/remote/path /mnt/storage \
    --dir-cache-time 72h --cache-dir=/mnt/.cache \
    --vfs-cache-mode full --vfs-cache-max-size 10G \
    --buffer-size 256M --vfs-read-ahead 512M \
    --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G \
    --transfers 4 --checkers 8 --contimeout 60s \
    --timeout 300s --low-level-retries 10
ExecStop=/bin/fusermount -u /mnt/storage
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
  5. Enable and start the service
systemctl daemon-reload
systemctl enable storage-mount
systemctl start storage-mount
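Confirm the service came up cleanly:

systemctl status storage-mount --no-pager
journalctl -u storage-mount -n 20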

Method 2: NFS with Local Caching

If your storage provider supports NFS, this method often outperforms SFTP. Local caching here uses the kernel's FS-Cache facility through the cachefilesd daemon, which keeps recently read NFS data on a fast local disk.

Prerequisites

  • Remote storage with NFS support
  • A local NVMe partition to dedicate to the cache
  • Debian/Ubuntu: apt install nfs-common cachefilesd
  • RHEL/CentOS/Fedora: dnf install nfs-utils cachefilesd

Setup

  1. Install required packages
# Debian/Ubuntu
apt install nfs-common cachefilesd

# RHEL/CentOS/Fedora
dnf install nfs-utils cachefilesd
  2. Prepare an NVMe-backed cache directory

Warning: This will erase data on the specified partition!

# Identify your NVMe device
lsblk

# Format the cache partition and mount it where cachefilesd keeps its
# cache (the default "dir" in /etc/cachefilesd.conf is /var/cache/fscache)
# Replace nvme0n1p1 with your actual partition
mkfs.ext4 /dev/nvme0n1p1
mkdir -p /var/cache/fscache
mount /dev/nvme0n1p1 /var/cache/fscache
  3. Start the cache daemon
# On some Debian/Ubuntu releases you must also set RUN=yes
# in /etc/default/cachefilesd before the daemon will start
systemctl enable --now cachefilesd
  4. Mount the NFS share with caching enabled
mkdir -p /mnt/storage
mount -t nfs -o fsc storage.server:/path /mnt/storage
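If your kernel exposes FS-Cache statistics, you can confirm the cache is doing work; the counters should grow as files are read through the mount:

cat /proc/fs/fscache/stats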
  5. Create a systemd service for automatic mounting

Create file /etc/systemd/system/storage-nfs.service:

[Unit]
Description=NFS Mount with Local FS-Cache
After=network-online.target cachefilesd.service
Wants=network-online.target
Requires=cachefilesd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount -t nfs -o fsc storage.server:/path /mnt/storage
ExecStop=/bin/umount /mnt/storage

[Install]
WantedBy=multi-user.target
  6. Enable and start the service
systemctl daemon-reload
systemctl enable storage-nfs
systemctl start storage-nfs

Method 3: WireGuard + Advanced I/O

Using WireGuard instead of tunneling over SSH reduces encryption overhead while maintaining security.

Prerequisites

  • Control over both the client and remote server
  • Debian/Ubuntu: apt install wireguard-tools
  • RHEL/CentOS/Fedora: dnf install wireguard-tools

Setup

  1. Install WireGuard on both systems
# Debian/Ubuntu
apt install wireguard-tools

# RHEL/CentOS/Fedora
dnf install wireguard-tools
  2. Generate keys on the local system
wg genkey | tee local-private.key | wg pubkey > local-public.key
  3. Generate keys on the remote storage server
wg genkey | tee remote-private.key | wg pubkey > remote-public.key
  4. Configure WireGuard on the local system

Create file /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = <local-private-key>
Address = 10.10.10.1/24
ListenPort = 51820

[Peer]
PublicKey = <remote-public-key>
AllowedIPs = 10.10.10.2/32
Endpoint = <remote-server-ip>:51820
PersistentKeepalive = 25
  5. Configure WireGuard on the remote storage server

Create file /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = <remote-private-key>
Address = 10.10.10.2/24
ListenPort = 51820

[Peer]
PublicKey = <local-public-key>
AllowedIPs = 10.10.10.1/32
  6. Start WireGuard on both systems
systemctl enable --now wg-quick@wg0
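Verify the tunnel is up from the local side before layering NFS on top of it:

wg show
ping -c 3 10.10.10.2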
  7. Optimize network parameters on both systems
cat > /etc/sysctl.d/99-network-performance.conf << EOF
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
net.core.netdev_max_backlog = 5000
EOF

# Apply changes
sysctl -p /etc/sysctl.d/99-network-performance.conf
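After applying, confirm the congestion control actually switched:

# Should print "bbr"; if not, ensure the tcp_bbr module is available in your kernel
sysctl net.ipv4.tcp_congestion_control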
  8. Mount NFS over WireGuard

On the remote storage server, configure NFS:

# Debian/Ubuntu
apt install nfs-kernel-server

# RHEL/CentOS/Fedora
dnf install nfs-utils

# Export the directory
echo '/storage 10.10.10.1(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -a
systemctl restart nfs-server
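Verify the export is active before moving to the client:

exportfs -v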

On the local system:

mkdir -p /mnt/storage
mount -t nfs -o vers=4.2,fsc,nocto,noatime,nodiratime 10.10.10.2:/storage /mnt/storage

# Note: the fsc option caches NFS data locally through FS-Cache and
# requires the cachefilesd daemon to be running (see Method 2)
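Once mounted, check which options the client actually negotiated:

nfsstat -m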
  9. Create systemd service for automatic mounting

Create file /etc/systemd/system/storage-wireguard.service:

[Unit]
Description=Mount storage over WireGuard
After=network-online.target wg-quick@wg0.service
Requires=wg-quick@wg0.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount -t nfs -o vers=4.2,fsc,nocto,noatime,nodiratime 10.10.10.2:/storage /mnt/storage
ExecStop=/bin/umount /mnt/storage

[Install]
WantedBy=multi-user.target
  10. Enable and start the service
systemctl daemon-reload
systemctl enable storage-wireguard
systemctl start storage-wireguard

Method 4: Hybrid Local/Remote Approach

This approach gives you true local NVMe performance while automatically syncing to remote storage.

Prerequisites

  • Debian/Ubuntu: apt install lsyncd
  • RHEL/CentOS/Fedora: dnf install epel-release && dnf install lsyncd

Setup

  1. Install required packages
# Debian/Ubuntu
apt install lsyncd

# RHEL/CentOS/Fedora
dnf install epel-release
dnf install lsyncd
  2. Create local and remote mount points
mkdir -p /mnt/local-nvme /mnt/remote-storage
  3. Mount your local NVMe drive
# Replace with your actual NVMe partition
mount /dev/nvme0n1p1 /mnt/local-nvme
  4. Mount your remote storage using one of the previous methods
# Example using rclone
rclone mount storage-sftp:/remote/path /mnt/remote-storage --daemon
  5. Configure lsyncd for real-time synchronization

Create file /etc/lsyncd/lsyncd.conf.lua:

settings {
   logfile = "/var/log/lsyncd/lsyncd.log",
   statusFile = "/var/log/lsyncd/lsyncd-status.log",
   statusInterval = 10
}

sync {
   default.rsync,
   source = "/mnt/local-nvme/",
   target = "/mnt/remote-storage/",
   delay = 5,
   rsync = {
      archive = true,
      compress = true,
      bwlimit = 10000  -- limit rsync to 10000 KiB/s (about 10 MB/s)
   }
}
  6. Create necessary log directories
mkdir -p /var/log/lsyncd
  7. Start and enable lsyncd
systemctl start lsyncd
systemctl enable lsyncd
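Verify that changes propagate: with delay = 5 in the config, a new file should appear on the remote side within a few seconds.

touch /mnt/local-nvme/sync-test
sleep 10
ls -la /mnt/remote-storage/sync-test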
  8. Create convenience symbolic links (optional)
ln -s /mnt/local-nvme /storage

Troubleshooting

Connection Issues

# Check if the mount is responsive
ls -la /mnt/storage

# Check system logs for errors
journalctl -u storage-mount -n 50

# Restart the mount service
systemctl restart storage-mount

# Force unmount if stuck
fusermount -uz /mnt/storage

Performance Degradation

# Check disk space for cache
df -h /mnt/.cache

# Clear rclone cache (if using rclone); stop the mount first
systemctl stop storage-mount
rm -rf /mnt/.cache/*
systemctl start storage-mount

# Check network performance (run iperf3 -s on the remote server first)
iperf3 -c <remote-server-ip>

Monitoring Mount Health

# Monitor ongoing transfers with rclone (requires the mount to be started with --rc)
rclone rc core/stats

# Watch real-time I/O activity
iostat -xm 2

Security Considerations

  1. Use key-based authentication instead of passwords. Generate an SSH key pair:
   ssh-keygen -t ed25519 -f ~/.ssh/storage_key
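Install the public key on the storage server (substitute your own user and host):

   ssh-copy-id -i ~/.ssh/storage_key.pub user@storage.server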

For rclone, update your config to use the key:

   rclone config

Choose your remote, then:

   key_file = /root/.ssh/storage_key
   key_use_agent = false
  2. Encrypt sensitive data. If you store confidential files, use rclone crypt:
   rclone config

Create a new remote of type "crypt" that points to your existing remote.
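The resulting entry looks roughly like this; the remote name is an example, and both passwords are stored obscured:

   [storage-crypt]
   type = crypt
   remote = storage-sftp:/remote/path
   password = <obscured>
   password2 = <obscured>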

  3. Restrict mount permissions
   chmod 700 /mnt/storage

Remember that while these methods create the impression of local access speeds, actual file transfers to the remote storage still happen in the background. For the best experience, ensure a stable and fast internet connection.