Intro
I’ve had qBittorrent running happily in Docker for over a year now, using the linuxserver/qbittorrent image on an Ubuntu Server host (a small NUC) alongside other containers. Recently, the host started regularly becoming unresponsive: SSH wouldn’t connect, and the only way to recover was to power cycle the machine.
The symptoms: RAM usage would climb and climb, the server would become sluggish, and then it would completely lock up. Not “restart the container” locked up. Not “SSH is slow” locked up. DEAD, nothing, only pulling the plug and power cycling could bring it back.
People say “unused RAM is wasted RAM”, but this was taking the piss.
After some digging, this turns out to be a known quirk of how qBittorrent (more specifically the libtorrent library it uses) interacts with Linux’s filesystem cache.
This post covers the findings, the actual root cause, and what fixed it for me.
What I thought it was
From the outside, it looks like a memory leak:
- Container memory usage goes up over time, roughly tracking the number of torrents (Linux ISOs) being downloaded in a session
- Host “available” memory drops to basically nothing
- Eventually the whole machine stops responding
The root cause: mmap + Linux page cache
After searching around the web without finding anything concrete, I saw a few posts suggesting it might be related to qBittorrent using libtorrent 2.0+ (rather than the 1.x series that older versions used). The problem looked to be the memory-mapped file (mmap) I/O that libtorrent 2.0+ uses.
Instead of doing “normal” buffered reads/writes, libtorrent can ask the kernel to map pieces of torrent data directly into memory. Linux then keeps those mapped pages around as file cache (page cache).
Caching file data in otherwise-idle RAM is normally a good thing; it’s part of why Linux systems usually feel so fast.
However:
- This cached memory can grow aggressively when you’re seeding lots of torrents/touching lots of files.
- Container memory limits don’t always protect you in the way you’d think, because the pressure is happening via the kernel’s caching behaviour - not via Docker.
- Once RAM is effectively “full”, the system starts thrashing (spending more time moving data between RAM and disk than running application code) so hard that it becomes unresponsive.
So the “leak” isn’t really a leak. We’re just not working with, or being kind to, the OS.
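You can watch this mechanism directly: reading a file grows the Cached figure in /proc/meminfo. A minimal sketch (the scratch file path and the 64 MiB size are arbitrary choices of mine, not anything qBittorrent-specific):

```shell
#!/bin/sh
# Print the kernel page cache size (in kB) from /proc/meminfo.
cached_kb() { awk '/^Cached:/ {print $2}' /proc/meminfo; }

before=$(cached_kb)

# Write and read back a ~64 MiB scratch file; the kernel keeps
# the file's pages in the page cache after the read.
dd if=/dev/zero of=/tmp/cache-demo bs=1M count=64 2>/dev/null
cat /tmp/cache-demo > /dev/null

after=$(cached_kb)

echo "page cache before: ${before} kB"
echo "page cache after:  ${after} kB"
rm -f /tmp/cache-demo
```

On a quiet machine the “after” figure usually comes out noticeably higher. That cached memory is reclaimable in theory, but it’s exactly what libtorrent’s mmap I/O inflates at scale.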
The fix: disable OS cache in qBittorrent
On qBittorrent 4.4.0+, there’s a setting that stops the host from being eaten alive by page cache:
- Open qBittorrent Web UI
- Go to Tools → Options → Advanced
- Change both Disk IO read mode and Disk IO write mode from Default to Disable OS cache
- Also set Physical memory usage limit to something explicit (I’ve opted for 6144 MiB; my host has 32GB of RAM)
This is the single most impactful change, because it stops qBittorrent from leaning so heavily on the kernel’s caching behaviour.
From my testing, I’ve been downloading a number of torrents without issue for the past 48 hours. Previously, I’d see the RAM usage steadily climb until the machine would lock up about 24-48 hours later.
Using docker stats, I can see that the container hasn’t used more than 500 MB of RAM. This looks to have solved the problem and hopefully won’t require me to pull the plug every couple of days.
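If you want to check yours, docker stats can take a one-shot sample rather than streaming forever. A small sketch - it assumes your container is named qbittorrent, and the fallback messages are only there so it degrades cleanly on machines without Docker:

```shell
#!/bin/sh
# One-shot memory sample for the qbittorrent container (no live stream).
if command -v docker >/dev/null 2>&1; then
  docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}' qbittorrent \
    || echo "qbittorrent container not running"
else
  echo "docker not found on this host"
fi
```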
I’ve not seen any obvious impact on download/upload speeds, but this isn’t really an optional change for me. If it does reduce download speeds, I’ll just have to live with it.
Enforce Docker Memory Limits
I did also have a memory limit set on the Docker container (16GB), but that never solved the problem: usage would climb to 16GB and then the host would lock up.
Whilst it’s not a solution for this container/scenario, it’s still a good idea to set one anyway.
In docker-compose.yml, you can set a hard memory limit. For example:
```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    # ... existing config ...
    mem_limit: 16g
```

If you’re using LinuxServer.io: switch to libtorrent v1.2
If neither of the above fixes works, it’s advisable to downgrade the libtorrent version. Users on Reddit have reported that libtorrent 1.2 fixes the issue.
The LinuxServer.io image has a tag for libtorrent v1.2: libtorrentv1
Here’s what your docker-compose.yml should look like:
```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:libtorrentv1
```

Prevent the “thrashing”: enable earlyoom (Ubuntu/Debian)
This next bit isn’t a fix, but a mitigation of the host locking up and requiring a power cycle.
Install and enable earlyoom:

```shell
sudo apt install earlyoom
sudo systemctl enable --now earlyoom
```

earlyoom monitors memory pressure and kills the worst offender before the kernel gets into an unrecoverable thrash.
Again, not a solution, a mitigation for the undesirable side effects of running out of RAM.
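If you want earlyoom to step in sooner or later than its defaults, the Debian/Ubuntu package reads its flags from /etc/default/earlyoom. A sketch - the 5% thresholds here are an example I’ve picked, not a recommendation:

```shell
# /etc/default/earlyoom
# -m: minimum available RAM (percent) before earlyoom acts
# -s: minimum free swap (percent) before earlyoom acts
# When both drop below their thresholds, earlyoom kills the
# process with the highest oom_score.
EARLYOOM_ARGS="-m 5 -s 5"
```

After editing, restart the service with sudo systemctl restart earlyoom.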
Summary
If qBittorrent in Docker is “leaking” memory and hard-freezing your Linux host, the problem is often:
- libtorrent mmap behaviour
- Linux page cache doing its job a bit too eagerly
- host memory pressure and eventually a dead machine
The most effective fix I’ve found is the Disable OS cache setting.
I’m 48 hours into this, and currently it looks to have resolved the issue.
If you have the same problem, I hope this fixes it for you too.