GPU Passthrough in a Proxmox LXC: GTX 1070 for Plex, Frigate, and Ollama
Running a homelab on a single machine means making every resource count. My System76 Oryx Pro has a GTX 1070 Mobile with 8GB of VRAM sitting in it, and I wanted every service that could benefit from GPU acceleration to use it — Plex for transcoding, Frigate for camera feeds, Immich for ML-powered photo search, and Ollama for local LLM inference.
The catch? Everything runs inside a Proxmox LXC container, not a full VM. GPU passthrough to LXC is less documented and more finicky than VM passthrough. Here’s how I got it working.
The Stack
The full path from hardware to application looks like this:
GTX 1070 (hardware)
→ NVIDIA 580.x driver (Proxmox host)
→ cgroup2 device passthrough (LXC config)
→ NVIDIA libraries (inside LXC)
→ nvidia-container-toolkit (Docker runtime)
→ Services (Plex, Frigate, Immich ML, Ollama)
Each layer has to be configured correctly or the whole chain breaks.
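A quick way to sanity-check each layer in order is to run `nvidia-smi` at every level of the stack. This is a sketch of how I'd do it; the container ID (100) and the CUDA image tag are illustrative, not prescriptive:

```shell
# Layer-by-layer sanity check, bottom of the stack upward.

# 1. Host: is the kernel module loaded and does the driver see the card?
nvidia-smi

# 2. LXC: are the device nodes mounted and the userspace libs installed?
pct exec 100 -- nvidia-smi

# 3. Docker inside the LXC: does the NVIDIA runtime pass the GPU through?
pct exec 100 -- docker run --rm --gpus all \
    nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

Whichever step fails first tells you which layer's configuration to go back and fix.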
Why Driver 580.x Specifically
Pascal-generation GPUs (GTX 10xx series) are on their last supported driver branch. NVIDIA 590+ drops Pascal support entirely. Driver 580.119.02 is the sweet spot — it supports Pascal on modern kernels (6.17+) and works with the latest container toolkit.
I installed it from the .run file directly, not from Debian packages, because Proxmox’s kernel headers don’t always play nice with DKMS from apt repos.
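For reference, the host-side install goes roughly like this. The header package name varies by Proxmox release, and the download URL just follows NVIDIA's usual layout, so treat both as assumptions to verify:

```shell
# On the Proxmox host. Kernel headers must match the running kernel so the
# installer can build the module (package is proxmox-headers-* on newer PVE).
apt install pve-headers-$(uname -r)

# Fetch and run the installer -- URL follows NVIDIA's standard layout.
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/580.119.02/NVIDIA-Linux-x86_64-580.119.02.run
chmod +x NVIDIA-Linux-x86_64-580.119.02.run
./NVIDIA-Linux-x86_64-580.119.02.run
```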
LXC Configuration
The key pieces in the LXC config (/etc/pve/lxc/100.conf) are device allow rules and bind mounts:
# GPU device access
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 506:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.cgroup2.devices.allow: c 226:* rwm
# Bind mount GPU devices into the container
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
The major numbers correspond to the device nodes above: 195 is the NVIDIA character devices (/dev/nvidia*), 226 is DRM (/dev/dri), and 506/509 are the dynamically assigned majors for nvidia-uvm and friends on my host. Without these cgroup rules, the container can see the device files but can’t actually use them.
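If your majors differ (the nvidia-uvm ones are assigned dynamically, so they may not be 506/509 on your machine), you can read them straight off the device nodes:

```shell
# The number before the comma in ls -l output is the major.
ls -l /dev/nvidia* /dev/dri/

# Or ask the kernel directly which character-device majors are registered:
grep -E 'nvidia|drm' /proc/devices
```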
Inside the Container
Inside the LXC, I installed the same NVIDIA driver version with --no-kernel-modules — the container shares the host kernel, so it only needs the userspace libraries:
./NVIDIA-Linux-x86_64-580.119.02.run --no-kernel-modules
Then nvidia-container-toolkit 1.18.2 configures Docker to use the NVIDIA runtime. A quick nvidia-smi confirms everything sees the GPU.
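The toolkit ships an `nvidia-ctk` helper that rewrites Docker's daemon config for you. A minimal sketch of the wiring inside the container:

```shell
# Inside the LXC: register the NVIDIA runtime with Docker and reload it.
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker

# Containers can now request the GPU with --gpus all (or a runtime: nvidia
# entry in a compose file).
```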
Who Uses What
The GPU isn’t a single-use resource. Different services use different capabilities:
- Plex: NVENC (encoding) and NVDEC (decoding) for hardware transcoding. These use dedicated silicon on the GPU, separate from the CUDA cores.
- Frigate: Uses ffmpeg with NVDEC hardware acceleration for decoding camera streams. Detection still runs on CPU.
- Immich ML: CUDA for machine learning inference (face recognition, smart search).
- Ollama: CUDA for LLM inference. Loads models into VRAM on demand (~5.5GB for a 7B model), unloads after 5 minutes idle.
The beautiful thing: NVENC/NVDEC are fixed-function hardware, so Plex transcoding doesn’t compete with CUDA workloads. Ollama and Immich share the CUDA cores, but since Ollama unloads models when idle, they rarely conflict in practice.
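You can watch this split live: `nvidia-smi dmon` reports utilization in separate columns for the CUDA cores and the fixed-function encode/decode blocks.

```shell
# Per-process VRAM usage and which service currently holds the GPU:
nvidia-smi

# Rolling utilization: the sm column is CUDA (Ollama/Immich), while enc/dec
# are the NVENC/NVDEC blocks Plex and Frigate use.
nvidia-smi dmon -s u
```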
Lessons Learned
- Driver version matters enormously. Pascal on 590+ = no support. 580.x is the last branch.
- Install from .run file, not packages. Proxmox’s kernel situation makes DKMS unpredictable.
- Container needs matching driver version. Host and LXC must run the exact same NVIDIA driver.
- cgroup2 device majors have been stable in practice. 195 (NVIDIA) and 226 (DRM) are fixed assignments; the dynamically allocated nvidia-uvm majors haven't moved across reboots on my host, so it's configure once and forget.
- NVENC/NVDEC don’t compete with CUDA — they’re separate hardware blocks.
The end result: four different services sharing one mobile GPU, each getting hardware acceleration where it matters. The GTX 1070 is nearly a decade old, but it’s still pulling its weight.