Jellyfin is an open-source media server forked from Emby, with clients for most platforms. It supports real-time transcoding, including hardware-accelerated transcoding via VA-API, NVIDIA NVENC, Intel Quick Sync and AMD AMF.
GitHub - jellyfin/jellyfin: The Free Software Media System
Deploying Jellyfin is quite simple. Distributed deployment isn't recommended, since Jellyfin manages its data in an embedded SQLite database and doesn't use external services to manage real-time features such as group sessions.
Since Jellyfin needs volumes to access your library in Kubernetes, it is easiest to write a small custom Terraform module that maps local paths into Jellyfin's pod. K3s ships with the Rancher local-path storage provisioner, which backs PersistentVolumes with directories on the node, and Kubernetes also lets you mount hostPath volumes directly in pods.
Example Deployment of Jellyfin:
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.13.1"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "homelab01"
}

locals {
  namespace         = "jellyfin"
  jellyfin_data_pvc = "jellyfin-data-pvc"
}
resource "kubernetes_namespace" "jellyfin" {
  metadata {
    name = local.namespace
  }
}
resource "kubernetes_deployment" "jellyfin" {
  metadata {
    name      = "jellyfin"
    namespace = kubernetes_namespace.jellyfin.metadata.0.name
    labels = {
      "app" = "jellyfin"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        "app" = "jellyfin"
      }
    }

    template {
      metadata {
        labels = {
          "app" = "jellyfin"
        }
      }

      spec {
        container {
          image = "lscr.io/linuxserver/jellyfin:latest"
          name  = "jellyfin"

          env_from {
            config_map_ref {
              name = kubernetes_config_map.jellyfin_env.metadata.0.name
            }
          }

          port {
            name           = "web"
            container_port = 8096
          }

          # Jellyfin client auto-discovery uses UDP
          port {
            name           = "local-discovery"
            container_port = 7359
            protocol       = "UDP"
          }

          # DLNA/SSDP uses UDP
          port {
            name           = "dlna"
            container_port = 1900
            protocol       = "UDP"
          }

          volume_mount {
            name       = "data"
            mount_path = "/config"
          }

          volume_mount {
            name       = "movies"
            mount_path = "/data/movies"
          }

          volume_mount {
            name       = "tv"
            mount_path = "/data/tv"
          }

          resources {
            requests = {
              cpu = 2
            }
            limits = {
              cpu = 4
            }
          }
        }

        volume {
          name = "data"
          persistent_volume_claim {
            claim_name = local.jellyfin_data_pvc
          }
        }

        volume {
          name = "movies"
          host_path {
            path = "/mnt/media/library/movies"
            type = "Directory"
          }
        }

        volume {
          name = "tv"
          host_path {
            path = "/mnt/media/library/tv"
            type = "Directory"
          }
        }
      }
    }
  }
}
resource "kubernetes_persistent_volume_claim" "jellyfin_data" {
  metadata {
    name      = local.jellyfin_data_pvc
    namespace = kubernetes_namespace.jellyfin.metadata.0.name
  }

  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "15Gi"
      }
    }
    storage_class_name = "local-path"
  }
}
resource "kubernetes_service" "jellyfin_web" {
  metadata {
    name      = "jellyfin-web"
    namespace = kubernetes_namespace.jellyfin.metadata.0.name
  }

  spec {
    type = "LoadBalancer"
    selector = {
      "app" = "jellyfin"
    }

    port {
      name        = "web"
      port        = 8096
      target_port = "web"
    }
  }

  depends_on = [
    kubernetes_deployment.jellyfin
  ]
}
resource "kubernetes_service" "jellyfin_discovery" {
  metadata {
    name      = "jellyfin-local-discovery"
    namespace = kubernetes_namespace.jellyfin.metadata.0.name
  }

  spec {
    type = "LoadBalancer"
    selector = {
      "app" = "jellyfin"
    }

    # Both discovery protocols are UDP-based
    port {
      name        = "local-discovery"
      port        = 7359
      protocol    = "UDP"
      target_port = "local-discovery"
    }

    port {
      name        = "dlna"
      port        = 1900
      protocol    = "UDP"
      target_port = "dlna"
    }
  }
}
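With all the resources above saved in a Terraform module, the standard workflow applies them (the file layout is up to you; this sketch assumes everything lives in one directory):

```shell
# Download the hashicorp/kubernetes provider pinned above
terraform init

# Preview the namespace, deployment, PVC and services before creating anything
terraform plan

# Create the resources; confirm with "yes" when prompted
terraform apply
```

Once the apply finishes, kubectl -n jellyfin get pods should show the Jellyfin pod being scheduled and starting up.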
With LoadBalancer services exposing Jellyfin's ports, you can reach Jellyfin on the local network by getting the IP of your Kubernetes node and connecting to it directly. Optionally, you can set up a Traefik Ingress to expose Jellyfin to the web.
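If you go the Ingress route, a minimal resource can live in the same module and point at the web service defined above. This is a sketch, not a hardened setup: the hostname jellyfin.example.com is an assumption, and you'd normally add TLS on top:

```terraform
# Hypothetical hostname; replace with a DNS record pointing at your node.
resource "kubernetes_ingress_v1" "jellyfin" {
  metadata {
    name      = "jellyfin"
    namespace = kubernetes_namespace.jellyfin.metadata.0.name
  }

  spec {
    rule {
      host = "jellyfin.example.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = kubernetes_service.jellyfin_web.metadata.0.name
              port {
                name = "web"
              }
            }
          }
        }
      }
    }
  }
}
```

K3s ships with Traefik as the default Ingress controller, so no ingress_class_name is strictly required here; set one explicitly if you run multiple controllers.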
Jellyfin supports hardware acceleration; to enable it under Kubernetes, you need a Device Plugin that passes your GPU through to pods.
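For an AMD GPU, for example, one way to do this is to deploy the AMD GPU device plugin DaemonSet, which advertises an amd.com/gpu resource on nodes with a supported GPU. The manifest URL below reflects the upstream repository at the time of writing and may change:

```shell
# Deploy the AMD GPU device plugin; it exposes each node's GPUs
# as allocatable "amd.com/gpu" resources.
kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/master/k8s-ds-amdgpu-dp.yaml

# Verify that the node now advertises the GPU resource
kubectl describe node <node-name> | grep amd.com/gpu
```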
Once you've done that, you can add the GPU to your resource requests and limits, which will schedule the GPU for the pod:
resources {
  requests = {
    "amd.com/gpu" = 1
  }
  limits = {
    "amd.com/gpu" = 1
  }
}
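As an alternative to a device plugin, a common approach with the linuxserver image is to mount the node's /dev/dri render devices into the container with a hostPath volume. This is a sketch only: it bypasses the scheduler's GPU accounting, and depending on your setup the container may additionally need a privileged security context or the right supplemental group to open the devices:

```terraform
# Alternative sketch: expose the node's DRI devices directly.
# Add inside the container block:
volume_mount {
  name       = "dri"
  mount_path = "/dev/dri"
}

# And inside the pod spec block:
volume {
  name = "dri"
  host_path {
    path = "/dev/dri"
  }
}
```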
After that, head to the Jellyfin settings to enable hardware-accelerated encoding. The settings will vary based on your configuration. If you need to exec into the Jellyfin pod to inspect the devices, you can use this command:
kubectl -n <jellyfin-namespace> exec -it <jellyfin-pod> -- /bin/sh
This will allow you to run ls -l /dev/dri to view all your devices, and vainfo to verify the entrypoints supported by your GPU:
/usr/lib/jellyfin-ffmpeg/vainfo
Trying display: drm
libva info: VA-API version 1.16.0
libva info: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_16
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.16 (libva 2.16.0)
vainfo: Driver version: Mesa Gallium driver 22.2.4 for AMD Radeon RX 6400 (navi24, LLVM 14.0.0, DRM 3.48, 6.0.0-6-amd64)
vainfo: Supported profile and entrypoints
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileAV1Profile0            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
The output above shows vainfo for the RX 6400, which does not support hardware encoding. If you see output without encoding entrypoints (such as VAEntrypointEncSlice), you won't be able to use hardware-accelerated encoding in Jellyfin and will have to rely on your CPU for software encoding.
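On a GPU that does list encoding entrypoints, you can sanity-check hardware encoding from the same shell with the bundled jellyfin-ffmpeg. The input path below is a placeholder pointing at one of the mounted library files:

```shell
# Hypothetical input file; decodes and re-encodes 10 seconds on the GPU,
# discarding the output. Errors here usually point at driver or permission issues.
/usr/lib/jellyfin-ffmpeg/ffmpeg \
  -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -hwaccel_output_format vaapi \
  -i /data/movies/sample.mkv \
  -t 10 -c:v h264_vaapi -f null -
```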