QFlood is a Docker image packaged by hotio.dev that bundles QBittorrent with Flood, a web UI for managing the download client.
You can deploy QFlood to Kubernetes with Terraform and configure a host path volume that your media management suite (Sonarr, Radarr, etc.) can access to automate media downloads.
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.13.1"
    }
  }
}
resource "kubernetes_deployment" "qflood" {
metadata {
name = "qflood"
namespace = var.namespace
labels = {
"servarr.app" = "qflood"
}
}
spec {
replicas = 1
selector {
match_labels = {
"servarr.app" = "qflood"
}
}
template {
metadata {
labels = {
"servarr.app" = "qflood"
}
}
spec {
container {
image = "cr.hotio.dev/hotio/qflood:release-4.3.9--4.7.0"
name = "qflood"
env_from {
config_map_ref {
name = kubernetes_config_map.qflood_env.metadata.0.name
}
}
port {
container_port = 8080
name = "qbittorrent"
}
port {
container_port = 3000
name = "floodui"
}
volume_mount {
name = "data"
mount_path = "/config"
}
volume_mount {
name = "downloads"
mount_path = "/downloads"
}
}
volume {
name = "data"
persistent_volume_claim {
claim_name = "qflood-data"
}
}
volume {
name = "downloads"
host_path {
path = "/mnt/media/torrents/downloads"
type = "Directory"
}
}
}
}
}
}
resource "kubernetes_persistent_volume_claim" "qflood_data" {
metadata {
name = "qflood-data"
namespace = var.namespace
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
storage_class_name = "local-path"
}
}
resource "kubernetes_service" "qbittorrent" {
metadata {
name = "qbittorrent"
namespace = var.namespace
}
spec {
type = "ClusterIP"
selector = {
"servarr.app" = "qflood"
}
port {
name = "qbittorrent"
port = 8080
target_port = "qbittorrent"
}
}
depends_on = [
kubernetes_deployment.qflood
]
}
resource "kubernetes_service" "floodui" {
metadata {
name = "floodui"
namespace = var.namespace
}
spec {
type = "ClusterIP"
selector = {
"servarr.app" = "qflood"
}
port {
name = "floodui"
port = 3000
target_port = "floodui"
}
}
depends_on = [
kubernetes_deployment.qflood
]
}
resource "kubernetes_config_map" "qflood_env" {
metadata {
name = "qflood-env"
namespace = var.namespace
}
data = {
"PUID" = "1000"
"PGID" = "1000"
"TZ" = "Europe/Zurich"
"FLOOD_AUTH" = "true"
}
}
You can use the LoadBalancer service type to expose QBittorrent's and FloodUI's ports on your network, but you'll most likely want to access them via a domain, which you can do with Traefik as a reverse proxy.
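As a minimal sketch, a kubernetes_ingress_v1 resource routed through Traefik could look like the following; the flood.example.com hostname and the websecure entrypoint annotation are assumptions you'd replace with your own values:
// Sketch: expose FloodUI through Traefik with a standard Ingress.
// Hostname and entrypoint annotation are placeholders for your own setup.
resource "kubernetes_ingress_v1" "floodui" {
  metadata {
    name      = "floodui"
    namespace = var.namespace
    annotations = {
      "traefik.ingress.kubernetes.io/router.entrypoints" = "websecure"
    }
  }

  spec {
    ingress_class_name = "traefik"
    rule {
      host = "flood.example.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = kubernetes_service.floodui.metadata.0.name
              port {
                name = "floodui"
              }
            }
          }
        }
      }
    }
  }
}
The same pattern works for the QBittorrent service if you want its Web UI reachable by domain as well.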
Once you've configured QBittorrent, you should immediately change the default admin password (adminadmin) under Tools > Options > Web UI > Authentication. You can use the same credentials in FloodUI to attach it to QBittorrent.
Hotio's QFlood container comes with built-in WireGuard support; read more about it in hotio's documentation.
To copy the WireGuard configuration file (wg0.conf) into the /config/wireguard directory, you'll need an initContainer that copies the file from a Secret or ConfigMap. This ensures the file ends up with the proper permissions:
// spec.template.spec
init_container {
  image = "busybox"
  name  = "copy-wireguard-config"

  command = ["sh", "-c", "mkdir -p /config/wireguard && cp /wireguard/wg0.conf /config/wireguard/wg0.conf"]

  volume_mount {
    name       = "data"
    mount_path = "/config"
  }

  volume_mount {
    name       = "wireguard"
    mount_path = "/wireguard"
  }
}
You can mount the config using a volume, referencing a Secret containing the config or a ConfigMap:
// spec.template.spec
volume {
  name = "wireguard"
  secret {
    secret_name = kubernetes_secret.wireguard_conf.metadata.0.name
  }
}
The Secret can be declared in Terraform with a kubernetes_secret resource:
resource "kubernetes_secret" "wireguard_conf" {
metadata {
name = "qflood-wireguard"
namespace = var.namespace
}
data = {
"wg0.conf" = <<-EOT
[Interface]
PrivateKey = supersecretprivatekey
Address = xx.xx.xxx.xxx/32
DNS = 1.1.1.1
[Peer]
PublicKey = publickey
AllowedIPs = 0.0.0.0/0
Endpoint = xxx.x.xxx.x:51820
EOT
}
}
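Rather than hardcoding the private key in your Terraform files, you could feed the whole wg0.conf in through a sensitive variable. A minimal sketch, assuming a variable named wireguard_conf supplied out of band (for example via TF_VAR_wireguard_conf or an untracked .tfvars file):
// Sketch: keep the WireGuard config out of version control by passing it
// in as a sensitive variable instead of embedding it in the resource.
variable "wireguard_conf" {
  type      = string
  sensitive = true
}

resource "kubernetes_secret" "wireguard_conf" {
  metadata {
    name      = "qflood-wireguard"
    namespace = var.namespace
  }

  data = {
    "wg0.conf" = var.wireguard_conf
  }
}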
Additionally, you'll have to set two sysctls:
// spec.template.spec
security_context {
  sysctl {
    name  = "net.ipv4.conf.all.src_valid_mark"
    value = "1"
  }

  sysctl {
    name  = "net.ipv6.conf.all.disable_ipv6"
    value = "1"
  }
}
If you're getting the SysctlForbidden scheduling error in K3s, see Allowing unsafe Sysctls to allow these values to be set by the Kubelet.
You'll also need to add the NET_ADMIN capability to the container:
// spec.template.spec.container
security_context {
  capabilities {
    add = ["NET_ADMIN"]
  }
}