replace radarr

Nicholas St. Germain 2020-09-05 21:45:52 -05:00
parent ba4e6b978c
commit 3070528d2f
No known key found for this signature in database
GPG Key ID: 7221152119DAB1E6
16 changed files with 20 additions and 741 deletions

View File

@@ -1,20 +0,0 @@
apiVersion: v2
name: radarr
description: Radarr Chart
type: application
version: 1.0.0
appVersion: 3.0.0.3591
keywords:
- radarr
home: https://github.com/k8s-at-home/charts/tree/master/charts/media-common/radarr
sources:
- https://github.com/Radarr/Radarr
- https://hub.docker.com/r/itscontained/radarr
maintainers:
- name: DirtyCajunRice
  email: nick@cajun.pro
dependencies:
- name: media-common
  repository: https://k8s-at-home.com/charts/
  version: 1.0.0
  alias: radarr

View File

@@ -1,11 +0,0 @@
# Default values for radarr.
radarr:
  image:
    organization: itscontained
    repository: radarr
    pullPolicy: IfNotPresent
    tag: ""
  service:
    port: 7878
  configPath: /var/lib/radarr

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# OWNERS file for Kubernetes
OWNERS

View File

@@ -1,17 +1,20 @@
 apiVersion: v2
-appVersion: 3.0.0.3543
-description: Radarr is a movie downloading client
 name: radarr
-version: 5.0.1
+description: Radarr Chart
+type: application
+version: 6.0.0
+appVersion: 3.0.0.3591
 keywords:
 - radarr
-- usenet
-- bittorrent
 home: https://github.com/k8s-at-home/charts/tree/master/charts/radarr
-icon: https://avatars3.githubusercontent.com/u/25025331?s=400&v=4
 sources:
-- https://hub.docker.com/r/linuxserver/radarr/
-- https://github.com/Radarr/Radarr/
+- https://github.com/Radarr/Radarr
+- https://hub.docker.com/r/itscontained/radarr
 maintainers:
-- name: billimek
-  email: jeff@billimek.com
+- name: DirtyCajunRice
+  email: nick@cajun.pro
+dependencies:
+- name: media-common
+  repository: https://k8s-at-home.com/charts/
+  version: 1.0.0
+  alias: radarr

View File

@@ -1,4 +0,0 @@
approvers:
- billimek
reviewers:
- billimek

View File

@@ -1,131 +0,0 @@
# radarr movie download client
This is a helm chart for [radarr](https://github.com/Radarr/Radarr/) leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/radarr/).
## TL;DR;
```shell
$ helm repo add k8s-at-home https://k8s-at-home.com/charts/
$ helm install k8s-at-home/radarr
```
## Installing the Chart
To install the chart with the release name `my-release`:
```console
helm install --name my-release k8s-at-home/radarr
```
## Upgrading
Chart versions 3.2.0 and earlier used separate PVCs for Downloads and Movies. This presented an issue where Radarr was unable to hard-link files between the /downloads and /movies directories when importing media, because each PVC is exposed to the pod as a separate filesystem. As a result, Radarr copied files rather than linking them, consuming additional storage without the user's knowledge.
This chart now uses a single PVC for Downloads and Movies. This means all of your media (and downloads) must be in, or be subdirectories of, a single directory. If upgrading from v1 of the chart, do the following:
1. [Uninstall](#uninstalling-the-chart) your current release
2. On your backing store, organize your media, e.g. media/movies, media/downloads
3. If using a pre-existing PVC, create a single new PVC for all of your media (see the values sketch after this list)
4. Refer to the [configuration](#configuration) for updates to the chart values
5. Re-install the chart
6. Update your settings in the app to point to the new PVC, which is mounted at /media. This can be done using Radarr's `Movie Editor` under the `Movies` tab. Simply select all movies in your library, and use the editor to change the `Root Folder` and hit save.
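As a rough sketch, step 3 might translate to values like the following, where `media-claim` is a placeholder for whatever single PVC you created for all of your media:
```yaml
persistence:
  media:
    enabled: true
    # Placeholder name: the single PVC that now holds media/movies and media/downloads
    existingClaim: media-claim
```
The chart mounts this claim at /media, so both directories live on one filesystem and Radarr can hard-link between them again.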
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
helm delete my-release --purge
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the radarr chart and their default values.
| Parameter | Description | Default |
| ------------------------------------------- | -------------------------------------------------------------------------------------------- | ---------------------------------------------- |
| `image.repository` | Image repository | `linuxserver/radarr` |
| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/radarr/tags/). | `3.0.0.3543-ls21` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `timezone` | Timezone the instance should run as, e.g. 'America/New_York' | `UTC` |
| `puid` | process userID the instance should run as | `1001` |
| `pgid` | process groupID the instance should run as | `1001` |
| `exportarr.enabled` | Enable Prometheus monitoring with [Exportarr](https://github.com/onedr0p/exportarr) | `false` |
| `exportarr.image.repository` | Exportarr image repository | `onedr0p/exportarr` |
| `exportarr.image.tag` | Exportarr image tag | `v0.3.0` |
| `exportarr.image.pullPolicy` | Exportarr image pullPolicy | `IfNotPresent` |
| `exportarr.port` | Prometheus scrape port | `9708` |
| `exportarr.url` | Radarr's URL | `http://radarr.default.svc.cluster.local:7878` |
| `exportarr.apikey` | Radarr's API Key | |
| `exportarr.serviceMonitor.enabled` | Enable Prometheus Operator ServiceMonitor monitoring | `false` |
| `exportarr.serviceMonitor.namespace` | Define namespace where to deploy the ServiceMonitor resource | (namespace where you are deploying) |
| `exportarr.serviceMonitor.path` | Prometheus scrape path | `/metrics` |
| `exportarr.serviceMonitor.interval` | Prometheus scrape interval | `4m` |
| `exportarr.serviceMonitor.scrapeTimeout` | Prometheus scrape timeout | `90s` |
| `exportarr.serviceMonitor.additionalLabels` | Add custom labels to ServiceMonitor | `{}` |
| `probes.liveness.initialDelaySeconds` | Specify liveness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
| `probes.liveness.timeoutSeconds` | Specify liveness `timeoutSeconds` parameter for the deployment | `10` |
| `probes.readiness.initialDelaySeconds` | Specify readiness `initialDelaySeconds` parameter for the deployment | `60` |
| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
| `probes.readiness.timeoutSeconds` | Specify readiness `timeoutSeconds` parameter for the deployment | `10` |
| `service.type` | Kubernetes service type for the GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the GUI is exposed | `7878` |
| `service.annotations` | Service annotations for the GUI | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.loadBalancerIP` | Load balancer IP for the GUI | `nil` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
| `persistence.config.subPath` | Mount a sub directory if set | `nil ` |
| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.media.enabled` | Use persistent volume to store media | `true` |
| `persistence.media.size` | Size of persistent volume claim | `10Gi` |
| `persistence.media.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.media.storageClass` | Type of persistent volume claim | `-` |
| `persistence.media.subPath` | Mount a sub directory if set | `nil ` |
| `persistence.media.accessMode` | Persistence access mode | `ReadWriteOnce` |
| `persistence.media.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
| `persistence.extraExistingClaimMounts` | Optionally add multiple existing claims | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install --name my-release \
--set timezone="America/New_York" \
k8s-at-home/radarr
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install --name my-release -f values.yaml k8s-at-home/radarr
```
---
**NOTE**
If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...`, it may be because you previously uninstalled the chart with `skipuninstall` enabled. In that case, manually delete the leftover PVC or reference it with `existingClaim`.
---
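For example, if the conflict points at a leftover config PVC from a release named `my-release`, cleanup might look like this (a sketch; verify the claim name with `kubectl get pvc` before deleting anything):
```console
kubectl get pvc
kubectl delete pvc my-release-radarr-config
```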
Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/radarr/values.yaml) file. It has several commented out suggested values.

View File

@@ -1,19 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "radarr.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ include "radarr.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "radarr.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "radarr.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
{{- end }}

View File

@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "radarr.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "radarr.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "radarr.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "radarr.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
  annotations:
    "helm.sh/resource-policy": keep
{{- end }}
  labels:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    helm.sh/chart: {{ include "radarr.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  accessModes:
    - {{ .Values.persistence.config.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,149 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "radarr.fullname" . }}
{{- if .Values.deploymentAnnotations }}
  annotations:
  {{- range $key, $value := .Values.deploymentAnnotations }}
    {{ $key }}: {{ $value | quote }}
  {{- end }}
{{- end }}
  labels:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    helm.sh/chart: {{ include "radarr.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 1
  revisionHistoryLimit: 3
  strategy:
    type: {{ .Values.strategyType }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "radarr.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "radarr.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    {{- if .Values.podAnnotations }}
      annotations:
      {{- range $key, $value := .Values.podAnnotations }}
        {{ $key }}: {{ $value | quote }}
      {{- end }}
    {{- end }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 7878
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: {{ .Values.probes.liveness.initialDelaySeconds }}
            failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
            timeoutSeconds: {{ .Values.probes.liveness.timeoutSeconds }}
          readinessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}
            failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
            timeoutSeconds: {{ .Values.probes.readiness.timeoutSeconds }}
          env:
            - name: TZ
              value: "{{ .Values.timezone }}"
            - name: PUID
              value: "{{ .Values.puid }}"
            - name: PGID
              value: "{{ .Values.pgid }}"
          volumeMounts:
            - mountPath: /config
              name: config
            {{- if .Values.persistence.config.subPath }}
              subPath: {{ .Values.persistence.config.subPath }}
            {{- end }}
            - mountPath: /media
              name: media
            {{- if .Values.persistence.media.subPath }}
              subPath: {{ .Values.persistence.media.subPath }}
            {{- end }}
          {{- range .Values.persistence.extraExistingClaimMounts }}
            - name: {{ .name }}
              mountPath: {{ .mountPath }}
              readOnly: {{ .readOnly }}
          {{- end }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- if .Values.exportarr.enabled }}
        - name: radarr-exporter
          image: "{{ .Values.exportarr.image.repository }}:{{ .Values.exportarr.image.tag }}"
          imagePullPolicy: {{ .Values.exportarr.image.pullPolicy }}
          command: ["exportarr"]
          args: ["radarr"]
          env:
            - name: PORT
              value: "{{ .Values.exportarr.port }}"
            - name: URL
              value: "{{ .Values.exportarr.url }}"
            - name: APIKEY
              value: "{{ .Values.exportarr.apikey }}"
          ports:
            - name: monitoring
              containerPort: {{ .Values.exportarr.port }}
          livenessProbe:
            httpGet:
              path: /healthz
              port: monitoring
            failureThreshold: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: monitoring
            failureThreshold: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 256Mi
      {{- end }}
      volumes:
        - name: config
        {{- if .Values.persistence.config.enabled }}
          persistentVolumeClaim:
            claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "radarr.fullname" . }}-config{{- end }}
        {{- else }}
          emptyDir: {}
        {{- end }}
        - name: media
        {{- if .Values.persistence.media.enabled }}
          persistentVolumeClaim:
            claimName: {{ if .Values.persistence.media.existingClaim }}{{ .Values.persistence.media.existingClaim }}{{- else }}{{ template "radarr.fullname" . }}-media{{- end }}
        {{- else }}
          emptyDir: {}
        {{- end }}
      {{- range .Values.persistence.extraExistingClaimMounts }}
        - name: {{ .name }}
          persistentVolumeClaim:
            claimName: {{ .existingClaim }}
      {{- end }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}

View File

@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "radarr.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    helm.sh/chart: {{ include "radarr.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  {{- with .Values.ingress.labels -}}
  {{ toYaml . | nindent 4 }}
  {{- end -}}
  {{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ . | quote }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: http
  {{- end }}
{{- end }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.media.enabled (not .Values.persistence.media.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "radarr.fullname" . }}-media
{{- if .Values.persistence.media.skipuninstall }}
  annotations:
    "helm.sh/resource-policy": keep
{{- end }}
  labels:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    helm.sh/chart: {{ include "radarr.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  accessModes:
    - {{ .Values.persistence.media.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.media.size | quote }}
{{- if .Values.persistence.media.storageClass }}
{{- if (eq "-" .Values.persistence.media.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.media.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

View File

@@ -1,20 +0,0 @@
{{- if .Values.exportarr.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "radarr.fullname" . }}-exporter
  labels:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    helm.sh/chart: {{ include "radarr.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  clusterIP: None
  ports:
    - name: monitoring
      port: {{ .Values.exportarr.port }}
      targetPort: monitoring
  selector:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -1,52 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ template "radarr.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    helm.sh/chart: {{ include "radarr.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
  type: ClusterIP
  {{- if .Values.service.clusterIP }}
  clusterIP: {{ .Values.service.clusterIP }}
  {{ end }}
{{- else if eq .Values.service.type "LoadBalancer" }}
  type: {{ .Values.service.type }}
  {{- if .Values.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
  {{- end }}
  {{- if .Values.service.loadBalancerSourceRanges }}
  loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
  {{- end -}}
{{- else }}
  type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
  externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
  externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      protocol: TCP
      targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
      nodePort: {{ .Values.service.nodePort }}
{{ end }}
  selector:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,24 +0,0 @@
{{- if .Values.exportarr.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "radarr.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "radarr.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "radarr.chart" . }}
    {{- with .Values.exportarr.serviceMonitor.additionalLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "radarr.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  endpoints:
    - port: monitoring
      interval: {{ .Values.exportarr.serviceMonitor.interval }}
      scrapeTimeout: {{ .Values.exportarr.serviceMonitor.scrapeTimeout }}
      path: {{ .Values.exportarr.serviceMonitor.path }}
{{- end }}

View File

@@ -1,151 +1,11 @@
 # Default values for radarr.
-# This is a YAML-formatted file.
-# Declare variables to be passed into your templates.
-
-image:
-  repository: linuxserver/radarr
-  tag: 3.0.0.3543-ls21
-  pullPolicy: IfNotPresent
-
-# upgrade strategy type (e.g. Recreate or RollingUpdate)
-strategyType: Recreate
-
-# Probes configuration
-probes:
-  liveness:
-    initialDelaySeconds: 60
-    failureThreshold: 5
-    timeoutSeconds: 10
-  readiness:
-    initialDelaySeconds: 60
-    failureThreshold: 5
-    timeoutSeconds: 10
-
-# Prometheus Metrics
-exportarr:
-  enabled: false
-  image:
-    repository: onedr0p/exportarr
-    tag: v0.3.0
-    pullPolicy: IfNotPresent
-  url: "http://radarr.default.svc.cluster.local:7878"
-  apikey:
-  port: 9708
-  serviceMonitor:
-    enabled: false
-    namespace: default
-    path: /metrics
-    interval: 4m
-    scrapeTimeout: 90s
-    additionalLabels: {}
-
-nameOverride: ""
-fullnameOverride: ""
-
-timezone: UTC
-puid: 1001
-pgid: 1001
-
-service:
-  type: ClusterIP
-  port: 7878
-  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
-  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
-  ##
-  # nodePort:
-  ## Provide any additional annotations which may be required. This can be used to
-  ## set the LoadBalancer service type to internal only.
-  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
-  ##
-  annotations: {}
-  labels: {}
-  ## Use loadBalancerIP to request a specific static IP,
-  ## otherwise leave blank
-  ##
-  loadBalancerIP:
-  # loadBalancerSourceRanges: []
-  ## Set the externalTrafficPolicy in the Service to either Cluster or Local
-  # externalTrafficPolicy: Cluster
-
-ingress:
-  enabled: false
-  annotations: {}
-    # kubernetes.io/ingress.class: nginx
-    # kubernetes.io/tls-acme: "true"
-  labels: {}
-  path: /
-  hosts:
-    - chart-example.local
-  tls: []
-  #  - secretName: chart-example-tls
-  #    hosts:
-  #      - chart-example.local
-
-persistence:
-  config:
-    enabled: true
-    ## radarr configuration data Persistent Volume Storage Class
-    ## If defined, storageClassName: <storageClass>
-    ## If set to "-", storageClassName: "", which disables dynamic provisioning
-    ## If undefined (the default) or set to null, no storageClassName spec is
-    ##   set, choosing the default provisioner. (gp2 on AWS, standard on
-    ##   GKE, AWS & OpenStack)
-    ##
-    # storageClass: "-"
-    ##
-    ## If you want to reuse an existing claim, you can pass the name of the PVC using
-    ## the existingClaim variable
-    # existingClaim: your-claim
-    # subPath: some-subpath
-    accessMode: ReadWriteOnce
-    size: 1Gi
-    ## Do not delete the pvc upon helm uninstall
-    skipuninstall: false
-  media:
-    enabled: true
-    ## radarr media volume configuration
-    ## If defined, storageClassName: <storageClass>
-    ## If set to "-", storageClassName: "", which disables dynamic provisioning
-    ## If undefined (the default) or set to null, no storageClassName spec is
-    ##   set, choosing the default provisioner. (gp2 on AWS, standard on
-    ##   GKE, AWS & OpenStack)
-    ##
-    # storageClass: "-"
-    ##
-    ## If you want to reuse an existing claim, you can pass the name of the PVC using
-    ## the existingClaim variable
-    # existingClaim: your-claim
-    # subPath: some-subpath
-    accessMode: ReadWriteOnce
-    size: 10Gi
-    ## Do not delete the pvc upon helm uninstall
-    skipuninstall: false
-  extraExistingClaimMounts: []
-  # - name: external-mount
-  #   mountPath: /srv/external-mount
-  #   ## A manually managed Persistent Volume and Claim
-  #   ## If defined, PVC must be created manually before volume will be bound
-  #   existingClaim:
-  #   readOnly: true
-
-resources: {}
-  # We usually recommend not to specify default resources and to leave this as a conscious
-  # choice for the user. This also increases chances charts run on environments with little
-  # resources, such as Minikube. If you do want to specify resources, uncomment the following
-  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
-  # limits:
-  #   cpu: 100m
-  #   memory: 128Mi
-  # requests:
-  #   cpu: 100m
-  #   memory: 128Mi
-
-nodeSelector: {}
-
-tolerations: []
-
-affinity: {}
-
-podAnnotations: {}
-
-deploymentAnnotations: {}
+radarr:
+  image:
+    organization: itscontained
+    repository: radarr
+    pullPolicy: IfNotPresent
+    tag: ""
+  service:
+    port: 7878
+  configPath: /var/lib/radarr