[nzbhydra2] use common chart (#128)

* [nzbhydra2] use common chart

* remove app name in app.env example
Authored by ᗪєνιη ᗷυнʟ on 2020-11-09 07:19:58 -05:00; committed by GitHub
parent 4566fda5ea
commit 8c93651a63
12 changed files with 54 additions and 429 deletions

View File

@@ -1,8 +1,8 @@
 apiVersion: v2
-appVersion: v2.26.0
+appVersion: v3.4.3
 description: Usenet meta search
 name: nzbhydra2
-version: 3.0.2
+version: 4.0.0
 keywords:
 - nzbhydra2
 - usenet
@@ -14,3 +14,7 @@ sources:
 maintainers:
 - name: billimek
   email: jeff@billimek.com
+dependencies:
+- name: common
+  repository: https://k8s-at-home.com/charts/
+  version: ^1.0.4
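The new `dependencies` block means the common library has to be present before the chart can be rendered or installed from a local checkout. A minimal sketch, assuming a clone of the charts repository with this chart at `charts/nzbhydra2`:

```console
# pull the common library declared in Chart.yaml into the chart's charts/ directory
helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm dependency update charts/nzbhydra2
```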

View File

@@ -1,6 +1,6 @@
-# nzbhydra2
-This is a helm chart for [nzbhydra2](https://github.com/theotherp/nzbhydra2) leveraging the [Linuxserver.io image](https://hub.docker.com/r/linuxserver/hydra2/)
+# Nzbhydra2
+This is a helm chart for [Nzbhydra2](https://github.com/theotherp/nzbhydra2).

 ## TL;DR;
@@ -28,70 +28,49 @@ helm delete my-release --purge
 The command removes all the Kubernetes components associated with the chart and deletes the release.

 ## Configuration

-The following tables lists the configurable parameters of the Sentry chart and their default values.
+Read through the charts [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/nzbhydra2/values.yaml)
+file. It has several commented out suggested values.
+Additionally you can take a look at the common library [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/common/values.yaml) for more (advanced) configuration options.
-| Parameter | Description | Default |
-|----------------------------|-------------------------------------|---------------------------------------------------------|
-| `image.repository` | Image repository | `linuxserver/hydra2` |
-| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/linuxserver/nzbhydra2/tags/).| `v2.22.2-ls9`|
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
-| `timezone` | Timezone the nzbhydra2 instance should run as, e.g. 'America/New_York' | `UTC` |
-| `puid` | process userID the nzbhydra2 instance should run as | `1001` |
-| `pgid` | process groupID the nzbhydra2 instance should run as | `1001` |
-| `probes.liveness.failureThreshold` | Specify liveness `failureThreshold` parameter for the deployment | `5` |
-| `probes.liveness.periodSeconds` | Specify liveness `periodSeconds` parameter for the deployment | `10` |
-| `probes.readiness.failureThreshold` | Specify readiness `failureThreshold` parameter for the deployment | `5` |
-| `probes.readiness.periodSeconds` | Specify readiness `periodSeconds` parameter for the deployment | `10` |
-| `probes.startup.initialDelaySeconds` | Specify startup `initialDelaySeconds` parameter for the deployment | `5` |
-| `probes.startup.failureThreshold` | Specify startup `failureThreshold` parameter for the deployment | `30` |
-| `probes.startup.periodSeconds` | Specify startup `periodSeconds` parameter for the deployment | `10` |
-| `service.type` | Kubernetes service type for the nzbhydra2 GUI | `ClusterIP` |
-| `service.port` | Kubernetes port where the nzbhydra2 GUI is exposed| `5076` |
-| `service.annotations` | Service annotations for the nzbhydra2 GUI | `{}` |
-| `service.labels` | Custom labels | `{}` |
-| `service.loadBalancerIP` | Loadbalance IP for the nzbhydra2 GUI | `{}` |
-| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | None |
-| `ingress.enabled` | Enables Ingress | `false` |
-| `ingress.annotations` | Ingress annotations | `{}` |
-| `ingress.labels` | Custom labels | `{}` |
-| `ingress.path` | Ingress path | `/` |
-| `ingress.hosts` | Ingress accepted hostnames | `chart-example.local` |
-| `ingress.tls` | Ingress TLS configuration | `[]` |
-| `persistence.config.enabled` | Use persistent volume to store configuration data | `true` |
-| `persistence.config.size` | Size of persistent volume claim | `1Gi` |
-| `persistence.config.existingClaim` | Use an existing PVC to persist data | `nil` |
-| `persistence.config.subPath` | Mount a sub directory of the persistent volume if set | `""` |
-| `persistence.config.storageClass` | Type of persistent volume claim | `-` |
-| `persistence.config.accessMode` | Persistence access mode | `ReadWriteOnce` |
-| `persistence.config.skipuninstall` | Do not delete the pvc upon helm uninstall | `false` |
-| `resources` | CPU/Memory resource requests/limits | `{}` |
-| `nodeSelector` | Node labels for pod assignment | `{}` |
-| `tolerations` | Toleration labels for pod assignment | `[]` |
-| `affinity` | Affinity settings for pod assignment | `{}` |
-| `podAnnotations` | Key-value pairs to add as pod annotations | `{}` |
-| `deploymentAnnotations` | Key-value pairs to add as deployment annotations | `{}` |
 Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

 ```console
-helm install --name my-release \
-  --set timezone="America/New York" \
+helm install nzbhydra2 \
+  --set env.TZ="America/New York" \
   k8s-at-home/nzbhydra2
 ```

-Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the
+chart. For example,

 ```console
-helm install --name my-release -f values.yaml k8s-at-home/nzbhydra2
+helm install nzbhydra2 k8s-at-home/nzbhydra2 --values values.yaml
 ```
+
+```yaml
+image:
+  tag: ...
+```
 ---
 **NOTE**

-If you get `Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...` it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.
+If you get
+
+```console
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: ...
+```
+
+it may be because you uninstalled the chart with `skipuninstall` enabled, you need to manually delete the pvc or use `existingClaim`.

 ---

-Read through the [values.yaml](https://github.com/k8s-at-home/charts/blob/master/charts/nzbhydra2/values.yaml) file. It has several commented out suggested values.
+## Upgrading an existing Release to a new major version
+
+A major chart version change (like 4.0.1 -> 5.0.0) indicates that there is an incompatible breaking change potentially needing manual actions.
+
+### Upgrading from 3.x.x to 4.x.x
+
+Due to migrating to a centralized common library some values in `values.yaml` have changed.
+
+Examples:
+
+* `service.port` has been moved to `service.port.port`.
+* `persistence.type` has been moved to `controllerType`.
+
+Refer to the library values.yaml for more configuration options.
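To make the renamed values concrete, here is a small before/after sketch of an override file; the old keys come from the removed parameter table above, the new ones from the 4.x `values.yaml` and the bullets in this section.

```yaml
# 3.x.x (old chart)
# timezone: America/New_York
# service:
#   port: 5076

# 4.x.x (common library layout)
env:
  TZ: America/New_York
service:
  port:
    port: 5076
```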

View File

@@ -0,0 +1,2 @@
ingress:
  enabled: true

View File

@@ -1,19 +1 @@
-1. Get the application URL by running these commands:
-{{- if .Values.ingress.enabled }}
-{{- range .Values.ingress.hosts }}
-  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
-{{- end }}
-{{- else if contains "NodePort" .Values.service.type }}
-  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "nzbhydra2.fullname" . }})
-  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
-  echo http://$NODE_IP:$NODE_PORT
-{{- else if contains "LoadBalancer" .Values.service.type }}
-  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
-  You can watch the status of by running 'kubectl get svc -w {{ include "nzbhydra2.fullname" . }}'
-  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "nzbhydra2.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
-  echo http://$SERVICE_IP:{{ .Values.service.port }}
-{{- else if contains "ClusterIP" .Values.service.type }}
-  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "nzbhydra2.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
-  echo "Visit http://127.0.0.1:5076 to use your application"
-  kubectl port-forward $POD_NAME 5076:80
-{{- end }}
+{{- include "common.notes.defaultNotes" . -}}
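The chart-specific instructions above are replaced by the library's generated notes. If you just want to reach the web UI without configuring an ingress, a port-forward along these lines still works; the service name is an assumption here (it depends on your release name, check `kubectl get svc`):

```console
kubectl port-forward svc/nzbhydra2 5076:5076
# NZBHydra2 is then reachable at http://127.0.0.1:5076
```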

View File

@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nzbhydra2.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nzbhydra2.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nzbhydra2.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

View File

@@ -0,0 +1 @@
{{ include "common.all" . }}

View File

@@ -1,29 +0,0 @@
{{- if and .Values.persistence.config.enabled (not .Values.persistence.config.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "nzbhydra2.fullname" . }}-config
{{- if .Values.persistence.config.skipuninstall }}
annotations:
"helm.sh/resource-policy": keep
{{- end }}
labels:
app.kubernetes.io/name: {{ include "nzbhydra2.name" . }}
helm.sh/chart: {{ include "nzbhydra2.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.config.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.config.size | quote }}
{{- if .Values.persistence.config.storageClass }}
{{- if (eq "-" .Values.persistence.config.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.config.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}
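This removed template is also where the `skipuninstall` behaviour lived: with it enabled the claim carried a `helm.sh/resource-policy: keep` annotation and survived `helm uninstall`, which is what triggers the "existing resource conflict" note in the README. If such a claim is left over after upgrading it can be cleaned up manually; the claim name below is an assumption based on the old template for a release called `nzbhydra2`:

```console
# claims created by the pre-4.x chart carry these labels
kubectl get pvc -l app.kubernetes.io/name=nzbhydra2
# remove the old config claim, or point persistence.config.existingClaim at it instead
kubectl delete pvc nzbhydra2-config
```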

View File

@@ -1,95 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "nzbhydra2.fullname" . }}
{{- if .Values.deploymentAnnotations }}
annotations:
{{- range $key, $value := .Values.deploymentAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
labels:
app.kubernetes.io/name: {{ include "nzbhydra2.name" . }}
helm.sh/chart: {{ include "nzbhydra2.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 1
revisionHistoryLimit: 3
strategy:
type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "nzbhydra2.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "nzbhydra2.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podAnnotations }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 5076
protocol: TCP
livenessProbe:
tcpSocket:
port: http
failureThreshold: {{ .Values.probes.liveness.failureThreshold }}
periodSeconds: {{ .Values.probes.liveness.periodSeconds }}
readinessProbe:
tcpSocket:
port: http
failureThreshold: {{ .Values.probes.readiness.failureThreshold }}
periodSeconds: {{ .Values.probes.readiness.periodSeconds }}
startupProbe:
tcpSocket:
port: http
initialDelaySeconds: {{ .Values.probes.startup.initialDelaySeconds }}
failureThreshold: {{ .Values.probes.startup.failureThreshold }}
periodSeconds: {{ .Values.probes.startup.periodSeconds }}
env:
- name: TZ
value: "{{ .Values.timezone }}"
- name: PUID
value: "{{ .Values.puid }}"
- name: PGID
value: "{{ .Values.pgid }}"
volumeMounts:
- mountPath: /config
name: config
{{- if .Values.persistence.config.subPath }}
subPath: "{{ .Values.persistence.config.subPath }}"
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: config
{{- if .Values.persistence.config.enabled }}
persistentVolumeClaim:
claimName: {{ if .Values.persistence.config.existingClaim }}{{ .Values.persistence.config.existingClaim }}{{- else }}{{ template "nzbhydra2.fullname" . }}-config{{- end }}
{{- else }}
emptyDir: {}
{{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
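The TZ, PUID and PGID variables that were hard-wired into this deployment are now ordinary entries in the `env:` map (see the new `values.yaml` at the end of this commit). A minimal sketch of the equivalent settings in the 4.x layout:

```yaml
env:
  TZ: "UTC"
  PUID: "1001"
  PGID: "1001"
```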

View File

@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "nzbhydra2.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "nzbhydra2.name" . }}
helm.sh/chart: {{ include "nzbhydra2.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.labels -}}
{{ toYaml . | nindent 4 }}
{{- end -}}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}
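Ingress objects are likewise rendered by the common library now; the CI values file earlier in this commit only sets `enabled: true`, and host, path and TLS options follow the common chart's values.yaml rather than the keys that were documented here. A hedged sketch (sub-keys other than `enabled` are the library's, not this chart's):

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
```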

View File

@@ -1,53 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "nzbhydra2.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "nzbhydra2.name" . }}
helm.sh/chart: {{ include "nzbhydra2.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{end}}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end -}}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
ports:
- name: http
port: {{ .Values.service.port }}
protocol: TCP
targetPort: http
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
selector:
app.kubernetes.io/name: {{ include "nzbhydra2.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}

View File

@@ -1,113 +1,20 @@
-# Default values for nzbhydra2.
-# This is a YAML-formatted file.
-# Declare variables to be passed into your templates.
+# Default values for Jackett.

 image:
   repository: linuxserver/nzbhydra2
-  tag: v2.26.0-ls14
   pullPolicy: IfNotPresent
+  tag: version-v3.4.3

-# upgrade strategy type (e.g. Recreate or RollingUpdate)
-strategyType: Recreate
-# Probes configuration
-probes:
-  liveness:
-    failureThreshold: 5
-    periodSeconds: 10
-  readiness:
-    failureThreshold: 5
-    periodSeconds: 10
-  startup:
-    initialDelaySeconds: 5
-    failureThreshold: 30
-    periodSeconds: 10
-nameOverride: ""
-fullnameOverride: ""
-timezone: UTC
-puid: 1001
-pgid: 1001
 service:
-  type: ClusterIP
-  port: 5076
-  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
-  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
-  ##
-  # nodePort:
-  ## Provide any additional annotations which may be required. This can be used to
-  ## set the LoadBalancer service type to internal only.
-  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
-  ##
-  annotations: {}
-  labels: {}
-  ## Use loadBalancerIP to request a specific static IP,
-  ## otherwise leave blank
-  ##
-  loadBalancerIP:
-  # loadBalancerSourceRanges: []
-  ## Set the externalTrafficPolicy in the Service to either Cluster or Local
-  # externalTrafficPolicy: Cluster
+  port:
+    port: 5076
-ingress:
-  enabled: false
-  annotations: {}
-    # kubernetes.io/ingress.class: nginx
-    # kubernetes.io/tls-acme: "true"
-  labels: {}
-  path: /
-  hosts:
-    - chart-example.local
-  tls: []
-  #  - secretName: chart-example-tls
-  #    hosts:
-  #      - chart-example.local
+env: {}
+  # TZ: UTC
+  # PUID: 1001
+  # PGID: 1001
 persistence:
   config:
     enabled: true
-    ## nzbhydra2 configuration data Persistent Volume Storage Class
-    ## If defined, storageClassName: <storageClass>
-    ## If set to "-", storageClassName: "", which disables dynamic provisioning
-    ## If undefined (the default) or set to null, no storageClassName spec is
-    ##   set, choosing the default provisioner. (gp2 on AWS, standard on
-    ##   GKE, AWS & OpenStack)
-    ##
-    # storageClass: "-"
-    ##
-    ## If you want to reuse an existing claim, you can pass the name of the PVC using
-    ## the existingClaim variable
-    # existingClaim: your-claim
-    accessMode: ReadWriteOnce
-    size: 1Gi
-    ## If subPath is set mount a sub folder of a volume instead of the root of the volume.
-    ## This is especially handy for volume plugins that don't natively support sub mounting (like glusterfs).
-    ##
-    subPath: ""
-    ## Do not delete the pvc upon helm uninstall
-    skipuninstall: false
+    emptyDir: true
-resources: {}
-  # We usually recommend not to specify default resources and to leave this as a conscious
-  # choice for the user. This also increases chances charts run on environments with little
-  # resources, such as Minikube. If you do want to specify resources, uncomment the following
-  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
-  # limits:
-  #   cpu: 100m
-  #   memory: 128Mi
-  # requests:
-  #   cpu: 100m
-  #   memory: 128Mi
-nodeSelector: {}
-tolerations: []
-affinity: {}
-podAnnotations: {}
-deploymentAnnotations: {}
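With the values translated to the new layout, upgrading an existing 3.x release is a normal Helm upgrade; a sketch, assuming the release is called `nzbhydra2` and your overrides live in `values.yaml`:

```console
helm repo update
helm upgrade nzbhydra2 k8s-at-home/nzbhydra2 --values values.yaml
```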