[otel-collector] add open telemetry collector with example config (#1447)

Co-authored-by: Mike Terhar <mike@terhar.com>
Co-authored-by: Devin Buhl <onedr0p@users.noreply.github.com>
Mike Terhar 2022-03-14 15:13:08 -04:00 committed by GitHub
parent 798bfdf3af
commit 7c19db377d
10 changed files with 432 additions and 1 deletions

View File

@ -52,7 +52,7 @@ tasks:
   helm-docs:
     desc: generate helm-docs
-    dir: "{{.GIT_ROOT}}/hack"
+    dir: "{{.GIT_ROOT}}/.github/scripts"
     cmds:
       - ./gen-helm-docs.sh "{{.CHART_TYPE}}" "{{.CHART}}"
     deps:

View File

@ -0,0 +1,26 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# OWNERS file for Kubernetes
OWNERS
# helm-docs templates
*.gotmpl

View File

@ -0,0 +1,23 @@
apiVersion: v2
appVersion: 0.46.0
description: OpenTelemetry collector helm package
name: otel-collector
version: 1.0.0
kubeVersion: ">=1.16.0-0"
keywords:
  - otel-collector
  - open telemetry
  - tracing
home: https://github.com/k8s-at-home/charts/tree/master/charts/stable/otel-collector
icon: https://otel-collector.org/icon
sources:
  - https://github.com/otel-collector/otel-collector-docker
maintainers:
  - name: mterhar
    email: mike@terhar.com
dependencies:
  - name: common
    repository: https://library-charts.k8s-at-home.com
    version: 4.3.0
annotations:
  artifacthub.io/changes: |
    - kind: added
      description: Initial version

View File

@ -0,0 +1,50 @@
# otel-collector

![Version: 1.0.0](https://img.shields.io/badge/Version-1.0.0-informational?style=flat-square) ![AppVersion: 0.46.0](https://img.shields.io/badge/AppVersion-0.46.0-informational?style=flat-square)

OpenTelemetry collector helm package

**Homepage:** <https://github.com/k8s-at-home/charts/tree/master/charts/stable/otel-collector>

## Maintainers

| Name | Email | Url |
| ---- | ------ | --- |
| mterhar | mike@terhar.com | |

## Source Code

* <https://github.com/otel-collector/otel-collector-docker>

## Requirements

Kubernetes: `>=1.16.0-0`

| Repository | Name | Version |
|------------|------|---------|
| https://library-charts.k8s-at-home.com | common | 4.3.0 |

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| configFile | string | `nil` | Create a new secret with the following multi-line spec, which gets mounted to /conf/otel-collector-config.yaml. For more information, see the [otel docs](https://opentelemetry.io/docs/collector/configuration/) |
| configFileSecret | string | `nil` | Configure the OpenTelemetry config using an existing secret, or create a configuration file using `configFile` below. The secret needs a single key inside it called `otelConfigFile`. |
| image.pullPolicy | string | `"IfNotPresent"` | image pull policy |
| image.repository | string | `"otel/opentelemetry-collector-contrib"` | image repository |
| image.tag | string | `nil` | image tag |
| ingress.main | object | disabled | Enable and configure ingress settings for the chart under this key. This OTEL Collector is built to trust items within the same cluster, so exposing it externally will allow unauthenticated traces to be processed. |
| metrics.enabled | bool | `false` | Configure a Prometheus serviceMonitor for the collector's built-in metrics exporter. `enabled: false` is set because, while the collector can scrape itself, circular dependencies are never good; enable this for a secondary scraper. |
| metrics.prometheusRule | object | See values.yaml | Enable and configure Prometheus Rules for the chart under this key. |
| metrics.prometheusRule.rules | list | See prometheusrules.yaml | Configure additional rules for the chart under this key. |
| metrics.serviceMonitor.interval | string | `"3m"` | |
| metrics.serviceMonitor.labels | object | `{}` | |
| metrics.serviceMonitor.scrapeTimeout | string | `"1m"` | |
| probes | object | expects config to include `extensions:health_check:endpoint: 0.0.0.0:13133` | Probes use the collector's health_check extension to get health information from the pod |
| service | object | The defaults expose the services needed to receive http and otlp traces | Configures service settings for the chart. |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| serviceAccount.create | bool | `false` | Specifies whether a service account should be created |
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |

----------------------------------------------

Autogenerated from chart metadata using [helm-docs v1.7.0](https://github.com/norwoodj/helm-docs/releases/v1.7.0)

View File

@ -0,0 +1,105 @@
{{- define "custom.custom.configuration.header" -}}
## Custom configuration
{{- end -}}
{{- define "custom.custom.configuration" -}}
{{ template "custom.custom.configuration.header" . }}
The OpenTelemetry Collector is used to receive, process, and deliver OpenTelemetry traces to a backend.
There are many backends that can be used by setting "exporter" configurations.
The example configuration in this repository sends data to [Honeycomb.io](https://honeycomb.io), since they provide a free plan with 50,000,000 events per month.

See the values.yaml file for an example configuration and reference the [OpenTelemetry docs](https://opentelemetry.io/docs/collector/configuration/) for additional options.
Be sure to replace all placeholders in the values file that are encased in double brackets, like `[[something]]`.

The example below configures several different telemetry backends for illustration purposes.
Most people will have only one telemetry backend, though the example shows how to send to multiple.
If you do not wish to use a service, for example New Relic, delete the exporter called `otlp/newrelic`, remove the pipeline `traces/2` which sends to that exporter, and remove `otlp/newrelic` from any other pipeline that references it. A trimmed single-backend sketch is shown after the example.
```yaml
configFile: |-
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "0.0.0.0:4317"
        http:
          endpoint: "0.0.0.0:4318"
  processors:
    batch:
    memory_limiter:
      # 80% of maximum memory up to 2G
      limit_mib: 1500
      # 25% of limit up to 2G
      spike_limit_mib: 512
      check_interval: 5s
  extensions:
    health_check:
      endpoint: 0.0.0.0:13133
    zpages: {}
    memory_ballast:
      # Memory Ballast size should be max 1/3 to 1/2 of memory.
      size_mib: 683
  exporters:
    logging:
      logLevel: debug
    otlp/honeycombtraces:
      endpoint: api.honeycomb.io:443
      headers:
        x-honeycomb-team: [[YourAPIKeyHere]]
        x-honeycomb-dataset: [[YourApplicationDataSetHere]]
    otlp/newrelic:
      endpoint: otlp.nr-data.net:4317
      headers:
        api-key: [[YourTokenHere]]
    otlp/lightstep:
      endpoint: ingest.lightstep.com:443
      headers:
        {"lightstep-access-token": "[[YourTokenHere]]"}
    otlp/sapm:
      access_token: [[YourTokenHere]]
      access_token_passthrough: true
      endpoint: https://ingest.us0.signalfx.com/v2/trace
      max_connections: 100
      num_workers: 8
    otlp/signalfx:
      access_token: [[YourTokenHere]]
      realm: us0
      correlation:
  service:
    extensions: [zpages, memory_ballast, health_check]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [memory_limiter, batch]
        exporters: [otlp/honeycombtraces]
      traces/2:
        receivers: [otlp]
        processors: [memory_limiter, batch]
        exporters: [otlp/newrelic]
      traces/3:
        receivers: [otlp]
        processors: [memory_limiter, batch]
        exporters: [otlp/sapm, otlp/signalfx]
      traces/4:
        receivers: [otlp]
        processors: [memory_limiter, batch]
        exporters: [otlp/lightstep]
```
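After trimming the example down to a single backend (Honeycomb in this case), the `exporters` and `pipelines` sections of `configFile` reduce to something like the sketch below; this is only an illustration, not a chart default.

```yaml
# Sketch: the relevant part of configFile after removing the other backends.
exporters:
  logging:
    logLevel: debug
  otlp/honeycombtraces:
    endpoint: api.honeycomb.io:443
    headers:
      x-honeycomb-team: [[YourAPIKeyHere]]
      x-honeycomb-dataset: [[YourApplicationDataSetHere]]
service:
  extensions: [zpages, memory_ballast, health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp/honeycombtraces]
```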
To get the secret for troubleshooting, you can use a command like:
```bash
kubectl get secret -A --selector=configsecret=otelcollector -o go-template='{{range .items}}{{"----\n# "}}{{ .metadata.name }}{{"."}}{{ .metadata.namespace }}{{":"}}{{"\n"}}{{.data.otelConfigFile|base64decode}}{{end}}'
```
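If you prefer to manage the collector configuration outside of the chart, `configFileSecret` can reference a pre-existing secret instead of the inline `configFile`. A minimal sketch, assuming a secret named `my-otel-config` (the name is just an example), looks like:

```yaml
# Pre-existing secret holding the collector configuration.
# The chart expects the configuration under the key `otelConfigFile`.
apiVersion: v1
kind: Secret
metadata:
  name: my-otel-config
stringData:
  otelConfigFile: |-
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    exporters:
      logging: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
```

Then set `configFileSecret: my-otel-config` in your values and leave `configFile` unset.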
{{- end -}}

View File

@ -0,0 +1,6 @@
Send telemetry to this pod via the clusterIP:
1. HTTP to http://{{ include "common.names.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.otlpports.ports.otlphttp.port }}
2. gRPC to {{ include "common.names.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.otlpports.ports.otlpgrpc.port }}
It will then be processed and sent to exporters based on the configFile or configFileSecret.
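For example, an application instrumented with an OpenTelemetry SDK can usually be
pointed at this collector by setting the standard OTLP endpoint environment variable
in its pod spec (a sketch; adjust for your SDK and protocol):

  env:
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: "http://{{ include "common.names.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.otlpports.ports.otlphttp.port }}"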

View File

@ -0,0 +1,28 @@
{{/* Make sure all variables are set properly */}}
{{- include "common.values.setup" . }}
{{/* Append the hardcoded settings */}}
{{- define "otel-collector.harcodedValues" -}}
{{/* merge the config file path argument, which is hard coded */}}
args:
  - "--config=/conf/otel-collector-config.yaml"
{{/* Append the config secret volume to the volumes */}}
persistence:
  otel-config-file:
    enabled: true
    type: "custom"
    mountPath: "/conf/otel-collector-config.yaml"
    subPath: "otelConfigFile"
    volumeSpec:
      secret:
        {{- if .Values.configFileSecret }}
        secretName: "{{ .Values.configFileSecret }}"
        {{- else }}
        secretName: "{{ include "common.names.fullname" . }}-otelconfig"
        {{- end }}
{{- end -}}
{{- $_ := mergeOverwrite .Values (include "otel-collector.harcodedValues" . | fromYaml) -}}
{{ include "common.all" . }}

View File

@ -0,0 +1,18 @@
{{/*
The open telemetry config secret to be included.
*/}}
{{- if and .Values.configFile (not .Values.configFileSecret) }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "common.names.fullname" . }}-otelconfig
  labels:
    configsecret: otelcollector
    {{- include "common.labels" $ | nindent 4 }}
stringData:
  {{- with .Values.configFile }}
  otelConfigFile: |-
    {{- . | nindent 4 }}
  {{- end }}
{{- end -}}

View File

@ -0,0 +1,24 @@
{{- if .Values.metrics.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "common.names.fullname" . }}
  labels:
    {{- include "common.labels" . | nindent 4 }}
    {{- with .Values.metrics.serviceMonitor.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  selector:
    matchLabels:
      {{- include "common.labels.selectorLabels" . | nindent 6 }}
  endpoints:
    - port: metrics
      {{- with .Values.metrics.serviceMonitor.interval }}
      interval: {{ . }}
      {{- end }}
      {{- with .Values.metrics.serviceMonitor.scrapeTimeout }}
      scrapeTimeout: {{ . }}
      {{- end }}
      path: /
{{- end }}

View File

@ -0,0 +1,151 @@
#
# IMPORTANT NOTE
#
# This chart inherits from our common library chart. You can check the default values/options here:
# https://github.com/k8s-at-home/library-charts/tree/main/charts/stable/common/values.yaml
#
image:
  # -- image repository
  repository: otel/opentelemetry-collector-contrib
  # -- image tag
  tag:
  # -- image pull policy
  pullPolicy: IfNotPresent

# -- Configures service settings for the chart.
# @default -- The defaults expose the services needed to receive http and otlp traces
service:
  main:
    enabled: false
  otlpports:
    enabled: true
    type: ClusterIP
    ports:
      # Default endpoint for OpenTelemetry gRPC receiver.
      otlpgrpc:
        enabled: true
        protocol: TCP
        port: 4317
        targetPort: 4317
      # Default endpoint for OpenTelemetry HTTP receiver.
      otlphttp:
        enabled: true
        protocol: TCP
        port: 4318
        targetPort: 4318
      # Default endpoint for querying metrics.
      metrics:
        enabled: true
        protocol: TCP
        port: 8888
        targetPort: 8888
# -- Probes use the collector's health_check extension to get health information from the pod
# @default -- expects config to include `extensions:health_check:endpoint: 0.0.0.0:13133`
probes:
  liveness:
    enabled: true
    custom: true
    spec:
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 1
      failureThreshold: 3
      httpGet:
        path: /
        port: 13133
  readiness:
    enabled: false
  startup:
    enabled: false

ingress:
  # -- Enable and configure ingress settings for the chart under this key.
  # This OTEL Collector is built to trust items within the same cluster, so
  # exposing it externally will allow unauthenticated traces to be processed.
  # @default -- disabled
  main:
    enabled: false
# -- Configure the OpenTelemetry config using an existing secret, or create
# a configuration file using `configFile` below.
# The secret needs a single key inside it called `otelConfigFile`.
configFileSecret:

# -- Create a new secret with the following multi-line spec, which gets mounted
# to /conf/otel-collector-config.yaml. For more information, see the
# [otel docs](https://opentelemetry.io/docs/collector/configuration/)
configFile: |-
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "0.0.0.0:4317"
        http:
          endpoint: "0.0.0.0:4318"
  processors:
    batch:
    memory_limiter:
      # 80% of maximum memory up to 2G
      limit_mib: 1500
      # 25% of limit up to 2G
      spike_limit_mib: 512
      check_interval: 5s
  extensions:
    health_check:
      endpoint: 0.0.0.0:13133
    zpages: {}
    memory_ballast:
      # Memory Ballast size should be max 1/3 to 1/2 of memory.
      size_mib: 683
  exporters:
    logging:
      logLevel: debug
  service:
    extensions: [zpages, memory_ballast, health_check]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [memory_limiter, batch]
        exporters: [logging]
metrics:
  # -- Configure a Prometheus serviceMonitor for the collector's built-in metrics exporter.
  # `enabled: false` is set because, while the collector can scrape itself, circular
  # dependencies are never good; enable this for a secondary scraper.
  enabled: false
  serviceMonitor:
    interval: 3m
    scrapeTimeout: 1m
    labels: {}
  # -- Enable and configure Prometheus Rules for the chart under this key.
  # @default -- See values.yaml
  prometheusRule:
    enabled: false
    labels: {}
    # -- Configure additional rules for the chart under this key.
    # @default -- See prometheusrules.yaml
    rules: []
      # - alert: OtelCollectorDown
      #   annotations:
      #     description: Otel Collector service is down.
      #     summary: Otel Collector is down.
      #   expr: |
      #     up == 0
      #   for: 5m
      #   labels:
      #     severity: critical
serviceAccount:
  # -- Specifies whether a service account should be created
  create: false
  # -- Annotations to add to the service account
  annotations: {}
  # -- The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""