sasquatch Helm values reference

Helm values reference table for the sasquatch application. Values for the sasquatch chart itself are listed first, followed by values for each subchart.
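
For orientation, here is a minimal sketch of how a deployment might override a few of the values documented below, assuming the usual Phalanx pattern of per-environment `values-<environment>.yaml` files. The hostname and size shown are illustrative, not defaults:

```yaml
# Hypothetical per-environment overrides for sasquatch.
chronograf:
  enabled: true
  ingress:
    enabled: true
    hostname: data.example.org  # required whenever the ingress is enabled
    tls: true
influxdb:
  enabled: true
  persistence:
    enabled: true
    size: 1Ti  # illustrative; see the table for environment-specific defaults
```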

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global.baseUrl | string | Set by Argo CD | Base URL for the environment |
| global.host | string | Set by Argo CD | Host name for ingress |
| global.vaultSecretsPath | string | Set by Argo CD | Base path for Vault secrets |
| app-metrics.apps | list |  | The apps to create configuration for. |
| app-metrics.enabled | bool |  | Enable the app-metrics subchart with topic, user, and telegraf configurations |
| chronograf.enabled | bool |  | Whether Chronograf is enabled |
| chronograf.env | object | See `values.yaml` | Additional environment variables for Chronograf |
| chronograf.envFromSecret | string |  | Name of secret to use. The keys |
| chronograf.image.repository | string |  | Docker image to use for Chronograf |
| chronograf.image.tag | string |  | Docker tag to use for Chronograf |
| chronograf.ingress.className | string |  | Ingress class to use |
| chronograf.ingress.enabled | bool |  | Whether to enable the Chronograf ingress |
| chronograf.ingress.hostname | string | None, must be set if the ingress is enabled | Hostname of the ingress |
| chronograf.ingress.path | string |  | Path for the ingress |
| chronograf.ingress.tls | bool |  | Whether to obtain TLS certificates for the ingress hostname |
| chronograf.persistence.enabled | bool |  | Whether to enable persistence for Chronograf data |
| chronograf.persistence.size | string |  | Size of data store to request, if enabled |
| chronograf.resources | object | See `values.yaml` | Kubernetes resource requests and limits for Chronograf |
| influxdb-enterprise.enabled | bool |  | Whether to use influxdb-enterprise |
| influxdb.config.continuous_queries.enabled | bool |  | Whether continuous queries are enabled |
| influxdb.config.coordinator.log-queries-after | string |  | Maximum duration a query can run before InfluxDB logs it as a slow query |
| influxdb.config.coordinator.max-concurrent-queries | int |  | Maximum number of running queries allowed on the instance (0 is unlimited) |
| influxdb.config.coordinator.query-timeout | string |  | Maximum duration a query is allowed to run before it is killed |
| influxdb.config.coordinator.write-timeout | string |  | Duration a write request waits before a timeout is returned to the caller |
| influxdb.config.data.cache-max-memory-size | int |  | Maximum size a shard's cache can reach before it starts rejecting writes |
| influxdb.config.data.max-series-per-database | int |  | Maximum number of series allowed per database before writes are dropped. Change the setting to 0 to allow an unlimited number of series per database. |
| influxdb.config.data.trace-logging-enabled | bool |  | Whether to enable verbose logging of additional debug information within the TSM engine and WAL |
| influxdb.config.data.wal-fsync-delay | string |  | Duration a write will wait before fsyncing. This is useful for slower disks or when WAL write contention is present. |
| influxdb.config.http.auth-enabled | bool |  | Whether authentication is required |
| influxdb.config.http.enabled | bool |  | Whether to enable the HTTP endpoints |
| influxdb.config.http.flux-enabled | bool |  | Whether to enable the Flux query endpoint |
| influxdb.config.http.max-row-limit | int |  | Maximum number of rows the system can return from a non-chunked query (0 is unlimited) |
| influxdb.config.logging.level | string |  | Logging level |
| influxdb.enabled | bool |  | Whether InfluxDB is enabled |
| influxdb.image.tag | string |  | InfluxDB image tag |
| influxdb.ingress.annotations | object | See `values.yaml` | Annotations to add to the ingress |
| influxdb.ingress.className | string |  | Ingress class to use |
| influxdb.ingress.enabled | bool |  | Whether to enable the InfluxDB ingress |
| influxdb.ingress.hostname | string | None, must be set if the ingress is enabled | Hostname of the ingress |
| influxdb.ingress.path | string |  | Path for the ingress |
| influxdb.ingress.tls | bool |  | Whether to obtain TLS certificates for the ingress hostname |
| influxdb.initScripts.enabled | bool |  | Whether to enable the InfluxDB custom initialization script |
| influxdb.persistence.enabled | bool |  | Whether to use persistent volume claims. By default, |
| influxdb.persistence.size | string | 1TiB for teststand deployments | Persistent volume size |
| influxdb.resources | object | See `values.yaml` | Kubernetes resource requests and limits |
| influxdb.setDefaultUser.enabled | bool |  | Whether the default InfluxDB user is set |
| influxdb.setDefaultUser.user.existingSecret | string |  | Use |
| kafdrop.enabled | bool |  | Whether Kafdrop is enabled |
| kafka-connect-manager | object |  | Overrides for kafka-connect-manager configuration |
| kafka-connect-manager-enterprise.enabled | bool |  | Whether enterprise kafka-connect-manager is enabled |
| kapacitor.enabled | bool |  | Whether Kapacitor is enabled |
| kapacitor.envVars | object | See `values.yaml` | Additional environment variables to set |
| kapacitor.existingSecret | string |  | Use |
| kapacitor.image.repository | string |  | Docker image to use for Kapacitor |
| kapacitor.image.tag | string |  | Tag to use for Kapacitor |
| kapacitor.influxURL | string |  | InfluxDB connection URL |
| kapacitor.persistence.enabled | bool |  | Whether to enable Kapacitor data persistence |
| kapacitor.persistence.size | string |  | Size of storage to request if enabled |
| kapacitor.resources | object | See `values.yaml` | Kubernetes resource requests and limits for Kapacitor |
| rest-proxy.enabled | bool |  | Whether to enable the REST proxy |
| squareEvents.enabled | bool |  | Enable the Square Events subchart with topic and user configurations |
| strimzi-kafka.connect.enabled | bool |  | Whether Kafka Connect is enabled |
| strimzi-kafka.kafka.listeners.external.enabled | bool |  | Whether the external listener is enabled |
| strimzi-kafka.kafka.listeners.plain.enabled | bool |  | Whether the internal plaintext listener is enabled |
| strimzi-kafka.kafka.listeners.tls.enabled | bool |  | Whether the internal TLS listener is enabled |
| strimzi-registry-operator.clusterName | string |  | Name of the Strimzi Kafka cluster |
| strimzi-registry-operator.clusterNamespace | string |  | Namespace where the Strimzi Kafka cluster is deployed |
| strimzi-registry-operator.operatorNamespace | string |  | Namespace where the strimzi-registry-operator is deployed |
| telegraf-kafka-consumer | object |  | Overrides for telegraf-kafka-consumer configuration |

Values for the app-metrics subchart (a configuration sketch follows the table):

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| app-metrics.affinity | object |  | Affinity for pod assignment |
| app-metrics.apps | list |  | A list of applications that will publish metrics events, and the keys that should be ingested into InfluxDB as tags. The names should be the same as the app names in Phalanx. |
| app-metrics.args | list |  | Arguments passed to the Telegraf agent containers |
| app-metrics.cluster.name | string |  | Name of the Strimzi cluster. Synchronize this with the cluster name in the parent Sasquatch chart. |
| app-metrics.debug | bool | false | Run Telegraf in debug mode. |
| app-metrics.env | list | See `values.yaml` | Telegraf agent environment variables |
| app-metrics.envFromSecret | string |  | Name of the secret with values to be added to the environment |
| app-metrics.globalAppConfig | object | See `values.yaml` | app-metrics configuration in any environment in which the subchart is enabled. This should stay globally specified here, and it shouldn't be overridden. See here for the structure of this value. |
| app-metrics.globalInfluxTags | list |  | Keys in every event sent by any app that should be recorded in InfluxDB as "tags" (vs. "fields"). These will be concatenated with the |
| app-metrics.image.pullPolicy | string |  | Image pull policy |
| app-metrics.image.repo | string |  | Telegraf image repository |
| app-metrics.image.tag | string |  | Telegraf image tag |
| app-metrics.imagePullSecrets | list |  | Secret names to use for Docker pulls |
| app-metrics.influxdb.url | string |  | URL of the InfluxDB v1 instance to write to |
| app-metrics.nodeSelector | object |  | Node labels for pod assignment |
| app-metrics.podAnnotations | object |  | Annotations for telegraf-kafka-consumer pods |
| app-metrics.podLabels | object |  | Labels for telegraf-kafka-consumer pods |
| app-metrics.replicaCount | int |  | Number of Telegraf replicas. Multiple replicas increase availability. |
| app-metrics.resources | object | See `values.yaml` | Kubernetes resource requests and limits |
| app-metrics.tolerations | list |  | Tolerations for pod assignment |
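
A sketch of what an app-metrics override might look like. The app names are placeholders, and the per-app shape under `globalAppConfig` (an `influxTags` key) is an assumption for illustration, not taken from the chart; the table above defers to the subchart for the authoritative structure:

```yaml
# Hypothetical app-metrics configuration.
app-metrics:
  enabled: true
  apps:
    - exampleapp          # placeholder Phalanx app name
  globalInfluxTags:
    - service             # tag recorded for every event from every app
  globalAppConfig:
    exampleapp:
      influxTags:         # assumed key; check the subchart's values.yaml
        - username
```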

Values for the influxdb-enterprise subchart (a configuration sketch follows the table):

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| influxdb-enterprise.bootstrap.auth.secretName | string |  | Enable authentication of the data nodes using this secret, by creating a username and password for an admin account. The secret must contain keys |
| influxdb-enterprise.bootstrap.ddldml.configMap | string | Do not run DDL or DML | A config map containing DDL and DML that define databases, retention policies, and inject some data. The keys |
| influxdb-enterprise.bootstrap.ddldml.resources | object |  | Kubernetes resources and limits for the bootstrap job |
| influxdb-enterprise.data.affinity | object | See `values.yaml` | Affinity rules for data pods |
| influxdb-enterprise.data.config.antiEntropy.enabled | bool |  | Enable the anti-entropy service, which copies and repairs shards |
| influxdb-enterprise.data.config.cluster.log-queries-after | string |  | Maximum duration a query can run before InfluxDB logs it as a slow query |
| influxdb-enterprise.data.config.cluster.max-concurrent-queries | int |  | Maximum number of running queries allowed on the instance (0 is unlimited) |
| influxdb-enterprise.data.config.cluster.query-timeout | string |  | Maximum duration a query is allowed to run before it is killed |
| influxdb-enterprise.data.config.continuousQueries.enabled | bool |  | Whether continuous queries are enabled |
| influxdb-enterprise.data.config.data.cache-max-memory-size | int |  | Maximum size a shard's cache can reach before it starts rejecting writes |
| influxdb-enterprise.data.config.data.trace-logging-enabled | bool |  | Whether to enable verbose logging of additional debug information within the TSM engine and WAL |
| influxdb-enterprise.data.config.data.wal-fsync-delay | string |  | Duration a write will wait before fsyncing. This is useful for slower disks or when WAL write contention is present. |
| influxdb-enterprise.data.config.hintedHandoff.max-size | int |  | Maximum size of the hinted-handoff queue in bytes |
| influxdb-enterprise.data.config.http.auth-enabled | bool |  | Whether authentication is required |
| influxdb-enterprise.data.config.http.flux-enabled | bool |  | Whether to enable the Flux query endpoint |
| influxdb-enterprise.data.config.logging.level | string |  | Logging level |
| influxdb-enterprise.data.env | object |  | Additional environment variables to set in the data container |
| influxdb-enterprise.data.image.pullPolicy | string |  | Pull policy for data images |
| influxdb-enterprise.data.image.repository | string |  | Docker repository for data images |
| influxdb-enterprise.data.ingress.annotations | object | See `values.yaml` | Extra annotations to add to the data ingress |
| influxdb-enterprise.data.ingress.className | string |  | Ingress class name of the data service |
| influxdb-enterprise.data.ingress.enabled | bool |  | Whether to enable an ingress for the data service |
| influxdb-enterprise.data.ingress.hostname | string | None, must be set if the ingress is enabled | Hostname of the data ingress |
| influxdb-enterprise.data.ingress.path | string |  | Path of the data service |
| influxdb-enterprise.data.nodeSelector | object |  | Node selection rules for data pods |
| influxdb-enterprise.data.persistence.accessMode | string |  | Access mode for the persistent volume claim |
| influxdb-enterprise.data.persistence.annotations | object |  | Annotations to add to the persistent volume claim |
| influxdb-enterprise.data.persistence.enabled | bool |  | Whether to persist data to a persistent volume |
| influxdb-enterprise.data.persistence.existingClaim | string | Use a volume claim template | Manually managed PersistentVolumeClaim to use. If defined, this PVC must be created manually before the data service will start |
| influxdb-enterprise.data.persistence.size | string |  | Size of persistent volume to request |
| influxdb-enterprise.data.persistence.storageClass | string |  | Storage class of the persistent volume (set to |
| influxdb-enterprise.data.podAnnotations | object |  | Annotations for data pods |
| influxdb-enterprise.data.podDisruptionBudget.minAvailable | int |  | Minimum available pods to maintain |
| influxdb-enterprise.data.podSecurityContext | object |  | Pod security context for data pods |
| influxdb-enterprise.data.preruncmds | list |  | Commands to run in data pods before InfluxDB is started. Each list entry should have a `cmd` key with the command to run and an optional `description` key describing that command |
| influxdb-enterprise.data.replicas | int |  | Number of data replicas to run |
| influxdb-enterprise.data.resources | object |  | Kubernetes resources and limits for the data container |
| influxdb-enterprise.data.securityContext | object |  | Security context for data pods |
| influxdb-enterprise.data.service.annotations | object |  | Extra annotations for the data service |
| influxdb-enterprise.data.service.externalIPs | list | Do not allocate external IPs | External IPs for the data service |
| influxdb-enterprise.data.service.externalTrafficPolicy | string | Do not set an external traffic policy | External traffic policy for the data service |
| influxdb-enterprise.data.service.loadBalancerIP | string | Do not allocate a load balancer IP | Load balancer IP for the data service |
| influxdb-enterprise.data.service.nodePort | int | Do not allocate a node port | Node port for the data service |
| influxdb-enterprise.data.service.type | string |  | Service type for the data service |
| influxdb-enterprise.data.tolerations | list |  | Tolerations for data pods |
| influxdb-enterprise.envFromSecret | string | No secret | The name of a secret in the same Kubernetes namespace that contains values to be added to the environment |
| influxdb-enterprise.fullnameOverride | string |  | Override the full name for resources (includes the release name) |
| influxdb-enterprise.image.addsuffix | bool |  | Set to true to add a suffix for the type of image to the Docker tag (for example, |
| influxdb-enterprise.image.tag | string |  | Tagged version of the Docker image that you want to run |
| influxdb-enterprise.imagePullSecrets | list |  | List of pull secrets needed for images. If set, each object in the list should have one attribute, `name`, identifying the pull secret to use |
| influxdb-enterprise.license.key | string |  | License key. You can put your license key here for testing this chart out, but we STRONGLY recommend using a license file stored in a secret when you ship to production. |
| influxdb-enterprise.license.secret.key | string |  | Key within that secret that contains the license |
| influxdb-enterprise.license.secret.name | string |  | Name of the secret containing the license |
| influxdb-enterprise.meta.affinity | object | See `values.yaml` | Affinity rules for meta pods |
| influxdb-enterprise.meta.env | object |  | Additional environment variables to set in the meta container |
| influxdb-enterprise.meta.image.pullPolicy | string |  | Pull policy for meta images |
| influxdb-enterprise.meta.image.repository | string |  | Docker repository for meta images |
| influxdb-enterprise.meta.ingress.annotations | object | See `values.yaml` | Extra annotations to add to the meta ingress |
| influxdb-enterprise.meta.ingress.className | string |  | Ingress class name of the meta service |
| influxdb-enterprise.meta.ingress.enabled | bool |  | Whether to enable an ingress for the meta service |
| influxdb-enterprise.meta.ingress.hostname | string | None, must be set if the ingress is enabled | Hostname of the meta ingress |
| influxdb-enterprise.meta.ingress.path | string |  | Path of the meta service |
| influxdb-enterprise.meta.nodeSelector | object |  | Node selection rules for meta pods |
| influxdb-enterprise.meta.persistence.accessMode | string |  | Access mode for the persistent volume claim |
| influxdb-enterprise.meta.persistence.annotations | object |  | Annotations to add to the persistent volume claim |
| influxdb-enterprise.meta.persistence.enabled | bool |  | Whether to persist data to a persistent volume |
| influxdb-enterprise.meta.persistence.existingClaim | string | Use a volume claim template | Manually managed PersistentVolumeClaim to use. If defined, this PVC must be created manually before the meta service will start |
| influxdb-enterprise.meta.persistence.size | string |  | Size of persistent volume to request |
| influxdb-enterprise.meta.persistence.storageClass | string |  | Storage class of the persistent volume (set to |
| influxdb-enterprise.meta.podAnnotations | object |  | Annotations for meta pods |
| influxdb-enterprise.meta.podDisruptionBudget.minAvailable | int |  | Minimum available pods to maintain |
| influxdb-enterprise.meta.podSecurityContext | object |  | Pod security context for meta pods |
| influxdb-enterprise.meta.preruncmds | list |  | Commands to run in meta pods before InfluxDB is started. Each list entry should have a `cmd` key with the command to run and an optional `description` key describing that command |
| influxdb-enterprise.meta.replicas | int |  | Number of meta pods to run |
| influxdb-enterprise.meta.resources | object |  | Kubernetes resources and limits for the meta container |
| influxdb-enterprise.meta.securityContext | object |  | Security context for meta pods |
| influxdb-enterprise.meta.service.annotations | object |  | Extra annotations for the meta service |
| influxdb-enterprise.meta.service.externalIPs | list | Do not allocate external IPs | External IPs for the meta service |
| influxdb-enterprise.meta.service.externalTrafficPolicy | string | Do not set an external traffic policy | External traffic policy for the meta service |
| influxdb-enterprise.meta.service.loadBalancerIP | string | Do not allocate a load balancer IP | Load balancer IP for the meta service |
| influxdb-enterprise.meta.service.nodePort | int | Do not allocate a node port | Node port for the meta service |
| influxdb-enterprise.meta.service.type | string |  | Service type for the meta service |
| influxdb-enterprise.meta.sharedSecret.secret | object |  | Shared secret used by the internal API for JWT authentication between InfluxDB nodes. Must have a key named |
| influxdb-enterprise.meta.sharedSecret.secret.key | string |  | Key within that secret that contains the shared secret |
| influxdb-enterprise.meta.sharedSecret.secret.name | string |  | Name of the secret containing the shared secret |
| influxdb-enterprise.meta.tolerations | list |  | Tolerations for meta pods |
| influxdb-enterprise.nameOverride | string |  | Override the base name for resources |
| influxdb-enterprise.serviceAccount.annotations | object |  | Annotations to add to the service account |
| influxdb-enterprise.serviceAccount.create | bool |  | Whether to create a Kubernetes service account to run as |
| influxdb-enterprise.serviceAccount.name | string | Name based on the chart fullname | Name of the Kubernetes service account to run as |
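
Following the license guidance in the table above (prefer a secret over an inline `license.key`), an override might look like this sketch. The secret name and key are placeholders:

```yaml
influxdb-enterprise:
  enabled: true
  license:
    secret:
      name: influxdb-enterprise-license  # placeholder secret name
      key: json                          # placeholder key holding the license
  meta:
    replicas: 3   # meta pods
  data:
    replicas: 2   # data pods
```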

Values for the kafdrop subchart:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kafdrop.affinity | object |  | Affinity configuration |
| kafdrop.cmdArgs | string | See `values.yaml` | Command-line arguments to Kafdrop |
| kafdrop.existingSecret | string | Do not use a secret | Existing Kubernetes secret used to set Kafdrop environment variables. Set |
| kafdrop.host | string |  | The hostname to report for the RMI registry (used for JMX) |
| kafdrop.image.pullPolicy | string |  | Image pull policy |
| kafdrop.image.repository | string |  | Kafdrop Docker image repository |
| kafdrop.image.tag | string |  | Kafdrop image version |
| kafdrop.ingress.annotations | object |  | Additional ingress annotations |
| kafdrop.ingress.enabled | bool |  | Whether to enable the ingress |
| kafdrop.ingress.hostname | string | None, must be set if the ingress is enabled | Ingress hostname |
| kafdrop.ingress.path | string |  | Ingress path |
| kafdrop.jmx.port | int |  | Port to use for JMX. If unspecified, JMX will not be exposed. |
| kafdrop.jvm.opts | string |  | JVM options |
| kafdrop.kafka.broker | string |  | Bootstrap list of Kafka host/port pairs |
| kafdrop.nodeSelector | object |  | Node selector configuration |
| kafdrop.podAnnotations | object |  | Pod annotations |
| kafdrop.replicaCount | int |  | Number of Kafdrop pods to run in the deployment |
| kafdrop.resources | object | See `values.yaml` | Kubernetes requests and limits for Kafdrop |
| kafdrop.schemaregistry | string |  | The endpoint of Schema Registry |
| kafdrop.server.port | int |  | The web server port to listen on |
| kafdrop.server.servlet.contextPath | string |  | The context path to serve requests on |
| kafdrop.service.annotations | object |  | Additional annotations to add to the service |
| kafdrop.service.port | int |  | Service port |
| kafdrop.tolerations | list |  | Tolerations configuration |

Values for the kafka-connect-manager subchart (a configuration sketch follows the table):

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kafka-connect-manager.enabled | bool |  | Whether to enable Kafka Connect Manager |
| kafka-connect-manager.env.kafkaBrokerUrl | string |  | Kafka broker URL |
| kafka-connect-manager.env.kafkaConnectUrl | string |  | Kafka Connect URL |
| kafka-connect-manager.env.kafkaUsername | string |  | Username for SASL authentication |
| kafka-connect-manager.image.pullPolicy | string |  | Pull policy for Kafka Connect Manager |
| kafka-connect-manager.image.repository | string |  | Docker image to use for Kafka Connect Manager |
| kafka-connect-manager.image.tag | string |  | Docker tag to use for Kafka Connect Manager |
| kafka-connect-manager.influxdbSink.autoUpdate | bool |  | Whether to check for new Kafka topics |
| kafka-connect-manager.influxdbSink.checkInterval | string |  | The interval, in milliseconds, to check for new topics and update the connector |
| kafka-connect-manager.influxdbSink.connectInfluxDb | string |  | InfluxDB database to write to |
| kafka-connect-manager.influxdbSink.connectInfluxErrorPolicy | string |  | Error policy; see the connector documentation for details |
| kafka-connect-manager.influxdbSink.connectInfluxMaxRetries | string |  | The maximum number of times a message is retried |
| kafka-connect-manager.influxdbSink.connectInfluxRetryInterval | string |  | The interval, in milliseconds, between retries. Only valid when connectInfluxErrorPolicy is set to |
| kafka-connect-manager.influxdbSink.connectInfluxUrl | string |  | InfluxDB URL |
| kafka-connect-manager.influxdbSink.connectProgressEnabled | bool |  | Enables the output for how many records have been processed |
| kafka-connect-manager.influxdbSink.connectors | object | See `values.yaml` | Connector instances to deploy. See |
| kafka-connect-manager.influxdbSink.excludedTopicsRegex | string |  | Regex to exclude topics from the list of selected topics from Kafka |
| kafka-connect-manager.influxdbSink.tasksMax | int |  | Maximum number of tasks to run the connector |
| kafka-connect-manager.influxdbSink.timestamp | string |  | Timestamp field to be used as the InfluxDB time. If not specified, use |
| kafka-connect-manager.jdbcSink.autoCreate | string |  | Whether to automatically create the destination table |
| kafka-connect-manager.jdbcSink.autoEvolve | string |  | Whether to automatically add columns in the table schema |
| kafka-connect-manager.jdbcSink.batchSize | string |  | Specifies how many records to attempt to batch together for insertion into the destination table |
| kafka-connect-manager.jdbcSink.connectionUrl | string |  | Database connection URL |
| kafka-connect-manager.jdbcSink.dbTimezone | string |  | Name of the JDBC timezone that should be used in the connector when inserting time-based values |
| kafka-connect-manager.jdbcSink.enabled | bool |  | Whether the JDBC Sink connector is deployed |
| kafka-connect-manager.jdbcSink.insertMode | string |  | The insertion mode to use. Supported modes are: |
| kafka-connect-manager.jdbcSink.maxRetries | string |  | The maximum number of times to retry on errors before failing the task |
| kafka-connect-manager.jdbcSink.name | string |  | Name of the connector to create |
| kafka-connect-manager.jdbcSink.retryBackoffMs | string |  | The time in milliseconds to wait following an error before a retry attempt is made |
| kafka-connect-manager.jdbcSink.tableNameFormat | string |  | A format string for the destination table name |
| kafka-connect-manager.jdbcSink.tasksMax | string |  | Number of Kafka Connect tasks |
| kafka-connect-manager.jdbcSink.topicRegex | string |  | Regex for selecting topics |
| kafka-connect-manager.s3Sink.behaviorOnNullValues | string |  | How to handle records with a null value (for example, Kafka tombstone records). Valid options are |
| kafka-connect-manager.s3Sink.checkInterval | string |  | The interval, in milliseconds, to check for new topics and update the connector |
| kafka-connect-manager.s3Sink.enabled | bool |  | Whether the Amazon S3 Sink connector is deployed |
| kafka-connect-manager.s3Sink.excludedTopicRegex | string |  | Regex to exclude topics from the list of selected topics from Kafka |
| kafka-connect-manager.s3Sink.flushSize | string |  | Number of records written to store before invoking file commits |
| kafka-connect-manager.s3Sink.locale | string |  | The locale to use when partitioning with TimeBasedPartitioner |
| kafka-connect-manager.s3Sink.name | string |  | Name of the connector to create |
| kafka-connect-manager.s3Sink.partitionDurationMs | string |  | The duration of a partition in milliseconds, used by TimeBasedPartitioner. Default is 1h for an hourly based partitioner |
| kafka-connect-manager.s3Sink.pathFormat | string |  | Pattern used to format the path in the S3 object name |
| kafka-connect-manager.s3Sink.rotateIntervalMs | string |  | The time interval in milliseconds to invoke file commits. Set to 10 minutes by default |
| kafka-connect-manager.s3Sink.s3BucketName | string |  | S3 bucket name. The bucket must already exist at the S3 provider |
| kafka-connect-manager.s3Sink.s3PartRetries | int |  | Maximum number of retry attempts for failed requests. Zero means no retries. |
| kafka-connect-manager.s3Sink.s3PartSize | int |  | The part size in S3 multi-part uploads. Valid values: [5242880,…,2147483647] |
| kafka-connect-manager.s3Sink.s3Region | string |  | S3 region |
| kafka-connect-manager.s3Sink.s3RetryBackoffMs | int |  | How long to wait in milliseconds before attempting the first retry of a failed S3 request |
| kafka-connect-manager.s3Sink.s3SchemaCompatibility | string |  | S3 schema compatibility |
| kafka-connect-manager.s3Sink.schemaCacheConfig | int |  | The size of the schema cache used in the Avro converter |
| kafka-connect-manager.s3Sink.storeUrl | string |  | The object storage connection URL, for non-AWS S3 providers |
| kafka-connect-manager.s3Sink.tasksMax | int |  | Number of Kafka Connect tasks |
| kafka-connect-manager.s3Sink.timestampExtractor | string |  | The extractor determines how to obtain a timestamp from each record |
| kafka-connect-manager.s3Sink.timestampField | string |  | The record field to be used as timestamp by the timestamp extractor. Only applies if timestampExtractor is set to RecordField. |
| kafka-connect-manager.s3Sink.timezone | string |  | The timezone to use when partitioning with TimeBasedPartitioner |
| kafka-connect-manager.s3Sink.topicsDir | string |  | Top-level directory to store the data ingested from Kafka |
| kafka-connect-manager.s3Sink.topicsRegex | string |  | Regex to select topics from Kafka |
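
As a sketch of how the influxdbSink settings compose. The URL, database, connector name, and per-connector fields below are placeholders and assumptions for illustration, not the authoritative connector schema:

```yaml
kafka-connect-manager:
  enabled: true
  influxdbSink:
    connectInfluxUrl: "http://sasquatch-influxdb:8086"  # placeholder URL
    connectInfluxDb: "efd"                              # placeholder database
    connectors:
      example:                         # one map entry per connector instance
        topicsRegex: "example\\..*"    # assumed per-connector topic regex
```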

Values for the kafka-connect-manager-enterprise subchart:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kafka-connect-manager-enterprise.enabled | bool |  | Whether to enable Kafka Connect Manager |
| kafka-connect-manager-enterprise.env.kafkaBrokerUrl | string |  | Kafka broker URL |
| kafka-connect-manager-enterprise.env.kafkaConnectUrl | string |  | Kafka Connect URL |
| kafka-connect-manager-enterprise.env.kafkaUsername | string |  | Username for SASL authentication |
| kafka-connect-manager-enterprise.image.pullPolicy | string |  | Pull policy for Kafka Connect Manager |
| kafka-connect-manager-enterprise.image.repository | string |  | Docker image to use for Kafka Connect Manager |
| kafka-connect-manager-enterprise.image.tag | string |  | Docker tag to use for Kafka Connect Manager |
| kafka-connect-manager-enterprise.influxdbSink.autoUpdate | bool |  | Whether to check for new Kafka topics |
| kafka-connect-manager-enterprise.influxdbSink.checkInterval | string |  | The interval, in milliseconds, to check for new topics and update the connector |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxDb | string |  | InfluxDB database to write to |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxErrorPolicy | string |  | Error policy; see the connector documentation for details |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxMaxRetries | string |  | The maximum number of times a message is retried |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxRetryInterval | string |  | The interval, in milliseconds, between retries. Only valid when connectInfluxErrorPolicy is set to |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxUrl | string |  | InfluxDB URL |
| kafka-connect-manager-enterprise.influxdbSink.connectProgressEnabled | bool |  | Enables the output for how many records have been processed |
| kafka-connect-manager-enterprise.influxdbSink.connectors | object | See `values.yaml` | Connector instances to deploy. See |
| kafka-connect-manager-enterprise.influxdbSink.excludedTopicsRegex | string |  | Regex to exclude topics from the list of selected topics from Kafka |
| kafka-connect-manager-enterprise.influxdbSink.tasksMax | int |  | Maximum number of tasks to run the connector |
| kafka-connect-manager-enterprise.influxdbSink.timestamp | string |  | Timestamp field to be used as the InfluxDB time. If not specified, use |
| kafka-connect-manager-enterprise.jdbcSink.autoCreate | string |  | Whether to automatically create the destination table |
| kafka-connect-manager-enterprise.jdbcSink.autoEvolve | string |  | Whether to automatically add columns in the table schema |
| kafka-connect-manager-enterprise.jdbcSink.batchSize | string |  | Specifies how many records to attempt to batch together for insertion into the destination table |
| kafka-connect-manager-enterprise.jdbcSink.connectionUrl | string |  | Database connection URL |
| kafka-connect-manager-enterprise.jdbcSink.dbTimezone | string |  | Name of the JDBC timezone that should be used in the connector when inserting time-based values |
| kafka-connect-manager-enterprise.jdbcSink.enabled | bool |  | Whether the JDBC Sink connector is deployed |
| kafka-connect-manager-enterprise.jdbcSink.insertMode | string |  | The insertion mode to use. Supported modes are: |
| kafka-connect-manager-enterprise.jdbcSink.maxRetries | string |  | The maximum number of times to retry on errors before failing the task |
| kafka-connect-manager-enterprise.jdbcSink.name | string |  | Name of the connector to create |
| kafka-connect-manager-enterprise.jdbcSink.retryBackoffMs | string |  | The time in milliseconds to wait following an error before a retry attempt is made |
| kafka-connect-manager-enterprise.jdbcSink.tableNameFormat | string |  | A format string for the destination table name |
| kafka-connect-manager-enterprise.jdbcSink.tasksMax | string |  | Number of Kafka Connect tasks |
| kafka-connect-manager-enterprise.jdbcSink.topicRegex | string |  | Regex for selecting topics |
| kafka-connect-manager-enterprise.s3Sink.behaviorOnNullValues | string |  | How to handle records with a null value (for example, Kafka tombstone records). Valid options are |
| kafka-connect-manager-enterprise.s3Sink.checkInterval | string |  | The interval, in milliseconds, to check for new topics and update the connector |
| kafka-connect-manager-enterprise.s3Sink.enabled | bool |  | Whether the Amazon S3 Sink connector is deployed |
| kafka-connect-manager-enterprise.s3Sink.excludedTopicRegex | string |  | Regex to exclude topics from the list of selected topics from Kafka |
| kafka-connect-manager-enterprise.s3Sink.flushSize | string |  | Number of records written to store before invoking file commits |
| kafka-connect-manager-enterprise.s3Sink.locale | string |  | The locale to use when partitioning with TimeBasedPartitioner |
| kafka-connect-manager-enterprise.s3Sink.name | string |  | Name of the connector to create |
| kafka-connect-manager-enterprise.s3Sink.partitionDurationMs | string |  | The duration of a partition in milliseconds, used by TimeBasedPartitioner. Default is 1h for an hourly based partitioner |
| kafka-connect-manager-enterprise.s3Sink.pathFormat | string |  | Pattern used to format the path in the S3 object name |
| kafka-connect-manager-enterprise.s3Sink.rotateIntervalMs | string |  | The time interval in milliseconds to invoke file commits. Set to 10 minutes by default |
| kafka-connect-manager-enterprise.s3Sink.s3BucketName | string |  | S3 bucket name. The bucket must already exist at the S3 provider |
| kafka-connect-manager-enterprise.s3Sink.s3PartRetries | int |  | Maximum number of retry attempts for failed requests. Zero means no retries. |
| kafka-connect-manager-enterprise.s3Sink.s3PartSize | int |  | The part size in S3 multi-part uploads. Valid values: [5242880,…,2147483647] |
| kafka-connect-manager-enterprise.s3Sink.s3Region | string |  | S3 region |
| kafka-connect-manager-enterprise.s3Sink.s3RetryBackoffMs | int |  | How long to wait in milliseconds before attempting the first retry of a failed S3 request |
| kafka-connect-manager-enterprise.s3Sink.s3SchemaCompatibility | string |  | S3 schema compatibility |
| kafka-connect-manager-enterprise.s3Sink.schemaCacheConfig | int |  | The size of the schema cache used in the Avro converter |
| kafka-connect-manager-enterprise.s3Sink.storeUrl | string |  | The object storage connection URL, for non-AWS S3 providers |
| kafka-connect-manager-enterprise.s3Sink.tasksMax | int |  | Number of Kafka Connect tasks |
| kafka-connect-manager-enterprise.s3Sink.timestampExtractor | string |  | The extractor determines how to obtain a timestamp from each record |
| kafka-connect-manager-enterprise.s3Sink.timestampField | string |  | The record field to be used as timestamp by the timestamp extractor. Only applies if timestampExtractor is set to RecordField. |
| kafka-connect-manager-enterprise.s3Sink.timezone | string |  | The timezone to use when partitioning with TimeBasedPartitioner |
| kafka-connect-manager-enterprise.s3Sink.topicsDir | string |  | Top-level directory to store the data ingested from Kafka |
| kafka-connect-manager-enterprise.s3Sink.topicsRegex | string |  | Regex to select topics from Kafka |

Values for the rest-proxy subchart (a configuration sketch follows the table):

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| rest-proxy.affinity | object |  | Affinity configuration |
| rest-proxy.configurationOverrides | object | See `values.yaml` | Kafka REST configuration options |
| rest-proxy.customEnv | object |  | Kafka REST additional env variables |
| rest-proxy.heapOptions | string |  | Kafka REST proxy JVM heap options |
| rest-proxy.image.pullPolicy | string |  | Image pull policy |
| rest-proxy.image.repository | string |  | Kafka REST proxy image repository |
| rest-proxy.image.tag | string |  | Kafka REST proxy image tag |
| rest-proxy.ingress.annotations | object | See `values.yaml` | Additional annotations to add to the ingress |
| rest-proxy.ingress.enabled | bool |  | Whether to enable the ingress |
| rest-proxy.ingress.hostname | string | None, must be set if the ingress is enabled | Ingress hostname |
| rest-proxy.ingress.path | string | `"/sasquatch-rest-proxy(/\|$)(.*)"` | Ingress path |
| rest-proxy.kafka.bootstrapServers | string |  | Kafka bootstrap servers; use the internal listener on port 9092 with SASL connection |
| rest-proxy.kafka.clusterName | string |  | Name of the Strimzi Kafka cluster |
| rest-proxy.kafka.topicPrefixes | list |  | List of topic prefixes to use when exposing Kafka topics to the REST Proxy v2 API |
| rest-proxy.kafka.topics | list |  | List of Kafka topics to create via Strimzi. Alternatively, topics can be created using the REST Proxy v3 API. |
| rest-proxy.nodeSelector | object |  | Node selector configuration |
| rest-proxy.podAnnotations | object |  | Pod annotations |
| rest-proxy.replicaCount | int |  | Number of Kafka REST proxy pods to run in the deployment |
| rest-proxy.resources | object | See `values.yaml` | Kubernetes requests and limits for the Kafka REST proxy |
| rest-proxy.schemaregistry.url | string |  | Schema Registry URL |
| rest-proxy.service.port | int |  | Kafka REST proxy service port |
| rest-proxy.tolerations | list |  | Tolerations configuration |
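
A sketch of exposing topics through the REST proxy, per the `kafka.topicPrefixes` and `kafka.topics` rows above. The prefix and topic names are placeholders:

```yaml
rest-proxy:
  enabled: true
  kafka:
    topicPrefixes:
      - example.events        # placeholder prefix exposed via the v2 API
    topics:
      - example.events.login  # placeholder topic created via Strimzi
```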

Values for the strimzi-kafka subchart (a configuration sketch follows the table):

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| strimzi-kafka.brokerStorage | object |  | Configuration for deploying Kafka brokers with local storage |
| strimzi-kafka.cluster.monitorLabel | object |  | Site-wide label required for gathering Prometheus metrics if they are enabled |
| strimzi-kafka.cluster.name | string |  | Name used for the Kafka cluster, and used by Strimzi for many annotations |
| strimzi-kafka.connect.config."key.converter" | string |  | Converter for the message key |
| strimzi-kafka.connect.config."key.converter.schema.registry.url" | string |  | URL for the schema registry |
| strimzi-kafka.connect.config."key.converter.schemas.enable" | bool |  | Enable converted schemas for the message key |
| strimzi-kafka.connect.config."value.converter" | string |  | Converter for the message value |
| strimzi-kafka.connect.config."value.converter.schema.registry.url" | string |  | URL for the schema registry |
| strimzi-kafka.connect.config."value.converter.schemas.enable" | bool |  | Enable converted schemas for the message value |
| strimzi-kafka.connect.enabled | bool |  | Enable Kafka Connect |
| strimzi-kafka.connect.image | string |  | Custom strimzi-kafka image with connector plugins used by sasquatch |
| strimzi-kafka.connect.replicas | int |  | Number of Kafka Connect replicas to run |
| strimzi-kafka.cruiseControl | object |  | Configuration for Kafka Cruise Control |
| strimzi-kafka.kafka.affinity | object | See `values.yaml` | Affinity for Kafka pod assignment |
| strimzi-kafka.kafka.config."log.retention.minutes" | int | 4320 minutes (3 days) | Number of minutes for a topic's data to be retained |
| strimzi-kafka.kafka.config."message.max.bytes" | int |  | The largest record batch size allowed by Kafka |
| strimzi-kafka.kafka.config."offsets.retention.minutes" | int | 4320 minutes (3 days) | Number of minutes for a consumer group's offsets to be retained |
| strimzi-kafka.kafka.config."replica.fetch.max.bytes" | int |  | The number of bytes of messages to attempt to fetch for each partition |
| strimzi-kafka.kafka.externalListener.bootstrap.annotations | object |  | Annotations that will be added to the Ingress, Route, or Service resource |
| strimzi-kafka.kafka.externalListener.bootstrap.host | string | Do not configure TLS | Name used for TLS hostname verification |
| strimzi-kafka.kafka.externalListener.bootstrap.loadBalancerIP | string | Do not request a load balancer IP | Request this load balancer IP. See |
| strimzi-kafka.kafka.externalListener.brokers | list |  | Brokers configuration. `host` is used in the brokers' `advertised.brokers` configuration and for TLS hostname verification. The format is a list of maps (see the sketch after this table). |
| strimzi-kafka.kafka.externalListener.tls.certIssuerName | string |  | Name of a ClusterIssuer capable of provisioning a TLS certificate for the broker |
| strimzi-kafka.kafka.externalListener.tls.enabled | bool |  | Whether TLS encryption is enabled |
| strimzi-kafka.kafka.listeners.external.enabled | bool |  | Whether the external listener is enabled |
| strimzi-kafka.kafka.listeners.plain.enabled | bool |  | Whether the internal plaintext listener is enabled |
| strimzi-kafka.kafka.listeners.tls.enabled | bool |  | Whether the internal TLS listener is enabled |
| strimzi-kafka.kafka.metricsConfig.enabled | bool |  | Whether metrics configuration is enabled |
| strimzi-kafka.kafka.minInsyncReplicas | int |  | The minimum number of in-sync replicas that must be available for the producer to successfully send records. Cannot be greater than the number of replicas. |
| strimzi-kafka.kafka.replicas | int |  | Number of Kafka broker replicas to run |
| strimzi-kafka.kafka.resources | object | See `values.yaml` | Kubernetes requests and limits for the Kafka brokers |
| strimzi-kafka.kafka.storage.size | string |  | Size of the backing storage disk for each of the Kafka brokers |
| strimzi-kafka.kafka.storage.storageClassName | string |  | Name of a StorageClass to use when requesting persistent volumes |
| strimzi-kafka.kafka.tolerations | list |  | Tolerations for Kafka broker pod assignment |
| strimzi-kafka.kafka.version | string |  | Version of Kafka to deploy |
| strimzi-kafka.kafkaController.enabled | bool |  | Enable the Kafka controller |
| strimzi-kafka.kafkaController.resources | object | See `values.yaml` | Kubernetes requests and limits for the Kafka controller |
| strimzi-kafka.kafkaController.storage.size | string |  | Size of the backing storage disk for each of the Kafka controllers |
| strimzi-kafka.kafkaController.storage.storageClassName | string |  | Name of a StorageClass to use when requesting persistent volumes |
| strimzi-kafka.kafkaExporter.enableSaramaLogging | bool |  | Enable Sarama logging for the pod |
| strimzi-kafka.kafkaExporter.enabled | bool |  | Enable the Kafka exporter |
| strimzi-kafka.kafkaExporter.groupRegex | string |  | Consumer groups to monitor |
| strimzi-kafka.kafkaExporter.logging | string |  | Logging level |
| strimzi-kafka.kafkaExporter.resources | object | See `values.yaml` | Kubernetes requests and limits for the Kafka exporter |
| strimzi-kafka.kafkaExporter.topicRegex | string |  | Kafka topics to monitor |
| strimzi-kafka.kraft.enabled | bool |  | Enable KRaft mode for Kafka |
| strimzi-kafka.mirrormaker2.enabled | bool |  | Enable replication in the target (passive) cluster |
| strimzi-kafka.mirrormaker2.replicas | int |  | Number of Mirror Maker replicas to run |
| strimzi-kafka.mirrormaker2.replication.policy.class | string |  | Replication policy. |
| strimzi-kafka.mirrormaker2.replication.policy.separator | string |  | Convention used to rename topics when the DefaultReplicationPolicy replication policy is used. Default is "" when the IdentityReplicationPolicy replication policy is used. |
| strimzi-kafka.mirrormaker2.source.bootstrapServer | string | None, must be set if enabled | Source (active) cluster to replicate from |
| strimzi-kafka.mirrormaker2.source.topicsPattern | string |  | Topic replication from the source cluster defined as a comma-separated list or regular expression pattern |
| strimzi-kafka.registry.ingress.annotations | object |  | Annotations that will be added to the Ingress resource |
| strimzi-kafka.registry.ingress.enabled | bool |  | Whether to enable an ingress for the Schema Registry |
| strimzi-kafka.registry.ingress.hostname | string | None, must be set if the ingress is enabled | Hostname for the Schema Registry |
| strimzi-kafka.registry.resources | object | See `values.yaml` | Kubernetes requests and limits for the Schema Registry |
| strimzi-kafka.registry.schemaTopic | string |  | Name of the topic used by the Schema Registry |
| strimzi-kafka.superusers | list |  | A list of usernames for users who should have global admin permissions. These users will be created, along with their credentials. |
| strimzi-kafka.users.camera.enabled | bool |  | Enable user camera, used in the camera environments |
| strimzi-kafka.users.consdb.enabled | bool |  | Enable user consdb |
| strimzi-kafka.users.kafdrop.enabled | bool |  | Enable user Kafdrop (deployed by the parent Sasquatch chart) |
| strimzi-kafka.users.kafkaConnectManager.enabled | bool |  | Enable user kafka-connect-manager |
| strimzi-kafka.users.promptProcessing.enabled | bool |  | Enable user prompt-processing |
| strimzi-kafka.users.replicator.enabled | bool |  | Enable user replicator (used by Mirror Maker 2 and required at both source and target clusters) |
| strimzi-kafka.users.telegraf.enabled | bool |  | Enable user telegraf (deployed by the parent Sasquatch chart) |
| strimzi-kafka.users.tsSalKafka.enabled | bool |  | Enable user ts-salkafka, used in the telescope environments |
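
The externalListener brokers row above says the value is a list of maps whose `host` is used for advertised addresses and TLS hostname verification. A sketch with placeholder hostnames; per-broker fields beyond `host` are not shown here and may exist:

```yaml
strimzi-kafka:
  kafka:
    listeners:
      external:
        enabled: true
    externalListener:
      tls:
        enabled: true
        certIssuerName: letsencrypt-dns  # placeholder ClusterIssuer name
      bootstrap:
        host: sasquatch-kafka-bootstrap.example.org  # placeholder bootstrap host
      brokers:
        - host: sasquatch-kafka-0.example.org  # advertised host for one broker
        - host: sasquatch-kafka-1.example.org  # advertised host for another
```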

Values for the telegraf-kafka-consumer subchart (a configuration sketch follows the table):

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| telegraf-kafka-consumer.affinity | object |  | Affinity for pod assignment |
| telegraf-kafka-consumer.args | list |  | Arguments passed to the Telegraf agent containers |
| telegraf-kafka-consumer.enabled | bool |  | Whether the Telegraf Kafka consumer is enabled |
| telegraf-kafka-consumer.env | list | See `values.yaml` | Telegraf agent environment variables |
| telegraf-kafka-consumer.envFromSecret | string |  | Name of the secret with values to be added to the environment |
| telegraf-kafka-consumer.image.pullPolicy | string |  | Image pull policy |
| telegraf-kafka-consumer.image.repo | string |  | Telegraf image repository |
| telegraf-kafka-consumer.image.tag | string |  | Telegraf image tag |
| telegraf-kafka-consumer.imagePullSecrets | list |  | Secret names to use for Docker pulls |
| telegraf-kafka-consumer.influxdb.database | string |  | Name of the InfluxDB v1 database to write to |
| telegraf-kafka-consumer.influxdb.url | string |  | URL of the InfluxDB v1 instance to write to |
| telegraf-kafka-consumer.kafkaConsumers.test.collection_jitter | string | "0s" | Data collection jitter. This is used to jitter the collection by a random amount. Each plugin will sleep for a random time within jitter before collecting. |
| telegraf-kafka-consumer.kafkaConsumers.test.compression_codec | int | 3 | Compression codec. 0: None, 1: Gzip, 2: Snappy, 3: LZ4, 4: ZSTD |
| telegraf-kafka-consumer.kafkaConsumers.test.consumer_fetch_default | string | "20MB" | Maximum amount of data the server should return for a fetch request |
| telegraf-kafka-consumer.kafkaConsumers.test.debug | bool | false | Run Telegraf in debug mode |
| telegraf-kafka-consumer.kafkaConsumers.test.enabled | bool |  | Enable the Telegraf Kafka consumer |
| telegraf-kafka-consumer.kafkaConsumers.test.fields | list |  | List of Avro fields to be recorded as InfluxDB fields. If not specified, any Avro field that is not marked as a tag will become an InfluxDB field. |
| telegraf-kafka-consumer.kafkaConsumers.test.flush_interval | string | "10s" | Data flushing interval for all outputs. Don't set this below interval. The maximum flush_interval is flush_interval + flush_jitter |
| telegraf-kafka-consumer.kafkaConsumers.test.flush_jitter | string | "0s" | Jitter the flush interval by a random amount. This is primarily to avoid large write spikes for users running a large number of Telegraf instances. |
| telegraf-kafka-consumer.kafkaConsumers.test.max_processing_time | string | "5s" | Maximum processing time for a single message |
| telegraf-kafka-consumer.kafkaConsumers.test.max_undelivered_messages | int | 10000 | Maximum number of undelivered messages. Should be a multiple of metric_batch_size; setting it too low may never flush the broker's messages. |
| telegraf-kafka-consumer.kafkaConsumers.test.metric_batch_size | int | 1000 | Sends metrics to the output in batches of at most metric_batch_size metrics |
| telegraf-kafka-consumer.kafkaConsumers.test.metric_buffer_limit | int | 100000 | Caches metric_buffer_limit metrics for each output, and flushes this buffer on a successful write. This should be a multiple of metric_batch_size and should not be less than 2 times metric_batch_size. |
| telegraf-kafka-consumer.kafkaConsumers.test.offset | string |  | Kafka consumer offset. Possible values are |
| telegraf-kafka-consumer.kafkaConsumers.test.precision | string | "1us" | Data precision |
| telegraf-kafka-consumer.kafkaConsumers.test.replicaCount | int |  | Number of Telegraf Kafka consumer replicas. Increase this value to increase the consumer throughput. |
| telegraf-kafka-consumer.kafkaConsumers.test.tags | list |  | List of Avro fields to be recorded as InfluxDB tags. The Avro fields specified as tags will be converted to strings before ingestion into InfluxDB. |
| telegraf-kafka-consumer.kafkaConsumers.test.timestamp_field | string |  | Avro field to be used as the InfluxDB timestamp (optional). If unspecified or set to the empty string, Telegraf will use the time it received the measurement. |
| telegraf-kafka-consumer.kafkaConsumers.test.timestamp_format | string |  | Timestamp format. Possible values are |
| telegraf-kafka-consumer.kafkaConsumers.test.topicRegexps | string |  | List of regular expressions to specify the Kafka topics consumed by this agent |
| telegraf-kafka-consumer.kafkaConsumers.test.union_field_separator | string |  | Union field separator: if a single Avro field is flattened into more than one InfluxDB field (e.g. an array |
| telegraf-kafka-consumer.kafkaConsumers.test.union_mode | string |  | Union mode: this can be one of |
| telegraf-kafka-consumer.nodeSelector | object |  | Node labels for pod assignment |
| telegraf-kafka-consumer.podAnnotations | object |  | Annotations for telegraf-kafka-consumer pods |
| telegraf-kafka-consumer.podLabels | object |  | Labels for telegraf-kafka-consumer pods |
| telegraf-kafka-consumer.resources | object | See `values.yaml` | Kubernetes resource requests and limits |
| telegraf-kafka-consumer.tolerations | list |  | Tolerations for pod assignment |
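
The kafkaConsumers.test.* rows above document one consumer named test; each key under `kafkaConsumers` defines one consumer with those settings. A sketch with a placeholder consumer name, topic regex, and field names:

```yaml
telegraf-kafka-consumer:
  enabled: true
  kafkaConsumers:
    example:            # arbitrary consumer name, like "test" above
      enabled: true
      replicaCount: 1
      # topicRegexps is a string (here, a block scalar) listing topic
      # regexes; the pattern below is a placeholder.
      topicRegexps: |
        [ "example.topic.*" ]
      tags:
        - detector              # placeholder Avro field recorded as a tag
      timestamp_field: "timestamp"  # placeholder Avro timestamp field
```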

Values for the telegraf-kafka-consumer-oss subchart:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| telegraf-kafka-consumer-oss.affinity | object |  | Affinity for pod assignment |
| telegraf-kafka-consumer-oss.args | list |  | Arguments passed to the Telegraf agent containers |
| telegraf-kafka-consumer-oss.enabled | bool |  | Whether the Telegraf Kafka consumer is enabled |
| telegraf-kafka-consumer-oss.env | list | See `values.yaml` | Telegraf agent environment variables |
| telegraf-kafka-consumer-oss.envFromSecret | string |  | Name of the secret with values to be added to the environment |
| telegraf-kafka-consumer-oss.image.pullPolicy | string |  | Image pull policy |
| telegraf-kafka-consumer-oss.image.repo | string |  | Telegraf image repository |
| telegraf-kafka-consumer-oss.image.tag | string |  | Telegraf image tag |
| telegraf-kafka-consumer-oss.imagePullSecrets | list |  | Secret names to use for Docker pulls |
| telegraf-kafka-consumer-oss.influxdb.database | string |  | Name of the InfluxDB v1 database to write to |
| telegraf-kafka-consumer-oss.influxdb.url | string |  | URL of the InfluxDB v1 instance to write to |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.collection_jitter | string | "0s" | Data collection jitter. This is used to jitter the collection by a random amount. Each plugin will sleep for a random time within jitter before collecting. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.compression_codec | int | 3 | Compression codec. 0: None, 1: Gzip, 2: Snappy, 3: LZ4, 4: ZSTD |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.consumer_fetch_default | string | "20MB" | Maximum amount of data the server should return for a fetch request |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.debug | bool | false | Run Telegraf in debug mode |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.enabled | bool |  | Enable the Telegraf Kafka consumer |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.fields | list |  | List of Avro fields to be recorded as InfluxDB fields. If not specified, any Avro field that is not marked as a tag will become an InfluxDB field. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.flush_interval | string | "10s" | Data flushing interval for all outputs. Don't set this below interval. The maximum flush_interval is flush_interval + flush_jitter |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.flush_jitter | string | "0s" | Jitter the flush interval by a random amount. This is primarily to avoid large write spikes for users running a large number of Telegraf instances. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.max_processing_time | string | "5s" | Maximum processing time for a single message |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.max_undelivered_messages | int | 10000 | Maximum number of undelivered messages. Should be a multiple of metric_batch_size; setting it too low may never flush the broker's messages. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.metric_batch_size | int | 1000 | Sends metrics to the output in batches of at most metric_batch_size metrics |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.metric_buffer_limit | int | 100000 | Caches metric_buffer_limit metrics for each output, and flushes this buffer on a successful write. This should be a multiple of metric_batch_size and should not be less than 2 times metric_batch_size. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.offset | string |  | Kafka consumer offset. Possible values are |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.precision | string | "1us" | Data precision |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.replicaCount | int |  | Number of Telegraf Kafka consumer replicas. Increase this value to increase the consumer throughput. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.tags | list |  | List of Avro fields to be recorded as InfluxDB tags. The Avro fields specified as tags will be converted to strings before ingestion into InfluxDB. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.timestamp_field | string |  | Avro field to be used as the InfluxDB timestamp (optional). If unspecified or set to the empty string, Telegraf will use the time it received the measurement. |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.timestamp_format | string |  | Timestamp format. Possible values are |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.topicRegexps | string |  | List of regular expressions to specify the Kafka topics consumed by this agent |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.union_field_separator | string |  | Union field separator: if a single Avro field is flattened into more than one InfluxDB field (e.g. an array |
| telegraf-kafka-consumer-oss.kafkaConsumers.test.union_mode | string |  | Union mode: this can be one of |
| telegraf-kafka-consumer-oss.nodeSelector | object |  | Node labels for pod assignment |
| telegraf-kafka-consumer-oss.podAnnotations | object |  | Annotations for telegraf-kafka-consumer pods |
| telegraf-kafka-consumer-oss.podLabels | object |  | Labels for telegraf-kafka-consumer pods |
| telegraf-kafka-consumer-oss.resources | object | See `values.yaml` | Kubernetes resource requests and limits |
| telegraf-kafka-consumer-oss.tolerations | list |  | Tolerations for pod assignment |