# sasquatch Helm values reference

Helm values reference table for the sasquatch application.

| Key | Type | Default | Description |
|---|---|---|---|
| global.baseUrl | string | Set by Argo CD | Base URL for the environment |
| global.host | string | Set by Argo CD | Host name for ingress |
| global.vaultSecretsPath | string | Set by Argo CD | Base path for Vault secrets |
| bucketmapper.image | object | | Image for monitoring-related cron jobs |
| bucketmapper.image.repository | string | | Repository for rubin-influx-tools |
| bucketmapper.image.tag | string | | Tag for rubin-influx-tools |
| chronograf.enabled | bool | | Enable Chronograf. |
| chronograf.env | object | | Chronograf environment variables. |
| chronograf.envFromSecret | string | | Chronograf secret; expected keys are generic_client_id, generic_client_secret, and token_secret. |
| chronograf.image | object | | Chronograf image tag. |
| chronograf.ingress | object | disabled | Chronograf ingress configuration. |
| chronograf.persistence | object | | Chronograf data persistence configuration. |
| chronograf.resources.limits.cpu | int | | |
| chronograf.resources.limits.memory | string | | |
| chronograf.resources.requests.cpu | int | | |
| chronograf.resources.requests.memory | string | | |
| influxdb-enterprise | object | | Override influxdb-enterprise configuration. |
| influxdb-staging.config | object | | Override InfluxDB configuration. See https://docs.influxdata.com/influxdb/v1.8/administration/config |
| influxdb-staging.enabled | bool | | Enable InfluxDB staging deployment. |
| influxdb-staging.image | object | | InfluxDB image tag. |
| influxdb-staging.ingress | object | disabled | InfluxDB ingress configuration. |
| influxdb-staging.initScripts.enabled | bool | | Enable InfluxDB custom initialization script. |
| influxdb-staging.persistence.enabled | bool | | Enable persistent volume claim. By default storageClass is undefined, choosing the default provisioner (standard on GKE). |
| influxdb-staging.persistence.size | string | 1Ti for teststand deployments | Persistent volume size. |
| influxdb-staging.resources.limits.cpu | int | | |
| influxdb-staging.resources.limits.memory | string | | |
| influxdb-staging.resources.requests.cpu | int | | |
| influxdb-staging.resources.requests.memory | string | | |
| influxdb-staging.setDefaultUser | object | | Default InfluxDB user; use the influxdb-user and influxdb-password keys from the secret. |
| influxdb.config | object | | Override InfluxDB configuration. See https://docs.influxdata.com/influxdb/v1.8/administration/config |
| influxdb.enabled | bool | | Enable InfluxDB. |
| influxdb.image | object | | InfluxDB image tag. |
| influxdb.ingress | object | disabled | InfluxDB ingress configuration. |
| influxdb.initScripts.enabled | bool | | Enable InfluxDB custom initialization script. |
| influxdb.persistence.enabled | bool | | Enable persistent volume claim. By default storageClass is undefined, choosing the default provisioner (standard on GKE). |
| influxdb.persistence.size | string | 1Ti for teststand deployments | Persistent volume size. |
| influxdb.resources.limits.cpu | int | | |
| influxdb.resources.limits.memory | string | | |
| influxdb.resources.requests.cpu | int | | |
| influxdb.resources.requests.memory | string | | |
| influxdb.setDefaultUser | object | | Default InfluxDB user; use the influxdb-user and influxdb-password keys from the secret. |
| kafdrop.enabled | bool | | Enable Kafdrop. |
| kafka-connect-manager | object | | Override kafka-connect-manager configuration. |
| kafka-connect-manager-enterprise | object | | Override kafka-connect-manager-enterprise configuration. |
| kapacitor.enabled | bool | | Enable Kapacitor. |
| kapacitor.envVars | object | | Kapacitor environment variables. |
| kapacitor.existingSecret | string | | InfluxDB credentials; use the influxdb-user and influxdb-password keys from the secret. |
| kapacitor.image | object | | Kapacitor image tag. |
| kapacitor.influxURL | string | | InfluxDB connection URL. |
| kapacitor.persistence | object | | Kapacitor data persistence configuration. |
| kapacitor.resources.limits.cpu | int | | |
| kapacitor.resources.limits.memory | string | | |
| kapacitor.resources.requests.cpu | int | | |
| kapacitor.resources.requests.memory | string | | |
| rest-proxy | object | | Override rest-proxy configuration. |
| source-influxdb.config | object | | Override InfluxDB configuration. See https://docs.influxdata.com/influxdb/v1.8/administration/config |
| source-influxdb.enabled | bool | | Enable the InfluxDB source deployment. |
| source-influxdb.image | object | | InfluxDB image tag. |
| source-influxdb.ingress | object | disabled | InfluxDB ingress configuration. |
| source-influxdb.initScripts.enabled | bool | | Enable InfluxDB custom initialization script. |
| source-influxdb.persistence.enabled | bool | | Enable persistent volume claim. By default storageClass is undefined, choosing the default provisioner (standard on GKE). |
| source-influxdb.persistence.size | string | 1Ti for teststand deployments | Persistent volume size. |
| source-influxdb.resources.limits.cpu | int | | |
| source-influxdb.resources.limits.memory | string | | |
| source-influxdb.resources.requests.cpu | int | | |
| source-influxdb.resources.requests.memory | string | | |
| source-influxdb.setDefaultUser | object | | Default InfluxDB user; use the influxdb-user and influxdb-password keys from the secret. |
| source-kafka-connect-manager | object | | Override source-kafka-connect-manager configuration. |
| source-kapacitor.enabled | bool | | Enable Kapacitor. |
| source-kapacitor.envVars | object | | Kapacitor environment variables. |
| source-kapacitor.existingSecret | string | | InfluxDB credentials; use the influxdb-user and influxdb-password keys from the secret. |
| source-kapacitor.image | object | | Kapacitor image tag. |
| source-kapacitor.influxURL | string | | InfluxDB connection URL. |
| source-kapacitor.persistence | object | | Kapacitor data persistence configuration. |
| source-kapacitor.resources.limits.cpu | int | | |
| source-kapacitor.resources.limits.memory | string | | |
| source-kapacitor.resources.requests.cpu | int | | |
| source-kapacitor.resources.requests.memory | string | | |
| squareEvents.enabled | bool | | Enable the Square Events subchart with topic and user configurations. |
| strimzi-kafka | object | | Override strimzi-kafka subchart configuration. |
| strimzi-registry-operator | object | | strimzi-registry-operator configuration. |
| telegraf-kafka-consumer | object | | Override telegraf-kafka-consumer configuration. |
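
The top-level values above mostly toggle and size individual Sasquatch components. As a minimal orientation sketch, an environment override enabling the core monitoring stack might look like the following; the secret name, URL, and volume size are illustrative placeholders, not chart defaults.

```yaml
# Illustrative sasquatch values override (names, URLs, and sizes are
# placeholders, not chart defaults).
chronograf:
  enabled: true
  envFromSecret: sasquatch   # hypothetical secret holding generic_client_id, etc.
influxdb:
  enabled: true
  persistence:
    enabled: true
    size: 1Ti
kapacitor:
  enabled: true
  influxURL: http://sasquatch-influxdb.sasquatch:8086   # hypothetical in-cluster URL
kafdrop:
  enabled: true
```
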
| Key | Type | Default | Description |
|---|---|---|---|
| influxdb-enterprise.bootstrap.auth.secretName | string | | |
| influxdb-enterprise.bootstrap.ddldml | object | | |
| influxdb-enterprise.data.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | | |
| influxdb-enterprise.data.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | | |
| influxdb-enterprise.data.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | | |
| influxdb-enterprise.data.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | | |
| influxdb-enterprise.data.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | | |
| influxdb-enterprise.data.config.antiEntropy.enabled | bool | | |
| influxdb-enterprise.data.config.cluster.log-queries-after | string | | |
| influxdb-enterprise.data.config.cluster.max-concurrent-queries | int | | |
| influxdb-enterprise.data.config.cluster.query-timeout | string | | |
| influxdb-enterprise.data.config.continuousQueries.enabled | bool | | |
| influxdb-enterprise.data.config.data.cache-max-memory-size | int | | |
| influxdb-enterprise.data.config.data.trace-logging-enabled | bool | | |
| influxdb-enterprise.data.config.data.wal-fsync-delay | string | | |
| influxdb-enterprise.data.config.hintedHandoff.max-size | int | | |
| influxdb-enterprise.data.config.http.auth-enabled | bool | | |
| influxdb-enterprise.data.config.http.flux-enabled | bool | | |
| influxdb-enterprise.data.config.logging.level | string | | |
| influxdb-enterprise.data.env | object | | |
| influxdb-enterprise.data.image | object | | |
| influxdb-enterprise.data.ingress.annotations."nginx.ingress.kubernetes.io/proxy-read-timeout" | string | | |
| influxdb-enterprise.data.ingress.annotations."nginx.ingress.kubernetes.io/proxy-send-timeout" | string | | |
| influxdb-enterprise.data.ingress.annotations."nginx.ingress.kubernetes.io/rewrite-target" | string | | |
| influxdb-enterprise.data.ingress.className | string | | |
| influxdb-enterprise.data.ingress.enabled | bool | | |
| influxdb-enterprise.data.ingress.hostname | string | | |
| influxdb-enterprise.data.ingress.path | string | `"/influxdb-enterprise-data(/\|$)(.*)"` | |
| influxdb-enterprise.data.persistence.enabled | bool | | |
| influxdb-enterprise.data.podDisruptionBudget.minAvailable | int | | |
| influxdb-enterprise.data.replicas | int | | |
| influxdb-enterprise.data.resources | object | | |
| influxdb-enterprise.data.service.type | string | | |
| influxdb-enterprise.fullnameOverride | string | | |
| influxdb-enterprise.imagePullSecrets | list | | |
| influxdb-enterprise.license.secret.key | string | | |
| influxdb-enterprise.license.secret.name | string | | |
| influxdb-enterprise.meta.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key | string | | |
| influxdb-enterprise.meta.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator | string | | |
| influxdb-enterprise.meta.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0] | string | | |
| influxdb-enterprise.meta.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey | string | | |
| influxdb-enterprise.meta.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight | int | | |
| influxdb-enterprise.meta.env | object | | |
| influxdb-enterprise.meta.image | object | | |
| influxdb-enterprise.meta.ingress.annotations."nginx.ingress.kubernetes.io/proxy-read-timeout" | string | | |
| influxdb-enterprise.meta.ingress.annotations."nginx.ingress.kubernetes.io/proxy-send-timeout" | string | | |
| influxdb-enterprise.meta.ingress.annotations."nginx.ingress.kubernetes.io/rewrite-target" | string | | |
| influxdb-enterprise.meta.ingress.className | string | | |
| influxdb-enterprise.meta.ingress.enabled | bool | | |
| influxdb-enterprise.meta.ingress.hostname | string | | |
| influxdb-enterprise.meta.ingress.path | string | `"/influxdb-enterprise-meta(/\|$)(.*)"` | |
| influxdb-enterprise.meta.persistence.enabled | bool | | |
| influxdb-enterprise.meta.podDisruptionBudget.minAvailable | int | | |
| influxdb-enterprise.meta.replicas | int | | |
| influxdb-enterprise.meta.resources | object | | |
| influxdb-enterprise.meta.service.type | string | | |
| influxdb-enterprise.meta.sharedSecret.secretName | string | | |
| influxdb-enterprise.nameOverride | string | | |
| influxdb-enterprise.serviceAccount.annotations | object | | |
| influxdb-enterprise.serviceAccount.create | bool | | |
| influxdb-enterprise.serviceAccount.name | string | | |
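
The influxdb-enterprise keys above mirror the upstream influxdb-enterprise chart, split between the meta and data node pools. A hedged sketch of an override touching only keys documented above follows; the replica counts and hostname are placeholders.

```yaml
# Illustrative influxdb-enterprise override (replica counts and hostname are
# placeholders).
influxdb-enterprise:
  meta:
    replicas: 3
    persistence:
      enabled: true
  data:
    replicas: 2
    persistence:
      enabled: true
    ingress:
      enabled: true
      hostname: data.example.com
```
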
| Key | Type | Default | Description |
|---|---|---|---|
| kafdrop.affinity | object | | Affinity configuration. |
| kafdrop.cmdArgs | string | | Command line arguments to Kafdrop. |
| kafdrop.existingSecret | string | | Existing Kubernetes secret used to set Kafdrop environment variables. Set SCHEMAREGISTRY_AUTH for basic auth credentials in the form username:password. |
| kafdrop.host | string | localhost | The hostname to report for the RMI registry (used for JMX). |
| kafdrop.image.pullPolicy | string | | Image pull policy. |
| kafdrop.image.repository | string | | Kafdrop Docker image repository. |
| kafdrop.image.tag | string | | Kafdrop image version. |
| kafdrop.ingress.annotations | object | | Ingress annotations. |
| kafdrop.ingress.enabled | bool | | Enable Ingress. This should be true to create an ingress rule for the application. |
| kafdrop.ingress.hostname | string | | Ingress hostname. |
| kafdrop.ingress.path | string | | Ingress path. |
| kafdrop.jmx.port | int | 8686 | Port to use for JMX. If unspecified, JMX will not be exposed. |
| kafdrop.jvm.opts | string | | JVM options. |
| kafdrop.kafka.broker | string | | Bootstrap list of Kafka host/port pairs. |
| kafdrop.nodeSelector | object | | Node selector configuration. |
| kafdrop.podAnnotations | object | | Pod annotations. |
| kafdrop.replicaCount | int | | Number of Kafdrop pods to run in the deployment. |
| kafdrop.resources.limits.cpu | int | | |
| kafdrop.resources.limits.memory | string | | |
| kafdrop.resources.requests.cpu | int | | |
| kafdrop.resources.requests.memory | string | | |
| kafdrop.schemaregistry | string | | The endpoint of the Schema Registry. |
| kafdrop.server.port | int | 9000 | The web server port to listen on. |
| kafdrop.server.servlet | object | / | The context path to serve requests on (must end with a /). |
| kafdrop.service.annotations | object | | Service annotations. |
| kafdrop.service.port | int | | Service port. |
| kafdrop.tolerations | list | | Tolerations configuration. |
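
For orientation, a minimal Kafdrop override using the keys above might look like this; the broker address and hostname are placeholders for the in-cluster Kafka bootstrap service and the environment host.

```yaml
# Illustrative Kafdrop configuration (addresses and hostnames are placeholders).
kafdrop:
  kafka:
    broker: sasquatch-kafka-bootstrap.sasquatch:9092   # hypothetical bootstrap address
  server:
    port: 9000
  ingress:
    enabled: true
    hostname: data.example.com
    path: /kafdrop
```
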
| Key | Type | Default | Description |
|---|---|---|---|
| kafka-connect-manager.enabled | bool | | Enable Kafka Connect Manager. |
| kafka-connect-manager.env.kafkaBrokerUrl | string | | Kafka broker URL. |
| kafka-connect-manager.env.kafkaConnectUrl | string | | Kafka Connect URL. |
| kafka-connect-manager.env.kafkaUsername | string | | Username for SASL authentication. |
| kafka-connect-manager.image.pullPolicy | string | | |
| kafka-connect-manager.image.repository | string | | |
| kafka-connect-manager.image.tag | string | | |
| kafka-connect-manager.influxdbSink.autoUpdate | bool | | If autoUpdate is enabled, check for new Kafka topics. |
| kafka-connect-manager.influxdbSink.checkInterval | string | | The interval, in milliseconds, to check for new topics and update the connector. |
| kafka-connect-manager.influxdbSink.connectInfluxDb | string | | InfluxDB database to write to. |
| kafka-connect-manager.influxdbSink.connectInfluxErrorPolicy | string | | Error policy; see the connector documentation for details. |
| kafka-connect-manager.influxdbSink.connectInfluxMaxRetries | string | | The maximum number of times a message is retried. |
| kafka-connect-manager.influxdbSink.connectInfluxRetryInterval | string | | The interval, in milliseconds, between retries. Only valid when connectInfluxErrorPolicy is set to RETRY. |
| kafka-connect-manager.influxdbSink.connectInfluxUrl | string | | InfluxDB URL. |
| kafka-connect-manager.influxdbSink.connectProgressEnabled | bool | | Enables the output for how many records have been processed. |
| kafka-connect-manager.influxdbSink.connectors | object | | Connector instances to deploy. |
| kafka-connect-manager.influxdbSink.connectors.example.enabled | bool | | Whether this connector instance is deployed. |
| kafka-connect-manager.influxdbSink.connectors.example.removePrefix | string | | Remove prefix from topic name. |
| kafka-connect-manager.influxdbSink.connectors.example.repairerConnector | bool | | Whether to deploy a repairer connector in addition to the original connector instance. |
| kafka-connect-manager.influxdbSink.connectors.example.tags | string | | Fields in the Avro payload that are treated as InfluxDB tags. |
| kafka-connect-manager.influxdbSink.connectors.example.topicsRegex | string | | Regex to select topics from Kafka. |
| kafka-connect-manager.influxdbSink.excludedTopicsRegex | string | | Regex to exclude topics from the list of selected topics from Kafka. |
| kafka-connect-manager.influxdbSink.tasksMax | int | | Maximum number of tasks to run the connector. |
| kafka-connect-manager.influxdbSink.timestamp | string | | Timestamp field to be used as the InfluxDB time; if not specified, sys_time() is used. |
| kafka-connect-manager.jdbcSink.autoCreate | string | | Whether to automatically create the destination table. |
| kafka-connect-manager.jdbcSink.autoEvolve | string | | Whether to automatically add columns in the table schema. |
| kafka-connect-manager.jdbcSink.batchSize | string | | Specifies how many records to attempt to batch together for insertion into the destination table. |
| kafka-connect-manager.jdbcSink.connectionUrl | string | | Database connection URL. |
| kafka-connect-manager.jdbcSink.dbTimezone | string | | Name of the JDBC timezone that should be used in the connector when inserting time-based values. |
| kafka-connect-manager.jdbcSink.enabled | bool | | Whether the JDBC Sink connector is deployed. |
| kafka-connect-manager.jdbcSink.insertMode | string | | The insertion mode to use. Supported modes are insert, upsert, and update. |
| kafka-connect-manager.jdbcSink.maxRetries | string | | The maximum number of times to retry on errors before failing the task. |
| kafka-connect-manager.jdbcSink.name | string | | Name of the connector to create. |
| kafka-connect-manager.jdbcSink.retryBackoffMs | string | | The time in milliseconds to wait following an error before a retry attempt is made. |
| kafka-connect-manager.jdbcSink.tableNameFormat | string | | A format string for the destination table name. |
| kafka-connect-manager.jdbcSink.tasksMax | string | | Number of Kafka Connect tasks. |
| kafka-connect-manager.jdbcSink.topicRegex | string | | Regex for selecting topics. |
| kafka-connect-manager.s3Sink.behaviorOnNullValues | string | | How to handle records with a null value (for example, Kafka tombstone records). Valid options are ignore and fail. |
| kafka-connect-manager.s3Sink.checkInterval | string | | The interval, in milliseconds, to check for new topics and update the connector. |
| kafka-connect-manager.s3Sink.enabled | bool | | Whether the Amazon S3 Sink connector is deployed. |
| kafka-connect-manager.s3Sink.excludedTopicRegex | string | | Regex to exclude topics from the list of selected topics from Kafka. |
| kafka-connect-manager.s3Sink.flushSize | string | | Number of records written to store before invoking file commits. |
| kafka-connect-manager.s3Sink.locale | string | | The locale to use when partitioning with TimeBasedPartitioner. |
| kafka-connect-manager.s3Sink.name | string | | Name of the connector to create. |
| kafka-connect-manager.s3Sink.partitionDurationMs | string | | The duration of a partition in milliseconds, used by TimeBasedPartitioner. Default is 1h for an hourly based partitioner. |
| kafka-connect-manager.s3Sink.pathFormat | string | | Pattern used to format the path in the S3 object name. |
| kafka-connect-manager.s3Sink.rotateIntervalMs | string | | The time interval in milliseconds to invoke file commits. Set to 10 minutes by default. |
| kafka-connect-manager.s3Sink.s3BucketName | string | | S3 bucket name. The bucket must already exist at the S3 provider. |
| kafka-connect-manager.s3Sink.s3PartRetries | int | | Maximum number of retry attempts for failed requests. Zero means no retries. |
| kafka-connect-manager.s3Sink.s3PartSize | int | | The part size in S3 multipart uploads. Valid values: [5242880,…,2147483647]. |
| kafka-connect-manager.s3Sink.s3Region | string | | S3 region. |
| kafka-connect-manager.s3Sink.s3RetryBackoffMs | int | | How long to wait in milliseconds before attempting the first retry of a failed S3 request. |
| kafka-connect-manager.s3Sink.s3SchemaCompatibility | string | | S3 schema compatibility. |
| kafka-connect-manager.s3Sink.schemaCacheConfig | int | | The size of the schema cache used in the Avro converter. |
| kafka-connect-manager.s3Sink.storeUrl | string | | The object storage connection URL, for non-AWS S3 providers. |
| kafka-connect-manager.s3Sink.tasksMax | int | | Number of Kafka Connect tasks. |
| kafka-connect-manager.s3Sink.timestampExtractor | string | | The extractor determines how to obtain a timestamp from each record. |
| kafka-connect-manager.s3Sink.timestampField | string | | The record field to be used as timestamp by the timestamp extractor. Only applies if timestampExtractor is set to RecordField. |
| kafka-connect-manager.s3Sink.timezone | string | | The timezone to use when partitioning with TimeBasedPartitioner. |
| kafka-connect-manager.s3Sink.topicsDir | string | | Top level directory to store the data ingested from Kafka. |
| kafka-connect-manager.s3Sink.topicsRegex | string | | Regex to select topics from Kafka. |
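
Each entry under influxdbSink.connectors deploys one connector instance, as the example keys above suggest. A minimal sketch follows, assuming an in-cluster InfluxDB URL and a hypothetical topic regex; none of these values are chart defaults.

```yaml
# Illustrative InfluxDB Sink connector instance (URL, database, and regex are
# placeholders).
kafka-connect-manager:
  enabled: true
  influxdbSink:
    connectInfluxUrl: http://sasquatch-influxdb.sasquatch:8086
    connectInfluxDb: telemetry
    autoUpdate: true
    checkInterval: "15000"     # milliseconds
    connectors:
      example:
        enabled: true
        topicsRegex: "lsst.sal.*"
        repairerConnector: false
```
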
| Key | Type | Default | Description |
|---|---|---|---|
| kafka-connect-manager-enterprise.enabled | bool | | Enable Kafka Connect Manager. |
| kafka-connect-manager-enterprise.env.kafkaBrokerUrl | string | | Kafka broker URL. |
| kafka-connect-manager-enterprise.env.kafkaConnectUrl | string | | Kafka Connect URL. |
| kafka-connect-manager-enterprise.env.kafkaUsername | string | | Username for SASL authentication. |
| kafka-connect-manager-enterprise.image.pullPolicy | string | | |
| kafka-connect-manager-enterprise.image.repository | string | | |
| kafka-connect-manager-enterprise.image.tag | string | | |
| kafka-connect-manager-enterprise.influxdbSink.autoUpdate | bool | | If autoUpdate is enabled, check for new Kafka topics. |
| kafka-connect-manager-enterprise.influxdbSink.checkInterval | string | | The interval, in milliseconds, to check for new topics and update the connector. |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxDb | string | | InfluxDB database to write to. |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxErrorPolicy | string | | Error policy; see the connector documentation for details. |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxMaxRetries | string | | The maximum number of times a message is retried. |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxRetryInterval | string | | The interval, in milliseconds, between retries. Only valid when connectInfluxErrorPolicy is set to RETRY. |
| kafka-connect-manager-enterprise.influxdbSink.connectInfluxUrl | string | | InfluxDB URL. |
| kafka-connect-manager-enterprise.influxdbSink.connectProgressEnabled | bool | | Enables the output for how many records have been processed. |
| kafka-connect-manager-enterprise.influxdbSink.connectors | object | | Connector instances to deploy. |
| kafka-connect-manager-enterprise.influxdbSink.connectors.example.enabled | bool | | Whether this connector instance is deployed. |
| kafka-connect-manager-enterprise.influxdbSink.connectors.example.removePrefix | string | | Remove prefix from topic name. |
| kafka-connect-manager-enterprise.influxdbSink.connectors.example.repairerConnector | bool | | Whether to deploy a repairer connector in addition to the original connector instance. |
| kafka-connect-manager-enterprise.influxdbSink.connectors.example.tags | string | | Fields in the Avro payload that are treated as InfluxDB tags. |
| kafka-connect-manager-enterprise.influxdbSink.connectors.example.topicsRegex | string | | Regex to select topics from Kafka. |
| kafka-connect-manager-enterprise.influxdbSink.excludedTopicsRegex | string | | Regex to exclude topics from the list of selected topics from Kafka. |
| kafka-connect-manager-enterprise.influxdbSink.tasksMax | int | | Maximum number of tasks to run the connector. |
| kafka-connect-manager-enterprise.influxdbSink.timestamp | string | | Timestamp field to be used as the InfluxDB time; if not specified, sys_time() is used. |
| kafka-connect-manager-enterprise.jdbcSink.autoCreate | string | | Whether to automatically create the destination table. |
| kafka-connect-manager-enterprise.jdbcSink.autoEvolve | string | | Whether to automatically add columns in the table schema. |
| kafka-connect-manager-enterprise.jdbcSink.batchSize | string | | Specifies how many records to attempt to batch together for insertion into the destination table. |
| kafka-connect-manager-enterprise.jdbcSink.connectionUrl | string | | Database connection URL. |
| kafka-connect-manager-enterprise.jdbcSink.dbTimezone | string | | Name of the JDBC timezone that should be used in the connector when inserting time-based values. |
| kafka-connect-manager-enterprise.jdbcSink.enabled | bool | | Whether the JDBC Sink connector is deployed. |
| kafka-connect-manager-enterprise.jdbcSink.insertMode | string | | The insertion mode to use. Supported modes are insert, upsert, and update. |
| kafka-connect-manager-enterprise.jdbcSink.maxRetries | string | | The maximum number of times to retry on errors before failing the task. |
| kafka-connect-manager-enterprise.jdbcSink.name | string | | Name of the connector to create. |
| kafka-connect-manager-enterprise.jdbcSink.retryBackoffMs | string | | The time in milliseconds to wait following an error before a retry attempt is made. |
| kafka-connect-manager-enterprise.jdbcSink.tableNameFormat | string | | A format string for the destination table name. |
| kafka-connect-manager-enterprise.jdbcSink.tasksMax | string | | Number of Kafka Connect tasks. |
| kafka-connect-manager-enterprise.jdbcSink.topicRegex | string | | Regex for selecting topics. |
| kafka-connect-manager-enterprise.s3Sink.behaviorOnNullValues | string | | How to handle records with a null value (for example, Kafka tombstone records). Valid options are ignore and fail. |
| kafka-connect-manager-enterprise.s3Sink.checkInterval | string | | The interval, in milliseconds, to check for new topics and update the connector. |
| kafka-connect-manager-enterprise.s3Sink.enabled | bool | | Whether the Amazon S3 Sink connector is deployed. |
| kafka-connect-manager-enterprise.s3Sink.excludedTopicRegex | string | | Regex to exclude topics from the list of selected topics from Kafka. |
| kafka-connect-manager-enterprise.s3Sink.flushSize | string | | Number of records written to store before invoking file commits. |
| kafka-connect-manager-enterprise.s3Sink.locale | string | | The locale to use when partitioning with TimeBasedPartitioner. |
| kafka-connect-manager-enterprise.s3Sink.name | string | | Name of the connector to create. |
| kafka-connect-manager-enterprise.s3Sink.partitionDurationMs | string | | The duration of a partition in milliseconds, used by TimeBasedPartitioner. Default is 1h for an hourly based partitioner. |
| kafka-connect-manager-enterprise.s3Sink.pathFormat | string | | Pattern used to format the path in the S3 object name. |
| kafka-connect-manager-enterprise.s3Sink.rotateIntervalMs | string | | The time interval in milliseconds to invoke file commits. Set to 10 minutes by default. |
| kafka-connect-manager-enterprise.s3Sink.s3BucketName | string | | S3 bucket name. The bucket must already exist at the S3 provider. |
| kafka-connect-manager-enterprise.s3Sink.s3PartRetries | int | | Maximum number of retry attempts for failed requests. Zero means no retries. |
| kafka-connect-manager-enterprise.s3Sink.s3PartSize | int | | The part size in S3 multipart uploads. Valid values: [5242880,…,2147483647]. |
| kafka-connect-manager-enterprise.s3Sink.s3Region | string | | S3 region. |
| kafka-connect-manager-enterprise.s3Sink.s3RetryBackoffMs | int | | How long to wait in milliseconds before attempting the first retry of a failed S3 request. |
| kafka-connect-manager-enterprise.s3Sink.s3SchemaCompatibility | string | | S3 schema compatibility. |
| kafka-connect-manager-enterprise.s3Sink.schemaCacheConfig | int | | The size of the schema cache used in the Avro converter. |
| kafka-connect-manager-enterprise.s3Sink.storeUrl | string | | The object storage connection URL, for non-AWS S3 providers. |
| kafka-connect-manager-enterprise.s3Sink.tasksMax | int | | Number of Kafka Connect tasks. |
| kafka-connect-manager-enterprise.s3Sink.timestampExtractor | string | | The extractor determines how to obtain a timestamp from each record. |
| kafka-connect-manager-enterprise.s3Sink.timestampField | string | | The record field to be used as timestamp by the timestamp extractor. Only applies if timestampExtractor is set to RecordField. |
| kafka-connect-manager-enterprise.s3Sink.timezone | string | | The timezone to use when partitioning with TimeBasedPartitioner. |
| kafka-connect-manager-enterprise.s3Sink.topicsDir | string | | Top level directory to store the data ingested from Kafka. |
| kafka-connect-manager-enterprise.s3Sink.topicsRegex | string | | Regex to select topics from Kafka. |
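
kafka-connect-manager-enterprise exposes the same sink options against the enterprise cluster. As one variation, a hedged S3 Sink sketch using only keys from the table above; the bucket, region, and regex are placeholders.

```yaml
# Illustrative Amazon S3 Sink configuration (bucket, region, and regex are
# placeholders; the bucket must already exist).
kafka-connect-manager-enterprise:
  s3Sink:
    enabled: true
    s3BucketName: example-bucket
    s3Region: us-east-1
    topicsRegex: "lsst.sal.*"
    flushSize: "1000"
    rotateIntervalMs: "600000"   # 10 minutes, the documented default
    tasksMax: 1
```
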
| Key | Type | Default | Description |
|---|---|---|---|
| rest-proxy.affinity | object | | Affinity configuration. |
| rest-proxy.configurationOverrides | object | | Kafka REST configuration options. |
| rest-proxy.customEnv | string | | Kafka REST additional environment variables. |
| rest-proxy.heapOptions | string | | Kafka REST proxy JVM heap options. |
| rest-proxy.image.pullPolicy | string | | Image pull policy. |
| rest-proxy.image.repository | string | | Kafka REST proxy image repository. |
| rest-proxy.image.tag | string | | Kafka REST proxy image tag. |
| rest-proxy.ingress.annotations | object | | Ingress annotations. |
| rest-proxy.ingress.enabled | bool | | Enable Ingress. This should be true to create an ingress rule for the application. |
| rest-proxy.ingress.hostname | string | | Ingress hostname. |
| rest-proxy.ingress.path | string | `"/sasquatch-rest-proxy(/\|$)(.*)"` | |
| rest-proxy.kafka.bootstrapServers | string | | Kafka bootstrap servers; use the internal listener on port 9092 with SASL connection. |
| rest-proxy.kafka.clusterName | string | | Name of the Strimzi Kafka cluster. |
| rest-proxy.kafka.topicPrefixes | string | | List of topic prefixes to use when exposing Kafka topics to the REST Proxy v2 API. |
| rest-proxy.kafka.topics | string | | List of Kafka topics to create via Strimzi. Alternatively, topics can be created using the REST Proxy v3 API. |
| rest-proxy.nodeSelector | object | | Node selector configuration. |
| rest-proxy.podAnnotations | object | | Pod annotations. |
| rest-proxy.replicaCount | int | | Number of Kafka REST proxy pods to run in the deployment. |
| rest-proxy.resources.limits.cpu | int | | Kafka REST proxy CPU limits. |
| rest-proxy.resources.limits.memory | string | | Kafka REST proxy memory limits. |
| rest-proxy.resources.requests.cpu | int | | Kafka REST proxy CPU requests. |
| rest-proxy.resources.requests.memory | string | | Kafka REST proxy memory requests. |
| rest-proxy.schemaregistry.url | string | | Schema Registry URL. |
| rest-proxy.service.port | int | | Kafka REST proxy service port. |
| rest-proxy.tolerations | list | | Tolerations configuration. |
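
A minimal REST proxy sketch follows; the hostname is a placeholder, and the cluster name is assumed to match the Strimzi Kafka cluster deployed by this chart.

```yaml
# Illustrative rest-proxy override (hostname and port are placeholders).
rest-proxy:
  ingress:
    enabled: true
    hostname: data.example.com
  kafka:
    clusterName: sasquatch   # assumed to match the Strimzi Kafka cluster name
  service:
    port: 8082
```
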
| Key | Type | Default | Description |
|---|---|---|---|
| source-kafka-connect-manager.enabled | bool | | Enable Kafka Connect Manager. |
| source-kafka-connect-manager.env.kafkaBrokerUrl | string | | Kafka broker URL. |
| source-kafka-connect-manager.env.kafkaConnectUrl | string | | Kafka Connect URL. |
| source-kafka-connect-manager.env.kafkaUsername | string | | Username for SASL authentication. |
| source-kafka-connect-manager.image.pullPolicy | string | | |
| source-kafka-connect-manager.image.repository | string | | |
| source-kafka-connect-manager.image.tag | string | | |
| source-kafka-connect-manager.influxdbSink.autoUpdate | bool | | If autoUpdate is enabled, check for new Kafka topics. |
| source-kafka-connect-manager.influxdbSink.checkInterval | string | | The interval, in milliseconds, to check for new topics and update the connector. |
| source-kafka-connect-manager.influxdbSink.connectInfluxDb | string | | InfluxDB database to write to. |
| source-kafka-connect-manager.influxdbSink.connectInfluxErrorPolicy | string | | Error policy; see the connector documentation for details. |
| source-kafka-connect-manager.influxdbSink.connectInfluxMaxRetries | string | | The maximum number of times a message is retried. |
| source-kafka-connect-manager.influxdbSink.connectInfluxRetryInterval | string | | The interval, in milliseconds, between retries. Only valid when connectInfluxErrorPolicy is set to RETRY. |
| source-kafka-connect-manager.influxdbSink.connectInfluxUrl | string | | InfluxDB URL. |
| source-kafka-connect-manager.influxdbSink.connectProgressEnabled | bool | | Enables the output for how many records have been processed. |
| source-kafka-connect-manager.influxdbSink.connectors | object | | Connector instances to deploy. |
| source-kafka-connect-manager.influxdbSink.connectors.example.enabled | bool | | Whether this connector instance is deployed. |
| source-kafka-connect-manager.influxdbSink.connectors.example.removePrefix | string | | Remove prefix from topic name. |
| source-kafka-connect-manager.influxdbSink.connectors.example.repairerConnector | bool | | Whether to deploy a repairer connector in addition to the original connector instance. |
| source-kafka-connect-manager.influxdbSink.connectors.example.tags | string | | Fields in the Avro payload that are treated as InfluxDB tags. |
| source-kafka-connect-manager.influxdbSink.connectors.example.topicsRegex | string | | Regex to select topics from Kafka. |
| source-kafka-connect-manager.influxdbSink.excludedTopicsRegex | string | | Regex to exclude topics from the list of selected topics from Kafka. |
| source-kafka-connect-manager.influxdbSink.tasksMax | int | | Maximum number of tasks to run the connector. |
| source-kafka-connect-manager.influxdbSink.timestamp | string | | Timestamp field to be used as the InfluxDB time; if not specified, sys_time() is used. |
| source-kafka-connect-manager.jdbcSink.autoCreate | string | | Whether to automatically create the destination table. |
| source-kafka-connect-manager.jdbcSink.autoEvolve | string | | Whether to automatically add columns in the table schema. |
| source-kafka-connect-manager.jdbcSink.batchSize | string | | Specifies how many records to attempt to batch together for insertion into the destination table. |
| source-kafka-connect-manager.jdbcSink.connectionUrl | string | | Database connection URL. |
| source-kafka-connect-manager.jdbcSink.dbTimezone | string | | Name of the JDBC timezone that should be used in the connector when inserting time-based values. |
| source-kafka-connect-manager.jdbcSink.enabled | bool | | Whether the JDBC Sink connector is deployed. |
| source-kafka-connect-manager.jdbcSink.insertMode | string | | The insertion mode to use. Supported modes are insert, upsert, and update. |
| source-kafka-connect-manager.jdbcSink.maxRetries | string | | The maximum number of times to retry on errors before failing the task. |
| source-kafka-connect-manager.jdbcSink.name | string | | Name of the connector to create. |
| source-kafka-connect-manager.jdbcSink.retryBackoffMs | string | | The time in milliseconds to wait following an error before a retry attempt is made. |
| source-kafka-connect-manager.jdbcSink.tableNameFormat | string | | A format string for the destination table name. |
| source-kafka-connect-manager.jdbcSink.tasksMax | string | | Number of Kafka Connect tasks. |
| source-kafka-connect-manager.jdbcSink.topicRegex | string | | Regex for selecting topics. |
| source-kafka-connect-manager.s3Sink.behaviorOnNullValues | string | | How to handle records with a null value (for example, Kafka tombstone records). Valid options are ignore and fail. |
| source-kafka-connect-manager.s3Sink.checkInterval | string | | The interval, in milliseconds, to check for new topics and update the connector. |
| source-kafka-connect-manager.s3Sink.enabled | bool | | Whether the Amazon S3 Sink connector is deployed. |
| source-kafka-connect-manager.s3Sink.excludedTopicRegex | string | | Regex to exclude topics from the list of selected topics from Kafka. |
| source-kafka-connect-manager.s3Sink.flushSize | string | | Number of records written to store before invoking file commits. |
| source-kafka-connect-manager.s3Sink.locale | string | | The locale to use when partitioning with TimeBasedPartitioner. |
| source-kafka-connect-manager.s3Sink.name | string | | Name of the connector to create. |
| source-kafka-connect-manager.s3Sink.partitionDurationMs | string | | The duration of a partition in milliseconds, used by TimeBasedPartitioner. Default is 1h for an hourly based partitioner. |
| source-kafka-connect-manager.s3Sink.pathFormat | string | | Pattern used to format the path in the S3 object name. |
| source-kafka-connect-manager.s3Sink.rotateIntervalMs | string | | The time interval in milliseconds to invoke file commits. Set to 10 minutes by default. |
| source-kafka-connect-manager.s3Sink.s3BucketName | string | | S3 bucket name. The bucket must already exist at the S3 provider. |
| source-kafka-connect-manager.s3Sink.s3PartRetries | int | | Maximum number of retry attempts for failed requests. Zero means no retries. |
| source-kafka-connect-manager.s3Sink.s3PartSize | int | | The part size in S3 multipart uploads. Valid values: [5242880,…,2147483647]. |
| source-kafka-connect-manager.s3Sink.s3Region | string | | S3 region. |
| source-kafka-connect-manager.s3Sink.s3RetryBackoffMs | int | | How long to wait in milliseconds before attempting the first retry of a failed S3 request. |
| source-kafka-connect-manager.s3Sink.s3SchemaCompatibility | string | | S3 schema compatibility. |
| source-kafka-connect-manager.s3Sink.schemaCacheConfig | int | | The size of the schema cache used in the Avro converter. |
| source-kafka-connect-manager.s3Sink.storeUrl | string | | The object storage connection URL, for non-AWS S3 providers. |
| source-kafka-connect-manager.s3Sink.tasksMax | int | | Number of Kafka Connect tasks. |
| source-kafka-connect-manager.s3Sink.timestampExtractor | string | | The extractor determines how to obtain a timestamp from each record. |
| source-kafka-connect-manager.s3Sink.timestampField | string | | The record field to be used as timestamp by the timestamp extractor. Only applies if timestampExtractor is set to RecordField. |
| source-kafka-connect-manager.s3Sink.timezone | string | | The timezone to use when partitioning with TimeBasedPartitioner. |
| source-kafka-connect-manager.s3Sink.topicsDir | string | | Top level directory to store the data ingested from Kafka. |
| source-kafka-connect-manager.s3Sink.topicsRegex | string | | Regex to select topics from Kafka. |
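
source-kafka-connect-manager mirrors kafka-connect-manager for the source (active) cluster. To round out the sink examples, a hedged JDBC Sink sketch with placeholder connection details follows; note that the JDBC keys above are typed as strings.

```yaml
# Illustrative JDBC Sink configuration (connection URL, table format, and
# regex are placeholders).
source-kafka-connect-manager:
  jdbcSink:
    enabled: true
    connectionUrl: "jdbc:postgresql://postgres.example.com:5432/efd"
    insertMode: insert
    autoCreate: "true"
    tableNameFormat: "${topic}"
    tasksMax: "1"
    topicRegex: "lsst.sal.*"
```
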
| Key | Type | Default | Description |
|---|---|---|---|
| strimzi-kafka.cluster.name | string | | Name used for the Kafka cluster, and used by Strimzi for many annotations. |
| strimzi-kafka.cluster.releaseLabel | string | | Site-wide label required for gathering Prometheus metrics if they are enabled. |
| strimzi-kafka.connect.config."key.converter" | string | | Set the converter for the message key. |
| strimzi-kafka.connect.config."key.converter.schemas.enable" | bool | | Enable converted schemas for the message key. |
| strimzi-kafka.connect.enabled | bool | | Enable Kafka Connect. |
| strimzi-kafka.connect.image | string | | Custom strimzi-kafka image with connector plugins used by sasquatch. |
| strimzi-kafka.connect.replicas | int | | Number of Kafka Connect replicas to run. |
| strimzi-kafka.kafka.affinity | object | | Affinity for Kafka pod assignment. |
| strimzi-kafka.kafka.config."log.retention.bytes" | string | | How much disk space Kafka will ensure is available, set to 70% of the data partition size. |
| strimzi-kafka.kafka.config."log.retention.hours" | int | | Number of hours for a topic's data to be retained. |
| strimzi-kafka.kafka.config."message.max.bytes" | int | | The largest record batch size allowed by Kafka. |
| strimzi-kafka.kafka.config."offsets.retention.minutes" | int | | Number of minutes for a consumer group's offsets to be retained. |
| strimzi-kafka.kafka.config."replica.fetch.max.bytes" | int | | The number of bytes of messages to attempt to fetch for each partition. |
| strimzi-kafka.kafka.config."replica.lag.time.max.ms" | int | | Replica lag time can't be smaller than the request.timeout.ms configuration in Kafka Connect. |
| strimzi-kafka.kafka.disruption_tolerance | int | | Number of down brokers that the system can tolerate. |
| strimzi-kafka.kafka.externalListener.bootstrap.annotations | object | | Annotations that will be added to the Ingress, Route, or Service resource. |
| strimzi-kafka.kafka.externalListener.bootstrap.host | string | | Name used for TLS hostname verification. |
| strimzi-kafka.kafka.externalListener.bootstrap.loadBalancerIP | string | | The load balancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created; the field is ignored if the cloud provider does not support it. Once the IP address is provisioned, this option makes it possible to pin the IP address, so the same IP can be requested the next time it is provisioned. This is important because it lets us configure a DNS record associating a hostname with that pinned IP address. |
| strimzi-kafka.kafka.externalListener.brokers | list | | Brokers configuration. host is used in the brokers' advertised.brokers configuration and for TLS hostname verification. The format is a list of maps. |
| strimzi-kafka.kafka.externalListener.tls.certIssuerName | string | | Name of a ClusterIssuer capable of provisioning a TLS certificate for the broker. |
| strimzi-kafka.kafka.externalListener.tls.enabled | bool | | Whether TLS encryption is enabled. |
| strimzi-kafka.kafka.listeners.external.enabled | bool | | Whether the external listener is enabled. |
| strimzi-kafka.kafka.listeners.plain.enabled | bool | | Whether the internal plaintext listener is enabled. |
| strimzi-kafka.kafka.listeners.tls.enabled | bool | | Whether the internal TLS listener is enabled. |
| strimzi-kafka.kafka.metricsConfig.enabled | bool | | Whether metric configuration is enabled. |
| strimzi-kafka.kafka.replicas | int | | Number of Kafka broker replicas to run. |
| strimzi-kafka.kafka.storage.size | string | | Size of the backing storage disk for each of the Kafka brokers. |
| strimzi-kafka.kafka.storage.storageClassName | string | | Name of a StorageClass to use when requesting persistent volumes. |
| strimzi-kafka.kafka.tolerations | list | | Tolerations for Kafka broker pod assignment. |
| strimzi-kafka.kafka.version | string | | Version of Kafka to deploy. |
| strimzi-kafka.kafkaExporter.enableSaramaLogging | bool | | Enable Sarama logging for the pod. |
| strimzi-kafka.kafkaExporter.enabled | bool | | Enable Kafka exporter. |
| strimzi-kafka.kafkaExporter.groupRegex | string | | Consumer groups to monitor. |
| strimzi-kafka.kafkaExporter.logging | string | | Logging level. |
| strimzi-kafka.kafkaExporter.resources | object | | Resource specification for Kafka exporter. |
| strimzi-kafka.kafkaExporter.topicRegex | string | | Kafka topics to monitor. |
| strimzi-kafka.mirrormaker2.enabled | bool | | Enable replication in the target (passive) cluster. |
| strimzi-kafka.mirrormaker2.replicas | int | | |
| strimzi-kafka.mirrormaker2.replication.policy.class | string | IdentityReplicationPolicy | Replication policy. |
| strimzi-kafka.mirrormaker2.replication.policy.separator | string | `""` | Convention used to rename topics when the DefaultReplicationPolicy replication policy is used. Default is `""` when the IdentityReplicationPolicy replication policy is used. |
| strimzi-kafka.mirrormaker2.source.bootstrapServer | string | | Source (active) cluster to replicate from. |
| strimzi-kafka.mirrormaker2.source.topicsPattern | string | | Topic replication from the source cluster defined as a comma-separated list or regular expression pattern. |
| strimzi-kafka.mirrormaker2.sourceConnect.enabled | bool | | Whether to deploy another Connect cluster for topics replicated from the source cluster. Requires sourceRegistry to be enabled. |
| strimzi-kafka.mirrormaker2.sourceRegistry.enabled | bool | | Whether to deploy another Schema Registry for the schemas replicated from the source cluster. |
| strimzi-kafka.mirrormaker2.sourceRegistry.schemaTopic | string | | Name of the Schema Registry topic replicated from the source cluster. |
| strimzi-kafka.registry.ingress.annotations | object | | Annotations that will be added to the Ingress resource. |
| strimzi-kafka.registry.ingress.enabled | bool | | Whether to enable ingress for the Schema Registry. |
| strimzi-kafka.registry.ingress.hostname | string | | Hostname for the Schema Registry. |
| strimzi-kafka.registry.schemaTopic | string | | Name of the topic used by the Schema Registry. |
| strimzi-kafka.superusers | list | | A list of usernames for users who should have global admin permissions. These users will be created, along with their credentials. |
| strimzi-kafka.users.condsb.enabled | bool | | Enable user consdb. |
| strimzi-kafka.users.kafdrop.enabled | bool | | Enable user Kafdrop (deployed by the parent Sasquatch chart). |
| strimzi-kafka.users.kafkaConnectManager.enabled | bool | | Enable user kafka-connect-manager. |
| strimzi-kafka.users.promptProcessing.enabled | bool | | Enable user prompt-processing. |
| strimzi-kafka.users.replicator.enabled | bool | | Enable user replicator (used by MirrorMaker 2 and required at both source and target clusters). |
| strimzi-kafka.users.telegraf.enabled | bool | | Enable user telegraf (deployed by the parent Sasquatch chart). |
| strimzi-kafka.users.tsSalKafka.enabled | bool | | Enable user ts-salkafka, used in the telescope environments. |
| strimzi-kafka.zookeeper.affinity | object | | Affinity for Zookeeper pod assignment. |
| strimzi-kafka.zookeeper.metricsConfig.enabled | bool | | Whether metric configuration is enabled. |
| strimzi-kafka.zookeeper.replicas | int | | Number of Zookeeper replicas to run. |
| strimzi-kafka.zookeeper.storage.size | string | | Size of the backing storage disk for each of the Zookeeper instances. |
| strimzi-kafka.zookeeper.storage.storageClassName | string | | Name of a StorageClass to use when requesting persistent volumes. |
| strimzi-kafka.zookeeper.tolerations | list | | Tolerations for Zookeeper pod assignment. |
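
The external listener keys above combine TLS, a pinned load-balancer IP, and per-broker hosts. A hedged sketch follows; the hosts, IP, issuer name, and sizes are placeholders, not chart defaults.

```yaml
# Illustrative strimzi-kafka override (hosts, IP, issuer, and sizes are
# placeholders).
strimzi-kafka:
  kafka:
    replicas: 3
    storage:
      size: 500Gi
      storageClassName: standard
    externalListener:
      tls:
        enabled: true
        certIssuerName: letsencrypt-dns   # hypothetical ClusterIssuer name
      bootstrap:
        loadBalancerIP: "192.0.2.10"      # pinned so a DNS record can point at it
        host: sasquatch-kafka-bootstrap.example.com
```
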
| Key | Type | Default | Description |
|---|---|---|---|
| telegraf-kafka-consumer.affinity | object | | Affinity for pod assignment. |
| telegraf-kafka-consumer.args | list | | Arguments passed to the Telegraf agent containers. |
| telegraf-kafka-consumer.enabled | bool | | Whether the Telegraf Kafka Consumer is enabled. |
| telegraf-kafka-consumer.envFromSecret | string | | Name of the secret with values to be added to the environment. |
| telegraf-kafka-consumer.env[0].name | string | | |
| telegraf-kafka-consumer.env[0].valueFrom.secretKeyRef.key | string | | Telegraf KafkaUser password. |
| telegraf-kafka-consumer.env[0].valueFrom.secretKeyRef.name | string | | |
| telegraf-kafka-consumer.env[1].name | string | | |
| telegraf-kafka-consumer.env[1].valueFrom.secretKeyRef.key | string | | InfluxDB v1 user. |
| telegraf-kafka-consumer.env[1].valueFrom.secretKeyRef.name | string | | |
| telegraf-kafka-consumer.env[2].name | string | | |
| telegraf-kafka-consumer.env[2].valueFrom.secretKeyRef.key | string | | InfluxDB v1 password. |
| telegraf-kafka-consumer.env[2].valueFrom.secretKeyRef.name | string | | |
| telegraf-kafka-consumer.image.pullPolicy | string | IfNotPresent | Image pull policy. |
| telegraf-kafka-consumer.image.repo | string | | Telegraf image repository. |
| telegraf-kafka-consumer.image.tag | string | | Telegraf image tag. |
| telegraf-kafka-consumer.imagePullSecrets | list | | Secret names to use for Docker pulls. |
| telegraf-kafka-consumer.influxdb.database | string | | Name of the InfluxDB v1 database to write to. |
| telegraf-kafka-consumer.kafkaConsumers.test.enabled | bool | | Enable the Telegraf Kafka consumer. |
| telegraf-kafka-consumer.kafkaConsumers.test.fields | list | | List of Avro fields to be recorded as InfluxDB fields. If not specified, any Avro field that is not marked as a tag will become an InfluxDB field. |
| telegraf-kafka-consumer.kafkaConsumers.test.flush_interval | string | | Default data flushing interval to InfluxDB. |
| telegraf-kafka-consumer.kafkaConsumers.test.interval | string | | Data collection interval for the Kafka consumer. |
| telegraf-kafka-consumer.kafkaConsumers.test.replicaCount | int | | Number of Telegraf Kafka consumer replicas. Increase this value to increase the consumer throughput. |
| telegraf-kafka-consumer.kafkaConsumers.test.tags | list | | List of Avro fields to be recorded as InfluxDB tags. The Avro fields specified as tags will be converted to strings before ingestion into InfluxDB. |
| telegraf-kafka-consumer.kafkaConsumers.test.timestamp_field | string | | Avro field to be used as the InfluxDB timestamp (optional). If unspecified or set to the empty string, Telegraf will use the time it received the measurement. |
| telegraf-kafka-consumer.kafkaConsumers.test.timestamp_format | string | | Timestamp format. Possible values are "unix" (the default if unset), "unix_ms", "unix_us", and "unix_ns". At Rubin, use the "unix" timestamp format for SAL timestamps. |
| telegraf-kafka-consumer.kafkaConsumers.test.topicRegexps | string | | List of regular expressions to specify the Kafka topics consumed by this agent. |
| telegraf-kafka-consumer.kafkaConsumers.test.union_field_separator | string | | Union field separator: if a single Avro field is flattened into more than one InfluxDB field (e.g. an array "a" with four members would yield "a0", "a1", "a2", "a3"; if the field separator were "_", these would be "a_0"…"a_3"). |
| telegraf-kafka-consumer.kafkaConsumers.test.union_mode | string | | Union mode: one of "flatten", "nullable", or "any". If empty, the default is "flatten". When "flatten" is set, then if you have an Avro union type of [ "int", "float" ] for field "a", and union_field_separator is set to "_", measurements of "a" will go into the Telegraf fields "a_int" and "a_float" depending on their type. This keeps InfluxDB happy with your data even when the same Avro field has multiple types (see below). One common use of Avro union types is to mark fields as optional by specifying [ "null", "<type>" ] as the field type. |
| telegraf-kafka-consumer.nodeSelector | object | | Node labels for pod assignment. |
| telegraf-kafka-consumer.podAnnotations | object | | Annotations for telegraf-kafka-consumer pods. |
| telegraf-kafka-consumer.podLabels | object | | Labels for telegraf-kafka-consumer pods. |
| telegraf-kafka-consumer.resources | object | | Kubernetes resources requests and limits. |
| telegraf-kafka-consumer.tolerations | list | | Tolerations for pod assignment. |
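
Finally, each entry under kafkaConsumers runs its own Telegraf consumer, as with the test keys above. A minimal sketch follows; the database and topic names are placeholders, and writing topicRegexps as a string holding a list literal is an assumption consistent with its documented string type.

```yaml
# Illustrative Telegraf Kafka consumer (database and topic names are
# placeholders).
telegraf-kafka-consumer:
  enabled: true
  influxdb:
    database: telemetry
  kafkaConsumers:
    test:
      enabled: true
      replicaCount: 1
      topicRegexps: |
        [ "lsst.sal.Test" ]
      timestamp_format: unix
```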