# qserv-kafka Helm values reference
Helm values reference table for the qserv-kafka application.
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `config.consumerGroupId` | string | | Kafka consumer group ID |
| `config.jobCancelTopic` | string | | Kafka topic for query cancellation requests |
| `config.jobRunTopic` | string | | Kafka topic for query execution requests |
| `config.jobStatusTopic` | string | | Kafka topic for query status |
| `config.logLevel` | string | | Logging level |
| `config.logProfile` | string | | Logging profile (…) |
| `config.maxWorkerJobs` | int | | Maximum number of arq jobs each worker can process simultaneously |
| `config.qservDatabaseOverflow` | int | | Extra database connections that may be opened in excess of the pool size to handle surges in load. This is used primarily by the frontend for jobs that complete immediately. |
| `config.qservDatabasePoolSize` | int | | Database pool size. This is the number of MySQL connections that will be held open regardless of load. This should generally be set to the same as … |
| `config.qservDatabaseUrl` | string | None, must be set | URL to the Qserv MySQL interface (must use a scheme of …) |
| `config.qservPollInterval` | string | | Interval at which Qserv is polled for query status, in Safir … |
| `config.qservRestMaxConnections` | int | | Maximum simultaneous connections to open to the REST API |
| `config.qservRestTimeout` | string | | Timeout for REST API calls, in Safir … |
| `config.qservRestUrl` | string | None, must be set | URL to the Qserv REST API |
| `config.resultTimeout` | int | 3600 (1 hour) | How long to wait, in seconds, for result processing (retrieval and upload) before timing out. This doubles as the timeout for forcibly terminating result worker pods. |
| `frontend.affinity` | object | | Affinity rules for the qserv-kafka frontend pod |
| `frontend.nodeSelector` | object | | Node selection rules for the qserv-kafka frontend pod |
| `frontend.podAnnotations` | object | | Annotations for the qserv-kafka frontend pod |
| `frontend.resources` | object | See … | Resource limits and requests for the qserv-kafka frontend pod |
| `frontend.tolerations` | list | | Tolerations for the qserv-kafka frontend pod |
| `global.baseUrl` | string | Set by Argo CD | Base URL for the environment |
| `global.host` | string | Set by Argo CD | Host name for ingress |
| `global.vaultSecretsPath` | string | Set by Argo CD | Base path for Vault secrets |
| `image.pullPolicy` | string | | Pull policy for the qserv-kafka image |
| `image.repository` | string | | Image to use in the qserv-kafka deployment |
| `image.tag` | string | The appVersion of the chart | Tag of image to use |
| `ingress.annotations` | object | | Additional annotations for the ingress rule |
| `redis.config.secretKey` | string | | Key inside the secret from which to get the Redis password (do not change) |
| `redis.config.secretName` | string | | Name of the secret containing the Redis password |
| `redis.persistence.accessMode` | string | | Access mode of storage to request |
| `redis.persistence.enabled` | bool | | Whether to persist Redis storage. Setting this to false will use … |
| `redis.persistence.size` | string | | Amount of persistent storage to request |
| `redis.persistence.storageClass` | string | | Class of storage to request |
| `redis.persistence.volumeClaimName` | string | | Use an existing PVC rather than dynamic provisioning. If this is set, the size, storageClass, and accessMode settings are ignored. |
| `redis.resources` | object | See … | Resource limits and requests for the Redis pod |
| `resultWorker.affinity` | object | | Affinity rules for the qserv-kafka worker pods |
| `resultWorker.autoscaling.enabled` | bool | | Enable autoscaling of qserv-kafka result workers |
| `resultWorker.autoscaling.maxReplicas` | int | | Maximum number of qserv-kafka worker pods. Each replica will open database connections up to the configured pool size and overflow limit, so make sure the combined connection count stays under the MySQL connection limit. |
| `resultWorker.autoscaling.minReplicas` | int | | Minimum number of qserv-kafka worker pods |
| `resultWorker.autoscaling.targetCPUUtilizationPercentage` | int | | Target CPU utilization of qserv-kafka worker pods |
| `resultWorker.nodeSelector` | object | | Node selection rules for the qserv-kafka worker pods |
| `resultWorker.podAnnotations` | object | | Annotations for the qserv-kafka worker pods |
| `resultWorker.replicaCount` | int | | Number of result worker pods to start if autoscaling is disabled |
| `resultWorker.resources` | object | See … | Resource limits and requests for the qserv-kafka worker pods |
| `resultWorker.tolerations` | list | | Tolerations for the qserv-kafka worker pods |
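To illustrate how the database-connection settings above interact, here is a minimal sketch of an environment values override. The key names come from the table; the numbers are hypothetical, not recommended settings, and the maximum-connection arithmetic in the comments simply restates the guidance attached to `resultWorker.autoscaling.maxReplicas`.

```yaml
# Hypothetical environment override for the qserv-kafka chart.
# Values shown are illustrative only.
config:
  # Connections held open per process regardless of load.
  qservDatabasePoolSize: 5
  # Extra connections allowed per process during load surges.
  qservDatabaseOverflow: 10

resultWorker:
  autoscaling:
    enabled: true
    minReplicas: 1
    # Each replica may open up to poolSize + overflow connections
    # (here 5 + 10 = 15), so 4 replicas can reach 60 connections,
    # plus the frontend's own pool. Keep this total under the
    # MySQL connection limit of the Qserv database.
    maxReplicas: 4
```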