Configuring the Nebula optional reporter
The following table shows the config variables used to configure the Nebula optional reporter at start time:
Tip
The table slides to the side so its full contents can be viewed (this is not obvious with the RTD theme)
config/conf.json variable name | envvar variable name | default value | example value | type | description | required |
---|---|---|---|---|---|---|
kafka_bootstrap_servers | KAFKA_BOOTSTRAP_SERVERS | empty | mykafka.mydomain.com:9092 | string | the FQDN/IP address of the bootstrap Kafka nodes; if not set, everything regarding the optional reporting system is unused - setting it to any value is the trigger that turns on the reporting component of the workers | yes |
kafka_security_protocol | KAFKA_SECURITY_PROTOCOL | PLAINTEXT | SASL_SSL | string | protocol used to communicate with the Kafka brokers; valid values are PLAINTEXT, SSL, SASL_PLAINTEXT or SASL_SSL | no |
kafka_sasl_mechanism | KAFKA_SASL_MECHANISM | empty | PLAIN | string | the SASL mechanism to use when kafka_security_protocol is SASL_PLAINTEXT or SASL_SSL; valid values are PLAIN or empty - leaving it empty/undefined means SASL is not used | no |
kafka_sasl_plain_username | KAFKA_SASL_PLAIN_USERNAME | empty | mysaslusername | string | the username used to connect to the Kafka brokers when kafka_sasl_mechanism is set to PLAIN & kafka_security_protocol is set to SASL_PLAINTEXT | no |
kafka_sasl_plain_password | KAFKA_SASL_PLAIN_PASSWORD | empty | mysaslpassword | string | the password used to connect to the Kafka brokers when kafka_sasl_mechanism is set to PLAIN & kafka_security_protocol is set to SASL_PLAINTEXT | no |
kafka_ssl_keyfile | KAFKA_SSL_KEYFILE | empty | /mykeyfile | string | path of the SSL keyfile used to connect to the Kafka brokers when SSL is used | no |
kafka_ssl_password | KAFKA_SSL_PASSWORD | empty | mysslpassword | string | password of the SSL keyfile used to connect to the Kafka brokers when SSL is used | no |
kafka_ssl_certfile | KAFKA_SSL_CERTFILE | empty | /mycertfile | string | path of the SSL certfile used to connect to the Kafka brokers when SSL is used | no |
kafka_ssl_cafile | KAFKA_SSL_CAFILE | empty | /mycafile | string | path of the SSL cafile used to connect to the Kafka brokers when SSL is used | no |
kafka_ssl_crlfile | KAFKA_SSL_CRLFILE | empty | /mycrlfile | string | path of the SSL crlfile used to connect to the Kafka brokers when SSL is used | no |
kafka_sasl_kerberos_service_name | KAFKA_SASL_KERBEROS_SERVICE_NAME | kafka | kafka | string | the Kerberos service name used to connect to the Kafka brokers when Kerberos is configured to be used | no |
kafka_sasl_kerberos_domain_name | KAFKA_SASL_KERBEROS_DOMAIN_NAME | kafka | kafka | string | the Kerberos domain name used to connect to the Kafka brokers when Kerberos is configured to be used | no |
kafka_topic | KAFKA_TOPIC | nebula-reports | my-nebula-kafka-topic | string | the Kafka topic name that reports are written to; it's up to the admin to ensure proper topic sizing/partitioning on the Kafka side | no |
mongo_url | MONGO_URL | empty | mongodb://mongo_user:mongo_pass@mongo_host:27017/?ssl=true&replicaSet=mongo_replica_set&authSource=mongo_auth_schema | string | the Mongo URI connection string | yes |
schema_name | SCHEMA_NAME | nebula | mongo_schema | string | the Mongo schema name | yes |
mongo_max_pool_size | MONGO_MAX_POOL_SIZE | 25 | 100 | int | the size of the connection pool between the manager and the backend MongoDB - a good rule of thumb is to have 3 for each device_group in the cluster but no more than 100 at most | yes |
kafka_auto_offset_reset | KAFKA_AUTO_OFFSET_RESET | earliest | latest | string | the point to start pulling data from Kafka should the current offset not be available; must be either "earliest" or "latest" | no |
kafka_group_id | KAFKA_GROUP_ID | nebula-reporter-group | my-kafka-consumer-group | string | the group ID of the Kafka consumers | no |
mongo_report_ttl | MONGO_REPORT_TTL | 3600 | 1800 | int | the amount of time, in seconds, that reports are kept in the backend DB after being pulled from Kafka | no |
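As a hedged illustration, the variables above can be exported as envvars before starting the reporter (the specific values here are placeholders, not recommendations - adjust them to your deployment):

```shell
# Quoting is required here because the Mongo URI contains special characters (& and ?)
export KAFKA_BOOTSTRAP_SERVERS="mykafka.mydomain.com:9092"
export KAFKA_SECURITY_PROTOCOL="PLAINTEXT"
export KAFKA_TOPIC="nebula-reports"
export MONGO_URL="mongodb://mongo_user:mongo_pass@mongo_host:27017/?ssl=true&replicaSet=mongo_replica_set&authSource=mongo_auth_schema"
export SCHEMA_NAME="nebula"
```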
Envvars take priority over variables set in the config/conf.json file if both are set (the registry user & pass values can also be provided via the standard "~/.docker/config.json" file instead of being set as envvars or in the Nebula config file). It's suggested to always wrap envvar values in quotation marks, but this is only required when the value uses special characters (for example "mongodb://mongo_user:mongo_pass@mongo_host:27017/?ssl=true&replicaSet=mongo_replica_set&authSource=mongo_auth_schema"). Some variables have defaults that are used when they are set neither as envvars nor in the conf.json file.
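The precedence described above (envvar, then conf.json, then built-in default) can be sketched as a small Python helper. This is an illustration only - the function name `get_config` and the exact lookup logic are assumptions, not Nebula's actual implementation:

```python
import json
import os


def get_config(key, default=None, config_file="config/conf.json"):
    """Resolve a config value: envvar first, then conf.json, then default.

    Mirrors the documented precedence: envvars override the file, and the
    built-in default is used only when neither source defines the key.
    Assumes envvar names are the uppercase form of the conf.json keys.
    """
    env_value = os.environ.get(key.upper())
    if env_value is not None:
        return env_value
    try:
        with open(config_file) as f:
            file_config = json.load(f)
    except FileNotFoundError:
        file_config = {}
    return file_config.get(key, default)
```

For example, `get_config("kafka_topic", default="nebula-reports")` would return the value of the `KAFKA_TOPIC` envvar when it is set, falling back to the conf.json entry and then to the default.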
An example config file named "example_conf.json" is located in the /config/ folder of the worker GitHub repo (and, by extension, inside its containers).
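As a rough sketch of what such a file might contain (the values below are placeholders based on the table above, not a copy of the repo's example_conf.json):

```json
{
  "kafka_bootstrap_servers": "mykafka.mydomain.com:9092",
  "kafka_security_protocol": "PLAINTEXT",
  "kafka_topic": "nebula-reports",
  "kafka_group_id": "nebula-reporter-group",
  "kafka_auto_offset_reset": "earliest",
  "mongo_url": "mongodb://mongo_user:mongo_pass@mongo_host:27017/?ssl=true&replicaSet=mongo_replica_set&authSource=mongo_auth_schema",
  "schema_name": "nebula",
  "mongo_max_pool_size": 25,
  "mongo_report_ttl": 3600
}
```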
The following table shows the path of each config file inside the docker containers:
container | config path inside container | example Dockerfile COPY command overwrite |
---|---|---|
reporter | /reporter/config/conf.json | COPY config/conf.json /reporter/config/conf.json |
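Using that path, a config file can be baked into a derived image with the COPY command from the table (a minimal sketch - the base image name is an assumption about your setup):

```dockerfile
FROM nebulaorchestrator/reporter
COPY config/conf.json /reporter/config/conf.json
```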