Configuring Nebula manager
The following table lists the config variables used to set up the Nebula manager at start time. Because parse_it is used for config parsing, you can use envvars as well as JSON, YAML or TOML files (and more) to configure the app, provided they are in the config folder and have the correct file extension:
The table scrolls sideways to show its full information (it may be cut off due to the RTD theme).
| config file variable name | envvar variable name | default value | example value | type | description | required |
|---|---|---|---|---|---|---|
| basic_auth_user | BASIC_AUTH_USER | admin | | string | the basic auth user used to secure the manager - unless you set auth_enabled=false you must configure either basic_auth (user & pass) or token auth | no |
| basic_auth_password | BASIC_AUTH_PASSWORD | P@ssw0rd | | string | the basic auth password used to secure the api-manager - unless you set auth_enabled=false you must configure either basic_auth (user & pass) or token auth | no |
| auth_token | AUTH_TOKEN | ZDMwN2JmYjBmODliMDRmOTViZDlkYmJl | | string | the bearer token used to secure the api-manager - unless you set auth_enabled=false you must configure either basic_auth (user & pass) or token auth | no |
| auth_enabled | AUTH_ENABLED | true | false | bool | defaults to true; if set to false, basic auth on nebula-api is disabled - setting it to false is only recommended if an upstream LB/web server handles basic auth for you instead | yes |
| | ENV | prod | dev | string | envvar only; defaults to "prod", if set to "dev" the flask built-in web server is used on port 5000 to make devs' lives a tad easier - unless you're developing new features for nebula or hunting down a bug in it, it's best not to set it at all | no |
| mongo_url | MONGO_URL | mongodb://mongo_user:mongo_pass@mongo_host:27017/?ssl=true&replicaSet=mongo_replica_set&authSource=mongo_auth_schema | | string | mongo URI string | yes |
| schema_name | SCHEMA_NAME | nebula | mongo_schema | string | mongo schema name | yes |
| cache_time | CACHE_TIME | 10 | 30 | int | the amount of time (in seconds) the cache is kept before the manager checks in with the backend DB to grab any changes; the higher this number, the less frequently changes take effect on the workers and the lower the load on the DB | yes |
| cache_max_size | CACHE_MAX_SIZE | 1024 | 2048 | int | the maximum number of device_groups kept in cache; if you have more device_groups than this number, the LRU device_group is evicted from the cache, so for best performance make sure this number is higher than the maximum number of device_groups | yes |
| mongo_max_pool_size | MONGO_MAX_POOL_SIZE | 25 | 100 | int | the size of the connection pool between the manager and the backend MongoDB - a good rule of thumb is 3 per device_group in the cluster, but no more than 100 at most | yes |
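Putting the variables above together, a minimal conf.json might look like the following (all values are illustrative placeholders mirroring the table, not production settings):

```json
{
    "basic_auth_user": "admin",
    "basic_auth_password": "P@ssw0rd",
    "auth_enabled": true,
    "mongo_url": "mongodb://mongo_user:mongo_pass@mongo_host:27017/?ssl=true&replicaSet=mongo_replica_set&authSource=mongo_auth_schema",
    "schema_name": "nebula",
    "cache_time": 10,
    "cache_max_size": 1024,
    "mongo_max_pool_size": 25
}
```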
Envvars take priority over variables set in the config/ files when both are set (the registry user & pass values can also be set via the standard "~/.docker/config.json" file rather than as envvars or in the Nebula config file). It's suggested to always wrap envvar values in quotation marks, but this is only required when the value uses special characters (for example "mongodb://mongo_user:mongo_pass@mongo_host:27017/?ssl=true&replicaSet=mongo_replica_set&authSource=mongo_auth_schema"). Some variables have defaults that will be used if they are set neither as envvars nor in the conf.json file.
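The precedence order above (envvar, then config file, then built-in default) is handled by parse_it internally; as a rough illustration of the behaviour (not parse_it's actual API), a simplified resolver could look like this:

```python
import json
import os
import tempfile


def read_config_variable(name, config, default=None):
    """Resolve a variable the way the manager's config layering works:
    an envvar (upper-cased name) wins over the config file,
    which wins over the built-in default."""
    env_value = os.environ.get(name.upper())
    if env_value is not None:
        return env_value
    return config.get(name, default)


# a conf.json sets cache_time=10, but the CACHE_TIME envvar overrides it
with tempfile.TemporaryDirectory() as folder:
    path = os.path.join(folder, "conf.json")
    with open(path, "w") as f:
        json.dump({"cache_time": 10}, f)
    with open(path) as f:
        config = json.load(f)

os.environ["CACHE_TIME"] = "30"
print(read_config_variable("cache_time", config))                      # -> 30 (envvar wins)
print(read_config_variable("schema_name", config, default="nebula"))   # -> nebula (default used)
```

Note that envvars always arrive as strings, which is one reason quoting them is harmless; parse_it takes care of casting them to the expected type.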
An example config file is located at "example_conf.json.example" in the /config/ folder of the manager GitHub repo (and, by extension, inside the containers built from it).
The following table shows the path of each config file inside the docker containers:
| container | config path inside container | example Dockerfile COPY command override |
|---|---|---|
| manager | /www/config/conf.json | COPY config/conf.json /www/config/conf.json |
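For example, baking a custom config file into a derived image could look like this (the base image name is assumed here and should match whatever manager image you actually pull):

```dockerfile
# assumed base image name - replace with the manager image you use
FROM nebulaorchestrator/manager

# overwrite the bundled config with your own
COPY config/conf.json /www/config/conf.json
```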
As the manager uses gunicorn as the main application web server inside the container, there is a config.py at the container root folder, which is pulled from the repo's /config/config.py file; consult the gunicorn docs if you wish to change its parameters.
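For reference, a gunicorn config.py of this kind is plain Python that assigns settings as module-level variables; a minimal sketch (illustrative values, not the repo's actual settings) could look like this:

```python
import multiprocessing

# address and port gunicorn listens on inside the container
bind = "0.0.0.0:80"

# common rule of thumb for sync workers: 2 * CPU cores + 1
workers = multiprocessing.cpu_count() * 2 + 1

# seconds before an unresponsive worker is killed and restarted
timeout = 120
```

gunicorn picks such a file up with the `-c /path/to/config.py` flag, so replacing the file in the container (or at build time) is enough to change its behaviour.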