Configuring apps

The following table shows the config variables used to configure individual apps inside nebula (via the API):


The table slides to the side to show its full information (this is not obvious due to the RTD theme).

| field | type | default value | example value | description |
|---|---|---|---|---|
| starting_ports | list of ints and/or dicts | [] | [{"85": "80"}, 443, 5555] | a list of starting ports to bind the worker containers; an int binds the same port on the host and in the container, while a dict follows the {"host_port": "container_port"} format. Note that due to the JSON format, dict values must be passed as strings (with "") while int values must not be; mixing and matching is allowed. For example, a worker server with 8 CPUs, containers_per={"cpu": 3} and starting_ports=[{"81": "80"}] will have 24 containers binding host ports 81 through 104, each redirecting to port 80 inside its container, so your worker-node LB can bind to port 80 and redirect traffic among the containers on ports 81...104; an HAProxy config example can be found at example-config. Use [] for none |
| containers_per | dict | {"server": 1} | {"server": 3} or {"cpu": 0.5} or {"memory": 512} | the number of containers to run on each worker node. Possible keys are {"server": int}, {"instance": int}, {"cpu": int/float} and {"memory"/"mem": int}; "server" and "instance" are equivalent and load X containers on each node, "cpu" loads X containers per CPU on each node, and "memory"/"mem" runs as many containers as will give each one, on average, X MB of memory to use, so setting {"memory": 512} on a worker with 1024 MB results in 2 containers of that app running |
| env_vars | dict | {} | {"test1": "test123", "example": "example123"} | a dict of envvars that will be passed to each worker container; use {} for none |
| docker_image | string | none - must be declared | "" | the Docker image to run; note that it's currently not possible to set a different starting command than the one set in the image's Dockerfile |
| running | bool | true | true | true - the app will run, false - the app is stopped |
| networks | list | ["nebula", "bridge"] | ["nebula", "bridge"] | the networks the containers will be part of. If you wish to use the "host" or "none" network, make sure it is alone in the list (["host"] or ["none"]). A default bridge user network named "nebula" is added in any case other than "host" or "none"; it can be added explicitly (["nebula", "example-net"]) or implicitly (["example-net"]) and is also added when an empty list is set ([]). Aside from the default "nebula" network (and Docker's default "bridge", "none" and "host" networks), you will have to create any network you wish to use manually on the worker prior to using it. The main advantage of the default "nebula" user network over the "bridge" network is that it allows container-name-based discovery: if a server has 2 containers named "con1" and "con2", you can ping "con2" from inside "con1" just by running "ping con2" |
| volumes | list | [] | ["/tmp:/tmp/1", "/var/tmp/:/var/tmp/1:ro"] | the volumes to mount inside the containers; follows the docker run -v syntax of host_path:container_path:ro/rw. Use [] for none |
| privileged | bool | false | false | true - the app gets privileged permissions, false - no privileged permissions |
| devices | list | [] | ["/dev/usb/hiddev0:/dev/usb/hiddev0:rwm"] | the devices to grant the containers access to; follows the docker run --device syntax of host_path:container_path:ro/rwm. Use [] for none |
| rolling_restart | bool | false | false | whether the app should be updated using a rolling restart; has no effect unless "containers_per" is configured so that more than 1 container of the app runs on each device |
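Putting the fields together, an app config body passed to the API could look like the following sketch. All values here are illustrative, not defaults or recommendations:

```json
{
    "starting_ports": [{"81": "80"}, 443],
    "containers_per": {"cpu": 3},
    "env_vars": {"test1": "test123"},
    "docker_image": "nginx:latest",
    "running": true,
    "networks": ["nebula"],
    "volumes": ["/tmp:/tmp/1"],
    "privileged": false,
    "devices": [],
    "rolling_restart": false
}
```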
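The interaction between containers_per and starting_ports described in the table can be sketched as follows. This is an illustrative Python sketch of the arithmetic only, not Nebula's actual implementation, and expand_host_ports is a hypothetical helper name:

```python
def expand_host_ports(starting_ports, cpus, containers_per):
    """Return, per starting_ports entry, the host ports the replicas bind.

    An int entry means the same port on host and container; a dict entry
    follows the {"host_port": "container_port"} format from the table.
    """
    # work out how many containers run on this worker
    if "cpu" in containers_per:
        count = int(containers_per["cpu"] * cpus)
    else:
        count = containers_per.get("server", containers_per.get("instance", 1))

    host_ports = []
    for port in starting_ports:
        # for a dict entry the host port is the key; for an int it is the value
        base = int(next(iter(port))) if isinstance(port, dict) else port
        # each of the `count` containers gets a consecutive host port
        host_ports.append([base + i for i in range(count)])
    return host_ports

# 8 CPUs, containers_per={"cpu": 3}, starting_ports=[{"81": "80"}]:
ports = expand_host_ports([{"81": "80"}], cpus=8, containers_per={"cpu": 3})
print(len(ports[0]), ports[0][0], ports[0][-1])  # 24 containers, ports 81..104
```

This reproduces the worked example from the starting_ports row: 24 containers bound to host ports 81 through 104, each forwarding to port 80 inside its container.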